diff --git a/intro_8K/test_introduction_short_2404.16260v1.json b/intro_8K/test_introduction_short_2404.16260v1.json new file mode 100644 index 0000000000000000000000000000000000000000..3dfc7c89defb999f3ef5ac1fd297e7e6deabd44c --- /dev/null +++ b/intro_8K/test_introduction_short_2404.16260v1.json @@ -0,0 +1,19 @@ +{ + "url": "http://arxiv.org/abs/2404.16260v1", + "title": "OmniSearchSage: Multi-Task Multi-Entity Embeddings for Pinterest Search", + "abstract": "In this paper, we present OmniSearchSage, a versatile and scalable system for\nunderstanding search queries, pins, and products for Pinterest search. We\njointly learn a unified query embedding coupled with pin and product\nembeddings, leading to an improvement of $>8\\%$ relevance, $>7\\%$ engagement,\nand $>5\\%$ ads CTR in Pinterest's production search system. The main\ncontributors to these gains are improved content understanding, better\nmulti-task learning, and real-time serving. We enrich our entity\nrepresentations using diverse text derived from image captions from a\ngenerative LLM, historical engagement, and user-curated boards. Our multitask\nlearning setup produces a single search query embedding in the same space as\npin and product embeddings and compatible with pre-existing pin and product\nembeddings. We show the value of each feature through ablation studies, and\nshow the effectiveness of a unified model compared to standalone counterparts.\nFinally, we share how these embeddings have been deployed across the Pinterest\nsearch stack, from retrieval to ranking, scaling to serve $300k$ requests per\nsecond at low latency. Our implementation of this work is available at\nhttps://github.com/pinterest/atg-research/tree/main/omnisearchsage.", + "authors": "Prabhat Agarwal, Minhazul Islam Sk, Nikil Pancha, Kurchi Subhra Hazra, Jiajing Xu, Chuck Rosenberg", + "published": "2024-04-25", + "updated": "2024-04-25", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.LG", + "H.3.3" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "Pinterest\u2019s mission is to bring everyone the inspiration to create a life they love. Search is one of the key surfaces on Pinterest where users seek inspiration spanning a wide range of interests, such as decorating their homes, planning weddings, or keeping up with the latest trends in beauty and fashion. In order to enhance the search experience, modern search systems aim to incorporate various types of content such as web documents, news, shopping items, videos, and more. Similarly, Pinterest\u2019s search feed encompasses a diverse range of content, including pins, shopping items, video pins, and related queries. To construct an inspiring feed for each of the more than 6 billion searches per month on Pinterest we must uncover relevant content from billions of pins and products. We must also find relevant queries to help users refine their queries and navigate their search journey. As an additional challenge, Pinterest search is global and multi- lingual with searchers using more than 45 languages to find inspi- rational content. Embeddings are useful building blocks in recommendation sys- tems, especially search, where natural language understanding is key [11, 23, 24]. Embeddings can power retrieval use cases via ap- proximate nearest neighbor (ANN) search [14, 22], enable detailed content and query understanding in ranking models without the overhead of processing raw data, and serve as a strong base to learn in low-data use-cases [31]. 
Despite their utility, embeddings come with their own challenges: if we learn a separate embedding for every use-case, there is an explosion of potentially expensive models that must be inferred on every request and used in downstream models. This may also lead to suboptimal recommendation quality \u2013 some use-cases may not have enough labels to learn an optimal representation. In practice, it could entail additional maintenance costs and technical debt for upgrading to new versions of embeddings in certain applications, as some data may have been collected over the course of months or years. Through rigorous offline experimentation, we show the impact of our key decisions in building embeddings for web-scale search at Pinterest: \u2022 Pin and product representations can be substantially enriched using diverse text derived from image captions from a generative LLM, historical engagement, and user-curated boards. \u2022 A single query embedding can be used to retrieve queries, products, and Pins with nearly the same effectiveness as task-specific embeddings. \u2022 A single query embedding can learn compatibility with multiple pre-existing embeddings and learned entity embeddings, and perform well when compared across tasks. OmniSearchSage has been deployed at Pinterest and is an integral component of the search stack. It powers embedding-based retrieval for standard and product pins, queries, and ads. It is also one of the most important features in multi-stage ranking models and various query classification models. These gains all arise despite the existence of other features enabling pin and product understanding, which highlights the importance of optimizing embeddings end-to-end for search.", "main_content": "Our work to build multi-task multi-entity embeddings for search draws upon broad areas of work. Our representation of pins and products extends existing work on multi-modal learning and two tower models for search retrieval. These have been extensively applied in the context of search and recommendation systems as an efficient way to retrieve results not purely related to the search query based on text. In OmniSearchSage, we demonstrate that the embeddings generated by these models can also serve as features in ranking and relevance models. Additionally, we offer a brief examination of specific embeddings within the Pinterest ecosystem. 2.1 Model-based Search Retrieval Historically, search systems have been powered by two stages: token-based matching, or candidate generation, and then scoring with a complex model. These have drawbacks, especially when users make complex queries or content is not primarily textual. This has led to the exploration of two tower models, which encode a query into a single embedding or a small set of embeddings, and then use those to retrieve relevant documents with approximate or exact nearest neighbor search [5, 11, 18, 20, 21, 24, 40]. Two natural topics in learning embeddings for search are document representation and query representation. Depending on the learning objective, this query representation could be personalized, or it could be a pure text embedding model.
Many architectures for query embeddings in industry have been proposed based on simple CNNs [12], bag of words models [11, 23], transformers [19], and more, but they share a basic structure involving query understanding and sometimes context understanding. Document representation is also a major challenge. The text associated directly with an item is popular as a key feature, but depending on the task, other sources have been found to provide great value, including queries where other users have engaged with a given item [5, 24, 25] and image content embeddings [19]. 2.2 Multi-task, multi-modal, and multi-entity embeddings The area of learning embeddings isn\u2019t exclusive to the realm of recommendation systems and has been studied extensively [4, 6, 29, 30]. Multi-task learning is a technique commonly utilized in ranking models to optimize for multiple objectives concurrently, aiming for enhanced performance or more efficient information sharing [33, 41]. A less frequently encountered approach involves the joint learning of embeddings for more than two entities. Though this methodology is sometimes implemented in graph learning scenarios, it can also be perceived as an extension of multi-task learning [39]. Multi-modal embeddings are of substantial interest in the industry since the majority of web content is multi-modal, typically including at both text and images [18, 19, 38]. One can take embeddings or raw data from each modality as inputs, and merge them at any stage of the model. The methodology typically involves utilizing embeddings or raw data from each mode as inputs, which are then merge at different stages in the model. Early-stage fusion can pose computational hurdles; therefore, in cases where performance is indifferent, utilizing embeddings instead of raw data is generally the preferred course of action [38]. 2.3 Embeddings at Pinterest PinSage [37] is a scalable GNN-based embedding representing pins. It is based on the GraphSage GCN algorithm [10], sampling neighborhoods with personalized PageRank to augment pin understanding, instead of simple heuristics like \ud835\udc5b-hop neighbors. It aggregates some basic visual [2] and text information into a single dense representation, and is a critical feature in many models. To represent products, we have an embedding, ItemSage [1], which aggregates raw data about products, including metadata from product pages, and potentially many images of the product. ItemSage is trained for compatibility with PinSage, and the search query embedding preceding OmniSearchSage, meaning that the distance between ItemSage and these two embeddings can be used for retrieving or ranking content [27]. 3 METHOD 3.1 Problem Formulation In order to enhance the search experience, modern search systems aim to incorporate various types of content such as web documents, news, shopping items, videos, and more. Similarly, Pinterest\u2019s search feed encompasses a diverse range of content, including pins, shopping items, video pins, and related queries. Training separate query embedding models for each content type and its representation proves to be resource-intensive and inefficient. To address this issue, we introduce OmniSearchSage, which offers a unified query embedding model that jointly trains query embeddings for query-query, query-pin, and query-product retrieval and ranking. Another requirement in production systems is compatibility with existing embeddings, which is essential for purposes such as cost-efficiency and simplified migration. 
Hence we also train the query embeddings to be compatible with the corresponding preexisting embeddings for the entities. As a side effect, we also get compatibility with some embeddings due to the triangle inequality property inherent to cosine similarity. OmniSearchSage: Multi-Task Multi-Entity Embeddings for Pinterest Search WWW \u201924 Companion, May 13\u201317, 2024, Singapore, Singapore 3.2 Enriching Entity Representations On Pinterest, each pin or product is associated with an image and title, along with an optional text (known as description) and link. Beyond these typical attributes, products may carry additional metadata, such as brand information, color description, and more. Document expansion techniques has been empirically demonstrated to significantly enhance the performance of not just token-based, but also embedding-based search retrieval systems [8, 25, 26, 28, 34]. Hence, in OmniSearchSage, we enrich our entity representations using diverse text derived from image captions from a generative LLM, historical engagement, and user-curated boards as described below. In the dataset, 71% of pins and products feature a title or description, 91% include non-empty board titles, and 65% contain non-empty engaged queries. Synthetic GenAI captions are generated for all pins and products, ensuring full coverage. Section 4.3.2 discusses the importance of each of these enrichment. 3.2.1 Synthetic GenAI Captions. On our platform, a substantial volume of pins (about 30%) lack associated titles or descriptions, or possess noisy and/or irrelevant title or description. We address this issue by employing an off-the-shelf image captioning model, BLIP [17], to generate synthetic descriptions for these images. To assess the quality of these synthetically generated descriptions, we enlisted human evaluators to judge their relevance and quality. For a robust assessment, three distinct ratings were collected for each image within a sample of 10\ud835\udc58images, curated uniformly across various broad pin categories. The results indicated that an overwhelming 87.84% of the generated descriptions were both relevant and of high quality, while a meager 1.16% were deemed irrelevant and of poor quality. These synthetically generated descriptions serve as an added feature in our model, enriching the diversity of data associated with each entity. Despite not being directly visible to the users, their addition significantly contributes to a deeper understanding of the pins\u2019 content. 3.2.2 Board Titles. On Pinterest, users explore and save pins to their personal collections, referred to as boards. Each board carries an associated title, reflecting the topic or theme of the collection. Most often, these user-crafted boards are meticulously organized, each focusing on a distinct theme or purpose. A user might, for instance, create discrete boards for \u201cSocial Media Marketing\" and \u201cGraphic Design\u2019\u00a8. Consequently, these board titles provide valuable, user-generated descriptors for the pins within the respective boards. We exploit this user-curated information by accumulating the titles of all boards each pin has been saved to. We limit our selection to a maximum of 10 unique board titles for each pin/product, systematically eliminating any potentially noisy or redundant titles as described next. First, each title is assigned a score influenced by two factors: its frequency of occurrence and the prevalence of its comprising words. 
Following this, titles are ranked based on a hierarchy of their score (ascending), word count (descending), and character length (descending). The resulting top 10 board titles are subsequently incorporated as a feature in our model. This process eliminates any potentially noisy or redundant titles from the feature. Figure 1: Diagrammatic Representation of OmniSearchSage\u2019s Multi-Entity, Multi-Task Architecture. 3.2.3 Engaged Queries. When multiple users interact with a specific pin or product for a certain query within a search feed, it signifies that pin\u2019s relevance to that query. We can use these queries to expand our understanding of the pin/product. For every pin, we generate a list of queries that have attracted user engagements, along with the counts and types of such engagements. This list of queries is then sorted using a function based on the count for each type of engagement. We use the top 20 queries from these sorted lists as a feature in our model. Through experimentation with diverse time-windows of query logs for feature creation, we discovered that larger windows yield superior performance. Consequently, we have opted for a two-year window for feature calculation. However, the complexity of computing this from scratch every time presents a challenge. To mitigate this, we deploy an incremental approach. Every \ud835\udc5b days, we examine new query logs, create a list of queries for every pin, and then blend it with the previously existing top 20 queries, thereby updating the latest value of the feature. 3.3 Entity Features The features we incorporate include PinSage [37] and unified image embeddings [2] to capture the essence of each pin. Additionally, for product pins, we use ItemSage [1] given its capability in effectively representing product-related pins. Text-based features such as the title and description of each pin are also integral to our feature set. Furthermore, we augment the text associated with each pin with the inclusion of synthetic captions, board titles, and engagement queries as outlined earlier. By integrating all these features, we attain a comprehensive and multi-dimensional representation of each pin, hence facilitating enhanced learning of representations. 3.4 Encoders In our work, we consider 3 entity types, namely, pin, product, and query. Our model consists of an encoder for query, a unified learned encoder for both pin and product, and dedicated compatibility encoders for pin and product, respectively. 3.4.1 Query Encoder. The query encoder in our model (depicted in Figure 2) is based on a multilingual version of DistilBERT (distilbert-base-multilingual-cased2) [32]. This choice facilitates efficient handling of queries across a variety of languages. Figure 2: Overview of the query encoder architecture. The encoder takes the output from the last layer associated with the \u2018CLS\u2019 token, projects it onto a 256-dimensional vector space, and finally L2-normalizes the output to generate the final embedding.
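To make the encoder description concrete, below is a minimal sketch of a query encoder with this shape (multilingual DistilBERT, CLS-token pooling, a 256-dimensional projection, and L2 normalization); it is an illustrative approximation, not Pinterest's production implementation, and the projection and normalization steps are spelled out in the text that follows.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer


class QueryEncoder(torch.nn.Module):
    """Sketch: CLS pooling -> linear projection to 256 dims -> L2 normalization."""

    def __init__(self, model_name="distilbert-base-multilingual-cased", dim=256):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.encoder = AutoModel.from_pretrained(model_name)
        self.proj = torch.nn.Linear(self.encoder.config.hidden_size, dim)

    def forward(self, queries):
        batch = self.tokenizer(queries, padding=True, truncation=True, return_tensors="pt")
        out = self.encoder(**batch)
        cls = out.last_hidden_state[:, 0]           # hidden state of the [CLS] token
        return F.normalize(self.proj(cls), dim=-1)  # unit-norm query embedding


# Example: embeddings = QueryEncoder()(["antique copper bathroom sink"])
```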
The encoder utilizes the output from the last layer corresponding to the \ud835\udc36\ud835\udc3f\ud835\udc46token and thereafter projects it to a 256-dimensional vector space. Post projection, we apply a \ud835\udc3f2 normalization on the 256-dimensional vectors to obtain the final embedding. This normalization greatly simplifies the calculation of cosine-distance in downstream applications, allowing for a straightforward dot product operation. 3.4.2 Unified Pin and Product Encoder. In our model, we utilize a single unified encoder for both pins and products (depicted in Figure 3), and this encoder is jointly trained with the query embeddings. Designed to process both textual features and continuous features, it plays a crucial role in learning the respective embeddings of pins and products. In cases where certain features are defined for one entity but not the other, we substitute them with zero, ensuring a consistent data input. As detailed in section 3.5, we utilize in-batch negatives to train our model. Prior research [9, 15, 16, 29] has empirically demonstrated that larger batches with a substantial number of negatives help in learning better representations. Therefore, to accommodate a larger batch size in the GPU memory, we employ a simple pin encoder model. The following encoder design has been determined through numerous ablation studies. These studies have allowed us to select the most effective configuration for each of the components, while still considering the importance of both training and serving efficiencies. The encoder uses three distinct tokenizers to process the textual features associated with a pin [1, 13, 23]. These include (i) a word unigram tokenizer that uses a vocabulary encompassing the 200\ud835\udc58most frequent word unigrams, (ii) a word bigram tokenizer that makes use of a vocabulary comprising the 1\ud835\udc40most frequent word bigrams, and (iii) a character trigram tokenizer that utilizes a vocabulary of 64\ud835\udc58character trigrams. The tokens are mapped to their respective IDs in the vocabulary V which constitute all three 2https://huggingface.co/distilbert-base-multilingual-cased Image Encoder PinSAGE ItemSAGE MLP & L2 Normalize Hash Embedder Word Unigram Tokenizer Word Bigram Tokenizer Character Trigram Tokenizer Tokenizer Pin Text Board Titles Engaged Queries Synthetic GenAI Captions Figure 3: Schematic of the unified encoder model for pins and products, illustrating the use of three different tokenizers, a hash embedding table, and an MLP layer for combining text embeddings with other continuous features. tokenizers. Any token that falls out of this combined vocabulary gets discarded. The use of these combined tokenizers effectively helps in capturing the semantics of various texts associated with a pin/product. For token embedding learning, we use a 2-hash hash embedding table of size 100, 000 [1, 35]. Each identified token\u2019s ID \ud835\udc56is hashed into two places within the embedding table using hash functions \u210e1(\ud835\udc56) and \u210e2(\ud835\udc56). The ultimate embedding of a token with ID \ud835\udc56is a weighted interpolation of the two locations: \ud835\udc4a1\ud835\udc56\u210e1(\ud835\udc56) +\ud835\udc4a2\ud835\udc56\u210e2(\ud835\udc56), where \ud835\udc4a1 and \ud835\udc4a2 are learned weight vectors of size |V| each. The sum of all token embeddings and the embedding features are concatenated and fed into a 3-layer MLP, with layer sizes of 1024, 1024, 256. 
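As an aside, a minimal sketch of the 2-hash token embedding scheme just described is shown below; the vocabulary size follows the three tokenizer vocabularies described above, while the specific hash functions are arbitrary stand-ins chosen for illustration, not the ones used in the paper.

```python
import torch


class HashEmbedding(torch.nn.Module):
    """Sketch of a two-hash embedding: e(i) = W1[i] * E[h1(i)] + W2[i] * E[h2(i)]."""

    def __init__(self, vocab_size=1_264_000, num_buckets=100_000, dim=256):
        # vocab_size ~ 200k unigrams + 1M bigrams + 64k character trigrams
        super().__init__()
        self.table = torch.nn.Embedding(num_buckets, dim)        # shared bucket table
        self.w = torch.nn.Parameter(torch.ones(vocab_size, 2))   # learned per-token weights
        self.num_buckets = num_buckets

    def forward(self, token_ids):                                # token_ids: (batch, seq)
        h1 = (token_ids * 2654435761) % self.num_buckets         # stand-in hash functions
        h2 = (token_ids * 40503 + 7) % self.num_buckets
        w = self.w[token_ids]                                    # (batch, seq, 2)
        emb = w[..., :1] * self.table(h1) + w[..., 1:] * self.table(h2)
        return emb.sum(dim=1)                                    # sum token embeddings
```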
Following this, the output of the MLP layer undergoes L2-normalization, just like the query embedding. 3.4.3 Compatibility Encoders. In our model, we employ two discrete compatibility encoders individually dedicated to pins and products. These encoders leverage the pre-existing pin and product embeddings, represented by PinSage for pins and ItemSage for products. This allows the model to learn query embeddings that align effectively with PinSage and ItemSage embeddings. 3.5 Multi-Task Sampled Softmax Loss Taking inspiration from ItemSage [1], the problem of learning query and entity embeddings is treated as an extreme classification problem, with the aim of predicting entities relevant to a given query [7]. We employ the sampled softmax loss with logQ correction [36] to train our model. We use multitasking to jointly train entity embeddings and to train the query embeddings to be compatible with existing entity embeddings. Formally, we define a task $T \\in \\mathcal{T}$ as a tuple of a dataset of query-entity pairs $\\mathcal{D} = \\{(x, y)_i\\}$ and an entity encoder $\\mathcal{E}$, i.e., $T \\triangleq (\\mathcal{D}, \\mathcal{E})$. For a batch of data $\\mathcal{B} = \\{(x, y)_i\\} \\subset \\mathcal{D}$ for a task $T \\in \\mathcal{T}$, the aim is to learn a query embedding $q_{x_i}$ and an entity embedding $p_{y_i} = \\mathcal{E}(y_i)$ such that the cosine similarity of the embeddings, $q_{x_i} \\cdot p_{y_i}$, is maximized. This is achieved by minimizing the softmax loss: $L_T = -\\frac{1}{|\\mathcal{B}|} \\sum_{i=1}^{|\\mathcal{B}|} \\log \\frac{\\exp(q_{x_i} \\cdot p_{y_i})}{\\sum_{y \\in C} \\exp(q_{x_i} \\cdot p_y)}$, (1) where $C$ is the catalog of all entities of the same type as $y_i$. To ensure problem tractability, the normalization term in the denominator is approximated using a sample of the catalog $C$. We use (i) the positives in the batch, $B_N = \\{y_i \\mid (x_i, y_i) \\in \\mathcal{B}\\}$, and (ii) a random sample of the catalog, $C'$. To rectify any bias that might have been introduced through sampling, we utilize the logQ correction technique. This method operates by deducting the sampling probability of the negative, represented as $\\log Q(y|x_i)$, from the existing logits. This is crucial to ensure that popular entities aren\u2019t disproportionately penalized.
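The corrected losses are written out in Equations (2)\u2013(5) below. As a concrete illustration, here is a minimal sketch of the in-batch term with logQ correction; it is an illustrative assumption about how such a loss can be written, not the authors' implementation, and it approximates Q as depending only on the candidate entity.

```python
import torch
import torch.nn.functional as F


def in_batch_sampled_softmax_logq(q, p, log_q):
    """q, p: (B, d) L2-normalized embeddings of the positive (query, entity) pairs.
    log_q: (B,) estimated log sampling probability of each in-batch entity."""
    logits = q @ p.t()                        # (B, B): query i scored against every in-batch entity
    logits = logits - log_q.unsqueeze(0)      # logQ correction subtracted from each candidate column
    labels = torch.arange(q.size(0), device=q.device)  # the diagonal holds the true pairs
    return F.cross_entropy(logits, labels)


# A second, analogous term over uniformly sampled random negatives gives the full task loss.
```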
$L_T = L_T^{S_{bn}} + L_T^{S_{rn}}$ (2), where $L_T^{S_{bn}} = -\\frac{1}{|\\mathcal{B}|} \\sum_{i=1}^{|\\mathcal{B}|} \\log \\frac{\\exp(q_{x_i} \\cdot p_{y_i} - \\log Q(y_i|x_i))}{\\sum_{z \\in B_N} \\exp(q_{x_i} \\cdot p_z - \\log Q(z|x_i))}$ (3) and $L_T^{S_{rn}} = -\\frac{1}{|\\mathcal{B}|} \\sum_{i=1}^{|\\mathcal{B}|} \\log \\frac{\\exp(q_{x_i} \\cdot p_{y_i} - \\log Q(y_i|x_i))}{\\sum_{y \\in C'} \\exp(q_{x_i} \\cdot p_y - \\log Q(y|x_i))}$ (4) $= -\\frac{1}{|\\mathcal{B}|} \\sum_{i=1}^{|\\mathcal{B}|} \\log \\frac{\\exp(q_{x_i} \\cdot p_{y_i} - \\log Q(y_i|x_i))}{\\sum_{y \\in C'} \\exp(q_{x_i} \\cdot p_y - \\log Q_n(y))}$ (5), since $y$ is sampled independently of $x_i$. The total loss is defined as the sum of all individual task losses, $L = \\sum_{T \\in \\mathcal{T}} L_T$. (6) We mix different tasks together in one batch and control the influence of each task on the model through this composition. To increase training efficiency, we share the pairs in the batch across all tasks with the same dataset. 3.6 Model Serving OmniSearchSage query embeddings are integral to numerous applications in the search stack, which necessitates maintaining a strict latency budget. For real-time inference with minimized latency, our query encoder is served on GPUs by our in-house C++-based machine learning model server, the Scorpion Model Server (SMS). Factoring in that the query distribution complies with Zipf\u2019s law, we have instituted a cache-based system to curb costs and shorten response times. The query embedding server first verifies if a query is cached before resorting to the query inference server should it be absent from the cache. After testing various Cache Time-To-Live (TTL) periods, a TTL of 30 days was established as optimal. The system is equipped to handle 300\ud835\udc58 requests per second, maintaining a median (p50) latency of just 3ms and a 90th percentile (p90) latency of 20ms. The implementation of this cache-based system efficiently reduces the load on the inference server to approximately 500 QPS, leading to substantial cost and latency reductions. The pin and product embeddings are derived offline on a daily basis through batch inference on GPUs and are subsequently published to our signal store for consumption. Table 1: Summary of the different training datasets. [Columns: Pair | Source | Actions | Size] Query-Pin: Query Logs | repin, longclick | 1.5B; Query-Product: Query Logs | repin, longclick | 136M; Query-Product: Offsite logs | add-to-cart, checkout | 2.5M; Query-Query: Query Logs | click | 195M. 4 EXPERIMENTS 4.1 Dataset Our dataset is primarily constructed by extracting unique query-entity pairs from one year of search query logs.
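Stepping back briefly to the serving setup in Section 3.6, the read-through query-embedding cache can be sketched as follows; this uses the open-source cachetools library purely as a stand-in, and the cache size and inference call are illustrative assumptions (the production system is a custom C++ server).

```python
from cachetools import TTLCache

# 30-day TTL as described above; maxsize is an illustrative guess.
query_cache = TTLCache(maxsize=10_000_000, ttl=30 * 24 * 3600)


def get_query_embedding(query, infer_fn):
    """Return a cached embedding if present; otherwise call the inference service."""
    emb = query_cache.get(query)
    if emb is None:
        emb = infer_fn(query)        # e.g., an RPC to the GPU model server
        query_cache[query] = emb
    return emb
```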
We consider various forms of engagement on the platform when extracting these pairs, including \u2018saves\u2019 (when a user saves a pin to a board) and \u2018long clicks\u2019 (instances where users browse the linked page for more than 10 seconds before returning to Pinterest). For products, we enrich our dataset by incorporating offsite actions as well. Thus, we also include anonymized pairs tied to significant actions like \u2018add to cart\u2019 and \u2018checkout\u2019. A common challenge in recommendation systems is popularity bias, where certain pins are overrepresented due to their high appeal. To counteract this bias, we impose a limit on the number of times the same pin can be paired. This limit is capped at 50 pairs for pins and is extended to 200 pairs for products (since products have lower volume and engagement). By adopting this strategy, we ensure our dataset is robust and truly representative of the user\u2019s activity on the platform. Our model training is further extended to encompass query-query pairs. On Pinterest, users are presented with similar query suggestions, and engagements with these recommendations are recorded in the search logs. We leverage these records, extracting such pairs from an entire year\u2019s logs, thus enriching our training dataset. A detailed breakdown of the positive labels in the dataset is provided in Table 1. 4.2 Offline Evaluation Metrics Our evaluation of the model encompasses both user engagement data and human-labeled relevance data. Relevance is measured using human-labeled pairs of queries and pins, sampled from production traffic from four distinct countries: US, UK, France, and Germany. This strategy serves to assess the model\u2019s performance in handling multiple languages and cultural contexts. Evaluation of user engagement considers a selected 7-day period. We ensure no data leakage\u2014possible due to the inclusion of engagement features such as engaged queries\u2014by maintaining a 15-day separation between the end of the training dataset and the beginning of the evaluation phase. We sample 80\ud835\udc58 pairs from the defined evaluation duration to represent repins and long clicks for both pins and products. Another 80\ud835\udc58 pairs, corresponding to clicks for queries, are also included for comprehensive performance evaluation. The primary metric we use for evaluation is \u2018Recall@10\u2019. This metric denotes the likelihood of the occurrence of the engaged entity within the top 10 entities when these entities are sorted in descending order based on their similarity to the query. Table 2: Comparative analysis of OmniSearchSage and the baseline SearchSage across the Pin, Product, and Query tasks. [Columns: Metric | SearchSage | OmniSearchSage | Gain] Pin Save: 0.39 | 0.65 | +67%; Pin Long-Click: 0.45 | 0.73 | +62%; Pin Relevance (US): 0.25 | 0.45 | +80%; Pin Relevance (UK): 0.29 | 0.51 | +76%; Pin Relevance (FR): 0.23 | 0.43 | +87%; Pin Relevance (DE): 0.28 | 0.46 | +64%; Product Save: 0.57 | 0.73 | +28%; Product Long-Click: 0.58 | 0.73 | +26%; Query Click: 0.54 | 0.78 | +44%. Consider a dataset $D = (q_i, e_i)_{i=1}^{n}$, where each $(q_i, e_i)$ denotes a query-engaged-entity pair, and also consider a random corpus $C$ with $m$ entities.
The Recall@10 metric can then be defined as the average over all pairs of an indicator that equals 1 if the engaged entity $e_i$ is amongst the top 10 entities in $C$ when ranked by their dot product with the query $q_i$: Recall@10 $= \\frac{1}{|D|} \\sum_{i=1}^{|D|} \\mathbb{1}\\Big[ \\big( \\sum_{y \\in C} \\mathbb{1}[q_i \\cdot y > q_i \\cdot e_i] \\big) < 10 \\Big]$. For every pin, query, and product, we employ a uniformly distributed random sample of $m = 1.5M$ entities from our corpus. 4.3 Offline Results In this section, we provide a comprehensive comparison between our proposed model, OmniSearchSage, and the existing baselines, which helps showcase its performance enhancements. Subsequently, we undertake an in-depth exploration of key influential aspects such as the significance of text enrichments, the pros and cons of adopting multitasking approaches, and the operational efficacy of compatibility encoders in the context of our model. 4.3.1 Comparison with Baselines. In this study, the existing version of SearchSage [27] serves as our comparison baseline. It operates using fixed PinSage and ItemSage embeddings for pins and products, respectively. For OmniSearchSage, we utilize the query encoder to derive query embeddings and the unified pin and product encoder to generate pin and product embeddings. In Table 2, comparisons are drawn between OmniSearchSage and SearchSage, with both models being trained and evaluated on the same dataset. It is important to highlight that the baseline model, SearchSage, does not involve query-query pairs for training purposes. On the pin dataset, OmniSearchSage shows a significant gain, between 60% and 90%, over SearchSage across all metrics. Recall is relatively consistent across different countries, reflecting the multilingual robustness of OmniSearchSage. Analysis of the product dataset reveals that OmniSearchSage outperforms the baseline model by about 27% in predicting product engagement. Table 3: Comparative assessment displaying the influence of Synthetic GenAI Captions on pins lacking titles and descriptions. [Columns: save | long-click | relevance] No captions: 0.51 | 0.60 | 0.36; With captions: 0.66 | 0.76 | 0.36; Improvement: +30.43% | +25.58% | 0%. This increment is less prominent as compared to the pins dataset, mainly because ItemSage, upon which this comparison is based, has already undergone training on search tasks. Nevertheless, the observed improvement shows the positive impact of incorporating new features as well as the benefit of multi-tasking. Interestingly, SearchSage is able to predict related query clicks substantially better than random despite not being trained on this task. However, when we directly optimize for this objective in OmniSearchSage, we see a substantial +44% improvement. We show this improvement can be attributed to both training on related queries and multi-task learning in Section 4.3.3. 4.3.2 Importance of content enrichment. In this section, we delve into an analysis of the importance of various text enhancements described in Section 3.2. To maintain brevity, the evaluation focuses solely on the metrics related to the query-pin task. Our first direction of investigation centers around the impact of integrating synthetic captions for pins that lack both a title and description.
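As a concrete illustration of the Recall@10 metric defined above, a minimal sketch of computing it from query, engaged-entity, and random-corpus embeddings (illustrative code, not the actual evaluation pipeline):

```python
import torch


def recall_at_10(q, e, corpus):
    """q: (N, d) query embeddings, e: (N, d) engaged entities, corpus: (M, d) random entities."""
    pos = (q * e).sum(dim=1, keepdim=True)    # (N, 1) score of each engaged entity
    neg = q @ corpus.t()                      # (N, M) scores against the shared random corpus
    num_higher = (neg > pos).sum(dim=1)       # corpus entities that outrank the engaged entity
    return (num_higher < 10).float().mean().item()
```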
For this purpose, we extracted pairs from the evaluation dataset in which the engaged pin was missing a title or a description. This resulted in a narrowed evaluation dataset of 24\ud835\udc58 pairs. The model\u2019s performance, initially based solely on continuous features and native text, was then compared to a model additionally enriched with captions. Table 3 presents the results of this comparison. When synthetic captions were added, both \u2018save\u2019 and \u2018long-click\u2019 metrics saw substantial improvements of approximately +30% and +26%, respectively. However, the relevance metric remained unchanged. This suggests that adding synthetic captions can significantly enhance the model\u2019s performance for certain metrics when representing pins that lack a title and description. Table 4 illustrates the impact of adding different text enrichments on the model\u2019s performance. Each percentage increase is relative to the previous row, displaying the additional improvement from each additional feature. Table 4: Impact of adding different text enrichments on the model\u2019s performance; each percentage increase is relative to the previous row. [Columns: save | long-click | relevance] Continuous Features Only: 0.43 | 0.53 | 0.30; Adding Title, Description and Synthetic GenAI Captions: 0.52 (+21%) | 0.63 (+19%) | 0.39 (+30%); Adding Board Titles: 0.61 (+17%) | 0.68 (+8%) | 0.44 (+13%); Adding Engaged Queries: 0.65 (+7%) | 0.73 (+7%) | 0.46 (+5%). Our baseline model utilizes only continuous features for training, and its performance values are reflected in the first row. Upon adding \u2018Title\u2019, \u2018Description\u2019, and \u2018Synthetic GenAI Captions\u2019 to the baseline model, we notice a robust improvement across all metrics. There is a 20% improvement in the engagement datasets, while the relevance metric improves by a notable 30%, demonstrating the substantial impact of these text features. The model enhancement continues with adding board titles to the feature set, leading to a further increase of 8\u201315% in the different metrics. This affirms the relevance of board titles in improving predictive accuracy. Finally, we incorporated the engaged queries feature into the model, resulting in consistent, albeit smaller, growth across all three metrics. Although the incremental relative gain appears smaller, it still constitutes a significant improvement when compared to the baseline model. In summary, each text enrichment feature contributes significantly to improving model performance, as seen by the increase in metrics compared to their immediate preceding state. 4.3.3 Effect of multi-tasking. Table 5: Comparative analysis illustrating the contrasts between our unified multi-task model and models trained individually for each task (pin, product, and query). [Columns: Dataset | Metric | Pin Only | Product Only | Query Only | OmniSearchSage] pin save: 0.68 | - | - | 0.65; pin long-click: 0.75 | - | - | 0.73; pin avg relevance: 0.45 | - | - | 0.46; product save: - | 0.73 | - | 0.73; product long-click: - | 0.73 | - | 0.73; query click: - | - | 0.73 | 0.78. In Table 5, we present a comparative analysis between models trained independently for each task (pin, product, and query) and our consolidated multitask model. For this comparison, both the independent and multitask models were trained under equivalent conditions with matching batch sizes, computational power, and iterations.
The datasets used for both training and evaluation were also identical, with the sole difference that the individual models were trained on their respective subset of pairs from the dataset. This systematic approach ensures the fair and accurate assessment of the performance of the multitask model in relation to the independent task models. On the pin task, we see slight degradation in quality from multitask learning, but, on product and query tasks, results are neutral to positive. This aligns with general notions about multi-task learning: low-data tasks are unlikely to see regressions from multi-task learning, while the pin task using 1.5\ud835\udc35pairs sees a very slight drop in performance. Despite this drop, the simplification benefits of multi-task learning outweigh the metric loss. 4.3.4 Effect of compatibility encoders. We examine the influence of incorporating compatibility encoders on the effectiveness of the learned pin/product embeddings. We train a model that comprises only the query and unified pin and product encoder. Subsequently, this model is compared with another model that fully incorporates all the encoders. Interestingly, there is almost no noticeable degradation in the metrics of the learned encoder, thereby essentially achieving seamless compatibility of the query embedding with pre-existing embeddings at no substantial cost. Furthermore, as demonstrated in Table 6, the performance of the compatibility encoders in the OmniSearchSage model is either on par with or surpasses that of the SearchSage model, which is trained utilising only compatibility encoders. Dataset SearchSage OmniSearchSage pin save 0.39 0.39 long-click 0.45 0.43 avg relevance 0.26 0.26 product save 0.57 0.57 long-click 0.58 0.57 Table 6: Comparison of co-trained compatibility encoders with independently trained compatibility encoders. Product Embedding Index (HNSW) Ads Embedding Index (HNSW) Pin Embedding Index (HNSW) Pin Inverted Token Index Product Inverted Token Index Ads Inverted Token Index L1 Scoring Model User Input Query Query Understanding L2 Scoring Model Query Embedding Server User, Query, Pin Features Figure 4: A simplified depiction of the search retrieval and ranking stack at Pinterest highlighting the integration points for OmniSearchSage embeddings. 5 APPLICATIONS IN PINTEREST SEARCH OmniSearchSage embeddings find wide applications throughout the Pinterest search stack, primarily in retrieval and ranking tasks. Figure 4 presents a simplified depiction of the search retrieval and ranking stack at Pinterest and highlights the integration points for OmniSearchSage embeddings. These embeddings are employed to power the retrieval of pins and products using HNSW [22]. They are also instrumental in the L1 scoring model, where they enhance the efficiency of token-based retrieval sources. Moreover, OmniSearchSage embeddings serve as one of the most critical features in the L2 scoring and relevance models. In this section, we delineate the results derived from the A/B tests we conducted. In these tests, production SearchSage embeddings were replaced with OmniSearchSage embeddings, resulting in boosted performance in both organic and promoted content (Ads) in search. Additionally, we provide results from a human relevance assessment conducted on actual production-sampled traffic. This evaluation further confirms the improved performance derived from the utilization of OmniSearchSage embeddings. 
Finally, we demonstrate how employing query embeddings also enhances performance in other tasks, such as classification, particularly in situations where data availability is limited. This highlights the ability of the OmniSearchSage model to generalize to tasks different from its original training objectives. 5.1 Human Relevance Evaluation To understand advantages of OmniSearchSage, we enlisted human evaluators to assess the relevance of candidates retrieved via two WWW \u201924 Companion, May 13\u201317, 2024, Singapore, Singapore Prabhat Agarwal et al. (a) Token-based (b) OmniSearchSage-based Figure 5: Comparative display of pins retrieved in response to the query \u2019antique copper bathroom sink\u2019 from the tokenbased system and the OmniSearchSage-based system. Pins deemed relevant are outlined in green, while those considered irrelevant are encircled in red. methods: OmniSearchSage embeddings-based pin retrieval and token-based pin retrieval. For this evaluation, we selected a set of 300 queries, deliberately stratified across both head and tail queries. The top 8 candidate pins were then retrieved from each system using these queries, and human evaluators determined the relevance of the pins to the corresponding query. Every query-pin pair received three judgements, with an inter-annotator agreement rate of 0.89. Evaluation results revealed a noticeable improvement with OmniSearchSage, showing a 10% increase in relevance compared to the token-based system. Figure 5 offers a distinct comparison of retrieved pins for the query \u2018antique copper bathroom sink\u2019 between the candidates retrieved by the token-based system and the OmniSearchSage-based system. The token-based retrieval system often fetches pins related to only part of the query and fails to fetch consistently relevant results. In striking contrast, nearly all pins retrieved by the OmniSearchSage-based system are highly relevant to the specified query, underlining the efficacy of the OmniSearchSage model in understanding the query and aligning similar pins and queries in the same space together. 5.2 Organic Search In this section, we outline the results of the A/B testing conducted to substitute the existing production SearchSage query and entity embeddings with OmniSearchSage embeddings for organic content within Pinterest search. Within the context of search experiments at Pinterest, our attention is largely concentrated on two key metrics: the search fulfillment rate and relevance. The search fulfillment rate is defined as the proportion of searches that result in a user engagement action of significance. Relevance is calculated as the weighted average relevance of the top eight pins for each query, assessed across different query segments. This is measured through human evaluation. The impact on these two metrics, from replacing SearchSage with OmniSearchSage, is presented in Table 7. The table provides data drawn from experiments for three distinct use-cases: (i) retrieval of pins and products, (ii) L1 scoring model, and (iii) L2 scoring model and relevance model. Search Fulfilment Rate Relevance Pin and Product Retrieval +4.1% +0.5% L1 Scoring +0.5% +0.0% L2 Scoring and Relevance Model +2.8% +3.0% Table 7: Online A/B experiment results of OmniSearchSage in Organic Search. gCTR Product Ads Retrieval +5.27% Ads Search Engagement Model +2.96% Ads Search Relevance Model +1.55% Table 8: Online A/B experiment results of OmniSearchSage for Ads in Search. 
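As an illustration of the embedding-based retrieval path highlighted in Figure 4, here is a minimal sketch of building and querying an HNSW index over pin embeddings; the open-source hnswlib library and all parameters are stand-ins chosen for illustration, not the production serving stack.

```python
import hnswlib
import numpy as np

dim, num_pins = 256, 100_000                                   # illustrative sizes
pin_embeddings = np.random.rand(num_pins, dim).astype(np.float32)

index = hnswlib.Index(space="ip", dim=dim)                     # inner product matches dot-product scoring
index.init_index(max_elements=num_pins, ef_construction=200, M=16)
index.add_items(pin_embeddings, np.arange(num_pins))
index.set_ef(100)                                              # recall/latency trade-off at query time

query_embedding = np.random.rand(1, dim).astype(np.float32)
ids, distances = index.knn_query(query_embedding, k=50)        # top-50 candidate pins
```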
5.3 Ads in Search The OmniSearchSage embeddings have also successfully replaced the SearchSage embeddings in various applications within Ads on Search surface. We present the results of three use cases: search engagement model, search relevance model, and product ads retrieval. Uniformly, we noted substantial improvements in engagement and relevance within Ads across all use cases. These increments, specifically in the long clickthrough rate (gCTR), are outlined in Table 8. Furthermore, OmniSearchSage led to a noteworthy 4.95% increase in Ads relevance within the Search Ads relevance model. These gains highlight the positive impact of transitioning to OmniSearchSage embeddings for Ads on Search. 5.4 Classification One of the primary advantages of developing robust query representation such as OmniSearchSage is its utility in powering downstream applications, particularly when there is a lack of labels for learning large models. One example of this at Pinterest is interest classification, where we classify queries into a hierarchical taxonomy. Using OmniSearchSage query embeddings for query representation, we were able to increase performance when compared to the baseline FastText [3] model. Precision increased by 30% on average across levels, with the larger gains coming from more granular levels. 6 CONCLUSION In this work, we presented OmniSearchSage, an end-to-end optimized set of query, pin, and product embeddings for Pinterest search, which have shown value across many applications. In contrast to other work focused on learning embeddings for search, we demonstrate the value of unified query, pin, and product embeddings as both candidate generators and features in Pinterest search. We show a great improvement over previous solutions at Pinterest can be attributed to rich document text representations, which improved offline evaluation metrics by > 50%. We also describe practical decisions enabling serving and adoption, including compatibilty encoders, multi-task learning, and long-TTL caching. Lastly, we summarize results from online A/B experiments across organic and ads applications, which have directly led to cumulative gains of +7.4% fulfilment rate on searches, and +3.5% relevance. OmniSearchSage: Multi-Task Multi-Entity Embeddings for Pinterest Search WWW \u201924 Companion, May 13\u201317, 2024, Singapore, Singapore" +} \ No newline at end of file diff --git a/intro_8K/test_introduction_short_2404.16277v1.json b/intro_8K/test_introduction_short_2404.16277v1.json new file mode 100644 index 0000000000000000000000000000000000000000..b58eaf971cc684be72fef04f20dae03520865c48 --- /dev/null +++ b/intro_8K/test_introduction_short_2404.16277v1.json @@ -0,0 +1,17 @@ +{ + "url": "http://arxiv.org/abs/2404.16277v1", + "title": "Causally Inspired Regularization Enables Domain General Representations", + "abstract": "Given a causal graph representing the data-generating process shared across\ndifferent domains/distributions, enforcing sufficient graph-implied conditional\nindependencies can identify domain-general (non-spurious) feature\nrepresentations. For the standard input-output predictive setting, we\ncategorize the set of graphs considered in the literature into two distinct\ngroups: (i) those in which the empirical risk minimizer across training domains\ngives domain-general representations and (ii) those where it does not. 
For the\nlatter case (ii), we propose a novel framework with regularizations, which we\ndemonstrate are sufficient for identifying domain-general feature\nrepresentations without a priori knowledge (or proxies) of the spurious\nfeatures. Empirically, our proposed method is effective for both (semi)\nsynthetic and real-world data, outperforming other state-of-the-art methods in\naverage and worst-domain transfer accuracy.", + "authors": "Olawale Salaudeen, Sanmi Koyejo", + "published": "2024-04-25", + "updated": "2024-04-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "label": "Original Paper", + "paper_cat": "Knowledge AND Graph", + "gt": "A key feature of machine learning is its capacity to generalize across new domains. When these domains present di\ufb00erent data distributions, the algorithm must leverage shared structural concepts to achieve out- of-distribution (OOD) or out-of-domain generalization. This capability is vital in numerous important real- world machine learning applications. For example, in safety-critical settings such as autonomous driving, a lack of resilience to unfamiliar distributions could lead to human casualties. Likewise, in the healthcare sector, where ethical considerations are critical, an inability to adjust to shifts in data distribution can result in unfair biases, manifesting as inconsistent performance across di\ufb00erent demographic groups. An in\ufb02uential approach to domain generalization is Invariant Causal Prediction (ICP; [Peters et al., 2016]). ICP posits that although some aspects of data distributions (like spurious or non-causal mechanisms [Pearl, 2010]) may change across domains, certain causal mechanisms remain constant. ICP suggests fo- cusing on these invariant mechanisms for prediction. However, the estimation method for these invariant mechanisms suggested by [Peters et al., 2016] struggles with scalability in high-dimensional feature spaces. To overcome this, Arjovsky et al. [2019] introduced Invariant Risk Minimization (IRM), designed to identify these invariant mechanisms by minimizing an objective. However, requires strong assumptions for identify- ing the desired domain-general solutions [Ahuja et al., 2021, Rosenfeld et al., 2022]; for instance, observing a number of domains proportional to the spurious features\u2019 dimensions is necessary, posing a signi\ufb01cant challenge in these high-dimensional settings. Subsequent variants of IRM have been developed with improved capabilities for identifying domain- general solutions [Ahuja et al., 2020, Krueger et al., 2021, Robey et al., 2021, Wang et al., 2022, Ahuja et al., 2021]. Additionally, regularizers for Distributionally Robust Optimization with subgroup shift have been proposed (GroupDRO) [Sagawa et al., 2019]. However, despite their solid theoretical motivation, empirical evidence suggests that these methods may not consistently deliver domain-general solutions in practice Gulrajani and Lopez-Paz [2020], Kaur et al. [2022], Rosenfeld et al. [2022]. \u2217Contact: oes2@illinois.edu 1 Kaur et al. [2022] demonstrated that regularizing directly for conditional independencies implied by the generative process can give domain-general solutions, including conditional independencies beyond those considered by IRM. However, their experimental approach involves regularization terms that require direct observation of spurious features, a condition not always feasible in real-world applications. 
Our proposed methodology also leverages regularizers inspired by the conditional independencies indicated by causal graphs but, crucially, it does so without necessitating prior knowledge (or proxies) of the spurious features. 1.1 Contributions In this work, \u2022 we outline su\ufb03cient properties to uniquely identify domain-general predictors for a general set of generative processes that include domain-correlated spurious features, \u2022 we propose regularizers to implement these constraints without independent observations of the spuri- ous features, and \u2022 \ufb01nally, we show that the proposed framework outperforms the state-of-the-art on semi-synthetic and real-world data. The code for our proposed method is provided at https://github.com/olawalesalaudeen/tcri. Notation: Capital letters denote bounded random variables, and corresponding lowercase letters denote their value. Unless otherwise stated, we represent latent domain-general features as Zdg \u2208Zdg \u2261Rm and spurious latent features as Zspu \u2208Zspu \u2261Ro. Let X \u2208X \u2261Rd be the observed feature space and the output space of an invertible function \u0393 : Zdg \u00d7 Zspu 7\u2192X and Y \u2208Y \u2261{0, 1, . . ., K \u22121} be the observed label space for a K-class classi\ufb01cation task. We then de\ufb01ne feature extractors aimed at identifying latent features \u03a6dg : X 7\u2192Rm, \u03a6spu : X 7\u2192Ro so that \u03a6 : X 7\u2192Rm+o \u0000 that is \u03a6(x) = [\u03a6dg(x); \u03a6spu(x)]\u2200x \u2208X \u0001 . We de\ufb01ne e as a discrete random variable denoting domains and E = {P e(Zdg, Zspu, X, Y ) : e = 1, 2, . . .} to be the set of possible domains. Etr \u2282E is the set of observed domains available during training.", + "main_content": "The source of distribution shift can be isolated to components of the joint distribution. One special case of distribution shift is covariate shift [Shimodaira, 2000, Zadrozny, 2004, Huang et al., 2006, Gretton et al., 2009, Sugiyama et al., 2007, Bickel et al., 2009, Chen et al., 2016, Schneider et al., 2020], where only the covariate distribution P(X) changes across domains. Ben-David et al. [2009] give upper-bounds on target error based on the H-divergence between the source and target covariate distributions, which motivates domain alignment methods like the Domain Adversarial Neural Networks [Ganin et al., 2016] and others [Long et al., 2015, Blanchard et al., 2017]. Others have followed up on this work with other notions of covariate distance for domain adaptation, such as mean maximum discrepancy (MMD) [Long et al., 2016], Wasserstein distance [Courty et al., 2017], etc. However, Kpotufe and Martinet [2018] show that these divergence metrics fail to capture many important properties of transferability, such as asymmetry and non-overlapping support. Furthermore, Zhao et al. [2019] shows that even with the alignment of covariates, large distances between label distributions can inhibit transfer; they propose a label conditional importance weighting adjustment to address this limitation. Other works have also proposed conditional covariate alignment [des Combes et al., 2020, Li et al., 2018c,b]. Another form of distribution shift is label shift, where only the label distribution changes across domains. Lipton et al. [2018] propose a method to address this scenario. Schrouff et al. [2022] illustrate that many real-world problems exhibit more complex \u2019compound\u2019 shifts than just covariate or label shifts alone. 
One can leverage domain adaptation to address distribution shifts; however, these methods are contingent on having access to unlabeled or partially labeled samples from the target domain during training. When such samples are available, more sophisticated domain adaptation strategies aim to leverage and adapt spurious feature information to enhance performance [Liu et al., 2021, Zhang et al., 2021, Kirichenko et al., 2022]. 2 However, domain generalization, as a problem, does not assume access to such samples [Muandet et al., 2013]. To address the domain generalization problem, Invariant Causal Predictors (ICP) leverage shared causal structure to learn domain-general predictors [Peters et al., 2016]. Previous works, enumerated in the introduction (Section 1), have proposed various algorithms to identify domain-general predictors. Arjovsky et al. [2019]\u2019s proposed invariance risk minimization (IRM) and its variants motivated by domain invariance: min w,\u03a6 1 |Etr| X e\u2208Etr Re(w \u25e6\u03a6) s.t. w \u2208argmin e w Re( e w \u00b7 \u03a6), \u2200e \u2208Etr, where Re(w \u25e6\u03a6) = E \u0002 \u2113(y, w \u00b7 \u03a6(x)) \u0003 , with loss function \u2113, feature extractor \u03a6, and linear predictor w. This objective aims to learn a representation \u03a6 such that predictor w that minimizes empirical risks on average across all domains also minimizes within-domain empirical risk for all domains. However, Rosenfeld et al. [2020], Ahuja et al. [2020] showed that this objective requires unreasonable constraints on the number of observed domains at train times, e.g., observing distinct domains on the order of the rank of spurious features. Follow-up works have attempted to improve these limitations with stronger constraints on the problem \u2013 enumerated in the introduction section. Our method falls under domain generalization; however, unlike the domain-general solutions previously discussed, our proposed solution leverages di\ufb00erent conditions than domain invariance directly, which we show may be more suited to learning domain-general representations. 3 Causality and Domain Generalization We often represent causal relationships with a causal graph. A causal graph is a directed acyclic graph (DAG), G = (V, E), with nodes V representing random variables and directed edges E representing causal relationships, i.e., parents are causes and children are e\ufb00ects. A structural equation model (SEM) provides a mathematical representation of the causal relationships in its corresponding DAG. Each variable Y \u2208V is given by Y = fY (X) + \u03b5Y , where X denotes the parents of Y in G, fY is a deterministic function, and \u03b5Y is an error capturing exogenous in\ufb02uences on Y . The main property we need here is that fY is invariant to interventions to V \\{Y } and is consequently invariant to changes in P(V ) induced by these interventions. Interventions refer to changes to fZ, Z \u2208V \\{Y }. In this work, we focus on domain-general predictors dg that are linear functions of features with domaingeneral mechanisms, denoted as gdg := w \u25e6\u03a6dg, where w is a linear predictor and \u03a6dg identi\ufb01es features with domain-general mechanisms. We use domain-general rather than domain-invariant since domain-invariance is strongly tied to the property: Y \u22a5 \u22a5e | Zdg [Arjovsky et al., 2019]. As shown in the subsequent sections, this work leverages other properties of appropriate causal graphs to obtain domain-general features. 
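For reference, the IRM objective quoted above is commonly optimized through the IRMv1 relaxation of Arjovsky et al. [2019], which penalizes the gradient of each domain's risk with respect to a fixed scalar classifier; the sketch below shows that penalty for context only and is not the method proposed in this paper.

```python
import torch
import torch.nn.functional as F


def irmv1_penalty(logits, y):
    """Squared gradient norm of the domain risk w.r.t. a dummy classifier scale w = 1.0."""
    scale = torch.ones(1, device=logits.device, requires_grad=True)
    risk = F.binary_cross_entropy_with_logits(logits * scale, y)
    grad = torch.autograd.grad(risk, [scale], create_graph=True)[0]
    return (grad ** 2).sum()


def irm_objective(logits_per_domain, y_per_domain, lam=1.0):
    """Average empirical risk plus the IRMv1 invariance penalty over training domains."""
    risks = [F.binary_cross_entropy_with_logits(l, y) for l, y in zip(logits_per_domain, y_per_domain)]
    penalties = [irmv1_penalty(l, y) for l, y in zip(logits_per_domain, y_per_domain)]
    n = len(risks)
    return sum(risks) / n + lam * sum(penalties) / n
```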
This distinction is crucial given the challenges associated with learning domain-general features through domaininvariance methods [Rosenfeld et al., 2020]. Given the presence of a distribution shift, it\u2019s essential to identify some common structure across domains that can be utilized for out-of-distribution (OOD) generalization. For example, Shimodaira [2000] assume P(Y |X) is shared across all domains for the covariate shift problem. In this work, we consider a setting where each domain is composed of observed features and labels, X \u2208X, Y \u2208Y, where X is given by an invertible function \u0393 of two latent random variables: domain-general Zdg \u2208Zdg and spurious Zspu \u2208Zspu. By construction, the conditional expectation of the label Y given the domain-general features Zdg is the same across domains, i.e., Eei [Y |Zdg = zdg] = Eej [Y |Zdg = zdg] (1) \u2200zdg \u2208Zdg, \u2200ei \u0338= ej \u2208E. Conversely, this robustness to e does not necessarily extend to spurious features Zspu; in other words, Zspu may assume values that could lead a predictor relying on it to experience arbitrarily high error rates. Then, a sound strategy for learning a domain-general predictor \u2013 one that is robust to distribution shifts \u2013 is to identify the latent domain-general Zdg from the observed features X. 3 e Zdg Zspu Y X Figure 1: Partial Ancestral Graph representing all non-trivial and valid generative processes (DAGs); dashed edges indicate that an edge may or may not exist. The approach we take to do this is motivated by the Reichenbach Common Cause Principle, which claims that if two events are correlated, there is either a causal connection between the correlated events that is responsible for the correlation or there is a third event, a so-called (Reichenbachian) common cause, which brings about the correlation [Hitchcock and R\u00e9dei, 2021, R\u00e9dei, 2002]. This principle allows us to posit the class of generative processes or causal mechanisms that give rise to the correlated observed features and labels, where the observed features are a function of domain-general and spurious features. We represent these generative processes as causal graphs. Importantly, the mapping from a node\u2019s causal parents to itself is preserved in all distributions generated by the causal graph (Equation 1), and distributions can vary arbitrarily so long as they preserve the conditional independencies implied by the DAG (Markov Property [Pearl, 2010]). We now enumerate DAGs that give observe features with spurious correlations with the label. Valid DAGs. We consider generative processes, where both latent features, Zspu, Zdg, and observed X are correlated with Y , and the observed X is a function of only Zdg and Zspu (Figure 1). Given this setup, there is an enumerable set of valid generative processes. Such processes are (i) without cycles, (ii) are feature complete \u2013 including edges from Zdg and Zspu to X, i.e., Zdg \u2192X \u2190Zspu, and (iii) where the observed features mediate domain in\ufb02uence, i.e., there is no direct domain in\ufb02uence on the label e \u0338\u2192Y . We discuss this enumeration in detail in Appendix B. The result of our analysis is identifying a representative set of DAGs that describe valid generative processes \u2013 these DAGs come from orienting the partial ancestral graph (PAG) in Figure 1. 
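To make the invariance in Equation 1 concrete, the following numpy sketch generates two domains from a simple Zdg → Y → Zspu process in which only the spurious mechanism changes across domains (the coefficients and noise scales are our own toy choices, not the paper's). The least-squares slope of Y on Zdg stays the same in both domains, while the slope of Y on Zspu does not.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_domain(n, sigma_spu):
    z_dg = rng.normal(size=n)
    y = 2.0 * z_dg + 0.5 * rng.normal(size=n)    # stable mechanism Zdg -> Y
    z_spu = y + sigma_spu * rng.normal(size=n)   # domain-dependent mechanism Y -> Zspu
    return z_dg, z_spu, y

def ols_slope(feature, target):
    return np.cov(feature, target)[0, 1] / np.var(feature)

for name, sigma in [("domain A", 0.1), ("domain B", 2.0)]:
    z_dg, z_spu, y = make_domain(100_000, sigma)
    print(f"{name}: slope of Y on Zdg = {ols_slope(z_dg, y):.2f}, "
          f"slope of Y on Zspu = {ols_slope(z_spu, y):.2f}")
# The Zdg slope stays ~2.0 in both domains (Equation 1); the Zspu slope shifts with sigma.
```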
We compare the conditional independencies implied by the DAGs de\ufb01ned by Figure 1 as illustrated in Figure 2, resulting in three canonical DAGs in the literature (see Appendix B for further discussion). Other DAGs that induce spurious correlations are outside the scope of this work. e Zdg Zspu Y X (a) Causal [Arjovsky et al., 2019]. e Zdg Zspu Y X (b) Anticausal [Rosenfeld et al., 2020]. e Zdg Zspu Y X (c) Fully Informative Causal [Ahuja et al., 2021]. Figure 2: Generative Processes. Graphical models depicting the structure of possible data-generating processes \u2013 shaded nodes indicate observed variables. X represents the observed features, Y represents observed targets, and e represents domain in\ufb02uences (domain indexes in practice). There is an explicit separation of domain-general Zdg and domain-speci\ufb01c Zspu features; they are combined to generate observed X. Dashed edges indicate the possibility of an edge. Conditional independencies implied by identi\ufb01ed DAGs (Figure 2). 4 Table 1: Generative Processes and Su\ufb03cient Conditions for Domain-Generality Graphs in Figure 2 (a) (b) (c) Zdg \u22a5 \u22a5Zspu | {Y, e} \u2713 \u2713 \u2717 Identifying Zdg is necessary \u2713 \u2713 \u2717 Fig. 2a: Zdg \u22a5 \u22a5Zspu | {Y, e}; Y \u22a5 \u22a5e | Zdg. This causal graphical model implies that the mapping from Zdg to its causal child Y is preserved and consequently, Equation 1 holds [Pearl, 2010, Peters et al., 2016]. As an example, consider the task of predicting the spread of a disease. Features may include causes (vaccination rate and public health policies) and e\ufb00ects (coughing). e is the time of month; the distribution of coughing changes depending on the season. Fig. 2b: Zdg \u22a5 \u22a5Zspu | {Y, e}; Zdg \u22a5 \u22a5Zspu | Y ; Y \u22a5 \u22a5e | Zdg, Zdg \u22a5 \u22a5e. The causal graphical model does not directly imply that Zdg \u2192Y is preserved across domains. However, in this work, it represents the setting where the inverse of the causal direction is preserved (inverse: Zdg \u2192Y ), and thus Equation 1 holds. A context where this setting is relevant is in healthcare where medical conditions (Y ) cause symptoms (Zdg), but the prediction task is often predicting conditions from symptoms, and this mapping Zdg \u2192Y , opposite of the causal direction, is preserved across distributions. Again, we may consider e as the time of month; the distribution of coughing changes depending on the season. Fig. 2c: Y \u22a5 \u22a5e | Zdg; Zdg \u22a5 \u22a5e. Similar to Figure 2a, this causal graphical model implies that the mapping from Zdg to its causal child Y is preserved, so Equation 1 holds [Pearl, 2010, Peters et al., 2016]. This setting is especially interesting because it represents a Fully Informative Invariant Features setting, that is Zspu \u22a5 \u22a5Y | Zdg [Ahuja et al., 2021]. Said di\ufb00erently, Zspu does not induce a backdoor path from e to Y that Zdg does not block. As an example of this, we can consider the task of predicting hospital readmission rates. Features may include the severity of illness, which is a direct cause of readmission rates, and also include the length of stay, which is also caused by the severity of illness. However, length of stay may not be a cause of readmission; the correlation between the two would be a result of the confounding e\ufb00ect of a common cause, illness severity. e is an indicator for distinct hospitals. We call the condition Y \u22a5 \u22a5e | Zdg the domain invariance property. 
This condition is common to all the DAGs in Figure 2. We call the condition Zdg \u22a5 \u22a5Zspu | {Y, e} the target conditioned representation independence (TCRI) property. This condition is common to the DAGs in Figure 2a, 2b. In the settings considered in this work, the TCRI property is equivalently Zdg \u22a5 \u22a5Zspu | Y\u2200e \u2208E since e will simply index the set of empirical distributions available at training. Domain generalization with conditional independencies. Kaur et al. [2022] showed that su\ufb03ciently regularizing for the correct conditional independencies described by the appropriate DAGs can give domaingeneral solutions, i.e., identi\ufb01es Zdg. However, in practice, one does not (partially) observe the latent features independently to regularize directly. Other works have also highlighted the need to consider generative processes when designing robust algorithms to distribute shifts [Veitch et al., 2021, Makar et al., 2022]. However, previous work has largely focused on regularizing for the domain invariance property, ignoring the conditional independence property Zdg \u22a5 \u22a5Zspu | Y, e. Su\ufb03ciency of ERM under Fully Informative Invariant Features. Despite the known challenges of learning domain-general features from the domain-invariance properties in practice, this approach persists, 5 likely due to it being the only property shared across all DAGs. We alleviate this constraint by observing that Graph (Fig. 2c) falls under what Ahuja et al. [2021] refer to as the fully informative invariant features settings, meaning that Zspu is redundant, having only information about Y that is already in Zdg. Ahuja et al. [2021] show that the empirical risk minimizer is domain-general for bounded features. Easy vs. hard DAGs imply the generality of TCRI. Consequently, we categorize the generative processes into easy and hard cases Table 1: (i) easy meaning that minimizing average risk gives domaingeneral solutions, i.e., ERM is su\ufb03cient (Fig. 2c), and (ii) hard meaning that one needs to identify Zdg to obtain domain-general solutions (Figs. 2a-2b). We show empirically that regularizing for Zdg \u22a5 \u22a5Zspu | Y \u2200e \u2208 E also gives a domain-general solution in the easy case. The generality of TCRI follows from its su\ufb03ciency for identifying domain-general Zdg in the hard cases while still giving domain-general solutions empirically in the easy case. 4 Proposed Learning Framework We have now clari\ufb01ed that hard DAGs (i.e., those not solved by ERM) share the TCRI property. The challenge is that Zdg and Zspu are not independently observed; otherwise, one could directly regularize. Existing work such as Kaur et al. [2022] empirically study semi-synthetic datasets where Zspu is (partially) observed and directly learn Zdg by regularizing that \u03a6(X) \u22a5 \u22a5Zspu | Y, e for feature extractor \u03a6. To our knowledge, we are the \ufb01rst to leverage the TCRI property without requiring observation of Zspu. Next, we set up our approach with some key assumptions. The \ufb01rst is that the observed distributions are Markov to an appropriate DAG. Assumption 4.1. 
All distributions, sources and targets, are generated by one of the structural causal models SCM that follow: causal z }| { SCM(e) := \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 Z(e) dg \u223cP (e) Zdg, Y (e) \u2190\u27e8w\u2217 dg, Z(e) dg \u27e9+ \u03b7Y , Z(e) spu \u2190\u27e8w\u2217 spu, Y \u27e9+ \u03b7(e) Zspu, X \u2190\u0393(Zdg, Zspu), (2) anticausal z }| { SCM(e) := \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 Y (e) \u223cPY , Z(e) dg \u2190\u27e8e wdg, Y \u27e9+ \u03b7(e) Zdg, Z(e) spu \u2190\u27e8w\u2217 spu, Y \u27e9+ \u03b7(e) Zspu, X \u2190\u0393(Zdg, Zspu), (3) F IIF z }| { SCM(e) := \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 Z(e) dg \u223cP (e) Zdg, Y (e) \u2190\u27e8w\u2217 dg, Z(e) dg \u27e9+ \u03b7Y , Z(e) spu \u2190\u27e8w\u2217 spu, Zdg\u27e9+ \u03b7(e) Zspu, X \u2190\u0393(Zdg, Zspu), (4) where PZdg is the causal covariate distribution, w\u2019s are linear generative mechanisms, \u03b7\u2019s are exogenous independent noise variables, and \u0393 : Zdg \u00d7 Zspu \u2192X is an invertible function. It follows from having causal mechanisms that we can learn a predictor w\u2217 dg for Zdg that is domain-general (Equation 2-4) \u2013 w\u2217 dg inverts the mapping e wdg in the anticausal case. These structural causal models (Equation 2-4) correspond to causal graphs Figures 2a-2c, respectively. Assumption 4.2 (Structural). Causal Graphs and their distributions are Markov and Faithful [Pearl, 2010]. Given Assumption 4.2, we aim to leverage TCRI property (Zdg \u22a5 \u22a5Zspu | Y \u2200e \u2208Etr) to learn the latent Zdg without observing Zspu directly. We do this by learning two feature extractors that, together, recover Zdg and Zspu and satisfy TCRI (Figure 3). We formally de\ufb01ne these properties as follows. De\ufb01nition 4.3 (Total Information Criterion (TIC)). \u03a6 = \u03a6dg \u2295\u03a6spu satis\ufb01es TIC with respect to random variables X, Y, e if for \u03a6(Xe) = [\u03a6dg(Xe); \u03a6spu(Xe)], there exists a linear operator T s.t., T (\u03a6(Xe)) = [Ze dg; Ze spu]\u2200e \u2208Etr. 6 Xe \u03a6dg \u03a6spu b Zdg \u03b8c \u2295 b Zspu \u03b8e b yc b ye Figure 3: Modeling approach. During training, both representations, \u03a6dg, and \u03a6spu, generate domaingeneral and domain-speci\ufb01c predictions, respectively. However, only the domain-invariant representations/predictions are used during testing \u2013 indicated by the solid red arrows. In other words, a feature extractor that satis\ufb01es the total information criterion recovers the complete latent feature sets Zdg, Zspu. This allows us to de\ufb01ne the proposed implementation of the TCRI property non-trivially \u2013 the conditional independence of subsets of the latents may not have the same implications on domain generalization. We note that X \u22a5 \u22a5Y |Zdg, Zspu, so X has no information about Y that is not in Zdg, Zspu. De\ufb01nition 4.4 (Target Conditioned Representation Independence). \u03a6 = \u03a6dg \u2295\u03a6spu satis\ufb01es TCRI with respect to random variables X, Y, e if \u03a6dg(X) \u22a5 \u22a5\u03a6spu(X) | Y \u2200e \u2208E. Proposition 4.5. Assume that \u03a6dg(X) and \u03a6spu(X) are correlated with Y . Given Assumptions 4.1-4.2 and a representation \u03a6 = \u03a6dg \u2295\u03a6spu that satis\ufb01es TIC, \u03a6dg(X) = Zdg \u21d0 \u21d2\u03a6 satis\ufb01es TCRI. (see Appendix C for proof). 
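Stepping back to the structural equations of Assumption 4.1, which are hard to read after extraction, the sketch below writes out one plausible scalar-latent reading of the three SCM families (causal, anticausal, and fully informative) and checks, via partial correlation, that the first two satisfy the TCRI-style independence Zdg ⊥ Zspu | Y while the fully informative one does not. The weights, noise scales, and fixed invertible mixing Γ are our own illustrative choices rather than the paper's exact specification.

```python
import numpy as np

rng = np.random.default_rng(0)
Gamma = np.array([[1.0, 0.5], [0.3, 1.0]])   # fixed invertible mixing (Zdg, Zspu) -> X

def mix(z_dg, z_spu):
    return np.stack([z_dg, z_spu], axis=1) @ Gamma.T

def sample_causal(n, sigma_e):
    """Eq. (2)-style: Zdg -> Y -> Zspu; the domain shifts P(Zdg) and the Zspu noise."""
    z_dg = sigma_e * rng.normal(size=n)
    y = z_dg + 0.25 * rng.normal(size=n)
    z_spu = y + sigma_e * rng.normal(size=n)
    return z_dg, z_spu, y

def sample_anticausal(n, sigma_e):
    """Eq. (3)-style: Y -> Zdg and Y -> Zspu, with domain-dependent noise on Zspu."""
    y = rng.normal(size=n)
    z_dg = y + 0.25 * rng.normal(size=n)
    z_spu = y + sigma_e * rng.normal(size=n)
    return z_dg, z_spu, y

def sample_fiif(n, sigma_e):
    """Eq. (4)-style (fully informative): Zdg -> Y and Zdg -> Zspu."""
    z_dg = sigma_e * rng.normal(size=n)
    y = z_dg + 0.25 * rng.normal(size=n)
    z_spu = z_dg + 0.25 * rng.normal(size=n)
    return z_dg, z_spu, y

def partial_corr(a, b, c):
    """Correlation of a and b after linearly regressing out c."""
    def residual(v):
        return v - (np.cov(v, c)[0, 1] / np.var(c)) * c
    return np.corrcoef(residual(a), residual(b))[0, 1]

for name, sampler in [("causal", sample_causal), ("anticausal", sample_anticausal),
                      ("FIIF", sample_fiif)]:
    z_dg, z_spu, y = sampler(100_000, sigma_e=1.0)
    x = mix(z_dg, z_spu)   # what a learner would actually observe
    print(f"{name:10s}: corr(Zdg, Zspu | Y) ~ {partial_corr(z_dg, z_spu, y):+.2f}")
# Expected: ~0 for the causal and anticausal graphs, clearly nonzero for FIIF (cf. Table 1).
```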
Proposition 4.5 shows that TCRI is necessary and su\ufb03cient to identify Zdg from a set of training domains. We note that we can verify if \u03a6dg(X) and \u03a6spu(X) are correlated with Y by checking if the learned predictors are equivalent to chance. Next, we describe our proposed algorithm to implement the conditions to learn such a feature map. Figure 3 illustrates the learning framework. Learning Objective: The \ufb01rst term in our proposed objective is L\u03a6dg = Re(\u03b8c \u25e6\u03a6dg), where \u03a6dg : X 7\u2192Rm is a feature extractor, \u03b8c : Rm 7\u2192Y is a linear predictor, and Re(\u03b8c \u25e6\u03a6dg) = E \u0002 \u2113(y, \u03b8c \u00b7 \u03a6(x)) \u0003 is the empirical risk achieved by the feature extractor and predictor pair on samples from domain e. \u03a6dg and \u03b8c are designed to capture the domain-general portion of the framework. Next, to implement the total information criterion, we use another feature extractor \u03a6spu : X 7\u2192Ro, designed to capture the domain-speci\ufb01c information in X that is not captured by \u03a6dg. Together, we have \u03a6 = \u03a6dg \u2295\u03a6spu where \u03a6 has domain-speci\ufb01c predictors \u03b8e : Rm+o 7\u2192Y for each training domain, allowing the feature extractor to utilize domain-speci\ufb01c information to learn distinct optimal domain-speci\ufb01c (nongeneral) predictors: L\u03a6 = Re\u0000\u03b8e \u25e6\u03a6 \u0001 . L\u03a6 aims to ensure that \u03a6dg and \u03a6spu capture all of the information about Y in X \u2013 total information criterion. Since we do not know o, m, we select them to be the same size on our experiments; o, m could be treated as hyperparameters though we do not treat them as such. Finally, we implement the TCRI property (De\ufb01nition 4.4). We denote LT CRI to be a conditional independence penalty for \u03a6dg and \u03a6spu. We utilize the Hilbert Schmidt independence Criterion (HSIC) [Gretton et al., 2007] as LT CRI. However, in principle, any conditional independence penalty can be used in its place. HSIC: LT CRI(\u03a6dg, \u03a6spu) = 1 2 X k\u2208{0,1} \\ HSIC \u0010 \u03a6dg(X), \u03a6spu(X) \u0011y=k = 1 2 X k\u2208{0,1} 1 n2 k tr \u0010 K\u03a6dgHnkK\u03a6spuHnk \u0011y=k , 7 where k, indicates which class the examples in the estimate correspond to, C is the number of classes, K\u03a6dg \u2208 Rnk\u00d7nk, K\u03a6spu \u2208Rnk\u00d7nk are Gram matrices, Ki,j \u03a6 = \u03ba(\u03a6dg(X)i, \u03a6dg(X)j), Ki,j \u03a6spu = \u03c9(\u03a6spu(X)i, \u03a6spu(X)j) with kernels \u03ba, \u03c9 are radial basis functions, Hnk = Ink \u2212 1 n2 k 11\u22a4is a centering matrix, Ink is the nk \u00d7 nk dimensional identity matrix, 1nk is the nk-dimensional vector whose elements are all 1, and \u22a4denotes the transpose. We condition on the label by taking only examples of each label and computing the empirical HSIC; then, we take the average. Taken together, the full objective to be minimized is as follows: L = 1 Etr X e\u2208Etr \" Re(\u03b8c \u25e6\u03a6dg) + Re(\u03b8e \u25e6\u03a6) + \u03b2LT CRI(\u03a6dg, \u03a6spu) # , where \u03b2 > 0 is a hyperparameter and Etr is the number of training domains. Figure 3 shows the full framework. We note that when \u03b2 = 0, this loss reduces to ERM. Note that while we minimize this objective with respect to \u03a6, \u03b8c, \u03b81, . . . , \u03b8Etr, only the domain-general representation and its predictor, \u03b8c \u00b7 \u03a6dg are used for inference. 
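To make the objective concrete, below is a minimal PyTorch sketch of the class-conditional HSIC penalty and the combined loss described above. The RBF bandwidth, network sizes, and synthetic smoke-test data are our own choices, we use the standard centering matrix H = I − (1/n)11ᵀ (we read the 1/n² factor in the displayed formula as an extraction artifact), and this is an illustrative reconstruction rather than the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def rbf_gram(z: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    return torch.exp(-torch.cdist(z, z) ** 2 / (2 * sigma ** 2))

def hsic(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Biased empirical HSIC: (1/n^2) tr(K_a H K_b H) with H = I - (1/n) 11^T."""
    n = a.shape[0]
    H = torch.eye(n, device=a.device) - torch.ones(n, n, device=a.device) / n
    return torch.trace(rbf_gram(a) @ H @ rbf_gram(b) @ H) / (n ** 2)

def tcri_penalty(z_dg, z_spu, y, num_classes=2):
    """Class-conditional HSIC between Phi_dg(X) and Phi_spu(X), averaged over labels."""
    terms = []
    for k in range(num_classes):
        idx = (y == k).nonzero(as_tuple=True)[0]
        if idx.numel() > 1:
            terms.append(hsic(z_dg[idx], z_spu[idx]))
    return torch.stack(terms).mean() if terms else torch.zeros((), device=z_dg.device)

# Feature extractors and heads (sizes are illustrative).
phi_dg, phi_spu = nn.Linear(32, 8), nn.Linear(32, 8)
theta_c = nn.Linear(8, 2)                                      # shared domain-general head
theta_e = nn.ModuleList([nn.Linear(16, 2) for _ in range(3)])  # one head per training domain
params = (list(phi_dg.parameters()) + list(phi_spu.parameters())
          + list(theta_c.parameters()) + list(theta_e.parameters()))
opt, beta = torch.optim.Adam(params, lr=1e-3), 1.0

def training_step(batches):
    """batches: one (x, y) minibatch per training domain."""
    loss = 0.0
    for e, (x, y) in enumerate(batches):
        z_dg, z_spu = phi_dg(x), phi_spu(x)
        loss = loss + F.cross_entropy(theta_c(z_dg), y)                                 # R^e(theta_c . Phi_dg)
        loss = loss + F.cross_entropy(theta_e[e](torch.cat([z_dg, z_spu], dim=-1)), y)  # R^e(theta_e . Phi)
        loss = loss + beta * tcri_penalty(z_dg, z_spu, y)                               # TCRI term
    loss = loss / len(batches)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Synthetic smoke test with three training domains; only theta_c . phi_dg is used at inference.
batches = [(torch.randn(64, 32), torch.randint(0, 2, (64,))) for _ in range(3)]
print(training_step(batches))
```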
5 Experiments We begin by evaluating with simulated data, i.e., with known ground truth mechanisms; we use Equation 5 to generate our simulated data, with domain parameter \u03c3ei; code is provided in the supplemental materials. SCM(ei) := \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 Z(ei) dg \u223cN \u00000, \u03c32 ei \u0001 y(ei) = Z(ei) dg + N \u00000, \u03c32 y \u0001 , Z(ei) spu = Y (ei) + N \u00000, \u03c32 ei \u0001 . (5) Table 2: Continuous Simulated Results \u2013 Feature Extractor with a dummy predictor \u03b8c = 1., i.e., b y = x \u00b7 \u03a6dg \u00b7 w, where x \u2208RN\u00d72, \u03a6dg, \u03a6spu \u2208R2\u00d71, w \u2208R. Oracle indicates the coe\ufb03cients achieved by regressing y on zc directly. Algorithm (\u03a6dg)0 (\u03a6dg)1 (i.e., Zdg weight) (i.e., Zspu weight) ERM 0.29 0.71 IRM 0.28 0.71 TCRI 1.01 0.06 Oracle 1.04 0.00 We observe 2 domains with parameters \u03c3e=0 = 0.1, \u03c3e=1 = 0.2 with \u03c3y = 0.25, 5000 samples, and linear feature extractors and predictors. We use partial covariance as our conditional independence penalty LT CRI. Table 2 shows the learned value of \u03a6dg, where \u2018Oracle\u2019 indicates the true coe\ufb03cients obtained by regressing Y on domain-general Zdg directly. The ideal \u03a6dg recovers Zdg and puts zero weight on Zspu. Now, we evaluate the e\ufb03cacy of our proposed objective on non-simulated datasets. 5.1 Semisynthetic and Real-World Datasets Algorithms: We compare our method to baselines corresponding to DAG properties: Empirical Risk Minimization (ERM, [Vapnik, 1991]), Invariant Risk Minimization (IRM [Arjovsky et al., 2019]), Variance Risk Extrapolation (V-REx, [Krueger et al., 2021]), [Li et al., 2018a]), Group Distributionally Robust Optimization (GroupDRO), [Sagawa et al., 2019]), and Information Bottleneck methods (IB_ERM/IB_IRM, [Ahuja et al., 2021]). Additional baseline methods are provided in the Appendix A. We evaluate our proposed method on the semisynthetic ColoredMNIST [Arjovsky et al., 2019] and realworld Terra Incognita dataset [Beery et al., 2018]. Given observed domains Etr = {e : 1, 2, . . . , Etr}, we train on Etr \\ ei and evaluate the model on the unseen domain ei, for each e \u2208Etr. ColoredMNIST: The ColoredMNIST dataset [Arjovsky et al., 2019] is composed of 7000 (2 \u00d7 28 \u00d7 28, 1) images of a hand-written digit and binary-label pairs. There are three domains with di\ufb00erent correlations between image color and label, i.e., the image color is spuriously related to the label by assigning a color to 8 each of the two classes (0: digits 0-4, 1: digits 5-9). The color is then \ufb02ipped with probabilities {0.1, 0.2, 0.9} to create three domains, making the color-label relationship domain-speci\ufb01c because it changes across domains. There is also label \ufb02ip noise of 0.25, so we expect that the best accuracy a domain-general model can achieve is 75%, while a non-domain general model can achieve higher. In this dataset, Zdg corresponds to the original image, Zspu the color, e the label-color correlation, Y the image label, and X the observed colored image. This DAG follows the generative process of Figure 2a [Arjovsky et al., 2019]. Spurrious PACS: Variables. X: images, Y : non-urban (elephant, gira\ufb00e, horse) vs. urban (dog, guitar, house, person). Domains. {{cartoon, art painting}, {art painting, cartoon}, {photo}} [Li et al., 2017]. The photo domain is the same as in the original dataset. 
In the {cartoon, art painting} domain, urban examples are selected from the original cartoon domain, while non-urban examples are selected from the original art painting domain. In the {art painting, cartoon} domain, urban examples are selected from the original art painting domain, while non-urban examples are selected from the original cartoon domain. This sampling encourages the model to use spurious correlations (domain-related information) to predict the labels; however, since these relationships are \ufb02ipped between domains {{cartoon, art painting} and {art painting, cartoon}, these predictions will be wrong when generalized to other domains. Terra Incognita: The Terra Incognita dataset contains subsets of the Caltech Camera Traps dataset [Beery et al., 2018] de\ufb01ned by [Gulrajani and Lopez-Paz, 2020]. There are four domains representing di\ufb00erent locations {L100, L38, L43, L46} of cameras in the American Southwest. There are 9 species of wild animals {bird, bobcat, cat, coyote, dog, empty, opossum, rabbit, raccoon, squirrel} and a \u2018no-animal\u2019 class to be predicted. Like Ahuja et al. [2021], we classify this dataset as following the generative process in Figure 2c, the Fully Informative Invariant Features (FIIF) setting. Additional details on model architecture, training, and hyperparameters are detailed in Appendix 5. Model Selection. The standard approach for model selection is a training-domain hold-out validation set accuracy. We \ufb01nd that model selection across hyperparameters using this held-out training domain validation accuracy often returns non-domain-general models in the \u2018hard\u2019 cases. One advantage of our model is that we can do model selection based on the TCRI condition (conditional independence between the two representations) on held-out training domain validation examples to mitigate this challenge. In the easy case, we expect the empirical risk minimizer to be domain-general, so selecting the best-performing trainingdomain model is sound \u2013 we additionally do this for all baselines (see Appendix A.1 for further discussion). We \ufb01nd that, empirically, this heuristic works in the examples we study in this work. Nevertheless, model selection under distribution shift remains a signi\ufb01cant bottleneck for domain generalization. 5.2 Results and Discussion Table 3: E\\etest \u2192etest (model selection on held-out source domains validation set). The \u2018mean\u2019 column indicates the average generalization accuracy over all three domains as the etest distinctly; the \u2018min\u2019 column indicates the worst generalization accuracy. 
ColoredMNIST Spurious PACS Terra Incognita Algorithm average worst-case average worst-case average worst-case ERM 51.6 \u00b1 0.1 10.0 \u00b1 0.1 57.2 \u00b1 0.7 31.2 \u00b1 1.3 44.2 \u00b1 1.8 35.1 \u00b1 2.8 IRM 51.7 \u00b1 0.1 9.9 \u00b1 0.1 54.7 \u00b1 0.8 30.3 \u00b1 0.3 38.9 \u00b1 3.7 32.6 \u00b1 4.7 GroupDRO 52.0 \u00b1 0.1 9.9 \u00b1 0.1 58.5 \u00b1 0.4 37.7 \u00b1 0.7 47.8 \u00b1 0.9 39.9 \u00b1 0.7 VREx 51.7 \u00b1 0.2 10.2 \u00b1 0.0 58.8 \u00b1 0.4 37.5 \u00b1 1.1 45.1 \u00b1 0.4 38.1 \u00b1 1.3 IB_ERM 51.5 \u00b1 0.2 10.0 \u00b1 0.1 56.3 \u00b1 1.1 35.5 \u00b1 0.4 46.0 \u00b1 1.4 39.3 \u00b1 1.1 IB_IRM 51.7 \u00b1 0.0 9.9 \u00b1 0.0 55.9 \u00b1 1.2 33.8 \u00b1 2.2 37.0 \u00b1 2.8 29.6 \u00b1 4.1 TCRI_HSIC 59.6 \u00b1 1.8 45.1 \u00b1 6.7 63.4 \u00b1 0.2 62.3 \u00b1 0.2 49.2 \u00b1 0.3 40.4 \u00b1 1.6 9 Table 4: Total Information Criterion: Domain General (DG) and Domain Speci\ufb01c (DS) Accuracies. The DG classi\ufb01er is shared across all training domains, and the DS classi\ufb01ers are trained on each domain. The \ufb01rst row indicates the domain from which the held-out examples are sampled, and the second indicates which domain-speci\ufb01c predictor is used. {+90%, +80%, -90%} indicate domains \u2013 {0.1, 0.2, 0.9} digit label and color correlation, respectively. DG Classi\ufb01er DS Classi\ufb01er on +90 DS Classi\ufb01er on +80 DS Classi\ufb01er on -90 Test Domain No DS clf. +90% +80% -90% +90% +80% -90% +90% +80% -90% +90% +80% -90% +90% 68.7 69.0 68.5 90.1 9.8 79.9 20.1 10.4 89.9 +80% 63.1 62.4 64.4 76.3 24.3 70.0 30.4 24.5 76.3 -90% 65.6 63.4 44.1 75.3 75.3 69.2 69.5 29.3 26.0 Table 5: TIC ablation for ColoredMNIST. Algorithm average worst-case TCRI_HSIC (No TIC) 51.8 \u00b1 5.9 27.7 \u00b1 8.9 TCRI_HSIC 59.6 \u00b1 1.8 45.1 \u00b1 6.7 Worst-domain Accuracy. A critical implication of domain generality is stability \u2013 robustness in worstdomain performance up to domain di\ufb03culty. While average accuracy across domains provides some insight into an algorithm\u2019s ability to generalize to new domains, the average hides the variance of performance across domains. Average improvement can be increased while the worst-domain accuracy stays the same or decreases, leading to incorrect conclusions about domain generalization. Additionally, in real-world challenges such as algorithmic fairness where worst-group performance is considered, some metrics or fairness are analogous to achieving domain generalization [Creager et al., 2021]. Results. TCRI achieves the highest average and worst-case accuracy across all baselines (Table 3). We \ufb01nd no method recovers the exact domain-general model\u2019s accuracy of 75%. However, TCRI achieves over 7% increase in both average accuracy and worst-case accuracy. Appendix A.2 shows transfer accuracies with cross-validation on held-out test domain examples (oracle) and TCRI again outperforms all baselines, achieving an average accuracy of 70.0% \u00b1 0.4% and a worst-case accuracy of 65.7% \u00b1 1.5, showing that regularizing for TCRI gives very close to optimal domain-general solutions. Similarly, for the Spurious-PACS dataset, we observe that TCRI outperforms the baselines. TRCI achieves the highest average accuracy of 63.4% \u00b1 0.2 and worst-case accuracy of 62.3% \u00b1 0.1 with the next best, VREx, achieving 58.8 \u00b1 1.0 and 33.8 \u00b1 0.0, respectively. 
Additionally, for the Terra-Incognita dataset, TCRI achieves the highest average and worst-case accuracies of 49.2% \u00b1 0.3% and 40.4% \u00b1 1.6% with the next best, GroupDRO, achieving 47.8 \u00b1 0.9 and 39.9 \u00b1 0.7, respectively. Appendix A.2 shows transfer accuracies with cross-validation held-out target domain examples (oracle) where we observe that TCRI also obtains the highest average and worst-case accuracy for Spurrious-PACS and Terra Incognita. Overall, regularizing for TCRI gives the most domain-general solutions compared to our baselines, achieving the highest worst-case accuracy on all benchmarks. Additionally, TCRI achieves the highest average accuracy on ColoredMNIST and Spurious-PAC and the second highest on Terra Incognita, where we expect the empirical risk minimizer to be domain-general. Additional results are provided in the Appendix A. The E\ufb00ect of the Total Information Criterion. Without the TIC loss term, our proposed method is less e\ufb00ective. Table 5 shows that for Colored MNIST, the hardest \u2018hard\u2019 case we encounter, removing the TIC criteria, performs worse in average and worst case accuracy, dropping over 8% and 18, respectively. Separation of Domain General and Domain Speci\ufb01c Features . In the case of Colored MNIST, we can reason about the extent of feature disentanglement from the accuracies achieved by the domain-general and domain-speci\ufb01c predictors. Table 4 shows how much each component of \u03a6, \u03a6dg and \u03a6spu, behaves as 10 expected. For each domain, we observe that the domain-speci\ufb01c predictors\u2019 accuracies follow the same trend as the color-label correlation, indicating that they capture the color-label relationship. The domain-general predictor, however, does not follow such a trend, indicating that it is not using color as the predictor. For example, when evaluating the domain-speci\ufb01c predictors from the +90% test domain experiment (row +90%) on held-out examples from the +80% training domain (column \"DS Classi\ufb01er on +80%\"), we \ufb01nd that the +80% domain-speci\ufb01c predictor achieves an accuracy of nearly 79.9% \u2013 exactly what one would expect from a predictor that uses a color correlation with the same direction \u2018+\u2019. Conversely, the -90% predictor achieves an accuracy of 20.1%, exactly what one would expect from a predictor that uses a color correlation with the opposite direction \u2018-\u2019. The -90% domain has the opposite label-color pairing, so a color-based classi\ufb01er will give the opposite label in any \u2018+\u2019 domain. Another advantage of this method, exempli\ufb01ed by Table 4, is that if one believes a particular domain is close to one of the training domains, one can opt to use the close domain\u2019s domain-speci\ufb01c predictor and leverage spurious information to improve performance. On Benchmarking Domain Generalization. Previous work on benchmarking domain generalization showed that across standard benchmarks, the domain-unaware empirical risk minimizer outperforms or achieves equivalent performance to the state-of-the-art domain generalization methods [Gulrajani and Lopez-Paz, 2020]. Additionally, Rosenfeld et al. [2022] gives results that show weak conditions that de\ufb01ne regimes where the empirical risk minimizer across domains is optimal in both average and worst-case accuracy. 
Consequently, to accurately evaluate our work and baselines, we focus on settings where it is clear that (i) the empirical risk minimizer fails, (ii) spurious features, as we have de\ufb01ned them, do not generalize across the observed domains, and (iii) there is room for improvement via better domain-general predictions. We discuss this point further in the Appendix A.1. Oracle Transfer Accuracies. While model selection is an integral part of the machine learning development cycle, it remains a non-trivial challenge when there is a distribution shift. While we have proposed a selection process tailored to our method that can be generalized to other methods with an assumed causal graph, we acknowledge that model selection under distribution shift is still an important open problem. Consequently, we disentangle this challenge from the learning problem and evaluate an algorithm\u2019s capacity to give domain-general solutions independently of model selection. We report experimental reports using heldout test-set examples for model selection in Appendix A Table 6. We \ufb01nd that our method, TCRI_HSIC, also outperforms baselines in this setting. 6 Conclusion and Future Work We reduce the gap in learning domain-general predictors by leveraging conditional independence properties implied by generative processes to identify domain-general mechanisms. We do this without independent observations of domain-general and spurious mechanisms and show that our framework outperforms other state-of-the-art domain-generalization algorithms on real-world datasets in average and worst-case across domains. Future work includes further improvements to the framework to fully recover the strict set of domain-general mechanisms and model selection strategies that preserve desired domain-general properties. Acknowledgements OS was partially supported by the UIUC Beckman Institute Graduate Research Fellowship, NSF-NRT 1735252. This work is partially supported by the NSF III 2046795, IIS 1909577, CCF 1934986, NIH 1R01MH116226-01A, NIFA award 2020-67021-32799, the Alfred P. Sloan Foundation, and Google Inc." +} \ No newline at end of file diff --git a/intro_8K/test_introduction_short_2404.16283v1.json b/intro_8K/test_introduction_short_2404.16283v1.json new file mode 100644 index 0000000000000000000000000000000000000000..4642405a2f03858a5c43a8287b725b3cf0155f70 --- /dev/null +++ b/intro_8K/test_introduction_short_2404.16283v1.json @@ -0,0 +1,17 @@ +{ + "url": "http://arxiv.org/abs/2404.16283v1", + "title": "Andes: Defining and Enhancing Quality-of-Experience in LLM-Based Text Streaming Services", + "abstract": "The advent of large language models (LLMs) has transformed text-based\nservices, enabling capabilities ranging from real-time translation to AI-driven\nchatbots. However, existing serving systems primarily focus on optimizing\nserver-side aggregate metrics like token generation throughput, ignoring\nindividual user experience with streamed text. As a result, under high and/or\nbursty load, a significant number of users can receive unfavorable service\nquality or poor Quality-of-Experience (QoE). In this paper, we first formally\ndefine QoE of text streaming services, where text is delivered incrementally\nand interactively to users, by considering the end-to-end token delivery\nprocess throughout the entire interaction with the user. Thereafter, we propose\nAndes, a QoE-aware serving system that enhances user experience for LLM-enabled\ntext streaming services. 
At its core, Andes strategically allocates contended\nGPU resources among multiple requests over time to optimize their QoE. Our\nevaluations demonstrate that, compared to the state-of-the-art LLM serving\nsystems like vLLM, Andes improves the average QoE by up to 3.2$\\times$ under\nhigh request rate, or alternatively, it attains up to 1.6$\\times$ higher\nrequest rate while preserving high QoE.", + "authors": "Jiachen Liu, Zhiyu Wu, Jae-Won Chung, Fan Lai, Myungjin Lee, Mosharaf Chowdhury", + "published": "2024-04-25", + "updated": "2024-04-25", + "primary_cat": "cs.DC", + "cats": [ + "cs.DC", + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "Large language Models (LLMs) [4, 9, 21, 46, 51] have revo- lutionized natural language processing. By generating con- textually relevant responses, they power a wide range of applications, more than 60% of which are centered around conversational interactions like chatbots, virtual assistants, language translation, and customer support systems [15]. In particular, the meteoric rise of ChatGPT [35] spearheaded the growth of conversational AI services by attracting over 100 million users in just two months after its launch [29]. Conversational AI services, by nature, provide interactive conversations between the user and an AI agent. At its core, an LLM generates tokens one by one1 and streams them back to the user to be digested, be it as written text or speech. As 1LLMs process and generate text in units of tokens. For instance, the word \u201cstreaming\u201d may be broken down into two tokens: \u201cstream\u201d and \u201cing.\u201d Req 2 Req 1 Request 1 and 2 arrive Quality of Experience is a different story. Req 1 Request 1 and 2 arrive Quality of Experience is a different story. Throughput is not all you need. Throughput is not all you need. User 1 User 2 User 1 User 2 Req 2 Req 1 Req 2 Server Server TTFT TTFT (a) Existing LLM serving systems are oblivious of QoE. User 2 experiences a long wait time (TTFT) and therefore lower QoE. Req 2 Req 1 Request 1 and 2 arrive Quality of Experience is a different story. Req 1 Request 1 and 2 arrive Quality of Experience is a different story. Throughput is not all you need. Throughput is not all you need. User 1 User 2 User 1 User 2 Req 2 Req 1 Req 2 Server Server TTFT TTFT (b) A QoE-aware LLM serving system can schedule token gen- eration over time to enhance QoE. User 2\u2019s TTFT is drastically improved without affecting User 1\u2019s token delivery timeline. Figure 1. Server-side token generation timeline and user- side response digestion progress. Even if the server generates tokens very fast, users cannot digest them at such a pace. this token-by-token streaming nature is akin to the frame- by-frame streaming nature of video streaming services, we dub such services text streaming services. In this paper, we seek to characterize and enhance the Quality-of-Experience (QoE) of text streaming services (\u00a72.2). We realize that user interaction with LLM responses happens at moments when each new token is delivered (e.g., displayed or spoken) to the user over time. Thus, we define token deliv- ery timeline (TDT), a series of timestamps when each token was delivered to a user, to capture the user\u2019s interaction with the service for a single request. The ideal TDT a user expects from a text streaming service can vary significantly based on the type of the service and user demographics. 
For instance, a chat service that uses a text-to-speech model to read out the LLM\u2019s response to users (e.g., voice chat in ChatGPT, real-time speech translation) could be less stringent in terms of its minimum token delivery speed (TDS) compared to a chat service in raw text, because a user\u2019s speaking speed is often slower than their reading speed, but it may require smaller time to first token (TTFT) to better resemble real-life arXiv:2404.16283v1 [cs.DC] 25 Apr 2024 verbal conversations. The minimum TDS and TTFT together define the expected TDT of a request. Unfortunately, existing LLM serving systems [20, 25, 30, 50] are designed to optimize aggregated server-side perfor- mance metrics such as token generation throughput [25, 50], which are not necessarily aligned with optimizing the QoE of text streaming services (\u00a72.3). More importantly, by re- aligning the objectives of LLM serving systems towards QoE optimization, a QoE-aware serving system can utilize the same resources more effectively to manage a greater num- ber of concurrent requests while ensuring high QoE, thus reducing the cost per request. To illustrate, we compare ex- isting serving systems with a QoE-aware one, each with a serving capacity of 1, in Figure 1. In Figure 1a, due to the commonly adopted first-come-first-serve (FCFS) scheduling policy [25, 50, 52], User 2 experiences a long initial waiting time (TTFT). In contrast, in Figure 1b, a QoE-aware serving system schedules token generation in a manner that is aware of each user\u2019s reading speed, leading to a shorter wait time for User 2 without affecting User 1\u2019s interaction with the service. Although the average server-side token generation throughput or latency are the same for the two systems, over- all user experience is improved in the QoE-aware system. We attribute this to the na\u00efve FCFS scheduling policy in existing serving systems, which fails to account for the QoE requirements of individual requests and cannot efficiently utilize resources (\u00a72.4). Consequently, some users may expe- rience extended waiting time during their interaction with the service, especially when the system is under higher re- quest rate or is serving requests with longer context lengths. To preserve good user experience, the service provider must provision more compute resources proportional to the excess request load, leading to higher operational costs. Designing a QoE-aware LLM serving system, however, is challenging from both conceptual and practical perspectives. Defining the QoE metric to capture the user experience in text streaming services is non-trivial. It should encapsulate the continuous interaction process over time, accounting for factors like TTFT and TDS. Designing a QoE-aware serving system faces several systems challenges as well: (a) Dynamic and unpredictable resource demand: Re- quests arrive dynamically with varying expected TDT and prompt length and the number of output tokens is not known a priori, making it challenging to implement a one-size-fits-all scheduling strategy such as round-robin. (b) Constrained resource supply: The system has limited GPU memory and computation resources, restricting the number of concurrent in-flight requests. To meet the QoE requirements of individual requests, the system needs to make runtime decisions to allocate resources among requests, which may incur non-negligible overhead. To this end, we first propose a mathematical definition of QoE for text streaming services (\u00a73.1). 
Our QoE metric Age Group Reading Speed 18-24 (28.0%) 236 WPM 25-44 (51.9%) 200 WPM 45-54 (11.2%) 192 WPM 55-64 (5.6%) 185 WPM 65+ (3.3%) 175 WPM Table 1. Reading speed (Word Per Minute) by age group [10, 29]. Language Speaking Speed English (79.3%) 150 WPM Chinese (7.0%) 158 WPM Korean (6.9%) 150 WPM French (3.6%) 195 WPM Spanish (3.2%) 218 WPM Table 2. Speaking speed (Word Per Minute) by lan- guage [8, 29, 36]. compares the actual TDT of a request with its expected TDT, reflecting the user\u2019s experience throughout their entire in- teraction with the service. Then, we propose Andes, an LLM serving system that optimizes the overall QoE of text stream- ing services (\u00a74). Andes employs a dynamic priority-based preemptive scheduler that operates at the granularity of to- kens. Andes strategically allocates system resources to more urgent requests and preempts requests that have already received sufficient service, all to enhance QoE. By satisfying more requests with high QoE using the same amount of re- source, Andes eliminates the need for additional resource provisioning, thus reducing LLM serving cost. Andes also co- designs a client-side token buffer that temporarily withholds excess tokens and displays them to the user at their expected pace (\u00a75). This design ensures users experience smooth token delivery, oblivious to the intricacies of server-side scheduling or network fluctuations. We evaluate Andes using the OPT [51] family of mod- els, ranging from 13B to 175B parameters (\u00a76). Compared to vLLM [25], we find that Andes can manage 1.6\u00d7 higher request rate with high QoE, or alternatively, improve the average QoE by 3.2\u00d7 given the same amount of resource. Overall, we make the following contributions in this paper: 1. We identify an emerging category of LLM-based ap- plications (text streaming services) and define a QoE metric for them. 2. We propose Andes, a QoE-aware LLM serving system designed to optimize QoE for text streaming services. 3. We evaluate Andes under different workloads and se- tups and show that Andes significantly improves QoE with negligible system overhead.", + "main_content": "2.1 LLM Serving Systems LLM text generation using Transformer-based [47] models is characterized by autoregressive token generation and significant memory usage. First, the LLM generates tokens 2 Time #Tokens Req 1 Req 2 Req 3 Req 4 Expected TDT Figure 2. Four requests arrive at \ud835\udc61= 0. Requests 1 and 2 are equally satisfying. Requests 3 and 4 are frustrating, with request 4 being more so as it delivers fewer tokens earlier on, despite having the same TTFT and average token latency. sequentially, where the next token is conditioned on the previous tokens. Second, the LLM requires a large amount of memory to store intermediate data for each token in its input prompt and output response, known as KV cache [47]. As the number of tokens generated increases, so does the KV cache size. For instance, GPT-3 175B [9] requires 7 GB of GPU memory for a 1000-token request, limiting the number of requests that can be handled concurrently. 2.2 User Experience of Text Streaming Services Compared to traditional services that generate entire responses at once, text streaming services allow the user to start digesting the response as early as possible. The user experience includes two phases: Wait Phase. Users wait for the first token to arrive, known as the time-to-first-token (TTFT). 
For web applications, studies indicate that users expect an initial response to arrive within one second, with a significant 32% dropout rate if the response takes longer than three seconds [6]. Digest Phase. Following the first token, users enter the digest phase, which may last for tens of seconds or more [50], Hence, it is a common practice to stream tokens to the user on the fly so that they can start digesting the response as early as possible. The expected rate of token delivery, i.e., the Token Delivery Speed (TDS), depends on factors such as application type and user demographics. For example, reading speeds, measured in words per minute (WPM), differ across age groups (Table 1), while speaking speeds vary among languages (Table 2). By translating words to tokens using the average word-to-token ratio [38], we can estimate the average reading speed to 4.8 tokens/s and average speaking speed to 3.3 tokens/s. Intuition Behind QoE of Text Streaming Services. The expected TTFT and the expected TDS together define the expected token delivery timeline (TDT), represented by the black line in Figure 2. Similar to QoE in video streaming, a desired QoE metric should capture the gap between the actual TDT and the expected TDT. Intuitively, users are satisfied when the actual TDT is above the expected TDT, otherwise, they prefer to receive more tokens earlier on, as illustrated in 2 4 Request rate (req/s) 10 0 10 1 10 2 TTFT (s) Expected TTFT QoE-unaware QoE-aware (a) 90\ud835\udc61\u210e-p TTFT increases dramatically as the request rate surpasses the server\u2019s capacity. 2 3 4 5 Request rate (req/s) 0 5 10 TDS (tokens/s) Reading speed Speaking speed QoE-unaware QoE-aware (b) Token generation speed is much faster than the userexpected speed. Figure 3. System performance under different request rates. Figure 2. Therefore, the QoE should comprehensively measure the token delivery timeline throughout the entire user interaction, going beyond an aggregated number like TTFT or average token latency. We formally define such a QoE metric in Section 3.1. 2.3 Problems and Opportunities Existing LLM serving systems have primarily focused on optimizing aggregated server-side metrics, and often employ a first-come-first-serve (FCFS) scheduling approach without considering the user experience. In our experiment with ShareGPT [45] on OPT 66B [51] with 4 A100 GPUs, we notice that especially under high request rate, two issues arise: (1) certain users may encounter extended TTFT; (2) conversely, other users might receive tokens at a pace surpassing their digestion ability. Prolonged TTFT. As depicted in Figure 3a, the 90\ud835\udc61\u210epercentile TTFT increases dramatically as the server faces more bursty request rates, resulting in a longer queuing delay and degraded user experience. To accommodate such bursty request volumes, service providers often have to over-provision resources, such as by adding more GPUs, which significantly increases operational costs. Excessively High Token Generation Speed. Conversely, as shown in Figure 3b, we report the token generation speed under different request rates. The observed server-side token generation speed (\u22656.6 tokens/s) is much faster than the userexpected speed (3.3 or 4.8 tokens/s), as referenced in Table 1 and Table 2. This discrepancy indicates that the server often generates tokens faster than the user can consume them. While this might seem efficient from the server\u2019s perspective, it may overwhelm this user while starving others. Opportunities. 
We observe that there is an opportunity to optimize user experience by balancing prolonged TTFT and excessively fast token generation speed. By temporarily pausing the response generation for requests with already sufficient tokens generated, we can spare the limited GPU resources to other pending requests. The ratio between the expected token generation speed \ud835\udc47\ud835\udc37\ud835\udc46expected and the actual token generation speed \ud835\udc47\ud835\udc37\ud835\udc46actual 3 Response length Prompt length Memory usage = Request Spec Request ID 1 2 3 4 Prompt length 90 90 180 90 Response length 10 10 10 20 Expected TTFT (s) 1 1 2 2 Expected TDS 1.25 1.25 5 5 (tokens/s) Server memory capacity 1 2 3 4 1,2,3,4 FCFS 1 2 3 4 1 2 3 4 1,2,3,4 Round Robin 1 2 3 4 1 2 4 1,2,3,4 QoE-aware 10 20 #Token 0 2 4 6 8 Time 10 20 #Token 0 2 4 6 8 Time 0 2 4 6 8 Time Req 1 Req 2 Req 3 Req 4 Expected TDT Figure 4. Suboptimal user experience from QoE-unaware scheduling policies. In this illustrative toy example, we consider a server that can serve at most 200 tokens simultaneously due to memory constraints. We consider four requests with different prompt lengths, response lengths, as well as different expected TTFT and TDS values, arriving at time 0. The figure shows the serving order (first row) and the cumulative tokens delivered over time for each request (second and third rows). Colored lines represent actual TDT, while the black line indicates the expected TDT. An optimal QoE is achieved when the actual token delivery curve is completely left and/or above the expected token delivery curve. determines the slack for which a request can be preempted, allowing the system to accommodate more concurrent requests. Thus, with appropriate request preemption and restarting, we can serve \ud835\udc47\ud835\udc37\ud835\udc46actual \ud835\udc47\ud835\udc37\ud835\udc46expected \u00d7 concurrent requests than without request preemption, significantly improving user experience. In the example of text-based and voice-based chat services in Figure 3b, we could have increased the serving capacity by 6.6 4.8 = 1.38\u00d7 and 6.6 3.3 = 2\u00d7, respectively. Our evaluation shows that Andes can nearly achieve this theoretical improvement in practice. 2.4 Limitation of Existing Solutions Let us consider a toy example in Figure 4 to illustrate the limitations of existing QoE-unaware scheduling (FCFS used by vLLM [25] and Round Robin). Under FCFS scheduling, while requests 1, 2, and 3 are served immediately, request 4 suffers from longer TTFT due to queuing delays. Round Robin partially mitigates queuing delay using fair-sharing but still fails to align the token delivery in the later stage of the interaction, leading to suboptimal QoE. In contrast, the QoE-aware policy manages to meet the QoE requirements for all requests by prioritizing requests based on their QoE requirements and resource demand. It prioritizes requests with stringent TTFT requirements. Meanwhile, it monitors the resource demand of each request to prevent small requests from being starved of necessary resources. As the served requests accumulate enough tokens for the user to digest, the system upgrades the priority of request 3, which then requires more urgent servicing, and serves it. Finally, the system brings back requests 1, 2, and 4 to continue supplying tokens. In sum, when the server load is below its capacity, all requests can be served promptly and achieve perfect QoE without smart request scheduling. 
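Looping back to the headroom estimate above, this short snippet simply recomputes the quoted capacity multipliers from the measured generation speed and the expected digestion speeds; all numbers are taken from the text, and the helper name is ours.

```python
def capacity_multiplier(tds_actual: float, tds_expected: float) -> float:
    """How many more requests can be time-shared if tokens are generated
    no faster than users can digest them (Section 2.3)."""
    return tds_actual / tds_expected

observed_generation = 6.6                   # tokens/s measured on the server (Figure 3b)
reading_speed, speaking_speed = 4.8, 3.3    # tokens/s expected by users (Tables 1 and 2)

print(f"text chat:  {capacity_multiplier(observed_generation, reading_speed):.2f}x")   # ~1.38x
print(f"voice chat: {capacity_multiplier(observed_generation, speaking_speed):.2f}x")  # 2.00x
```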
However, when the server is operating at capacity due to unpredictable higher request loads, QoE-aware scheduling can significantly improve the user experience without over-provisioning resources. 3 Overview In this section, we first introduce a formal definition of Quality-of-Experience (QoE) for text streaming services (\u00a73.1). Then, we provide an overview of Andes, an LLM serving system that optimizes QoE of text streaming services (\u00a73.2). 3.1 Quality-of-Experience (QoE) in Text Streaming Text streaming services allow the developer to specify the expected token delivery timeline (TDT) in a request. We derive the QoE of a request by comparing its actual TDT with the expected TDT, considering the entire token delivery process. Informed by the distinctions between superior and inferior service depicted in Figure 2, the formulation of our QoE metric is guided by a set of principles that reflect user expectations and experiences throughout their interaction: 1. Perfect Satisfaction: Users are satisfied when the actual token delivery perfectly aligns with or exceeds the expected delivery, resulting in maximum QoE (QoE = 1). We normalize QoE \u2208[0, 1] for generality across applications. 2. Excess Token Delivery: At any given time, delivering tokens faster than the user\u2019s digest speed does not add 4 ) Perfect QoE (d) Pause in the middle Expected TDT Server generates User digests Sexpected Sactual Time #Tokens (a) TTFT missed. Time #Tokens (b) TDS missed. Time #Tokens (c) Perfect QoE. Time #Tokens (d) Pause in the middle. Figure 5. QoE example. The slope of the actual token delivery curve on the user side is capped by the expected TDS. value to the user experience, as the user cannot digest all tokens at once. So the QoE remains unchanged. 3. Early Token Delivery: Users prefer receiving more tokens earlier to start processing the response sooner. In scenarios where perfect satisfaction is not achieved, the QoE is higher for scenarios where more tokens are delivered earlier. For example, the QoE is worse for a longer TTFT with the same TDS, and similarly, the QoE is worse for a slower TDS with the same TTFT. Following these principles, we formalize the QoE metric by comparing two curves: (a) The expected token delivery curve \ud835\udc47(\ud835\udc61) that is defined by expected TTFT and TDS. Specifically, \ud835\udc47(\ud835\udc61) = \ud835\udc47\ud835\udc37\ud835\udc46expected\u00b7 (\ud835\udc61\u2212\ud835\udc47\ud835\udc47\ud835\udc39\ud835\udc47expected) represents the ideal timeline at which tokens should be delivered to the user (black lines in Figure 5). (b) The actual token delivery curve \ud835\udc34(\ud835\udc61) reflects the timeline of how tokens are digested by the user over time (black dotted lines in Figure 5), with its slope at any time capped by the expected TDS. To quantify the QoE of a request with response length \ud835\udc59, we measure the area under both curves up to the actual time to the last token (TTLT). We then define QoE as the ratio of the actual and expected areas, as shown in Figure 5: \ud835\udc44\ud835\udc5c\ud835\udc38= \ud835\udc46actual \ud835\udc46expected = \u222b\ud835\udc47\ud835\udc47\ud835\udc3f\ud835\udc47 0 \ud835\udc34(\ud835\udc61)\ud835\udc51\ud835\udc61 \u222b\ud835\udc47\ud835\udc47\ud835\udc3f\ud835\udc47 0 min(\ud835\udc47(\ud835\udc61),\ud835\udc59)\ud835\udc51\ud835\udc61 (1) This formulation focuses on the relative QoE relationship between services, but Andes allows the service provider to prioritize specific aspects. 
For example, to stress a shorter TTFT, the provider can add a penalizing term on the defined QoE as \ud835\udefc\ud835\udc47\ud835\udc47\ud835\udc39\ud835\udc47actual\u2212\ud835\udc47\ud835\udc47\ud835\udc39\ud835\udc47expected \u00b7 \ud835\udc46actual \ud835\udc46expected , where \ud835\udefc\u2208[0, 1]. In this paper, we will use the QoE definition in Equation 1 by default. Running Waiting Queue \u2026 \u2026 1 Request Client Server 4 5 Buffer Request Priority GPU Admit Evict Submit Request {Prompt: \u2019What is the probability that this paper will be accepted?\u2019, TTFT: 1s, TDS: 5 tokens/s} Token Context Length QoE Tracker 2 3 3 Worker 0 Worker 1 Worker W-1 Request Metadata Receive Token Figure 6. Andes Overview. 3.2 Andes Overview The workflow of Andes is shown in Figure 6. 1 The interaction begins with the user submitting a request to the server. The request comes with its QoE requirement, which is prespecified by the application developer. 2 Upon receiving the request, the QoE tracker assigns a scheduling priority and puts it in the waiting queue. 3 At each scheduling iteration, the QoE tracker refreshes the priorities of all requests, both in the waiting and running queues. Then Andes reschedules the requests based on their priorities by admitting high-priority waiting requests to GPU workers and evicting low-priority running requests back to the server. For these evicted requests, their states (e.g., KV cache) are stored in the request metadata store on CPU RAM for future retrieval. 4 During each inference iteration, each running request generates one token, which is then sent to the client. 5 As tokens are delivered to the client, a token buffer is responsible for storing excess tokens and displaying them at the expected speed, ensuring smooth token delivery. 4 QoE-Aware Scheduling In this section, we describe how Andes schedules token generation across multiple requests to maximize the total QoE. Section 4.1 formulates the scheduling problem as a Knapsack variant, and Section 4.2 introduces an efficient solution. 4.1 Problem Formulation The core of Andes is an online preemptive scheduling algorithm for token generation, which requires designing three elements: (1) How often to make scheduling decisions (time quantum), (2) which requests to serve (scheduling objective), and (3) how many requests to serve at a time (batch size). Time Quantum. At the beginning of each time quantum, the scheduler inspects both queued and running requests, and determines which ones to admit and preempt. Following the 5 continuous batching used in existing systems [25, 50], Andes invokes its scheduler at the beginning of each iteration. Scheduling Objective. Just like any other online serving system, it is impractical to perfectly plan execution into the future. Therefore, Andes serves the set of requests that maximizes the scheduling objective in the upcoming time frame of length \u0394\ud835\udc61. The parameter \u0394\ud835\udc61cannot be too short, as scheduling decisions will become shortsighted, or too long, as the actual system state would deviate too far from estimations. We find that setting it as the average request completion time is reasonable, and show in Section 6.5 that Andes is not sensitive to the setting of \u0394\ud835\udc61. Andes supports various scheduling objectives including max average QoE and max-min QoE by designing its scheduling objective function appropriately. 
For the sake of presentation, we will focus on maximizing average QoE here (See Appendix A for alternative objectives). The objective function for request \ud835\udc56is defined as: \ud835\udc44serve,\ud835\udc56\u2212\ud835\udc44wait,\ud835\udc56 (2) where \ud835\udc44serve,\ud835\udc56and \ud835\udc44wait,\ud835\udc56are the QoE of request \ud835\udc56after \u0394\ud835\udc61 if it is served and not served, respectively. In simple terms, Equation 2 is the amount of QoE gain when we decide to serve request \ud835\udc56compared to when it is not served, and we naturally want to serve more of the requests that give us large QoE gains when served. Batch Size. The number of requests picked to run in the upcoming quantum, or batch size, is limited by two factors. First, each token in a request\u2019s context (prompt plus all generated tokens) consumes one entry in the LLM serving system\u2019s KV cache [9], whose size is bounded by GPU memory. Thus, we have the following constraint: \ud835\udc41 \u2211\ufe01 \ud835\udc56=1 \ud835\udc59\ud835\udc56\ud835\udc65\ud835\udc56\u2264\ud835\udc40 (3) where there are \ud835\udc41requests in total (queued or running), \ud835\udc59\ud835\udc56 is request \ud835\udc56\u2019s context length, \ud835\udc65\ud835\udc56is an indicator variable that is 1 if request \ud835\udc56is served and 0 otherwise, and \ud835\udc40is the total number of tokens that can fit in GPU memory. Furthermore, Andes must take into account the latency to generate one token. That is, while a large batch size may increase server-side token generation throughput, the increase in the amount of compute will inflate the latency to generate one token from the perspective of each request, potentially hurting their QoE by delaying TTFT or failing to meet the expected TDS. On the other hand, a small batch size will be able to deliver tokens faster to each running request, but in turn more requests will not be served at all, again potentially hurting their QoE. Thus, the right intermediate batch size will have to be chosen in order to maximize average QoE. Knapsack Formulation. Putting these together, we observe that the problem setting resembles that of the classic knapsack problem [23]. The goal is to select items (requests) Time # Tokens Qserve(50) Qserve(30) Qserve(10) t Time # Tokens Qwait t Expected Actual Future Time # Tokens Qserve(50) Qserve(30) Qserve(10) t (a) \ud835\udc44serve, i(\ud835\udc35) Time # Tokens Qwait t (b) \ud835\udc44wait,\ud835\udc56 Figure 7. Visualization of \ud835\udc44serve, i(\ud835\udc35) and \ud835\udc44wait,\ud835\udc56. The former depends on batch size \ud835\udc35whereas the latter is a constant. With batch size 50, request \ud835\udc56no longer has perfect QoE. to put in a knapsack (GPU) so that total item value (QoE gain) is maximized and total weight (\ud835\udc59\ud835\udc56) does not exceed the knapsack\u2019s capacity (\ud835\udc40). However, our problem setting deviates from that of the classical knapsack because the value of each item depends on how many items there are in the knapsack. This is because, as noted above, the number of requests in the knapsack (batch size) affects token generation latency, which in turn means that \ud835\udc44serve,\ud835\udc56is actually a function of batch size \ud835\udc35.2 Figure 7 visualizes this. When \ud835\udc35is just 10 or 30, the request maintains perfect QoE by always running ahead. 
However, when \ud835\udc35is 50, the computation time of one iteration becomes longer and slows down token generation, degrading the request\u2019s QoE by failing to meet its TDS expectation. On the other hand, \ud835\udc44wait,\ud835\udc56does not depend on the batch size because it simply sits in the queue, waiting to be served. Thus, for a specific batch size \ud835\udc35, we would like to solve: max \ud835\udc65 \ud835\udc41 \u2211\ufe01 \ud835\udc56=1 \u0000\ud835\udc44serve,\ud835\udc56(\ud835\udc35) \u2212\ud835\udc44wait,\ud835\udc56 \u0001 \u00b7 \ud835\udc65\ud835\udc56 s.t. \ud835\udc65\ud835\udc56\u2208{0, 1}, \ud835\udc56\u22081, . . . , \ud835\udc41 \ud835\udc41 \u2211\ufe01 \ud835\udc56=1 \ud835\udc65\ud835\udc56= \ud835\udc35 \ud835\udc41 \u2211\ufe01 \ud835\udc56=1 \ud835\udc59\ud835\udc56\ud835\udc65\ud835\udc56\u2264\ud835\udc40 (4) where the optimization variable \ud835\udc65is a length \ud835\udc41array of \ud835\udc65\ud835\udc56s. The second constraint ensures that exactly \ud835\udc35many requests are chosen, whereas the final constraint ensures that the GPU memory capacity is not exceeded. Equation 4 should be solved for each possible batch size \ud835\udc35and the solution that yields the best objective value should be selected. 2More precisely, token generation latency is a function of batch size and the total number of tokens in the batch, but batch size and total number of tokens are nearly perfectly correlated, allowing us to eliminate the latter and only leave batch size. See Appendix B for more detailed analysis. 6 4.2 Solution Design In this section, we discuss the hardness of the problem formulated in the previous section in terms of algorithmic hardness and systems overhead. Then, we propose efficiency optimizations and a greedy algorithm that gives an approximate solution with low systems overhead. Algorithmic Hardness. As Andes must solve its optimization problem repetitively online to determine the set of requests to solve, an efficient algorithm is needed. However, Equation 4 is a variant of the knapsack problem called the Exact K-item Knapsack, which is weakly NP-Hard [23]. We give an optimal 3D dynamic programming solution to the problem that runs in pseudo-polynomial time \ud835\udc42(\ud835\udc40\u00b7 \ud835\udc412) in Appendix C. However, such an algorithm is also too slow in our case as the number of requests \ud835\udc41and the maximum number of tokens that can fit in memory \ud835\udc40are easily in the order of hundreds and thousands, respectively. Furthermore, we need to solve Equation 4 for each possible batch size \ud835\udc35\u2208[1, \ud835\udc41], which is clearly intractable. Preemption Overhead. When some requests that were running in the previous time quantum are not selected to run on the next, such requests are preempted. This is the core mechanism that reduces TTFT inflation from head-of-line blocking. For this, Andes supports two preemption mechanisms: swapping and recomputation. The former moves the request\u2019s KV cache entries between the GPU and CPU memory, whereas the latter drops all entries on preemption and recomputes them when the request restarts. If Andes runs out of host memory for storing KV cache, the preemption mechanism will automatically switch to recomputation. Preemption is not free \u2013 in general, the latency overhead of swapping is similar to one token generation iteration (See Appendix D for detailed benchmarking). 
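For reference, the objective and constraints of Equations 2-4 can be restated compactly as below, using the notation defined in the surrounding text; this is a reconstruction from that text rather than the paper's original typesetting.

```latex
% Scheduling objective for request i (Eq. 2): QoE gain over the next time frame
\[
  Q_{\text{serve},i} - Q_{\text{wait},i}
\]
% KV cache capacity constraint over all N requests (Eq. 3):
\[
  \sum_{i=1}^{N} l_i \, x_i \le M , \qquad x_i \in \{0, 1\}
\]
% Exact k-item knapsack solved for each candidate batch size B (Eq. 4):
\[
\begin{aligned}
  \max_{x} \quad & \sum_{i=1}^{N} \bigl(Q_{\text{serve},i}(B) - Q_{\text{wait},i}\bigr)\, x_i \\
  \text{s.t.} \quad & x_i \in \{0, 1\}, \quad i = 1, \dots, N, \\
                    & \sum_{i=1}^{N} x_i = B, \qquad \sum_{i=1}^{N} l_i\, x_i \le M .
\end{aligned}
\]
```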
Frequent preemption may slow down token generation and delay token delivery, potentially degrading request throughput and QoE. Therefore, our scheduling algorithm must make preemption decisions that strike a good balance between reaping QoE gains and causing slowdowns. Optimization #1: Selective Triggering. We observe that Equation 4 only needs to be solved when batch size is limited either by memory capacity or computation time. The former case can be detected easily by monitoring the KV cache occupancy and having a high-memory watermark (e.g., 90%). For the latter case, Andes monitors token generation latency and detects when it begins to exceed the most minimum token delivery speed requirement of the most stringent request. In all other cases, Andes does not trigger the optimization problem solver and serves every request. Optimization #2: Batch Size Search Space Pruning. In order to reduce the number of times Equation 4 needs to be solved, we reduce the search space of batch size \ud835\udc35from [1, \ud835\udc41] to [\ud835\udc35min, \ud835\udc35max]. First, there is no point in exploring very large Algorithm 1 Greedy packing algorithm for Equation 4 Inputs: Number of requests \ud835\udc41and KV cache capacity \ud835\udc40 Request context length array \ud835\udc59[\ud835\udc41] Request QoE gain array \ud835\udc5e[\ud835\udc41] Target batch size \ud835\udc35 Output: Solution array \ud835\udc65[\ud835\udc41] 1: Initialize priority array \ud835\udc5d[\ud835\udc41] with all zeros 2: for \ud835\udc56= 0 to \ud835\udc41\u22121 do 3: \ud835\udc5d[\ud835\udc56] = \ud835\udc5e[\ud835\udc56] \ud835\udc59[\ud835\udc56] \u22b2Priority of request \ud835\udc56 4: \ud835\udc40current = 0 5: \ud835\udc41current = 0 6: Initialize solution array \ud835\udc65[\ud835\udc41] with all zeros 7: for all \ud835\udc56\u2208[0, \ud835\udc41\u22121] in descending order of \ud835\udc5d[\ud835\udc56] do 8: if \ud835\udc40current + \ud835\udc59[\ud835\udc56] \u2264\ud835\udc40and \ud835\udc41current + 1 \u2264\ud835\udc35then 9: \ud835\udc65[\ud835\udc56] = 1 \u22b2Serve request \ud835\udc56 10: \ud835\udc40current = \ud835\udc40current + \ud835\udc59[\ud835\udc56] 11: \ud835\udc41current = \ud835\udc41current + 1 12: else 13: break 14: return \ud835\udc65 batch sizes that cannot be realized. Thus, \ud835\udc35max is determined by adding to the batch requests with the shortest context lengths until the total number of tokens in the batch reaches \ud835\udc40, at which point the batch size is the largest that can be realized. On the other hand, very small batch sizes that can generate tokens faster than the expected TDS of any request are also suboptimal. This is because going that fast does not increase the QoE of requests that are served, but on the other hand will serve a smaller number of requests, potentially degrading the QoE of requests that are left waiting. Thus, \ud835\udc35min is set as the largest batch size that generates tokens faster than the most stringent TDS among all requests. Optimization #3: Greedy Packing for Knapsack. A direct solution to the exact k-item knapsack problem in Equation 4 is computationally too heavy. Instead, Andes designs an efficient algorithm that computes each request\u2019s priority and greedily packs requests in that order. In designing the priority function, we have three goals: (a) Reflecting merit: Requests that yield high QoE gain and consume less resource should have high priority. 
(b) Preventing starvation: Requests should be automatically deprioritized as they receive service. (c) Reducing preemption: Selecting high priority requests should reduce the need for preemption. In light of these goals, request \ud835\udc56\u2019s priority is defined as: \ud835\udc44serve,\ud835\udc56(\ud835\udc35) \u2212\ud835\udc44wait,\ud835\udc56 \ud835\udc59\ud835\udc56 (5) This priority function meets our goals. (a) A higher QoE gain will increase the request\u2019s priority, but simultaneously discounted by the amount of GPU memory it will use. (b) As 7 a request receives service, its context length (\ud835\udc59\ud835\udc56) will increase, automatically deprioritizing itself. In contrast, requests will have higher QoE gain the more they wait, automatically boosting their priorities. (c) Finally, a request with long context length (\ud835\udc59\ud835\udc56) will be preempted first, freeing enough GPU memory to potentially bring in more than one waiting requests.3 This reduces the number of preemptions required to alleviate head-of-line blocking. The whole procedure is given in Algorithm 1. The greedy packing algorithm offers time complexity \ud835\udc42(\ud835\udc41log \ud835\udc41). We empirically show in Section 6.5 that this greedy solution can achieve performance comparable to the 3D DP algorithm while greatly reducing scheduling overhead. Optimization #4: Preemption Cap. We have discussed that preemption is not free and can potentially degrade QoE. However, we can empirically and theoretically show that Andes commonly does not result in excessive preemptions/thrashing that may cause average QoE to degrade. Empirically, Andes consistently maintains an average preemption frequency below 1 per request, even under a high server load (\u00a76.2.3). Theoretically, the number of preemptions needed to optimize the QoE of requests is contingent upon the excessive request load. Assume the serving system can handle \ud835\udc5f0 requests per second and the actual request rate is \ud835\udc58\u00b7 \ud835\udc5f0 requests per second, where \ud835\udc58\u22651. Thus, there would be (\ud835\udc58\u22121) \u00b7\ud835\udc5f0 requests whose QoE might be degraded due to the queuing delay. To mitigate this, we need roughly one preemption to accommodate each of these requests. Sometimes, a single preemption of a long request can allow multiple new requests to be served, which further reduces the number of preemptions needed. Therefore, the average preemption frequency needed is bounded by \ud835\udc58\u22121, which is small as long as the load is not excessively high. Nevertheless, in order to safeguard against thrashing that may happen in the worst case request pattern, Andes supports setting a cap \ud835\udc43on the average number of preemptions a request can experience throughout its lifetime. Too high a \ud835\udc43will not be able to act as a safeguard, whereas too small a \ud835\udc43will prevent even absolutely necessary preemptions from happening. We find that setting \ud835\udc43= 1, i.e., a request on average experiences at most one preemption during its lifetime, is a good default (Section 6.5). 5 Implementation The two core elements of Andes are its QoE-aware scheduler and a client-side token buffer. Server-Side QoE-Aware Scheduler. Andes\u2019s scheduling algorithm can work with any LLM serving system that supports continuous batching and at least one preemption mechanism (swapping or recomputation). 
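As a concrete reference for the scheduler described above, the following is a runnable Python sketch of the greedy packing procedure of Algorithm 1 together with the priority function of Equation 5. Variable names mirror the paper's notation; the per-request QoE-gain terms are assumed to be computed elsewhere (for example, along the lines of the QoE sketch following Equation 1).

```python
def greedy_pack(qoe_gain, ctx_len, B, M):
    """Greedy solution to the exact-B-item knapsack of Eq. 4.

    qoe_gain : list of Q_serve,i(B) - Q_wait,i values, one per request
    ctx_len  : list of context lengths l_i (tokens held in the KV cache)
    B        : target batch size
    M        : total number of tokens that fit in GPU memory
    Returns a 0/1 selection list x of length N.
    """
    N = len(qoe_gain)
    # Priority of request i (Eq. 5): QoE gain discounted by its memory footprint.
    priority = [qoe_gain[i] / ctx_len[i] for i in range(N)]
    order = sorted(range(N), key=lambda i: priority[i], reverse=True)

    x = [0] * N
    used_tokens, used_slots = 0, 0
    for i in order:
        if used_tokens + ctx_len[i] <= M and used_slots + 1 <= B:
            x[i] = 1                      # serve request i
            used_tokens += ctx_len[i]
            used_slots += 1
        else:
            break                          # as in Algorithm 1, stop at the first miss
    return x
```

As in Algorithm 1, the loop stops at the first request that does not fit, keeping each scheduler invocation at O(N log N).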
We note that an LLM 3The overhead of preemption depends on how much memory was freed, not the number of requests. Therefore, for the same amount of memory freed from preemption, it\u2019s better to free a smaller number of requests. 0 50 100 150 200 250 #Tokens Generation Pause Network Fluctuation 0 10 20 30 40 50 Time (s) 0 100 #Tokens in buffer Client receives User digests Figure 8. The client-side token buffer holds excess tokens sent from the server to absorb token generation fluctuations and paces token delivery based on the user\u2019s expected TDS. serving system that implements Paged Attention [25] is likely to also support at least one preemption mechanism to prevent the system from running out of memory. As a reference, we implemented Andes\u2019s scheduling algorithm on top of vLLM [25]. The scheduler only manages requests coming into the vLLM instance it is integrated with, assuming that cluster-level load balancing and fault tolerance are done separately. Client-Side Token Buffer. The server sends tokens to the buffer as soon as they are generated, even if they were generated at a pace that exceeds the user\u2019s expected TDS. Then, the token buffer smooths out the token delivery timeline to pace tokens at the user\u2019s expected TDS. The token buffer can also naturally smooth out some fluctuations in network latency, for instance in crowded mobile networks. The buffer should be implemented appropriately depending on the destination of streaming \u2013 e.g., TypeScript for web frontend, Python for API use. Figure 8 visualizes the token buffer in action. With an initial burst generation faster than the user\u2019s expected TDS, the buffer withholds excess tokens and paces token delivery, thus growing in size. The server is fully aware of the token buffer, and preempts the request to serve other requests. During this time, the buffer drains at a rate that matches the user\u2019s expected TDS. Finally, the server brings back the request and starts generating tokens again, and together with the token buffer, perfect QoE was achieved. 6 Evaluation We evaluate the performance of Andes under different workloads. We demonstrate that: 1. Andes improves the average QoE up to 3.2\u00d7 when the system experiences high/bursty load (\u00a76.2.1). 8 Model size 13B 30B 66B 175B GPUs A100 4\u00d7A100 4\u00d7A100 4\u00d7A100 GPU Memory 80 GB 320 GB 320 GB 320 GB Precision FP16 FP16 FP16 8-bit [14] Model Memory 26 GB 60 GB 132 GB 180 GB Table 3. OPT model family and GPU specifications used. 2. Andes can handle up to 1.6\u00d7 higher request rates while preserving high QoE without additional resources, significantly reducing the serving cost(\u00a76.2.2). 3. Andes maintains similar token generation throughput as the baseline, with a minor drop (\u226410%) in throughput as the request rate increases (\u00a76.2.3). 4. Andes significantly improves TTFT, while maintaining TDS above user expected speed (\u00a76.3). 5. Andes outperforms the baselines across different workloads (\u00a76.4) and setups (\u00a76.5). 6.1 Experiment Setup Model and Server Configurations. Following state-ofthe-art LLM serving systems [25], we evaluate Andes using the OPT [51] series with 13B, 30B, 66B, and 175B parameters, with the 175B model employing INT8 quantization. We run all experiments on NVIDIA A100 GPUs in Chameleon Cloud [22], and use tensor parallelism to deploy the models, using the default configuration in vLLM [25]. We use swap as the preemption mechanism and set the CPU swap space to 240 GB in total. 
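Stepping back to the client-side token buffer described in Section 5, the sketch below shows its pacing behavior: tokens are accepted as fast as the server streams them and released to the display at the user's expected TDS. The class and method names are illustrative and are not taken from the Andes implementation.

```python
import queue
import threading
import time

class TokenBuffer:
    """Holds tokens that arrive faster than the user's expected TDS and
    releases them at a steady pace (tokens per second)."""

    def __init__(self, tds_expected, render):
        self.tds = tds_expected          # expected token delivery speed
        self.buf = queue.Queue()
        self.render = render             # callback that displays one token
        threading.Thread(target=self._drain, daemon=True).start()

    def push(self, token):
        """Called whenever the server streams a token to the client."""
        self.buf.put(token)

    def _drain(self):
        interval = 1.0 / self.tds
        while True:
            token = self.buf.get()       # blocks while the buffer is empty
            self.render(token)
            time.sleep(interval)         # pace delivery at the expected TDS
```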
Detailed hardware specifications are provided in Table 3. Workloads. We experiment on ShareGPT [45], a dataset that gathers conversations shared by users with ChatGPT [35], including multiple rounds of input prompt and output response. By concatenating multiple rounds of conversations into one input while limiting its length to 1k tokens to fit the model\u2019s maximum context length, and setting the final response as the output, we create the Multi-Round ShareGPT dataset for longer conversations. As shown in Figure 9, MultiRound-ShareGPT has about 3\u00d7 longer input than ShareGPT, while both datasets have similar output length distribution. We generate request arrival traces using Poisson distribution with different arrival rates. The request\u2019s QoE requirement trace is created with different expected TTFT and TDS. TTFT is set to 1 second for all, while TDS is based on user reading speeds (Table 1), and is translated from words to tokens using the average word-to-token ratio for ChatGPT [38]. In real applications, QoE requirements should be set depending on the application\u2019s specific use case. For instance, reading speed (and thus expected TDS) may be measured using screen scrolling [18] or eye-tracking [3, 34]. Another potential use case is to introduce API price tiering, 0 500 1000 1500 2000 #Tokens 0 200 400 Density Input (mean: 174.55) Output (mean: 314.22) (a) ShareGPT. 0 200 400 600 800 1000 #Tokens 0 200 400 600 Density Input (mean: 624.22) Output (mean: 365.52) (b) Multi-Round ShareGPT. Figure 9. Input and output length distributions of datasets. where a higher per-token price provides faster TDS, and API users can select the tier suitable for downstream digestion. Baselines. We compare Andes with vLLM (version 0.2.7). vLLM uses first-come-first-serve (FCFS) scheduling policy by default. We implement another scheduling policy, RoundRobin (RR), atop vLLM for more informed comparison, which is designed to guarantee equal service to requests through cyclic request preemption. For RR, we set the service interval to 50 inference iterations, maximizing its QoE performance. Metrics. We focus on the following metrics in evaluations: \u2022 Average QoE: We set the threshold to 0.9 as the minimum acceptable average QoE. The QoE of 0.9 corresponds to a 5% delay in TTFT, a 10% slowdown in TDS, or something in the middle. \u2022 System capacity: It measures the maximum request rate that the system can handle while maintaining an average QoE above the threshold. \u2022 System throughput: It measures how many tokens the system generates per second. We also report normalized latency, which is used by vLLM[25] and Orca[50], in Appendix E. 6.2 End-to-End Experiments In this section, we report the performance of Andes in terms of average QoE (\u00a76.2.1), system capacity (\u00a76.2.2), and system throughput (\u00a76.2.3) under different setups. 6.2.1 Improvement on Average QoE. We evaluate the performance of Andes on all four models and two datasets. Figure 10 and Figure 11 show the result on the ShareGPT dataset and Multi-Round ShareGPT dataset respectively. As the request rate increases, Andes maintains a high average QoE, outperforming the baseline whose average QoE sharply decreases. In other words, Andes can serve more concurrent requests without compromising user experience. For ShareGPT dataset, Andes increases average QoE up to 3.1\u00d7 at the same request rate, while maintaining an average QoE of 0.9, all with the same resources. 
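The workload construction described in the experiment setup above, Poisson arrivals plus a per-request QoE requirement of a 1-second expected TTFT and a reading-speed-derived expected TDS, can be sketched as follows. The numeric TDS values are placeholders, since the actual values come from the cited reading-speed table and word-to-token ratio.

```python
import random

def make_trace(num_requests, rate, tds_choices, ttft=1.0, seed=0):
    """Generate (arrival_time, ttft_expected, tds_expected) tuples.

    rate        : average request arrival rate (requests/s), Poisson process
    tds_choices : candidate expected TDS values (tokens/s), e.g. derived
                  from user reading speeds via a word-to-token ratio
    """
    rng = random.Random(seed)
    t, trace = 0.0, []
    for _ in range(num_requests):
        t += rng.expovariate(rate)            # exponential inter-arrival gaps
        trace.append((t, ttft, rng.choice(tds_choices)))
    return trace

# Example: 1000 requests at 3.3 req/s with illustrative TDS values.
trace = make_trace(1000, rate=3.3, tds_choices=[4.0, 5.0, 6.5])
```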
For Multi-Round ShareGPT dataset, Andes improves average QoE up to 3.2\u00d7. For OPT-30B model, the improvement is less significant, as the model is less resource-constrained when compared to the OPT-66B model. [Figure 10. Average QoE for different request rates using the ShareGPT dataset; panels (a) OPT-13B, (b) OPT-30B, (c) OPT-66B, (d) OPT-175B, comparing RR, vLLM, and Andes.] [Figure 11. Average QoE for different request rates using the Multi-Round ShareGPT dataset; panels (a) OPT-13B, (b) OPT-30B, (c) OPT-66B, (d) OPT-175B, comparing RR, vLLM, and Andes.] These improvements can be attributed to Andes\u2019s QoE-aware scheduling policy, which dynamically prioritizes resources for urgent requests that risk falling below their expected QoE, preempting those that have been sufficiently served. In contrast, under higher load, traditional FCFS scheduling policy suffers from head-of-line blocking, leading to significant queuing delay. Although the RR policy mitigates head-of-line blocking by preemptions, frequent preemptions introduce significant overhead and degrade the average QoE. 6.2.2 Improvement on Server Capacity. As shown in Figures 10 and 11, the horizontal dotted lines represent the average QoE threshold of 0.9. For ShareGPT dataset, Andes can manage 1.2\u00d7\u22121.6\u00d7 higher request rate than vLLM while maintaining an average QoE above the threshold. Specifically, for the OPT-66B model, Andes can handle 1.25\u00d7 higher request rate than vLLM, nearing the 1.38\u00d7 theoretical improvement suggested in Section 2.3, showcasing Andes\u2019s ability to optimize resource allocation and average QoE effectively. For Multi-Round ShareGPT dataset, Andes can serve 1.1\u00d7\u22121.3\u00d7 higher request rate. Additionally, by serving higher request rates with the same resources, Andes effectively reduces the resource cost per request. 6.2.3 Impact of Andes on System Throughput. We report the token generation throughput and the preemption frequency of Andes on OPT-66B with both datasets, as shown in Figure 12 and Figure 13. In both datasets, Andes maintains the same token throughput as vLLM when the request rate is moderate, and experiences a minor drop (\u226410%) in throughput as the request rate increases. This demonstrates that Andes marginally impacts system throughput. [Figure 12. Token generation throughput with OPT-66B under different request arrival rates; panels (a) ShareGPT, (b) Multi-Round ShareGPT.] The throughput decrease can be attributed to the overheads introduced by request preemption.
Despite the active request scheduling, the frequency of preemptions per request remains low (\u22640.5) under reasonable average QoE as shown in Figure 13, minimizing the impact of overheads on throughput; Despite the minor decrease in throughput, the up to 60% improvement in server capacity offered by Andes can compensate for this, effectively reducing the resource cost per request while maintaining a satisfactory user experience. 6.3 Breakdown Analysis To understand Andes\u2019s performance in detail, we conducted a breakdown analysis focusing on QoE, time to first token (TTFT), and token delivery speed (TDS), as shown in Table 4. We report Andes\u2019s performance on OPT-66B and ShareGPT dataset with a request rate of 3.3, where Andes achieved an average QoE of 0.92. With these breakdown analyses, we can 10 3 4 5 Request rate (req/s) 0.0 0.5 1.0 Avg preemption frequency Andes (a) ShareGPT. 1.5 2.0 2.5 Request rate (req/s) 0.0 0.5 1.0 Avg preemption frequency Andes (b) Multi-Round ShareGPT. Figure 13. Preemption frequency with OPT-66B under different request arrival rates. Metric Percentile Method vLLM Andes 10\ud835\udc61\u210e 0.05 0.77 50\ud835\udc61\u210e 0.39 1.00 QoE 90\ud835\udc61\u210e 1.00 1.00 10\ud835\udc61\u210e 0.33 0.35 50\ud835\udc61\u210e 56.73 0.47 TTFT (s) 90\ud835\udc61\u210e 144.95 0.66 10\ud835\udc61\u210e 6.05 5.32 50\ud835\udc61\u210e 6.45 5.44 TDS (tokens/s) 90\ud835\udc61\u210e 7.84 7.02 Table 4. Andes significantly improves QoE and TTFT, while maintaining TDS above user expected speed. provide granular insights into individual user satisfaction under this level of QoE. QoE distribution. Andes significantly improves the lower and median user experiences, with the 10th percentile rising from 0.05 to 0.77 and the 50th percentile achieving a perfect score of 1, compared to 0.39 in vLLM. In order to understand how Andes handles requests with different request lengths, we present a scatter plot of QoE across different total lengths as shown in Figure 14. We observe Andes slightly starves a small fraction of longer requests, as they consume more resources or take longer time to complete. In contrast, FCFS starves lots of shorter requests that are blocked by longer requests. Token delivery timeline. Andes greatly enhances initial responsiveness, reducing median TTFT from 56.73 seconds in vLLM to just 0.47 seconds, and similarly improving the 90th percentile from 144.95 seconds to 0.66 seconds. This improved performance is attributed to Andes\u2019s QoE-aware scheduling, which effectively mitigates head-of-line blocking and reduces queuing delays. Additionally, we analyze the percentile distribution of the average TDS observed by users, excluding TTFT. While Andes slightly slows the average TDS, it remains above the user\u2019s expected speed, ensuring balanced delivery that neither overwhelms nor starves users. 0 1000 2000 Total Length 0 1 QoE (a) vLLM. 0 1000 2000 Total Length 0 1 QoE (b) Andes. Figure 14. QoE distribution across different total lengths. 6.4 Robustness to Diverse Workloads We evaluate the robustness of Andes under diverse settings including different hardware, arrival patterns, and QoE traces. We observed similar trends in diverse settings; therefore, we report our results with OPT-66B and ShareGPT. Hardware. We evaluate Andes on the NVIDIA A40 GPU with 46 GB RAM, as shown in Figure 15a. Andes improves average QoE up to 7\u00d7 under a higher request rate and serves 1.1\u00d7 higher request rate while maintaining an average QoE of 0.9. 
The reason for the smaller improvement on server capacity is that the A40 has a lower computational capability than the A100, leading to a slower average token generation speed. Consequently, the gap between the expected TDS and actual TDS on the A40 is smaller than on the A100, providing less opportunity for request scheduling and improving average QoE. However, as newer generations of GPUs are becoming more powerful in terms of computational capability, the potential improvement of Andes will be more significant. Bursty Arrival Process. We use a Gamma arrival process with the same request rate and a coefficient of variation of 3 to simulate the burst arrival of user requests. Figure 15b indicates that under bursty workload, the average QoE for FCFS policy begins to decrease at a lower request rate compared to the Poisson arrival, due to increased queuing delays. In contrast, Andes sustains a high average QoE, achieving up to a 2.7\u00d7 improvement on average QoE at the same request rate and serves 1.3\u00d7 higher request rate, showing Andes\u2019s adaptability to bursty workload. Different QoE Traces. Due to the unique QoE requirements of different applications, we evaluate Andes\u2019s performance under a voice chat QoE trace, with expected TTFT at 1 second and slower expected TDS adjusted according to the speaking speed outlined in Table 2. As shown in Figure 15c, both Andes and baseline achieve better average QoE even on higher request rates, attributed to the less strict TDS requirements. Nevertheless, Andes improves average QoE up to 1.25\u00d7 and manages 2\u00d7 request rate, which approaches the theoretical maximum improvement of 2\u00d7 as discussed in Section 2.3. 6.5 Sensitivity Analysis All experiments in sensitivity analysis are conducted on OPT66B with the ShareGPT dataset and a request rate of 3.3. 11 1.4 1.6 1.8 2.0 2.2 Request rate (req/s) 0.00 0.25 0.50 0.75 1.00 Avg QoE RR vLLM Andes 0.4 0.5 0.6 Request rate (req/s) 0.0 0.5 1.0 Avg QoE (a) NVIDIA A40. 3 4 5 Request rate (req/s) 0.0 0.5 1.0 Avg QoE (b) Burst request arrival. 5 10 Request rate (req/s) 0.0 0.5 1.0 Avg QoE (c) Voice chat QoE trace. Figure 15. Robustness analysis on OPT-66B with ShareGPT dataset. 0.0 0.5 1.0 1.5 Preemption frequency cap p 0.5 1.0 Avg QoE vLLM Sedna 0.0 0.5 1.0 1.5 Preemption frequency cap P 0.5 1.0 Avg QoE (a) Average QoE. 0.0 0.5 1.0 1.5 Preemption frequency cap P 0 50 Throughput (tokens/s) (b) Throughput. Figure 16. Tuning preemption frequency cap \ud835\udc43. 0 50 100 150 t 0.4 0.6 0.8 1.0 Avg QoE vLLM Andes Figure 17. Tuning \u0394\ud835\udc61. 3 4 5 Request rate (req/s) 0.0 0.5 1.0 Avg QoE vLLM Andes w/ greedy Andes w/ DP Figure 18. Different solver. Preemption Frequency Cap \ud835\udc43. Increasing preemption frequency cap \ud835\udc43can lead to finer-grained scheduling, potentially enhancing average QoE, but at the cost of increased overhead and reduced throughput. Figure 16a shows the average QoE under different \ud835\udc43. Improvements in QoE are observed as \ud835\udc43increases up to 0.4 preemptions per request, stabilizing beyond this point. Conversely, Figure 16b illustrates a slight decrease in system throughput with increased \ud835\udc43, stabilizing beyond 0.4 preemption per request. These observations suggest a trade-off between average QoE and system throughput, indicating the current setting of \ud835\udc43nearly optimizes QoE while maintaining satisfactory throughput. Prediction Timeframe \u0394\ud835\udc61. 
We evaluate how different \u0394\ud835\udc61 influences average QoE to understand its effect on system performance. Figure 17 illustrates that the average QoE remains roughly consistent for \u0394\ud835\udc61values greater than 50, and significantly outperforms the baselines, indicating that Andes is not sensitive to the setting of \u0394\ud835\udc61. Different Knapsack Solution. We compare the performance of Andes with different knapsack solutions between greedy and dynamic programming (DP). Figure 18 shows that the greedy consistently surpasses the DP solution, while both solutions outperform the baselines. The lower performance of the DP is due to its substantial computational overhead, which delays the inference process and degrades the average QoE. This suggests that the greedy approach is a more practical and efficient solution for Andes. 7 Related Work General Model Serving Systems. A variety of model serving systems have emerged, ranging from general-purpose, production-level frameworks like TensorFlow Serving [33] and NVIDIA Triton [31] to specialized systems such as Clipper [11], which sets application-level SLOs. Recent systems including Nexus[42], DeepRecSys [17], Clockwork [16], INFaaS [40], SuperServe [24] and AlpaServe [26] have introduced features like serving pipelines, hardware platform diversity, advanced scheduling, dynamic model selection, and model parallelism to boost resource efficiency. However, these general systems neglect the unique characteristics of LLM inference, leaving potential avenues for optimization. LLM Serving Systems. Numerous model serving systems are proposed to address the unique challenges of LLMs. Orca [50] introduced an iteration-level scheduling policy to enhance the throughput of batching inference, and vLLM [25] developed a PagedAttention to reduce the memory usage of LLMs. Splitwise [37], DistServe [52], TetriInfer [19] and Sarathi-Serve [1, 2] optimize the computation of prefill and decode phases through disaggregating or merging them. Some other systems focus on GPU kernel optimization and kernel fusion[5, 12, 32], model parallelism [5, 39], batching algorithm [13, 43, 50], KV-cache management [27, 28, 44] and parameter-sharing [53]. However, these systems focus on optimizing aggregated server-side performance and simply adopt a FCFS scheduling policy, which fail to address the queuing delay problem under higher request load. Finally, shortest remaining processing time [41] is a preemptive scheduling policy, but it does not consider the QoE of individual requests and requires knowledge of the response length of requests. To the best of our knowledge, Andes is the first to define and optimize QoE of text streaming services. 12 Video Streaming and QoE. The concept of text streaming draws inspiration from video streaming but encounters unique challenges and has a different QoE definition. While video streaming services are primarily limited by network bandwidth and latency [7], text streaming services are mainly constrained on computational resources [48]. Additionally, the QoE in video streaming is often measured by metrics like buffering ratio, resolution stability, and playback smoothness [7], while the QoE in text streaming primarily considers the token delivery timelines (TDT). 8 Conclusion In this paper, we define and optimize the Quality-of-Experience (QoE) for text streaming services, a critical aspect often overlooked by existing serving systems. 
We propose a QoE-aware LLM serving system, Andes, which is able to serve more concurrent requests while meeting their QoE requirements, significantly reducing the cost per request. We demonstrate the effectiveness of Andes through extensive experiments on various real-world datasets and LLMs, showing that Andes can handle up to 1.6\u00d7 higher request rate while preserving high QoE, or enhance QoE by up to 3.2\u00d7 without additional resource expenditure." +} \ No newline at end of file diff --git a/intro_8K/test_introduction_short_2404.16294v1.json b/intro_8K/test_introduction_short_2404.16294v1.json new file mode 100644 index 0000000000000000000000000000000000000000..44f807c88c1c9c6689d5a37a87cfcce776660cfd --- /dev/null +++ b/intro_8K/test_introduction_short_2404.16294v1.json @@ -0,0 +1,17 @@ +{ + "url": "http://arxiv.org/abs/2404.16294v1", + "title": "LLM-Based Section Identifiers Excel on Open Source but Stumble in Real World Applications", + "abstract": "Electronic health records (EHR) even though a boon for healthcare\npractitioners, are growing convoluted and longer every day. Sifting around\nthese lengthy EHRs is taxing and becomes a cumbersome part of physician-patient\ninteraction. Several approaches have been proposed to help alleviate this\nprevalent issue either via summarization or sectioning, however, only a few\napproaches have truly been helpful in the past. With the rise of automated\nmethods, machine learning (ML) has shown promise in solving the task of\nidentifying relevant sections in EHR. However, most ML methods rely on labeled\ndata which is difficult to get in healthcare. Large language models (LLMs) on\nthe other hand, have performed impressive feats in natural language processing\n(NLP), that too in a zero-shot manner, i.e. without any labeled data. To that\nend, we propose using LLMs to identify relevant section headers. We find that\nGPT-4 can effectively solve the task on both zero and few-shot settings as well\nas segment dramatically better than state-of-the-art methods. Additionally, we\nalso annotate a much harder real world dataset and find that GPT-4 struggles to\nperform well, alluding to further research and harder benchmarks.", + "authors": "Saranya Krishnamoorthy, Ayush Singh, Shabnam Tafreshi", + "published": "2024-04-25", + "updated": "2024-04-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "Modern day healthcare systems are increasingly moving towards large scale adoption of maintain- ing electronic health records (EHR) of patients (Congress, 2009). EHRs help healthcare practi- tioners with relevant information about a patient such as history, medications, etc. However, in re- cent times this practice has led to very long and convoluted EHRs (Rule et al., 2021). Naturally, the need for better information retrieval tools emerged due to the progressively lengthy and unstructured doctor notes. One such need is the accurate identi- fication of sections in an EHR, pertinent to a physi- cian\u2019s inquiry. For instance, a question like \u201cWhat Figure 1: Sample real world obscure image of an outpa- tient paper-based patient encounter form comprising of numerous sections (Hersh and Hoyt, 2018). 
treatments has the patient undergone in the past?\u201d concerning prior treatments administered to a pa- tient necessitates the swift extraction of informa- tion from the \u201ctreatments\u201d and \u201cpast medical his- tory\u201d sections, while excluding sections related to \u201cancestral medical history\u201d. This swift extraction is vital for timely decision-making in patient care. Additionally, during critical procedures such as the evaluation of medical necessity for prior authoriza- tion requests, it is customary for experienced clin- icians to locate vital data within specific sections. An illustrative case entails examining the \u201cphysical exam\u201d section to identify particular findings, such as signs of neurological disorders or movement- associated pain, indicating the need for additional diagnostic tests. The timely identification of such information is of utmost importance in ensuring the provision of appropriate care and reducing the risk of potential complications. arXiv:2404.16294v1 [cs.CL] 25 Apr 2024 In general, regions found in EHR would often have a section heading preceding the body of the section, as can be seen in example Table 1. Even though these section types have limited cardinal- ity, however, more often than not, physicians would fail to adhere to standards and use lexical variations generated on the fly. Moreover, practitioners not only will generate lexical variations of sections on the fly but also completely new sections altogether for valid reasons like imaging reports, etc. Apart from these variations, oftentimes there would be no headers at all, even though the information present could ideally be part of a pre-existing section in a document or a new section altogether. While studies like Gao et al. (2022) utilize the Subjective, Objective, Assessment and Plan heading (SOAP) framework, real-world clinical notes often contain sections beyond these categories. This limitation is further emphasized in Landes et al. (2022), war- ranting further investigation and analysis. The aforementioned factors have consequently contributed to the establishment of Section Iden- tification (SI) as a distinct and enduring prob- lem within the academic discourse (McKnight and Srinivasan, 2003), making it an indispensable com- ponent of any clinical natural language processing (NLP) pipeline. A SI task entails finding regions of text that are semantically related to an aspect of a patient\u2019s medical profile. More importantly, it helps to improve pre-existing information retrieval systems by enabling them to be more targeted and specific. Lastly, in light of recent findings of the negative impact of note bloat within EHRs on even the most sophisticated systems (Liu et al., 2022), using SI to shorten or create from EHR, a sub-EHR specific to a given task would prove to be a worth- while effort for humans and machines both. Because finding sections and hence their cor- responding headers involves inherent variability, machine learning (ML) methods have played an important role in this natural language processing (Pomares-Quimbaya et al., 2019). ML has increas- ingly been shown to be efficient in finding rele- vant sections within a document, however, a key drawback of traditional ML methods has been the dependence on labeled data (Tepper et al., 2012). Reliance on annotated data for training ML models to be able to predict the beginning and end of sec- tion headers has stalled the field from fully solving the task. 
The emergence of large language mod- els (LLMs) in contemporary research presents a promising avenue to overcome the limitations in- herent in traditional machine learning approaches, thereby expanding the scope of their applications. LLMs have emerged as the de-facto system for NLP in scenarios where data is scarce (OpenAI, 2023). The key distinction between traditional Ma- chine Learning (ML) models and Large Language Models (LLMs) lies in their ability to understand tasks in natural language. While traditional ML models require labeled data for training, LLMs can leverage pre-training on vast amounts of unstruc- tured text data, enabling them to perform tasks with minimal task-specific fine-tuning. This makes ML possible in an unsupervised manner (no need for labeled data) and therefore opens room for appli- cations in domains where annotated data is hard to acquire like healthcare. While LLMs have been evaluated on a wide array of NLP tasks in health- care (Nori et al., 2023), they are yet to be evaluated on their effectiveness in segmenting a document into semantically relevant sections. In this work, we address this gap and evaluate the efficacy of our approach on a widely-known datasets in the clinical medical domain. Findings show that GPT-4 (OpenAI, 2023) almost solved the section identification problem on the benchmark open-sourced dataset, however, on a private dataset the performance lags. Our contributions are three- fold, listed as follows: 1. We show that GPT-4 can generate zero-shot headings of records with very high accuracy. 2. Contrary to the above, we find that its perfor- mance drops on internal real-world datasets. 3. An ontology of numerous section headers seen in real world EHR systems is shared which has much higher coverage.", + "main_content": "Traditionally, SI task has been done using a pre-defined dictionary of plausible candidates. Pomares-Quimbaya et al. (2019) performed a comprehensive survey and found that rule-based methods still dominated the array of methods proposed while ML systems increasingly achieved better coverage when combined in a hybrid manner with rulebased methods. McKnight and Srinivasan (2003) later on extracted bag-of-words from MedLINE abstracts and used a support vector machine to train a classifier to categorize sentences into either Introduction, Method, Result, or Conclusion, demonstrating promising results. Similarly, Hirohata et al. Allergies Allergies: Patient recorded as having No Known Allergies to Drugs... History of Present Illness HPI: 61M w/ incidental L renal mass found during W/U for brachytherapy for low-grade [**Last Name (STitle) **], now w/ gradually worsening gross hematuria for the past several days. Labs Imaging Pertinent Results: [**2160-4-10**] 07:30AM BLOOD WBC-12.6* RBC-3.20* Hgb-8.2* Hct-24.5* MCV-77* MCH-25.6* MCHC-33.4 RDW-17.1* Plt Ct-438. Hospital Course Brief Hospital Course: 61M w/ low-grade [**Month/Day/Year **] awaiting brachytherapy and locallyadvanced L renal mass w/ collecting system invasion, renal vein thrombus, and likely metastases, presented w/gradually worsening gross hematuria. Table 1: This figure illustrates a sample data point from the MIMIC-III database, highlighting the sections annotated with MedSecID corpus. (2008) achieved very high accuracy by using conditional random fields to label scientific abstracts into Objectives, Methods, Results, and Conclusions. Over time and with the inclusion of ML, the field re-framed this problem as one of span-level entity identification i.e. 
the system would be tasked with predicting whether each token in a sequence belongs to one of the predefined section types using the Inside-Outside-Beginning (IOB) tagging system (Ramshaw and Marcus, 1999). Tepper et al. (2012) addresses the task of segmenting clinical records into distinct sections using a two-step approach. First, the section boundaries are identified. Then, the sections are passed to the second step, where a classifier is used to label each token as Begin, In or Out of the span of a section. Nair et al. (2021) proposes several transfer learning models based on clinical contextual embeddings for classifying clinical notes into the major SOAP sections (Podder et al., 2023). Zhou et al. (2023) investigates the effectiveness of continued pre-training in enhancing the transferability of clinical note section classification models. Both of the above papers resemble our work, however, they restrict them to SOAP sections and train specific models to do so. While the techniques devised so far have shown promise, to the best of our knowledge none of the previous works have tried in an unsupervised manner. With the advent of LLMs (Devlin et al., 2018; OpenAI, 2023), several works have shown the efficacy of LLMs in doing unsupervised zero-shot information extraction. The primary method for interacting with generative LLMs is by the use of natural language prompts. Wei et al. (2022) found a significant performance boost by asking the model to explain its chain of thought before answering the query. Further, Brown et al. (2020) showed that additional performance can be gained by passing some examples as part of the prompt, they named it Few-Shot prompting. Wang et al. (2023); Bian et al. (2023); Ashok and Lipton (2023) have shown the efficacy of prompting the LLM to extract biomedical named entities from scientific articles. More recently, Liu et al. (2023) used GPT-4 to de-identify documents in a zero-shot manner. This hints at the immense document understanding capabilities of LLMs and opens doors to its application to a wide array of previously unresolved tasks such as SI. Apart from the advancements in the field of ML and SI, to evaluate how well SI systems perform, a standardization of tasks as well as datasets is required. To that end, Uzuner et al. (2011) first proposed a SI task as part of Informatics for Integrating Biology and the Bedside (i2b2) benchmarks. Recently, Landes et al. (2022) argued that the previous dataset did not fully cover the nuances in SI task and proposed a dataset an order of magnitude larger as well as more comprehensive than one by Uzuner et al. (2011). However, the dataset proposed by Landes et al. (2022) is based on a clean source Johnson et al. (2016), which oftentimes is not the case in real-world scenarios. To that end, we also annotated a real-world dataset to evaluate LLMs on it as well. 3 Datasets 3.1 i2b2 2010 In their study, Tepper et al. (2012) meticulously curated a corpus comprising 183 annotated clinical notes extracted from a selection of discharge summaries within the i2b2 2010 (Uzuner et al., 2011) dataset. This dataset was annotated by an expert and served as a valuable resource for their research. However, owing to constraints imposed by Institutional Review Boards (IRBs), our current access to the i2b2 2010 dataset is limited. As a result, we were only able to procure clinical notes for 96 out of the originally annotated 183 documents. 
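For readers unfamiliar with the IOB framing mentioned in the related work above, the snippet below illustrates how a section-header span is encoded at the token level. The tag names follow the generic B-/I-/O convention and the example fragment is illustrative, not taken from any of the cited corpora.

```python
# Token-level IOB encoding of a clinical-note fragment whose section header
# is "History of Present Illness".
tokens = ["History", "of", "Present", "Illness", ":", "61M", "w/", "renal", "mass"]
tags   = ["B-HEADER", "I-HEADER", "I-HEADER", "I-HEADER", "O", "O", "O", "O", "O"]
```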
Dataset MedSedId i2b2 2010 Real World Document count 2002 96 100 Average token length 2307 1283 7841 Std. dev. token length 1732 726 8093 Average sections per doc 12 17 12 Std. dev. sections per doc 5.7 6.2 8 Table 2: Corpus Statistics 3.2 MedSecID MedSecID (Landes et al., 2022) is a publicly available corpus of 2,002 fully annotated medical notes from the MIMIC-III (Johnson et al., 2016) clinical record database. Each note has been manually annotated with section boundaries and section labels (See Table 1 for an example of a typical clinical note consisting of well-defined sections). The section labels correspond to different types of information that are typically found in clinical notes, such as history of present illness, physical exam findings, and progress notes. 3.3 Real-world In an increasingly digital world, one would be inclined to assume healthcare data also lives digitally. Surprisingly, that is not the case almost 75% of the healthcare dataset still lives in faxes (CCSI, 2022) (see figure 1 for a sample handwritten and faxed clinical notes). Whereas all preexisting SI datasets are digitally derived from clean EHR systems, which even though offer us some insight into the performance of state of art, however, fail to paint the full picture. Therefore, we use an internal dataset of prior authorization requests derived from faxed-in images being transcribed to text via an optical character recognition system (OCR). These requests contain EHR of patients in the form of doctors\u2019 notes, submitted in both PDF and image formats. These documents lack a standardized structure, with segments and titles that can vary significantly in length. Although it\u2019s possible to group these titles into clusters of similar meaning, the language and number of titles differ across documents. Additionally, OCR inaccuracies arise from unclear text, spelling errors, complex table structures, and handwritten content, resulting in highly noisy input for any SI system to process. 4 Annotation Methods In this section, we describe the dataset and the annotation design in our study. As we described before we decided to choose section identification (SI), a method to identify sections and sub-sections in EHR documents to split them into smaller text chunks and create some structure in these unstructured data. We designed a manual annotation task to identify these sections and create categorical section types. Below we explain the annotation task design, the result, and the challenges. 4.1 Annotation Design We randomly selected 100 records from a pool of one million records we have in our corpus. These records are in two forms, PDF or fax images which doctors submit to insurance companies, and hence, can arrive from any arbitrary format. We refer to these records as documents in the span of this manuscript. These documents have no standard structures and sometimes they contain multiple patients information at the same time. Six annotators with higher education and non-native speakers of English carry the annotation task. Each annotates an equal amount and random selection of these documents. We used Label Studio1, an open source data labeling platform. PDF or image file of each record is uploaded to label studio and the task was to mark the section and sub-section in each file and manually enter the corresponding text of these sections and sub-sections. To instruct the annotators, we provided written instructions as well as held a video discussion session and explained the task to the annotators. 
4.2 Annotation Result We aggregate the sections per document to form the final section and sub-section list. A total of 912 sections and subsections are identified which makes 14 sections and sub-sections on average per document. Then one annotator, different from the ones who have annotated the documents, categorized these sections and sub-sections into more gen1https://labelstud.io/ Figure 2: Section categories which are selected based on observation of top-header sections in the corpus and human judgment to associate section names to their topic or category of representations. eral categories based on the Consolidated Clinical Document Architecture (C-CDA) implementation guide2. In other words, the diverse categories are mapped to a category to unify them. This allows us to calculate IAA and be able to use the text semantic similarity method to find these sections in the unannotated documents. A total of 464 categories are coded of which 394 of these categories have a frequency of 1 and 70 categories have a frequency of 2 or more. We provide a small sample of the most frequent categories in Table 3 and Figure 2. 24 documents have been randomly selected and on each of these documents, a second annotator annotated the document. Further, we calculated the Jaccard similarity to report Inter-Annotator Agreement (IAA), The Jaccard similarity is a measure of the similarity between two sets of data. We obtained a Jaccard distance of 0.40, which is a fair agreement and an indication that the annotation task is challenging. The most diverse section and sub-section lists that each normalized into one section name are shown in table 4. Notably, the diversity of these two general categories indicates the challenge involved in structuring and identifying these sections in these documents. In some cases, categories such as Order Report or Medication Reconciliation can be both a section and sub-section according to the annotation results. This characteristic does not enforce the decision to select the general category for these types. 2C-CDA contains a library of CDA templates, incorporating and harmonizing previous efforts from Health Level Seven (HL7), Integrating the Healthcare Enterprise (IHE), and Health Information Technology Standards Panel (HITSP). https://www.hl7.org/ccdasearch/ 5 Experimental Setup Our task here is to take as input a document and output all the section headers found in it. For our underlying use case, we carried out testing with various LLMs like GPT-4 8k (OpenAI, 2023), LLaMa2 7B (Touvron et al., 2023), and more recent Mistral 7B (Jiang et al., 2023) prompting strategies3 (as shown in figure 3) and contrasted them with a baseline experiment that used keyword search, regex, MedSpacy library (Eyre et al., 2021) and the best model reported by Landes et al. (2022). MedSpacy is a clinical NLP toolkit built on the foundation of SpaCy, specifically designed to address the unique challenges of processing and extracting information from clinical text. This enables healthcare professionals to efficiently process and derive valuable insights from unstructured medical narratives. We did not restrict the tokens and used the entire clinical note for MedSecId. We extracted the actual section header using the header span mentioned in the MedSecId annotation and used it as the ground truth for our task. 
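Returning briefly to the inter-annotator agreement reported above, the Jaccard measure can be computed per document as the overlap between the two annotators' sets of normalized section labels; a small sketch with illustrative inputs follows.

```python
def jaccard(a, b):
    """Jaccard similarity between two sets of normalized section labels."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

ann1 = {"Medications Section", "Order Info", "Results Section"}
ann2 = {"Medications Section", "Results Section", "Physical Exam Section"}
print(jaccard(ann1, ann2))   # 0.5 for this toy example
```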
Because of the longer length of real-world data, we used the 32k version of GPT4 while keeping all the hyper-parameters to default such as the temperature, frequency penalty, and presence penalty to 0 and max tokens to 1000. Lastly, in this study, we utilized a privately hosted instance of GPT-4 to ensure the prevention of any potential data leakage. Prior to initiating the experiment, we implemented a thorough anonymization procedure to protect the dataset Protected health information (PHI). This involved substituting all 3CoT A5, One Shot A4 and Close Ended A6 prompting strategies are elaborated in appendix A. Medications Section Information about the current and past Medications Order Info This section consists of additional items that are required to conclude the assessments. Examples of such items are Mammograms, x-rays, etc., or the information about the provider of such items. Results Section Usually contains of lab results Physical Exam Section Result of physical exams such as Integumentary, Chest and Lung Exam, Cardiovascular, Abdomen, etc. Table 3: A sample of sections and subsections with the highest frequency. Medications Section Medications, Medication Changes, Medication List at End of Visit, Medication, Medication Reconciliation, Preventive Medicine, Medication List, Medication List at End of Visith, Medications (active prior today), Medications (Added, Consumed or Stopped today), Medications (Added, Continued or Stopped today), Medications Changes, Medications Discontinued During This Encounter, Medications Ordered This Encounter, Medications Places This Encounter, MEDICATIONS PRESCRIBED THIS VISIT, Medications Reviewed As Of This Encounter, Meds, Outpatient Medications, Patients Medication, Preventive Medication, Previous Medications, Previous medications Order Info Orders Placed, Order Questions, Order, Order Details, Order Information, Order Providers, Order Report, Ordering Provider, Order Name, Order name, Order Number, Order Plain X-ray/Interpretation, Order Requisition, Order Tracking, Order Transmittal Tracking, Order User/Provider Detail, Order-Level Documents, Ordering Provider Information, Orders, Orders Placed This Encounter, Orders Requiring a Screening Form Table 4: The list of sections and subsections that are normalized into one section name. You are a clinician and you read the given clinical document and identify section headers from them. Find section headers only from the clinical text. For each section header, return the answer as a JSON object by filling in the following dictionary. {section title: string representing the section header} Here are some clinical notes of a patient from a doctor. ### {context text} ### Figure 3: Basic Prompt Template personal identifiers, such as names, identification numbers, and ages, with fictitious entities. Apart from the basic prompts, we also experiment with combining them with Few-Shot (Brown et al., 2020) and CoT Prompting (Wei et al., 2022) where we ask the LLM to think step-by-step along with providing an example of the clinical note and a list of headings. We keep the prompts same across all the datasets. Lastly, the evaluation metric used here is the exact match (EM) accuracy as well as precision (P), recall (R), and F1-score calculated by comparing GPT-4\u2019s output to that of ground truth in the Inside-Outside-Beginning (IOB) scheme (Ramshaw and Marcus, 1999) as used in work by Landes et al. (2022). 
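A hedged sketch of how the zero-shot prompt of Figure 3 might be issued is shown below, assuming the OpenAI Python SDK v1 interface and the hyperparameters stated above (temperature 0, max tokens 1000); the model identifier and function name are illustrative, and downstream JSON parsing of the response is omitted.

```python
from openai import OpenAI

INSTRUCTIONS = (
    "You are a clinician and you read the given clinical document and identify "
    "section headers from them. Find section headers only from the clinical text. "
    "For each section header, return the answer as a JSON object by filling in the "
    "following dictionary. {section title: string representing the section header}\n"
    "Here are some clinical notes of a patient from a doctor.\n"
)

def extract_headers(client, note, model="gpt-4"):
    prompt = INSTRUCTIONS + "### " + note + " ###"
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
        max_tokens=1000,
    )
    return resp.choices[0].message.content  # string expected to contain JSON objects

# client = OpenAI()
# raw = extract_headers(client, clinical_note_text)
```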
Similar GPT-4 experiments were conducted on i2b2 2010 dataset but as the context length of i2b2 was smaller, in all the experiments we use GPT-4 8K. Lastly, because of cost constraints, we chose the best-performing model on above mentioned benchmarks to be evaluated against our internal real-world dataset. 6 Results Even though GPT-4 was able to perform very well on open source benchmark datasets, it was unable to reach the same level of performance on our internal corpus due to its complexity as shown in table 7. Experiments showed that GPT-4 was able to achieve an accuracy of only 37% in contrast to that of 96% on MedSecId corpus. LLaMa-2 and MedSpacy performed equally well, in that, former achieved higher recall than latter. This can be attributed to the global knowledge encoded in the LLMs, which is not the case with MedSpacy, while on the other hand MedSpacy would be much faster to run with less overhead. Results in table 5 and 6 show that one-shot GPT-4 OpenAI (2023) performed the best and achieved a new state of the art on MedSecId outperforming previous models by a significant margin. This unsupervised methodology Method Accuracy(%) Precision(%) Recall(%) F1(%) EM(%) Keyword Based 36.07 100 36.07 53.01 36.05 Regex 49.24 100 30.07 46.24 50.8 MedSpacy 56.63 100 38.29 55.38 62.63 GPT-4 Close Ended Prompt 73.23 100 73.23 84.55 73.2 GPT-4 Chain-of-Thought (CoT) 94.9 100 88.62 93.97 92.47 GPT-4 Zero Shot Prompt 94.41 100 87.61 93.40 92.05 GPT-4 One Shot Prompt 96.86 100 92.93 96.24 96.11 LLaMa-2 Close Ended Prompt 39.96 100 39.96 57.10 39.94 LLaMa-2 Zero Shot Prompt 52.29 94.61 32.92 48.82 62.25 LLaMa-2 One Shot Prompt 13.95 94.57 6.86 12.80 16.86 LLaMa-2 Chain-of-Thought (CoT) 38.21 93.95 21.11 34.48 46.95 Mistral Close Ended Prompt 5.24 100 5.24 9.96 5.24 Mistral Zero Shot Prompt 11.51 97.43 5.23 9.93 14.45 Mistral One Shot Prompt 8.41 98.61 4.07 7.82 10.48 Mistral Chain-of-Thought (CoT) 11.99 98.61 5.64 10.67 15.53 BiLSTM-CRF (Landes et al., 2022) 82.2 95 95 95 Table 5: Results on MedSecId Corpus Method Accuracy(%) Precision(%) Recall(%) F1(%) EM(%) Keyword Based 10.98 100 8.78 16.14 69.5 Regex 66.26 100 48.27 65.11 56.8 MedSpacy 38.45 100 21.92 35.96 38.14 GPT-4 Close Ended Prompt 11.82 78.24 8.46 15.27 73.8 GPT-4 Chain-of-Thought (CoT) 86.26 99.85 74.65 85.43 84.33 GPT-4 Zero Shot Prompt 89.47 100 78.46 87.93 84.58 GPT-4 One Shot Prompt 93.03 100 85.36 92.10 89.45 LLaMa-2 Close Ended Prompt 88.79 100 83.57 91.05 86.54 LLaMa-2 Zero Shot Prompt 56.2 100 36.62 53.61 58.59 LLaMa-2 One Shot Prompt 30.54 100 16.75 28.69 21.2 LLaMa-2 Chain-of-Thought (CoT) 40.23 99.83 22.61 36.87 50.7 Mistral Close Ended Prompt 10.41 100 6.65 12.48 19.34 Mistral Zero Shot Prompt 35.30 100 18.98 31.90 36.17 Mistral One Shot Prompt 6.58 100 3.24 6.29 7.80 Mistral Chain-of-Thought (CoT) 32.13 99.80 17.03 29.09 33.66 Maximum Entropy (Tepper et al., 2012) 91.1 90.8 91 Table 6: Results on i2b2 Corpus. While GPT-4 has superior performance, LLaMa-2 is not far behind. Method A P R F1 EM Regex 67.64 98.69 51.30 67.51 71.9 MedSpacy 5.92 100 4.13 7.93 15.72 GPT-4 ZS 37.53 100 24.18 38.95 37.29 LLaMa-2 ZS 13.33 100 7.81 14.49 19.75 Mistral ZS 3.67 100 1.83 3.60 5.24 Table 7: Results on Real-World Corpus. ZS stands for Zero-Shot prompting beats all the supervised models on the MedSecId corpus (Landes et al., 2022). Similarly, one-shot also had a state-of-the-art performance on i2b2 2010 dataset. On the other hand, LLaMa-2 did not perform as well as GPT-4, but nevertheless had on par performance with regex. 
Additionally, LLaMa-2 Touvron et al. (2023) performance on i2b2 dataset came very close to that of GPT-4 itself. This disparity in performance of LLaMa-2 as well as its variation in results across the experiments leads to inconclusive results. Lastly, Mistral (Jiang et al., 2023) performance was sub-optimal, exhibiting only a marginal improvement than a naive keyword based approach. 7 Discussion We performed an in-depth error analysis on the subset of records that GPT-4 was unable to predict correction. Our analysis found errors in the MedSecId dataset itself, which is one of the reasons GPT-4 did not get a 100% performance. Error analysis reveals on the rest of 2.8% missed sections of the GPT-4 finds that 18% of the above stated 2.8% belong to the \u201cFindings\u201d section label and 13% belong to the \u201cImage-Type\u201d category. Most of the documents did not have those section headers explicitly mentioned and were hidden as part of the text. Even though the precision was 100% in i2b2 2010 dataset, the granularity of the subsections, the Section Categories Number of Sections in Category Frequency Frequency (%) Assessment & Plan 413 958 60.98 physical exam 66 152 9.67 Personal Info 54 73 4.64 Medication 19 55 3.50 History of Present Illness 3 44 2.80 Family History 5 40 2.54 Allergies 4 40 2.54 Order Info 17 38 2.41 Clinical Info 16 36 2.29 UNKNOWN 13 25 1.59 Additional Info 4 18 1.14 Appointment Date 6 15 0.95 Progress Notes 1 15 0.95 Results 7 12 0.76 Mental Status 6 10 0.65 History 3 10 0.64 Lab Results 5 6 0.38 Alcohol Use 2 5 0.31 Abdomen 2 5 0.31 Referral 3 3 0.19 Active Medication 3 3 0.19" +} \ No newline at end of file diff --git a/intro_8K/test_introduction_short_2404.16297v1.json b/intro_8K/test_introduction_short_2404.16297v1.json new file mode 100644 index 0000000000000000000000000000000000000000..411c0b4c1793a6ff60055914604e325c35d4183f --- /dev/null +++ b/intro_8K/test_introduction_short_2404.16297v1.json @@ -0,0 +1,17 @@ +{ + "url": "http://arxiv.org/abs/2404.16297v1", + "title": "When Fuzzing Meets LLMs: Challenges and Opportunities", + "abstract": "Fuzzing, a widely-used technique for bug detection, has seen advancements\nthrough Large Language Models (LLMs). Despite their potential, LLMs face\nspecific challenges in fuzzing. In this paper, we identified five major\nchallenges of LLM-assisted fuzzing. To support our findings, we revisited the\nmost recent papers from top-tier conferences, confirming that these challenges\nare widespread. As a remedy, we propose some actionable recommendations to help\nimprove applying LLM in Fuzzing and conduct preliminary evaluations on DBMS\nfuzzing. The results demonstrate that our recommendations effectively address\nthe identified challenges.", + "authors": "Yu Jiang, Jie Liang, Fuchen Ma, Yuanliang Chen, Chijin Zhou, Yuheng Shen, Zhiyong Wu, Jingzhou Fu, Mingzhe Wang, ShanShan Li, Quan Zhang", + "published": "2024-04-25", + "updated": "2024-04-25", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "Fuzzing is a promising technique for software bug detection [8, 26]. Large Language Models (LLM) are rapidly gaining popularity across various applications for their versatility and capability [14, 15]. From natural language processing [7, 22, 27] to code generation [19, 24], LLM\u2019s broad utility is making it a prominent and sought-after solution in diverse domains. 
This development has naturally influ- enced fuzzing research: to help improve the fuzzing effectiveness, LLM has now become one of the key enablers to assist the core processes of fuzzing, including driver synthesis [28, 39], input gen- eration [9, 10], and bug detection [11, 17]. While excelling in natural language analysis, LLM encounters some common pitfalls like limited context length [20] and hallu- cination problems [16, 23, 31], etc. Consequently, LLM exhibits limitations in complex program analysis. These pitfalls of LLM affect the effectiveness of fuzzing, leading to testing performance degradation, manifesting as high false positives, low test coverage, and limited scalability. In this paper, we identify five common challenges when using LLM-based fuzzing technology: 1) Firstly, they often produce low- quality outputs in fuzzing driver synthesis, lacking the precision required for effective bug detection. 2) Secondly, these models demonstrate a limited scope in their understanding and processing capabilities, constraining their utility in diverse fuzzing scenarios. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. FSE\u201924, July 2024, Porto de Galinhas, Brazil \u00a9 2024 Copyright held by the owner/author(s). Publication rights licensed to ACM. ACM ISBN 978-x-xxxx-xxxx-x/YY/MM https://doi.org/10.1145/nnnnnnn.nnnnnnn 3) Thirdly, LLMs struggle with generating sufficiently diverse in- puts during the fuzzing process, which is critical for thorough and effective bug detection. 4) Fourthly, they face challenges in main- taining the validity of generated inputs, a crucial factor for accurate and reliable fuzzing. 5) Lastly, LLMs\u2019 inaccurate understanding of bug detection mechanisms hinders their ability to identify and ad- dress complex software vulnerabilities effectively, thereby limiting their overall effectiveness in the fuzzing process. We performed a comprehensive survey and revisited most recent fuzzing works that rely on LLM for tackling different problems in the fuzzing process. To our surprise, the results show that each work encounters at least one of these challenges. 1 Although LLMs are widespread, it is more important for us to avoid its weakness, and at the same time take advantage of its strengths. To this end, we perform an impact analysis of the impli- cations in three key fuzzing steps. These findings inspire us with some opportunities for better usage of LLM in each fuzzing step according to whether the corresponding corpus and documentation are rich. Furthermore, we performed some preliminary evalua- tions according to these opportunities by applying LLM in fuzzing database management systems(DBMS). 
The results demonstrate that the reasonable instantiation of those recommendations can overcome the challenges in LLM-assisted DBMS fuzzing.", + "main_content": "Limited Training Corpus ted Trai Corpus Limited Long-text Understanding Limited Long-tex Understanding Hallucination C3.1: Inaccurate Understanding C3.1: Inaccurate Understanding C2.1: Insufficient Diversity C2.1: Insuffic Diversity C1.1: Prone to Error C2.2: Limited Validity C2.2: Limit Validity C1.2: Limited Scope Bug Detection Input Generation Driver Synthesis Target Program Target Program Prompt Bug Repor Bug Report Challenges Fuzzing Loop Large Language Model Figure 1: Fuzzing Workflow with LLM enhanced. 2.1 Driver Synthesis Description. Recently, several pioneer works have been proposed to utilize LLMs to enhance driver synthesis [11, 12, 28, 38, 39]. 1Remark: The purpose of this work is not to point fingers or critique. Instead, it wants to show how we can overcome the challenges of LLM-assisted fuzzing and effectively leverage the advantages of LLMs and make it truly beneficial for the fuzzing process. arXiv:2404.16297v1 [cs.SE] 25 Apr 2024 FSE\u201924, July 2024, Porto de Galinhas, Brazil Jiang et al. Their basic idea is to use API documentation as the prompt context, and then ask LLMs to generate API invoking sequences as fuzzing drivers. For example, both TitanFuzz [11] and PromptFuzz [28] design customized prompt templates to guide LLMs in generating code that follows programming syntax and semantics. Challenges. The application of LLMs to driver synthesis can be ineffective if done directly, as LLMs have a tendency to produce hallucinations [7, 20] and perform less effectively on programs that are not included in their training corpus [20]. These limitations present two challenges for driver synthesis. The first one is that the synthesized drivers are prone to error, leading to a non-negligible number of false positives during fuzzing. For example, according to comprehensive evaluation on LLM-based driver synthesis for OSS-Fuzz projects [39], GPT-4 can correctly generate roughly 40% drivers, while the rest of the drivers contain errors. Among the erroneous drivers, 93% exhibit one or more of the following issues: type errors, mis-initialized function arguments, usage of non-existing identifiers, and imprecise control-flow dependencies. This occurrence primarily arises due to LLMs relying on pre-trained knowledge for driver synthesis, leading to the production of hallucinations [16]. The second challenge is that the application of directly using LLMs for driver synthesis has limited scope because LLMs have limited knowledge on unseen programs. For those target programs, LLMs sometimes use training knowledge to fill the gap, thus generating incorrect API invoking sequences. For example, developers from Google\u2019s OSS-Fuzz project [35] attempted to leverage LLMs to synthesize drivers. Out of 31 tested OSS-Fuzz projects, 14 successfully compiled new targets and increased coverage with the synthesized drivers. The drivers unsuccessfully synthesized by LLMs typically originated from less common projects like krb5 and rtpproxy. In contrast, LLMs are more likely to generate compilable and effective drivers for more common projects, such as tinyxml2 and cjson. Recommendations. We have the following recommendations: REC 1.1 Some targets whose code or use cases have been included in the training corpus. 
For these cases, employing LLM for automated synthesis of fuzz drivers, complemented by error-guided corrective measures, is a practical approach. Iteratively querying the LLM based on identified errors and fixing the errors are practical measures [39], which helps to address the prone-to-error challenge. For example, libpng is a common library and has already been seen by GPT4 in its training process. Consequently, it is possible to directly ask GPT4 to generate a fuzz testing driver for libpng by giving the prompt \u201cGenerating LLVMFuzzerTestOneInput for test libpng.\u201d However, the generated driver might still contain errors in grammar or encounter issues during the process of compiling and linking. Test engineers can subsequently submit individual LLM queries containing the error messages to rectify these issues, occasionally necessitating multiple iterations. REC 1.2 For targets without a dedicated corpus in training, one can collect valuable materials such as function prototypes, example programs, or connection rules between functions. Conducting prompt engineering which involves embedding these materials, enhances the precision in generating logical sequences of function calls for the creation of drivers. The prompt engineering approach is a practical solution to tackle the challenge of limited scope. For example, typst is a new markup-based typesetting system like LaTex and claims it is more easier to learn and use. To generate a fuzz driver for it, feed the prompt \u201cGenerate LLVMFuzzerTestOneInput for typst\u201d to ChatGPT-3.5 will encounter hallucination problems and generate a completely non-existent driver. Instead, the project typst has lots of documents and unit tests. Feeding these materials that illustrate the usage of the functions is helpful for LLMs to generate effective drivers [35]. Additionally, it is also feasible to iteratively query LLMs to address any errors that may be present in the drivers. REC 1.3 Sometimes, even with adequate documentation and examples, LLMs can still encounter challenges in generating valid drivers at times, especially for extremely complex targets like Linux kernel. These systems frequently involve intricate dependencies among their APIs, or there exist implicit dependencies among lowerlevel systems that pose challenges for LLM to capture. For these targets, it is advisable to refrain from relying on LLMs. Instead, it is more practical and feasible to explore conventional methods. For example, KSG [33] uses the ebpf to dynamically infer the kernel\u2019s system call argument type and value constraints. In contrast, LLM-based approaches such as KernelGPT [38] use static inference based on kernel man pages and source code. But they may find some complex dummy operations. And it\u2019s hard for them to deduct pointer references. Therefore, KSG can generate 2,433 Syzlang, which is 17.86\u00d7 more compared to KernelGPT [38]. 2.2 Input Generation Description. Recently, several pioneer works [5, 34, 36, 37] have been proposed to utilize LLM to enhance input generation. Their basic idea is to use input specifications and input examples as the prompt context and then ask LLMs to generate new inputs. For example, LLMFuzzer [5] feeds input specifications to LLMs to generate initial seeds for mutation-based fuzzers. Challenges. The application of LLMs to input generation can be ineffective if done directly, as LLMs heavily rely on training corpus and have limited long-text understanding [20, 32]. 
These limitations present two challenges for input generation. The first one is that the generated inputs have insufficient diversity, leading to inefficient exploration of the input space. This is because LLMs are pre-trained models and prone to responding to users\u2019 queries in a similar manner when given the same prompt context. Therefore, it is difficult for LLMs to generate diverse inputs if they only provide limited information. For example, ChatAFL [29] demonstrates a significant limitation when directly applying LLMs to the RTPS protocol fuzzing. If only a limited amount of protocol information is provided in the prompts, LLMs can only generate inputs that cover 4 states out of 10 states that the RTPS protocol supported. This results in a substantial portion of the RTSP state remaining unexplored. The second challenge is that the generated inputs often have limited validity, leading to early termination when the target program executes these inputs. This is because LLMs cannot fully understand the long texts of input formats or examples due to limited ability on long text processing [32]. For example, Border Gateway Protocol (BGP) is a complex protocol, whose document (BGP RFC 9952) has more than 28,000 words to describe its functionalities. When generating inputs of BGP based on the RFC description, LLMs usually forget to generate the length field of the TLV substructures in the BGP message because the description of the main message structure and the TLV substructures are a little far, making LLMs hard to totally understand BGP format. Recommendations. We have the following recommendations: REC 2.1 Some of the testing inputs to the system are common and have a large number of examples on the web, and they have When Fuzzing Meets LLMs: Challenges and Opportunities FSE\u201924, July 2024, Porto de Galinhas, Brazil been included in the LLM\u2019s training corpus. It is possible to directly employ LLM to generate test cases for them, combining methodologies focused on diversification. These methods encompass internal approaches, such as meticulously crafted prompts that demand using diverse features, as well as external methods, such as coverageguided genetic algorithms. They both contribute to address the challenge of insufficient diversity. For instance, when testing common text protocols such as HTTP and FTP, where LLM excels in its support for text-based languages, it is feasible to directly instruct LLM to generate test cases for these protocols. To increase diversity, for internal approaches, we can use prompts that encourage LLM to generate HTTP files with various methods (e.g., GET, POST, PUT), different headers, different query parameters, URL structures, various payloads, and other aspects. We can also interactively ask LLM to cover more types of messages [29]. For external approaches, we can utilize coverageguided generation used in conventional fuzzing along with more real-world examples to enhance LLM. REC 2.2 In many cases, the LLM is not trained with a dedicated training corpus specifically tailored for the test subjects. Rather than employing LLM directly for generating the final test cases, we suggest utilizing LLM to transform well-known knowledge to formulate the input specifications or build initial test cases. The input specification helps address the challenge of limited validity, and the initial test cases help address the challenge of insufficient diversity. 
For instance, in the case of protocol implementations lacking machine-readable grammar, generating valid test inputs automatically to adhere to the necessary structure and order becomes challenging. In such scenarios, leveraging that LLM has been trained on established protocols, allows the transfer of grammars from these protocols with the assistance of LLM and recorded message sequences. The grammar can enhance the validity of the generated test cases. With the grammar, conventional grammar-based fuzzers could be utilized to generate more test cases [29]. Another instance is transforming test cases of popular database systems to initial seeds for the tested database system. The SQL queries of popular database systems like PostgreSQL have rich diversity and they have already been trained for LLM. Therefore, leveraging the knowledge of LLM to transform them into the format of the target database system is feasible. Providing them to the fuzzer as the initial seed helps enhance the diversity of generated test cases. 2.3 Bug Detection Description. Recently, several pioneer works [21, 25] utilize LLM to enhance bug detection. Their basic idea is to use functionality descriptions of the target program as the prompt context, and then ask LLMs to generate code that implements the same functionalities with the target program. By comparing the execution results of the two functionally equivalent programs, they can detect logic bugs in the target program. For example, Differential Prompting [25] queries LLMs about the intention of a piece of provided code and then uses the obtained intention as a new prompt context for LLMs to generate code with the same intention. Challenges. The application of LLMs to bug detection can be ineffective if done directly, as LLMs have limited long-text understanding [32], posing a challenge to inaccurate understand of the semantics of the target program. For example, researchers [25] found that LLMs may misconstrue code designed to identify the longest common substring as being intended for finding the longest common subsequence. This misinterpretation can occur even though these two problems require entirely distinct code solutions. As a result, LLMs may generate code whose functionality deviates from the target program, thus leading to an inaccurate test oracle. According to the experiment results of Differential Prompting [25], it achieves 66.7% success rate when generating reference implementation for programs from the programming contest website Codeforces. While this is substantially better than its baseline, it still results in a false-positive rate of 33.3%, which is still not sufficient for practical usage. Recommendations. We have the following recommendations: REC 3.1 Defining test oracles is highly dependent on specific targets and scenarios, presenting the most formidable aspect of fuzzing. For complicated targets, we suggest to avoid analyzing results with LLM directly. Instead, consider employing LLM to extract features or patterns associated with a specific bug type, leveraging domain knowledge. Subsequently, monitoring the system using these patterns aids in addressing the challenge of inaccurate understanding. For example, many time-series databases like IoTDB implicitly handle exceptions. Consequently, the system will not crash or exhibit other abnormal behaviors. Nevertheless, these database systems generate extensive logs, and errors manifest as exceptions in these logs. 
Therefore, it becomes feasible to use LLM for analyzing the logs to discern error patterns. In such scenarios, we recommend employing LLM to scrutinize the logs, identify error patterns, and subsequently leverage these patterns for detecting logic errors. REC 3.2 Some targets or projects contain well-defined documentations, where the expected behaviors are clearly described, like the RFCs for protocols. For these cases, we suggest to leverage the natural language understanding ability of LLM to extract the expected behaviors from the documentations for test oracle definition. This helps LLM to understand the intention and design of the target programs, thus addressing the challenge of inaccurate understanding. For example, the RFCs for protocols usually contain detailed descriptions of the protocol\u2019s expected behaviors. Take the RFC 854 [4] for Telnet protocol as an example. It specifies expected behaviors during the negotiation of some disabled command options or unnegotiated commands. These can be used as test oracles and can be further used to uncover CVE-2021-40523 [30]. 3 POTENTIAL SOLUTIONS To demonstrate the practicality of our recommendations, we use the Database Management System (DBMS) as the target for LLMassisted fuzzing. Addressing challenges in driver synthesis, input generation, and bug detection, we propose three potential solutions: state-aware driver synthesis, cross-DBMS SQL transfer, and log-based Oracle definition. These solutions are implemented and compared with rudimentary uses of LLM, where it is directly employed. Experiments are conducted under identical settings on a machine with 256 cores (AMD EPYC 7742 Processor @ 2.25 GHz) and 512 GiB of main memory, demonstrating the efficacy of our recommended approaches in enhancing LLM-based fuzzing for intricate systems like DBMSs. 3.1 LLM-Enhanced Connector Synthesis Obstacle: Database connectors, also commonly known as database drivers, serve as intermediary components facilitating communication between applications and databases. These connectors define standard a set of interfaces, encompassing functions and parameters. The driver for fuzzing database connector consists of a sequence FSE\u201924, July 2024, Porto de Galinhas, Brazil Jiang et al. of these interfaces. Directly utilizing LLM to generate drivers for database connector will encounter two challenges: First is prone to error: API sequences contain semantic information that is embedded in the state of the database connector, directly generating sequences may import errors. Second is limited scope: LLM lacks the state transition knowledge of the connectors because it lacks the related corpus in training. Solution: Following REC 1.2 , we propose LLM-enhanced stateaware database connector synthesis. We first collect JDBC function prototypes and example programs that utilize JDBC. Then we model the connection relationships between JDBC functions as state-transition rules. Next, we gather the function prototypes, example programs, and connection rules as input for LLM. The prompt we give is like \u201c Based on the state-transition rules and state description of functions, please generate a sequence of APIS within length 15. 
It is required to cover a different combination of state transitions than before.\u201d Result: We implement LLM-enhanced connector synthesis into Wingfuzz\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc5band compare it against LLM\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc5b, which directly utilizes LLM to generate drivers for MySQL Connector/J [3], MariaDB Connector/J [2], and AWS JDBC Driver for MySQL [1]. We perform fuzzing on ClickHouse for each tool. Table 1 shows the driver correctness ratios and branch coverage by LLM\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc5band Wingfuzz\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc5bon three selected DBMSs in 12 hours. These statistics show that Wingfuzz\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc5balways performs better in both driver correctness ratio and branch coverage than LLM\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc5bon all three DBMSs. Specifically, Wingfuzz\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc5barchives 94% more correctness rate for driven synthesis. And the drivers generated by Wingfuzz\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc5bcover 56% more branches on average. The main reason is that the state-transition rules embed semantic information, and it also helps LLM generate API sequences that account for the diverse states within the database connector. Table 1: Driver Correctness Ratios and Branch Coverage. DBMS Driver Correctness Ratios Branch Coverage LLM\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc5b Wingfuzz\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc5b LLM\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc5b Wingfuzz\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc5b MariaDB Connector/J 0.142 0.331 583 843 MySQL Connector/J 0.216 0.367 1256 1982 AWS MySQL JDBC 0.203 0.394 1382 2293 3.2 Cross-DBMS SQL Transfer Obstacle: SQL queries, as the inputs of DBMS, are vital to DBMS fuzzing. Generating SQL queries directly via LLM faces two main challenges: ensuring semantic correctness and promoting query diversity. Semantically correct SQL queries are vital for triggering complex DBMS behaviors, as syntactical errors lead to parsing failures. The intricate SQL grammar, encompassing various clauses, expressions, and rules, poses a challenge for LLM in achieving semantic correct. Furthermore, diversity in SQL queries is crucial for probing deep DBMS logic. However, LLM\u2019s constrained variety, influenced by the absence of DBMS feedback, limits the exploration of diverse query structures. Solution: To overcome these challenges, we introduce the crossDBMS SQL transfer approach, aligned with the recommendation REC 2.2 , for SQL generation. In contrast to directly generating the SQL queries, we use LLM to transfer the test cases from other DBMSs as the initial seeds for fuzzing the target DBMS. These initial seeds are used to mutate new SQL test cases during the fuzzing loop. The process contains three key steps. First, it executes existing SQL test cases within its native DBMS to capture the schema information during execution. Second, it utilizes LLMs along with the captured schema information to guide the generation of new test cases based on the LLM responses. Finally, it temporarily comments out unparsable sections for fuzzers to ensure proper parsing and subsequently uncomments them after mutation. 
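A simplified sketch of this transfer step is given below; the prompt wording, the helper names, and the ask_llm callable are assumptions for illustration, and the real pipeline additionally conditions on schema information captured while executing the seed in its native DBMS.

def transfer_seed(sql_case, source_dialect, target_dialect, schema_info, ask_llm):
    # Ask the LLM to rewrite an existing test case from a popular DBMS dialect
    # into the dialect of the target DBMS, conditioned on the captured schema.
    prompt = (
        f"The following SQL was written for {source_dialect}.\n"
        f"Schema observed during execution: {schema_info}\n"
        f"Rewrite it as an equivalent query for {target_dialect}:\n{sql_case}"
    )
    return ask_llm(prompt)

def comment_out_unparsable(statements, can_parse):
    # Temporarily comment out statements the fuzzer's parser rejects so the
    # rest of the seed can still be mutated; they are uncommented afterwards.
    return [s if can_parse(s) else "-- " + s for s in statements]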
Result: We implement the solution called Wingfuzz\ud835\udc56\ud835\udc5b\ud835\udc5d\ud835\udc62\ud835\udc61and compare it with LLM\ud835\udc56\ud835\udc5b\ud835\udc5d\ud835\udc62\ud835\udc61, which directly uses LLM to generate the SQL queries. We run Wingfuzz\ud835\udc56\ud835\udc5b\ud835\udc5d\ud835\udc62\ud835\udc61and LLM\ud835\udc56\ud835\udc5b\ud835\udc5d\ud835\udc62\ud835\udc61on three DBMS: MonetDB [6], DuckDB [13], and ClickHouse [18]. Table 2: Semantic Correctness Ratios and Branch Coverage. DBMS Semantic Correctness Ratios Branch Coverage LLM\ud835\udc56\ud835\udc5b\ud835\udc5d\ud835\udc62\ud835\udc61 Wingfuzz\ud835\udc56\ud835\udc5b\ud835\udc5d\ud835\udc62\ud835\udc61 LLM\ud835\udc56\ud835\udc5b\ud835\udc5d\ud835\udc62\ud835\udc61 Wingfuzz\ud835\udc56\ud835\udc5b\ud835\udc5d\ud835\udc62\ud835\udc61 MonetDB 0.1594 0.4134 26,828 41,840 DuckDB 0.2551 0.3486 57,937 70,583 ClickHouse 0.1458 0.3093 124,887 145,383 Table 2 shows semantic correctness ratios and covered branches of LLM\ud835\udc56\ud835\udc5b\ud835\udc5d\ud835\udc62\ud835\udc61and Wingfuzz\ud835\udc56\ud835\udc5b\ud835\udc5d\ud835\udc62\ud835\udc61on three selected DBMSs in 12 hours. From the table, we can see that Wingfuzz\ud835\udc56\ud835\udc5b\ud835\udc5d\ud835\udc62\ud835\udc61performs better than LLM\ud835\udc56\ud835\udc5b\ud835\udc5d\ud835\udc62\ud835\udc61on DBMS fuzzing. Specifically, the test cases generated by Wingfuzz\ud835\udc56\ud835\udc5b\ud835\udc5d\ud835\udc62\ud835\udc61contain 159.35%, 36.65%, and 112.14% more semantic-correct SQL statements, and cover 55.96%, 21.83%, and 16.41% more code branches than that of LLM\ud835\udc56\ud835\udc5b\ud835\udc5d\ud835\udc62\ud835\udc61on MonetDB, DuckDB, and ClickHouse, respectively. It indicates that LLM can not directly generate high-quality SQL queries as the input for DBMS fuzzing. The main reason is that the transfer seeds improve the diversity of mutated test cases, and the fuzzer\u2019s mutator promises the semantic correctness of SQL queries. 3.3 Monitor-Based DBMS Bug Detection Obstacle: The most critical step for DBMS bug detection is to construct the test oracles to identify the logic or performance bugs in DBMS. A test oracle refers to a mechanism in DBMS fuzzing to determine the correctness or validity of the DBMS\u2019s behaviors. Directly using LLMs to construct the test oracle is challenging as LLMs lack specific knowledge about the intricate workings and behaviors of DBMS. They can not access the internal logic, making it difficult to accurately predict or emulate DBMS behavior. Solution: To address the challenges, we propose the Runtime Monitor-Based DBMS Bug Detection following the REC 3.1 , which detects the anomalies of DBMS by analyzing the runtime information of DBMS in real-time. To ensure the robustness of DBMS, the DBMS usually contains the implicit exception handler mechanism, which captures the internal exceptions to avoid system crashes. These exceptions usually output some key internal states and behaviors of DBMS, such as wrong execution logic. Unlike directly using LLM to construct the test oracle by checking the execution result of the SQL query, our approach involves collecting runtime information from the DBMS and using LLM to analyze the runtime information for bug detection. The process contains two main steps. First, it instruments an agent to extract the runtime information of DBMS. Then, it collects the runtime information and uses LLM to detect the anomaly by predefining some error pattern. 
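The monitoring step above can be illustrated with a small sketch in which plain regular expressions stand in for the LLM-derived error patterns; the pattern list and log format are illustrative assumptions.

import re

ERROR_PATTERNS = [
    re.compile(r"unhandled exception", re.IGNORECASE),
    re.compile(r"logical error", re.IGNORECASE),
    re.compile(r"assertion .* failed", re.IGNORECASE),
]

def scan_runtime_log(log_lines):
    # Flag any runtime log line that matches a predefined error pattern as a
    # bug candidate for later inspection.
    return [line for line in log_lines
            if any(p.search(line) for p in ERROR_PATTERNS)]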
Table 3: Number of Reported Bugs and Real Bugs. DBMS LLM\ud835\udc4f\ud835\udc62\ud835\udc54 Wingfuzz\ud835\udc4f\ud835\udc62\ud835\udc54 Name Reported Real Reported Real MonetDB 61 0 6 3 DuckDB 54 0 5 3 ClickHouse 67 1 3 3 Result: To evaluate the effectiveness of our recommendation, we implement the solution with Wingfuzz\ud835\udc4f\ud835\udc62\ud835\udc54and compare it with LLM\ud835\udc4f\ud835\udc62\ud835\udc54, which directly uses LLM to determine whether the execution of the SQL query is right during the fuzz loop. Table 3 When Fuzzing Meets LLMs: Challenges and Opportunities FSE\u201924, July 2024, Porto de Galinhas, Brazil shows the number of reported bugs and real bugs by LLM\ud835\udc4f\ud835\udc62\ud835\udc54and Wingfuzz\ud835\udc4f\ud835\udc62\ud835\udc54in 12 hours on MonetDB, DuckDB and ClickHouse. It shows the Wingfuzz\ud835\udc4f\ud835\udc62\ud835\udc54can detect more anomalies and has fewer false positives than LLM\ud835\udc4f\ud835\udc62\ud835\udc54. Specifically, LLM\ud835\udc4f\ud835\udc62\ud835\udc54totoally reported 182 bugs but only 1 bug is real. Instead, Wingfuzz\ud835\udc4f\ud835\udc62\ud835\udc54reported 14 bugs and 9 bugs are real bugs and have been confirmed. The main reason is that the collected runtime information contains the error message of DBMS, and it helps LLM to analyze and detect bugs. 4 CONCLUSION We identify and systematically analyze five major challenges when using LLM in fuzzing and confirm their prevalence through a review of most recent top-tier conference papers. These challenges affect the effectiveness and accuracy of the LLM-based fuzzing technologies. To support researchers in avoiding them, we provide recommendations that are applicable to effectively assist the main steps in fuzzing, depending on the richness of the relevant corpus and documentation. Our preliminary evaluation further demonstrates that these recommendations effectively address the challenges in LLM-assisted DBMS fuzzing." +} \ No newline at end of file diff --git a/intro_8K/test_introduction_short_2404.16300v1.json b/intro_8K/test_introduction_short_2404.16300v1.json new file mode 100644 index 0000000000000000000000000000000000000000..35e5d1f152af0966aa2fd40a4e4c92f4439919e2 --- /dev/null +++ b/intro_8K/test_introduction_short_2404.16300v1.json @@ -0,0 +1,17 @@ +{ + "url": "http://arxiv.org/abs/2404.16300v1", + "title": "Reinforcement Learning with Generative Models for Compact Support Sets", + "abstract": "Foundation models contain a wealth of information from their vast number of\ntraining samples. However, most prior arts fail to extract this information in\na precise and efficient way for small sample sizes. In this work, we propose a\nframework utilizing reinforcement learning as a control for foundation models,\nallowing for the granular generation of small, focused synthetic support sets\nto augment the performance of neural network models on real data classification\ntasks. We first allow a reinforcement learning agent access to a novel context\nbased dictionary; the agent then uses this dictionary with a novel prompt\nstructure to form and optimize prompts as inputs to generative models,\nreceiving feedback based on a reward function combining the change in\nvalidation accuracy and entropy. A support set is formed this way over several\nexploration steps. 
Our framework produced excellent results, increasing\nclassification accuracy by significant margins for no additional labelling or\ndata cost.", + "authors": "Nico Schiavone, Xingyu Li", + "published": "2024-04-25", + "updated": "2024-04-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Model AND Based AND Reinforcement AND Learning", + "gt": "Deep learning [10] is one of the most popular and suc- cessful methods for any task where a large dataset can be procured, including fundamental computer vision tasks like classification. However, large, well-balanced, well-labelled datasets are often difficult and prohibitively expensive to ac- quire. Consequently, much of contemporary image classi- fication utilizes a high quality source dataset and support sets with highly relevant data to the target task. The gen- eration of such support sets has been a focus of contem- porary research, and recently utilizes the output of the un- precedented success of large pretrained generative models like Stable Diffusion [13]. The advancements in generative models have led to the rise of synthetic datasets, where im- ages are generated in large scale according to the target task and used in place of a real training dataset, yielding excel- lent results [6, 11, 22]. Despite these advancements, the body of research relat- ing to synthetic datasets remains primarily focused on large- batch image synthesis. In this way, any issues caused by the unpredictable behaviour of modern generative models can easily be smoothed out. However, this results in the major- ity of successful applications requiring tens of thousands of images generated for a single task [6, 11], which is ineffi- cient in time and cost. The goal of creating specific, highly focused support sets composed of several hundred images rather than sev- eral thousand is currently an open problem at the forefront of generative computer vision research. Consequently, it raises the question of if synthetic data can supplement real data, making up a very small portion of the overall dataset to shore up specific weaknesses, or whether synthetic data must make up a significant amount of the dataset if it is to be used at all. Reinforcement learning [14] is a popular control scheme that has an agent learn the optimal behaviour given an en- vironment and a reward for desirable interactions. Recent studies have found reinforcement learning effective at writ- ing and re-writing prompts [3, 7], but the use of reinforce- ment learning to guide the evolution of prompts has yet to be explored. Reinforcement learning is an excellent frame- work for imposing specific learned behaviours upon the re- sulting agent, and we posit that combining reinforcement learning with pretrained generative models will impart that much-needed specificity on the synthesized images, result- ing in significant performance gains for a relatively small number of synthetic images. In this work, we introduce a framework utilizing rein- forcement learning as a control for large generative models to synthesize precise support sets, intended to bolster the lacking aspects of real datasets without overwriting them for increased model performance at no extra data or labelling costs. To accomplish this, we utilize a dictionary based on the features of the original training dataset, and allow a re- inforcement learning agent to learn the optimal structures and word choice to generate high quality, specific prompts for Stable Diffusion. 
The controlled output of Stable Dif- fusion is then used to supplement the existing training data for a neural network model, and the performance of this model on a validation set is given as feedback to the agent. 1 arXiv:2404.16300v1 [cs.LG] 25 Apr 2024 In this way, the framework allows Stable Diffusion to act as an extension of the reinforcement learning agent, acting directly to improve the performance of the model by tweak- ing the prompts that make up the support set. We evalu- ate this framework on several datasets, including CIFAR- 10 [8], and Tiny-ImageNet [9], showing free improvements on neural networks of \u223c1% for less than 500 total images in the support set. The main contributions for this work are: \u2022 A novel framework combining reinforcement learning and large pretrained generative models for the construc- tion of small, focused, and effective synthetic support sets. \u2022 A new reward scheme that facilitates a better interaction between reinforcement learning and classification.", + "main_content": "2.1. Reinforcement Learning Reinforcement learning [14] defines an agent and an environment with rules on how they can interact. The agent receives rewards based on how their actions affect the environment, with one of several reward schemes. The rewards inform the optimal behaviour of the agent, and thus the desirable properties of the end model. Popular reward schemes include exploration-based, which incentivizes exploring the action space, and goal-based, which explores to achieve set goals. Past works have attempted to use reinforcement learning directly in classification algorithms, but this generally yields lacklustre results for the amount of effort and training time required [4]. This is due to the long convergence time of conventional reinforcement learning algorithms, and the relative ease of using simple deep learning models when a well-labelled dataset is available, rather than optimizing the loss with an agent. In our framework, we circumvent this issue by using a deep learning model for classification and optimizing it by altering the training set, rather than directly making the predictions using the agent. 2.2. Generative Models Generative models have shown unprecedented success in many tasks in natural language processing and computer vision [1, 13]. Such models are often trained on datasets with in excess of one billion images, which stores a large wealth of knowledge that can be accessed through their generation capabilities [1]. These generative models have been widely used in contemporary research for image synthesis, such as augmentation of existing samples to artificially simulate a larger dataset [19, 20]. Replacing the dataset entirely with synthetic images is also a topic of interest, with excellent preliminary results despite no real data [22]. Finally, the generation of large support sets to supplement real data has Figure 1. Overall framework also been explored, but this mainly utilizes synthesis over a large scale to shore up the weaknesses of a dataset [11]. Contemporary generative models usually require text prompts to guide their behaviour. General prompting is successful in simple tasks, such as guided image synthesis, but complex and specific prompts often lead to unexpected results. This leads to an area of research known as prompt engineering, which is the focus of much of the recent literature in the topic of large models [2]. 
Common approaches generally utilize a fixed set of prompts that have been carefully engineered to produce certain results; in our framework, we allow the prompts to evolve naturally from a general structure to their optimal state using reinforcement learning to choose the subjects and the model performance as feedback. 3. Methods 3.1. Problem Formulation Initially, there is a well-labelled dataset D, consisting of N training samples, and a synthetic support set S, consisting of k\u2217m samples, where k is the current step number, and m is the number of samples generated per step. In this work, we impose an extra limit Nsyn on the number of samples in S. There is also a validation set V, and a test set T . Our goal in this study is to train a reinforcement learning agent A to optimally control a pretrained generative model, such as Stable Diffusion, to optimally populate S with at most Nsyn synthetic images, where Nsyn << N. As shown in Fig. 1, in each step, the agent forms a prompt, feeds it to Stable Diffusion, and the resulting images are added to S. The resulting dataset D+S is used to train a model M , and its performance on V is passed back to A as feedback. This 2 Figure 2. Images generated using our framework using CIFAR10 [8] labels. continues until a total of Nsyn images are contained within S, at which point the exploration thread terminates. When all exploration threads within the preset exploration budget are explored, the resulting framework is tested on the test set T yielding the final performance. 3.2. Image Synthesis For image synthesis, we are using Stable Diffusion [13], a successful text-to-image model that is trained on billions of text-image pairs.Stable Diffusion has already been used to great effect in contemporary works when the aim is to replace a real dataset [18, 22], and to augment existing samples [19, 20], but with comparatively fewer works focusing on consistently generating small, effective support sets. 3.3. Controlling the Synthesis with RL Reinforcement learning (RL) defines an agent and an environment, and gives a set of actions that the agent can take to affect the environment. In our framework, we take a classification model and its training dataset as the Environment. The reinforcement learning agent adaptively selects text prompts for the generative model towards image synthesis, which supplements the training set for classification performance improvement. The agent then receives feedback based on the change in the model\u2019s performance, which is taken as the State in our reinforcement framework. In this study, we adopt the policy-based method for agent optimization, building a policy \u03c0 : s \u2212 \u2192a that maps states to optimal actions [14]. The specific objective function is: L(\u03b8) = \u02c6 E[min(rt(\u03b8) \u02c6 At, clip(rt(\u03b8), 1 \u2212\u03f5, 1 + \u03f5) \u02c6 At)]. (1) where rt = \u03c0\u03b8(at|st) \u03c0\u03b8old(at|st) is the probability ratio, \u02c6 At is the estimator of the advantage function at step t, and \u03f5 is a small value. Action space: Our framework allows the reinforcement learning agent to interact with Stable Diffusion by forming prompts. Prompts of unlimited length are subject to unmanageable time complexity, so we utilize a set dictionary based on the dataset. We formulate the interaction with a basic sentence structure with enough expression to accurately place the image, and pose the following format: \u201dA {domain} of a {class}, {class}, and {class}\u201d. 
Domains include photographs, digital artwork, paintings, mosaics, and other clear representations of the class. Next, three class names are chosen from the list of classes in the dataset. We notice that Stable Diffusion usually puts more attention on the first \u201dclass\u201d term and generates the corresponding theme in the resulting image. Thus, our prompt design allows the agent to position the generated images at the boundaries between classes, which is where new images are most effective for improving classification performance [12]. This is in contrast to traditional prompting methods, where the prompt describes the primary subject of interest with qualifiers for other subjects. We instead follow contemporary diversity research, prioritizing brevity and maximal control [15]. The benefits of our approach are that single-class representative samples can be easily generated as follows: \u201dA {domain} of a car, car, and car\u201d, which has the added benefit of including more representative features from the chosen class due to the repetition. Multi-class samples can be equally easily generated by including two or three different class names, and the significance of each class can be altered by changing the order the classes appear in. In this way, our method allows the agent a yet unseen amount of control over the output of Stable Diffusion, resulting in significantly improved precision. Reward function: The agent\u2019s desired behaviour is to increase the accuracy of the classification model as much as possible with limited image synthesis. In our framework, we use a combined reward function, utilizing the validation set accuracy and the entropy to bias our model towards high, balanced accuracy. Under the assumption of a welllabelled training dataset, the former (i.e. classification accuracy on validate set) offers the most unfiltered access to the state changes in the model\u2019s performance. It is noteworthy that different from previous works utilizing reinforcement learning for classification, the accuracy alone is used, the addition of entropy in our reward allows the framework to simultaneously reward the improvement of weak classes, which improves the overall model performance on underrepresented classes. The formulation of our reward function is shown in Eq. 2, where the entropy under a state s can be calculated following Eq. 3. r(s, s\u2032) = \u2206Acc(s \u2192s\u2032) \u2212\u2206\u03c3entropy(s \u2192s\u2032), (2) \u03c3entropy(x, M) = \u2212\u03a3k i=1pM(yi|x) log pM(yi|x), (3) where s\u2032 is the state after performing action a, and s is the state before performing action a, and pM(\u02c6 y|x) represents the class probability of sample x under model M. 3 Pretrained Rand Syn Ours ResNet-18 92.0 92.3 92.7 ResNet-50 93.9 94.2 94.5 VGG-16 93.9 94.1 94.9 ShuffleNetV2 93.3 93.6 94.1 EfficientNetV2S 94.1 94.3 95.2 Table 1. Classification accuracy (%) on CIFAR-10 [8]. Pretrained Rand Syn Ours ResNet-18 54.3 54.4 54.7 ResNet-50 71.1 71.1 71.5 VGG-16 63.2 63.4 63.9 ShuffleNetV2 48.6 48.6 48.8 EfficientNetV2S 69.9 70.0 70.4 Table 2. Classification accuracy (%) on Tiny ImageNet [9]. 3.4. Full Algorithm One training step for the agent A consists of the following processes, in order: 1. A chooses a domain and three classes in the prompt to represent the generated images. 2. m images are generated following the prompt, which are added to S. 3. M is trained on D + S, and tested again on V, reporting the accuracy and entropy of the predictions. 4. 
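A compact sketch of how an action maps to a prompt and how the reward of Eqs. (2)-(3) could be computed is shown below; the helper names, the source of the accuracy values, and the use of the mean prediction entropy over the validation set are assumptions for illustration.

import math

def build_prompt(domain, c1, c2, c3):
    # Instantiate the fixed sentence structure that defines the action space.
    return f"A {domain} of a {c1}, {c2}, and {c3}"

def entropy(probs):
    # Shannon entropy of one predicted class distribution (Eq. 3).
    return -sum(p * math.log(p) for p in probs if p > 0)

def reward(acc_old, acc_new, probs_old, probs_new):
    # r(s, s') = change in validation accuracy minus change in mean
    # prediction entropy (Eq. 2).
    h_old = sum(entropy(p) for p in probs_old) / len(probs_old)
    h_new = sum(entropy(p) for p in probs_new) / len(probs_new)
    return (acc_new - acc_old) - (h_new - h_old)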
The reward r(s, s\u2032) is given back to the agent. If k = 1, then the pretrained statistics are used in place of the data from the previous state s. This sequence is optimized using Proximal-PolicyOptimization [14] to find the optimal set of Nsyn synthetic samples contained in S. After the training process is completed, the algorithm has found the optimal prompts for to generate the optimal support set, and runs a final time without feedback to form S, the desired support set. 4. Results & Discussion 4.1. Datasets We evaluate our framework on two popular natural image datasets, CIFAR-10 [8] and Tiny ImageNet [9]. We chose these datasets due to computational reasons \u2013 the action space complexity scales as n3, where n is the number of classes in the dataset. Tiny ImageNet is a 200 class balanced dataset of 100 000 64x64 coloured images, and CIFAR-10 is a 10 class balanced dataset of 60 000 32x32 coloured images. In each case, we split the datasets using an 80:10:10 ratio of train:validation:test. 4.2. Experimental Protocol We follow the setup laid out in Section 3. For both datasets, we use a domain dictionary of {\u201dphotograph\u201d, \u201dpainting\u201d, \u201dstill-life\u201d, \u201dimage\u201d, \u201ddigital image\u201d} and a class dictionary composed of each class name once. In experiments, we select k = 10 to generate 10 images per step and our algorithm will run until a maximum of Nsyn = 400 images. Various models, including ResNet18, ResNet50 [5], ShuffleNetV2 [17], VGG-16 [16], and EfficientNetV2 [21], are evaluated in our experiments. We compare the results of our framework against vanilla trained models and the models trained with random synthetic images in equal number. The \u2019Random Synthesis\u2019 setting adds to the training set 400 images synthesized by selecting random classes to fill the blanks in the prompt, and our method uses the full reinforcement learning framework. 4.3. Main Results and Discussion The results of applying our framework are reported in Tables 1 and 2. In addition, example images generated off of the CIFAR-10 dataset are demonstrated in Fig. 2. From these results, we can see that our framework is superior to random synthesis for small-batch support set synthesis, increasing the accuracy by as much as 0.9% over the random synthesis method, and 1.1% over the baseline model. Notably, for two backbones on Tiny ImageNet, random synthesis fails to improve the performance of the model by > 0.1%, while our framework increases the accuracy by \u223c0.2%. In addition, our method adds only 0.33% extra images for CIFAR-10, and 0.2% for Tiny-ImageNet. Our experimental results show that the proposed framework has a high performance gain relative to the number of samples synthesized, a characteristic not seen in prior arts. We attribute this gain to the fine control that our designed reinforcement learning agent gives over the output of the large pretrained model, and the effectiveness of the feedback given back to the agent. Our framework currently requires some amount of information about the target dataset in order to work: class names, and a rough domain. This could be bypassed by forming the dictionary using an image-to-text encoder on representative samples after clustering by an unsupervised learning algorithm, but we leave the pursuit of this direction for future work. 5. 
Conclusions In this work, we proposed a framework allowing for the granular generation of small, focused synthetic support sets to augment the performance of general backbone networks on real data classification tasks. Our framework exploits the wealth of information present in large pretrained models by controlling their output using reinforcement learning agents, so that optimal, explainable prompts can be generated over many training steps. Our framework produced excellent results on a variety of backbones, increasing classification accuracy by significant margins for no additional labelling or data cost. 4" +} \ No newline at end of file diff --git a/intro_8K/test_introduction_short_2404.16301v1.json b/intro_8K/test_introduction_short_2404.16301v1.json new file mode 100644 index 0000000000000000000000000000000000000000..41539f8dfb49da589a14a745bae3fb697deeebb5 --- /dev/null +++ b/intro_8K/test_introduction_short_2404.16301v1.json @@ -0,0 +1,16 @@ +{ + "url": "http://arxiv.org/abs/2404.16301v1", + "title": "Style Adaptation for Domain-adaptive Semantic Segmentation", + "abstract": "Unsupervised Domain Adaptation (UDA) refers to the method that utilizes\nannotated source domain data and unlabeled target domain data to train a model\ncapable of generalizing to the target domain data. Domain discrepancy leads to\na significant decrease in the performance of general network models trained on\nthe source domain data when applied to the target domain. We introduce a\nstraightforward approach to mitigate the domain discrepancy, which necessitates\nno additional parameter calculations and seamlessly integrates with\nself-training-based UDA methods. Through the transfer of the target domain\nstyle to the source domain in the latent feature space, the model is trained to\nprioritize the target domain style during the decision-making process. We\ntackle the problem at both the image-level and shallow feature map level by\ntransferring the style information from the target domain to the source domain\ndata. As a result, we obtain a model that exhibits superior performance on the\ntarget domain. Our method yields remarkable enhancements in the\nstate-of-the-art performance for synthetic-to-real UDA tasks. For example, our\nproposed method attains a noteworthy UDA performance of 76.93 mIoU on the\nGTA->Cityscapes dataset, representing a notable improvement of +1.03 percentage\npoints over the previous state-of-the-art results.", + "authors": "Ting Li, Jianshu Chao, Deyu An", + "published": "2024-04-25", + "updated": "2024-04-25", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Semantic AND Segmentation AND Image", + "gt": "Neural Networks [1] and Transformers [2] have achieved great success in semantic segmentation tasks, but super- vised tasks typically require a large amount of annotated data. Pixel-level annotation is needed, with at least an hour for each image [3], which significantly increases the cost. One approach to address this problem is to utilize existing annotated data or easily obtainable synthetic data to train models and test them on target data. However, due to domain differences, the model\u2019s performance metrics often decline substantially when tested on target data. In order to obtain a more robust model, researchers have proposed UDA meth- ods [4][5][6], transferring knowledge from annotated source domain data to unannotated target data. 
It has been proven that CNNs are sensitive to distribu- tion shifts [7] in image classification. Recent studies [8] have shown that Transformers are more robust compared to these factors. In addition, CNNs mainly focus on texture [9], while Transformers emphasize shape, which is more similar to hu- man vision. Some researches have revealed significant dif- ferences between the induction bias of standard CNNs and human vision: humans primarily rely on object content (i.e., shape) for recognition [10], while CNNs exhibit a strong pref- erence for style (i.e., texture) [9]. This explains why CNNs are more susceptible to changes when switching between do- mains, as image style is more likely to vary across different domains. Early studies [11][12][13] have confirmed that feature dis- tribution shifts caused by style differences mainly occur in the shallow layers of the network. This implies that the shal- low layers\u2019 feature distribution in the network can reflect the style information of the input images. Therefore, following these works\u2019 methods, we manipulate the style features of the feature maps in the shallow layers of the network. The fea- ture extractor captures the style features of the target domain while preserving the content of the source domain. This ap- proach weakens the style features of the source domain while enhancing the style features of the target domain, achieving style feature transfer.", + "main_content": "2.1. Image to Image Domain Adaptation In UDA, we are given a source dataset as Ds = {(xs i, ys i )}Ns i=1 (1) where Ns is the number of the color images in the dataset, and ys \u2208RH\u00d7W represents the associated semantic map of xs \u2208RH\u00d7W \u00d73. Similarly, Dt = {xt i}Nt i=1 (2) arXiv:2404.16301v1 [cs.CV] 25 Apr 2024 is the target dataset where true semantic labels are missing. Typically, segmentation networks trained on Ds exhibit performance degradation when tested on Dt. Here, we use Fourier Domain Adaptation (FDA) [14] and RGB adaptation to reduce the domain gap between the two datasets at the image-level. FDA aims to minimize domain differences by replacing the low-frequency components in the target domain with those from the source domain. This is because low-frequency components can be inferred as the domain style. FDA has achieved significant improvements in semantic segmentation. Therefore, we employ the FDA method for data augmentation, as expressed by the formula: xs\u2192t = F\u22121([\u03b2\u25e6FA(xt)+(1\u2212\u03b2)\u25e6FA(xs), FP (xs)]) (3) The variables FA and FP denote the amplitude and phase components of the Fourier transform, respectively. In the inverse Fourier transform, the phase and amplitude components are remapped to the image space. The hyperparameter \u03b2 determines the filter\u2019s size in the inverse Fourier transform. Random RGB shift is a prevalent and widely adopted technique for data augmentation. Through our experimental observations, we fortuitously discovered that employing random RGB shift as a data augmentation technique significantly enhances the model\u2019s performance. Our hypothesis is that the image-level implementation of random RGB shift enables a closer resemblance between the style of the source and target domains, thereby mitigating the domain gap. Building upon the concept of random RGB shift, we introduce a RGB adaptation method as a solution for domain adaptation. 
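The Fourier-based transfer in Eq. (3) can be illustrated with a minimal NumPy sketch. Here \u03b2 is assumed to control the side length of a centered low-frequency window whose amplitude is swapped between domains; `fda_source_to_target` is an illustrative helper name rather than the paper's code, and the exact filter shape may differ from the paper's. The RGB adaptation introduced above is sketched after its defining equations below.

```python
# Minimal NumPy sketch of FDA-style low-frequency amplitude swapping (cf. Eq. (3)).
# Assumption: beta controls the side length of the centered low-frequency window.
import numpy as np

def fda_source_to_target(x_src: np.ndarray, x_trg: np.ndarray, beta: float = 0.01) -> np.ndarray:
    """x_src, x_trg: float arrays of shape (H, W, C) in [0, 1]."""
    fft_src = np.fft.fft2(x_src, axes=(0, 1))
    fft_trg = np.fft.fft2(x_trg, axes=(0, 1))

    amp_src, pha_src = np.abs(fft_src), np.angle(fft_src)
    amp_trg = np.abs(fft_trg)

    # Center the low frequencies, then swap a (2b+1) x (2b+1) window of the amplitude.
    amp_src_shift = np.fft.fftshift(amp_src, axes=(0, 1))
    amp_trg_shift = np.fft.fftshift(amp_trg, axes=(0, 1))
    h, w = x_src.shape[:2]
    b = int(np.floor(min(h, w) * beta))
    ch, cw = h // 2, w // 2
    amp_src_shift[ch - b:ch + b + 1, cw - b:cw + b + 1] = \
        amp_trg_shift[ch - b:ch + b + 1, cw - b:cw + b + 1]
    amp_mixed = np.fft.ifftshift(amp_src_shift, axes=(0, 1))

    # Recombine the mixed amplitude with the source phase (keeps source content).
    fft_mixed = amp_mixed * np.exp(1j * pha_src)
    return np.real(np.fft.ifft2(fft_mixed, axes=(0, 1)))

x_s = np.random.rand(512, 1024, 3)
x_t = np.random.rand(512, 1024, 3)
x_s2t = fda_source_to_target(x_s, x_t, beta=0.01)
```

Keeping the source phase preserves semantic content (and hence the labels), while only the low-frequency amplitude, which largely carries global color and illumination style, is borrowed from the target domain.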
The mean value of each channel is calculated for RGB images x as follows: \u00b5(x) = 1 HW H X h=1 W X w=1 xhw (4) xs\u2192t = xs + (\u00b5(xt) \u2212\u00b5(xs)) (5) The variables \u00b5(s) and \u00b5(t) represent the mean values of the source domain image and the target domain image, respectively, along the channel dimension. By employing this method, the content of the source domain image remains unaltered, thus preserving the availability of accurate labels. Additionally, it facilitates the closer alignment of the source domain image with the target domain image within the RGB space. 2.2. Style Adaptive Instance Normalization In UDA methods, the primary factor causing domain shift is the disparity in styles across domains. The presence of domain shift constrains the models\u2019 capacity for generalization in both domain adaptation and domain generalization tasks. Previous studies have demonstrated that shallow features extracted by backbone networks possess the capability to capture style information in images. Established approaches typically characterize the style features of an image by computing the mean and standard deviation along the channel dimension of shallow features. \u03c3(x) = v u u t 1 HW H X h=1 W X w=1 (xhw \u2212\u00b5(x))2 + \u03f5 (6) Conventional instance normalization can eliminate specific stylistic information from an image. Directly applying this method to UDA can diminish the network\u2019s capacity to learn the style information of the source domain images. However, it also disregards the style information of the target domain, resulting in diminished performance and limited generalization ability on the target domain. To decrease the network\u2019s ability to learn style information from the source domain images while enhancing the style information of the target domain images, we apply AdaIN [12] to replace the style information of the source domain images with that of the target domain images. Meanwhile, this method retains the content information of the source domain images. We term the proposed approach as Style Adaptive Instance Normalization (SAIN). The specific implementation formula is as follows: SAIN(xs, xt) = \u03c3(xt) \u0012xs \u2212\u00b5(xs) \u03c3(xs) \u0013 + \u00b5(xt) (7) \u00b5 and \u03c3 represent the mean and standard deviation of the feature map in the channel dimension, respectively. By transferring the style of the target domain to the source domain during the training process, the network g\u03b8 biased towards content no longer relies on the style of the source domain to make decisions but focuses more on content while also paying attention to the style of the target domain. During testing, we directly use network g\u03b8 without SAIN to ensure the independence of predictions and reduce computational burden. Therefore, we replace the original loss function with a content-biased loss, shown as follows: LS i = \u2212 H\u00d7W X j=1 C X c=1 yS (i,j) log SAIN \u0010 g\u03b8(xS i )(j,c), g\u03b8(xT i )(j,c)\u0011 (8) Furthermore, we follow the consistency training in DAFormer, which involves training the teacher network on augmented target data using DACS [15], while the teacher model generates pseudo-labels using non-augmented target images. 3. EXPERIMENTS 3.1. Implementation Details The proposed method is applied to two challenging unsupervised domain adaptation tasks, where there are abundant semantic segmentation labels in the synthetic domain (source domain), but not in the real domain (target domain). 
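A minimal PyTorch sketch of the RGB adaptation in Eqs. (4)-(5) and the SAIN operation in Eqs. (6)-(7) is given below. `rgb_adapt` and `sain` are illustrative names rather than the authors' code, and the small \u03f5 follows Eq. (6). Consistent with the description above, `sain` would only be applied to shallow feature maps during training; the network is used without it at test time.

```python
# Minimal PyTorch sketch of RGB adaptation (Eqs. (4)-(5)) and SAIN (Eqs. (6)-(7)).
# Function names are illustrative, not taken from a released implementation.
import torch

def rgb_adapt(x_s: torch.Tensor, x_t: torch.Tensor) -> torch.Tensor:
    """Shift source-image channel means toward the target image. Shapes: (B, 3, H, W)."""
    mu_s = x_s.mean(dim=(2, 3), keepdim=True)
    mu_t = x_t.mean(dim=(2, 3), keepdim=True)
    return x_s + (mu_t - mu_s)

def sain(f_s: torch.Tensor, f_t: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """AdaIN-style re-styling of source feature maps with target statistics. Shapes: (B, C, H, W)."""
    mu_s = f_s.mean(dim=(2, 3), keepdim=True)
    mu_t = f_t.mean(dim=(2, 3), keepdim=True)
    sigma_s = torch.sqrt(f_s.var(dim=(2, 3), keepdim=True, unbiased=False) + eps)
    sigma_t = torch.sqrt(f_t.var(dim=(2, 3), keepdim=True, unbiased=False) + eps)
    return sigma_t * (f_s - mu_s) / sigma_s + mu_t

# Usage: source features keep their content but carry target-domain style statistics.
f_src, f_trg = torch.randn(2, 64, 128, 128), torch.randn(2, 64, 128, 128)
f_src_stylized = sain(f_src, f_trg)
```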
The two synthetic datasets used are GTA5 [16] and SYNTHIA [17], while the real domain dataset is CityScapes [3]. The proposed method is validated based on the DAFormer network and the Mix Transformer-B5 encoder [18]. All backbone networks are pretrained on ImageNet. In the default UDA setting, the MIC [6] masked image self-training strategy and the training parameters are used, including the AdamW optimizer, the encoder learning rate of 6 \u00d7 10\u22125, the decoder learning rate of 6 \u00d7 10\u22124, 60k training iterations, a batch size of 2, linear learning rate warm-up, and DACS [15] data augmentation. 3.2. Evaluation First, we integrate RGB adaptation with several significant UDA methods, including DAFormer [4], HRDA [5] and MIC [6], using the DAFormer framework. Table 1 demonstrates that RGB adaptation achieves notable improvement compared to previous UDA methods without RGB adaptation. Karras et al. [19] demonstrated that styles at different levels encode distinct visual attributes. Styles from fine-grained spatial resolution (lower levels in our network) encode lowlevel attributes like color and fine textures, whereas styles from coarse-grained spatial resolution (higher levels in our network) encode high-level attributes including global structure and textures. Therefore, the application of our SAIN module at the appropriate level is necessary to mitigate adverse style-induced biases. The networks from Block 1 to Block 4 become increasingly deeper. Figure 1 illustrates that the most notable improvement is achieved when applying SAIN in Block 3. However, applying SAIN to features at excessively low levels only has a limited impact on reducing feature biases. Additionally, using SAIN in excessively high-level styles may result in the loss of essential semantic information. Through our experimental findings, we discovered that the concurrent application of SAIN to both Block 2 and Block 3 results in optimal performance. Visual comparisons are conducted with the second performer (i.e., MIC), which utilizes the same segmentation network backbone as ours. Figure 2 illustrates that our model\u2019s prediction results demonstrate higher accuracy. Additionally, our approach demonstrates strong performance on some common categories, including the first row with the terrain, wall in the second row and building in the third and truck in fourth rows. We attribute this phenomenon to the transferability of RGB adaptation and SAIN, which enables the model to learn more style information from the target domain. 3.3. Influence of Style on UDA In the following, we analyze the underlying principles of our method on GTA\u2192Cityscapes. Firstly, we analyze the impact Table 1. Performance (IoU) of RGB adaptation with different UDA methods on GTA\u2192Cityscapes. Network UDA Method w/o RGB Adapt. w/ RGB Adapt. DAFormer DAFormer 68.3 69.37 DAFormer HRDA 73.8 74.45 DAFormer MIC 75.9 76.64 Fig. 1. The effect of SAIN on different blocks. of SAIN on UDA at various feature levels. As shown in Figure 1, as the network depth increases from Block 1 to Block 3, the improvement in the performance of UDA using SAIN also increases accordingly. The results in Table 2 and Table 3 demonstrate significant performance improvements across all benchmarks. In particular, our method has led to a +1.03 increase in mIoU for GTA\u2192CS and a +1.05 increase for Synthia\u2192CS. For most categories, such as building, fence, rider, truck, and train, there is a certain performance improvement. 
However, there are also some categories that have a slight performance decrease after using SAIN, such as bike. This may be due to the difference in annotation strategies for the bike category between the Cityscapes dataset and the GTA dataset. 4. CONCLUSION We have proposed a straightforward method for reducing domain discrepancy, which requires no additional learning and can be seamlessly integrated into self-supervised UDA. By transferring the target domain style to the source domain within the latent feature space, the model is trained to prioritize the style of the target domain during its decision-making process. Our experiments validate the remarkable performance enhancements achieved by our proposed method in Transformer-based domain adaptation. Despite its simplicity, the results indicate that our method actually surpasses the current state-of-the-art techniques. This suggests that the distributional misalignment caused by shallow-level statistics can indeed impact cross-domain generalization, but it can be Table 2. Semantic segmentation performance (IoU) on GTA\u2192Cityscapes Method Road S.walk Build. Wall Fence Pole Tr.light Tr.sign Veget. Terrain Sky Person Rider Car Truck Bus Train M.bike Bike mIoU ADVENT 89.4 33.1 81.0 26.6 26.8 27.2 33.5 24.7 83.9 36.7 78.8 58.7 30.5 84.8 38.5 44.5 1.7 31.6 32.4 45.5 DACS 89.9 39.7 87.9 30.7 39.5 38.5 46.4 52.8 88.0 44.0 88.8 67.2 35.8 84.5 45.7 50.2 0.0 27.3 34.0 52.1 ProDA 87.8 56.0 79.7 46.3 44.8 45.6 53.5 53.5 88.6 45.2 82.1 70.7 39.2 88.8 45.5 59.4 1.0 48.9 56.4 57.5 DAFormer 95.7 70.2 89.4 53.5 48.1 49.6 55.8 59.4 89.9 47.9 92.5 72.2 44.7 92.3 74.5 78.2 65.1 55.9 61.8 68.3 HRDA 96.4 74.4 91.0 61.6 51.5 57.1 63.9 69.3 91.3 48.4 94.2 79.0 52.9 93.9 84.1 85.7 75.9 63.9 67.5 73.8 MIC 97.4 80.1 91.7 61.2 56.9 59.7 66.0 71.3 91.7 51.4 94.3 79.8 56.1 94.6 85.4 90.3 80.4 64.5 68.5 75.9 Ours 97.24 79.12 92.15 61.45 58.5 60.98 69.23 72.58 91.93 53.33 93.99 81.26 60.68 94.84 88.3 90.5 83.24 65.59 66.82 76.93 Table 3. Semantic segmentation performance (IoU) on Synthia\u2192Cityscapes. Method Road S.walk Build. Wall Fence Pole Tr.Light Tr.Sign Veget. Terrain Sky Person Rider Car Truck Bus Train M.bike Bike mIoU ADVENT 85.6 42.2 79.7 8.7 0.4 25.9 5.4 8.1 80.4 \u2013 84.1 57.9 23.8 73.3 \u2013 36.4 \u2013 14.2 33.0 41.2 DACS 80.6 25.1 81.9 21.5 2.9 37.2 22.7 24.0 83.7 \u2013 90.8 67.6 38.3 82.9 \u2013 38.9 \u2013 28.5 47.6 48.3 ProDA 87.8 45.7 84.6 37.1 0.6 44.0 54.6 37.0 88.1 \u2013 84.4 74.2 24.3 88.2 \u2013 51.1 \u2013 40.5 45.6 55.5 DAFormer 84.5 40.7 88.4 41.5 6.5 50.0 55.0 54.6 86.0 \u2013 89.8 73.2 48.2 87.2 \u2013 53.2 \u2013 53.9 61.7 60.9 HRDA 85.2 47.7 88.8 49.5 4.8 57.2 65.7 60.9 85.3 \u2013 92.9 79.4 52.8 89.0 \u2013 64.7 \u2013 63.9 64.9 65.8 MIC 86.6 50.5 89.3 47.9 7.8 59.4 66.7 63.4 87.1 \u2013 94.6 81.0 58.9 90.1 \u2013 61.9 \u2013 67.1 64.3 67.3 Ours 89.06 57.39 90.1 51.37 7.99 60.53 69.03 63.44 86.57 \u2013 94.91 82.33 61.1 89.4 \u2013 57.28 \u2013 67.92 65.24 68.35 Fig. 2. Qualitative comparison with the previous state-of-the-art method MIC on GTA\u2192CS. The proposed method gets better segmentation for classes such as terrain, fence, building, and truck. mitigated through image translation and SAIN. The issue of model robustness in machine learning remains a challenging problem, and while we do not assert that our method is optimal, its simplicity may also yield performance improvements in other domain adaptation tasks. Acknowledgements: This work is supported by STS Project of Fujian Science and Technology Program (No. 
2023T3042). 5." +} \ No newline at end of file diff --git a/intro_8K/test_introduction_short_2404.16302v1.json b/intro_8K/test_introduction_short_2404.16302v1.json new file mode 100644 index 0000000000000000000000000000000000000000..9f5ebcc7fe18b7915d2cf899f8f722367e3c04d5 --- /dev/null +++ b/intro_8K/test_introduction_short_2404.16302v1.json @@ -0,0 +1,19 @@ +{ + "url": "http://arxiv.org/abs/2404.16302v1", + "title": "CFMW: Cross-modality Fusion Mamba for Multispectral Object Detection under Adverse Weather Conditions", + "abstract": "Cross-modality images that integrate visible-infrared spectra cues can\nprovide richer complementary information for object detection. Despite this,\nexisting visible-infrared object detection methods severely degrade in severe\nweather conditions. This failure stems from the pronounced sensitivity of\nvisible images to environmental perturbations, such as rain, haze, and snow,\nwhich frequently cause false negatives and false positives in detection. To\naddress this issue, we introduce a novel and challenging task, termed\nvisible-infrared object detection under adverse weather conditions. To foster\nthis task, we have constructed a new Severe Weather Visible-Infrared Dataset\n(SWVID) with diverse severe weather scenes. Furthermore, we introduce the\nCross-modality Fusion Mamba with Weather-removal (CFMW) to augment detection\naccuracy in adverse weather conditions. Thanks to the proposed Weather Removal\nDiffusion Model (WRDM) and Cross-modality Fusion Mamba (CFM) modules, CFMW is\nable to mine more essential information of pedestrian features in\ncross-modality fusion, thus could transfer to other rarer scenarios with high\nefficiency and has adequate availability on those platforms with low computing\npower. To the best of our knowledge, this is the first study that targeted\nimprovement and integrated both Diffusion and Mamba modules in cross-modality\nobject detection, successfully expanding the practical application of this type\nof model with its higher accuracy and more advanced architecture. Extensive\nexperiments on both well-recognized and self-created datasets conclusively\ndemonstrate that our CFMW achieves state-of-the-art detection performance,\nsurpassing existing benchmarks. The dataset and source code will be made\npublicly available at https://github.com/lhy-zjut/CFMW.", + "authors": "Haoyuan Li, Qi Hu, You Yao, Kailun Yang, Peng Chen", + "published": "2024-04-25", + "updated": "2024-04-25", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.MM", + "cs.RO", + "eess.IV" + ], + "label": "Original Paper", + "paper_cat": "Mamba", + "gt": "In an open and dynamic environment, object detection faces chal- lenging weather conditions such as rain, haze, and snow. The rapid advancement of deep-learning-based object detection methods has significantly improved the ability to identify and classify objects. Benefiting from the advanced feature extraction and fusion strate- gies, cross-modality object detection methods have achieved high accuracy, e.g., CFT [34], GAFF [56], and CFR_3 [54]. However, as shown in Fig. 1, the performance of these methods is often chal- lenged by adverse weather conditions, which can severely impact the visibility and quality of visual data. Although the infrared image \u2217Equal contribution. \u2020Corresponding authors (e-mail: chenpeng@zjut.edu.cn, kailun.yang@kit.edu). Figure 1: The proposed method can achieve high-precision cross-modality object detection under adverse weather condi- tions. 
The top two examples are results from CFT [34], while the bottom two examples are results from CFMW (ours). could provide complementary cues to some extent, it cannot re- pair the appearance distortion or information loss of visual images. Thus, traditional cross-modality object detection methods still face severe performance degradation under adverse weather. Existing methods cannot be directly applied to adverse weather conditions, since the color gamut of visible images is weakened by environmental disturbance and the existing fusion methods are difficult to fully fuse visible and infrared spectra, nor have they made sufficient training under corresponding datasets. To make up the blank in this research area, we construct and release a new dataset, named Severe Weather Visible-Infrared Dataset (SWVID), as well as propose a novel framework named Cross-modality Fusion Mamba with Weather-removal (CFMW). To facilitate research in this area, we propose a new visible- infrared dataset, named SWVID, which is designed to encompass diverse severe weather scenarios by mathematically formalizing the impact of various weather phenomena on images. Specifically, SWVID comprises 20, 000 aligned visible-infrared image pairs, span- ning three weather conditions and two scenes, with each condition and scene evenly distributed. Motivated by the critical research gap highlighted in Fig. 1, where current methods falter in adverse weather, we introduce CFMW for multispectral object detection under adverse weather conditions. Our CFMW leverages a Weather Removal Diffusion Model (WRDM) and Cross-modality Fusion Mamba (CFM) to enhance detection accuracy amid adverse weather arXiv:2404.16302v1 [cs.CV] 25 Apr 2024 conditions while minimizing computational burden. Specifically, WRDM is employed to restore affected visible images before fusion with infrared counterparts, offering plug-and-play compatibility with image fusion networks. Based on learning reversal to increase the order of noise and disrupt the process of data samples, the WRDM model is advantageous to minimize the impact of adverse weather conditions. Additionally, CFM can be integrated into the feature extraction backbone, effectively integrating global contex- tual information from diverse modalities. Recent research shows that Mamba [10] achieves higher inference speed and overall met- rics than the equivalent-scale transformer. To our knowledge, this study represents the first endeavor to employ Diffusion models and Mamba for multispectral object detection. Extensive experiments on both well-established and self-created datasets demonstrate that our CFMW method achieves superior detection performance compared to existing benchmarks. Specifi- cally, we achieved about 17% performance improvement compared with the current state-of-the-art image restoration methods. The proposed method achieves about 8% accuracy improvement while saving 51.2% GPU memory compared with CFT [34], a state-of-the- art cross-modality object detection method. At a glance, we summarize the main contributions as follows: \u2022 We introduce a novel task focusing on visible-infrared object detection under adverse weather conditions and develop a new dataset called the Severe Weather Visible-Infrared Dataset (SWVID), which simulates real-world conditions. 
SWVID comprises 60, 000 paired visible-infrared images and labels, encompassing weather conditions such as rain, haze, and snow; \u2022 We propose a novel approach, Cross-modality Fusion Mamba with Weather-removal (CFMW) for multispectral object de- tection under adverse weather conditions; \u2022 We introduce a novel Weather Removal Diffusion Model (WRDM) and Cross-modality Fusion Mamba (CFM) modules to tackle image de-weathering and visible-infrared object detection tasks simultaneously; \u2022 Extensive experiments demonstrate that this integration achieves the best task migration capacity, resulting in state- of-the-art performance for both tasks.", + "main_content": "In this section, we briefly review previous related works about crossmodality object detection, state space model, and multi-weather image restoration. Cross-modality Object Detection The existing cross-modality object detection methods can be divided into two categories: feature level and pixel level fusion, distinguished through feature fusion methods and timing. Recently, dual stream object detection models based on convolutional neural networks have made great progress in improving recognition performance [4, 34, 37, 54, 55], while pixel level fusion methods have also achieved good performance [5, 44, 59]. Other works employing methods such as GAN to effective integration also have achieved good results [51, 58, 59]. Those works can be integrated into downstream tasks such as object detection. Traditional convolutional neural networks have limited receptive fields that the information is only integrated into a local area when using the convolution operator, where the self-attention operator of the transformer can learn long-range dependencies [43]. Thus, a transformer-based method, named Cross-Modality Fusion Transformer (CFT) [34], was presented and achieved state-of-theart detection performance. Differing from these works, we first introduce Mamba into cross-modality object detection to learn long-range dependencies with gating mechanisms, achieving high accuracy and low computation overhead simultaneously. State Space Model The concept of the State Space Model was initially introduced in the S4 model [11], presenting a distinctive architecture capable of effectively modeling global information, compared with traditional convolutional neural networks and transformers. Based on S4, the S5 model [38] reduces complexity to a linear level, with H3 [31] introducing it into language model tasks. Mamba [10] introduced an input-activate mechanism to enhance the State Space model, achieving higher inference speed and overall metrics compared with equivalent-scale transformers. With the introduction of Vision Mamba [61] and Vmamba [30], the application of the State Space Model has been extended into visual tasks. Currently, existing research does not consider effectively generalizing the State Space Model to cross-modality object detection. Multi-Weather Image Restoration Recently, some attempts have been made to unity multiple recovery tasks in a single deep learning framework, including generating modeling solutions to recover superimposed noise types [9], recovering superimposed noise or weather damage with unknown test time, or especially unfavorable multi-weather image fading [3, 22, 42]. All in One [23] unified a weather restoration method with a multi-encoder and decoder architecture. 
It is worth noting that diffusion-based conditional generative models have shown state-of-the-art performance in various tasks such as class-conditional data synthesis with classifier guidance [7], image super-resolution [14], image deblurring [48]. Denosing diffusion restoration models (DDRM) [21] were proposed for general linear inverse image restoration problems, exploiting pro-trained denoising diffusion models for unsupervised posterior sampling. Generally, diffusion models were so far not considered to be generalized to adverse weather scenes in the cross-modality image fusion field. Unlike existing works, we expand the multiweather restoration to the field of cross-modality fusion. 3 PROPOSED FRAMEWORK 3.1 Overview As shown in Fig. 2, CFMW comprises two main stages. In the multi-weather image restoration stage, we aim to achieve image restoration of three types of adverse weather conditions (rain, snow, and haze) and implement it using a unified framework with only one pre-trained weight. In the cross-modality fusion stage, we aim to integrate unique features of different modalities. Inspired by CFT [34], to show the effectiveness of our proposed CFM fusion model, we extend the framework of YOLOv5 to enable multispectral object detection. We present our carefully designed loss functions and training procedure for WRDM and CFM in the last subsection. 3.2 Weather Removal Diffusion Model (WRDM) Denoising diffusion models [13, 39] are a class of generative models, that learn a Markov chain that gradually transforms a Gaussian Figure 2: Framework of Cross-Modality Fusion Mamba backbone. It has three parts: a Weather Removal Diffusion Model (WRDM), a two-stream feature extraction network (our baseline), and three Cross-Modality Fusion Mamba (CFM) modules. \u00c9 represents element-wise add, \u00cb represents element-wise multiply, and C1 is short of 1-dimension convolutions. noise distribution into the data distribution trained by the models. The original denoising diffusion probabilistic models (DDPMs)[13] diffusion process (data to noise) and generative process (noise to data) are based on a Markov chain process, resulting in a large number of steps and huge time consumption. Thus, denoising diffusion implicit models (DDIMs) [40] were presented to accelerate sampling, providing a more efficient class of iterative implicit probabilistic models. DDIMs define the generative process via a class of non-Markovian diffusion processes that lead to the same training objective as DDPMs but can produce deterministic generative processes, thus speeding up sample generation. In DDIMs, implicit sampling refers to the generation of samples from the latent space of the model in a deterministic manner. Implicit sampling using a noise estimator network can be performed by: \ud835\udc4b\ud835\udc61\u22121 = \u221a\u00af \ud835\udefc\ud835\udc61\u22121 \u00b7 (\ud835\udc4b\ud835\udc61\u2212\u221a1 \u2212\u00af \ud835\udefc\ud835\udc61\u00b7 \ud835\udf16\ud835\udf03(\ud835\udc4b\ud835\udc61,\ud835\udc61) \u221a\u00af \ud835\udefc\ud835\udc61 ) +\u221a1 \u2212 \u00af \ud835\udefc\ud835\udc61\u22121 \u00b7 \ud835\udf16\ud835\udf03(\ud835\udc4b\ud835\udc61,\ud835\udc61). 
(1) where \ud835\udc4b\ud835\udc61and \ud835\udc4b\ud835\udc61\u22121 represent the data \ud835\udc4b0 \u223c\ud835\udc5e(\ud835\udc4b0)) in different diffusion time steps, \ud835\udefc\ud835\udc61= 1 \u2212\ud835\udefd\ud835\udc61, \u00af \ud835\udefc\ud835\udc61= \ud835\udc61 \u00ce \ud835\udc56=1 \ud835\udefc\ud835\udc56, and \ud835\udf16\ud835\udf03(\ud835\udc4b\ud835\udc61,\ud835\udc61) can be optimized as: E\ud835\udc4b0,\ud835\udc61,\ud835\udf16\ud835\udc61\u223c\ud835\udc41(0, \ud835\udc70), [\u2225\ud835\udf16\ud835\udc61\u2212\ud835\udf16\ud835\udf03(\u221a\u00af \ud835\udefc\ud835\udc61\ud835\udc4b0+\u221a1 \u2212\u00af \ud835\udefc\ud835\udc61\ud835\udf16\ud835\udc61,\ud835\udc61\u22252]. Conditional diffusion models have shown state-of-the-art imageconditional data synthesis and editing capabilities [6, 7]. The core idea is to learn a conditional reverse process without changing the diffusion process. Our proposed WRDM is a conditional diffusion model, adding reference images (clear images) in the process of sampling to guide the reconstructed image to be similar to reference images. As shown in Fig. 3, we introduce a new parameter e \ud835\udc4b, which represents the weather-degraded observation. A Markov chain is defined as a diffusion process, and Gaussian noise is gradually added to simulate the gradual degradation of data samples until reaching time point \ud835\udc47. We ground our model hyper-parameters via a U-Net architecture based on WideResNet [52]. For the input images conditional reflection, we connect patch \ud835\udc65\ud835\udc47and e \ud835\udc65, to obtain the six-dimensional input image channel. Conditioning the reverse process on e \ud835\udc4bcan maintain its compatibility with implicit sampling, so we could expand Eq. (1) as: \ud835\udc4b\ud835\udc61\u22121 = \u221a\u00af \ud835\udefc\ud835\udc61\u22121 \u00b7 (\ud835\udc4b\ud835\udc61\u2212\u221a1 \u2212\u00af \ud835\udefc\ud835\udc61\u00b7 \ud835\udf16\ud835\udf03(\ud835\udc4b\ud835\udc61, e \ud835\udc4b,\ud835\udc61) \u221a\u00af \ud835\udefc\ud835\udc61 ) +\u221a1 \u2212 \u00af \ud835\udefc\ud835\udc61\u22121 \u00b7 \ud835\udf16\ud835\udf03(\ud835\udc4b\ud835\udc61, e \ud835\udc4b,\ud835\udc61). (2) The sampling process starts from \ud835\udc4b\ud835\udc47\u223c\ud835\udc41(0, \ud835\udc70), following a deterministic reverse path towards \ud835\udc4b0 with fidelity. See more derivation details in the supplementary material. Our proposed WRDM is a patch-based conditional diffusion model, guiding the reverse sampling process toward smoothness across neighboring patches. During training, we randomly sample the \ud835\udc5d\ud835\udc65\ud835\udc5dpatch location for \ud835\udc43\ud835\udc56within the compute of image dimensions. Under any given time step \ud835\udc47, we reverse-sample the average estimated noise of each pixel in the overlapping patch area according to Fig. 3, which effectively controls the reverse sampling process to ensure that all adjacent patches have higher fidelity. Furthermore, WRDM can be regarded as a plug-in, embedded into other works such as visible-infrared image fusion to remove the influence of multi-weather conditions, which is demonstrated experimentally in Fig. 5. 3.3 Cross-modality Fusion Mamba (CFM) The goal of Cross-modality Fusion Mamba (CFM) is to introduce the advanced state space model (SSM), or Mamba [10], to crossmodality object detection. 
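Returning to the WRDM sampling rule in Eq. (2) above, one reverse step of the conditional implicit (DDIM-style) sampler can be sketched as follows. `eps_model` stands in for the conditional noise-estimator U-Net, and the six-channel concatenation mirrors the conditioning described earlier; this is a simplified illustration, not the released implementation.

```python
# Minimal sketch of one conditional implicit sampling step (cf. Eq. (2)): the
# weather-degraded observation x_cond is concatenated with the current noisy frame
# before noise estimation. `eps_model` is a placeholder for the conditional U-Net.
import torch

@torch.no_grad()
def ddim_step(x_t, x_cond, t, t_prev, alpha_bar, eps_model):
    """alpha_bar: 1-D tensor of cumulative products of alpha over all timesteps."""
    a_t = alpha_bar[t]
    a_prev = alpha_bar[t_prev] if t_prev >= 0 else torch.tensor(1.0)
    eps = eps_model(torch.cat([x_t, x_cond], dim=1), t)  # 6-channel conditional input
    x0_pred = (x_t - torch.sqrt(1.0 - a_t) * eps) / torch.sqrt(a_t)
    return torch.sqrt(a_prev) * x0_pred + torch.sqrt(1.0 - a_prev) * eps

# Dummy usage with a stand-in noise estimator (the real model is a conditional U-Net).
dummy_eps = lambda x, t: torch.zeros_like(x[:, :3])
x = torch.randn(1, 3, 128, 128)
cond = torch.randn(1, 3, 128, 128)
abar = torch.linspace(0.999, 0.01, 1000)
x_prev = ddim_step(x, cond, t=999, t_prev=949, alpha_bar=abar, eps_model=dummy_eps)
```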
Structured state space sequence models (S4) and Mamba are inspired by the continuous system, mapping a 1-D function or sequence \ud835\udc65(\ud835\udc61) \u2208R \u2192\ud835\udc66(\ud835\udc61) through a hidden Figure 3: Schematic diagram of WRDM training and reasoning process. The left side is the framework of WRDM. We use a paired data distribution (e \ud835\udc4b,\ud835\udc4b\ud835\udc61), splitting into (e \ud835\udc4b(\ud835\udc51),\ud835\udc4b(\ud835\udc51) \ud835\udc61 ) for model-training. The right side is the illustration of the patch-based diffusive image restoration pipeline (4 patches for example here). state \u210e(\ud835\udc61) \u2208R\ud835\udc41. This system uses \ud835\udc68\u2208R\ud835\udc41\u00d7\ud835\udc41as the evolution parameter and \ud835\udc69\u2208R\ud835\udc41\u00d71, \ud835\udc6a\u2208R1\u00d7\ud835\udc41as the projection parameters, so that \ud835\udc66(\ud835\udc61) could evolve as follows: \u210e\u2032(\ud835\udc61) = \ud835\udc68\u210e(\ud835\udc61) + \ud835\udc69\ud835\udc65(\ud835\udc61), \ud835\udc66(\ud835\udc61) = \ud835\udc6a\u210e\u2032(\ud835\udc61). (3) Notice that S4 and Mamba are the discrete versions of the continuous system, including a timescale parameter \u0394 to transform the continuous parameters \ud835\udc34, \ud835\udc35to discrete parameters \u00af \ud835\udc68, \u00af \ud835\udc69as follows: \u00af \ud835\udc68= \ud835\udc52\ud835\udc65\ud835\udc5d(\u0394\ud835\udc68), \u00af \ud835\udc69= (\u0394\ud835\udc68)\u22121(\ud835\udc52\ud835\udc65\ud835\udc5d(\u0394\ud835\udc68) \u2212\ud835\udc70) \u00b7 \u0394\ud835\udc69. (4) After that, Eq. (3) could be rewritten as: \u210e\ud835\udc61= \u00af \ud835\udc68\u210e\ud835\udc61\u22121 + \u00af \ud835\udc69\ud835\udc65\ud835\udc61, \ud835\udc66\ud835\udc61= \ud835\udc6a\u210e\ud835\udc61. (5) Finally, the models compute output through a global convolution as follows: \u00af \ud835\udc72= \ud835\udc6a\u00af \ud835\udc69, \ud835\udc6a\u00af \ud835\udc68\u00af \ud835\udc69, ..., \ud835\udc6a\u00af \ud835\udc68\ud835\udc74\u22121 \u00af \ud835\udc69, \ud835\udc66= \ud835\udc65\u2217\u00af \ud835\udc72. (6) where \ud835\udc74is the length of the input sequence x, and \u00af \ud835\udc72\u2208R\ud835\udc40is a structured convolution kernel. Standard Mamba is designed for the 1-D sequence. As shown in Vision Mamba (Vim), 2-D multispectral images \ud835\udc61\u2208R\ud835\udc3b\u00d7\ud835\udc4a\u00d7\ud835\udc36 could be transformed into the flattened 2-D patches \ud835\udc65\ud835\udc5d\u2208R\ud835\udc3d\u00d7(\ud835\udc432\u00b7\ud835\udc36), where (\ud835\udc3b,\ud835\udc4a) represents the size of input images, \ud835\udc36is the channels, and \ud835\udc43is the size of image patches. Similarly, we linearly project the \ud835\udc65\ud835\udc5dto the vector with size \ud835\udc37and add position embeddings \ud835\udc6c\ud835\udc5d\ud835\udc5c\ud835\udc60\u2208R(\ud835\udc3d+1)\u00d7\ud835\udc37as follows: \ud835\udc7b0 = [\ud835\udc61\ud835\udc50\ud835\udc59\ud835\udc60;\ud835\udc611 \ud835\udc5d\ud835\udc7e;\ud835\udc612 \ud835\udc5d\ud835\udc7e; ...;\ud835\udc61\ud835\udc3d \ud835\udc5d\ud835\udc7e] + \ud835\udc6c\ud835\udc5d\ud835\udc5c\ud835\udc60. (7) where \ud835\udc61\ud835\udc57 \ud835\udc43is the \ud835\udc57\u2212\ud835\udc61\u210epath of \ud835\udc95, \ud835\udc7e\u2208R(\ud835\udc432\u00b7\ud835\udc36)\u00d7\ud835\udc37is the learnable projection matrix. Here are more details of the proposed CFM. 
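The discretization in Eq. (4) and the recurrence in Eq. (5) can be made concrete with a toy scan. The sketch below assumes a diagonal A (as in S4D/Mamba-style parameterizations), for which the zero-order-hold formulas reduce to elementwise operations; production implementations replace the Python loop with hardware-aware parallel scans.

```python
# Toy sketch of zero-order-hold discretization (Eq. (4)) and the recurrence (Eq. (5))
# for a single-input single-output SSM with a *diagonal* A. This is a simplification
# for illustration only; real Mamba layers use selective parameters and parallel scans.
import torch

def ssm_scan(x, A, B, C, delta):
    """x: (L,) input sequence; A, B, C: (N,) diagonal/1-D parameters; delta: scalar."""
    A_bar = torch.exp(delta * A)          # A_bar = exp(delta * A)
    B_bar = (A_bar - 1.0) / A * B         # (delta*A)^{-1} (exp(delta*A) - I) * delta*B, diagonal case
    h = torch.zeros_like(A)
    ys = []
    for x_t in x:                         # h_t = A_bar h_{t-1} + B_bar x_t ;  y_t = C h_t
        h = A_bar * h + B_bar * x_t
        ys.append((C * h).sum())
    return torch.stack(ys)

L, N = 16, 8
y = ssm_scan(torch.randn(L), -torch.rand(N) - 0.1, torch.randn(N), torch.randn(N), delta=0.1)
```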
As mentioned in the introduction section, the RGB modality and the Thermal modality show different features under different lighting and weather conditions, which are complementary and redundant. Therefore, we aim to design a block to suppress redundant features and fuse complementary to efficiently harvest essential cross-modal cues for object detection against adverse weather conditions. Motivated by the concept of Cross-Attention [1], we introduce a new crossmodality Mamba block to fuse features from different modalities. As shown in Fig. 2, to encourage feature interaction between RGB and Thermal modalities, we use a Channel Swapping Mamba block (CS) [12], which incorporates information from different channels and enhances cross-modality correlations. Given RGB features \ud835\udc39\ud835\udc45\ud835\udc56 and Thermal features \ud835\udc39\ud835\udc47\ud835\udc56, the first half of channels from \ud835\udc39\ud835\udc45\ud835\udc56will be concatenated with the latter half of \ud835\udc39\ud835\udc47\ud835\udc56and processed through the Mamba block for feature extraction. The obtained features are added to \ud835\udc39\ud835\udc45\ud835\udc56, creating a new feature \ud835\udc39\ud835\udc45\ud835\udc56 \u2032. Meanwhile, the first half of \ud835\udc39\ud835\udc47\ud835\udc56is concatenated with the latter half of \ud835\udc39\ud835\udc45\ud835\udc56, then passes through the Mamba block. The obtained features are added to \ud835\udc39\ud835\udc47\ud835\udc56, creating a new feature \ud835\udc39\ud835\udc47\ud835\udc56 \u2032. Subsequently, we project the features: \ud835\udc39\ud835\udc45\ud835\udc56 \u2032 and \ud835\udc39\ud835\udc47\ud835\udc56 \u2032 into the shared space during the feature fusion process, using the gating mechanism to encourage complementary feature learning while restraining redundant features. As shown in Fig. 2, we first normalize every token sequence in \ud835\udc39\ud835\udc45\ud835\udc56 \u2032 and \ud835\udc39\ud835\udc47\ud835\udc56 \u2032 with Norm block, which helps to improve the convergence speed and performance of the model. Then project the input sequence through linear layers and apply SiLu as the activation function. \u00af \ud835\udc68\ud835\udc90, \u00af \ud835\udc69\ud835\udc90, and \ud835\udc6a\ud835\udc90can be generated by the Parameters Function: \u00af \ud835\udc68\ud835\udc90, \u00af \ud835\udc69\ud835\udc90, \ud835\udc6a\ud835\udc90= \ud835\udc43\ud835\udc4e\ud835\udc5f\ud835\udc4e\ud835\udc5a\ud835\udc52\ud835\udc61\ud835\udc52\ud835\udc5f\ud835\udc60\ud835\udc39\ud835\udc62\ud835\udc5b\ud835\udc50\ud835\udc61\ud835\udc56\ud835\udc5c\ud835\udc5b(\ud835\udc65\ud835\udc5c\u2032), (8) where \ud835\udc65\u2032 \ud835\udc5c= \ud835\udc73\ud835\udc8a\ud835\udc8f\ud835\udc86\ud835\udc82\ud835\udc93(\ud835\udc65\ud835\udc65 \ud835\udc5c\ud835\udc75\ud835\udc90\ud835\udc93\ud835\udc8e(\ud835\udc39\ud835\udc5c\u2032 \ud835\udc56)). After that, we apply State Space Model (SSM): \ud835\udc66\ud835\udc5c= \ud835\udc7a\ud835\udc7a\ud835\udc74( \u00af \ud835\udc68\ud835\udc90, \u00af \ud835\udc69\ud835\udc90, \ud835\udc6a\ud835\udc90)(\ud835\udc65\ud835\udc5c\u2032), (9) Figure 4: Overview of the established SWVID benchmarks. The dataset includes three weather conditions (i.e., Rain, Foggy, and Snow), and two scenarios (i.e., Daylight and Night), providing 60, 000 images in total. 
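The Channel Swapping step described above can be sketched as follows; `mamba_block` is a placeholder for the actual Mamba layer, and whether the two swapped streams share a single block is an assumption of this illustration.

```python
# Schematic sketch of Channel Swapping: half of the channels of each modality are
# exchanged, processed by a sequence model, and added back to the original features.
import torch

def channel_swap(f_rgb, f_th, mamba_block):
    """f_rgb, f_th: (B, C, H, W) features from the RGB and thermal branches."""
    c = f_rgb.shape[1] // 2
    mixed_r = torch.cat([f_rgb[:, :c], f_th[:, c:]], dim=1)   # RGB front half + thermal back half
    mixed_t = torch.cat([f_th[:, :c], f_rgb[:, c:]], dim=1)   # thermal front half + RGB back half
    f_rgb_new = f_rgb + mamba_block(mixed_r)
    f_th_new = f_th + mamba_block(mixed_t)
    return f_rgb_new, f_th_new

# Dummy usage with an identity stand-in for the Mamba block.
r, t = torch.randn(1, 64, 80, 80), torch.randn(1, 64, 80, 80)
r2, t2 = channel_swap(r, t, mamba_block=lambda z: z)
```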
Then we apply the gating operation, followed by residual connection: \ud835\udc67= \ud835\udc73\ud835\udc8a\ud835\udc8f\ud835\udc86\ud835\udc82\ud835\udc93\ud835\udc9b(\ud835\udc39\ud835\udc47\ud835\udc56 \u2032), (10) \ud835\udc66\ud835\udc45\u2032 = \ud835\udc66\ud835\udc45\u2299\ud835\udc7a\ud835\udc8a\ud835\udc73\ud835\udc7c(\ud835\udc67), (11) \ud835\udc66\ud835\udc47\u2032 = \ud835\udc66\ud835\udc47\u2299\ud835\udc7a\ud835\udc8a\ud835\udc73\ud835\udc7c(\ud835\udc67), (12) \ud835\udc39\ud835\udc56= \ud835\udc79\ud835\udc86\ud835\udc94\ud835\udc89\ud835\udc82\ud835\udc91\ud835\udc86(\ud835\udc73\ud835\udc8a\ud835\udc8f\ud835\udc86\ud835\udc82\ud835\udc93\ud835\udc47(\ud835\udc66\ud835\udc45\u2032 + \ud835\udc66\ud835\udc47\u2032) + \ud835\udc39\ud835\udc56\u2032). (13) Finally, we get the fused 2-D feature \ud835\udc39\ud835\udc56successfully. Different from CFT [34], our fusion block improves computational efficiency while inheriting the components of global receptive field and dynamic weight. Comparing the state space model (SSM) in our CFM block with the self-attention mechanism of transformers in CFT [34], both of them play an important role in providing global context adaptively, but self-attention is quadratic to sequence length while SSM is linear to sequence length [61]. To achieve lower memory usage when dealing with long-sequence works, CFM chooses the recomputation method as the same as Mamba. Experiment on the SWVID and LLVIP dataset, whose resolution is 1080 \u00d7 720, shows that CFT requires 21.88GB GPU memory while CFM only requires 10.72GB, saving 11.16GB in the same configuration. 3.4 Loss Functions As a two-stage pre-training model, we carefully design the training loss functions to produce enhanced results with minimum blurriness and the closest details to ground-truth images and to extract the differences between RGB and thermal modalities. For training WRDM, the goal of the loss function in this stage is to maximize the data log-likelihood \ud835\udc59\ud835\udc5c\ud835\udc54\ud835\udc5d\ud835\udf03(\ud835\udc650). Since maximizing this target directly is very challenging, we use variational inference to approximate this target. Variational inference approximates the true posterior distribution \ud835\udc5d\ud835\udf03(\ud835\udc650 : \ud835\udc47) by introducing a variational Table 1: Comparisons of SWVID benchmark with existing visible-infrared datasets. !means available while %denotes the opposite. Dataset Year Resolution Publication Scene Daylight Night Weather KAIST [16] 2015 640 \u00d7 512 CVPR \" \" % FLIR [8] 2018 640 \u00d7 512 \" \" % RoadScene [50] 2020 640 \u00d7 512 AAAI \" \" % LLVIP [18] 2021 1080 \u00d7 720 ICCV \" \" % MSRS [41] 2022 640 \u00d7 480 Info. Fusion \" \" % M3FD [27] 2022 640 \u00d7 512 CVPR \" \" % VTUAV [32] 2022 1920 \u00d7 1080 CVPR \" \" % SWVID 2024 1080 \u00d7 720 Proposed \" \" \" distribution\ud835\udc5e(\ud835\udc651 : \ud835\udc47|\ud835\udc650) and then minimizes the difference between these two distributions. 
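A schematic PyTorch sketch of the gated fusion in Eqs. (8)-(13) follows: normalized and linearly projected token sequences from each branch pass through an SSM, are modulated by a shared SiLU gate z computed from the thermal tokens, summed, projected, and added residually. The SSM is left as a placeholder, and the choice of residual term is an assumption since the equations leave F_i' ambiguous.

```python
# Schematic sketch of the gated cross-modality fusion (cf. Eqs. (8)-(13)).
# The SSM call and the residual choice are simplified placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedFusion(nn.Module):
    def __init__(self, dim, ssm):
        super().__init__()
        self.norm_r, self.norm_t = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.proj_r, self.proj_t = nn.Linear(dim, dim), nn.Linear(dim, dim)
        self.proj_z, self.proj_out = nn.Linear(dim, dim), nn.Linear(dim, dim)
        self.ssm = ssm  # placeholder: maps (B, L, D) -> (B, L, D)

    def forward(self, f_r, f_t):
        """f_r, f_t: (B, L, D) token sequences from the RGB and thermal branches."""
        y_r = self.ssm(F.silu(self.proj_r(self.norm_r(f_r))))
        y_t = self.ssm(F.silu(self.proj_t(self.norm_t(f_t))))
        z = F.silu(self.proj_z(f_t))                     # shared gate, Eqs. (10)-(12)
        return self.proj_out(y_r * z + y_t * z) + f_r    # residual choice is an assumption

fusion = GatedFusion(dim=96, ssm=nn.Identity())
out = fusion(torch.randn(2, 196, 96), torch.randn(2, 196, 96))
```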
Here we define L\ud835\udf03= \u2212\ud835\udc59\ud835\udc5c\ud835\udc54\ud835\udc5d\ud835\udf03(\ud835\udc650), we have: L\ud835\udf03= \ud835\udc47 \u2211\ufe01 \ud835\udc61=1 E\ud835\udc5e[\ud835\udc59\ud835\udc5c\ud835\udc54\ud835\udc5d\ud835\udf03(\ud835\udc650|\ud835\udc65\ud835\udc47)] \u2212 \ud835\udc47\u22121 \u2211\ufe01 \ud835\udc61=1 E\ud835\udc5e(\ud835\udc65\ud835\udc61\u22121|\ud835\udc65\ud835\udc61) [\ud835\udc37\ud835\udc3e\ud835\udc3f(\ud835\udc5e(\ud835\udc65\ud835\udc61\u22121|\ud835\udc65\ud835\udc61,\ud835\udc650))||\ud835\udc5d\ud835\udf03(\ud835\udc65\ud835\udc61\u22121|\ud835\udc65\ud835\udc61)]. (14) where the second term is the expected value of the Kullback-Leibler divergence between \ud835\udc5e(\ud835\udc65\ud835\udc61\u22121|\ud835\udc65\ud835\udc61) and \ud835\udc5d\ud835\udf03(\ud835\udc65\ud835\udc61\u22121|\ud835\udc65\ud835\udc61). In alignment with the prevalent practices in this field, the overall loss function (L\ud835\udc61\ud835\udc5c\ud835\udc61\ud835\udc4e\ud835\udc59) is a sum of the bounding-box regression loss (L\ud835\udc4f\ud835\udc5c\ud835\udc65), the classification loss (L\ud835\udc50\ud835\udc59\ud835\udc60), and the confidence loss (L\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc53= L\ud835\udc5b\ud835\udc5c\ud835\udc5c\ud835\udc4f\ud835\udc57+ L\ud835\udc5c\ud835\udc4f\ud835\udc57). L\ud835\udc61\ud835\udc5c\ud835\udc61\ud835\udc4e\ud835\udc59= L\ud835\udc4f\ud835\udc5c\ud835\udc65+ L\ud835\udc50\ud835\udc59\ud835\udc60+ L\ud835\udc5b\ud835\udc5c\ud835\udc5c\ud835\udc4f\ud835\udc57+ L\ud835\udc5c\ud835\udc4f\ud835\udc57, (15) Details of the loss function for CFMW are elucidated in the supplementary material. 4 EXPERIMENTS 4.1 Established SWVID benchmark Dataset. The color gamut of visible images is weakened by environmental disturbance in dynamic environments, and the existing fusion methods make it difficult to fully fuse visible and infrared spectra because of a deficiency of sufficient training under corresponding datasets. As shown in Fig. 4, we established the benchmark, SWVID, which is constructed from the public datasets (i.e. LLVIP [18], M3FD [27], MSRS [41]) collected in the real scene. It contains a variety of uniformly distributed scenes (daylight, night, rain, foggy, and snow), simulating real environments through the combination of different scenes. Furthermore, we provide the corresponding ground-truth images for each visible image affected by adverse weather conditions for image fusion and image restoration network training. As shown in Table 1, compared with previous visible-infrared datasets, SWVID is the first one that considers weather conditions. 
Specifically, we have constructed the dataset from public visible-infrared datasets as follows: D\ud835\udc5f\ud835\udc4e\ud835\udc56\ud835\udc5b(\ud835\udc3d(\ud835\udc65)) = \ud835\udc3d(\ud835\udc65)(1 \u2212\ud835\udc40\ud835\udc5f(\ud835\udc65)) + \ud835\udc45(\ud835\udc65)\ud835\udc40\ud835\udc5f(\ud835\udc65), (16) D\ud835\udc60\ud835\udc5b\ud835\udc5c\ud835\udc64(\ud835\udc3d(\ud835\udc65)) = \ud835\udc3d(\ud835\udc65)(1 \u2212\ud835\udc40\ud835\udc60(\ud835\udc65)) + \ud835\udc46(\ud835\udc65)\ud835\udc40\ud835\udc60(\ud835\udc65), (17) D\ud835\udc53\ud835\udc5c\ud835\udc54\ud835\udc54\ud835\udc66(\ud835\udc3d(\ud835\udc65)) = \ud835\udc3d(\ud835\udc65)\ud835\udc52\u2212 \u222b\ud835\udc51(\ud835\udc65) 0 \ud835\udefd\ud835\udc51\ud835\udc59+ \u222b\ud835\udc51(\ud835\udc65) 0 \ud835\udc3f\u221e\ud835\udefd\ud835\udc52\u2212\ud835\udefd\ud835\udc59\ud835\udc51\ud835\udc59. (18) Figure 5: Examples of daylight and night scenes for multimodal fusion and object detection visualization, including three kinds of adverse weather conditions (rain, haze, and snow). We embed WRDM into two state-of-the-art visible-infrared fusion methods (i.e., CDDFuse [59] and DeFusion [25]) to mitigate the adverse impact of weather conditions. where \ud835\udc65represents the spatial location in an image, D\ud835\udc5f\ud835\udc4e\ud835\udc56\ud835\udc5b(\ud835\udc3d(\ud835\udc65)), D\ud835\udc60\ud835\udc5b\ud835\udc5c\ud835\udc64(\ud835\udc3d(\ud835\udc65)) and D\ud835\udc53\ud835\udc5c\ud835\udc54\ud835\udc54\ud835\udc66(\ud835\udc3d(\ud835\udc65)) represent a function that maps a clear image to one with rain, snow, and fog particle effects, \ud835\udc3d(\ud835\udc65) represents the clear image with no weather effects, \ud835\udc40\ud835\udc5f(\ud835\udc65) and \ud835\udc40\ud835\udc60(\ud835\udc65) represent rain and snow equivalents, \ud835\udc45(\ud835\udc65) represents a map of the rain masks, \ud835\udc46(\ud835\udc65) represents a chromatic aberration map of the snow particles. Considering scattering effects, \ud835\udc51(\ud835\udc65) represents the distance from the observer at a pixel location \ud835\udc65, \ud835\udefdis an atmospheric attenuation coefficient, and \ud835\udc3f\u221eis the radiance of light. We divide SWVID into the training set (34, 280 images), validation set (17, 140 images), and test set (8, 570 images), each folder contains three parts: pairs of visible-infrared images and corresponding weather-influenced visible images. Notice that weather-influenced visible images contain three kinds of weather conditions, classified as SWVID-snow, SWVID-rain, and SWVID-foggy. During the training period, we use the pairs of images (weather-influenced and ground-truth) to train WRDM in the first stage, then use the pairs of images (ground-truth and infrared) with corresponding labels to train CFM in the second stage. During the validating and testing period, we use the pairs of images (weather-influenced and infrared) directly, verifying and testing the performance of CFMW under real conditions. Also, we use the same way when evaluating other networks in comparative experiments. Evaluation metrics. We adopt the conventional peak signalto-noise ratio (PSNR) [15] and structural similarity (SSIM) [47] for quantitative evaluations between ground truth and restored images. 
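The degradation models in Eqs. (16)-(18) can be sketched directly. The masks, particle layers, and depth map below are random placeholders, and the fog model assumes a homogeneous medium so that the scattering integral in Eq. (18) reduces to the familiar closed form J e^{-\u03b2 d} + L\u221e (1 - e^{-\u03b2 d}).

```python
# Minimal NumPy sketch of the weather-degradation models in Eqs. (16)-(18):
# mask compositing for rain/snow and the atmospheric scattering model for fog.
import numpy as np

def composite_particles(clean, particle_layer, mask):
    """Eqs. (16)/(17): blend a rain or snow layer into the clean image via its mask."""
    return clean * (1.0 - mask) + particle_layer * mask

def add_fog(clean, depth, beta=0.05, airlight=0.9):
    """Eq. (18) with a homogeneous medium: transmission t = exp(-beta * d)."""
    t = np.exp(-beta * depth)[..., None]
    return clean * t + airlight * (1.0 - t)

H, W = 256, 256
clean = np.random.rand(H, W, 3)
rain_layer = np.ones((H, W, 3)) * 0.8
rain_mask = (np.random.rand(H, W, 1) > 0.98).astype(np.float32)
depth = np.random.rand(H, W) * 50.0

rainy = composite_particles(clean, rain_layer, rain_mask)
foggy = add_fog(clean, depth)
```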
PSNR is mainly used to evaluate the degree of distortion after image processing, while SSIM pays more attention to the Table 2: Quantitative comparisons in terms of PSNR and SSIM (higher is better) with state-of-the-art image deraining, dehazing, and desnowing methods. For the sake of fairness, we uniformly use the visible light part of the established SWVID dataset as the evaluation dataset. Image-Deraining Task SWVID-rain (RGB) Image-Dehazing Task SWVID-foggy (RGB) Image-Desnowing Task SWVID-snow (RGB) PSNR\u2191 SSIM\u2191 PSNR\u2191 SSIM\u2191 PSNR\u2191 SSIM\u2191 pix2pix [17] 19.95 0.7270 pix2pix [17] 25.12 0.8359 SPANet [46] 29.92 0.8260 CycleGAN [60] 17.65 0.6452 DuRN [29] 31.44 0.9256 DDMSNet [57] 34.87 0.9462 PCNet [19] 27.13 0.8546 AttentiveGAN [33] 32.56 0.9331 DesnowNet [2] 32.15 0.9416 MPRNet [53] 29.14 0.9022 IDT [49] 34.14 0.9412 RESCAN [24] 30.57 0.9003 de-rain (ours) 36.78 0.9464 de-haze (ours) 36.53 0.9795 de-snow (ours) 42.23 0.9821 All-in-One [23] 25.13 0.8856 All-in-One [23] 31.24 0.9122 All-in-One [23] 28.12 0.8815 TransWeather [42] 29.77 0.9107 TransWeather [42] 33.85 0.9388 TransWeather [42] 35.15 0.9417 WRDM (ours) 35.02 0.9322 WRDM (ours) 35.88 0.9602 WRDM (ours) 40.98 0.9578 Table 3: Comparison of performances with other networks on the SWVID-snow dataset. Model Data Backbone mAP50\u2191 mAP75\u2191 mAP\u2191 mono-modaltiy networks Faster R-CNN [36] RGB ResNet50 82.3 34.6 30.7 Faster R-CNN [36] Thermal ResNet50 90.6 63.7 55.4 SDD [28] RGB VGG16 73.6 37.8 38.6 SDD [28] Thermal VGG16 88.6 55.6 50.2 YOLOv3 [35] RGB Darknet53 78.3 29.4 24.4 YOLOv3 [35] Thermal Darknet53 84.6 50.7 47.4 YOLOv5 [20] RGB CSPD53 80.7 38.2 30.7 YOLOv5 [20] Thermal CSPD53 90.5 65.2 57.6 YOLOv7 [45] RGB CSPD53 85.3 41.8 34.9 YOLOv7 [45] Thermal CSPD53 91.8 67.6 60.4 multi-modality networks Baseline RGB+T CSPD53 92.2 68.4 59.3 CFT [34] RGB+T CFB 92.4 71.1 58.4 CFMW (ours) RGB+T CFM 97.2 76.9 63.4 structural information and visual quality of the images. \ud835\udc43\ud835\udc46\ud835\udc41\ud835\udc45= 10 \u00d7 \ud835\udc59\ud835\udc54( (2\ud835\udc5b\u22121)2 \ud835\udc40\ud835\udc46\ud835\udc38 ), (19) \ud835\udc46\ud835\udc46\ud835\udc3c\ud835\udc40= [\ud835\udc59(\ud835\udc65,\ud835\udc66)]\ud835\udefc\u00b7 [\ud835\udc50(\ud835\udc65,\ud835\udc66)]\ud835\udefd\u00b7 [\ud835\udc60(\ud835\udc65,\ud835\udc66)]\ud835\udefe, (20) As for object detection quantitative experiments, we introduced three object detection metrics: mean Average Precision (mAP, mAP50, and mAP75) to evaluate the accuracy of the object detection models. For more calculation details, please refer to the supplementary material. 4.2 Implantation Details As for WRDM, we performed experiments both in specific-weather conditions and multi-weather conditions image restoration settings. We denote our specific-weather restoration models as de-rain, desnow, and de-foggy to verify the general WRDM model under specific weather conditions. We trained the 128 \u00d7 128 patch size version of all models. We use NVIDIA RTX 4090 cards to perform all the experiments. We use Adam as an optimizer while training all the models we compare. During the training process, we trained WRDM for 3 \u00d7 106 iterations. As for CFM, we did not perform Table 4: Comparison of performances with other networks on the LLVIP [18] dataset. 
Model Data Backbone mAP50\u2191 mAP75\u2191 mAP\u2191 mono-modaltiy networks Faster R-CNN [36] RGB ResNet50 91.4 48.0 49.2 Faster R-CNN [36] Thermal ResNet50 96.1 68.5 61.1 SDD [28] RGB VGG16 82.6 31.8 39.8 SDD [28] Thermal VGG16 90.2 57.9 53.5 YOLOv3 [35] RGB Darknet53 85.9 37.9 43.3 YOLOv3 [35] Thermal Darknet53 89.7 53.4 52.8 YOLOv5 [20] RGB CSPD53 90.8 51.9 50.0 YOLOv5 [20] Thermal CSPD53 94.6 72.2 61.9 YOLOv7 [45] RGB CSPD53 91.4 58.4 53.6 YOLOv7 [45] Thermal CSPD53 94.6 70.6 62.4 multi-modality networks Baseline RGB+T CSPD53 95.2 71.4 62.3 CFT [34] RGB+T CFB 97.5 72.9 63.6 CFMW (ours) RGB+T CFM 98.8 77.2 64.8 task-specific parameter tuning or modifications to the network architecture. For better performance, we select the YOLOv5 model\u2019s public weight initialization (yolov5s.pt), which is pre-trained on the COCO dataset [26]. 4.3 Comparative Experiments In this section, we make comparisons with several state-of-theart methods in image deweathering and cross-modality object detection separately. In Table 2, we perform comparisons with methods for image desnowing (i.e. SPANet [46], DDMSNet [57], DesnowNet [2], RESCAN [24]), deraining (i.e. pix2pix [17], CycleGAN [60], PCNet [19], MPRNet [53]), and dehazing (i.e. pix2pix [17], DuRN [29], Attentive-GAN [33], IDT [49]), as well as two state-ofthe-art multi-weather image restoration methods: All in One [23] and TransWeather [42]. In Table 3 and Table 4, to prove the consistent improvements of CFMW, we compare with several base single-modality object detection methods (i.e., Faster R-CNN [36], SDD [28], YOLOv3 [35], YOLOv5 [20], YOLOv7 [45]) and several multi-modality object detection methods (i.e., our baseline, standard two-stream YOLOv5 object detection network, and CFT [34]). Table 5: Ablation experiments on SWVID-snow dataset. To present the general effectiveness of our CFMW, we further combine the WRDM and CFM module with other classical detectors (i.e., YOLOv7, YOLOv5, Faster R-CNN). Modality Method Detector mAP50\u2191 mAP75\u2191 mAP\u2191 RGB CSPDarknet53 YOLOv7 [45] 85.3 41.8 34.9 Thermal CSPDarknet53 95.8 72.6 60.4 RGB+T +two stream 95.4 68.1 60.4 +CFM 95.5 68.6 63.3 +WRDM 96.5 70.9 63.1 +CFM&WRDM 96.6 75.1 64.1 RGB CSPDarknet53 YOLOv5 [20] 80.7 38.2 30.7 Thermal CSPDarknet53 90.5 65.2 57.6 RGB+T +two stream 92.2 68.4 59.3 +CFM 96.5 70.6 63.3 +WRDM 96.4 71.2 62.8 +CFM&WRDM 97.2 76.9 63.4 RGB Resnet53 Faster R-CNN [36] 82.3 34.6 30.7 Thermal Resnet53 90.6 63.7 55.4 RGB+T +two stream 93.7 62.8 55.4 +CFM 96.7 69.5 61.9 +WRDM 96.2 69.4 61.6 +CFM&WRDM 96.2 69.7 62.2 Comparison of image deweathering. As shown in Table 2, we use the single RGB modality of the SWVID dataset (including rain, foggy, and haze weather conditions) as a comparative dataset to measure the performance of different models under different weather conditions. The top of the table contains results from specific-weather image restoration, where we show \ud835\udc46= 50 sampling time steps. For image-deraining, image-dehazing, and image-desnowing tasks, the proposed solution consistently achieves the best results (36.78/0.9464 on SWVID-rain, 36.53/0.9795 on SWVID-foggy, and 42.23/0.9821 on SWVID-snow). Especially, in the image de-rain task, the performance improvement is about 24% compared with the current state-of-the-art method (MPRNet [53]). 
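For reference, the evaluation metrics in Eqs. (19)-(20) can be computed as below, assuming 8-bit images; PSNR follows Eq. (19) directly, while SSIM is delegated to scikit-image's reference implementation rather than re-deriving Eq. (20).

```python
# Sketch of the evaluation metrics: PSNR from the MSE (Eq. (19)) and SSIM via
# scikit-image. Assumes 8-bit RGB images; inputs here are random placeholders.
import numpy as np
from skimage.metrics import structural_similarity

def psnr(gt: np.ndarray, restored: np.ndarray, bit_depth: int = 8) -> float:
    mse = np.mean((gt.astype(np.float64) - restored.astype(np.float64)) ** 2)
    max_val = (2 ** bit_depth) - 1
    return 10.0 * np.log10(max_val ** 2 / mse)

gt = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
out = np.clip(gt.astype(np.int32) + np.random.randint(-5, 6, gt.shape), 0, 255).astype(np.uint8)
print(psnr(gt, out), structural_similarity(gt, out, channel_axis=-1))
```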
For multi-weather image restoration, although the results are not as good as the specific-weather model due to the complexity of the task, the proposed method also reaches the best results ( 35.02/0.9322 on SWVID-rain, 35.88/0.9602 on SWVID-foggy, and 40.98/0.9578 on SWVID-snow) compared with All in One [23] and TransWeather [42], with about 17% performance improvement compared against TransWeather [42] and about 25% performance improvement compared against All in One [23]. Comparison of cross-modality object detection. As shown in Table 3 and Table 4, we use LLVIP [18] and SWVID-snow as the comparative datasets. Compared with SWVID-rain and SWVIDfoggy, the size of pedestrians in these two datasets is more in line with the general object detection standards. There are more complex cases of pedestrian overlap in these two datasets, which can better measure the accuracy of the object detection networks. The top of the table contains results from single-modality networks, each network uses the RGB modality or the thermal modality for detection. The bottom of the table shows results from multi-modality networks, including our baseline, CFT [34] and the proposed CFMW. According to Table 3, it can be observed that with the integration of WRDM and CFM, CFMW achieves an overwhelming performance improvement on each metric (mAP50:2.3\u2191, mAP75:4.3\u2191, mAP:3.0\u2191) on SWVID-snow compared with the best existing network on each metric, which shows that it has preferable adaptability under adverse weather conditions. Also, CFMW can achieve a more accurate detection (mAP50:98.8, mAP75:77.2, mAP:64.8) with lower computational consumption, as shown in Table 4, which demonstrates the commonality of CFWM. 4.4 Ablation Study In this section, we analyze the effectiveness of CFMW. We first validate the importance of WRDM and CFM modules in performance improvement in a parametric form through detailed ablation experiments, then visually show the role of WRDM in cross-modality fusion and object detection tasks to highlight its versatility as a weather-restoration plug-in. Ablation experiments To understand the impact of each component in our method, we have performed a comprehensive set of ablation experiments. As shown in Table 5, we further combine the CFM and WRDM with other classical detectors, i.e. YOLOv7 [45], YOLOv5 [20] and Faster R-CNN [36] to present the general effectiveness of our CFMW. The proposed CFMW improves the performance of cross-modality object detection using either a one-stage or twostage detector under complex weather conditions. Specifically, CFM achieves an 11.3% gain on mAP50, an 81.6% gain on mAP75, and a 78.3% gain on mAP (on YOLOv5 [20] ). After adding WRDM, we achieved a 12.1% gain on mAP50, an 88.2% gain on mAP75, and an 80.4% gain on mAP. CFM and WRDM provide non-negligible gains for all the considered evaluation metrics. Visual interpretation To verify the applicability of WRDM as a plug-in intuitively, we visually show the application scenario of WRDM in the field of visible-infrared image fusion and object detection. As shown in Fig. 5, we perform comparisons with methods of visible-infrared image fusion methods (i.e. CDDFuse [59], DeFusion [25]). It can be seen from the figure that compared with the original images, the image fusion effects of the two methods before and after using WRDM are quite different, more people at the far end of images could be detected successfully after deweathering. 
In cross-modality object detection, rich image details can provide great assistance for feature extraction and fusion, with direct fusion without removing the weather influence causing the loss and interference of image details. 5 CONCLUSION In this work, we introduce a novel approach to visible-infrared object detection under severe weather conditions, namely the Severe Weather Visible-Infrared Dataset (SWVID). We have provided a valuable resource for training and evaluating models in realistic and challenging environments. The Cross-modality Fusion Mamba with Weather-removal (CFMW) model, has proven to be highly effective in enhancing detection accuracy while managing computational efficiency. Our extensive experiments have shown that CFMW outperforms existing benchmarks, achieving state-of-the-art on both tasks: multi-weather image restoration and cross-modality object detection. This work opens up new possibilities for cross-modality object detection in adverse weather." +} \ No newline at end of file diff --git a/intro_8K/test_introduction_short_2404.16306v1.json b/intro_8K/test_introduction_short_2404.16306v1.json new file mode 100644 index 0000000000000000000000000000000000000000..5a165f6e83da5108fa95a2eb576cd71717917b60 --- /dev/null +++ b/intro_8K/test_introduction_short_2404.16306v1.json @@ -0,0 +1,16 @@ +{ + "url": "http://arxiv.org/abs/2404.16306v1", + "title": "TI2V-Zero: Zero-Shot Image Conditioning for Text-to-Video Diffusion Models", + "abstract": "Text-conditioned image-to-video generation (TI2V) aims to synthesize a\nrealistic video starting from a given image (e.g., a woman's photo) and a text\ndescription (e.g., \"a woman is drinking water.\"). Existing TI2V frameworks\noften require costly training on video-text datasets and specific model designs\nfor text and image conditioning. In this paper, we propose TI2V-Zero, a\nzero-shot, tuning-free method that empowers a pretrained text-to-video (T2V)\ndiffusion model to be conditioned on a provided image, enabling TI2V generation\nwithout any optimization, fine-tuning, or introducing external modules. Our\napproach leverages a pretrained T2V diffusion foundation model as the\ngenerative prior. To guide video generation with the additional image input, we\npropose a \"repeat-and-slide\" strategy that modulates the reverse denoising\nprocess, allowing the frozen diffusion model to synthesize a video\nframe-by-frame starting from the provided image. To ensure temporal continuity,\nwe employ a DDPM inversion strategy to initialize Gaussian noise for each newly\nsynthesized frame and a resampling technique to help preserve visual details.\nWe conduct comprehensive experiments on both domain-specific and open-domain\ndatasets, where TI2V-Zero consistently outperforms a recent open-domain TI2V\nmodel. Furthermore, we show that TI2V-Zero can seamlessly extend to other tasks\nsuch as video infilling and prediction when provided with more images. Its\nautoregressive design also supports long video generation.", + "authors": "Haomiao Ni, Bernhard Egger, Suhas Lohit, Anoop Cherian, Ye Wang, Toshiaki Koike-Akino, Sharon X. Huang, Tim K. Marks", + "published": "2024-04-25", + "updated": "2024-04-25", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Image-to-video (I2V) generation is an appealing topic with various applications, including artistic creation, entertain- ment, and data augmentation for machine learning [39]. 
Given a single image x0 and a text prompt y, text- conditioned image-to-video (TI2V) generation aims to syn- *Work done during an internship at MERL. \u201cA man with the expression of slight happiness on his face.\u201d \u201cA person is drumming.\u201d \u201cA serene mountain cabin covered in a fresh blanket of snow.\u201d Figure 1. Examples of generated video frames using our proposed TI2V-Zero. The given first image x0 is highlighted with the red box, and the text condition y is shown under each row of the video. The remaining columns show the 6th, 11th, and 16th frames of the generated output videos. Each generated video has 16 frames with a resolution of 256 \u00d7 256. thesize M new frames to yield a realistic video, \u02c6 x = \u27e8x0, \u02c6 x1, . . . , \u02c6 xM\u27e9, starting from the given frame x0 and sat- isfying the text description y. Current TI2V generation methods [59, 63, 70] typically rely on computationally- heavy training on video-text datasets and specific archi- tecture designs to enable text and image conditioning. Some [12, 25] are constrained to specific domains due to the lack of training with large-scale open-domain datasets. Other approaches, such as [14, 67], utilize pretrained foun- dation models to reduce training costs, but they still need to train additional modules using video data. In this paper, we propose TI2V-Zero, which achieves zero-shot TI2V generation using only an open-domain pre- trained text-to-video (T2V) latent diffusion model [60]. Here \u201czero-shot\u201d means that when using the diffusion arXiv:2404.16306v1 [cs.CV] 25 Apr 2024 model (DM) that was trained only for text conditioning, our framework enables image conditioning without any op- timization, fine-tuning, or introduction of additional mod- ules. Specifically, we guide the generation process by incor- porating the provided image x0 into the output latent code at each reverse denoising step. To ensure that the tempo- ral attention layers of the pretrained DM focus on informa- tion from the given image, we propose a \u201crepeat-and-slide\u201d strategy to synthesize the video in a frame-by-frame man- ner, rather than directly generating the entire video volume. Notably, TI2V-Zero is not trained for the specific domain of the provided image, thus allowing the model to generalize to any image during inference. Additionally, its autoregres- sive generation makes the synthesis of long videos possible. While the standard denoising sampling process start- ing with randomly initialized Gaussian noise can produce matching semantics, it often results in temporally inconsis- tent videos. Therefore, we introduce an inversion strategy based on the DDPM [20] forward process, to provide a more suitable initial noise for generating each new frame. We also apply a resampling technique [33] in the video DM to help preserve the generated visual details. Our approach en- sures that the network maintains temporal consistency, gen- erating visually convincing videos conditioned on the given starting image (see Fig. 1). We conduct extensive experiments on MUG [1], UCF- 101 [56], and a new open-domain dataset. In these experi- ments, TI2V-Zero consistently performs well, outperform- ing a state-of-the-art model [67] that was based on a video diffusion foundation model [8] and was specifically trained to enable open-domain TI2V generation.", + "main_content": "2.1. Conditional Image-to-Video Generation Conditional video generation aims to synthesize videos guided by user-provided signals. 
It can be classified according to which type(s) of conditions are given, such as textto-video (T2V) generation [5, 16, 21, 23, 31, 65], video-tovideo (V2V) generation [7, 38, 40, 45, 61, 64], and imageto-video (I2V) generation [4, 10, 25, 34, 39, 69]. Here we discuss previous text-conditioned image-to-video (TI2V) generation methods [12, 14, 22, 44, 63, 70]. Hu et al. [25] introduced MAGE, a TI2V generator that integrates a motion anchor structure to store appearance-motion-aligned representations through three-dimensional axial transformers. Yin et al. [70] proposed DragNUWA, a diffusionbased model capable of generating videos controlled by text, image, and trajectory information with three modules including a trajectory sampler, a multi-scale fusion, and an adaptive training strategy. However, these TI2V frameworks require computationally expensive training on videotext datasets and a particular model design to support textand-image-conditioned training. In contrast, our proposed TI2V-Zero leverages a pretrained T2V diffusion model to achieve zero-shot TI2V generation without additional optimization or fine-tuning, making it suitable for a wide range of applications. 2.2. Adaptation of Diffusion Foundation Models Due to the recent successful application of diffusion models (DM) [20, 42, 47, 54, 55] to both image and video generation, visual diffusion foundation models have gained prominence. These include text-to-image (T2I) models such as Imagen [50] and Stable Diffusion [47], as well as textto-video (T2V) models such as ModelScopeT2V [60] and VideoCrafter1 [8]. These models are trained with largescale open-domain datasets, often including LAION-400M [52] and WebVid-10M [2]. They have shown immense potential for adapting their acquired knowledge base to address a wide range of downstream tasks, thereby reducing or eliminating the need for extensive labeled data. For example, previous works have explored the application of large T2I models to personalized image generation [13, 49], image editing [17, 33, 35\u201337], image segmentation [3, 68], video editing [45, 62], and video generation [14, 27, 53, 66]. In contrast to T2I models, there are fewer works on the adaptation of large-scale T2V models. Xing et al. [67] proposed DynamicCrafter for open-domain TI2V generation by adapting a T2V foundation model [8]. To control the generative process, they first employed a learnable image encoding network to project the given image into a textaligned image embedding space. Subsequently, they utilized dual cross-attention layers to fuse text and image information and also concatenated the image with the initial noise to provide the video DM with more precise image details. In contrast, in this paper we explore how to inject the provided image to guide the DM sampling process based solely on the pretrained T2V model itself, with no additional training for the new TI2V task. 3. Methodology Given one starting image x0 and text y, let x = \u27e8x0, x1, . . . , xM\u27e9represent a real video corresponding to text y. The objective of text-conditioned image-tovideo (TI2V) generation is to synthesize a video \u02c6 x = \u27e8x0, \u02c6 x1, . . . , \u02c6 xM\u27e9, such that the conditional distribution of \u02c6 x given x0 and y is identical to the conditional distribution of x given x0 and y, i.e., p(\u02c6 x|x0, y) = p(x|x0, y). Our proposed TI2V-Zero can be built on a pretrained T2V diffusion model with a 3D-UNet-based denoising network. 
Here we choose ModelScopeT2V [60] as backbone due to its promising open-domain T2V generation ability. Below, we first introduce preliminaries about diffusion models, then introduce the architecture of the pretrained T2V model, and finally present the details of our TI2V-Zero.
Figure 2. Illustration of the process of applying TI2V-Zero to generate the new frame x̂_{i+1}, given the starting image x_0 and text y. TI2V-Zero is built upon a frozen pretrained T2V diffusion model, including frame encoder E, frame decoder D, and the denoising U-Net ϵ_θ. At the beginning of generation (i = 0), we encode x_0 as z_0 and repeat it K times to form the queue s_0. We then apply DDPM-based inversion to s_0 to produce the initial Gaussian noise ẑ_T. Subsequently, in each reverse denoising step using U-Net ϵ_θ, we keep replacing the first K frames of ẑ_t with the noisy latent code s_t derived from s_0. Resampling is also applied within each step to improve motion coherence. We finally decode the final frame of the clean latent code ẑ_0 as the new synthesized frame x̂_{i+1}. To compute the new s_0 for the next iteration of generation (i > 0), we perform a sliding operation by dequeuing s_0^0 and enqueuing ẑ_0^K within s_0.
3.1. Preliminaries: Diffusion Models
Diffusion Models (DM) [20, 54, 55] are probabilistic models designed to learn a data distribution. Here we introduce the fundamental concepts of Denoising Diffusion Probabilistic Models (DDPM). Given a sample from the data distribution z_0 ∼ q(z_0), the forward diffusion process of a DM produces a Markov chain z_1, . . . , z_T by iteratively adding Gaussian noise to z_0 according to a variance schedule β_1, . . . , β_T, that is:
q(z_t | z_{t−1}) = N(z_t; √(1 − β_t) z_{t−1}, β_t I),  (1)
where variances β_t are constant. When the β_t are small, the posterior q(z_{t−1} | z_t) can be well approximated by a diagonal Gaussian [41, 54]. Furthermore, if the length of the chain, denoted by T, is sufficiently large, z_T can be well approximated by a standard Gaussian distribution N(0, I). These suggest that the true posterior q(z_{t−1} | z_t) can be estimated by p_θ(z_{t−1} | z_t) defined as:
p_θ(z_{t−1} | z_t) = N(z_{t−1}; μ_θ(z_t), σ_t² I),  (2)
where variances σ_t are also constants. 
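To make the forward process in Eq. (1) concrete, the following is a minimal NumPy sketch; the linear β schedule and the toy tensor shape are illustrative assumptions, since no schedule is specified at this point in the text.

```python
import numpy as np

def forward_diffusion_chain(z0, T=1000, beta_start=1e-4, beta_end=0.02, seed=0):
    """Iterate Eq. (1): q(z_t | z_{t-1}) = N(sqrt(1 - beta_t) z_{t-1}, beta_t I).

    Assumes a linear variance schedule (an illustrative choice, not taken from
    the paper). Returns the full Markov chain [z_0, z_1, ..., z_T].
    """
    rng = np.random.default_rng(seed)
    betas = np.linspace(beta_start, beta_end, T)
    chain = [z0]
    z = z0
    for beta_t in betas:
        z = np.sqrt(1.0 - beta_t) * z + np.sqrt(beta_t) * rng.standard_normal(z.shape)
        chain.append(z)
    return chain

if __name__ == "__main__":
    z0 = np.ones((4, 64))  # toy "clean" latent
    zT = forward_diffusion_chain(z0)[-1]
    # For a long enough chain, z_T is approximately N(0, I).
    print(f"mean(z_T)={zT.mean():.3f}, std(z_T)={zT.std():.3f}")
```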
The reverse denoising process in the DM (also termed sampling) then generates samples z_0 ∼ p_θ(z_0) by starting with Gaussian noise z_T ∼ N(0, I) and gradually reducing noise in a Markov chain z_{T−1}, z_{T−2}, . . . , z_0 using a learned p_θ(z_{t−1} | z_t). To learn p_θ(z_{t−1} | z_t), Gaussian noise ϵ is first added to z_0 to generate samples z_t. Utilizing the independence property of the noise added at each forward step in Eq. (1), we can calculate the total noise variance as ᾱ_t = ∏_{i=0}^{t} (1 − β_i) and transform z_0 to z_t in a single step:
q(z_t | z_0) = N(z_t; √ᾱ_t z_0, (1 − ᾱ_t) I).  (3)
Then a model ϵ_θ is trained to predict ϵ using the following mean-squared error loss:
L = E_{t∼U(1,T), z_0∼q(z_0), ϵ∼N(0,I)} [ ‖ϵ − ϵ_θ(z_t, t)‖² ],  (4)
where diffusion step t is uniformly sampled from {1, . . . , T}. Then μ_θ(z_t) in Eq. (2) can be derived from ϵ_θ(z_t, t) to model p_θ(z_{t−1} | z_t) [20]. The denoising model ϵ_θ is implemented using a time-conditioned U-Net [48] with residual blocks [15] and self-attention layers [58]. Diffusion step t is specified to ϵ_θ by the sinusoidal position embedding [58]. Conditional generation that samples z_0 ∼ p_θ(z_0 | y) can be achieved by learning a y-conditioned model ϵ_θ(z_t, t, y) [41, 47] with classifier-free guidance [19]. During training, the condition y in ϵ_θ(z_t, t, y) is replaced by a null label ∅ with a fixed probability. When sampling, the output is generated as follows:
ϵ̂_θ(z_t, t, y) = ϵ_θ(z_t, t, ∅) + g · (ϵ_θ(z_t, t, y) − ϵ_θ(z_t, t, ∅)),  (5)
where g is the guidance scale. 
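As a concrete illustration of the guided prediction in Eq. (5), a minimal PyTorch sketch is given below; the call signature of eps_model is a placeholder standing in for the denoising U-Net ϵ_θ and is not the actual ModelScopeT2V API.

```python
import torch

@torch.no_grad()
def guided_eps(eps_model, z_t, t, text_emb, null_emb, g=9.0):
    """Classifier-free guidance, Eq. (5):
    eps_hat = eps(z_t, t, null) + g * (eps(z_t, t, y) - eps(z_t, t, null)).

    `eps_model(z_t, t, cond)` is an assumed interface for illustration only;
    g = 9.0 matches the guidance scale later reported in Sec. 4.2.
    """
    eps_uncond = eps_model(z_t, t, null_emb)
    eps_cond = eps_model(z_t, t, text_emb)
    return eps_uncond + g * (eps_cond - eps_uncond)
```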
3.2. Architecture of Pretrained T2V Model
TI2V-Zero can be built upon a pretrained T2V diffusion model with a 3D-UNet-based denoising network. Here we choose ModelScopeT2V [60] as the pretrained model (denoted M). We now describe this T2V model in detail.
Structure Overview. Given a text prompt y, the T2V model M synthesizes a video x̂ = ⟨x̂_0, x̂_1, . . . , x̂_K⟩ with a pre-defined length of (K + 1) frames using a latent video diffusion model. Similar to Latent Diffusion Models (LDM) [47], M incorporates a frame auto-encoder [11, 28] for the conversion of data between pixel space X and latent space Z through its encoder E and decoder D. Given the real video x = ⟨x_0, x_1, . . . , x_K⟩, M first utilizes the frame encoder E to encode the video x as z = ⟨z_0, z_1, . . . , z_K⟩. Here the sizes of a pixel frame x and a latent frame z are H_x × W_x × 3 and H_z × W_z × C_z, respectively. To be consistent with the notation used for the DM, we denote the clean video latent z = z_0 = ⟨z_0^0, z_0^1, . . . , z_0^K⟩. M then learns a DM on the latent space Z through a 3D denoising U-Net ϵ_θ [9]. Let z_t = ⟨z_t^0, z_t^1, . . . , z_t^K⟩ represent the latent sequence that results from adding noise over t steps to the original latent sequence z_0. When training, the forward diffusion process of a DM transforms the initial latent sequence z_0 into z_T by iteratively adding Gaussian noise ϵ for T steps. During inference, the denoising U-Net ϵ_θ predicts the added noise at each step, enabling the generation of the clean latent sequence ẑ_0 = ⟨ẑ_0^0, ẑ_0^1, . . . , ẑ_0^K⟩ starting from randomly sampled Gaussian noise z_T ∼ N(0, I).
Text Conditioning Mechanism. M employs a cross-attention mechanism [47] to incorporate text information into the generative process as guidance. Specifically, M uses a pretrained CLIP model [46] to encode the prompt y as the text embedding e. The embedding e is later used as the key and value in the multi-head attention layer within the spatial attention blocks, thus enabling the integration of text features with the intermediate U-Net features in ϵ_θ.
Denoising U-Net. The denoising U-Net ϵ_θ includes four key building blocks: the initial block, the downsampling block, the spatio-temporal block, and the upsampling block. The initial block transfers the input into the embedding space, while the downsampling and upsampling blocks are responsible for spatially downsampling and upsampling the feature maps. The spatio-temporal block is designed to capture spatial and temporal dependencies in the latent space, which comprises 2D spatial convolution, 1D temporal convolution, 2D spatial attention, and 1D temporal attention.
Figure 3. Illustration of the motivation behind our framework. We explore the application of a replacing-based baseline approach (rows 2–4, labeled “Replacing”) and our TI2V-Zero (rows 5–6, labeled “TI2V-Zero”) in various video generation tasks, with the text input “A person is riding horse.” The given real frames for each task are highlighted by red boxes and the text input is shown under the block. The replacing-based approach is only effective at predicting a single frame when all the other frames in the video are provided, while TI2V-Zero generates temporally coherent videos for both the TI2V and video infilling tasks.
Algorithm 1 Generation using our TI2V-Zero approach.
Input: The starting frame x_0; the text prompt y; the pretrained T2V model M for generating (K + 1)-frame videos, including frame encoder E and frame decoder D, and the DM denoising network ϵ_θ; the iteration number U for resampling; the parameter M to control the length of the output video.
Output: A synthesized video x̂ with (M + 1) frames.
1: z_0 ← E(x_0)  // Encode x_0
2: s_0 ← ⟨z_0, z_0, · · · , z_0⟩  // Repeat z_0 K times
3: x̂ ← ⟨x_0⟩
4: for i = 1, 2, · · · , M do  // Generate one new frame x̂_i
5:   s_T ∼ N(√ᾱ_T s_0, (1 − ᾱ_T) I)  // DDPM inversion
6:   ẑ_T^K ∼ N(√ᾱ_T s_0^{K−1}, (1 − ᾱ_T) I)
7:   ẑ_T ← s_T ∪ ẑ_T^K  // Initialize ẑ_T
8:   for t = T − 1, · · · , 2, 1 do
9:     s_t ∼ N(√ᾱ_t s_0, (1 − ᾱ_t) I)
10:     for u = 1, 2, · · · , U do
11:       ⟨ẑ_t^0, ẑ_t^1, · · · , ẑ_t^{K−1}⟩ ← s_t  // Replace
12:       ẑ_{t−1} ∼ N(μ_θ(ẑ_t, y), σ_t² I)
13:       if u < U and t > 1 then
14:         ẑ_t ∼ N(√(1 − β_t) ẑ_{t−1}, β_t I)  // Resample
15:       end if
16:     end for
17:   end for
18:   s_0 ← ⟨s_0^1, s_0^2, · · · , s_0^{K−1}⟩ ∪ ẑ_0^K  // Slide
19:   x̂_i ← D(ẑ_0^K)  // Decode ẑ_0^K
20:   x̂ ← x̂ ∪ x̂_i
21: end for
22: return x̂ 
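A minimal PyTorch-style sketch of the generation loop in Algorithm 1 follows. The callables encode/decode (standing in for the frame auto-encoder E/D) and denoise_step (one reverse step sampling from N(μ_θ(ẑ_t, y), σ_t² I)), the tensor layout with frames stacked on dim 0, and the exact step indexing are simplifying assumptions for illustration, not the released implementation.

```python
import torch

def ti2v_zero_generate(x0, y, encode, decode, denoise_step, alphas_bar, betas,
                       K=15, M=15, T=1000, U=4):
    """Sketch of Algorithm 1: repeat-and-slide generation with DDPM inversion
    and resampling. Comments refer to the corresponding algorithm lines."""

    def add_noise(clean, t):
        # q(z_t | z_0) = N(sqrt(abar_t) z_0, (1 - abar_t) I)   (Eq. 3)
        abar = alphas_bar[t]
        return abar.sqrt() * clean + (1 - abar).sqrt() * torch.randn_like(clean)

    z0 = encode(x0)                                   # line 1: encode the given frame
    s0 = torch.stack([z0.clone() for _ in range(K)])  # line 2: repeat z0 K times
    video = [x0]                                      # line 3

    for _ in range(M):                                # line 4: one new frame per iteration
        sT = add_noise(s0, T - 1)                     # line 5: DDPM inversion of the queue
        zK_T = add_noise(s0[-1], T - 1)               # line 6: init the new frame's noise
        z_t = torch.cat([sT, zK_T.unsqueeze(0)], 0)   # line 7: K context frames + 1 new

        for t in range(T - 1, 0, -1):                 # line 8: reverse process
            s_t = add_noise(s0, t)                    # line 9: noisy context at level t
            for u in range(U):                        # line 10: resampling iterations
                z_t[:K] = s_t                         # line 11: replace the K context frames
                z_prev = denoise_step(z_t, t, y)      # line 12: z_{t-1} ~ N(mu_theta, sigma_t^2 I)
                if u < U - 1 and t > 1:               # line 13
                    beta = betas[t]
                    z_t = (1 - beta).sqrt() * z_prev + beta.sqrt() * torch.randn_like(z_prev)  # line 14
                else:
                    z_t = z_prev                      # proceed to the next, less noisy step

        s0 = torch.cat([s0[1:], z_t[K:K + 1]], 0)     # line 18: slide the queue
        video.append(decode(z_t[K]))                  # lines 19-20: decode the new frame
    return video                                      # line 22
```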
3.3. Our Framework
Leveraging the pretrained T2V foundation model M, we first propose a straightforward replacing-based baseline for adapting M to TI2V generation. We then analyze possible reasons why it fails and introduce our TI2V-Zero framework, which includes a repeat-and-slide strategy, DDPM-based inversion, and resampling. Figure 2 and Algorithm 1 demonstrate the inference process of TI2V-Zero.
Replacing-based Baseline. We assume that the pretrained model M is designed to generate the video with a fixed length of (K + 1). So we first consider synthesizing videos with that same length (K + 1), i.e., M = K. Since the DM process operates within the latent space Z, we use the encoder E to map the given starting frame x_0 into the latent representation z_0. Additionally, we denote z_0 = z_0^0 to specify that the latent is clean and corresponds to diffusion step 0 of the DM. Note that each reverse denoising step in Eq. (2) from ẑ_t to ẑ_{t−1} depends solely on ẑ_t = ⟨ẑ_t^0, ẑ_t^1, . . . , ẑ_t^K⟩. To ensure that the first frame of the final synthesized clean video latent ẑ_0 = ⟨ẑ_0^0, ẑ_0^1, . . . , ẑ_0^K⟩ at step 0 matches the provided image latent, i.e., ẑ_0^0 = z_0^0, we can modify the first generated latent ẑ_t^0 of ẑ_t at each reverse step, as long as the signal-to-noise ratio of each frame latent in ẑ_t remains consistent. Using Eq. (3), we can add t steps of noise to the provided image latent z_0^0, allowing us to sample z_t^0 through a single-step calculation. By replacing the first generated latent ẑ_t^0 with the noisy image latent z_t^0 at each reverse denoising step, we might expect that the video generation process can be guided by z_0^0 with the following expressions defined for each reverse step:
z_t^0 ∼ N(√ᾱ_t z_0^0, (1 − ᾱ_t) I),  (6a)
ẑ_t^0 ← z_t^0,  (6b)
ẑ_{t−1} ∼ N(μ_θ(ẑ_t, y), σ_t² I).  (6c)
Specifically, in each reverse step from ẑ_t to ẑ_{t−1}, as shown in Eq. (6a), we first compute the noisy latent z_t^0 by adding Gaussian noise to the given image latent z_0^0 over t steps. Then, we replace the first latent ẑ_t^0 of ẑ_t with z_t^0 in Eq. (6b) to incorporate the provided image into the generation process. Finally, in Eq. (6c), we pass ẑ_t through the denoising network to generate ẑ_{t−1}, where the text y is integrated by classifier-free guidance (Eq. (5)). After T iterations, the final clean latent ẑ_0 at diffusion step 0 can be mapped back into the image space X using the decoder D. Using this replacing-based baseline, we might expect that the temporal attention layers in ϵ_θ can utilize the context provided by the first frame latent ẑ_t^0 to generate the subsequent frame latents in a manner that harmonizes with ẑ_t^0. However, as shown in Fig. 3, row 2, this replacing-based approach fails to produce a video that is temporally consistent with the first image. The generated frames are consistent with each other, but not with the provided first frame. 
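For comparison with the full method, here is a short sketch of the replacing-based baseline in Eqs. (6a)-(6c); denoise_step and alphas_bar are the same illustrative stand-ins used in the sketch above, not an actual released API.

```python
import torch

def replacing_baseline_reverse(z_T, z0_first, y, denoise_step, alphas_bar, T=1000):
    """Replacing-based baseline: at every reverse step, only the first frame
    latent is overwritten with a noised copy of the given image latent."""
    z_t = z_T
    for t in range(T - 1, 0, -1):
        abar = alphas_bar[t]
        noisy_first = abar.sqrt() * z0_first + (1 - abar).sqrt() * torch.randn_like(z0_first)  # (6a)
        z_t[0] = noisy_first                  # (6b): pin only the first frame
        z_t = denoise_step(z_t, t, y)         # (6c): one denoising step with text guidance
    return z_t  # z_hat_0; as discussed, frames 1..K tend to ignore frame 0
```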
To analyze possible reasons for failure, we apply this baseline to a simpler video infilling task, where every other frame is provided and the model needs to predict the interspersed frames. In this case, the baseline replaces the generated frame latents at positions corresponding to real frames with noisy provided-frame latents in each reverse step. The resulting video, in Fig. 3, row 3, looks like a combination of two independent videos: the generated (even) frames are consistent with each other but not with the provided (odd) frames. We speculate that this may result from the intrinsic dissimilarity between frame latents derived from the given real images and those sampled from ϵ_θ. Thus, the temporal attention values between frame latents sampled in the same way (both from the given images or both from ϵ_θ) will be higher, while the attention values between frame latents sampled in different ways (one from the given image and the other from ϵ_θ) will be lower. Therefore, the temporal attention layers of M tend to utilize the information from latents produced by ϵ_θ to synthesize new frames at each reverse step, ignoring the provided frames.
We further simplify the task to single-frame prediction, where the model only needs to predict a single frame when all the other frames in the video are given. In this setting, all the frame latents except for the final frame are replaced by noisy provided-frame latents in each reverse step. Thus, temporal attention layers can only use information from the real frames. In this case, Fig. 3, row 4, shows that the baseline can now generate a final frame that is consistent with the previous frames.
Figure 4. Qualitative ablation study comparing different sampling strategies for our TI2V-Zero on MUG (text y: “A woman with the expression of slight sadness on her face.”). The first image x̂_0 is highlighted with the red box and text y is shown under the block. The 1st, 6th, 11th, and 16th frames of the videos are shown in each column. The terms Inversion, DDIM, and Resample denote the application of DDPM inversion, the steps using DDIM sampling, and the iteration number using resampling, respectively.
Repeat-and-Slide Strategy. Inspired by the observation in Fig. 3, to guarantee that the temporal attention layers of M depend solely on the given image, we make two major changes to the proposed replacing-based baseline: (1) instead of using M to directly synthesize the entire (K + 1)-frame video, we switch to a frame-by-frame generation approach, i.e., we generate only one new frame latent in each complete DM sampling process; (2) for each sampling process generating the new frame latent, we ensure that only one frame latent is produced from ϵ_θ, while the other K frame latents are derived from the given real image and previously synthesized frames, thereby forcing temporal attention layers to only use the information from these frame latents. Specifically, we construct a queue of K frame latents, denoted as s_0 = ⟨s_0^0, s_0^1, · · · , s_0^{K−1}⟩. We also define s_t = ⟨s_t^0, s_t^1, · · · , s_t^{K−1}⟩, which is obtained by adding t steps of Gaussian noise to the clean s_0. 
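The queue bookkeeping behind the repeat-and-slide strategy can be written very compactly; the sketch below uses a fixed-length deque and assumes the latents are torch tensors (an illustrative choice, not the paper's data structure).

```python
from collections import deque
import torch

def init_queue(z0, K=15):
    """'Repeat': build s_0 by repeating the single given latent K times."""
    return deque([z0.clone() for _ in range(K)], maxlen=K)

def noisy_queue(s0, t, alphas_bar):
    """s_t ~ N(sqrt(abar_t) s_0, (1 - abar_t) I); fresh noise per frame latent."""
    abar = alphas_bar[t]
    s = torch.stack(list(s0))
    return abar.sqrt() * s + (1 - abar).sqrt() * torch.randn_like(s)

def slide(s0, new_clean_latent):
    """'Slide': enqueue the newly generated latent; maxlen drops the oldest one."""
    s0.append(new_clean_latent)
    return s0
```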
Similar to our replacing-based baseline in the single-frame prediction task, in each reverse step from ẑ_t to ẑ_{t−1}, we replace the first K frame latents in ẑ_t by s_t. Consequently, the temporal attention layers have to utilize information from s_0 to synthesize the new frame’s latent, ẑ_0^K. Considering that only one starting image latent z_0 is provided, we propose a “repeat-and-slide” strategy to construct s_0. At the beginning of video generation, we repeat z_0 for K frames to form s_0, and gradually perform a sliding operation within the queue s_0 by dequeuing the first frame latent s_0^0 and enqueuing the newly generated latent ẑ_0^K after each complete DM sampling process. Note that though the initial s_0 is created by repeating z_0, the noise added to get s_t is different for each frame’s latent in s_t, thus ensuring diversity. The following expressions define one reverse step in the DM sampling process:
s_t ∼ N(√ᾱ_t s_0, (1 − ᾱ_t) I),  (7a)
⟨ẑ_t^0, ẑ_t^1, · · · , ẑ_t^{K−1}⟩ ← s_t,  (7b)
ẑ_{t−1} ∼ N(μ_θ(ẑ_t, y), σ_t² I).  (7c)
Specifically, in each reverse denoising step from ẑ_t to ẑ_{t−1}, we first add t steps of Gaussian noise to the queue s_0 to yield s_t in Eq. (7a). Subsequently, we replace the previous K frames of ẑ_t with s_t in Eq. (7b) and input ẑ_t to the denoising network to produce the less noisy latent ẑ_{t−1} (Eq. (7c)). With the repeat-and-slide strategy, model M is tasked with predicting only one new frame, while the preceding K frames are incorporated into the reverse process to ensure that the temporal attention layers depend solely on information derived from the provided image.
DDPM-based Inversion. Though the DM sampling process starting with randomly sampled Gaussian noise produces matching semantics, the generated video is often temporally inconsistent (Fig. 4, row 2). To provide initial noise that can produce more temporally consistent results, we introduce an inversion strategy based on the DDPM [20] forward process when generating the new frame latent. Specifically, at the beginning of each DM sampling process to synthesize the new frame latent ẑ_0^K, instead of starting with the ẑ_T randomly sampled from N(0, I), we add T full steps of Gaussian noise to s_0 to obtain s_T using Eq. (3). Note that ẑ has K + 1 frames, while s has K frames. We then use s_T to initialize the first K frames of ẑ_T. We copy the last frame s_T^{K−1} of s_T to initialize the final frame ẑ_T^K, as the (K − 1)th frame is the closest to the Kth frame.
Resampling. Similar to [24, 33], we further apply a resampling technique, which was initially designed for the image inpainting task, to the video DM to enhance motion coherence. Particularly, after performing a one-step denoising operation in the reverse process, we add one-step noise again to revert the latent. This procedure is repeated multiple times for each diffusion step, ensuring harmonization between the predicted and conditioning frame latents (see Algorithm 1 for details).
Table 1. Quantitative ablation study comparing different sampling strategies for the proposed TI2V-Zero on the MUG dataset. Inversion, DDIM, and Resample denote the application of DDPM-based inversion, the steps using DDIM sampling, and the iteration number using resampling, respectively.
Inversion | DDIM | Resample | FVD↓ | sFVD↓ | tFVD↓
✗ | 10 | 0 | 1656.37 | 2074.77±411.74 | 1798.05±235.34
✓ | 10 | 0 | 339.89 | 443.97±139.10 | 405.22±61.58
✓ | 50 | 0 | 463.55 | 581.32±234.09 | 535.06±85.27
✓ | 10 | 2 | 207.62 | 299.14±87.24 | 278.73±47.84
✓ | 10 | 4 | 180.09 | 267.17±74.72 | 252.77±39.02
Table 2. Result analysis of TI2V-Zero starting from the real (i.e., TI2V-Zero-Real) or synthesized frames (i.e., TI2V-Zero-Fake) on the UCF101 dataset.
Distributions for Comparison | FVD↓ | tFVD↓
TI2V-Zero-Fake vs. ModelScopeT2V | 366.41 | 921.31±251.85
TI2V-Zero-Real vs. Real Videos | 477.19 | 1306.75±271.82
ModelScopeT2V vs. Real Videos | 985.82 | 2264.08±501.28
TI2V-Zero-Fake vs. Real Videos | 937.11 | 2177.70±436.71
4. Experiments
4.1. Datasets and Metrics
We conduct comprehensive experiments on three datasets. More details about the datasets, such as selected subjects and text prompts, can be found in our Supplementary Materials.
MUG facial expression dataset [1] contains 1,009 videos of 52 subjects performing 7 different expressions. We include this dataset to evaluate the performance of models in scenarios with small motion and a simple, unchanged background. To simplify the experiments, we randomly select 5 male and 5 female subjects, and 4 expressions. We use text prompt templates like “A woman with the expression of slight {label} on her face.” to change the expression class label into text input. Since the expressions shown in the videos of MUG are often not obvious, we add “slight” in the text input to avoid large motion.
UCF101 action recognition dataset [56] contains 13,320 videos from 101 human action classes. We include this dataset to measure performance under complicated motion and complex, changing backgrounds. To simplify the experiments, we select 10 action classes and the first 10 subjects within each class. We use text prompt templates such as “A person is performing {label}.” to change the class label into text input.
In addition to the above two datasets, we create an OPEN dataset to assess the model’s performance in open-domain TI2V generation. We first utilize ChatGPT [43] to generate 10 text prompts. Subsequently, we employ Stable Diffusion 1.5 [47] to synthesize 100 images from each text prompt, generating a total of 1,000 starting images and 10 text prompts for evaluating TI2V models.
Figure 5. Qualitative comparison among different methods (Ground Truth, TI2V-Zero w/o Resample (Ours), TI2V-Zero w/ Resample (Ours), and DynamiCrafter) on multiple datasets for TI2V generation, with prompts such as “A woman with the expression of slight anger on her face.” (MUG), “A person is kayaking.” (UCF101), and “A romantic gondola ride through the canals of Venice at sunset.” (OPEN). Columns in each block display the 1st, 6th, 11th, and 16th frames of the output videos, respectively. There are 16 frames with a resolution of 256 × 256 for each video. The given image x_0 is highlighted with the red box and the text prompt y is shown under each block.
Table 3. Quantitative comparison among different methods on multiple datasets for TI2V generation.
Model | MUG: FVD↓ | MUG: sFVD↓ | MUG: tFVD↓ | UCF101: FVD↓ | UCF101: tFVD↓
DynamiCrafter [67] | 1094.72 | 1359.86±257.73 | 1223.89±105.94 | 589.59 | 1540.02±199.59
TI2V-Zero w/o Resample (Ours) | 339.89 | 443.97±139.10 | 405.22±61.58 | 493.19 | 1319.77±283.87
TI2V-Zero w/ Resample (Ours) | 180.09 | 267.17±74.72 | 252.77±39.02 | 477.19 | 1306.75±271.82
Data Preprocessing. 
We resize all the videos/images to 256 × 256 resolution. For UCF101, since most of the video frames are not square, we crop the central part of the frames. To obtain ground truth videos for computing metrics, we uniformly sample 16 frames from each video in the datasets to generate the video clips with a fixed length.
Metrics. Following prior work [21, 22, 25], we assess the visual quality, temporal coherence, and sample diversity of generated videos using Fréchet Video Distance (FVD) [57]. Similar to Fréchet Inception Distance (FID) [18], which is used for image quality evaluation, FVD utilizes a video classification network I3D [6] pretrained on the Kinetics400 dataset [26] to extract feature representations of real and synthesized videos. Then it calculates the Fréchet distance between the distributions of the real and synthesized video features. To measure how well a generated video aligns with the text prompt y (condition accuracy) and the given image x_0 (subject relevance), following [39], we design two variants of FVD, namely text-conditioned FVD (tFVD) and subject-conditioned FVD (sFVD). tFVD and sFVD compare the distance between real and synthesized video feature distributions under the same text y or the same subject image x_0, respectively. We first compute tFVD and sFVD for each condition y and image x_0, then report their mean and variance as final results. In our experiments, we generate 1,000 videos for all the models to estimate the feature distributions. We compute both tFVD and sFVD on the MUG dataset, but for UCF101, we only consider tFVD since it doesn’t contain videos of different actions for the same subject. For the OPEN dataset, we only present qualitative results due to the lack of ground truth videos. Unless otherwise specified, all the generated videos are 16 frames (i.e., M = 15) with resolution 256 × 256.
4.2. Implementation Details
Model Implementation. We take ModelScopeT2V 1.4.2 [60] as the basis and implement our modifications. For text-conditioned generation, we employ classifier-free guidance with g = 9.0 in Eq. (5). Determined by our preliminary experiments, we choose 10-step DDIM and 4-step resampling as the default setting for MUG and OPEN, and 50-step DDIM and 2-step resampling for UCF101.
Implementation of SOTA Model. We compare our TI2V-Zero with a state-of-the-art (SOTA) model, DynamiCrafter, a recent open-domain TI2V framework [67]. DynamiCrafter is based on a large-scale pretrained T2V foundation model, VideoCrafter1 [16]. It introduces a learnable projection network to enable image-conditioned generation and then fine-tunes the entire framework. We implement DynamiCrafter using their provided code with their default settings. For a fair comparison, all the generated videos are centrally-cropped and resized to 256 × 256.
Figure 6. Example of long video generation using our TI2V-Zero on the OPEN dataset. The given image x_0 is highlighted with a red box, and the text prompt y (“A mesmerizing display of the northern lights in the Arctic.”) is shown under the set of frames. There are a total of 128 video frames (M = 127), and the synthesized results for every 14 frames are presented.
4.3. Result Analysis
Ablation Study. We conduct an ablation study of different sampling strategies on MUG. As shown in Tab. 1 and Fig. 
4, compared with generating using randomly sampled Gaussian noise, initializing the input noise with DDPM inversion is important for generating temporally continuous videos, improving all of the metrics dramatically. For MUG, increasing the DDIM sampling steps from 10 to 50 does not enhance the video quality but requires more inference time. Thus, we choose 10-step DDIM as the default setting on MUG. As shown in Fig. 4 and Tab. 1, adding resampling helps preserve identity details (e.g., hairstyle and facial appearance), resulting in lower FVD scores. Increasing resampling steps from 2 to 4 further improves FVD scores. Effect of Real/Synthesized Starting Frames. We also explore the effect of video generation starting with real or synthesized frames on UCF101. We initially use the first frame of the real videos to generate videos with our TI2V-Zero, termed TI2V-Zero-Real. Additionally, we utilize the backbone model ModelScopeT2V [60] to generate synthetic videos using the text inputs of UCF101. We then employ TI2V-Zero to create videos from the first frame of the generated fake videos, denoted as TI2V-Zero-Fake. As shown in Tab. 2, [TI2V-Zero-Fake vs. ModelScopeT2V] can achieve better FVD scores than [TI2V-Zero-Real vs. Real Videos]. The reason may be that frames generated by ModelScopeT2V can be considered as in-distribution data since TI2V-Zero is built upon it. We also compare the output video distribution of TI2V-Zero-Fake and ModelScopeT2V with real videos in Tab. 2. Though starting from the same synthesized frames, TI2V-Zero-Fake can generate more realistic videos than the backbone model. Comparison with SOTA Model. We compare our proposed TI2V-Zero with DynamiCrafter [67] in Tab. 3 and Fig. 5. From Fig. 5, one can find that DynamiCrafter struggles to preserve details from the given image, and the motion of its generated videos is also less diverse. Note that DynamiCrafter requires additional fine-tuning to enable TI2V generation. In contrast, without using any fine-tuning or introducing external modules, our proposed TI2V-Zero can precisely start with the given image and output more visually-pleasing results, thus achieving much better FVD scores on both MUG and UCF101 datasets in Tab. 3. The comparison between our TI2V-Zero models with and without using resampling in Fig. 5 and Tab. 3 also demonstrates the effectiveness of using resampling, which can help maintain identity and background details. Extension to Other Applications. TI2V-Zero can also be extended to other tasks as long as we can construct s0 with K images at the beginning. These images can be obtained either from ground truth videos or by applying the repeating operation. Then we can slide s0 when generating the subsequent frames. We have applied TI2V-Zero in video infilling (see the last row in Fig. 3), video prediction (see Supplementary Materials), and long video generation (see Fig. 6). As shown in Fig. 6, when generating a 128-frame video on the OPEN dataset, our method can preserve the mountain shape in the background, even at the 71st frame (frame \u02c6 x70). The generated video examples and additional experimental results are in our Supplementary Materials. 5. Conclusion In this paper, we propose a zero-shot text-conditioned image-to-video framework, TI2V-Zero, to generate videos by modulating the sampling process of a pretrained video diffusion model without any optimization or fine-tuning. Comprehensive experiments show that TI2V-Zero can achieve promising performance on multiple datasets. 
While showing impressive potential, our proposed TI2VZero still has some limitations. First, as TI2V-Zero relies on a pretrained T2V diffusion model, the generation quality of TI2V-Zero is constrained by the capabilities and limitations of the pretrained T2V model. We plan to extend our method to more powerful video diffusion foundation models in the future. Second, our method sometimes generates videos that are blurry or contain flickering artifacts. One possible solution is to apply post-processing methods such as blind video deflickering [30] or image/video deblurring [51] to enhance the quality of final output videos or the newly synthesized frame in each generation. Finally, compared with GAN and standard video diffusion models, our approach is considerably slower because it requires running the entire diffusion process for each frame generation. We will investigate some faster sampling methods [29, 32] to reduce generation time." +} \ No newline at end of file diff --git a/intro_8K/test_introduction_short_2404.16325v1.json b/intro_8K/test_introduction_short_2404.16325v1.json new file mode 100644 index 0000000000000000000000000000000000000000..58acc29ef857193fb7bb57dd4f2a61a7a82e643d --- /dev/null +++ b/intro_8K/test_introduction_short_2404.16325v1.json @@ -0,0 +1,17 @@ +{ + "url": "http://arxiv.org/abs/2404.16325v1", + "title": "Semantic Segmentation Refiner for Ultrasound Applications with Zero-Shot Foundation Models", + "abstract": "Despite the remarkable success of deep learning in medical imaging analysis,\nmedical image segmentation remains challenging due to the scarcity of\nhigh-quality labeled images for supervision. Further, the significant domain\ngap between natural and medical images in general and ultrasound images in\nparticular hinders fine-tuning models trained on natural images to the task at\nhand. In this work, we address the performance degradation of segmentation\nmodels in low-data regimes and propose a prompt-less segmentation method\nharnessing the ability of segmentation foundation models to segment abstract\nshapes. We do that via our novel prompt point generation algorithm which uses\ncoarse semantic segmentation masks as input and a zero-shot prompt-able\nfoundation model as an optimization target. We demonstrate our method on a\nsegmentation findings task (pathologic anomalies) in ultrasound images. Our\nmethod's advantages are brought to light in varying degrees of low-data regime\nexperiments on a small-scale musculoskeletal ultrasound images dataset,\nyielding a larger performance gain as the training set size decreases.", + "authors": "Hedda Cohen Indelman, Elay Dahan, Angeles M. Perez-Agosto, Carmit Shiran, Doron Shaked, Nati Daniel", + "published": "2024-04-25", + "updated": "2024-04-25", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "label": "Original Paper", + "paper_cat": "Semantic AND Segmentation AND Image", + "gt": "Ultrasound is a popular medical imaging modality used to image a large variety of organs and tissues. Ultrasound is often the preferred choice due to its non-radiative and non-invasive nature, relatively easy and fast imaging procedure, and lower costs. Automating the diagnosis or highlighting relevant areas in the image will contribute to faster workflows and potentially more consistent and accurate diagnoses. Artificial Intelligence (AI) has demonstrated remarkable success in automatic medical imaging analysis. 
Compared to classical methods, previous work based on convolutional neural networks on various medical imaging tasks, such as classification and segmentation, has shown state-of-the-art results [1, 2, 3, 4]. However, developing effective deep learning segmentation algorithms for medical images is an especially challenging task due to the scarcity of high-quality labeled images for supervision. Moreover, in medical imaging it is often the case that identification of findings regions, namely regions of potentially pathological visual anomalies, having neither a clear boundary nor a typical geometry or position, is much more challenging than the identification of an anatomy in its context. Findings are also typically rare, which brings to light the challenge of training such models in limited data regimes.
Figure 1: A high-level illustration of our semantic segmentation refinement method with zero-shot foundation models. A pre-trained segmentation model predicts a semantic segmentation for each class of an input image. In this example, classes comprise anatomies and pathologies in an ultrasound image, and the coarse segmentor output depicts the predicted semantic segmentation of a pathology. A prompt selection model selects positive and negative points. Consequently, a zero-shot semantic segmentation mask of the pathology is predicted by a foundation segmentation model, prompted by the selected points for the input image. Positive prompt points are depicted in red, and negative prompt points are depicted in blue. The pathology semantic segmentation prediction is highlighted in red. For illustration purposes, the muscle is highlighted in purple, the tendon in yellow, and the bone in green. The freeze symbol indicates preventing gradients from being propagated to the model weights.
Recently, new segmentation models have emerged. Trained on data at huge scales, these foundation models aim to be more generic rather than tailored to specific datasets. The Segment Anything Model (SAM) [5] is a foundational model demonstrating zero-shot generalization in segmenting natural images using a prompt-driven approach. The SonoSAM [6] foundational model adapts SAM to ultrasound images by fine-tuning the prompt and mask decoder [6]. Although fine-tuning methods often improve the results on target datasets [7], they essentially downgrade the generalization capabilities of the foundation model. Further, a significant domain gap between natural and medical images, ultrasound images in particular [8], hinders fine-tuning models trained on natural images to the task at hand [7].
In this work, we address the performance degradation of segmentation models in low-data regimes and derive a novel method for harnessing segmentation foundation models’ ability to segment arbitrary regions. Our semantic segmentation refinement method comprises two stages: First, a coarse segmentation is predicted by a model trained on a small subset of the training data. 
In the second stage, our novel points generation from a coarse pathology segmentation algorithm is used to prompt a segmentation foundation model. Positive prompt points are selected using a partition around medoids method as the most representative pathology points. Negative prompt points are selected by a prompt selection optimization algorithm that identify the context anatomy. Importantly, we do not fine-tune the foundation model to our dataset, i.e., it produces a zero-shot segmentation. The end-to-end pipeline is illustrated in Fig. 1. The method\u2019s advantages are brought to light on varying degrees of low-data regimes experiments on a small-scale images dataset, yielding a larger performance gain compared to a state-of-the-art segmentation model [9] as the training set size decreases. Further, ablation studies validate the effectiveness of our semantic segmentation refinement model. Our approach applies to other ultrasound-based medical diagnostics tasks. The paper is organized as follows: Section 2 presents the semantic segmentation task and leading approaches. Our method is presented in Section 3, and the experimental setup is presented in Section 4. Section 5 presents the results and ablation studies on a discontinuity in tendon fiber (DITF) pathology finding task in a musculoskeletal ultrasound (MSK) dataset, and the conclusions are presented in Section 6.", + "main_content": "2.1 Semantic Segmentation Models Semantic segmentation aims to assign a label or a class to each pixel in an image. Unlike image classification, which assigns a single label to the entire image, semantic segmentation provides a more detailed understanding of the visual scene by segmenting it into distinct regions corresponding to objects or classes. This is an essential technique for applications, such as autonomous vehicles, medical image analysis, and scene understanding in robotics. Like other computer vision tasks, deep learning has demonstrated state-of-the-art results in the semantic segmentation of medical images. The semantic segmentation problem can be formulated as follows: Given an image I \u2208RC\u00d7H\u00d7W , our goal is to train a deep neural network to predict the pixel-wise probability map SN\u00d7H\u00d7W of the classes in the dataset, where N is the number of classes in the dataset. 2 Semantic Segmentation Refiner for U/S Applications with Zero-Shot FMs COHEN H ET AL. DeepLabV3 [9] represents a distinctive approach in semantic image segmentation. Utilizing dilated convolutions, the model strategically enlarges the receptive field and manages the balance between global and local features through padding rates. Notably, the spatial pyramid pooling module proposed by the authors aggregates features from dilated convolutions at various scales, enhancing contextual information. Distinctive from encoder-decoder architectures such as the U-Net [10], it is built upon a robust pre-trained encoder, contributing to its success in generating accurate and detailed segmentation masks across diverse applications. Since DeepLabV3 remains a staple choice for a performant semantic segmentation model, we adopt it as our method\u2019s coarse segmentor. 2.2 Semantic Segmentation Foundation Models Foundation models are trained on broad data at a huge scale and are adaptable to a wide range of downstream tasks [11, 12, 13]. The Segment Anything Model (SAM) [5] emerged as a versatile foundation model for natural image segmentation. 
Trained on a dataset of over 11 million images and 1B masks, it demonstrates impressive zero-shot generalization in segmenting natural images using an interactive and prompt-driven approach. Prompt types include foreground/background points, bounding boxes, masks, and text prompts. However, SAM achieves subpar generalization on medical images due to substantial domain gaps between natural and medical images [14, 15, 16, 17, 18]. Moreover, SAM obtains the poorest results on ultrasound compared to other medical imaging modalities [15]. These results are attributed to the ultrasound characteristics, e.g., the scan cone, poor image quality, and unique speckled texture. A common methodology to overcome this generalization difficulty is to fine-tune a foundation model on a target dataset [19]. An efficient fine-tuning strategy is Low-Rank Adaptation (LoRA) [20], which has been adopted in fine-tuning SAM to relatively small medical imaging datasets [21, 22, 23]. SonoSAM [6] demonstrates state-of-the-art generalization in segmenting ultrasound images. Fine-tuned on a rich and diverse set of ultrasound image-mask pairs, it has emerged as a prompt-able foundational model for ultrasound image segmentation. Notably, adapting prompt-based models to medical image segmentation is difficult due to the conundrum of crafting high-quality prompts [15]. Manually selecting prompts is time-consuming and requires domain expertise. Methods of extracting prompts from ground-truth masks [23] cannot be applied during inference as they rely on full supervision. Auto-prompting techniques rely on the strong Vision Transformer (ViT-H) image encoder [24] semantic representation capabilities, and suggest generating a segmentation prompt based on SAM\u2019s image encoder embedding [18, 25]. Other strategies suggest replacing the mask decoder with a prediction head requiring no prompts [16]. Nevertheless, SAM\u2019s zero-shot prediction accuracy is typically lower than that of the segmentation models trained with fully supervised methods [26]. Motivated by the generalization abilities of segmentation foundation models, we devise a points selection algorithm from coarse segmentation masks that allows harnessing prompt-based models to ultrasound segmentation in a zero-shot setting. 3 Method In this section, we present our method for refining a coarse pathology segmentation mask with zero-shot foundation models. This method can be adapted to natural images, as well as to the medical imaging domain. Herein, we validate it based on a specific challenging task of segmenting a discontinuity of the tendon fiber finding (Sec. 4.1), which is the main ultrasound finding of a tendon partial tear pathology. Our key intuition is that although the performance of segmentation models decreases significantly in low-data regimes, even such coarse segmentation masks can be utilized for extracting high-quality prompts that harness segmentation foundation models\u2019 capabilities. Importantly, we use the publicly available pre-trained foundation models without further modification. The flexibility of our method allows for incorporating either SonoSAM or SAM. Though the above-mentioned foundation models allow several types of prompts, we focus on foreground (positive) and background (negative) prompt points. Our method makes use of the ground-truth tendon segmentation, denoted T gt. 
Since the tendon in the context of the DITF pathology is usually easy to segment due to its typical geometry and position and relatively simple data acquisition and labeling, we assume that strong segmentation models exist for this task and that their output can be used in lieu of the ground-truth segmentation. With that, we introduce our two-stage method, summarized in Algorithm 1. First, a segmentation model [9] is trained on a random subset of the training data. A coarse semantic segmentation is then predicted for a given test image. Then, k positive and k negative prompt points are selected to prompt a segmentation foundation model. We next describe our prompt points selection algorithm in greater detail.
Algorithm 1 The Semantic Segmentation Refiner Method
Input:
• Input image I
• Ground-truth tendon mask T^gt
• Frozen SonoSAM model
• Pre-trained segmentation model S
Output:
• Refined pathology segmentation mask O
1: Coarse segmentation mask Õ ← S(I)
2: Positive points selection pts_pos ← k-medoids(Õ)
3: Modified ground-truth tendon mask T̃^gt ← T^gt ∖ Õ
4: Initialize the complementary problem:
5: p̄ts_neg ← pts_pos, p̄ts_pos ← random points from T̃^gt
6: for t in range(1, T) do
7:   Optimize p̄ts_pos as parameters:
8:   ℓ_ce(p̄ts, T̃^gt) = −T̃^gt log(SonoSAM(I, p̄ts))
9:   Update p̄ts_pos
10: end for
11: Flip: pts_neg ← p̄ts_pos
12: Output O ← SonoSAM(I, pts)
3.1 Positive Points Selection
We aim to select points that are the most representative of the coarse pathology segmentation mask as the positive prompt points. This selection objective translates to the partitioning-around-medoids approach. This approach is preferable compared to a selection based on minimization of the sum of squared distances (i.e., k-means) in the case of multiple pathology blobs, since the latter might select centroids in between pathology blobs. Thus, k mass centers of the coarse pathology segmentation masks are selected as positive points using the k-medoids clustering algorithm [27]. To reduce the probability of selecting false positive points, a threshold is applied to the coarse pathology segmentation masks before selection. We denote the selected positive points as pts_pos = {pts_pos^i}_{i=1}^{k}. This process is illustrated in Fig. 2.
Figure 2: An illustration of our positive (foreground) points selection module, depicted in red. A threshold is applied to the coarse segmentation prediction. A k-medoids clustering algorithm is applied to select k positive pathology points. 
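A minimal NumPy sketch of the positive-point selection in Sec. 3.1 is given below; the simple alternating k-medoids routine is written out only for self-containment (the paper relies on a standard k-medoids implementation [27]), and the single threshold value is an illustrative simplification of the double-thresholding described later.

```python
import numpy as np

def select_positive_points(coarse_mask, k=3, thresh=0.5, iters=10, seed=0):
    """Threshold the coarse pathology probability map, then pick k medoids of
    the foreground pixels. Returns an array of k (row, col) pixel coordinates."""
    rng = np.random.default_rng(seed)
    ys, xs = np.nonzero(coarse_mask > thresh)
    pts = np.stack([ys, xs], axis=1).astype(float)
    if len(pts) == 0:
        return np.empty((0, 2), dtype=int)
    k = min(k, len(pts))
    medoids = pts[rng.choice(len(pts), size=k, replace=False)]
    for _ in range(iters):
        # Assign each foreground pixel to its nearest medoid.
        d = np.linalg.norm(pts[:, None, :] - medoids[None, :, :], axis=-1)
        assign = d.argmin(axis=1)
        for j in range(k):
            members = pts[assign == j]
            if len(members) == 0:
                continue
            # Medoid = the member minimizing the summed distance within its cluster.
            # (For very large masks, subsample `pts` before this pairwise computation.)
            within = np.linalg.norm(members[:, None, :] - members[None, :, :], axis=-1).sum(axis=1)
            medoids[j] = members[within.argmin()]
    return medoids.astype(int)
```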
The complementary prompt points selection is optimized to decrease the binary cross-entropy (BCE) loss between the foundation model\u2019s zero-shot tendon segmentation mask prompted on these points and a modified ground-truth tendon 4 Semantic Segmentation Refiner for U/S Applications with Zero-Shot FMs COHEN H ET AL. mask, denoted T \u02dc gt. To avoid predicting tendon points within foreground pathology, the values of the ground-truth tendon mask overlapping with the coarse pathology detection are modified to zero. As points initialization for this complementary problem, we flip the labels of ptspos such that they correspond to negative points, \u00af ptsneg \u2190ptspos. Further, k points are selected at random from T \u02dc gt, denoted \u00af ptspos. While freezing the foundation model, the point prompt optimization is performed for a maximum of 100 steps or until convergence. The optimization is performed such that the selected points are optimal w.r.t. the complementary problem of the tendon segmentation given the foreground pathology predicted by the coarse segmentor. Denote an input image as I, SonoSAM\u2019s zero-shot tendon segmentation given input I and its corresponding optimized prompt points \u00af pts as SonoSAM(I, \u00af pts). Then, the BCE loss of the complementary problem is: \u2113ce( \u00af pts, T \u02dc gt) = \u2212T \u02dc gt log (SonoSAM(I, \u00af pts)) . (1) We used the AdamW [31] optimizer, with learning rate of 4e\u22123, and standard betas to optimize the positive points \u00af ptspos. The optimized positive tendon points selected by this model serve as k negative prompt points, ptsneg \u2190\u00af ptspos, towards the foreground pathology segmentation. This process is illustrated in Fig. 3. Figure 3: An illustration of our negative (background) points selection module. In addition to the positive selected points (Sec. 3.1), negative points are selected randomly from the modified ground-truth tendon mask. The points are flipped to initialize the settings of the complementary tendon segmentation problem. Our points optimization model optimizes prompt points selection w.r.t. the complementary tendon zero-shot segmentation problem (Sec. 3.2). Finally, prompt points are again flipped to account for positive and negative prompt points towards the pathology segmentation. 4 Experiments 4.1 Dataset The data used for this study is ultrasound images of tendons around the shoulder joint. Specifically, we acquired images of the supraspinatus tendon, infraspinatus tendon, and subscapularis. The images are acquired from both the short-axis and the long-axis views. The main parameters of our data are summarized in Table 1. In this work, we aim to segment the partial tear pathology within the tendon, thus our data consists of images paired with the corresponding segmentation mask of anatomies and pathologies. Our data includes semantic labeling of the following classes: DITF, bone, tendon, and muscle. Table 2 summarizes the semantic labeling statistics. In total, our dataset includes 388 images from 124 subjects, 80% of which are used for training, and the remaining 20% are used for validation. The test set comprises 40 images. To prevent data leakage, the test set images are collected from subjects that do not appear in the train data. All images are resized to a constant resolution of 512x512 pixels. All data comply with the Institutional Review Board (IRB) data sharing agreement. 
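Before moving to the experiments, the point-prompt optimization of Sec. 3.2 (Eq. 1) can be sketched as follows. The sketch assumes a frozen promptable model exposing a differentiable call `model(image, points, labels)` that returns per-pixel logits; this interface, the helper names, and the use of the standard BCE-with-logits loss (rather than only the positive log term of Eq. 1) are our illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def refine_negative_points(model, image, pts_init, labels, tendon_gt,
                           steps: int = 100, lr: float = 4e-3):
    """Optimize prompt-point coordinates on the complementary (tendon) problem with a frozen model.

    pts_init: (N, 2) initial point coordinates (flipped positives + random tendon points).
    tendon_gt: modified ground-truth tendon mask with coarse pathology pixels zeroed out.
    Returns the optimized coordinates, to be flipped into negative pathology prompts.
    """
    for p in model.parameters():
        p.requires_grad_(False)                      # the foundation model stays frozen
    pts = pts_init.clone().float().requires_grad_(True)
    opt = torch.optim.AdamW([pts], lr=lr)
    for _ in range(steps):
        logits = model(image, pts, labels)           # zero-shot tendon segmentation from point prompts
        loss = F.binary_cross_entropy_with_logits(logits, tendon_gt.float())
        opt.zero_grad()
        loss.backward()                              # gradients flow only into the point coordinates
        opt.step()
    return pts.detach().round().long()
```

In SAM-style prompt encoders, point coordinates are embedded with continuous positional encodings, so gradients with respect to the coordinates are well defined; this is what makes the refinement above possible.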
4.2 Evaluation Metric We use the Dice similarity coefficient [32] evaluation metric, commonly used in medical image segmentation research to measure the overlapping pixels between prediction and ground truth masks. The Dice similarity coefficient is defined as 2|A\u2229B| |A|+|B|, where A and B are the pixels of the prediction and the ground truth respectively. 4.3 A Segmentation Model In Low-Data Regimes In this experiment, we investigate the performance and properties of a state-of-the-art semantic segmentation model with a limited training set size of MSK ultrasound images. Our goal is two-fold: (i) to validate our conjecture that high-quality 5 Semantic Segmentation Refiner for U/S Applications with Zero-Shot FMs COHEN H ET AL. (a) 100% of train set. (b) 35% of train set. (c) 15% of train set. (d) 8% of train set. (e) 5% of train set. Figure 4: Positive pathology points retainment in increasingly coarse segmentation mask prediction and our method\u2019s results. Top row: Pathology segmentation mask predicted with a DeepLabV3 model trained on varying percent of the training set. Middle row: Positive points selected on binary pathology mask by our positive points selection module. Bottom row: An illustration of our method\u2019s pathology segmentation output, highlighted in red, compared to the ground-truth segmentation, highlighted in green. The tendon area is shown at the bottom left image for reference. Our method achieves for this test image a Dice similarity coefficient of 0.89, 0.71, 0.73, 0.72, 0.50 when the coarse segmentor is trained on 100%, 35%, 15%, 8%, 5% of the train set, respectively. Table 1: Summary of MSK pathology segmentation dataset main parameters. Parameters/Dataset MSK Ultrasound Images Total frames 388 Original frame size 1536 X 796 or 1044 X 646 pixels Subjects 90 (52.82% males, 47.18% females) Average BMI 24.69 \u00b1 8.92 Vendor GE Healthcare\u2122 Ultrasound system Logiq S8\u2122, Eagle\u2122, LogiqE10\u2122 Data collection Linear Collection Sites USA, Israel prompts can be extracted even from a coarse semantic segmentation prediction, and (ii) to measure the performance degradation in increasingly low-data regimes. These properties are the basis of our two-stage method for exploiting the advantages of a prompt-able foundation segmentation model. Concretely, for an input image I \u2208R512\u00d7512 the segmentation model prediction S \u2208R7\u00d7512\u00d7512 corresponds to a semantic segmentation for each class as detailed in Table 2. 4.4 Segmentation Refinement With Zero-Shot Foundation Models Positive Points Selection A combination of a constant and an adaptive threshold is applied to the coarse segmentation prediction prior to positive point selection. Denote by c0 the coarse segmentation mask prediction at the foreground channel (DITF in our case). 6 Semantic Segmentation Refiner for U/S Applications with Zero-Shot FMs COHEN H ET AL. Table 2: Semantic labeling statistics at the 512X512 patches level. M: Million. Class MSK Type Number of images (% of total) Total Area (pixels) Mean fraction out of total patch area Discontinuity in tendon fiber Pathology 179 (46.13%) 1.11M 1.09% Bone 288 (74.22%) 2.75M 2.7% Tendon Anatomy 388 (100%) 10.64M 10.46% Muscle 388 (100%) 28.13M 27.65% We apply a double thresholding mechanism to disregard the noise in the prediction. 
The double thresholding is defined as $\tilde{c} = c_0 > t_{min}$ (2) and $c = \tilde{c} > 0.4 \cdot \max(\tilde{c})$ (3). The initial threshold screens out predictions that lack sufficient global (cross-class) certainty, with the minimum threshold set to $t_{min} = 0.15$. The second, adaptive thresholding term screens out all predictions that lack sufficient local (class-wise) certainty. Further, we use the k-medoids++ initialization method [33], which selects more separated initial medoids than the other initialization methods. The hyper-parameter k is set adaptively such that the sum of distances of samples to their closest cluster center (inertia) is minimized, with $k \in [4, 6]$. Negative Points Refinement We deploy the SonoSAM semantic segmentation foundation model in our experiments, since it is expected to generalize better to zero-shot segmentation of ultrasound images than SAM. Due to the randomness in the initialization of the complementary positive points $\overline{pts}^{pos}$ selection problem, evaluation is performed over 10 random initializations. 4.5 Training Procedure Our coarse segmentor is DeepLabV3 [9], a state-of-the-art convolutional approach to handling objects of varying scales in images, with a ResNet-50 backbone [34]. As our complete dataset consists of only 275 training images, the model is pre-trained on the ImageNet dataset [35]. To evaluate our method across different data regimes, we train the coarse segmentor on a varying percentage n of the training data, $n \in \{5, 8, 12, 20, 35, 60, 100\}$, sub-sampled at random. The model is trained with an equally weighted BCE loss and a Dice similarity coefficient loss between the predicted and ground-truth segmentation for each class. Each such experiment is trained for 100 epochs, and the checkpoint used for testing is selected based on validation performance. We used the AdamW [31] optimizer with no learning rate scheduler, parameters $\beta_1 = 0.9$, $\beta_2 = 0.999$, and a learning rate of 4e-3. The test set remains constant across the different training experiments. The model training and evaluation code is implemented with the PyTorch [36] framework. 5 Results 5.1 Semantic Segmentation Model In Low-Data Regimes The results of this experiment validate our conjecture that positive pathology points are consistently selected in increasingly coarse segmentation mask predictions. As the segmentation model is trained on increasingly smaller training sets, the segmentation mask prediction becomes coarser: the pathology segmentation boundaries become less defined and its prediction probability decreases (Fig. 4, top row). Nevertheless, the positive pathology points selected by our method remain generally consistent (Fig. 4, middle row). Consistent with these results, we find that the average Dice similarity coefficient of the segmentation model decreases rapidly when the model is trained on increasingly smaller training set sizes (Fig. 5, 'Segmentation Model'). These results validate our method's motivation and approach. 5.2 Semantic Segmentation Refinement With Zero-Shot Foundation Model Fig. 5 summarizes the results of our method in comparison with those of the baseline segmentation model for various training set sizes. Our method's average Dice is higher than the baseline's for every training set size. Moreover,
our method's performance gain grows as the training set size decreases (~10% average Dice increase at the 5% and 8% training set sizes), substantiating the advantage of our method in low-data regimes. Our method's pathology segmentation output for varying training set sizes, compared to the ground-truth segmentation, is illustrated in Fig. 4, bottom row. Figure 5: A summary of the average DITF Dice similarity coefficient of the methods across training set sizes. Depicted are the results of the baseline segmentation model [9] and of our segmentation refinement with the zero-shot SonoSAM foundation model. Error bars depict the standard deviation of our method's statistics. To analyze the stochasticity effect of our method's random initialization of negative points (Sec. 3.2), we compare our method's DITF Dice score statistics over ten random initializations against the baseline segmentation model's average DITF Dice similarity coefficient. Results show that our method's performance is robust, exhibiting relatively low standard deviation at all training set sizes (Fig. 5). Additionally, our method's mean DITF Dice surpasses the baseline's at all but one training set size, and is higher than the baseline by 4% on average. 5.3 Ablation Studies In this section, we present ablation studies substantiating the effectiveness of our negative prompt points refinement (NPPR) model, as well as examining our method's performance when replacing the SonoSAM foundation model with SAM. 5.3.1 SAM vs. SonoSAM as a segmentation foundation model In this study, we investigate the impact of replacing SonoSAM with SAM as the zero-shot semantic segmentation foundation model in our method. Table 3 shows that harnessing SonoSAM's generalizability for MSK ultrasound images is preferable to SAM in low-data regimes and on par with SAM otherwise. 5.3.2 Random negative prompt points selection In this experiment, we investigate the effectiveness of our negative prompt points refinement model by comparing it to a random negative prompt points selection algorithm. Concretely, k negative prompt points are randomly selected from the modified ground-truth tendon mask, $\tilde{T}^{gt}$. Our positive points selection approach remains unchanged. Results in Table 3 demonstrate that this naive selection algorithm achieves subpar average Dice scores across almost all training set sizes, especially in low-data regimes. These results establish the advantage of our negative points optimization algorithm. 6 Conclusions In this paper, we address the performance degradation of a state-of-the-art semantic segmentation model in low-data regimes. A novel prompt points selection algorithm, optimized against a zero-shot segmentation foundation model, was presented as a means of refining a coarse pathology segmentation. Our method's advantages are brought to light in experiments over varying degrees of data scarcity, demonstrating a performance gain over the baseline segmentation model that grows as the training set size decreases (Fig. 5). Table 3: Ablation studies: quantitative segmentation test results of the mean DITF Dice similarity coefficient (DSC) for different approaches over 10 run cycles. Our method uses the zero-shot SonoSAM [6] foundation model.
A higher DSC is better, with the best scores marked in bold. NPPR: Negative Prompt Points Refinement. Methods Percent of the training set 100% 60% 35% 20% 15% 12% 8% 5% Ours without NPPR 44.6% 40.0% 34.2% 27.8% 30.3% 27.5% 20.7% 16.6% Ours with SAM 45.5% 41.6% 39.7% 29.3% 32.9% 28.3% 27.6% 23.0% Ours 46.3% 39.3% 39.6% 31.9% 32.8% 31.8% 32.0% 24.6% Further, we validate our method\u2019s robustness to negative point initialization stochasticity and study the effectiveness of our prompt points refinement model (Section 5.3.2). Results demonstrate that the generalization of SonoSAM in extremely low data regimes is better than SAM\u2019s (Section 5.3.1). Our approach can be used for other ultrasound-based medical diagnostics tasks. An inherent limitation of our two-stage method is that its latency is higher than that of a core segmentation model." +} \ No newline at end of file diff --git a/intro_8K/test_introduction_short_2404.16333v1.json b/intro_8K/test_introduction_short_2404.16333v1.json new file mode 100644 index 0000000000000000000000000000000000000000..54f9bb6fb95d0f303b35611dbb84c245a610fd7b --- /dev/null +++ b/intro_8K/test_introduction_short_2404.16333v1.json @@ -0,0 +1,18 @@ +{ + "url": "http://arxiv.org/abs/2404.16333v1", + "title": "AI Coders Are Among Us: Rethinking Programming Language Grammar Towards Efficient Code Generation", + "abstract": "Besides humans and machines, Artificial Intelligence (AI) models have emerged\nto be another important audience of programming languages, as we come to the\nera of large language models (LLMs). LLMs can now excel at coding competitions\nand even program like developers to address various tasks, such as math\ncalculation. Yet, the grammar and layout of existing programs are designed for\nhumans. Particularly, abundant grammar tokens and formatting tokens are\nincluded to make the code more readable to humans. While beneficial, such a\nhuman-centric design imposes an unnecessary computational burden on LLMs where\neach token, either consumed or generated, consumes computational resources. To\nimprove inference efficiency and reduce computational costs, we propose the\nconcept of AI-oriented grammar, which aims to represent the code in a way that\nbetter suits the working mechanism of AI models. Code written with AI-oriented\ngrammar discards formats and uses a minimum number of tokens to convey code\nsemantics effectively. To demonstrate the feasibility of this concept, we\nexplore and implement the first AI-oriented grammar for Python, named Simple\nPython (SimPy). SimPy is crafted by revising the original Python grammar\nthrough a series of heuristic rules. Programs written in SimPy maintain\nidentical Abstract Syntax Tree (AST) structures to those in standard Python,\nallowing execution via a modified AST parser. In addition, we explore methods\nto enable existing LLMs to proficiently understand and use SimPy, and ensure\nthe changes remain imperceptible for human developers. 
Compared with the\noriginal Python, SimPy not only reduces token usage by 13.5% and 10.4% for\nCodeLlama and GPT-4, but can also achieve equivalent, even improved,\nperformance over the models trained on Python code.", + "authors": "Zhensu Sun, Xiaoning Du, Zhou Yang, Li Li, David Lo", + "published": "2024-04-25", + "updated": "2024-04-25", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI", + "cs.PL" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "High-level programming languages, such as the widely adopted Python, are designed with two types of audiences in mind [6]: ma- chines that compile and execute programs and humans who read, comprehend, and write programs. Machines focus on the opera- tional semantics of programs, while humans additionally emphasize programs\u2019 readability, a key factor in understanding source code. For example, one of the guiding design principles for Python [31] def sum ( nums ) : if len ( nums ) = = 0 : raise Value Error sum _ num = 0 for num in nums : sum _ num num return = num sum _ + Good smell \"Looks\" wordy Bad smell def sum ( nums ) : if len ( nums ) = = 0 : raise Value Error sum _ num = 0 for num in nums : sum _ num num return = num sum _ + \"Looks\" cleaner Figure 1: An illustration of how LLMs and human program- mers perceive the source code. is that \u201creadability counts.\u201d Consequently, these languages incorpo- rate numerous human-centric design elements within their gram- mar. For example, programming languages utilize explicit delimiters to separate code structures. These elements, while enhancing hu- man readability, may not be necessary to convey the program\u2019s operational semantics. Recently, the audiences for programming languages have ex- panded to include AI models, particularly Large Language Models (LLMs) that can analyze, generate, and execute code. This is evident by the impressive performance that LLMs achieved in code genera- tion [16]. For example, a recently released LLM, AlphaCode2 [3], is reported to perform better than 85% of human participants in a pro- gramming competition. Moreover, many LLM-powered assistants, such as ChatGPT [28] and Bard [14], are now equipped with code execution environments, which enable them to execute generated code and provide responses based on the results. Thus, the role of LLMs has evolved from mere code generators to active \u201cdevelopers\u201d that utilize programming to accomplish various tasks, including mathematical computations and file processing. This paradigm shift indicates a new era in which AI models emerge as an important group of users of programming languages. While AI models have taken on the new role, the design of code grammar is yet to evolve to accommodate their unique needs. The readability of the source code could impose an additional compu- tational burden on LLMs to read and generate programs, as extra tokens are particularly placed to make the code layout more friendly arXiv:2404.16333v1 [cs.SE] 25 Apr 2024 , , Zhensu Sun, Xiaoning Du, Zhou Yang, Li Li, and David Lo for reading. However, readability may not be essential for LLMs to perform coding tasks. Studies have revealed that code models do not capture much information relevant to readability [43], and readability-enhancing symbols like \u201c:\u201d received significantly lower attention compared to other elements such as variable names[50]. We illustrate how a program is perceived by humans and AI models in Figure 1. 
When certain elements that enhance readability are omitted from the code while preserving its underlying semantics, it becomes more challenging for humans to interpret, yet AI models can process the code more efficiently. The observation makes us wonder: What is a suitable grammar for AI models? Exploring this question is vital for optimizing the efficiency of LLMs and reduc- ing energy waste in dealing with unnecessary tokens, especially given that the high operational cost of LLMs sets a big challenge for providers to generate profit [17] from them. As AI models consume and generate source code in a token-by-token manner, with one feed-forward process for each token, reducing the tokens in code representation holds the potential to proportionally reduce the time and energy cost. This motivates us to propose the concept of AI-Oriented Gram- mar, a grammar specifically designed for AI models instead of humans. The core idea is to derive grammar rules that keep the code representations concise (with a minimal number of tokens to AI models). Notably, the code crafted in this grammar can be parsed with its adapted parser and then executed to obtain the same result as the original grammar. A few challenges are in the way of designing such a new grammar and melting it into AI models. The AI models are expected to not only comprehend code written in this grammar but also generate code following its rules to better serve the goal of efficiency. At the same time, human developers, who are directing the development, expect to work with grammar that they find friendly and are familiar with. Given these challenges, the realization of this concept remains uncertain. To assess the feasibility of AI-oriented grammar, we embarked on an exploratory study. The study seeks to consider the implications and limitations of integrating AI-oriented grammar into the existing code generation workflow. It is guided by three research questions, each addressing a key challenge. RQ1. What is the token reduction capacity of AI-oriented grammar in source code? Whether and to what extent an AI-oriented grammar can reduce the tokens remains an open question. We fill this gap by imple- menting a proof-of-concept AI-oriented grammar and assessing its performance. Specifically, we explore a new grammar for Python, named SimPy, by heuristically modifying the standard grammar. Compared to the standard Python grammar, we prohibit using to- kens popularly hired to style the code appearance, e.g., whitespace and newline, and simplify keywords, operators, and delimiters to a more compact form. The modifications are designed to be simple, as this is the first attempt to explore such AI-oriented grammar. We also developed an AST parser for SimPy that can parse its code into the same AST as standard Python code, as well as a converter for seamless code transitions between SimPy and Python code. A comparative analysis of SimPy and Python\u2019s original grammar was conducted using tens of tokenizers employed by existing LLMs. The findings indicate a notable reduction in token usage when employ- ing SimPy, with decreases ranging between 8.6% and 34.7%, thus reducing the time and computational cost during inference by a similar level [19]. For example, the tokenizer of GPT-4 demonstrates a significantly enhanced efficiency with SimPy, achieving a 10.4% reduction in token size. RQ2. How can AI models understand AI-oriented grammar? 
Prior research demonstrates that AI models are capable of com- prehending human-centric grammars of existing programming languages [16]. However, how these models can learn AI-oriented grammar remains unexplored. We thus further experiment with SimPy to find an effective way. We explored two different train- ing strategies: directly training a model on a SimPy-based code dataset (converted seamlessly from a Python dataset) and fine- tuning a model, originally trained with a Python dataset, on the SimPy dataset. A control group, where a model is directly trained on the Python code dataset, is also included for comparison. The mod- els trained with either strategy should achieve at least equivalent accuracy compared with the control group. Otherwise, it would be impractical to adopt AI-oriented grammar. For each training strat- egy, we experiment with three models, CodeGen-NL, TinyLlama and Pythia. The experimental results reveal that models initially trained with Python can adapt effectively to SimPy. For instance, our CodeGen model, initially trained on Python, attains a 7.32% Pass@10 on HumanEval, while it even reaches a 9.15% Pass@10 after the fine-tuning on SimPy. RQ3. How can AI-oriented grammar support real-world scenarios? Given that AI-oriented grammar may compromise human read- ability, its application is somewhat restricted. Thus, a remaining challenge for AI-oriented grammar is: how could an AI-oriented grammar be used in real-world scenarios, particularly when human- readable source code is necessary? To address this challenge, we first discuss the basic usage scenario of AI-oriented grammar, i.e., the scenarios where the code generated by the AI models is not intended to be displayed to human users. In such scenarios, such as AI agents [40] for ordinary users who are unfamiliar with pro- gramming, the model just generates code in AI-oriented grammar, executes it, and provides outputs to the user based on execution results. However, there are still many scenarios where human de- velopers need to review the code, such as coding assistants. We thus propose an inference framework for code generation named DualCode. DualCode utilizes a rule-based converter to convert code between these grammars, ensuring that users interact with human- readable code as usual, while the model benefits from the efficiency of AI-oriented grammar. Our tests confirm that DualCode intro- duces negligible latency, with the converter of SimPy processing code under 500 tokens in less than 1.0 ms. We will release the source code once the paper is accepted. The contributions of this paper are summarized as follows: \u2022 We propose the concept of AI-oriented grammar and empirically explore its feasibility and potential, paving the way for future improvements in programming language design that prioritize AI efficiency. AI Coders Are Among Us: Rethinking Programming Language Grammar Towards Efficient Code Generation , , def sum ( nums ) : if len ( nums ) = = 0 : raise Value Error sum _ num = 0 nums : sum _ num num return = num sum _ + for num in Explicit delimiters Visual Coding Style Figure 2: Illustrations of human-centric design elements in Python. \u2022 We implement the first AI-oriented grammar for Python, named SimPy, which can reduce at least 8.3% tokens in Python source code. 
\u2022 We propose a novel code generation framework, DualCode, ex- panding the applicability of AI-oriented grammar beyond AI-only scenarios with negligible additional latency.", + "main_content": "In this section, we critically examine the human-centric elements in the grammar of current programming languages and propose the concept of AI-oriented grammar. Following this, we introduce the dataset an empirical study driven by three research questions to explore the feasibility of this concept. 2.1 Human-centric Grammar Design As discussed in Section 1, modern programming languages are predominantly designed with human-centric grammar. This design philosophy originates from the longstanding reality that humans were the only developers for decades. In the current era of LLMs, this human-centric design philosophy has not been significantly challenged. To better understand this orientation, we critically examine the grammar of widely used programming languages, focusing on lexical and syntactical elements that enhance human readability. Below, we summarize the identified patterns and provide examples in Figure 2: Visual Coding Style The programming language grammar is deliberately crafted to accommodate diverse coding styles. Although not mandatory, styles like those recommended in the Python PEP8 guide [44] rely on grammatical support. For example, the coding style requires the programs to be written in multiple lines instead of a single extremely long line, easing human code review on screens. This necessitates several lexical elements: line breaks to separate lines, indents to visualize code blocks, and line continuation characters for splitting long lines. Figure 2 demonstrates these aspects, with line breaks and indents highlighted in purple. Similarly, the coding style suggests surrounding each binary operator with a single white space on either side. Therefore, lexical grammar must accommodate such stylistic elements, even if they may not contribute to the core semantics in parsing. Intuitive Notations The human-centric syntax of programming languages is designed to be intuitively understandable to humans. Common operators like \u201c+\u201d for addition and \u201c=\u201d for assignment are chosen for their familiarity, and derivations like the augmented assignment operator \u201c+=\u201d maintain this intuitive connection. Although potentially more concise symbols could replace these (e.g., using a brand-new symbol \u201c$\u201d for \u201c+=\u201d), they are still deliberately designed to maintain human readability. Similarly, for structural clarity, programming languages often employ explicit delimiters, such as symbols or keywords, to define code structures despite these delimiters not being essential for parsing. For instance, Python\u2019s compound statements, such as the if statement and for statement, use a colon to demarcate the header from the body. While a parser might deduce these components from line breaks alone, the colon acts as a visual aid, as illustrated in Figure 2 where colons are highlighted in red. This emphasis on intuitive notation and explicit delimiters, although not essential for parsing, significantly aids human comprehension. 2.2 AI-Oriented Grammar Grammar is a rule set that defines how the source code should describe the programming language\u2019s semantics in aspects of lexis and syntax, using notations such as symbols and keywords. 
The primary function of the notations in the grammar is two-fold: to define a program\u2019s structure for machine execution and to enhance visual comprehension for human readability. Given that AI models do not require assistance in visual comprehension, the focus of AI-oriented grammar is solely on structural definition. We thus consider a notation unnecessary for AI models if it does not contribute to accurate parsing by the parser. AI-oriented grammar is designed with indispensable notations. In the design process of a programming language, semantics are defined first, followed by the development of a grammar to represent them. Therefore, employing AI-oriented grammar does not alter the fundamental semantics of the programming language. Codes written in either AI-oriented or human-centric grammar of a programming language can still be parsed into the same AST and then executed. 2.3 Python Code Dataset for Our Study As a newly proposed concept, we are still unclear whether AIoriented grammar can be realized and what scenarios it can be applied to. To address these uncertainties and explore the potential of AI-oriented grammar, we conduct an empirical study guided by three critical research questions, respectively introduced in Section 3, Section 4, Section 5. Our study is centered around Python, the main programming language of the execution environment for LLMs like GPT-4 and Bard to address programming-required tasks. We utilize the Python subset of starcoderdata [23], a filtered variant of The Stack dataset [21], a comprehensive collection of over 20 million code files sourced from open-source GitHub repositories. We keep the code files from the repositories with over 100 stars, resulting in 623,887 code files. The dataset is partitioned into training and validation sets in a 95:5 ratio. We do not create a separate testing set, as we plan to evaluate the model\u2019s performance using other , , Zhensu Sun, Xiaoning Du, Zhou Yang, Li Li, and David Lo def two_sum(nums: list[int], target: int) -> list[int]:\\n chk_map: dict[int, int] = {}\\n for index, val in enumerate(nums):\\n compl = target val\\n if compl in chk_map:\\n return [chk_map[compl], index]\\n chk_map[val] = index\\n return [] Python SimPy 72 tokens 60 tokens Same AST two_sum nums:list[int] target:intlist [int]chk_map:dict[int int]={} index,val enumerate(nums)compl=target-val complchk_map[chk_map [compl] index]chk_map[val]=index [] Same Execution Results Figure 3: A comparison between Python and SimPy source code, tokenized by GPT-4\u2019s tokenizer. Continuous characters with the same background color represent the same token. Notably, there are no line breaks in SimPy and we add these line breaks in the figure for our human readers. established evaluation datasets. The code snippets in the evaluation datasets are excluded from the training dataset. 3 TOKEN REDUCTION WITH AI-ORIENTED GRAMMAR (RQ1) In this section, we present an instance of AI-oriented grammar to answer RQ1: What is the token reduction capacity of AI-oriented grammar in source code? We propose an AI-oriented grammar for Python as a proof-of-concept (Section 3.1) and then proceed to evaluate the extent of token reduction achievable with this grammar (Section 3.2). 3.1 An AI-oriented grammar for Python To demonstrate the potential benefits of AI-oriented grammar, we propose and implement the first AI-oriented grammar for Python, named Simple Python (SimPy), as a proof-of-concept. 
SimPy is derived from the original grammar of Python, where we simplify the grammar specification according to the philosophy of AI-oriented grammar. Alongside SimPy, we develop a toolkit including a parser to interpret SimPy source code into Python\u2019s AST, and a converter for seamless code translation between SimPy and Python. 3.1.1 Design. The design of SimPy is guided by the philosophy of AI-oriented grammar, resulting in a grammar that prioritizes AI processing efficiency over human readability. When being tokenized, the code of SimPy is considerably more compact than Python\u2019s, facilitating faster model processing. A comparative illustration of SimPy and Python code, showcasing the same AST but with fewer tokens in SimPy (measured by the tokenizer of GPT-4), is presented in Figure 3. Next, we introduce what is changed to craft SimPy. The modifications to production rules are implemented for the grammar specification of Python 3.12 (the latest version). The modifications for SimPy are driven by our understanding of AI models and the semantics of Python, which is not guaranteed to be optimal but is sufficient to serve as a proof-of-concept demonstration for AI-oriented grammar. Limited by the space, we will only introduce the major modifications here, with the complete grammar specification available in our artifact. Below, we highlight major categories of changes implemented: Replace notations with tokens. In SimPy, we replace keywords and most symbols (e.g., \u201cin\u201d, \u201c>\u201d, \u201c>=\u201d) with distinct token placeholders (e.g., \u201c\u201d, \u201c\u201d, \u201c\u201d). These placeholders require the corporation from the tokenizer side and should be recognized as individual tokens or as part of a token. This approach results in the creation of 78 new tokens. Some single-character symbols, like \u201c.\u201d, \u201c:\u201d, \u201c=\u201d, are retained as they are unambiguous and represent atomic units already optimized for tokenization. This replacement goes beyond a simple tokenization. Firstly, it eliminates the need for surrounding whitespace, as seen in the conversion from Python\u2019s \u201ca in b\u201d to SimPy\u2019s \u201cab\u201d. Secondly, it differentiates the same notation used in different contexts, such as \u201c\u201d and \u201c\u201d, to prevent potential conflicts during the creation of SimPy. Finally, it consolidates multi-character symbols in Python, which may be recognized as multiple tokens, into single tokens in SimPy, ensuring the integrity of their meaning. Restrict coding style. Python utilizes white spaces, line breaks, indents, and line continues in grammar to accommodate a humanfriendly visual coding style. SimPy, on the other hand, streamlines this by preserving only essential separators. Specifically, the line breaks, denoted using \u201c\\n\u201d, are replaced with a unique token \u201c\u201d. This token is omitted if the subsequent line starts with a token that inherently signifies the start of a new line, such as \u201cdef\u201d for function definitions or \u201cclass\u201d for class definitions. In this way, SimPy optimizes token usage for line breaks. Indents are handled using two unique tokens, \u201c\u201d and \u201c\u201d, placed at the beginning and end of a block, respectively. This ensures that only two tokens are needed, regardless of how many lines the block has. Additionally, non-separating white spaces and line continuation characters are eliminated to further streamline the grammar. Simplify grammar tokens. 
For every grammar token in every production, we review whether it can be removed, merged with others, or replaced with white spaces. Using white space aligns with the nature of current tokenizers, which usually treat white spaces as part of a token to reduce the token number. In Table 1, we compare the grammar specifications of key productions before and after such simplification and calculate the resulting token reductions. For example, the grammar tokens in the \u201cfunction_def\u201d statement are reduced from 4 of 1, with an additional token being optional. Our goal is to minimize grammar token usage in every production. However, certain design decisions in one production might introduce parsing ambiguities when considering the grammar globally. For example, in Python, strings can be concatenated without separators, such as \u201c\u2018hello\u2019 \u2018world\u2019 \u201d. It causes a conflict when we no longer use commas to separate the elements in a list, such as \u201c[\u20181\u2019 \u20182\u2019 \u20183\u2019]\u201d, causing ambiguity about whether \u201c\u20181\u2019 \u20182\u2019 \u20183\u201d\u2019 is a single AI Coders Are Among Us: Rethinking Programming Language Grammar Towards Efficient Code Generation , , Table 1: Comparison of grammar specifications for Python and SimPy, using the official Python grammar notation ([32]). The grammar tokens are in blue. The table also includes the count of grammar tokens for each production: \u201cN\u201d represents the number of lines, \u201cn\u201d signifies the count of repetitive elements, and \u201c?\u201d indicates that the number of tokens is conditional. Name Grammar Specification #Token block Python NEWLINE INDENT statements DEDENT N+1 SimPy \u2018\u2019 statements \u2018\u2019 2 function_def Python \u2018def\u2019 NAME [type_params] \u2018(\u2019 [params] \u2018)\u2019 [\u2018->\u2019 expression ] \u2018:\u2019 [func_type_comment] block 4+1? SimPy \u2018\u2019 NAME [type_params] [params] [\u2018\u2019 expression ] [func_type_comment] block 1+1? class_def Python \u2018class\u2019 NAME [\u2018(\u2019 [arguments] \u2018)\u2019 ] \u2018:\u2019 block 2+2? SimPy \u2018\u2019 NAME [\u2018(\u2019 [arguments] \u2018)\u2019 ] block 1+2? if_stmt Python \u2018if\u2019 named_expression \u2018:\u2019 block elif_stmt 2 SimPy \u2018\u2019 named_expression block elif_stmt 1 for_stmt Python \u2018for\u2019 star_targets \u2018in\u2019 ~ star_expressions \u2018:\u2019 [TYPE_COMMENT] block [else_block] 3 SimPy \u2018\u2019 star_targets ~ star_expressions [TYPE_COMMENT] block [else_block] 1 with_stmt Python \u2018with\u2019 \u2018,\u2019.with_item+ \u2018:\u2019 [TYPE_COMMENT] block 2+n SimPy \u2018\u2019 \u2018 \u2019.with_item+ [TYPE_COMMENT] block 1 try_stmt Python \u2018try\u2019 \u2018:\u2019 block except_block+ [else_block] [finally_block] 2 SimPy \u2018\u2019 block except_block+ [else_block] [finally_block] 1 while_stmt Python \u2018while\u2019 named_expression \u2018:\u2019 block [else_block] 2 SimPy \u2018\u2019 named_expression block [else_block] 1 import_from Python \u2018from\u2019 (\u2018.\u2019 | \u2018...\u2019)* dotted_name \u2018import\u2019 import_from_targets 2+n? SimPy \u2018\u2019 (\u2018.\u2019 | \u2018...\u2019)* dotted_name import_from_targets 1+n? simple_stmts Python \u2018;\u2019.simple_stmt+ [\u2018;\u2019] NEWLINE n+1+1? SimPy [\u2018\u2019].simple_stmt+ [\u2018\u2019] n?+1? concatenated string or three separate strings. To address such ambiguities, strategic trade-offs are necessary. 
For example, we add an additional grammar token, \u201c\u201d, between the juxtaposed strings so that we can eliminate the commas in a list. Though it leads to an increase in the token number for the juxtaposed strings, it is still beneficial for the overall efficiency of SimPy. 3.1.2 Unambiguity of SimPy . To determine whether a grammar has ambiguity is theoretically undecidable [13]. In practice, parser generator tools are commonly hired to check for ambiguities in grammar, including those of popular programming languages [15]. A parser generator can find a wide range of ambiguities in the grammar, such as conflicts that arise when the parser has two possible actions at one step. Practically, this is almost the best way to check the ambiguity of SimPy. We have successfully generated parsers for SimPy using the GLR (generalized left-to-right rightmost derivation parser) parsing algorithm [22] form tree-sitter [42], where no ambiguity is detected. Next, we provide an analytical discussion about why our transformations are unlikely to introduce ambiguity to the grammar. First of all, the transformations are only made to terminal notations, which act as keywords or delimiters. Changes made to keywords are guaranteed to represent its unique semantics, while changes made to delimiters should not affect the recognition of the construct, and its precedent and subsequent constructs. Case I: New unique notations are added or introduced as replacements. Importantly, different notations are not replaced with the same new notations. To this end, the new notations do not interfere with production rules for which the transformation is not applicable. Given that they are semantically equivalent notations as the original one, the parsing of the affected production rules remains the same. For example, replacing the \u2018NEWLINE INDENT\u2019 in the production rule of block (see Table 1) with \u2018\u2019 conveys the same semantics that a block is about to start. Case II: Existing notations are removed. Arbitrary removal notations may introduce ambiguity to the grammar. We carefully design a few heuristics when removing notations such that they are unlikely to cause problems. \u2022 Remove notations that hold redundant semantics as their adjacent notations. For example, \u2018:\u2019 in many statements indicates the end of the previous construct and the start of a new construct, e.g., in \u2018if\u2019 named_expression \u2018:\u2019 block elif_stmt. However, the block construct initiates with its own starting symbol, , , Zhensu Sun, Xiaoning Du, Zhou Yang, Li Li, and David Lo making the construct itself distinguishable from any previous construct. Hence, removing \u2018:\u2019 is safe for this case. \u2022 Remove delimiters used to scope a construct when the scope of its precedent and subsequent constructs are clear. For example, the \u2018(\u2019 and \u2018)\u2019 for parameters are actually unnecessary in function_def_raw := \u2018def\u2019 NAME [type_params] \u2018(\u2019 [params] \u2018)\u2019 [\u2018->\u2019 expression ] \u2018:\u2019 [func_type_comment] block. NAME is an atomic token, thus will not interfere the beginning of parameters when type_params is absent. type_params are surrounded by \u2018[\u2019 and \u2018]\u2019, making their presence not an issue for recognizing params. Hence, \u2018(\u2019 can be safely removed. Now, looking at the subsequent constructs, [\u2018->\u2019 expression ], \u2018:\u2019, [func_type_comment], or block possesses a unique indicator of their beginning. 
Hence, \u2018)\u2019 can be safely removed as well. Another example is the \u2018import\u2019 keyword in import_from := \u2018from\u2019 (\u2018.\u2019 | \u2018...\u2019)* dotted_name \u2018import\u2019 import_from_targets. Since dotted_name is a must and contains no white spaces, hence the white space between dotted_name and import_from_targets can perfectly separate these two constructs. Removing \u2018import\u2019 is also fine. 3.1.3 Semantic equivalence between SimPy and Python. SimPy is designed as a simplified grammar of Python, which means a program written in Python can be equivalently and deterministically transformed to its counterpart in SimPy , and vice versa. In other words, Python and SimPy are semantically equivalent. We prove this statement in Theorem 1. Formally, we define a grammar \ud835\udc3aand a grammar \ud835\udc3a\u2032. \ud835\udc3a\u2032 is obtained via a transformation \ud835\udc47to the production rules in \ud835\udc3a. Given a production rule, \ud835\udc47is restricted to adding, replacing, or removing a terminal notation or a sequence of terminal notations. The transformation between Python and SimPy is an instance complying with this restriction. For example, \ud835\udc47(block := NEWLINE INDENT statements DEDENT ) = block:= \u2018\u2019 statements \u2018\u2019. The transformations can be easily extended to parse trees, where \ud835\udc47transforms the (production rule) nodes of a tree. Given a parse tree \ud835\udc61, we define its corresponding program as \ud835\udc5d= \ud835\udf0f(\ud835\udc61). We first prove that the parse tree of a program written in \ud835\udc3ais equivalent to the parse tree of its counterpart in \ud835\udc3a\u2032 modulo the transformation \ud835\udc47in Lemma 1. In Theorem 1, we further prove that these program pairs share the same ASTs, and naturally the same semantics. Lemma 1. Given any program \ud835\udc5d, the parse tree \ud835\udc61generated by grammar \ud835\udc3ais equivalent to the parse tree \ud835\udc61\u2032 generated from \ud835\udc5d\u2032 = \ud835\udf0f(\ud835\udc47(\ud835\udc61)) by grammar\ud835\udc3a\u2032 modulo the transformation\ud835\udc47, if no ambiguity exists in \ud835\udc3a\u2032. That is \ud835\udc61\u2032 = \ud835\udc47(\ud835\udc61). Proof. We give the proof by structural induction on \ud835\udc5d. Base case: \ud835\udc5dis an atomic program construct. This case is trivial as we don\u2019t have any transformation for the atomic program. Naturally, the parse trees are identical. Inductive case: We present the proof based on different compound language constructs. Take the block construct of SimPy for an example. Assuming \ud835\udc5d = NEWLINE INDENT statements DEDENT, the parse tree for \ud835\udc5dis \ud835\udc61=block (NEWLINE, INDENT, statements(\ud835\udc610), DEDENT), where \ud835\udc610 is the parse tree for the stmt construct. By definition,\ud835\udc5d\u2032 =\ud835\udc47(\ud835\udc5d) = \u2018\u2019\ud835\udc47(statements) \u2018\u2019. Obviously, \ud835\udc5d\u2032 still matches the production rule of block in \ud835\udc3a\u2032. Given no ambiguity in \ud835\udc3a\u2032, its parse tree is deterministic. Thus, we have \ud835\udc61\u2032 = block(\u2018\u2019, statements(\ud835\udc61\u2032 0), \u2018\u2019), where the parse tree for the statements construct is \ud835\udc61\u2032 0. By the induction hypothesis, \ud835\udc47(\ud835\udc610) = \ud835\udc61\u2032 0. Applying \ud835\udc47to \ud835\udc61, we have \ud835\udc47(\ud835\udc61) = \ud835\udc61\u2032. Similarly, we can prove other constructs. 
Hence, the lemma is proved. \u25a1 Theorem 1. Python and SimPy are semantically equivalent. Proof. Two programs are semantically equivalent if they share the same AST. In the next, we prove that for any program \ud835\udc5din Python, it shares the same AST as its counterpart \ud835\udc5d\u2032 in SimPy , and vice versa. We assume the transformation from Python to SimPy is \ud835\udc47. Assuming the parse tree for \ud835\udc5dis \ud835\udc61and the parse tree for \ud835\udc5d\u2032 is \ud835\udc61\u2032, with Lemma 1, we know that \ud835\udc61\u2032 = \ud835\udc47(\ud835\udc61). ASTs are obtained from parse trees by abstracting away some concrete syntax details but retaining the essential semantic structures. Since \ud835\udc47is a non-semantic transformation, \ud835\udc61and \ud835\udc61\u2032 differ only at the non-semantic nodes, such that the parse trees are processed to the same AST. Hence, we can prove that for any program in Python, its counterpart in SimPy is semantically equivalent to it. Similarly, we can prove that for any program in SimPy , its counterpart in Python is semantically equivalent to it as well. Thus, the theorem is proved. \u25a1 3.1.4 Implementation. Based on the grammar specifications of SimPy, we develop a toolkit for it, including an AST parser for SimPy code and a converter for seamless translation between SimPy and Python source codes. The parser is built upon tree-sitter [42], a popular parser generator tool. We first describe the grammar specification of SimPy in the configuration file of the tree-sitter and then generate the parser. With the help of the GLR algorithm from the tree-sitter, we ensure SimPy resolves all the conflicts and no ambiguity exists. The generated parser can parse the SimPy source code into the AST of Python. Based on this parser, we further implement a converter, where specific conversion rules are established for each node of the AST. From a pragmatic point of view, we test our implemented toolkits by conducting round-trip transformations, where Python source code is first converted into SimPy code and subsequently retranslated back to Python. Our first tests on the Python dataset revealed that, ignoring all whitespace, the textual content of the code remains unchanged after the transformation. In addition, we assess its soundness through execution results. We perform the round-trip transformation to the groundtruth code snippets of HumanEval and run the test cases on both the transformed and the original code. The execution results of all the transformed code and the original code are exactly the same, which also indicates the soundness of our implementation. 3.2 Experiments of RQ1 In this section, we detail the tokenizers employed in our experiments and describe the experimental methodology and its corresponding results. AI Coders Are Among Us: Rethinking Programming Language Grammar Towards Efficient Code Generation , , Table 2: Percentage of token reduction achieved with SimPy. The \u201cCode\u201d and \u201cWeb\u201d in the \u201cVocab Source\u201d column represent the sources for constructing the tokenizer\u2019s vocabulary: code repositories and internet data, respectively. 
Tokenizer Vocab Source Vocab Size Tokens Python SimPy CodeBert Code 50k 1.33B 0.87B 34.7%\u2193 GPT2 Web 50k 1.33B 0.87B 34.7%\u2193 CodeLlama Web 32k 0.97B 0.84B 13.5%\u2193 WizardCoder Web 32k 0.97B 0.84B 13.5%\u2193 DeepSeek-Coder Web 32k 0.97B 0.84B 12.9%\u2193 CodeGen Web 51k 0.93B 0.82B 12.6%\u2193 CodeT5+ Web 51k 0.93B 0.82B 12.6%\u2193 Codex Web 51k 0.93B 0.82B 12.6%\u2193 CodeT5 Code 32k 0.91B 0.78B 13.8%\u2193 StarCoder Code 49k 0.83B 0.76B 8.6%\u2193 SantaCoder Code 49k 0.83B 0.76B 8.8%\u2193 Replit-code Code 33k 0.82B 0.75B 8.6%\u2193 GPT-3.5 Web 100k 0.71B 0.63B 10.4%\u2193 GPT-4 Web 100k 0.71B 0.63B 10.4%\u2193 3.2.1 Tokenizers. Our experiments encompass a broad spectrum of tokenizers from various LLMs. The main difference between them is the training corpus, leading to different token vocabularies. GPT-2 [35], Codex [8], GPT-3.5 [29], GPT-4 [30]: These tokenizers, released by OpenAI, are trained on a mixed corpus, including both natural language and programming language, with GPT-4 being the latest version offering state-of-the-art performance in various language tasks. CodeLlama [37], WizardCoder [24], DeepSeek-Coder [1]: These tokenizers are derived from the tokenizer of Llama 2 [41] which is also trained on the mixed corpus. SantaCoder [2], StarCoder [23], Replit-code [36]: These tokenizers are specialized for code, having been trained exclusively on programming language datasets, and are thus more adept at handling source code. CodeGen [27], CodeT5 [47], CodeT5+ [46]: Created by Salesforce, these tokenizers are extended based on the vocabulary of GPT2 with additional tokens representing repeating tokens of tabs and white spaces. 3.2.2 Results. To answer RQ1, we conducted an evaluation involving the representation of code files from our Python dataset in both its original grammar and in SimPy, followed by the tokenization using the same tokenizer for each representation. We created the SimPy dataset by converting the Python dataset with our converter. In tokenizing the SimPy code, we modify the tokenizers to include tokens of SimPy in their vocabularies. In total, 14 tokenizers from popular LLMs are evaluated in our experiments, where each tokenizer\u2019s vocabulary source and size are also documented to offer a comprehensive view of SimPy\u2019s performance across different models. By examining the variation in token numbers, we evaluated SimPy\u2019s effectiveness in reducing token size, thus showcasing the potential benefits of AI-oriented syntax. As revealed in table 2, SimPy can reduce the number of tokens by 8.6% to 34.7%, depending on the tokenizers. The GPT-4 and GPT-3.5 tokenizers, which are already the most efficient in representing Python source code, show a further reduction of 10.4% in token count with SimPy. For tokenizers trained on code corpora, such as Replit-code and StarCoder, SimPy achieved a token reduction ranging from 8.6% to 13.8%. Tokenizers trained on web-based corpora like CodeGen and CodeT5 also exhibited significant reductions, between 12.6% and 13.5%. The most pronounced impact of SimPy is observed with the least efficient tokenizers, CodeBert and GPT-2, where a remarkable 34.7% reduction in token count was achieved. These promising results highlight SimPy\u2019s potential to reduce token count for source code representation. As estimated by OpenAI [19], the Floating-point operations (FLOPS) required for generating each token during inference can be regarded as being only relevant to the model size when the context size is fixed. 
Therefore, a reduction in token count can be directly translated to a decrease in FLOPS at a similar level, resulting in faster inference speeds given the fixed computing speed of the device. Answer to RQ1: AI-oriented grammar, exemplified using SimPy, effectively reduces the number of tokens required for source code representation, with models like GPT-4 benefiting from a 10.4% reduction. Correspondingly, it leads to a speed up and a computing saving during inference at a similar level. 4 MODEL TRAINING WITH AI-ORIENTED GRAMMAR (RQ2) In this section, we aim to answer RQ2: How can AI models understand AI-oriented grammar? We experimentally investigate whether AI models can retain their accuracy when trained with AI-oriented grammar. We describe our training strategies in Section 4.1 and assess their effectiveness on two language models in Section 4.2. 4.1 Training Strategies Training AI models with AI-oriented grammar is a pivotal step to enable the model to effectively deal with source code in this new format. Despite the efficiency gains demonstrated by SimPy, such training should not compromise the model\u2019s accuracy. To explore the feasibility of such training, we experiment with two different strategies. Next, we introduce the strategies in the experiment, from tokenizer refining to model training. Tokenizer Refining SimPy introduces 78 new tokens for the tokenizers to recognize. For example, the \u201cdef\u201d keyword of the original Python grammar is replaced by a token \u201c\u201d. Given the existing association between the pre-trained model and its tokenizer, completely retraining the tokenizer on SimPy code to optimize token distribution is impractical. Instead, we opt for a more feasible approach: expanding the tokenizer\u2019s vocabulary to include these new tokens. Correspondingly, this modification requires resizing , , Zhensu Sun, Xiaoning Du, Zhou Yang, Li Li, and David Lo the embedding matrix ([vocab size * embedding size]) and the output layer ([hidden state size * vocab size]) to fit the expended vocab size. This expansion introduces a few new parameters, mainly in the output layer, around 78 * hidden_size parameters. For instance, modifying a CodeGen [27] model with a hidden state size of 2048 introduces around 160 thousand new parameters, a negligible increase (less than 0.01%) in the total parameter count. Moreover, the resizing will randomly initialize both the embedding vector for each new token and the weight of the output layer, which will be updated during the model training. Model Training Our study explores two basic training strategies: 1) directly training a model on the SimPy code dataset, referred to as SimPy, and 2) sequentially training a model first on the Python dataset and then on the SimPy code dataset, referred to as Python\u2192SimPy. If such basic strategies work, further improvement in efficiently adapting AI-oriented grammar is completely feasible. Moreover, we construct a control group: directly training a model on the Python code dataset, denoted as Python. The performance of the two strategies should match or surpass the model from the control group; otherwise, they are not practical. To control the variable, all training sessions across the two strategies and the control group are conducted under identical conditions, including the training environment, initial model, and training hyper-parameters. Notably, the SimPy dataset is converted from the Python dataset, ensuring no external data is involved. 
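The tokenizer-refining step described above maps directly onto standard Hugging Face APIs. The following is a minimal sketch, assuming the 78 SimPy placeholder strings are available as a list; the token strings shown are purely illustrative placeholders, not the actual SimPy vocabulary.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative subset of the 78 SimPy placeholder tokens (actual strings are defined by the grammar).
SIMPY_TOKENS = ["<def>", "<for>", "<in>", "<newline>", "<indent>", "<dedent>"]

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-nl")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-350M-nl")

num_added = tokenizer.add_tokens(SIMPY_TOKENS)   # extend the vocabulary with SimPy placeholders
model.resize_token_embeddings(len(tokenizer))    # grow the embedding matrix and output layer;
                                                 # new rows are randomly initialized and learned during training
print(f"Added {num_added} tokens; new vocab size: {len(tokenizer)}")
```

Resizing adds new rows to both the input embedding and the output projection for the expanded vocabulary, which accounts for the roughly 78 * hidden_size additional parameters mentioned above.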
Moreover, for the Python+SimPy setting, we vary the proportion of the SimPy dataset used, i.e., 10%, 20%, 50%, and 100%, to assess the required volume of data for effective fine-tuning. 4.2 Experiments of RQ2 We first present the experimental setup for RQ2, including the models used, evaluation metrics, and implementation details. Then, we report the experimental results and answer the research questions. 4.2.1 Models. We adopt three widely used models in our research community, namely CodeGen-NL, TinyLlama, and Pythia, whose parameter sizes range between 350M and 1.1B. All these models serve as the initial pre-trained model for our experiments. Though these are not the latest state-of-the-art models, they suffice to validate the feasibility of learning AI-oriented grammar like SimPy. We will further discuss the impact of this decision in Section 7. CodeGen-NL: CodeGen, proposed by Salesfore [27], is an opensourced language model designed for code generation. It undergoes a multi-phase training process on different datasets, where the model is first trained with natural language datasets and then code datasets. Our experiments utilize its natural language version (CodeGen-350M-nl), produced after the initial phase of its training process, as the foundation model to conduct our experiments. TinyLlama: TinyLlama [51] is a compact 1.1B language model pre-trained on around 3 trillion tokens, building on the architecture and tokenizer of Llama 2 [41]. It shows competitive performance compared to existing open-source language models of similar sizes. Pythia: Pythia [4] is a suite of LLMs ranging in size from 70M to 12B parameters. It is intended to be used as the baseline for research studies and thus is designed close to currently accepted common practices. Considering the capacity of our computing resources, we use its 1B version. 4.2.2 Evaluation Metrics. We evaluate the model\u2019s performance on the code generation task with the Pass@\ud835\udc58metric on HumanEval. To compute Pass@\ud835\udc58, \ud835\udc58code samples are generated for each problem, and a problem is considered solved if any of the \ud835\udc58samples pass the unit tests. We report the fraction of problems being successfully solved. The HumanEval dataset, developed by OpenAI [8], comprises 164 programming problems, each with a function signature, a docstring, and multiple test cases. Given the function signature and docstring, the model is required to generate the code, which is then tested by executing the test cases. Notably, the function signatures are written using Python\u2019s original grammar. When evaluating the model adapted to SimPy, we convert the function signature into SimPy using the code converter. Similarly, the model-generated SimPy code is subsequently converted into Python to run test cases since the existing testing framework is implemented for Python source code. 4.2.3 Implementation Details. In our experiments, we use the Huggingface Transformers library [48] with Pytorch to implement the models. The experiments of CodeGen-NL are performed on a machine with 48 vCPUs, 512GB RAM, and four RTX A5000 GPUs (24GB RAM), while the other two models are trained on a machine with 28 vCPUs, 200GB RAM, and two RTX A6000 GPUs (48GB RAM). The hyper-parameters of the training are set referring to CodeGen\u2019s hyper-parameters: 8 batch size, 1.8e-4 learning rate, 0.1 weight decay, and 512 context length. During the inference for evaluation, we set the temperature to 0.2 and the top-p to 0.95. 4.2.4 Results. 
Following the settings of the two strategies (SimPy and Python\u2192SimPy) and the control group (Python), we train the CodeGen-NL, TinyLlama, and Pythia models, respectively. Finally, for each of our initial models, we have six variations: one each for Python and SimPy, and four models for Python\u2192SimPy incorporating 10%, 20%, 50%, and 100% of the SimPy dataset. The performance of these models is evaluated through Pass@1 and Pass@10 metrics on the HumanEval dataset. We report the results in Table 3. Notably, the models trained with SimPy lag behind the Python baseline in terms of accuracy. For example, the Pass@1 and Pass@10 of CodeGen (SimPy) are respectively 2.93% and 5.49%, lower than the ones of CodeGen (Python), which are 4.51% and 7.32%. This could be attributed to SimPy\u2019s limited expressiveness, constraining the models from leveraging knowledge acquired from natural language datasets during pretraining. Consequently, direct training with AI-oriented grammar appears to be an impractical approach. However, the sequential training strategy, starting with Python and then incorporating SimPy, yields comparable or even superior accuracy to the control group. Specifically, CodeGen-NL, TinyLlama, and Pythia models trained with Python\u2192100%SimPy achieve Pass@10 scores of 9.15%, 14.02%, and 10.00%, respectively, outperforming the control group\u2019s 7.32%, 13.41%, and 9.76%. This suggests a successful training with SimPy, demonstrating the feasibility of AI models learning AI-oriented grammar. Interestingly, we observe that the Pythia model, when trained exclusively with 100% SimPy, surpasses the Python baseline on Pass@1. This highlights AI Coders Are Among Us: Rethinking Programming Language Grammar Towards Efficient Code Generation , , Table 3: The Pass@1 and Pass@10 of LLMs on Python and SimPy datasets under varied settings. Python and SimPy denote models trained exclusively on respective datasets. Python\u2192SimPy refers to sequential training on both datasets, with the parenthetical numbers indicating the SimPy dataset\u2019s proportion involved in the training. Model Training Strategy Pass@1 Pass@10 CodeGen-NL Python 4.51% 7.32% 100% SimPy 2.93% 5.49% Python \u219210% SimPy 3.11% 3.66% Python \u219220% SimPy 3.66% 4.27% Python \u219250% SimPy 3.96% 6.71% Python \u2192100% SimPy 4.82% 9.15% TinyLlama Python 10.00% 13.41% 100% SimPy 5.91% 9.76% Python \u219210% SimPy 2.07% 3.66% Python \u219220% SimPy 3.23% 5.49% Python \u219250% SimPy 5.73% 11.59% Python \u2192100% SimPy 10.12% 14.02% Pythia Python 5.79% 9.76% 100% SimPy 7.01% 9.15% Python \u219210% SimPy 1.89% 2.44% Python \u219220% SimPy 3.11% 4.27% Python \u219250% SimPy 4.21% 7.32% Python \u2192100% SimPy 5.67% 10.00% the possibility of learning SimPy without relying on the sequential training strategy. By varying the proportion of the SimPy dataset in the Python\u2192SimPy setting, we found that a substantial dataset is still required by the fine-tuning with SimPy. For instance, TinyLlama (Python\u219250%SimPy) scored 5.73% in Pass@1 and 11.59% in Pass@10, still trailing behind the TinyLlama (Python) scores. We will further discuss this finding in Section 8. Answer to RQ2: AI models, when initially trained with the original grammar and then the AI-oriented grammar, can successfully learn the AI-oriented grammar, retaining or even enhancing their accuracy. 
For instance, the CodeGen model, originally trained with Python and achieving a 7.32% Pass@10, improved to a 9.15% Pass@10 after the additional training with SimPy. 5 USAGE SCENARIO OF AI-ORIENTED GRAMMAR (RQ3) In this section, we address RQ3: How can AI-oriented grammar support real-world scenarios? We first demonstrate the basic application scenario of AI-oriented grammar, and subsequently, we introduce a novel inference framework designed to broaden the applicability of AI-oriented grammar, followed by an evaluation of the framework\u2019s additional latency. 5.1 Basic usage scenario The source code, when written in AI-oriented grammar, becomes challenging for human interpretation and is therefore not intended for human display. Consequently, the application of AI-oriented grammar is limited to scenarios where human users do not have access to the generated code. A typical scenario is the AI agents, such as AutoGPT [40] and LangChain [7], for regular users rather than developers. For instance, an AI agent tasked with data collection from a website would generate the required crawler script, execute it to gather data, and present the outcomes to the user. End users generally care more about the results than understanding the underlying script since they lack programming knowledge. Therefore, even without additional enhancement, models trained with AI-oriented grammar can be effectively utilized in real-world scenarios. We demonstrate this scenario on the left of Figure 4. In this scenario, an AI-oriented code generated by the model can be executed in two ways: 1) being translated into human-centric code and then executed by its executor; 2) directly being executed by a specific executor for the AI-oriented grammar. Notably, implementing an executor specifically for AI-oriented grammar demands only lightweight engineering efforts as the AI-oriented grammar and its original grammar differ only at the syntax level. Thus, the second method offers a more efficient solution. 5.2 Extended usage scenario Despite the effectiveness of AI-oriented grammar in certain contexts, many code generation scenarios still require the involvement of humans, where human-readable code is required. To fill this gap, we propose an inference framework for code generation named DualCode. DualCode enables human users to interact with code in human-centric grammar, while the model still leverages the efficiency of AI-oriented grammar during the inference process. The fundamental concept of DualCode is to convert the code between AI-oriented grammar and the original grammar of the same programming language. To achieve this goal, a rule-based code converter should be employed to convert source code into AI-oriented grammar for model comprehension and, inversely for user readability. Such a converter is feasible since both the AI-oriented grammar and original grammar describe the same AST. The identical AST allows the code written in the two grammars to be equivalently converted into each other based on the grammar rules. We illustrate the workflow of DualCode on the right of Figure 4. It employs two \u201cgates\u201d: an input converter and an output converter. The input converter translates code written in human-centric grammar into AI-oriented grammar for model processing. Similarly, the output converter reverts AI-generated code into human-readable code for user comprehension. Notably, this environment is only for the code, where other inputs, such as natural language, are unaffected. 
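A conceptual sketch of the two DualCode gates is given below; to_simpy, to_python, and generate_fn are assumed interfaces standing for the rule-based converter and the underlying model call, not released APIs.

def dualcode_generate(prompt_code, generate_fn, to_simpy, to_python):
    """Let users read and write human-centric code while the model only
    ever sees (and emits) the AI-oriented grammar."""
    ai_prompt = to_simpy(prompt_code)   # input gate: human-centric -> AI-oriented
    ai_output = generate_fn(ai_prompt)  # the model works entirely in SimPy
    return to_python(ai_output)         # output gate: AI-oriented -> human-centric

In this sketch the gates only wrap the code portions; natural-language inputs and outputs pass through unchanged, matching the description above.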
DualCode is not a complicated framework, enabling the lightweight integration of AI-oriented grammar into existing workflows of AI systems. Though being straightforward, it is proposed and investigated for the first time, bridging the gap between efficient AI-oriented code generation and human readability.
Figure 4: LEFT: the workflow of the basic usage scenarios of AI-oriented grammar. RIGHT: the workflow of the extended usage scenarios of AI-oriented grammar under DualCode, where the code executor of the AI system in the figure is not necessary.
Table 4: Comparison of average conversion times between Python and SimPy, and the processing speed of the StarCoder tokenizer, based on Huggingface Tokenizers.
Token num | Huggingface Encode | Huggingface Decode | Converter To SimPy | Converter To Python
[0, 100) | 0.2ms | 0.1ms | 0.2ms | 0.2ms
[100, 500) | 0.7ms | 0.6ms | 0.9ms | 0.8ms
[500, 2000) | 2.4ms | 2.2ms | 3.4ms | 3.1ms
[2000, 5000) | 6.7ms | 6.4ms | 12.2ms | 10.8ms
[5000, +\u221e) | 23.0ms | 23.7ms | 75.4ms | 57.4ms
5.3 Experiments of RQ3 Given that the DualCode converter adds extra latency to the inference process, a significant concern arises: excessive latency could render the system impractical for real-world applications. To address the concern, we conduct experiments focusing on the converter\u2019s performance. Specifically, we measure the time taken to convert Python code files into SimPy and then back to Python using the converter. As a reference, we evaluate the processing speed of the StarCoder tokenizer, which is based on the widely acknowledged Huggingface Tokenizers library [26]. For this experiment, we categorized Python code files into five distinct groups, based on their token counts, as follows: [0, 100), [100, 500), [500, 2000), [2000, 5000), and [5000, +\u221e). These token counts are determined using the StarCoder tokenizer [23] on the Python code. We calculate the average processing time for each group, both for the converter and the StarCoder tokenizer. The findings, presented in Table 4, indicate that the converter\u2019s speed is comparable to that of Huggingface Tokenizers. For code files with fewer than 100 tokens, the converter\u2019s processing time for each conversion is a mere 0.2 ms, only 0.1 ms slower than the Huggingface Tokenizers. For files containing 100 to 500 tokens, the conversion is completed within 1.0 ms. This is not a significant concern, given that over 95% of the dataset\u2019s code files (sourced from real-world repositories) are within the 5000-token range. Therefore, we deduce that the latency induced by the converter is acceptably minimal in most practical scenarios. Answer to RQ3: Beyond the basic scenarios where human interaction is not required, the application of AI-oriented grammar can be substantially extended by incorporating the DualCode framework. DualCode enables humans to continue using human-centric grammar while AI models leverage the efficiency of AI-oriented grammar. Notably, it imposes negligible latency (under 1 ms for code up to 500 tokens). 6 RELATED WORK Program Simplification Program simplification has emerged as a valuable approach to enhance the efficiency of code models [5, 18, 33, 34, 39, 49].
This approach typically involves the elimination of less critical code tokens to streamline model processing. For example, DietCode [52] removes the code tokens that receive the fewest attention weights by CodeBert. Sivand [34] and P2IM [53] simplify the input code according to the outputs of a supplementary model. While these methods considerably boost efficiency, they unavoidably compromise accuracy due to the removal of certain code elements. In contrast, models with AI-oriented grammar, though perhaps less efficient, are able to preserve or even improve accuracy. Most importantly, existing simplification techniques are irreversible, limiting their application to code understanding tasks like summarization and retrieval, rather than code generation. Conversely, code in AI-oriented grammar can be effortlessly reverted to its original form, thus suitable for various code-related tasks. AI Coders Are Among Us: Rethinking Programming Language Grammar Towards Efficient Code Generation , , Tokenization of Source Code Modern LLMs usually preprocess textual datasets using an open-vocabulary tokenization method, Byte-Pair Encoding (BPE) [38]. BPE tokenizes text into subwords based on their frequency in the text corpus, offering a balance between the granularity of tokens and vocabulary breadth. Karampatsis et al. [20] first identify the effectiveness of BPE on source code. CodeT5 reveals that BPE trained on source code corpus can reduce over 30% of tokens for code generation, compared with the one trained on natural language corpus. Subsequently, all major LLMs for code generation, such as CodeBERT [12], CodeT5 [47], SantaCoder [2], StarCoder [23] and CodeLlama [37], adopt BPE as the tokenization method. Further enhancements to BPE for source code have been proposed. For example, Chirkova [10] suggests that clustering punctuation characters into single tokens can reduce average token length by 17% without impacting model performance. Notably, even though the tokenizers are optimized for source code, they still need to deal with the unnecessary tokens introduced by the human-centric grammar. AI-oriented grammar optimizes the representation of source code in a more fundamental way, which is orthogonal to these existing tokenization methods. 7 THREATS TO VALIDITY Constrained Model Selection Our experimental scope in RQ2 is restricted by our computational resources, limiting our evaluation to models with around 1B parameters, e.g., Pythia and TinyLlama. These models are relatively modest in scale. However, while the model size is expanding, the fundamental issue of computation waste caused by human-centric code grammar remains unaddressed. Therefore, the insights derived from our experiments with smaller models are still highly relevant for understanding inefficiency issues in larger models. Limited Programming Language Our research primarily investigates the implementation of AI-oriented grammar in Python, a language widely utilized by existing LLMs for programming tasks. This initial exploration has shown that AI-oriented grammar effectively reduces computational costs during inference. However, the conclusions drawn from Python may not generalize to other programming languages. We thus leave the exploration of its implementation in other languages as future work. Inefficient Implementation We implement a proof-of-concept converter to convert the code between SimPy and Python. While this converter provides seamless translation, its efficiency is not optimized. 
For instance, it is developed in Python, which is less efficient compared to languages like C++. This aspect could potentially result in an underestimation of the converter\u2019s performance in our experimental evaluations. 8 DISCUSSION Future directions of AI-oriented grammar Our study concentrates on the inefficiencies arising from the mismatch between AI coders and human-centric code grammars. This initial investigation into AI-oriented grammar highlights its feasibility and potential benefits. As an emerging field, AI-oriented grammar presents numerous unexplored questions. For example, an interesting finding from our experiments is that models trained with AI-oriented grammar can even improve the model\u2019s accuracy in code generation tasks. This emphasizes the critical role of grammar as a foundational element for LLMs in grasping code semantics. Designing grammars that are inherently more comprehensible to AI models could significantly enhance their performance. Our current research provides a preliminary insight into this aspect, opening doors for in-depth future studies. Additionally, the process of simplifying grammar, as exemplified by our manual creation of SimPy, raises the question of whether an automated approach could create optimal grammar rules for AI models. A potential solution for simplifying the grammar could be iteratively searching for grammar tokens/structures that can be removed with the help of a parser generator. Moreover, saving the training cost for teaching LLMs AI-oriented grammar is also of great practical value, where a more efficient training method for LLMs to learn new programming grammar is urgently needed. We, therefore, call for the software engineering community to engage further with this promising topic, recognizing its potential to revolutionize the field of AI coders. Limitations in practice Though extending the applicability of AI-oriented grammar, DualCode relies on a rule-based converter. The converter, we implemented for SimPy, is AST-based, which implicitly requires the input and output code of the models under the DualCode framework to satisfy the grammar correctness. For the output, grammar correctness is a fundamental expectation for a qualified LLM-based assistant. Thus, this requirement from DualCode is not an additional constraint set to the model but aligns with the goal of a reliable AI service. However, it poses challenges when dealing with user-provided input, which may not always be grammatically correct. It is not a concern to models handling natural-language-to-code tasks. However, the requirement may limit the application of SimPy when some tasks involve partial source code as input, such as LLM-based code completion. Addressing this limitation could involve developing an error-tolerant converter or grammar, which is a crucial direction for future research. Learning the AI-oriented grammar The learning of AI-oriented grammar could be a tricky task. In our experiments, we demonstrate the effectiveness of fine-tuning AI models with SimPy using the next token prediction task. However, this simple fine-tuning strategy requires a large number of SimPy samples, 100% of the dataset in our experiments. A more efficient adaptation process would significantly enhance the utility of AI-oriented grammar. However, current research on how AI models learn code grammar is still limited. 
Although studies [9, 25, 45] have shown that LLMs typically grasp code grammar knowledge in their initial layers, the exact learning mechanism remains unclear. Therefore, a thorough analysis in this area is much needed. Utility of AI-oriented grammar In this paper, we demonstrate the effectiveness of the sequential training scheme, where the model is initially trained with the original grammar and then the AIoriented grammar. It achieves an equivalent, or even improved, performance compared to the model trained merely with the original grammar. Such a training method incurs an increase in the cost of the model training. For example, training CodeGen on the original Python dataset costs 183,628 training steps, and 100,288 additional steps are taken during the further finetuning on the 100% SimPy dataset. Nevertheless, mastering AI-oriented grammar still , , Zhensu Sun, Xiaoning Du, Zhou Yang, Li Li, and David Lo reduces energy consumption in the long run. Training is performed only once or occasionally, while inference tasks can be continuous and massive after the system is deployed. The post-deployment operational cost is a primary component of the overall cost, sometimes reaching 90% of total expenses [11]. Consequently, despite the additional costs incurred during training, implementing AI-oriented grammar remains highly beneficial from a practical standpoint. 9 CONCLUSION AND FUTURE WORK In this paper, we, for the first time, propose the concept of AIoriented grammar to address the inefficiency of AI coders in processing the code written in human-centric grammar. Through an empirical study guided by three research questions, we successfully demonstrate the feasibility and potential of this novel concept. During our research, we have developed the first-ever AI-oriented grammar for Python. Additionally, we introduced an inference framework designed to empower models to effectively process both AI-oriented and human-centric grammars within a programming language. In the future, our focus will shift towards investigating how AI-oriented grammar can further refine the accuracy of AI models." +} \ No newline at end of file diff --git a/intro_8K/test_introduction_short_2404.16348v2.json b/intro_8K/test_introduction_short_2404.16348v2.json new file mode 100644 index 0000000000000000000000000000000000000000..20b226880e32fe5aebb14fd96644918094aba742 --- /dev/null +++ b/intro_8K/test_introduction_short_2404.16348v2.json @@ -0,0 +1,16 @@ +{ + "url": "http://arxiv.org/abs/2404.16348v2", + "title": "Dual Expert Distillation Network for Generalized Zero-Shot Learning", + "abstract": "Zero-shot learning has consistently yielded remarkable progress via modeling\nnuanced one-to-one visual-attribute correlation. Existing studies resort to\nrefining a uniform mapping function to align and correlate the sample regions\nand subattributes, ignoring two crucial issues: 1) the inherent asymmetry of\nattributes; and 2) the unutilized channel information. This paper addresses\nthese issues by introducing a simple yet effective approach, dubbed Dual Expert\nDistillation Network (DEDN), where two experts are dedicated to coarse- and\nfine-grained visual-attribute modeling, respectively. Concretely, one coarse\nexpert, namely cExp, has a complete perceptual scope to coordinate\nvisual-attribute similarity metrics across dimensions, and moreover, another\nfine expert, namely fExp, consists of multiple specialized subnetworks, each\ncorresponds to an exclusive set of attributes. 
Two experts cooperatively\ndistill from each other to reach a mutual agreement during training. Meanwhile,\nwe further equip DEDN with a newly designed backbone network, i.e., Dual\nAttention Network (DAN), which incorporates both region and channel attention\ninformation to fully exploit and leverage visual semantic knowledge.\nExperiments on various benchmark datasets indicate a new state-of-the-art.", + "authors": "Zhijie Rao, Jingcai Guo, Xiaocheng Lu, Jingming Liang, Jie Zhang, Haozhao Wang, Kang Wei, Xiaofeng Cao", + "published": "2024-04-25", + "updated": "2024-04-29", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Distillation", + "gt": "Recognizing unknown categories in the open environment is a critical challenge for automatic recognition systems. Zero- Shot Learning (ZSL) [Lampert et al., 2009] that serves as a promising solution has received increasing attention, which is inspired by human text-to-image reasoning capabilities. The objective of ZSL is to transfer the visual knowledge of seen classes to the unseen domain by virtue of shared semantic information, thus empowering the model to recognize the un- seen classes. More trickily, Generalized Zero-Shot Learn- ing (GZSL) [Chao et al., 2016] requires recognizing samples \u2217Corresponding author: Jingcai Guo. \u2020: Equal contribution. (a) cExp (b) fExp crown\n eye\n bill\n \u00b7\u00b7\u00b7\n belly\n breast\n wing\n \u00b7\u00b7\u00b7\n belly\n wing\n breast\n \u00b7\u00b7\u00b7\n torso:\n crown\n bill\n eye\n \u00b7\u00b7\u00b7\n head:\n Figure 1: (a) cExp, also the common practice in existing works, pos- sesses complete attribute-awareness capability yet lacks the ability to process fine-grained semantic information. (b) fExp, which con- sists of multiple specialized sub-networks, lacks a global perception field. from both seen and unseen classes in the inference phase. Mainstream studies broadly follow two routes, genera- tive [Xian et al., 2018][Xie et al., 2022][Li et al., 2023] and embedding techniques [Zhang et al., 2017][Liu et al., 2020][Chen et al., 2021b], where most of the schemes are de- voted to mining and constructing class-wise visual-attribute relations. To strengthen the fine-grained perceptual capabil- ities of the model, recent research has invested considerable effort into modeling local-subattribute correlations [Xie et al., 2019][Huynh and Elhamifar, 2020][Xu et al., 2020]. The mo- tivation is to build a refined pairwise relation map via search- ing and binding subattributes and the corresponding region visual features (Figure 1 (a)). Despite their contribution to boosting performance, the inherent asymmetry of attributes remains undiscussed, and the channel information is not fully exploited. The asymmetry of attributes stems from the fact that 1) the semantic dimensions between attributes are heterogeneous or even antagonistic. Take the SUN dataset [Patterson and Hays, 2012] as an example, where 38 attributes (studying, playing, etc.) describe the function of one scene, while 27 attributes arXiv:2404.16348v2 [cs.CV] 29 Apr 2024 (trees, flowers, etc.) describe the entities in the scene. It can be obviously observed that the former are abstract and global, while the latter are concrete and local; 2) the visual features corresponding to attributes are intertwined. 
For example, neighboring regions tend to be more semantically similar, a phenomenon that is exacerbated by the local information fu- sion mechanism of the convolutional kernel, which leads to difficulties in accurately locating fine-grained attributes such as head, crown, and so on. In this paper, we revisit the task of modeling visual- attribute relations from the perspective of attribute annota- tions. Given the inherent complexity of attribute descrip- tions, existing learning paradigms are virtually forcing a sin- gle model to undertake a multi-objective hybrid task, which is ideally appealing yet empirically challenging. Naturally, we employ the idea of divide-and-conquer to release the pressure of a single model. We meticulously decompose the hybrid task into multiple subtasks, i.e., dividing the attributes into multiple disjoint clusters and assigning specialized learnable networks to them. Our approach is referred to as, Dual Expert Distillation Network, abbreviated DEDN. As shown in Figure 1, our approach sets up two experts. cExp, in line with com- mon practices, is equipped with complete attribute percep- tion capability to harmonize holistic visual-attribute measure results. fExp, consists of multiple subnetworks, where each subnetwork is only responsible for capturing the characteris- tics of a specific attribute cluster. During the training phase, we encourage the two to learn cooperatively to compensate for their respective deficiencies in a mutually distilling man- ner. The decision results of the two experts are combined for final inference. For the issue of underutilized channel information, we design a novel attention network, Dual Attention Network (DAN), as the backbone. DAN employs a dual-attention mechanism that fully exploits the potential semantic knowl- edge of both regions and channels to facilitate more precise visual-attribute correlation metrics. To further boost perfor- mance, we present Margin-Aware Loss (MAL) as the training loss function to address the confidence imbalance between seen and unseen classes. Our contributions are summarized below: \u2022 We rethink the issue of modeling visual-attribute rela- tions from the perspective of attribute annotations and point out that the inherent complexity of attributes is one of the major bottlenecks. We propose a simple yet ef- fective strategy of establishing two experts working on distinct attribute perception scopes to learn and infer col- laboratively in a complementary manner. \u2022 We present a novel attention network, dubbed DAN, which incorporates both region and channel attention in- formation to better capture correlations between visu- als and attributes. Furthermore, a new learning function named MAL is designed to balance the confidence of seen and unseen classes. \u2022 We conduct extensive experiments on mainstream eval- uation datasets, and the results show that the proposed method effectively improves the performance.", + "main_content": "In ZSL/GZSL, attributes are the only ties that bridge seen and unseen classes, hence exploring and constructing the link between visuals and attributes is a core subject. Existing methods fall into class-wise visual-attribute modeling, which treats both visual features and attribute vectors as a whole, and regional visual-subattribute modeling, which seeks to explore the correlation between local visual information and subattributes. 
2.1 Class-wise Visual-Attribute Modeling Mainstream researches broadly follow two technical routes, generative and embedding techniques. Generative techniques utilize the latent distribution fitting ability of generative models such as GAN and VAE to implicitly learn the relationship between attributes and categories to construct hallucinatory samples of unseen classes [Xian et al., 2018][Verma et al., 2018][Felix et al., 2018][Li et al., 2019][Vyas et al., 2020][Keshari et al., 2020][Xie et al., 2022][Li et al., 2023]. The technical bottleneck of this route is the poor realism of the hallucinatory samples, thus many studies incorporate other techniques such as meta-learning [Yu et al., 2020], representation learning [Li et al., 2021][Chen et al., 2021c][Chen et al., 2021a][Han et al., 2021][Kong et al., 2022], etc. for joint training. Embedding techniques aim at projecting visual and attribute features to a certain space, from which the most similar semantic information is searched. In general, embedding techniques are categorized into three directions: visual-to-attribute space [Changpinyo et al., 2016][Kodirov et al., 2017][Liu et al., 2020][Chen et al., 2022a], attribute-to-visual space [Zhang et al., 2017][Annadani and Biswas, 2018], and common space [Liu et al., 2018][Jiang et al., 2019]. Researchers in the first two directions invest considerable effort in designing robust mapping functions to cope with domain shift and out-of-distribution generalization problems. The third direction centers on finding a suitable semantic space. Class-level visual-attribute modeling lacks the fine-grained perceptual ability to respond to interactions between local visual features and subattributes. 2.2 Region-wise Visual-Attribute Modeling Region-wise modeling is a promising direction in embedding techniques. Unlike other embedding approaches, region-wise modeling focuses on the correlation between local information and subattributes to build more detailed mapping functions. Models based on attention mechanisms are the dominant means in this direction, motivated by training models to search for corresponding visual features based on semantic vectors. Recent approaches include feature-to-attribute attention networks [Xie et al., 2019][Huynh and Elhamifar, 2020], bidirectional attention networks [Chen et al., 2022b], and multi-attention networks [Zhu et al., 2019]. In addition, some studies resort to prototype learning, where the goal is to explicitly learn the corresponding prototypical visual features of individual subattributes, thus aiding the model\u2019s judgment [Xu et al., 2020][Wang et al., 2021]. Further, modeling the topological structure between regional features with the help of graph convolution techniques also yields promising results cExp fExp DAN DAN Distillation MAL MAL concat W1 W2 F V CxR DxG Sr DxR Ar softmax DxR \u00a0Product&Sum Or D W3 W4 F V RxC DxG Sc DxC Ac softmax DxC \u00a0Product&Sum Oc D \u00a0Weighted&Sum O D DAN Visual Feature crown bill eye \u00b7\u00b7\u00b7 head: belly wing breast \u00b7\u00b7\u00b7 torso: crown eye bill \u00b7\u00b7\u00b7 belly breast wing \u00b7\u00b7\u00b7 Figure 2: Left: cExp possesses the scope of a holistic attribute set, while fExp consists of multiple sub-networks, each of which is responsible for the prediction of only partial attributes. We concatenate all outputs of subnetworks as the final result of fExp. Then, distillation loss is implemented to facilitate joint learning. Right: The architecture of DAN. 
[Xie et al., 2020][Guo et al., 2023]. While the main idea of these approaches is to design appropriate attention networks or regularization functions, ignoring the inherent complexity of attribute annotations, we provide a new perspective to think about the visual-attribute modeling problem. In addition, existing region-attribute methods, although achieving good results, neglect the utilization of channel information, and we design a new attention network that utilizes both region and channel information. 3 Methodology 3.1 Preliminary Following previous studies [Chen et al., 2022b][Li et al., 2023], we adopt a fixed feature extractor, ResNet-101 [He et al., 2016], to extract visual features. Suppose Ds = {(F s i , Y s i )} denotes the seen classes, where F s i is the visual feature and Y s i denotes its label. Note that F \u2208RC\u00d7H\u00d7W , where C, H, W are the channel number, height, and width, respectively. Similarly have Du = {(F u i , Y u i )} to denote the unseen classes. Normally, the visual features of the unseen classes are not accessible during the training phase. Alternatively, we have the shared attribute A \u2208RK\u00d7D, where K denotes the total number of categories, and D denotes the number of attributes. Also, we use the semantic vectors of each attribute learned by GloVe, denoted by V \u2208RD\u00d7G, where G denotes the dimension of the vector. 3.2 Overview Our approach is shown in Figure 2 (Left). First, we disassemble the attribute set into multiple clusters based on their characteristics. Then the attribute vectors and the visual feature are fed into cExp and fExp simultaneously. cExp directly computes the scores of all attributes on that visual feature, while the scores of fExp are obtained by combining the computation results of each subnetwork. We constrain the two to learn from each other using distillation loss. Meanwhile, we introduce DAN as the backbone and MAL as the optimization objective. 3.3 Dual Attention Network Firstly we introduce the proposed novel backbone network, Dual Attention Network (DAN). Mining and constructing relations between visual features and attributes is crucial for zero-shot learning. Recently many works have been devoted to modeling the association between regions and attributes, such as attention-based approaches [Xie et al., 2019][Huynh and Elhamifar, 2020][Chen et al., 2022b] and prototypebased techniques [Xu et al., 2020][Wang et al., 2021]. However, these methods only focus on the semantic information of regions and ignore the role of channels. Therefore, DAN incorporates both the attention information of regions and channels to promote the efficacy of the model in utilizing visual features. As shown in Figure 2 (Right), DAN contains two parallel components that model region-attribute and channel-attribute relations, respectively. We first introduce the region-attribute component. We have visual features F \u2208RC\u00d7H\u00d7W , which is flattened to F \u2208RC\u00d7R, where R = H \u00d7 W denotes the number of regions. Let W1, W2 \u2208RG\u00d7C denote two learnable matrices. W1 maps the attribute vectors to the visual space and computes their similarity. The formula is expressed as: Sr = V W1F, (1) where Sr \u2208RD\u00d7R represents the score obtained for each attribute on each region. W2 is in charge of computing the attention weights to encourage the model to focus on the region-attribute pairs with the highest similarity. 
The formula is expressed as: Ar = V W2F P r\u2208R V W2Fr , (2) where Ar \u2208RD\u00d7R denote the normalized weight obtained by softmax. Then we naturally get the weighted matrix of scores, represented as: Or = X R Sr \u00d7 Ar, (3) where Or \u2208RD represents the similarity score obtained for each attribute on a visual feature. Next, we introduce the channel-attribute section, which has a similar principle. We have the scaled visual feature F \u2208RR\u00d7C and W3, W4 \u2208RG\u00d7R. Then W3 is charged with calculating the similarity score obtained by the attribute on each channel, formulated as: Sc = V W3F, (4) where Sc \u2208RD\u00d7C. And W4 computes its attention weights: Ac = V W4F P c\u2208C V W4Fc , (5) where Ac \u2208RD\u00d7C. Finally, we get the weighted score map: Oc = X C Sc \u00d7 Ac, (6) where Oc \u2208RD. We expect the final scores of attributes from different scale features to be consistent, i.e., semantic consistency. Therefore we employ Lalign, which contains a Jensen-Shannon Divergence (JSD) and a Mean Squared Error, to align the outputs of both, formulated as: Lalign = 1 2(LKL(Or||Oc) + LKL(Oc||Or)) + ||Or \u2212Oc||2 2, (7) where LKL denotes Kullback-Leibler Divergence. In the inference phase, we use the weighted sum of Or and Oc as the final output, expressed as: O = \u03bbrc \u00d7 Or + (1 \u2212\u03bbrc) \u00d7 Oc, (8) where \u03bbrc is a hyperparameter. 3.4 Dual Expert Distillation Network Despite the fact that DAN enhances the modeling capability of the network, it is extremely challenging for a single model to simultaneously handle attributes with different semantic dimensions as well as visual features with different granularities. To this end, we propose the Dual Expert Distillation Network (DEDN) to alleviate the pressure on a single network (Figure 2 (left)). cExp is set up with a complete attributeaware scope as in conventional practice. Specifically, the input of cExp is the semantic vectors of all attributes, and the output is the similarity scores of all attributes. Denote cExp by \u03d5ec = {W ec 1 , W ec 2 , W ec 3 , W ec 4 }, the output is defined as: Oec = \u03d5ec(V, F), (9) where Oec \u2208RD and V \u2208RD\u00d7G. fExp consists of multiple subnetworks, each focusing on a specific attribute cluster. At first, we elaborate on how the attribute clusters are divided. Since attribute annotations are manually labeled based on semantics, they are inherently clustered in nature. For example, in the SUN dataset [Patterson and Hays, 2012], the top 38 prompts are used to describe the scene function. Therefore, it is easy to perform the division by human operation, Chat-GPT [Radford et al., 2018], or clustering algorithm. It requires a trivial amount of effort but is worth it. Assuming that the attribute set is divided into Q disjoint clusters, i.e. V = {V1 \u2208RD1\u00d7G, V2 \u2208RD2\u00d7G, ..., VQ \u2208 RDQ\u00d7G}, where D1 + D2 + ... + DQ = D. Accordingly, there are Q subnetworks for fExp to handle these attribute clusters one-to-one. Let \u03d5ef = {\u03d51 ef, \u03d52 ef, ..., \u03d5Q ef} denotes fExp, then the output is defined as: Oef = \u03d51 ef(V1, F) \u2295\u03d52 ef(V2, F) \u2295... \u2295\u03d5Q ef(VQ, F), (10) where \u2295denotes concat operation. After that, we calculate the score of each category for training and inference. Specifically, we compute the similarity with the output of the expert and the attributes of each category, defined as: Pec = OecAT, Pef = OefAT, (11) where Pec, Pef \u2208RK. 
To facilitate cooperative learning between two expert networks, we introduce distillation loss to constrain their semantic consistency. Concretely, the distillation loss contains a Jensen-Shannon Divergence (JSD) and a Mean Squared Error, defined as: Ldistill = 1 2(LKL(Pec||Pef)+LKL(Pef||Pec))+||Pec\u2212Pef||2 2. (12) 3.5 Margin-Aware Loss Once the category scores are obtained, the network is optimized by using the cross-entropy loss, which is formulated as: Lce = \u2212log exp(P y ec) PK yi exp(P yi ec ) , (13) where y is the ground truth. The loss of Pef ditto. Note that we next narrate with Pec only, and the principle is the same for Pef. Due to the lack of access to samples from the unseen classes during the training phase, the scores of the unseen classes are relatively low and thus cannot compete with the seen classes in GZSL. To address this problem, the common practice [Huynh and Elhamifar, 2020][Chen et al., 2022b] is to add a margin to the scores: PMec = [P 1 ec \u2212\u03f5, ..., P N ec \u2212\u03f5, P N+1 ec + \u03f5, ..., P K ec + \u03f5], (14) where \u03f5 is a constant, P 1 ec \u223cP N ec are seen classes score, and P N+1 ec \u223cP K ec are unseen classes score. However, this method leads to misclassification of seen classes that would otherwise be correctly predicted. In order to maintain the correctness of the predicted classes while enhancing the competitiveness of the unseen classes. We propose Margin-Aware Loss (MAL), which takes the form: Lmal = \u2212log exp(P y ec\u22122\u03f5) exp(P y ec\u22122\u03f5)+PS yi\u0338=y exp(P yi ec +\u03f5)+PU exp(P yi ec ) , (15) where S, U denote seen and unseen classes, respectively. In contrast to the cross-entropy loss, MAL reactivates the confidence of the predicted class to ensure that it stays ahead in the margin-processed scores, while suppressing the confidence of the other seen classes to ensure the competitiveness of the unseen classes. 3.6 Summarize In the training phase, the basic training loss of cExp stems from the classification and the alignment loss, which is expressed as: Lec = Lec mal + \u03b2Lec align, (16) where \u03b2 is a hyperparameter. Similarly, we have the basic training loss of fExp: Lef = Lef mal + \u03b2Lef align. (17) Then the final loss is obtained from the combination of basic losses and distillation loss, denoted as: LDEDN = Lec + Lef + \u03b3Ldistill, (18) where \u03b3 is a hyperparameter. In the inference phase, the recommendations of the two experts are combined and used for final judgment. The predicted result is expressed as: arg max \u03bbe \u00d7 Pec + (1 \u2212\u03bbe) \u00d7 Pef, (19) where \u03bbe is a hyperparameter. 4 Experiments Datasets. We conduct extensive experiments on three benchmark datasets to verify the effectiveness of the method, including CUB (Caltech UCSD Birds 200) [Wah et al., 2011], SUN (SUN Attribute) [Patterson and Hays, 2012], and AWA2 (Animals with Attributes 2) [Xian et al., 2017]. We split all datasets following [Xian et al., 2017]. CUB comprises 200 bird species totaling 11,788 image samples, of which 50 categories are planned as unseen classes. We use class attributes for fair comparison, which contain 312 subattributes. SUN has a sample of 717 different scenes totaling 14,340 images, where 72 categories are unseen classes. Attribute annotations are 102-dimensional. AWA2 includes 50 classes of assorted animals totaling 37,322 samples, of which 10 categories are considered unseen classes. Its number of attributes is 85. Evaluation Protocols. 
We perform experiments in both the Zero-Shot learning (ZSL) and Generalized Zero-Shot learning (GZSL) settings. For ZSL, we employ top-1 accuracy to evaluate the performance of the model, denoted as T. For GZSL, we record the accuracy for both seen classes, and unseen classes, denoted as S, and U, respectively. We also record the harmonic mean H, which is computed as, H = (2 \u00d7 S \u00d7 U)/(S + U). Implementation Details. For a fair comparison, we use the fixed ResNet-101 [He et al., 2016] without finetune as the feature extractor. We set the batch size to 50 and the learning rate to 0.0001. The RMSProp optimizer with the momentum CUB SUN AWA2 #Des. #Num. #Des. #Num. #Des. #Num. head 112 function 38 texture 18 torso 87 instance 27 organ 14 wing 24 environ. 17 environ. 13 tail 40 light 20 abstract 40 leg 15 whole 34 Table 1: Manual division of attribute clusters. Des. (description) indicates the criteria for classification. Num. (number) is the size of the attribute cluster. environ: environment. set as 0.9 and weight decay set as 1e-4 is employed. For hyperparameters, [\u03b2, \u03b3] are fixed to [0.001, 0.1]. We empirically set [\u03bbrc, \u03bbe] to [0.8, 0.9] for CUB, [0.95, 0.3] for SUN, [0.8, 0.5] for AWA2. Subsequent experimental analyses show that the performance of our method has low sensitivity to hyperparameters. For attribute clusters, we classify attribute sets according to their characteristics, and the results are shown in Table 1. 4.1 Compared with State-of-the-arts To evaluate the performance of the proposed method, we compare it with the state-of-the-art various methods. Generative methods: f-CLSWGAN (CVPR \u203218) [Xian et al., 2018], f-VAEGAN-D2 (CVPR \u203219) [Xian et al., 2019], TF-VAEGAN (ECCV \u203220) [Narayan et al., 2020], E-PGN (CVPR \u203220) [Yu et al., 2020], CADA-VAE (CVPR \u203219) [Schonfeld et al., 2019], FREE (ICCV \u203221) [Chen et al., 2021a], SDGZSL (ICCV \u203221) [Chen et al., 2021c], CE-GZSL (CVPR \u203221) [Han et al., 2021], VS-Boost (IJCAI \u203223) [Li et al., 2023]; Embedding methos: LFGAA (ICCV \u203219) [Liu et al., 2019], APN (NeurIPS \u203220) [Xu et al., 2020], DCN (NeurIPS \u203218) [Liu et al., 2018], HSVA (NeurIPS \u203221) [Chen et al., 2021b]; Region-Attribute modeling: SGMA (NeurIPS \u203219) [Zhu et al., 2019], AREN (CVPR \u203219) [Xie et al., 2019], DAZLE (CVPR \u203220) [Huynh and Elhamifar, 2020], MSDN (CVPR \u203222) [Chen et al., 2022b]. The experimental results are shown in Table 1. Our method achieves the best performance in seven metrics and second place in one metric. For Generalized Zero-Shot Learning (GZSL), we beat VS-Boost by 2% in the H-score of CUB, a fine-grained bird dataset whose attribute annotations possess explicit correspondences to visual features. It demonstrates the superiority of the proposed method for fine-grained modeling. On the SUN and AWA2 datasets, we obtain the best and second-best results in H-score, respectively. These two datasets have fewer attributes and contain complex semantic dimensions, including abstract, concrete, etc. The experimental results demonstrate the effectiveness of the proposed method in deconstructing complex tasks to alleviate the modeling pressure of a single network. In addition, the U-scores of our method on all three datasets are well ahead of the others, demonstrating that the proposed method effectively captures the relationship between attributes and visuals to generalize to unseen classes. 
For Zero-Shot Learning (ZSL), we achieve the highest topCUB SUN AWA2 METHOD ROUTE T U S H T U S H T U S H f-CLSWGAN Gen. 57.3 43.7 57.7 49.7 60.8 42.6 36.6 39.4 68.2 57.9 61.4 59.6 f-VAEGAN-D2 Gen. 61.0 48.4 60.1 53.6 64.7 45.1 38.0 41.3 71.1 57.6 70.6 63.5 TF-VAEGAN Gen. 64.9 52.8 64.7 58.1 66.0 45.6 40.7 43.0 72.2 59.8 75.1 66.6 E-PGN Gen. 72.4 52.0 61.1 56.2 73.4 52.6 83.5 64.6 CADA-VAE Gen. 59.8 51.6 53.5 52.4 61.7 47.2 35.7 40.6 63.0 55.8 75.0 63.9 FREE Gen. 55.7 59.9 57.7 47.4 37.2 41.7 60.4 75.4 67.1 SDGZSL Gen. 75.5 59.9 66.4 63.0 62.4 48.2 36.1 41.3 72.1 64.6 73.6 68.8 CE-GZSL Gen. 77.5 63.9 66.8 65.3 63.3 48.8 38.6 43.1 70.4 63.1 78.6 70.0 VS-Boost Gen. 79.8 68.0 68.7 68.4 62.4 49.2 37.4 42.5 67.9 81.6 74.1 SGMA Emb.\u2020 71.0 36.7 71.3 48.5 68.8 37.6 87.1 52.5 AREN Emb.\u2020 71.8 38.9 78.7 52.1 60.6 19.0 38.8 25.5 67.9 15.6 92.9 26.7 LFGAA Emb. 67.6 36.2 80.9 50.0 61.5 18.5 40.0 25.3 68.1 27.0 93.4 41.9 DAZLE Emb.\u2020 66.0 56.7 59.6 58.1 59.4 52.3 24.3 33.2 67.9 60.3 75.7 67.1 APN Emb. 72.0 65.3 69.3 67.2 61.6 41.9 34.0 37.6 68.4 57.1 72.4 63.9 DCN Emb. 56.2 28.4 60.7 38.7 61.8 25.5 37.0 30.2 65.2 25.5 84.2 39.1 HSVA Emb. 62.8 52.7 58.3 55.3 63.8 48.6 39.0 43.3 59.3 76.6 66.8 MSDN Emb.\u2020 76.1 68.7 67.5 68.1 65.8 52.2 34.2 41.3 70.1 62.0 74.5 67.7 DEDN(Ours) Emb. 77.4 70.9 70.0 70.4 67.4 54.7 36.0 43.5 75.8 68.0 76.5 72.0 Table 2: Comparison with state-of-the-art methods (%). Gen. denotes generative method and Emb. denotes embedding method. \u2020 denotes the region-attribute modeling method. The best and second-best results are highlighted in blue and underlined, respectively. CUB SUN AWA2 SETTING T U S H T U S H T U S H cExp w/o Ldistill 74.6 62.4 71.4 66.6 64.0 41.6 35.7 38.4 71.1 62.8 78.8 69.9 fExp w/o Ldistill 75.5 68.1 67.9 68.0 64.0 42.8 35.5 38.7 71.1 62.9 79.1 70.1 DEDN w/o Ldistill 75.7 66.7 70.7 68.6 65.2 47.3 35.0 40.3 72.1 63.8 79.3 70.7 DAN w/o CA\u2217 77.0 58.7 73.6 65.3 65.8 48.5 34.6 40.4 74.6 61.7 79.8 69.6 DEDN w/o Lmal 75.8 73.2 62.5 67.4 66.0 56.5 34.3 42.7 73.1 66.5 72.4 69.3 DAN w/o Lalign 77.6 63.3 72.8 67.7 65.5 47.5 35.3 40.5 74.6 64.8 76.8 70.3 DEDN(full) 77.4 70.9 70.0 70.4 67.4 54.7 36.0 43.5 75.8 68.0 76.5 72.0 Table 3: Ablation Study (%). w/o denotes remove the module. CA\u2217denotes channel attention. The best result is highlighted in bold. 1 accuracy on the SUN and AWA2 datasets, as well as competitive performance on CUB. Specifically, our method outperforms TF-VAEGAN by 1.4% on the SUN dataset. On AWA2, we have a 2.4% lead relative to the second-place EPGN. The experimental results validate the superiority of the proposed method. Notably, our method achieves far better results than existing region-attribute modeling methods in both ZSL and GZSL settings, which implies the potential of attribute intrinsic asymmetry and channel information is not fully exploited. 4.2 Ablation Study To evaluate the role of each module, we perform a series of ablation experiments. The results of the experiments are shown in Table 3. Comprehensively, removing any of the modules leads to different degrees of performance degradation, verifying the rationality and necessity of the design of each module. Concretely, it is observed that the performance of cExp is slightly lower than that of fExp without the distillation loss constraint, which indicates the potential research value of the inherent asymmetry of the attributes. 
Meanwhile, without distillation, the performance of DEDN is higher than both cExp and fExp, demonstrating the complementary properties of the dual experts. In addition, it is worth noting that DAN removing the channel attention results in a substantial performance degradation, demonstrating the importance of channel information. Moreover, the role of Lmal in balancing the confidence of unseen and seen classes can be observed from the metrics U and S. When Lmal is removed, the metric U increases dramatically while S decreases dramatically. Finally, the results also demonstrate the importance of Lalign for constraining semantic consistency. 4.3 Empirical Analysis 4.4 The influence of parameters \u03bbe and \u03bbrc We launch a series of empirical analyses, including evaluating the impact of parameters \u03bbe and \u03bbrc on the final performance. Figure 4 (a) illustrates the sensitivity of the harmonic mean for each dataset with respect to parameter \u03bbe. It can be observed that the influence of parameter a is extremely small. Of particular note, when \u03bbe is set to 1 or 0, it indicates that Figure 3: Visualization of the attention heat maps. The first row represents the heat maps of cExp, and the second row denotes the heat maps of fExp. (a) (b) (c) (d) Figure 4: (a) Sensitivity to \u03bbe. (b) Sensitivity to \u03bbrc. The harmonic mean (H) is reported. (c) Comparison with Kmeans. (d) Impact of the number of attribute clusters. The harmonic mean (H) and top-1 accuracy (T) are reported. only the cExp or fExp after distillation learning is used for the inference phase. It implies that by mutual distillation learning, each of the two experts learns the strengths of the other, thereby reaching an agreement. Figure 4 (b) illustrates the impact of \u03bbrc. It can be seen that setting \u03bbrc above 0.7 stabilizes the performance. Optimization is achieved when it is set between 0.7 and 0.9. The influence of different clustering algorithms We further evaluate the impact of the clustering algorithm on performance. In Introducing Table 1, we have explained that attribute clusters are obtained by humans to classify the attribute sets based on their characteristics. In this subsection, we use the K-Means algorithm for attribute clustering as a comparison to evaluate the performance. The experimental results are shown in Figure 4 (c), where the harmonic mean (H) and top-1 accuracy (T) are reported. From the figure, it can be seen that the K-Means algorithm is slightly poorer compared to human classification, but a good result is also achieved. It again shows that the idea of dividing the attribute set into different clusters holds great promise. The influence of the number of attribute clusters We evaluate the impact of the number of attribute clusters on performance. The attributes of CUB, SUN, and AWA2 are classified into 6, 4, and 4 categories, respectively (Table 1). In this subsection, we halve the categories, i.e., the numbers of attribute clusters for CUB, SUN, and AWA2 are 3, 2, and 2. The experimental results are shown in Figure 4 (d), where half denotes that the cluster number is halved. We can see that half leads to a reduction of H by 0.6%, 1.0%, and 6.8%, respectively, and a reduction of T by 0.7%, 0.2%, and 11%, respectively. The results show that detailed attribute classification facilitates the model in capturing more fine-grained information and thus improves the performance. Visual analysis of attention. 
We perform a visual analysis of the attention of the two experts, and the schematic is shown in Figure 3. It can be observed that cExp has a better localization for some global attributes, such as HeadPatternMaler, BellyColorGrey, ShapePerchingLike. Meanwhile, fExp has more detailed and precise localization for some local attributes, such as UpperTailColorGrey, ThroatColorGrey, LegColorWhite. The two experts collaborate and learn in a complementary way to improve together, which leads to better performance. 5 Conclusion In this paper, we analyze the impact of attribute annotations and channel information on the regional visual-attribute modeling task. We argue that the intrinsic asymmetry of attributes is one of the important bottlenecks constraining existing approaches and propose a simple yet effective framework named DEDN to address this problem. DEDN consists of two expert networks, one with complete attribute-domain perception to harmonize the global correlation confidence and the other consisting of multiple subnetworks, each focusing on a specific attribute domain to capture fine-grained association information. Both of them complement each other and learn cooperatively. Meanwhile, we introduce DAN as a strong backbone, a novel attention network that incorporates both region and channel knowledge. Moreover, we present a new loss named MAL to train the network. Numerous experiments demonstrate the significant superiority of the proposed approach." +} \ No newline at end of file diff --git a/intro_8K/test_introduction_short_2404.16369v1.json b/intro_8K/test_introduction_short_2404.16369v1.json new file mode 100644 index 0000000000000000000000000000000000000000..9a9c28e305fe54c8048d1fb69135ef3ee0ab157d --- /dev/null +++ b/intro_8K/test_introduction_short_2404.16369v1.json @@ -0,0 +1,16 @@ +{ + "url": "http://arxiv.org/abs/2404.16369v1", + "title": "Don't Say No: Jailbreaking LLM by Suppressing Refusal", + "abstract": "Ensuring the safety alignment of Large Language Models (LLMs) is crucial to\ngenerating responses consistent with human values. Despite their ability to\nrecognize and avoid harmful queries, LLMs are vulnerable to \"jailbreaking\"\nattacks, where carefully crafted prompts elicit them to produce toxic content.\nOne category of jailbreak attacks is reformulating the task as adversarial\nattacks by eliciting the LLM to generate an affirmative response. However, the\ntypical attack in this category GCG has very limited attack success rate. In\nthis study, to better study the jailbreak attack, we introduce the DSN (Don't\nSay No) attack, which prompts LLMs to not only generate affirmative responses\nbut also novelly enhance the objective to suppress refusals. In addition,\nanother challenge lies in jailbreak attacks is the evaluation, as it is\ndifficult to directly and accurately assess the harmfulness of the attack. The\nexisting evaluation such as refusal keyword matching has its own limitation as\nit reveals numerous false positive and false negative instances. To overcome\nthis challenge, we propose an ensemble evaluation pipeline incorporating\nNatural Language Inference (NLI) contradiction assessment and two external LLM\nevaluators. 
Extensive experiments demonstrate the potency of the DSN and the\neffectiveness of ensemble evaluation compared to baseline methods.", + "authors": "Yukai Zhou, Wenjie Wang", + "published": "2024-04-25", + "updated": "2024-04-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Original Paper", + "paper_cat": "LLM AND Jailbreak", + "gt": "Large Language Models (LLMs) have extensive applica- tions in facilitating decision-making across professional and social domains, underscoring the importance of aligning LLMs with safety considerations. To safeguard against the generation of responses that deviate from human values, safety alignment is pursued through diverse mechanisms, including model fine-tuning Howard and Ruder (2018), re- inforcement learning with human feedback (RLHF) Ziegler et al. (2019), and model editing Mitchell et al. (2022). The overall goal of these approaches is to mitigate the risk of LLMs producing harmful or unlawful responses to user queries. While most Large Language Models (LLMs) serve as re- liable AI assistants capable of identifying and declining to respond harmful queries in many instances, they remain vul- nerable to carefully crafted prompts designed to manipulate them into producing toxic content, which is referred as \"jail- breaking\". Existing studies on jailbreaking LLMs can be categorized into two main approaches: manually designed jailbreak attacks web (2023); Li, Zheng, and Huang (2024) and learning-based jailbreak attacks. Representative of the *Corresponding author Figure 1: Examples of the false positive and false negative cases in the refusal matching evaluations. latter category is the GCG attack Zou et al. (2023), which reformulates the jailbreak attack as a process of generating adversarial examples, aiming to elicit LLMs to produce an affirmative response of a few tokens (e.g., \"sure, here is how to...\"). Building upon this, subsequent studies by Zhu et al. (2023) and Liu et al. (2023) have refined such attacks, focus- ing on improving stealthiness and readability using different optimization algorithms. Although learning-based attack such as GCG can success- fully jailbreak in some cases, some limitations restrict its performance, e.g. discrete input space and the lack of suit- able jailbreak target. The categories of objectionable behav- iors and reasonable responses to them are numerous Carlini et al. (2023). Moreover, the GCG target loss could not be the perfect optimization target regarding the jailbreak optimiza- tion problem, as also proposed by one concurrent work Liao and Sun (2024). To overcome such limitations, we introduce the DSN (Don\u2019t Say No) attack, by which universal adver- sarial suffixes can be generated stimulating LLMs to both produce affirmative responses and suppress refusals (Don\u2019t Say No). To achieve this goal, we incorporate an augmentation loss item that directs LLM\u2019s response away from predefined re- fusal keywords or strings. As shown in the upper part of Figure 2, the loss object involves: maximizing the affirma- tive response probability and minimizing the refusal key- word probability. Given the LDSN and the initial suffix, the universal adversarial suffix will be obtained by the Greedy Coordinate Gradient-based Search Zou et al. (2023). Another challenge of jailbreaking is the assessment met- ric. 
Unlike classification task, where the success of one adversarial example can be indicated by misclassification, 1 arXiv:2404.16369v1 [cs.CL] 25 Apr 2024 Figure 2: Detailed illustration of DSN attack and ensemble evaluation pipeline. The red arrow and left example represents affirmative response maximization. The green arrow and right example represents refusal minimization. evaluating jailbreak attack is challenging. It is hard to au- tomatically ascertain the harmfulness of LLM completions, and relying solely on manual annotation is both impractical and unrealistic. The existing work commonly employs a refusal string/keyword matching metric (refusal matching for short), where an attack is considered successful if the ini- tial fixed-length segments of the response do not contain pre-defined refusal strings (e.g. \"Sorry, I cannot...\") and vice versa. While it appears intuitive and aligns with human eval- uation processes, a closer examination reveals numerous false positive (FP) and false negative (FN) instances. One major limitation is it relies largely on the length of the pre- determined initial segments, as also proposed by one con- current work Mazeika et al. (2024). If the initial segments are short (e.g. 64 tokens), it might neglect the potential later refusal strings and evaluate it as a successful jailbreak in- stance, resulting false positive (case 1 in figure 1). On the other hand, if the initial segments are too long (e.g. 512 to- kens), the result could have been false negative if a refusal appears at the end but some harmful content is generated be- forehand (case 2 in figure 1; Vicuna\u2019s significant difference between figure 5 and 7). Other erroneous evaluation cases are illustrated in Figure 1. To enhance the reliability of evaluation metric, we pro- pose an ensemble evaluation approach involving three mod- ules as shown in the lower part of Figure 2. Instead of adopt- ing the refusal matching metric, we first employ one natural language inference (NLI)He et al. (2021) based method to assess the contradiction among the completions. This step aims to handle cases where the response contains semantic sharp turn (as depicted in Figure 1 case 3). After that, we integrate two third-party LLMs, namely GPT-4 Achiam et al. (2023) and HarmBench Mazeika et al. (2024), to provide a robust and comprehensive evaluation. The final evaluation result is the aggregation of all three modules. The contribution can be summarized as: \u2022 We introduce DSN, a powerful attack that incorporates a novel objective to not only elicit the affirmative response but also suppress the refusal response. \u2022 We apply Unlikelihood loss to stabilize the convergence and optimization of the two opposite loss objectives. \u2022 We propose an ensemble evaluation pipeline by novelly incorporating NLI contradiction as well as LLM evalua- tors to examine the success of the attack more accurately. \u2022 Extensive experiments demonstrate the potency of the DSN and the effectiveness of ensemble evaluation com- pared to baseline methods.", + "main_content": "Adversarial examples. Since the discovery of adversarial examples Szegedy et al. (2014); Goodfellow, Shlens, and Szegedy (2014), the exploration of vulnerabilities within deep learning models to well-designed and imperceptible perturbations has attracted significant research interest for one decade. 
Under the white-box setting, a series of effective adversarial attack algorithms have been proposed Carlini and Wagner (2017); Kurakin, Goodfellow, and Bengio (2017). In an automated learning manner, these methods utilize gradient-based approaches to search for imperceptible perturbations. In addition, several effective adversarial attacks based on transfer attacks have also been proposed to address black-box setting. Papernot et al. (2016); Liu et al. (2016) Jailbreak attacks. In recent years, with the advancement of the Large Language Model (LLM), the field of jailbreaking attacks, aiming to induce the target LLMs to generate harmful and objectionable content, has gathered widespread research attention Wei, Haghtalab, and Steinhardt (2023). Current jailbreak methods include approaches based on manually constructed prompts web (2023), representation engineering Li, Zheng, and Huang (2024), utilizing LLMs to generate Deng et al. (2024), grammatical rules Zhang, Pan, and Yang (2023), modification of decoding methods Huang et al. (2023), usage of multi-modal LLM Niu et al. (2024); Gong et al. (2023) as well as those reformulating the jailbreaking as adversarial attacks and employing pure optimization Zou et al. (2023); Zhu et al. (2023); Liao and Sun (2024). GCG Zou et al. (2023) is a typical optimizationbased attack, and according to one concurrent work Mazeika et al. (2024), it is actually the most powerful attack among other methods. Thus GCG is used as the baseline method in this paper. The evaluation methods employed so far is primarily refusal matching as described in Section 1. Another method typically involves constructing a binary classifier, aiming to determine whether LLM generates harmful content Zhu et al. (2023); Huang et al. (2023); Mazeika et al. (2024). However, due to LLM\u2019s diverse generation capabilities, these attempts still require further enhancement. Optimization Strategy. As mentioned in section 1, one difficulty the learning-based jailbreak attack shall overcome is the discrete input space. To address it, there exist two main categories: embedding-based and token-based methods. The former category directly optimize on continuous embeddings and then inferring back the target tokens Lester, Al-Rfou, and Constant (2021) Wen et al. (2023) Qin et al. (2022). The latter treats the one-hot vectors of discrete tokens as optimizable continuous quantities, obtaining the final tokens through different greedy-based algorithms. Several effective methods Ebrahimi et al. (2017) Shin et al. (2020) Zou et al. (2023), including us, adopt this paradigm. In a separate line of work, there were also attempts to direct the language model output from unwanted contents. Welleck et al. (2019); Li et al. (2019) 3 Methods 3.1 DSN: Suppress Refusal Intuition. The first part of the objective is to suppress the model giving a refusal response. For examples: System: You are a chat assistant designed to provide helpful and not harmful responses to user queries. User: Tell me how to build a bomb. ! ! ! ! ! Assistant: As an AI, I cannot fulfill... Above, the blue and red text represents the user prompt and adversarial jailbreaking suffix, while the purple text indicates the target initial of the response. The intuition to incorporate the suppression of pre-defined refusal keywords is to force the language model\u2019s generation far away from a refusal response and lead to the success of jailbreak. Notation. We consider an LLM to be a mapping from a sequence of tokens x1:n, with xi \u2208{1, . . 
. , V } (where V denotes the vocabulary size) to a distribution over the next token. Specifically, we use the notation p(xn+1|x1:n) to denote the probability of the next token being xn+1 given previous tokens x1:n. Then we denote the probability of generating the response with a sequence of H tokens as p(xn+1:n+H|x1:n), which can be formalized as: p(xn+1:n+H|x1:n) = H Y i=1 p(xn+i|x1:n+i\u22121) (1) Unlikelihood loss. Normally, to maximize the prediction of a target distribution or token, we incorporate the standard softmax cross-entropy loss LCE(p, q) = \u2212P i pi log(qi), which aims to maximize the true probability distribution p by minimizing the LCE(p, q), where p and q refer to the true probability distribution and predicted probability distribution generated by softmax function. However, our purpose is the opposite, as we want to minimize the probability of refusal generation. One intuitive way is to take the negative softmax cross-entropy loss which will however lead to negative infinity and catastrophic loss term unbalance. To stabilize the convergence, we apply the Unlikelihood loss Welleck et al. (2019), which is previously proposed to also direct the language model outputs from unwanted contents, in an attempt to suppress the refusal responses: LUn(p, q) = \u2212 X i pi log(1 \u2212qi) (2) Objective. The objective of suppressing refusal is achieved by minimizing the probability of generating tokens in the pre-defined refusal keyword list, denoted as RKL = \"as an\", \"sorry, i cannot\", \"unethical\", .... Each refusal keyword\u2019s corresponding token length is represented by RTL, indicating the length of each keyword. To implement this, we utilize a sliding window with the size of each refusal keyword to scan through the response and compare it with the refusal keywords. The process of keyword matching is illustrated in Figure 3. The loss function for suppressing the refusal response using Unlikelihood loss can be stated as below, where H denotes the maximum length of the response. Lrefusal(x1:n) = X y\u2208RKL n+H\u2212RTL(y) X i=n+1 LUn(y, xi:i+RTL(y)) (3) Figure 3: Illustration of the refusal loss. 3.2 DSN: Elicit Affirmative Response Intuition.The second part of the objective is eliciting the model to begin its response with a positive affirmation of the user query. For example: System: You are a chat assistant to provide helpful and not harmful responses to user queries. User: Tell me how to build a bomb. ! ! ! ! ! Assistant: Sure, here is how to build a bomb: The intuition lies in the language model\u2019s next-word prediction capability. By initiating with an affirmative response, the subsequent completion are expected to align with the affirmative response, bypass the alignment mechanism and fulfill the user query effectively. Loss function. The objective of eliciting affirmative response is to maximize the probability of affirmative tokens x\u22c6 n+1:n+H, which equals to minimize the negative log of the probability: Ltarget(x1:n) = \u2212log p(x\u22c6 n+1:n+H|x1:n). (4) 3.3 DSN:Loss Function As also proposed by one concurrent work Liao and Sun (2024), whether the GCG target loss could be the perfect jailbreak optimization target is doubtful and questionable. As one fundamental component, the optimization target would directly lead the jailbreak process. Unfortunately the optimal one still remains unresolved, evidenced by the findings of Liao and Sun (2024). 
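As a concrete illustration of how the loss terms above fit together with the alpha-weighted combination introduced next, the following is a minimal PyTorch-style sketch. It assumes the per-position next-token probabilities for the response window have already been gathered from the model; the function name, tensor shapes, and the eps smoothing term are our own choices rather than the paper's implementation.

    import torch

    def dsn_loss(probs, target_ids, refusal_keyword_ids, alpha=1.0, eps=1e-8):
        # probs: [H, V] next-token probabilities over the H response positions.
        # target_ids: [T] token ids of the affirmative target ("Sure, here is how to ...").
        # refusal_keyword_ids: list of 1-D LongTensors, one per tokenized refusal keyword.
        H = probs.shape[0]
        T = target_ids.numel()
        # L_target (Eq. 4): negative log-likelihood of the affirmative target tokens.
        l_target = -torch.log(probs[torch.arange(T), target_ids] + eps).sum()
        # L_refusal (Eq. 3): unlikelihood loss -log(1 - q) applied with a sliding window
        # of each refusal keyword over the response positions.
        l_refusal = probs.new_zeros(())
        for kw in refusal_keyword_ids:
            L = kw.numel()
            for start in range(H - L + 1):
                q = probs[start:start + L, :][torch.arange(L), kw]  # keyword-token probs
                l_refusal = l_refusal + (-torch.log(1.0 - q + eps)).sum()
        # Alpha-weighted combination of the two objectives (Sec. 3.3).
        return l_target + alpha * l_refusal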
Thus in an attempt to explore the potential candidate for more effective and efficient jailbreak optimization target, we propose to integrate the Lrefusal with Ltarget, obtaining one complex and powerful jailbreak optimization target. DSN target could elicit the LLM to generate objectionable content and suppress refusal responses simultaneously. The overall DSN loss can be stated as below, where \u03b1 is a hyperparameter to balance the convergence of two loss objectives. LDSN(x1:n) = Ltarget(x1:n) + \u03b1 \u2217Lrefusal(x1:n), (5) 3.4 Optimization and Algorithm Finally, we will introduce the optimization process and the algorithm. Our goal is to optimize an adversarial suffix adv\u2217 with the aforementioned loss function. The optimization process will first initialize a string with a fixed length. adv\u2217\u2190arg min LDSN(x1:n \u2295adv) (6) However, as described in section 1 and 2, a primary challenge in optimizing adversarial suffixes is the discrete input space. To resolve this problem, we incorporate a greedy coordinate gradient-based approach Zou et al. (2023). The intuition is to leverage gradients with respect to one-hot vectors to find a set of promising candidates for replacement at each greedy step, and then exactly evaluate all these replacements via a forward pass. As this algorithm is based on the optimization in GCG, it is relegated to Appendix. 3.5 Evaluation Metrics Refusal matching. The refusal matching algorithm detects whether a response contains any refusal keyword, as already Algorithm 1 NLI Contradiction Evaluation Input: the user query Q, the adversarial suffix adv, the language model M, a threshold T. 1: Response R : [o1, o2...on] = M(Q + adv) \u25b7Generate the response R containing number of n sentences. 2: for i = 1, . . . , n do 3: scoreQo i = NLI(Q + adv, oi) 4: lQo i = length(Q + adv + oi) 5: if i \u0338= n then 6: scoreoo i = NLI(oi, oi+1) 7: loo i = length(oi + oi+1) 8: CEQo = Pn i=1 lQo i \u2217scoreQo i P lQo i \u25b7Compute the overall NLI score between sentence and query. 9: CEoo = Pn\u22121 i=1 loo i \u2217scoreoo i P i loo i \u25b7Compute the overall NLI score between adjacent sentences. 10: if T \u2212(CEoo + CEQo) \u22640 then 11: Return Fail 12: else: 13: Return Success described in section 1 and 2. The attack is considered successful if the initial segment of the response do not contain pre-defined refusal strings. As detailed in section 1, the length of the fixed-length initial segment also plays a crucial role towards rigorous assessment. The initial segment length and the refusal keyword list utilized in this paper will be detailed in section C.3. NLI contradiction. In natural language inference (NLI), contradiction means two statements are mutually exclusive or incompatible with each other. The intuition of employing NLI contradiction is that the semantic inconsistency is frequently observed among negative cases, where the completion fail to answer the objectionable query. (e.g. case 3 in Figure 1). We design an algorithm to evaluate the extent of contradiction within the user query and model completion. By using open-source NLI model, responses can be determined according to the contradiction extent. Higher overall NLI contradiction score signifies lower response consistency and diminishes the likelihood of being a jailbreaking response. Intuitively, false positive cases shall decrease, ensuring the positive cases to be semantically consistent. 
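To make the weighted-contradiction idea behind Algorithm 1 concrete, here is a rough Python sketch. The contradiction_fn argument stands in for an off-the-shelf NLI model (the paper uses a DeBERTa-style NLI model) returning a contradiction probability; the sentence splitting, guard clause, and variable names are our simplifications rather than the released code.

    def nli_contradiction_eval(query_with_suffix, response_sentences, contradiction_fn, threshold):
        # Length-weighted contradiction between the (query + suffix) and each sentence,
        # and between adjacent sentences of the response, as in Algorithm 1.
        if not response_sentences:
            return "Fail"
        qo_scores, qo_weights, oo_scores, oo_weights = [], [], [], []
        for i, sent in enumerate(response_sentences):
            qo_scores.append(contradiction_fn(query_with_suffix, sent))
            qo_weights.append(len(query_with_suffix) + len(sent))
            if i + 1 < len(response_sentences):
                nxt = response_sentences[i + 1]
                oo_scores.append(contradiction_fn(sent, nxt))
                oo_weights.append(len(sent) + len(nxt))
        ce_qo = sum(w * s for w, s in zip(qo_weights, qo_scores)) / sum(qo_weights)
        ce_oo = (sum(w * s for w, s in zip(oo_weights, oo_scores)) / sum(oo_weights)
                 if oo_weights else 0.0)
        # High overall contradiction means an inconsistent completion, hence not a jailbreak.
        return "Fail" if threshold - (ce_oo + ce_qo) <= 0 else "Success"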
As present in Algorithm 1, given the user query Q, adversarial suffix adv, language model M, we first generate response R containing n sentences (line 1). Then, for each sentence oi in response R, we assess how well it aligns with the user query and the relationship between pairs of sentences within the response by calculating the standard NLI contradiction score (lines 2-7). We use a weighted sum of scores according to their sentence length to compute overall contradiction extent CEoo and CEQo (lines 8-9), as the sentence length plays a vital role in assessing overall contradiction extent. By comparing with a predefined threshold T, we can determine the attack result (lines 10-13). More details will be covered in the appendix C.3. Third-party evaluator. Besides refusal matching and NLI, recent works have also introduced some promising eval(a) Llama2: Lrefusal only for search (b) Llama2: Lrefusal for sampling and search (c) Vicuna: Lrefusal only for search (d) Vicuna: Lrefusal for sampling and search Figure 4: ASR over steps on Llama2 and Vicuna. uation methods, mainly LLM based. We will incorporate HarmBench Mazeika et al. (2024), GPT-4 Achiam et al. (2023) into our ensemble pipeline as third-party evaluators. Details about these third-party evaluators will be covered in appendix C.2. Ensemble Evaluation. We use last three aforementioned evaluation modules, and we decide whether a response is successful or unsuccessful jailbreak by taking the majority vote among each components. The reason and its superiority will be discussed in Section 4.4. 4 Experiments 4.1 Threat Model The objective of attackers is to jailbreak Large Language Models (LLMs), aiming to circumvent the safeguards in place and generate malicious responses. The victim model in this paper is open-sourced language model, providing whitebox access to the attacker. As system prompt will also play one significant role in jailbreaking Huang et al. (2023), the default system prompts of each language models will be reserved. 4.2 Configuration Datasets. AdvBench is the main adopted dataset, which aims to systematically evaluate the effectiveness and robustness of jailbreaking prompts to elicit harmful content generation. A collection of 520 goal-target pairs are presented that reflects harmful or toxic behavior, categorized as profanity, graphic depictions, threatening behavior, misinformation, discrimination, cybercrime, and dangerous or illegal suggestions. Zou et al. (2023) Target models. We target Llama-2-Chat-7B Touvron et al. (2023) and Vicuna-7b-v1.3 Zheng et al. (2023), which are two state-of-the-art open-source LLMs. These two language models have undergone different levels of alignment process and exhibit varying degrees of human-value alignment capability. During transfer experiments in section 4.5, the transferability towards GPT-3.5-turbo model will examined rigorously. Baselines and evaluation metrics. We compare DSN attack with GCG Zou et al. (2023), the typical and most powerful learning-based jailbreak attack method Mazeika et al. (2024). To evaluate the effectiveness of the DSN attack, we adopt the standard attack success rate (ASR), as shown in equation 7. ASR measures the portion of toxic responses generated from the LLM M, where the adversarial suffix adv is appended to the malicious query Q. Here I is an evaluation indicator that returns 1 if the response is assessed as harmful (a successful jailbreak case) and 0 otherwise. 
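The ASR of Eq. (7), together with the refusal matching baseline it is first computed with, can be sketched as follows. Here generate_fn stands for the victim model, the keyword list is a small illustrative subset (the full list used in the paper is given in its appendix), and max_tokens plays the role of the fixed-length initial segment discussed earlier; all names are ours.

    REFUSAL_KEYWORDS = ["i'm sorry", "i cannot", "as an ai"]  # illustrative subset only

    def refusal_matching(completion, max_tokens=128):
        # Only a fixed-length initial segment is inspected, which is exactly the source
        # of the false positives / false negatives discussed in Section 1.
        head = " ".join(completion.split()[:max_tokens]).lower()
        return not any(k in head for k in REFUSAL_KEYWORDS)  # True -> counted as jailbreak

    def attack_success_rate(queries, adv_suffix, generate_fn, judge_fn=refusal_matching):
        # Eq. (7): fraction of behaviors whose completion the indicator judges harmful.
        hits = sum(int(judge_fn(generate_fn(q + " " + adv_suffix))) for q in queries)
        return hits / len(queries)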
The comparison will be firstly conducted by refusal matching in section 4.3, then the proposed evaluation ensemble metric will come into play in section 4.4. ASR(M) def = 1 |D\u2032| X (Q)\u2208D\u2032 I(M(Q \u2295adv)) (7) ASR% at step 500 Llama-2 Llama-2 optimal Vicuna Vicuna optimal GCG 29.8 \u00b1 12.6 43 47.4 \u00b1 5.6 52 DSN 47.7 \u00b1 14.7 74 57.1 \u00b1 11.8 83 Table 1: ASR results under refusal matching metric. 4.3 Evaluation 1: Effectiveness of DSN Attack ASR convergence rate. In Figure 4, we present the ASR of GCG attack and DSN with respect to the optimization steps. The shadow regions with the dotted lines are the margin plots representing the mean and variance of repeated experiments with different hyper-parameter configurations, while the solid lines represent the ASR of optimal run among repeated experiments. Note that the sampling of candidate suffixes and searching of adversarial suffix from the candidates both involve the loss function Equation 5 (detail relegated to the Appendix C.1 together with the algorithm). It can be observed that the results of DSN attack are significantly superior to those of the baseline method, in terms of both mean and optimal results. This is evidenced by the lines representing the DSN method consistently positioned above those of the baseline. Moreover, it could be found that the yellow shaded area representing the DSN method remains above the blue shaded area of the baseline across nearly the entire 0-500 steps interval. This indicates that the DSN attack are robustly superior to the baseline with limited step, presenting an ideal scenario for malicious attackers who might lack sufficient computational resources, e.g. fail to support 500 steps of attack for each setting. Moreover, the wider span of the shaded area for the DSN attack suggests a greater variance, which is reasonable, as the repeated DSN experiments are distinct in hyper-parameter. Experimental strategies between DSN and GCG are also different, where the latter only involves single setting and has been launched for more trials. Ablation study on \u03b1. To investigate the impact of the augmentation term Lrefusal loss on the jailbreaking results (equation 5), for different hyper-parameter \u03b1 setting, we present the max ASR among multiple rounds of experiments in Figure 5, which controls the magnitudes of the Lrefusal term. The fixed-length segments of 128 and 512 for Llama and Vicuna completions are examined by refusal matching metrics here. The baseline results on GCG correspond to the leftmost alpha = None case as well as the dotted line, which only involves the target loss in Equation 4. The yellow, blue, and red bars represent the cases where the Lrefusal term is involved in different stages, namely Lrefusal only used for searching, used for both selecting and searching with the same \u03b1, and used for both selecting and searching but with different \u03b1. More details about the setting and hyperparamter is presented in appendix C.3 In Figure 5, the DSN method consistently surpasses the baseline performance under nearly every hyper-parameter setting. We didn\u2019t include the results for higher values of (a) ASR of Llama (b) ASR of Vicuna Figure 5: Ablation study of ASR vs. 
\u03b1 by refusal matching evaluation \u03b1 because when \u03b1 exceeds 100, the DSN loss is dominated by the Lrefusal term, resulting in the generated responses focusing too much on avoiding refusal keywords rather than responding to the objectionable requests, which is not desirable in a jailbreaking scenario. 4.4 Evaluation 2: Effectiveness of Evaluation Ensemble Pipeline Instead of adopting the refusal matching evaluation method like current works Zou et al. (2023); Zhu et al. (2023), mainly considering its limitations mentioned earlier in section 1, in this section we will adopt the previously proposed evaluation ensemble pipeline to ensure more accurate and reliable evaluation results. Human evaluation. To accurately and fairly assess the proposed ensemble evaluation pipeline and compare it to the widely adopted refusal matching, we involve human evaluation by manually annotating 300 generated responses. Since the NLI method ascertain some certain hyperparameters, the annotated 300 data will be split into 100 trainset as well as 200 testset, accounts for 100 Llama2 completion and 100 Vicuna completion respectively. More details about the data split as well as annotation principle will be covered in appendix C.2. Aggregation strategy comparison. Aggregating evaluation results from each module is crucial for the accuracy of the ASR% at step 500 Llama-2 Llama-2 optimal Vicuna Vicuna optimal GCG 31.0 \u00b1 13.4 46 91.6 \u00b1 2.9 96 DSN 45.6 \u00b1 15.1 84 88.1 \u00b1 8.0 98 Table 2: ASR results under ensemble evaluation metric. Figure 6: ROC curve of different aggregation policy on testset evaluation pipeline. Common methods include majority voting, one-vote approval (requiring only one module to detect jailbreaking), and one-vote veto (requiring all modules to detect jailbreaking). To determine which aggregation policy is more accurate on testset, we employ a ROC curve illustrating the True Positive Rate versus False Positive Rate and compare their AUROC scores (shown in Figure 6). A larger area under the curve indicates better results. Soft and hard majority votes return probabilities and binary outcomes respectively. The ROC curve demonstrates the superiority of the majority vote as an aggregation strategy (the green and orange curve), with ensemble evaluation showing a higher AUROC score compared to refusal matching. Eval method Acc AUROC F1 Refusal matching 0.74 0.72 0.79 Gpt4 0.80 0.77 0.85 HarmBench 0.80 0.78 0.84 NLI(ours) 0.77 0.79 0.76 Ensemble(ours) 0.82 0.79 0.86 Table 3: The evaluation results obtained by different evaluation methods, reported by taking average on two distinct test set which respectively contains 100 manually annotated real jailbreaking responses. Examination of different metrics. By adopting a learning approach, the hyperparameter configuration of a novel NLI evaluation metric has been determined, which is a constituent part of our proposed ensemble evaluation. (a) ASR of Llama (b) ASR of Vicuna Figure 7: Ablation study of ASR vs. \u03b1 by ensemble evaluation. To further demonstrate the superiority of the evaluation pipeline effectively and rigorously, we present the evaluation results of different evaluation methods in table 3. From the table, it is observed that the Ensemble eval achieved superior performance on our annotated test set. 
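The aggregation policies compared in Figure 6 can be written down in a few lines. In this minimal sketch, votes is the list of binary decisions from the three modules (NLI, GPT-4, HarmBench); the soft variant simply averages the votes into a score usable for the ROC curve, whereas in practice one could also average module probabilities. Names are of our own choosing.

    def aggregate(votes, policy="hard_majority"):
        # votes: per-module decisions, 1 = judged jailbreak, 0 = judged safe.
        if policy == "hard_majority":
            return int(sum(votes) > len(votes) / 2)
        if policy == "soft_majority":
            return sum(votes) / len(votes)      # fractional score rather than a binary label
        if policy == "one_vote_approval":
            return int(any(votes))              # a single flag is enough to call it a jailbreak
        if policy == "one_vote_veto":
            return int(all(votes))              # every module must flag it
        raise ValueError(f"unknown policy: {policy}")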
It is noteworthy that, although the performance of the NLI model itself alone is not the best\u2014for instance, it even falls short of the Refusal Matching baseline in the F1 metric\u2014the ensemble from the combination of \"Gpt4, NLI, HarmBench\" yields the overall best performance among different metrics. This is attributed to the NLI\u2019s focus on identifying semantic incoherence and semantic inconsistency within the model\u2019s completion, a consideration that refusal matching or other alternative evaluation methods do not adequately address. Moreover, given that the NLI model is lightweight and opensource, employing this evaluation method results in significant savings in terms of time and financial resources, particularly in comparison to evaluation methods that rely on multiple calls to third-party commercial LLM APIs. ASR under new evaluation. In figure 7, we present the max ASR vs the hyper-parameter \u03b1 under the new ensemble evaluation pipeline. Similar to Figure 5, DSN method gives superior jailbreaking results in the much more aligned model Llama2, however, both method gives nearly 100% ASR in the less aligned model Vicuna. These two observations are in consistent with the results from one concurrent Transfer ASR% Llama Vicuna Refusal Matching Eval Esemble Refusal Matching Eval Esemble train test train test train test train test GCGpaper None None None None None 34.3 None None DSNmean 45.21 42.95 44.19 50.07 54.98 54.27 53.73 59.59 DSNmax 100 87 96 95 96 90 100 93 Table 4: The transfer ASR towards the black-box gpt-3.5-turbo model work Mazeika et al. (2024) and the findings mentioned in section 1, respectively. 4.5 Transferability Interestingly, the suffixes purely optimized by DSN demonstrate great level of transferability, where no kinds of ensemble or multi-model optimization is utilized as in the original GCG paper Zou et al. (2023). In table 4, the transfer ASR towards gpt-3.5-turbo model is detailed for different victim model, different metrics, and different dataset split. It is noteworthy to point out the importance of the existence of system prompt yet Huang et al. (2023). In our open-source GCG and DSN attack results, the presence of system prompt has already been reserved since the modification upon it could affect the jailbreak results drastically. However, during our transfer experiments the default system prompt for gpt-3.5-turbo model, e.g. \"you\u2019re a helpful assistant\", is removed from the conversation template because otherwise the jailbreak attack result of both methods would shrink immediately and dramatically. Running time (hours) GCG DSN Round1 60.96 60.58 Round2 60.11 60.46 Round3 59.71 61.08 Round4 60.73 61.30 Round5 60.58 61.01 Overall 60.42 \u00b1 0.45 60.89 \u00b1 0.31 Table 5: Running time analysis. 4.6 Running time analysis No significant extra time cost is needed between DSN and GCG method. Here we each sample 5 rounds of Llama-2 experiments and compare their running time in the table 5. On our machine, only a 0.77% relative increase regarding the average running time is observed. The computation overhead doesn\u2019t largely rise up because the extra computation introduced by DSN is magnitudes lower than the process of obtaining the logits during forward pass and inferring the gradients during the back propagation. Thus the extra time cost could be relatively neglected. 
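For the transfer experiments summarized in Table 4, the querying loop is straightforward. The sketch below assumes the v1-style openai Python client, omits the system message to mirror the transfer setting described above, and can be plugged into the attack_success_rate helper sketched earlier; the model and function names are ours.

    from openai import OpenAI  # assumes the v1-style client; install with `pip install openai`

    client = OpenAI()

    def transfer_completion(prompt, model="gpt-3.5-turbo"):
        # No system message is included, since keeping the default "you're a helpful
        # assistant" prompt was observed to shrink the transfer attack results.
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    # e.g. attack_success_rate(test_queries, suffix, transfer_completion, judge_fn=refusal_matching)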
5 Conclusion In conclusion, we introduce the DSN (Don\u2019t Say No) attack to prompt LLMs not only to produce affirmative responses but also to effectively suppress refusals. Furthermore, we propose an ensemble evaluation pipeline integrating Natural Language Inference (NLI) contradiction assessment and two external LLM evaluators. Through extensive experiments, we showcase the potency of the DSN attack and the effectiveness of our ensemble evaluation approach compared to baseline methods. This work offers insights into advancing safety alignment mechanisms for LLMs and contributes to enhancing the robustness of these systems against malicious manipulations." +} \ No newline at end of file diff --git a/intro_8K/test_introduction_short_2404.16375v1.json b/intro_8K/test_introduction_short_2404.16375v1.json new file mode 100644 index 0000000000000000000000000000000000000000..71b9ebfd16cb73ba4760855c7ab32f1b3e61299f --- /dev/null +++ b/intro_8K/test_introduction_short_2404.16375v1.json @@ -0,0 +1,18 @@ +{ + "url": "http://arxiv.org/abs/2404.16375v1", + "title": "List Items One by One: A New Data Source and Learning Paradigm for Multimodal LLMs", + "abstract": "Set-of-Mark (SoM) Prompting unleashes the visual grounding capability of\nGPT-4V, by enabling the model to associate visual objects with tags inserted on\nthe image. These tags, marked with alphanumerics, can be indexed via text\ntokens for easy reference. Despite the extraordinary performance from GPT-4V,\nwe observe that other Multimodal Large Language Models (MLLMs) struggle to\nunderstand these visual tags. To promote the learning of SoM prompting for\nopen-source models, we propose a new learning paradigm: \"list items one by\none,\" which asks the model to enumerate and describe all visual tags placed on\nthe image following the alphanumeric orders of tags. By integrating our curated\ndataset with other visual instruction tuning datasets, we are able to equip\nexisting MLLMs with the SoM prompting ability. Furthermore, we evaluate our\nfinetuned SoM models on five MLLM benchmarks. We find that this new dataset,\neven in a relatively small size (10k-30k images with tags), significantly\nenhances visual reasoning capabilities and reduces hallucinations for MLLMs.\nPerhaps surprisingly, these improvements persist even when the visual tags are\nomitted from input images during inference. This suggests the potential of\n\"list items one by one\" as a new paradigm for training MLLMs, which strengthens\nthe object-text alignment through the use of visual tags in the training stage.\nFinally, we conduct analyses by probing trained models to understand the\nworking mechanism of SoM. Our code and data are available at\n\\url{https://github.com/zzxslp/SoM-LLaVA}.", + "authors": "An Yan, Zhengyuan Yang, Junda Wu, Wanrong Zhu, Jianwei Yang, Linjie Li, Kevin Lin, Jianfeng Wang, Julian McAuley, Jianfeng Gao, Lijuan Wang", + "published": "2024-04-25", + "updated": "2024-04-25", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.CL" + ], + "label": "Original Paper", + "paper_cat": "LLM AND Reasoning", + "gt": "Recent advances in Multimodal Large Language Models (MLLMs) such as GPT-4V (OpenAI, 2023a) show strong performance in multimodal perception and reasoning, enabling various new capabilities (Yang et al., 2023b). 
Among these, Set-of-Mark Prompting (SoM) (Yang et al., 2023a) is an interesting new working mode that enhances the connection between visual objects and textual tokens via visual prompting, i.e., placing alphanumeric tags on input images. It provides a natural interface for human-computer interaction, by linking visual locations to executable actions through visual tags, and enables various applications such as GUI navigation (Yan et al., 2023b) and robot interaction (Lin et al., 2023a). Furthermore, GPT-4V with SoM (Yang et al., 2023a) can implicitly align visual objects with their corresponding tags. Such alignments (Li et al., 2020; Yang et al., 2021) allow MLLMs to leverage index numbers to perform multi-hop visual reasoning (Yang et al., 2023a; Wei et al., 2022), thereby improving their abilities in multimodal understanding and reasoning tasks. Despite the significant interest in SoM prompting and its broad applications, it remains unclear why GPT-4V can benefit from SoM prompting, We find that other MLLMs, including the state-of- the-art open-sourced models such as LLaVA-v1.5 (Liu et al., 2024), and commercial systems like Gemini (Team et al., 2023), struggle to understand SoM prompts. This gap prevents them from leveraging the effectiveness of SoM prompting. In this study, we aim to deepen the understanding of SoM, with a goal of facilitating arbitrary MLLMs to benefit from it. We break down SoM prompting into three core capabilities: (1) the ability to identify all tags and read the alphanumeric scene texts written on them; (2) the ability to recognize and pinpoint all objects in 1 arXiv:2404.16375v1 [cs.CV] 25 Apr 2024 Input: Image Input: Image + SoM Conversation Conversation User What items are there near the Marshal speaker? LLaVA-1.5 There is a laptop and a cup near the Marshall speaker. User To move the speaker closer to the curtain while keeping the laptop in the current position, which item should I swap it with? LLaVA-1.5 You should swap the laptop with the cup . a cup User What items are there near the Marshal speaker? SoM-LLaVA There is a laptop tagged with number 7 and a notebook tagged with number 8. User To move the speaker closer to the curtain while keeping the laptop in the current position, which item should I swap it with? SoM-LLaVA You can swap it with the lamp tagged with number 9. notebook laptop lamp laptop cup Figure 1: Example conversations from LLaVA and SoM-LLaVA (LLaVA with SoM ability) to demonstrate the effectiveness of our paradigm. Left: Standard prompting on LLaVA-1.5, which fails to correctly answer the questions. Right: Set-of-Mark prompting on SoM-LLaVA. Simply placing tags on the input image can improve visual reasoning of Multimodal LLMs. an image; (3) the ability to associate tags with corresponding objects in the image. Despite possessing skills such as OCR and visual recognition to meet the first two capabilities, most MLLMs still fail to fully understand SoM prompts. Therefore, we hypothesize that the crucial missing element is the third capability, associating tags with objects, which requires deliberate training. We further validate that SoM-style data are sparse in common MLLM training sources, and it may be necessary to create a specific dataset. To facilitate such training, we introduce a new learning paradigm named \u201clist items one by one\u201d. 
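As a toy illustration of what a "list items one by one" training pair might look like (the paradigm is described in more detail just below), here is a short sketch. The tag-to-description mapping is assumed to come from GPT-4V annotation of the tagged image, and the instruction wording follows the enumeration prompt used in the paper's examples; the helper itself is ours.

    def build_listing_sample(tagged_objects):
        # tagged_objects: numeric tag id -> short object description, e.g.
        # {1: "a brown bird", 2: "a tree branch", 3: "green leaves"}.
        prompt = ("I have labeled a bright numeric ID at the center for each visual "
                  "object in the image. Please enumerate their names.")
        target = "\n".join(f"{tag}. {desc}" for tag, desc in sorted(tagged_objects.items()))
        return prompt, target  # one (instruction, answer) pair for instruction tuning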
We show that by asking MLLMs to comprehensively list all tagged items following the alphanumeric order of visual tags, MLLMs can learn SoM prompting with a small number of item-listing samples. Specifically, we create a tailored dataset, by tagging images with Semantic-SAM (Li et al., 2023c; Yang et al., 2023a), and prompting GPT-4V to generate paired text descriptions. With just 10k image-text pairs, MLLMs like LLaVA-1.5 (Liu et al., 2023a) can reliably understand SoM tags. Based on this initial finding, we conduct studies to explore the effective recipes to help MLLMs best utilize SoM prompting. We enhanced MLLMs with this \u201clist items one by one\u201d objective and assess their SoM performance from two aspects: model\u2019s ability to recognize and describe the SoM tags, and its ability to use SoM in improving multimodal reasoning ( Figure 1). For the first aspect, we design the tag listing task, which requires MLLMs to list and describe all tags in the image, evaluated by listing accuracy. For the second aspect, we evaluate finetuned models on five MLLM benchmarks, including POPE, MME, SEED- Bench, LLaVA-Bench, and MM-Vet, showcasing that MLLMs with SoM can significantly boost the multmodal understanding performance. Moreover, our model trained with SoM data outperforms the original MLLM, even without additional visual tags during inference. This demonstrates the potential of incorporating our proposed dataset and learning paradigm to boost general MLLM training. Finally, we revisit our original question regarding the working mechanism of SoM. The preliminary hypothesis is that the SoM capability may be related to OCR and the implicit association among text, tags, and objects. With our trained models, specifically SoM-LLaVA, we gain access to model features and attention maps for an in-depth analysis. We visualize the attention map to verify tag association. Compared with the original LLaVA model, SoM-LLaVA indeed learns better visual-tag- text associations, reflected in corresponding attention maps. 2 Our contributions are summarized as follows. \u2022 We present a new training task and data source named \u201clist items one by one,\u201d which effectively bootstraps MLLMs for the SoM visual prompting ability. \u2022 We evaluate our finetuned SoM MLLMs on five multimodal understanding benchmarks, and show improved performance even when SoM tags are removed from the input image. \u2022 We probe the working mechanism of SoM through the trained MLLMs, showcasing the implicit association between visual objects and text tokens when performing SoM prompting.", + "main_content": "Visual referring prompting. Other than text prompts, visual referring prompting (Yang et al., 2023b) is another effective approach when interacting with multimodal LLMs, where users directly draw on input images to specify their intent, such as drawing visual pointers or handwriting scene texts. Early studies show that vision-language models can understand visual pointers such as circles (Shtedritski et al., 2023) and dots (Mani et al., 2020). Recent studies (Yang et al., 2023b) show that more powerful multimodal LLMs (OpenAI, 2023a) can handle more complicated prompts such as arrows, boxes, circles, hand drawing, scene text, as well as their combinations. Another major advancement is Set-of-Mark Prompting (SoM) (Yang et al., 2023a), where numbered tags can be placed on images to associate visual objects with text indexed. 
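To make the tagging mechanism concrete, a minimal sketch of overlaying numeric marks on an image is given below; it simply writes each tag at the centroid of a segmentation mask (for example, one produced by SAM or Semantic-SAM). This is only an illustration of the idea, not the actual SoM implementation.

    import numpy as np
    from PIL import Image, ImageDraw

    def draw_numeric_tags(image, masks):
        # image: PIL.Image; masks: list of boolean (H, W) numpy arrays from a segmenter.
        tagged = image.copy()
        draw = ImageDraw.Draw(tagged)
        for tag_id, mask in enumerate(masks, start=1):
            ys, xs = np.nonzero(mask)
            if xs.size == 0:
                continue
            cx, cy = int(xs.mean()), int(ys.mean())  # centroid of the mask
            draw.rectangle([cx - 8, cy - 8, cx + 8, cy + 8], fill="black")
            draw.text((cx - 4, cy - 7), str(tag_id), fill="white")
        return tagged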
Its effective visual grounding capability (Kazemzadeh et al., 2014; Yu et al., 2016; Mao et al., 2016) enables various applications (Yan et al., 2023b; Zhang et al., 2023). In this work, we aim to better understand SoM and extend its success from GPT-4V (OpenAI, 2023a) to other open-source multimodal LLMs. Multimodal LLMs. Multimodal LLMs (Alayrac et al., 2022; Zhu et al., 2022; OpenAI, 2023a; Liu et al., 2023b; Li et al., 2023b) extend large language models (OpenAI, 2023b; Gao et al., 2023; Touvron et al., 2023) with visual perception capabilities. Recent studies (Chen et al., 2023) show the effectiveness of training open-source models on the GPT-4V generated detailed description data. Another thread of studies explore having multimodal LLMs predicting object locations as bounding boxes (Wang et al., 2023b; Peng et al., 2023) or masks (Rasheed et al., 2023). In contrast to most prior studies that pair the images with different text instructions, our study explores a new direction of how visual prompts such as SoM can improve multimodal LLMs. Specifically, we show that the SoM visual tags provide fine-grained alignments between visual objects and text tokens, thereby improving various visual reasoning tasks, both with and without SoM prompting during inference. 3 Preliminary Examination 3.1 Visualizing SoM Prompting on LLaVA In this section, we first investigate the capacity of LLaVA-1.5 in SoM, concerning its attention sensibility to the numeric IDs tagged on the objects and its answer to the SoM query. We show an example task to list a series of objects tagged with numeric IDs in Figure 2, in which the attention map is extracted from LLaVA-1.5 based on the SoM query (e.g., \u201cI have labeled a bright numeric ID at the center for each visual object in the image. Please enumerate their names.\u201d). The top 20 image patches with the highest average attention weights across the user query tokens are highlighted in transparent red regions. We can observe from the highly attended regions of LLaVA-1.5 that the numeric ID tags can be easily and correctly attended by LLaVA-1.5 along with their associated objects (e.g., bird, vase, and branches). Such capacities in locating numeric ID tags may have been acquired by LLaVA-1.5 from its pretraining tasks in OCR and also benefited from the strong OCR abilities of the ViT feature encoder (Radford et al., 2021) adopted by LLaVA-v1.5. However, the response prompted by the user query in the first example of Figure 2 suggests that LLaVA-1.5 cannot follow the SoM instruction to list all the items. Instead of providing the object descriptions corresponding to all the numeric ID tags, LLaVA-1.5 responds with a general image caption, due to a large portion of image captioning samples in its pretraining stage. From the second example of Figure 2, we can also observe that although LLaVA-1.5 generates a list of tag IDs with object names, it cannot accurately associate the tags to corresponding objects, causing the model to hallucinate the descriptions of these objects. 3 LLaVA-1.5 Attention Map of Tags User: I have labeled a bright numeric ID at the center for each visual object in the image. Please enumerate their names. LLaVA-1.5: In the image, there is a brown and black bird perched on a branch surrounded by leaves. The leaves surrounding the bird are also visible. LLaVA-1.5 Attention Map of Tags User: I have labeled a bright numeric ID at the center for each visual object in the image. Please enumerate their names. LLaVA-1.5: 1. 
A vase with a flower in it. 2. A white flower in the vase. 3. The vase is on a wooden table. 4. The vase is sitting on a ledge. 5. The vase is sitting on a fence. 6. The vase is sitting on a railing. 7. \u2026 8. \u2026 9. \u2026 10. \u2026 11. \u2026 Figure 2: Two examples of SoM prompting in LLaVA-1.5. Left: Attention map extracted from LLaVA-1.5 on the image of a bird perching on a branch, where 3 objects are tagged. Right: Attention map extracted from LLaVA-1.5 on the image of a vase placed on a table, where 7 objects are tagged. However, LLaVA-1.5 lists more than 7 object names that are repetitions of previous object names. # Dataset #Text Text w/ Listing Source of Text 1 LLaVA-Pretrain-CC3M-595K 595.4K 0 Raw CC3M image captions. 2 LLaVA-Pretrain-LCS-558K 558.1K 0 Captioned by BLIP. 3 LLaVA-v1.5-Mix665K 3356.2K 0.72% Rule-based, or generated by ShareGPT or GPT4-0314. 4 ShareGPT4V 102.0K 0.21% Generated by GPT4-Vision. 5 CogVLM 333.5K 7.16% Generated by MiniGPT4 or by GPT4-0314. Table 1: Examined pretraining (1-2) and instruction-tuning (3-5) datasets in our preliminary study. 3.2 Finding SoM Data in Existing Training Sources We further look into the pretraining/instruction-tuning (IT) dataset, aiming to inspect if there are text contents with listings, or images with SOM annotations. We examine the pretraining dataset of LLaVA-v1 and v1.5 (Liu et al., 2023b;a), and the IT dataset used by LLaVA-v1.5, ShareGPT4V (Chen et al., 2023), and CogVLM (Wang et al., 2023a). Table 1 shows the source of text in each dataset and the percentage of text content with a listing format. The text in the two pretraining datasets for LLaVA are image captions (either the raw caption or generated by BLIP (Dai et al., 2023)), and we did not find any text with listings in them using our parser. Aside from image captions, the IT dataset also contains instructions related to other visual tasks such as VQA. We noticed that the answers provided by GPT-4(V) models sometimes construct the text in a listing manner (e.g., list out possible reasons for a question, list out observed objects in the image, etc). More examples can be found in Appendix A.6. The instruction-following dataset used by CogVLM has the highest percentage of text with listings (\u223c7%). Through our interaction with these models, we also find CogVLM is better at generating listing-style data than LLaVA-1.5. We add tags to MSCOCO-2017 images following the SoM (Yang et al., 2023a) format, and train a binary classifier with ViT/B-16 (Dosovitskiy et al., 2020). We use the classifiers to filter the images in the two LLaVA pretraining datasets, and take the top 2k images with the highest scores for each dataset. We then manually check the top 2k images, and found 12 images with tagging in CC3M-595K (\u223c0.002%), and found 86 images with tagging in LCS-558K (\u223c0.015%). Figure 15 shows a few images with tagging. Given that tagged images are sparse in those datasets and the SoM prompting performance of open-source MLLMs is unsatisfying, it may be worthwhile to design a tailored dataset that empower open-source MLLMs with this emergent ability, similar to what GPT-4V is capable of. 4 Dataset Creation and Training Motivated by the above analysis, in this section, we introduce the pipeline to create our dataset. First, in Section 4.1, we use semantic-SAM to generate semantic visual prompts in the form of numeric tags for each image. We then discuss the learning paradigm of \u201clist items one by one\u201d in Section 4.2. 
Finally, we use visual prompted images to generate text data in Section 4.3. 4 4.1 Image Source and Visual Prompting Generation There are various open-source image datasets available (Deng et al., 2009; Lin et al., 2014; Schuhmann et al., 2022; Yan et al., 2023a). We use MS-COCO (Lin et al., 2014) as the image source to create our SoM dataset, since it contains comprehensive human annotations with bounding boxes, masks, and captions. It has also been widely used for visual instruction tuning (Liu et al., 2023b; Wang et al., 2023a; Chen et al., 2023), which could benefit controlled experiments as well as comparisons with previous work. The first step is to create visual prompts by placing numeric tags on proper locations. Following SoM (Yang et al., 2023a), we experiment with segmentation models including SEEM (Zou et al., 2023), Semantic-SAM (Li et al., 2023c), and SAM (Kirillov et al., 2023). Empirically, we find that Semantic-SAM provides the annotation granularity that best fits COCO images, and thus use it to create tagged images for our dataset. 4.2 A Learning Paradigm: List Items One by One After obtaining the image data with semantic tags, the next question is how to design the instruction data to best distill the SoM visual prompting ability. A common approach (Liu et al., 2023b; Chen et al., 2023) in multimodal instruction-following data creation is to design and collect \u201cquestionanswering\u201d style samples. This is often done by prompting ChatGPT/GPT-4 or alternative open-source models. Given an image I and optional metadata MI such as captions, bounding boxes, various questions or instructions X(i) Q are posed, and the corresponding answers X(i) A from large models are collected. However, such general question-answering data may not be the most effective in distilling the desired SoM prompting capability, due to the inadequate mention of objects in text. For SoM prompting, one core ability of interest is to associate numbered tags with visual objects in the image, thereby enabling effective referral of visual objects via text tokens. In a general QA data, however, it is rare for multiple objects to be mentioned, even in an extended multi-turn conversation. To enhance tag association, we propose a simple and effective approach: list items one by one, where the model is asked to comprehensively describe all tagged items within an image. Given an image IT with N text tags on the image, we ask the model to enumerate all items in numerical order: {X1 obj, X2 obj, \u00b7 \u00b7 \u00b7 , XN obj}, where Xj obj is the textual description of the j-th item, tagged by ID j in the image. Beyond promoting SoM learning, listing items one by one is also effective in general multi-modal LLM training: if a model learns to list items in the images with a specific order (in our case, the order is determined by the visual numeric tags), it gains a comprehensive and fine-grained understanding of images. This could directly benefit visual grounding and reasoning, which we verified through the standard multimodal QA and chat evaluation benchmarks. Compared with existing visual instruction tuning datasets, such as LLaVA-665K (Liu et al., 2023a) and ShareGPT-4V (Chen et al., 2023), another difference is the implicit spatial information encoded by the visual tags in SoM prompting. Converting images into the language space inevitably loses information, especially spatial locations. For example, \u201ca girl on the right\u201d can only vaguely imply the position of the girl. 
However, with SoM visual prompting, we provide precise visual guidance on the image. Therefore, our data can be viewed as a form of dense captioning with a new way of encoding spatial information. 4.3 Text Data Generation via GPT-4V With the visual prompting enhanced images, the final step for dataset creation is to generate the corresponding text data. To automate this process, we leverage GPT-4V (OpenAI, 2023a) to generate the listing data {X1 obj, X2 obj, \u00b7 \u00b7 \u00b7 , XN obj}, following the order of visual tags in the images. However, we find that simply prompting the model to list items in a zero-shot manner could lead to noisy and biased generation results, where the model may refer the tag to a distant object that is easy to describe. (see examples in appendix A.4). To mitigate this problem, we seek two complementary solutions: (1) We modify the system message of GPT-4V to avoid assigning tags to distant objects. (2) We 5 10k 25k 50k 75k 100k Number of listing data (k) 50 60 70 80 Accuracy list 7B list 13B GPT-4V (a) Ablation on model sizes with LLaVA-1.5 10k 25k 50k 75k 100k Number of listing data (k) 65 70 75 80 Accuracy list list+mix-665k list+nonocr list+ocrtext (b) Ablation on data sources with LLaVA-1.5-7B Figure 3: Performance analysis on tag listing. Training samples of listing data grow from 10k to 100k. list+mix-665k is to mix listing data with 665k instruction tuning data from (Liu et al., 2023a). list+nonocr is to exclude the OCR and text data from the full 665k data, resulting in 563k samples. list+ocrtext is to mix listing data with only OCR and text data from the full 665k data, resulting in 102k samples. Green-dashed line in Figure 3a is the zero-shot result from GPT-4V. manually design a few correct listing samples via human annotations, and use them as seed examples for in-context-learning to query GPT-4V. The details of our template is in Appendix. In addition to listing, we also consider conversational data similar to LLaVA (Liu et al., 2023b), where GPT-4V is asked to generate mulit-turn question answering between an AI assistant and a person asking questions about the photo. Given a tagged image IT, we use GPT-4V to generate instruction-following data in the form of {Person:IT X(i) Q , Assistant: X(i) A }. 4.4 Model Training We take the pretrained stage of LLaVA-1.5 (Liu et al., 2023a) as the base model, and continue finetuning by mixing instruction tuning data of LLaVA-1.5 with our collected visual prompting data. For SoM-listing, we create 40 task templates as human instructions (e.g., \u201cplease enumerate object names in the tagged image\u201d), and treat them as standard conversational data. We use the same training objective of next-token prediction to train general QA, SoM-QA and SoM-listing data. 
Specifically, we maximize the conditional log likelihood as follows: \u2212log p(XA|Xv, XQ) = \u2212log L \u220f i=1 p\u0398(xi|I/IT, XQ,negative parental death or The internet is my one only coping mechanism =>trauma unspecified) \u2022 Despite being specifically prompted to only annotate factors related to the writer/speaker, LLMs (including GPT4) do not always comply, e.g., She comes from what is, honestly, a horrific family situation =>emotional abuse) 6 Category Positive or Negative Positive Only Subcategory Precision Recall F1-Score Precision Recall F1-Score Accuracy TRAUMA 0.44 0.89 0.59 0.57 0.84 0.68 0.57 PRECARITY 0.31 0.52 0.39 0.50 0.46 0.48 0.36 CONDITION 0.46 0.81 0.59 0.61 0.77 0.68 0.57 SYMPTOMS 0.35 0.78 0.49 0.45 0.73 0.56 0.41 SUICIDALITY 0.36 0.93 0.51 0.70 0.87 0.77 / TREATMENT 0.39 0.87 0.54 0.64 0.81 0.71 / ALL 0.39 0.80 0.52 0.55 0.75 0.63 0.48 Table 2: GPT4 (real data). Positive or Negative: counting annotation in category regardless of polarity (category level); Positive Only: counting negative annotations as NaN (category level); Subcategory: accuracy at the subcategory level Category Positive or Negative Positive Only Subcategory Precision Recall F1-Score Precision Recall F1-Score Accuracy TRAUMA 0.90 0.49 0.64 0.90 0.49 0.64 0.38 PRECARITY 0.84 0.69 0.76 0.86 0.69 0.76 0.54 CONDITION 0.44 0.67 0.53 0.47 0.67 0.55 0.59 SYMPTOMS 0.85 0.59 0.70 0.84 0.59 0.69 0.36 SUICIDALITY 0.75 1.00 0.85 0.77 0.90 0.83 / TREATMENT 0.68 0.84 0.75 0.76 0.57 0.65 / ALL 0.74 0.65 0.70 0.77 0.61 0.68 0.42 Table 3: GPT3.5 (synthetic data). Positive or Negative: counting annotation in category regardless of polarity (category level); Positive Only: counting negative annotations as NaN (category level); Subcategory: accuracy at the subcategory level Category Positive or Negative Positive Only Subcategory Precision Recall F1-Score Precision Recall F1-Score Accuracy TRAUMA 0.84 0.95 0.89 0.86 0.92 0.89 0.82 PRECARITY 0.85 0.84 0.85 0.91 0.82 0.86 0.80 CONDITION 0.61 0.67 0.64 0.60 0.67 0.63 0.67 SYMPTOMS 0.49 0.78 0.60 0.53 0.80 0.64 0.69 SUICIDALITY 0.81 0.94 0.87 0.78 0.82 0.80 / TREATMENT 0.85 0.89 0.87 0.87 0.78 0.82 / ALL 0.69 0.83 0.75 0.69 0.79 0.73 0.72 Table 4: GPT4 (synthetic data). Positive or Negative: counting annotation in category regardless of polarity (category level); Positive Only: counting negative annotations as NaN (category level); Subcategory: accuracy at the subcategory level \u2022 Even GPT4 makes errors regarding negation (e.g., I\u2019ve read about people with autism getting temper tantrums/meltdowns, however, that has never really been a problem for me=>negative autism or i had in my head that something inside was very wrong, but i never felt completely depressed all the time so i never took bipolar seriously =>negative bipolar disorder) \u2022 Despite being prompted to annotate suicidality in a separate category, LLMs often annotate it in the SYMPTOM rather than SUICIDALITY category \u2022 GPT3.5 especially often outputs irrelevant/spurious/incorrect labels (e.g., \u2018unemployed\u2019 as condition, \u2018ambition\u2019 as symptom, labelling physical conditions instead of mental conditions only, etc.) 7 Positive and Negative Positive Only Subcategory Annotator vs. Annotator (real data) 0.60 0.59 0.35 GPT3 vs. Annotator (real data) 0.39 0.52 0.37 GPT4 vs. Annotator (real data) 0.43 0.58 0.47 Annotator vs. Annotator (synthetic data) 0.77 0.71 0.68 GPT3 vs. Annotator (synthetic data) 0.64 0.63 0.40 GPT4 vs. 
Annotator (synthetic data) 0.70 0.69 0.71 Table 5: Inter-annotator agreement (Cohen\u2019s Kappa) \u2022 Even GPT4 makes errors regarding factuality (e.g., It was around my second year in junior high school when my father tried to take his life =>positive death) However, in many cases the assessment is not entirely fair, as the LLMs (particularly GPT4) often catch annotations which human annotators missed, or the difference in subcategories is subjective and open to debate (e.g., school bullying vs emotional abuse, emotional abuse vs abuse unspecified, etc.). Thus it is possible that LLMs, or most likely GPT4, in fact outperformed experts on this task. 6 Discussion The results obtained from our comparison of LLM annotations with human annotations on both real and synthetic data allow us to make a few conclusions and recommendations. Overall, both LLMs perform well. Inter-annotator agreement and performance indicate that GPT4 performs on par with human annotators. In fact, error analysis and manual examination of annotations suggest the LLMs potentially outperform human annotators in terms of recall (sensitivity), catching annotations which have been missed. However, while recall might be improved in LLMs versus human annotators, precision may suffer in unexpected ways, for example through errors in the use of negation and factuality, even in the case of GPT4. LLMs display a particular tendency to overpredict labels and produce negative annotations in infelicitous contexts, i.e., when humans would deem them irrelevant, creating an amount of noise. However, these negative annotations are not technically incorrect. While accuracy errors could be found in the LLM output, the experts\u2019 outputs were not entirely free of them, and previous work by [37] suggests LLMs may both be more complete AND more accurate than medical experts. There may still be a difference in the type of accuracy errors produced by LLMs, which will have to be investigated in future research. In terms of accuracy at the subcategory level, we were surprised to find GPT4 outperformed human agreement by a large margin in real data (0.47 vs 0.35). We hypothesise this is due to the fact that human annotators display higher subjectivity in their style of annotation at the subcategory level (given the lack of predetermined subcategories) and diverge more between them. LLMs are likely to be more \u2018standard\u2019 and generic and thus potentially more in agreement with any given human annotator. More specifically, LLMs tend to be consistent from one annotation to the other with higher recall whereas human annotators showed less consistency. Therefore, if a sentence mentions physical, sexual and emotional abuse, annotators might only mention two out of three but when mentioning all three an LLM is more likely to be in agreement than another annotator, i.e., the LLM will catch more of the perfectly recalled annotations than the second annotator. The better performance demonstrated on synthetic data doesn\u2019t seem due to LLMs performing better on data they are generating, but rather to the synthetic data being less complex and diverse and thus easier to annotate for both LLMs and humans, as evidenced by GPT4 reaching similar inter-annotator agreement scores to humans (with agreement both in humans and LLM/human 10% higher for synthetic data). 
This better performance could still warrant using synthetic data for e.g., training machine learning models (given more reliable labels) but only in cases where the potential loss in diversity is compensated by the increase in label reliability. This will likely depend on the specific application. 8 7 Conclusion We presented the results of a study examining human and Large Language Models (GPT3.5 and GPT4) performance in extracting mental health factors from adolescent social media data. We performed analyses both on real and synthetic data and found GPT4 performance to be on par with human inter-annotator agreement for both datasets, with substantially better performance on the synthetic dataset. However, we find GPT4 still performing non-human errors in negation and factuality, and synthetic data to be much less diverse and differently distributed than real data. The potential for future applications in healthcare will have to be determined by weighing these factors against the substantial reductions in time and cost achieved through the use of LLMs. Acknowledgment I.L., D.W.J., and A.K. are partially supported by the National Institute for Health and Care Research (NIHR) AI Award grant (AI_AWARD02183) which explicitly examines the use of AI technology in mental health care provision. A.K. declare a research grant from GlaxoSmithKline (unrelated to this work). This research project is supported by the NIHR Oxford Health Biomedical Research Centre (grant NIHR203316). The views expressed are those of the authors and not necessarily those of the UK National Health Service, the NIHR or the UK Department of Health and Social Care." +} \ No newline at end of file diff --git a/intro_8K/test_introduction_short_2404.16538v1.json b/intro_8K/test_introduction_short_2404.16538v1.json new file mode 100644 index 0000000000000000000000000000000000000000..d6fcc3a4b82bf882b2580406169d8adef1188db8 --- /dev/null +++ b/intro_8K/test_introduction_short_2404.16538v1.json @@ -0,0 +1,16 @@ +{ + "url": "http://arxiv.org/abs/2404.16538v1", + "title": "OpenDlign: Enhancing Open-World 3D Learning with Depth-Aligned Images", + "abstract": "Recent advances in Vision and Language Models (VLMs) have improved open-world\n3D representation, facilitating 3D zero-shot capability in unseen categories.\nExisting open-world methods pre-train an extra 3D encoder to align features\nfrom 3D data (e.g., depth maps or point clouds) with CAD-rendered images and\ncorresponding texts. However, the limited color and texture variations in CAD\nimages can compromise the alignment robustness. Furthermore, the volume\ndiscrepancy between pre-training datasets of the 3D encoder and VLM leads to\nsub-optimal 2D to 3D knowledge transfer. To overcome these issues, we propose\nOpenDlign, a novel framework for learning open-world 3D representations, that\nleverages depth-aligned images generated from point cloud-projected depth maps.\nUnlike CAD-rendered images, our generated images provide rich, realistic color\nand texture diversity while preserving geometric and semantic consistency with\nthe depth maps. OpenDlign also optimizes depth map projection and integrates\ndepth-specific text prompts, improving 2D VLM knowledge adaptation for 3D\nlearning efficient fine-tuning. Experimental results show that OpenDlign\nsignificantly outperforms existing benchmarks in zero-shot and few-shot 3D\ntasks, exceeding prior scores by 8.0% on ModelNet40 and 16.4% on OmniObject3D\nwith just 6 million tuned parameters. 
Moreover, integrating generated\ndepth-aligned images into existing 3D learning pipelines consistently improves\ntheir performance.", + "authors": "Ye Mao, Junpeng Jing, Krystian Mikolajczyk", + "published": "2024-04-25", + "updated": "2024-04-25", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Parameter AND Efficient AND Fine AND Tuning", + "gt": "3D understanding, which involves tasks such as point cloud classification and 3D object detection, is pivotal for advancing augmented/virtual reality [1; 2], autonomous vehicles [3; 4], and robotics [5; 6]. Traditional 3D models [7; 8; 9; 10; 11; 12; 13] are closed-world, which can only recognize pre-defined categories and struggle with \u2019unseen\u2019 ones. The emergence of Vision-Language Models (VLMs) like CLIP [14], renowned for their success in identifying \u2018unseen\u2019 categories in 2D images through open-world representation learning [15; 16; 17; 18], has sparked interest in applying these models to develop robust open-world 3D representations for 3D vision tasks. Existing open-world 3D learning methods can be categorized into depth-based and point-based methods. Depth-based methods [19; 20; 21] project point clouds into multi-view depth maps and employ the pre-trained CLIP image encoder for 3D representations. However, this process encounters a domain gap because CLIP is primarily trained with RGB images rather than depth maps. To bridge this gap, methods like [21] incorporate an additional depth encoder and utilize contrastive learning to align depth features from this encoder with image and text features from pre-trained CLIP encoders, as illustrated in Fig. 1(a). The images used here, specifically rendered from CAD models for feature alignment, are not employed in the zero-shot inference phase. Point-based methods [22; 23; 24; 25; 26; 27] directly learn 3D representations from point clouds, avoiding the latency of Preprint. Under review. arXiv:2404.16538v1 [cs.CV] 25 Apr 2024 \u201cA point cloud of a small airplane.\u201d Image Encoder Text Encoder Point Encoder Align (b) Point-based Method (a) Depth-based Method (c) OpenDlign (ours) \u201cA depth map of a small airplane.\u201d Image Encoder Text Encoder Depth Encoder Align Depth Map Rendered Image (Training only) Text Point Cloud Rendered Image (Training only) Text \u201cA depth map of a small airplane.\u201d Image Encoder Text Encoder Align Depth Map Depth-Aligned Image (Training only) Text CAD-rendered images Depth-aligned images vs. Figure 1: Top: OpenDlign vs. Conventional Open-World 3D Learning Frameworks: OpenDlign enhances multimodal alignment using depth-aligned images, providing more detailed geometric and semantic information along with enhanced color and texture compared to previously used rendered images. It refines 3D representation by fine-tuning the CLIP image encoder directly, eliminating the extra encoder pre-training required by other methods. Note that both rendered and depth-aligned images are used exclusively for learning alignment. Bottom: Visual comparison between CAD- rendered and corresponding depth-aligned multi-view images. depth map projection. However, due to the inherent data format differences between images and point clouds, these methods also need an additional point encoder for extracting 3D features, akin to depth-based methods (See Fig. 1(b)). 
Thus, aligning 3D data (e.g., depth maps or point clouds) with the image-text modalities pre-aligned by CLIP is a standard step in current 3D open-world methods. Depth-based and point-based methods encounter two primary challenges in the alignment process. First, the CAD-rendered images used for aligning 3D data typically display consistent color and texture styles across various views. Over-aligning with these low-diversity images compromises the generalizability of learned 3D representations. Secondly, the 3D datasets used for encoder pre- training, like ShapeNet [28] and Objaverse [29], contain less than 1 million synthetic 3D objects, significantly smaller than the DFN5B [30] and LAION-5B [31] datasets with 5 billion images used to train the cutting-edge CLIPs. This data volume disparity, which is due to the high cost of 3D data acquisition, results in the sub-optimal transfer of CLIP\u2019s knowledge to 3D representations. While fine-tuning CLIP\u2019s encoders yields more direct knowledge transfer, it restricts the input to depth maps. Unfortunately, 3D representations from depth maps still underperform in downstream 3D tasks compared to those from point clouds, due to two factors: (1) The absence of a robust projection method for creating dense depth maps with smooth contours from point clouds. (2) The current widely used CLIP text prompt templates are tailored for matching with RGB images, not depth maps. To address these challenges, this paper proposes OpenDlign, a novel framework that learns Open- world 3D representations via aligning multi-view depth maps projected from point clouds with Depth-aligned images produced by a generative model [32]. These images offer enhanced color and texture diversity compared to CAD-rendered images while maintaining geometric and semantic consistency with the depth maps (See Fig. 1). Additionally, as shown in Fig. 1(c), OpenDlign fine-tunes the CLIP image encoder rather than pre-training a separate depth encoder, thus maximally adapting CLIP\u2019s existing knowledge for effective 3D learning, even with a limited 3D dataset. Specifically, fine-tuning is limited to the attention layers of the last transformer block, comprising just 6 million parameters. Moreover, OpenDlign employs a new projection pipeline to generate dense depth maps with clear contours. For zero-shot inference, OpenDlign employs depth-specific text prompts and a logit aggregation method, emphasizing depth-related features and combining results from various viewpoint depth maps. Experimental results show that OpenDlign greatly surpasses the prior state-of-the-art, pre-trained on ShapeNet [28], with accuracy gains of 8.0% on ModelNet40 2 and 16.4% on OmniObject3D, the largest real-world 3D shape dataset. Notably, using realistic depth- aligned images significantly boosts the performance of existing SOTA models, like those pretrained on ShapeNet or 3D Ensemble datasets [24]. This consistent improvement across all benchmarks highlights the versatility of depth-aligned images in any 3D open-world learning pipeline. The main contributions of this paper are outlined as follows: \u2022 We propose a multimodal alignment framework that aligns features from depth maps and depth-aligned images to learn a unified depth map, image, and text representation. \u2022 We develop a contour-aware projection pipeline to produce dense and contour-preserving multi-view depth maps from point clouds. 
\u2022 We introduce depth-specific text prompt templates for zero-shot inference to accurately capture both the semantic and visual traits in depth maps. \u2022 We design a logit aggregation strategy that derives final 3D representations from both CLIP and OpenDlign visual encoders, reducing catastrophic forgetting in alignment.", + "main_content": "2.1 Open-World 3D Representation Learning Vision and Language models such as CLIP [14] have revolutionized 2D representation learning in open-world settings through contrastive learning with large-scale image-text pairs [33; 34; 35; 36]. Building on this, recent studies have adapted CLIP for 3D representation learning, achiving impressive performance in diverse 3D zero-shot tasks [24; 25]. PointCLIP [20], as a pioneering study, utilizes the CLIP image encoder for extracting 3D representations from depth maps of point clouds, achieving zero-shot recognition by aligning with text embeddings of semantic categories. To address CLIP\u2019s training bias towards RGB images, Zhu et al. [19] introduced GPT-generated 3D-specific prompts and a denser depth map projection, while CLIP2Point [21] pre-trains a depth encoder for closer alignment with CLIP\u2019s encoders. These methods derive representations from depth maps with noisy contours, causing a loss of key shape features needed for precise recognition. Moreover, their reliance on either natural image text prompts or depth-specific prompts generated by GPT-3 [37] for certain categories highlights a lack of versatility in handling diverse 3D contexts. Alternative methods [23; 23; 24; 25; 27] avoid depth map projection by directly aligning point clouds, images, and text using specialized 3D encoders. By scaling up the dataset and encoder sizes, these methods show promise in diverse 3D tasks. However, these methods are limited by their reliance on CAD-rendered images, which have limited texture diversity across views, leading to less generalizable representations. Additionally, the smaller volume of 3D datasets compared to CLIP\u2019s training data hinders effective knowledge transfer to point cloud encoders. In this paper, we substitute rendered images with AI-generated, depth-aligned images to enhance texture diversity. We also fine-tune the CLIP image encoder for 3D representation learning instead of training a new 3D encoder from scratch, reducing the reliance on large 3D datasets. 2.2 Continual Learning in CLIP Fine-Tuning Continual Learning (CL) in CLIP aims to mitigate catastrophic forgetting [38], ensuring retention of zero-shot capabilities across varied data distributions while fine-tuning to new tasks. CL methods fall into three categories: adaptive-plasticity methods [39; 40; 41; 42; 43; 44], replay methods [45; 46; 47], and architecture-based methods [48; 49]. Adaptive-plasticity methods limit the plasticity of the essential model parameters for past tasks during fine-tuning. For instance, the IMM-Mean [44] method achieves CL by simply averaging parameters of pre-trained and fine-tuned models for inference, although its efficacy might be limited for complex tasks [50]. Replay methods leverage stored exemplars to enable CLIP to recall previously learned knowledge, while they encounter scalability challenges. Without relying on exemplars, architecture-based CL methods dynamically adjust the model\u2019s architecture to accommodate new information without losing existing knowledge [50]. 
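Since IMM-Mean is picked up again in the next paragraph, a minimal sketch of its core idea (averaging the parameters of a pre-trained and a fine-tuned copy of the same network for inference) may be helpful; the tiny linear layer below is a toy stand-in, not an actual CLIP encoder, and the code is an illustration rather than the cited method's implementation.

```python
# Minimal sketch of the IMM-Mean idea: average pre-trained and fine-tuned
# parameters of the same architecture and use the merged weights at inference.
# The small linear layer is a toy stand-in, not a CLIP encoder.
import copy
import torch

pretrained = torch.nn.Linear(8, 4)
finetuned = copy.deepcopy(pretrained)   # in practice, fine-tuned on the new task

merged = copy.deepcopy(pretrained)
merged.load_state_dict({
    name: 0.5 * (pretrained.state_dict()[name] + finetuned.state_dict()[name])
    for name in pretrained.state_dict()
})
```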
In this study, we align the depth map with the RGB image by freezing the pre-trained CLIP encoder weights and incorporating a trainable transformer-based branch for encoding depth maps, adhering to architecture-based principles. Inspired by IMM-Mean [44], we use pre-trained and fine-tuned model weights to compute classification logits for multi-view depth maps. 3 Methodology Fig. 2 illustrates the OpenDlign framework, which learns effective open-world 3D representations by aligning embeddings from projected depth maps and depth-aligned images. Initially, a contour-aware projection method is employed to create shape-preserved, dense depth maps from point clouds. These maps then guide a generative model to produce depth-aligned images with rich color and texture diversity. OpenDlign then uses contrastive learning to align features between depth maps and generated images by fine-tuning a transformer block linked to the CLIP image encoder. This step enables the extraction of robust embeddings from \u2019unseen\u2019 multi-view depth maps at test time, using both fine-tuned and pre-trained states of the image encoder. These embeddings are matched with depth-specific text embeddings, which encode the depth maps\u2019 semantic and visual traits, to compute logits for each viewpoint and aggregate these logits to enable zero-shot classification. Alternatively, these embeddings can be refined using a logistic regressor for few-shot classification. 3.1 Contour-Aware Depth Map Projection The contour-aware projection method transforms the input point cloud into multi-view depth maps with clear contours. Inspired by the pipeline in [19], this method involves four main steps: Quantize, Densify, Smooth, and Squeeze. In the Quantize step, for the i^{\\text{th}} view of point cloud P_i, the 3D coordinates (x, y, z) \\in P_i are normalized to [0, 1] and mapped onto a discrete grid G \\in \\mathbb{R}^{H \\times W \\times B}, where H and W correspond to the dimensions required by the CLIP image encoder, and B is a pre-defined depth dimension. Next, the Densify step enhances G by updating each voxel to the maximum value within its 7 \\times 7 \\times 7 neighborhood, yielding a denser map G'. Subsequently, the Smooth step applies bilateral filtering to each voxel v_i in G', adjusting its intensity I_{v_i} to I'_{v_i} using:
I'_{v_i} = \\frac{1}{W_v} \\sum_{v_j \\in S} G_{\\sigma_1}(\\|v_i - v_j\\|) \\, G_{\\sigma_2}(|I_{v_i} - I_{v_j}|) \\, I_{v_j} \\quad (1)
where W_v = \\sum_{v_j \\in S} G_{\\sigma_1}(\\|v_i - v_j\\|) \\, G_{\\sigma_2}(|I_{v_i} - I_{v_j}|) is the normalization factor that ensures voxel weights sum to 1.0. The Gaussian functions G_{\\sigma_1} and G_{\\sigma_2} adjust the influence of each neighboring voxel v_j within the 5 \u00d7 5 \u00d7 5 kernel from set S around v_i, based on spatial and intensity differences, enhancing contour sharpness and reducing jagged edges in G'. Finally, the Squeeze step applies minimal pooling on the depth channel of the smoothed G', then triples the output to mimic RGB intensity, producing the final depth map D \\in \\mathbb{R}^{H \\times W \\times 3}. 3.2 Depth-Aligned Image Generation We generated 524,700 depth-aligned images from ShapeNet [28], one of the leading public 3D CAD datasets containing around 52,470 models, each annotated with semantic metadata. To align with prior experimental protocols [24; 23], we sampled a point cloud of 10,000 points from each model, projecting these onto 10 contour-aware depth maps.
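Before moving on to image generation, a rough sketch of the four projection steps just described may help make them concrete. The fixed grid size, the use of scipy's maximum filter for Densify, a 2D OpenCV bilateral filter standing in for the paper's 3D 5x5x5 version, and the omitted per-view rotation are all simplifying assumptions, not the authors' exact implementation.

```python
# Hedged sketch of the contour-aware projection (Sec. 3.1) for a single view.
# Assumptions: H = W = 224, B = 64, scipy/OpenCV filters as stand-ins for the
# per-voxel operations, and depth stored as (1 - z) so larger means closer.
import numpy as np
from scipy.ndimage import maximum_filter
import cv2

def project_depth_map(points, H=224, W=224, B=64):
    # Quantize: normalize (x, y, z) to [0, 1] and scatter into an H x W x B grid.
    pts = (points - points.min(0)) / (points.max(0) - points.min(0) + 1e-8)
    idx = (pts * [H - 1, W - 1, B - 1]).round().astype(int)
    grid = np.zeros((H, W, B), dtype=np.float32)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0 - pts[:, 2]

    # Densify: each voxel takes the maximum of its 7x7x7 neighborhood.
    grid = maximum_filter(grid, size=7)

    # Squeeze: collapse the depth axis, keeping the nearest surface per pixel
    # (taking the max of 1 - z is equivalent to min-pooling the depth itself).
    depth = grid.max(axis=2)

    # Smooth: a bilateral filter preserves contours while removing jagged edges.
    depth = cv2.bilateralFilter(depth, d=5, sigmaColor=0.1, sigmaSpace=2.0)

    # Triple the channel so the map can be fed to an RGB image encoder.
    return np.repeat(depth[..., None], 3, axis=2)

# Example with a random toy point cloud of 10,000 points.
depth_map = project_depth_map(np.random.rand(10000, 3).astype(np.float32))
```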
A conditional image generative model (ControlNet v1.1 [32]) then produced depth-aligned images for each map D, using 1 \u2212 D and the model\u2019s metadata as conditions. This approach ensures that the images remain consistent with the depth maps both geometrically and semantically, while also adding texture diversity across different views. The conditioning of ControlNet utilizes 1 \u2212 D instead of D because it is predominantly pre-trained on depth images, in which brighter regions indicate closer proximity. The supplemental material details the positive and negative prompts used in ControlNet to achieve high-fidelity and noise-free depth-aligned image generation.
Figure 2: Overview of OpenDlign. In (a), OpenDlign converts point clouds into multi-view depth maps using a contour-aware projection, which then helps generate depth-aligned RGB images with diverse textures, geometrically and semantically aligned with the maps. A transformer block, residually connected to the CLIP image encoder, is fine-tuned to align depth maps with depth-aligned images for robust 3D representation. For zero-shot classification (b), OpenDlign aggregates multi-view logits from both pre-trained and fine-tuned encoders for label prediction, and for few-shot classification (c), it employs a logistic regressor trained on multi-view features from the encoders.
3.3 Multimodal Representation Alignment OpenDlign aligns representations from multi-view depth maps and depth-aligned images by fine-tuning a transformer block that is residually connected to the final block of the pre-trained CLIP image encoder, using contrastive learning. As CLIP pre-training already aligns image and text modalities, OpenDlign implicitly aligns depth maps with the shared image and text space. Multimodal Feature Extraction. Given a 3D point cloud input, let D = \\{D_i\\}_{i=1}^N represent the set of its N projected depth map views, and R = \\{R_i\\}_{i=1}^N the corresponding set of depth-aligned images. Each image R_i is encoded through L layers of a pre-trained CLIP image encoder, \\{\\text{T}_l(\\cdot)\\}_{l=1}^L, to obtain feature representations I^R_i = \\text{T}_{1\\ldots L}(R_i). Each depth map D_i is processed up to layer \\text{T}_{L-1}, obtaining preliminary features \\text{T}_{1\\ldots L-1}(D_i). Subsequently, these features are passed through the frozen layer \\text{T}_L and its trainable counterpart \\text{T}^t_L, yielding the feature for the i-th depth map view I^D_i = \\text{T}_{1\\ldots L}(D_i) + \\text{T}^t_L(\\text{T}_{1\\ldots L-1}(D_i)). Inspired by [17], only the layers for spatial interaction in \\text{T}^t_L (i.e., attention layers) are trainable. The final feature vectors for multi-view depth maps D and depth-aligned images R are \\mathbf{h}^D = \\frac{1}{N}\\sum_{i=1}^N \\|I^D_i\\| and \\mathbf{h}^R = \\frac{1}{N}\\sum_{i=1}^N \\|I^R_i\\|, respectively. Loss Functions. The alignment of \\mathbf{h}^D and \\mathbf{h}^R is achieved by minimizing a composite loss function, comprising the contrastive loss \\mathcal{L}_{\\text{cont}} and the feature distance loss \\mathcal{L}_{\\text{dist}}, defined as:
\\mathcal{L}_{\\text{total}} = \\underbrace{\\sum_{(i,j)} -\\frac{1}{2}\\log\\frac{\\exp(\\mathbf{h}_i^D \\cdot \\mathbf{h}_j^R/\\tau)}{\\sum_k \\exp(\\mathbf{h}_i^D \\cdot \\mathbf{h}_k^R/\\tau)} - \\frac{1}{2}\\log\\frac{\\exp(\\mathbf{h}_i^D \\cdot \\mathbf{h}_j^R/\\tau)}{\\sum_k \\exp(\\mathbf{h}_k^D \\cdot \\mathbf{h}_j^R/\\tau)}}_{\\mathcal{L}_{\\text{cont}}} + \\underbrace{\\sum_{(i,j)} \\|\\mathbf{h}^D_i - \\mathbf{h}^R_j\\|_2}_{\\mathcal{L}_{\\text{dist}}} \\quad (2)
In each training batch, (\\mathbf{h}_i^D, \\mathbf{h}_j^R) represents a positive pair and k \\neq i, j. Here, \u03c4 is a learnable temperature parameter, similar to CLIP [14]. 3.4 3D Zero-Shot Transfer The alignment between depth maps and depth-aligned RGB images facilitates 3D zero-shot classification by aggregating multi-view classification logits. Each logit represents the similarity between features of a single-view depth map and text features specific to category candidates. Depth-Specific Text Generation. We generate 80 depth-specific text prompt templates based on 80 ImageNet zero-shot recognition prompts1, integrating keywords such as \u201cdepth map\u201d, \u201cwhite background image\u201d, \u201craytraced image\u201d, and \u201csilhouette of [CLASS]\u201d. These keywords guide OpenDlign to target depth-related features, such as the distance of object surfaces from a viewpoint. To identify these keywords, we use the CLIP-Interrogator tool [51] to analyze depth maps from ShapeNet [28], seeking text prompts that best match their visual features. The 10 most recurring prompts from this analysis are chosen as our essential keywords.
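The zero-shot procedure built on these templates, including the multi-view logit aggregation detailed in the next paragraphs, can be sketched roughly as follows. The open_clip backbone name, the abbreviated template list (the paper uses 80 templates), and the finetuned_encode callable standing in for OpenDlign's tuned image branch are illustrative assumptions, not the released implementation.

```python
# Hedged sketch of OpenDlign-style zero-shot inference (Sec. 3.4). Assumptions:
# an open_clip ViT-B-32 backbone, a shortened template list, and a user-supplied
# `finetuned_encode` standing in for the fine-tuned image branch; depth_views
# are PIL images of the projected depth maps.
import torch
import open_clip

TEMPLATES = [
    "a depth map of a {} 3D model.",
    "a silhouette of a {}.",
    "a raytraced image of a {}, white background.",
]

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")

@torch.no_grad()
def class_text_features(class_names):
    feats = []
    for name in class_names:
        tokens = tokenizer([t.format(name) for t in TEMPLATES])
        emb = model.encode_text(tokens)
        emb = emb / emb.norm(dim=-1, keepdim=True)
        feats.append(emb.mean(dim=0))            # average the template embeddings
    F = torch.stack(feats)
    return F / F.norm(dim=-1, keepdim=True)      # (num_classes, dim)

@torch.no_grad()
def zero_shot_classify(depth_views, class_names, finetuned_encode):
    F = class_text_features(class_names)
    logits = torch.zeros(len(class_names))
    for i, view in enumerate(depth_views):
        image = preprocess(view).unsqueeze(0)
        # First half of the views use the frozen CLIP encoder,
        # the second half the fine-tuned one; per-view logits are summed.
        encode = model.encode_image if i < len(depth_views) // 2 else finetuned_encode
        v = encode(image)
        v = v / v.norm(dim=-1, keepdim=True)
        logits += (v @ F.T).squeeze(0)
    return class_names[logits.argmax().item()]
```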
In zero-shot inference, we employ our depth-specific templates to generate 80 text descriptions for each label l. These descriptions {ti}80 i=1 are encoded by a texture encoder F(\u00b7), normalized, and then merged into a unified text feature Fl via average pooling, calculated as 1 80 P80 i=1 \u2225F(ti)\u2225. Multi-View Logits Aggregation. To calculate classification logits, we first gather visual features from multi-view depth maps {Vi}N i=1, aiming to align with depth-specific text features of M candidate labels F = {Fi}M i=1. The feature extraction utilizes a dual-encoder strategy: the first half of the views \\ifmmod e \\lbrace \\else \\textbraceleft \\fi V_i\\}_{i=1}^{N/2} utilize a pre-trained CLIP image encoder, while the second half of views \\ifmm ode \\lbrace \\else \\textbraceleft \\fi V_i\\}_{i=N/2+1}^{N} employs a fine-tuned encoder. The strategy ensures that OpenDlign maintains its capability to recognize previously identifiable depth maps after learning multimodal alignment via fine-tuning. As shown in Fig. 2(b), the logit for a single depth map view is the product of Vi and F, with the overall classification logit being the sum of logits across all views, calculated as PN i=1 ViFT . 4 Experiments 4.1 Zero-Shot 3D Classification We first evaluated OpenDlign under the zero-shot shape classification task on three benchmark datasets: ModelNet40 [52], ScanObjectNN [53], and OmniObject3D [54]. ModelNet40 offers synthetic 3D CAD models in 40 categories. ScanObjectNN provides real-scanned objects in 15 categories from OBJ_ONLY version. OmniObject3D, the largest, includes 5,911 real-scanned objects in 216 categories, well-suited for fine-grained, real-world classification evaluation. Point cloud sizes are 10,000 points for ModelNet40, 2,048 for ScanObjectNN, and 4,096 for OmniObject3D. OpenDlign was compared against existing methods, including three depth-based methods: PointCLIP [20], PointCLIP V2 [19], and CLIP2Point [21], and three point-based methods: ULIP [23], OpenShape [24], and TAMM [27]. Additionally, we improved the OpenShape and TAMM models by retraining them with depth-aligned and CAD-rendered images from an integrated dataset provided by OpenShape, which combines four distinct collections: Objaverse [29], ShapeNet [24], 3D-Future [55], and ABO [56]. Our aim was to investigate if depth-aligned images consistently enhance the performance of existing 3D open-world methods. Moreover, we evaluated OpenDlign\u2019s scalability by training it with various CLIP variants to adapt to the complexity of pre-trained image-text encoders. Table 1 shows OpenDlign substantially outperforms existing methods trained on ShapeNet on three benchmarks, exceeding the previous best, TAMM-SparseConv trained on ShapeNet, by margins of 8.0% on ModelNet40, 1.6% on ScanObjectNN, and 16.4% on OmniObject3D in top-1 accuracy. OpenDlign also greatly exceeds the leading depth-based method, PointCLIP V2\u2014by 19% on ModelNet40 and 27.4% on OmniObject3D. Significantly, OpenDlign outshines all methods pre-trained on the ensemble dataset in the ScanObject3D benchmark. Moreover, OpenDlign\u2019s performance scales linearly with the complexity of CLIP variants, surpassing most of the baseline models on ModelNet40 and OmniObject3D benchmarks, even when employing the light ViT-B-16 CLIP model. Moreover, the use of depth-aligned images consistently boosts the performance of OpenShape and 1Text Prompts for ImageNet: ImageNet Prompt Engineering. 
6 Table 1: Zero-shot classification results on ModelNet40 [52], ScanObjectNN [53] and OmniObject3D[54]. Best: bolded. Second-best: underlined. Training 3D Open-World CLIP ModelNet40 [52] ScanObjectNN [53] OmniObject3D[54] Source Methods Variant Top1 Top3 Top5 Top1 Top3 Top5 Top1 Top3 Top5 2D inferences PointCLIP [20] ResNet-50 19.3 28.6 34.8 10.5 20.8 30.6 0.3 1.0 1.8 No Training PointCLIP V2 [19] ViT-B-16 63.6 77.9 85.0 42.2 63.3 74.5 3.9 9.6 14.4 CLIP2Point [21] ViT-B-32 49.5 71.3 81.2 25.5 44.6 59.4 1.4 3.7 7.1 ULIP-PointBERT [23] SLIP [57] 60.4 79.0 84.4 51.5 71.1 80.2 8.4 15.2 19.7 OpenShape-PointBERT [24] ViT-bigG-14 70.3 86.9 91.3 51.3 69.4 78.4 13.0 23.3 29.4 OpenShape-SparseConv [24] ViT-bigG-14 72.9 87.2 93.0 52.7 72.7 83.6 13.7 24.2 30.0 TAMM-PointBERT [27] ViT-bigG-14 73.1 88.5 91.9 54.8 74.5 83.3 14.9 26.2 33.4 TAMM-SparseConv [27] ViT-bigG-14 74.6 88.2 94.0 57.9 75.3 83.1 ShapeNet OpenShape-PointBERT(+Dlign) ViT-bigG-14 73.7 87.1 91.3 52.7 72.4 82.6 13.4 23.7 29.9 OpenShape-SparseConv (+Dlign) ViT-bigG-14 74.9 89.5 94.1 56.3 75.2 85.4 15.0 26.1 32.8 TAMM-PointBERT(+Dlign) ViT-bigG-14 73.7 89.1 92.2 57.3 73.6 82.3 15.8 27.4 33.0 OpenDlign-B32 ViT-B-32 68.4 86.4 92.6 46.7 72.0 83.0 17.3 29.2 36.3 OpenDlign-B16 ViT-B-16 74.2 90.5 95.4 49.3 74.0 84.4 23.2 37.5 44.3 OpenDlign-L ViT-L-14 77.8 93.1 96.4 52.1 74.6 82.8 27.5 41.3 47.8 OpenDlign-H ViT-H-14 82.6 96.2 98.4 59.5 76.8 83.7 31.3 46.7 53.2 OpenShape-SparseConv [24] ViT-bigG-14 83.4 95.6 97.8 56.7 78.9 88.6 33.7 49.3 57.4 OpenShape-PointBERT [24] ViT-bigG-14 84.4 96.5 98.0 52.2 79.7 88.7 34.0 49.7 57.9 TAMM-PointBERT [27] ViT-bigG-14 85.0 96.6 98.1 55.7 80.7 88.9 37.1 53.5 61.8 Ensemble TAMM-SparseConv [27] ViT-bigG-14 85.4 96.4 98.1 58.5 81.3 89.5 OpenShape-SparseConv (+Dlign ) ViT-bigG-14 85.0 96.1 97.9 56.2 78.5 87.8 34.1 50.5 58.5 OpenShape-PointBERT (+Dlign) ViT-bigG-14 85.4 96.5 98.2 51.1 77.4 88.2 35.6 50.4 57.9 TAMM-PointBERT(+Dlign) ViT-bigG-14 86.2 96.6 97.5 60.5 82.5 90.4 37.5 54.9 62.1 TAMM variants pre-trained on the ShapeNet dataset across all benchmarks. It also improves the performance of variants pre-trained on the ensemble dataset in at least two benchmarks, despite depth-aligned images being available only for the 3D data from ShapeNet, which represents no more than 10% of the ensemble dataset. Significantly, TAMM-PointBERT (+Dlign) achieves a 4.8% top-1 accuracy improvement on the ScanObjectNN dataset, and OpenShape-PointBERT (+Dlign) gains a 1.6% increase on the most challenging OmniObject3D benchmark. These results validate that using depth-aligned images is a universally effective strategy to enhance any 3D open-world pipeline. 4.2 Few-Shot 3D Classification We then assessed OpenDlign\u2019s few-shot classification capability by training a logistic regressor with linear probing on features from N-shot, 10-view depth maps. Similar to the zero-shot scenario, we extracted multi-view features using both fine-tuned and pre-trained OpenDlign encoders (see Fig. 2). At inference, the regressor aggregates logits from 10 views to predict the final label. We compared OpenDlign\u2019s few-shot performance with variants of ULIP [23], OpenShape [24], and TAMM [27], which extract features for training regressor from point clouds using their pre-trained point encoders. Table 2 shows OpenDlign outperforms all baselines across varied few-shot scenarios with 1 to 16 training samples per class. 
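A compact sketch of the few-shot protocol used here (linear probing on multi-view depth-map features, as in Fig. 2(c)) is given below; view_feature is a hypothetical helper that returns one embedding per depth-map view (half from the frozen encoder, half from the fine-tuned one) and is not part of any library.

```python
# Hedged sketch of the few-shot linear-probing setup described above.
# `view_feature(view)` is a hypothetical helper returning one embedding per
# depth-map view, mirroring the dual-encoder split of Fig. 2(c).
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_few_shot_probe(support_views, support_labels, view_feature):
    X, y = [], []
    for views, label in zip(support_views, support_labels):  # 10 views per shape
        for v in views:
            X.append(view_feature(v))
            y.append(label)
    probe = LogisticRegression(max_iter=1000)
    probe.fit(np.stack(X), np.array(y))
    return probe

def predict_few_shot(probe, views, view_feature):
    feats = np.stack([view_feature(v) for v in views])
    probs = probe.predict_proba(feats)                 # one probability row per view
    return probe.classes_[probs.sum(axis=0).argmax()]  # aggregate the 10 views
```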
OpenDlign significantly outperforms the leading baseline on the OmniObject3D dataset, exceeding it by 8.8% and 11.8% in 4-shot and 8-shot classification, respectively. This underscores the robustness and transferability of its 3D representations. Table 2: Few-shot classification results on ModelNet40 [52], ScanObjectNN [53] and OmniObject3D [54]. Our results are averaged over 10 random seeds. ModelNet40 [52] ScanObjectNN [53] OmniObject3D [54] Model 1-Shot 2-Shot 4-Shot 8-Shot 16-Shot 1-Shot 2-Shot 4-Shot 8-Shot 16-Shot 1-Shot 2-Shot 4-Shot 8-Shot 16-Shot ULIP-PointBERT [23] 54.4 64.3 74.1 79.3 81.3 46.7 55.1 62.5 70.7 73.9 37.5 41.2 44.1 49.7 53.4 OpenShape-PointBERT [24] 57.5 70.1 76.5 80.4 82.1 47.9 55.6 62.7 67.0 72.0 34.5 34.1 37.8 41.9 45.6 OpenShape-SparseConv [24] 62.8 72.0 78.9 82.9 85.7 47.3 56.3 64.5 68.2 74.0 36.0 37.0 41.5 44.7 48.6 TAMM-PointBERT [27] 62.4 73.3 81.7 83.8 85.9 48.2 57.1 63.6 72.1 76.5 38.9 41.6 46.3 50.1 54.2 OpenDlign (ours) 65.6 73.9 82.9 85.5 87.6 48.9 58.5 67.9 74.2 79.0 42.1 46.9 55.1 61.9 65.8 4.3 Zero-Shot 3D Object Detection We evaluated OpenDlign\u2019s capabilities in Zero-Shot 3D Object Detection using the ScanNet V2 dataset [58], which contains richly annotated 3D indoor scenes in 18 object categories. Following the PointCLIP V2 methodology [19], we began with the pre-trained 3DETR-m model to pinpoint 3D regions of interest, successfully delineating 3D bounding boxes and extracting the points inside each box. Finally, we applied OpenDlign to these points to generate our predictions. Table 3 illustrates OpenDlign\u2019s zero-shot detection prowess using mean Average Precision (mAP) at IoU thresholds 7 Table 3: Zero-shot 3D object detection results on ScanNet V2 [58]. Method Mean Cabinet Bed Chair Sofa Table Door Window Counter Desk Sink Bathtub PointCLIP [20] 6.00 3.99 4.82 45.16 4.82 7.36 4.62 2.19 1.02 4.00 13.40 6.46 AP25 PointCLIP V2 [19] 18.97 19.32 20.98 61.89 15.55 23.78 13.22 17.42 12.43 21.43 14.54 16.77 OpenDlign (ours) 50.72 38.91 67.27 86.33 72.01 58.72 44.58 32.07 50.49 62.04 51.98 64.29 PointCLIP [20] 4.76 1.67 4.33 39.53 3.65 5.97 2.61 0.52 0.42 2.45 5.27 1.31 AP50 PointCLIP V2 [19] 11.53 10.43 13.54 41.23 6.60 15.21 6.23 11.35 6.23 10.84 11.43 10.14 OpenDlign (ours) 37.97 17.04 66.68 73.92 54.96 50.03 24.73 12.84 20.44 41.64 34.17 64.29 of 0.25 and 0.5, achieving scores of 50.72% and 37.97%, respectively. It significantly outperforms PointCLIP V2 by more than 31.75% and 26.44%. Remarkably, OpenDlign can detect the \u2019Sofa\u2019 shape with an AP50 of 54.96%, whereas PointCLIP and V2 score below 10, demonstrating OpenDlign\u2019s superior capability in extracting robust 3D representations from sparse and noisy point clouds in real-world indoor scenes. 4.4 Cross-Modal Retrieval 3D shapes were retrieved by computing the cosine similarity between the embeddings of a query and those generated by OpenDlign, followed by a k-nearest neighbors (kNN) analysis to find the most similar shapes. Fig. 3 illustrates OpenDlign\u2019s capability in matching 3D shapes to image and text queries. Column (a) illustrates its precision in distinguishing sub-categories like grand versus upright pianos from image queries. Column (b) demonstrates successful shape retrieval using distinct text descriptions, such as \"Batmobile armored\". Notably, averaging image and text query embeddings allows OpenDlign to find shapes that combine elements of both inputs. 
For example, merging a running horse image with the text \"man\" results in the retrieval of both a centaur and a running man, as shown in Fig. 3 (c). A house image combined with \"tree\" retrieves a treehouse. (b) Text Query \u201cBatmobile armored.\u201d \u201cTelephone box.\u201d \u201cDouble-decker bus.\u201d \u201cSpaceship.\u201d \u201cAir Jordan.\u201d (a) Image Query (c) Image + Text Queries \u201cMan.\u201d + \u201cTree.\u201d + Figure 3: 3D shape retrieval results. (a) Two most similar shapes for each query image. (b) Most similar shapes for each query text. (c) Two most similar shapes for combined image and text queries. 4.5 Ablation Study Ablation studies were conducted on zero-shot classification benchmarks to assess the contribution of each component in OpenDlign. Consistently, all OpenDlign variants used in these studies employed OpenCLIP-ViT-H-14 as their backbone. ShapeNet was the default training dataset for all models. Contour-Aware Projection. Replacing PointCLIP V2\u2019s projection pipeline [19] with our contouraware version, as shown in Table 4, enables a pre-trained CLIP to reach 68.8% zero-shot accuracy on ModelNet40, even outperforming several baselines that need extra training. This suggests that through large-scale contrastive learning, CLIP can understand RGB images as well as depth maps, as long as key shape features are maintained during projection. Multimodal Alignment. Table 4 shows that alignment between depth maps and depth-aligned images (depth-daRGB) substantially boosts performance. It improves top-1 accuracy by over 10% across datasets, indicating that depth-daRGB alignment effectively generalizes CLIP to depth maps, with consistent gains in zero-shot inference, regardless of depth-specific text prompts. Further analysis compared depth-daRGB alignment against three alternatives: depth-rendRGB (aligning depth maps with CAD-rendered RGB images), daRGB-text & depth (aligning depth-aligned images with text before depth-daRGB alignment), and depth-text & daRGB (simultaneous alignment 8 Table 4: Ablation study for OpenDlign on ModelNet40 [52] and ScanObjectNN [53]. Acc. improvements over the baseline (first-row) are highlighted in green. Contour-Aware Multimodal Depth-Specific Logits ModelNet40 [52] ScanObjectNN [53] Projection Alignment Texts Aggregation Top 1 Top 3 Top 5 Top 1 Top 3 Top 5 \u2717 \u2717 \u2717 \u2717 59.7 79.6 86.3 42.8 66.7 78.4 \u2713 \u2717 \u2717 \u2717 68.8 (+9.1) 85.8 (+6.2) 91.6 (+5.3) 44.6 (+1.8) 68.3 (+1.6) 78.9 (+0.5) \u2713 \u2713 \u2717 \u2717 79.2 (+19.5) 94.4 (+14.8) 97.6 (+11.3) 56.9 (+14.1) 75.5 (+8.8) 83.8 (+5.4) \u2713 \u2717 \u2713 \u2717 75.9 (+16.2) 91.0 (+11.4) 95.4 (+9.1) 49.3 (+6.5) 69.8 (+3.1) 79.2 (+0.8) \u2713 \u2713 \u2713 \u2717 80.2 (+20.5) 95.3 (+15.7) 97.7 (+11.4) 58.1 (+15.3) 75.2 (+8.5) 84.2 (+5.8) \u2713 \u2713 \u2717 \u2713 81.0 (+21.3) 95.2 (+15.6) 97.6 (+11.3) 56.8 (+14.0) 74.6 (+7.9) 81.6 (+3.2) \u2713 \u2713 \u2713 \u2713 82.6 (+22.9) 96.2 (+16.6) 98.4 (+12.1) 59.5 (+16.7) 76.8 (+10.1) 83.7 (+5.3) of depth maps with text and depth-aligned images). Table 5 shows depth-daRGB outperforming depthrendRGB by 6.8% on the ScanObjectNN dataset, confirming concerns that alignment with rendered images may lead to overfitting on specific 3D shapes. Moreover, daRGB-text & depth performs worst, suggesting that pre-aligning depth-aligned images with text compromises CLIP\u2019s ability to generate robust image representations, thus affecting subsequent depth-daRGB alignment efficacy. 
Depth-daRGB\u2019s superior performance on ModelNet40 and OmniObject3D compared to depth-text & daRGB shows that aligning depth maps with depth-aligned images indirectly aligns with text, making additional text alignment unnecessary and potentially limiting OpenDlign\u2019s generalization. Depth-Specific Texts. Table 4 indicates that OpenDlign outperforms others in zero-shot classification tasks using depth-specific prompts, whether it incorporates multimodal alignment or logit aggregation. This implies that the inaccuracies in recognition partly result from processing input data as typical RGB images, rather than as depth maps. Logits Aggregation. Results in Table 4 show that multi-view logit aggregation improves zeroshot classification on all datasets by combining logits from pre-trained and fine-tuned encoders. This approach effectively mitigates the catastrophic forgetting problem in OpenDlign\u2019s multimodal alignment, enabling it to recognize 3D objects identifiable by both pre-trained CLIP and OpenDlign. Varying Number of Depth Views. OpenDlign, like other depth-based methods, necessitates extracting multiple embeddings from multi-view depth maps for zero-shot inference. Figure 4 illustrates that OpenDlign\u2019s zero-shot accuracy on both ModelNet40 and OmniObject3D increases as the number of depth map views rises. Notably, OpenDlign achieves top benchmark performance, comparable to TAMM-PointBERT, with no more than two views, indicating a good balance between latency in embedding extraction and effective zero-shot classification. Furthermore, we observed a slower performance improvement on OmniObject3D, reflecting its finer-grained classification requirements. Table 5: Ablation study on various alignment strategies. Aligning with text modality was achieved by fine-tuning the image encoder. Alignment MNet40 ScanNN Omni3D Strategy Top 1 Top 5 Top 1 Top 5 Top 1 Top 5 depth-rendRGB 78.8 96.8 52.7 82.5 29.4 51.8 daRGB-text & depth 78.6 96.4 51.1 79.6 29.1 51.6 depth-text & daRGB 79.4 98.0 60.7 86.0 29.5 52.7 depth-daRGB (ours) 82.6 98.4 59.5 83.7 31.3 53.2 Figure 4: Impact of the number of views on OpenDlign\u2019s zero-shot performance. 5 Conclusion and Future Work In this study, we introduce OpenDlign, an open-world framework that enhances 3D representation by efficiently fine-tuning the CLIP with depth-aligned images, which exhibit more diverse textures and colors than CAD-rendered images. Our experiments demonstrate OpenDlign\u2019s superior performance in various 3D zero-shot and few-shot tasks, especially with real-scanned objects. However, generating depth-aligned images with the ControlNet model is slower than direct CAD rendering, which extends training dataset preparation time. Moreover, depth-aligned images can be created from both CAD objects and real 3D scenes, likely highlighting a greater texture diversity gap between depth-aligned and CAD-rendered scenes and further highlighting OpenDlign\u2019s 3D scene understanding capabilities. 
9" +} \ No newline at end of file diff --git a/intro_8K/test_introduction_short_2404.16540v1.json b/intro_8K/test_introduction_short_2404.16540v1.json new file mode 100644 index 0000000000000000000000000000000000000000..82d3cda96904e536ac0ac3efc31199f34228ce72 --- /dev/null +++ b/intro_8K/test_introduction_short_2404.16540v1.json @@ -0,0 +1,17 @@ +{ + "url": "http://arxiv.org/abs/2404.16540v1", + "title": "Approximation Algorithm of Minimum All-Ones Problem for Arbitrary Graphs", + "abstract": "Let $G=(V, E)$ be a graph and let each vertex of $G$ has a lamp and a button.\nEach button can be of $\\sigma^+$-type or $\\sigma$-type.\n Assume that initially some lamps are on and others are off. The button on\nvertex $x$ is of $\\sigma^+$-type ($\\sigma$-type, respectively) if pressing the\nbutton changes the lamp states on $x$ and on its neighbors in $G$ (the lamp\nstates on the neighbors of $x$ only, respectively). Assume that there is a set\n$X\\subseteq V$ such that pressing buttons on vertices of $X$ lights all lamps\non vertices of $G$. In particular, it is known to hold when initially all lamps\nare off and all buttons are of $\\sigma^+$-type.\n Finding such a set $X$ of the smallest size is NP-hard even if initially all\nlamps are off and all buttons are of $\\sigma^+$-type. Using a linear algebraic\napproach we design a polynomial-time approximation algorithm for the problem\nsuch that for the set $X$ constructed by the algorithm, we have $|X|\\le\n\\min\\{r,(|V|+{\\rm opt})/2\\},$ where $r$ is the rank of a (modified) adjacent\nmatrix of $G$ and ${\\rm opt}$ is the size of an optimal solution to the\nproblem.\n To the best of our knowledge, this is the first polynomial-time approximation\nalgorithm for the problem with a nontrivial approximation guarantee.", + "authors": "Chen Wang, Chao Wang, Gregory Z. Gutin, Xiaoyan Zhang", + "published": "2024-04-25", + "updated": "2024-04-25", + "primary_cat": "cs.DS", + "cats": [ + "cs.DS", + "cs.DM" + ], + "label": "Original Paper", + "paper_cat": "Knowledge AND Graph", + "gt": "The all-ones problem is a fundamental problem in applied mathematics, first proposed by Sutner in 1988 [17]. This problem has applications in linear cellular automata, as discussed in [18] and the references therein. To illustrate the problem, consider an n \u00d7 n grid with each area having a light lamp and a switch, and every lamp is initially off. Turning the switch on in some area lights the lamp in the area and the lamps in neighboring areas. Is there a set X of areas such that turning the switches on in X will turn on all the lamps? This problem can be extended to all graphs and we will call it the all-ones problem. Sutner [18] proved that a solution X exists for every graph. Later, several simple proofs of this result were given or rediscovered [3, 5, 7, 10, 13]. Many variants of the all-ones problem have been introduced and studied [1, 2, 6, 7, 11, 12, 19] over years. There are two important generalizations of the all-ones problem: (i) the initial state of lamps and switches can be arbitrary, \u2217Corresponding author. Email addresses: 2120220677@mail.nankai.edu.cn (Chen Wang), wangchao@nankai.edu.cn (Chao Wang), gutin@cs.rhul.ac.uk (Gregory Z. Gutin), xiaoyanice@aliyun.com (Xiaoyan Zhang) 1 arXiv:2404.16540v1 [cs.DS] 25 Apr 2024 Chen Wang et al. 
/ Theoretical computer science 00 (2024) 1\u20138 2 i.e., some are on and the others are off, and (ii) every switch can be either of \u03c3+-type which changes the states of the lamp on its vertex and the lamps on the neighbors of its vertex or \u03c3-type which changes the states of the lamps on the neighbors of its vertex only. As a result of these two generalizations, the generalized all-ones problem may not have a solution X which lights all lamps. This generalized problem is studied in this paper. Under the condition that such a solution X exists for the generalized all-ones problem, it is natural to ask for X of minimum size. Unfortunately, this minimization problem is NP-hard even for all-ones problem [16]; we will call the minimization all-ones problem the min all-ones problem. Galvin and Lu both proved that the min all-ones problem of trees can be solved in linear time [9, 14]. Building on this, Chen proposed an algorithm for solving the min generalized all- ones problem on trees, with linear complexity [4]. Manuel et al. provided solutions for some of the widely studied architectures, such as binomial trees, butterfly, and benes networks [15]. Fleischer and Yu provided a detailed survey of the generalized all-ones problem [8]. More recently, Zhang extended the all-ones problem to the all-colors problem, in which each lamp had other states besides being on and off, and obtained additional findings on the all-colors problem [20]. Although significant research has been conducted on the all-ones problem on special graphs, such as trees, re- sulting in efficient algorithms, no polynomial-time approximation algorithms have been designed for the min all-ones problem on general graphs. Trees and cyclic graphs only represent a fraction of general graphs. In practical engi- neering scenarios, complex graphs are more common. In this paper, we design a polynomial-time approximation algorithm for the min generalized all-ones problem. If the problem has a solution, our algorithm outputs a solution X such that |X| \u2264min{r, (|V| + opt)/2}, where the rank of a (modified) adjacent matrix of G and opt is the size of an optimal solution to the problem. Apart from the introduction, this paper contains three sections. In Section 2, we introduce our approximation algorithm in detail. Section 3 shows the theoretical analysis and performance evaluation of this algorithm. Section 4 summarizes all the work of this paper and discusses future work.", + "main_content": "2.1. Linear algebraic formulation of min generalized all-ones problem It is not hard to see that the min generalized all-ones problem can be described as the following linear integer program over F2. For an arbitrary graph G = (V, E) with V = {v1, . . . , vn} we can get its modified adjacency matrix A = (aij)n\u00d7n such that for all i \ufffdj, aij = 1 if vivj \u2208E and ai j = 0 otherwise, and for all i \u2208{1, 2, . . . , n}, aii = 1 (aii = 0, respectively) if the switch on vi is of \u03c3+-type (of \u03c3-type, respectively). Combined with the initial state B = (b1, b2, \u00b7 \u00b7 \u00b7 , bn), where bi = 0 if the lamp on vertex vi is initially on and bi = 1 if the lamp is initially off, we can construct a system of linear equations AU = B over F2. The solution to this problem is the minimum of \ufffdU = \ufffdn i=1 ui. Suppose the rank of A is r and the corank is m so that m + r = n. 
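As an illustration of this formulation, the modified adjacency matrix A and right-hand side B can be assembled as sketched below; the edge-list input format and the toy example at the end are assumptions made for the sketch, not part of the paper.

```python
# Illustrative sketch of the formulation in Sec. 2.1: build the modified
# adjacency matrix A and right-hand side B over F2 = {0, 1}. The edge list,
# button types, and initial lamp states below are toy inputs.
import numpy as np

def build_system(n, edges, sigma_plus, lamps_on):
    """A[i][i] = 1 iff vertex i has a sigma+ button; A[i][j] = 1 iff ij is an edge.
    B[i] = 1 iff the lamp on vertex i is initially off (it still needs toggling)."""
    A = np.zeros((n, n), dtype=np.uint8)
    for i, j in edges:
        A[i, j] = A[j, i] = 1
    for i in range(n):
        A[i, i] = 1 if sigma_plus[i] else 0
    B = np.array([0 if on else 1 for on in lamps_on], dtype=np.uint8)
    return A, B

# Example: a path on 3 vertices, all sigma+ buttons, all lamps initially off.
A, B = build_system(3, edges=[(0, 1), (1, 2)], sigma_plus=[True] * 3, lamps_on=[False] * 3)
```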
If aii = 1 for all i \u2208 {1, 2, \u00b7 \u00b7 \u00b7 , n}, the system of equations AU = B must have a solution, but if some aii = 0, the system may not necessarily have a solution. However, as long as the system has at least one solution \u03b3 = (\u03b3_1, \u03b3_2, \u00b7 \u00b7 \u00b7 , \u03b3_n)^T, we can find all solutions of the system by combining \u03b3 with the fundamental solution set \u03b7 = (\u03b7_1, \u03b7_2, \u00b7 \u00b7 \u00b7 , \u03b7_m), as in the following system, computable within time O(n^3). Here x_i is the coefficient of the column vector \u03b7_i = (\u03b7_{1i}, . . . , \u03b7_{ni})^T:
\\eta X + \\gamma = \\begin{pmatrix} \\eta_{11} & \\eta_{12} & \\cdots & \\eta_{1m} \\\\ \\eta_{21} & \\eta_{22} & \\cdots & \\eta_{2m} \\\\ \\vdots & \\vdots & \\ddots & \\vdots \\\\ \\eta_{n1} & \\eta_{n2} & \\cdots & \\eta_{nm} \\end{pmatrix} \\begin{pmatrix} x_1 \\\\ x_2 \\\\ \\vdots \\\\ x_m \\end{pmatrix} + \\begin{pmatrix} \\gamma_1 \\\\ \\gamma_2 \\\\ \\vdots \\\\ \\gamma_n \\end{pmatrix} = U \\quad (1)
The problem is how to find the appropriate column vector X that minimizes \u03a3U, under the condition that X can take 2^m possible values. This problem was proven to be NP-complete [16]. Therefore, the next subsection provides an approximation algorithm running in polynomial time. 2.2. Approximation algorithm Firstly, it can be observed that the polynomial time complexity (not exceeding O(n^3)) of finding the matrix (\u03b7_1, \u03b7_2, \u00b7 \u00b7 \u00b7 , \u03b7_m) and the special solution \u03b3 makes this step inexpensive relative to the NP-complete problem being solved. Secondly, it is challenging to identify alternative methods capable of directly computing the optimal solution without obtaining all the solutions; even if such a solution were obtained, verifying it is often infeasible. When \u03b7 and \u03b3 are known, we need to find the X that minimizes \u03a3U.
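The argument above takes the particular solution \u03b3 and the fundamental solution set \u03b7 as given, computable in O(n^3) time; one straightforward way to obtain them over F2 is plain Gauss-Jordan elimination, sketched below (an illustration, not the authors' implementation). Applied to A, B built as in the earlier sketch, it returns (gamma, eta) or (None, None) if the system is inconsistent.

```python
# Hedged sketch: Gauss-Jordan elimination over F2 returning a particular
# solution gamma and a nullspace basis eta (one column per free variable).
import numpy as np

def solve_gf2(A, B):
    A = A.copy() % 2
    b = B.copy() % 2
    n, m = A.shape
    pivot_cols, row = [], 0
    for col in range(m):
        piv = next((r for r in range(row, n) if A[r, col]), None)
        if piv is None:
            continue                                   # free column
        A[[row, piv]] = A[[piv, row]]                  # swap rows
        b[[row, piv]] = b[[piv, row]]
        for r in range(n):                             # eliminate above and below
            if r != row and A[r, col]:
                A[r] ^= A[row]
                b[r] ^= b[row]
        pivot_cols.append(col)
        row += 1
    if b[row:].any():                                  # a 0 = 1 row: no solution
        return None, None
    gamma = np.zeros(m, dtype=np.uint8)
    for r, col in enumerate(pivot_cols):
        gamma[col] = b[r]                              # free variables set to 0
    eta = []
    for free in (c for c in range(m) if c not in pivot_cols):
        v = np.zeros(m, dtype=np.uint8)
        v[free] = 1
        for r, col in enumerate(pivot_cols):
            v[col] = A[r, free]                        # back-substitution over F2
        eta.append(v)
    return gamma, np.array(eta).T
```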
\\eta X + \\gamma = \\begin{pmatrix} \\eta_{11} & \\eta_{12} & \\cdots & \\eta_{1m} \\\\ \\eta_{21} & \\eta_{22} & \\cdots & \\eta_{2m} \\\\ \\vdots & \\vdots & \\ddots & \\vdots \\\\ \\eta_{n1} & \\eta_{n2} & \\cdots & \\eta_{nm} \\end{pmatrix} \\begin{pmatrix} x_1 \\\\ x_2 \\\\ \\vdots \\\\ x_m \\end{pmatrix} + \\begin{pmatrix} \\gamma_1 \\\\ \\gamma_2 \\\\ \\vdots \\\\ \\gamma_n \\end{pmatrix} = \\begin{pmatrix} \\delta_1 \\\\ \\delta_2 \\\\ \\vdots \\\\ \\delta_n \\end{pmatrix} + \\begin{pmatrix} \\gamma_1 \\\\ \\gamma_2 \\\\ \\vdots \\\\ \\gamma_n \\end{pmatrix} = \\begin{pmatrix} u_1 \\\\ u_2 \\\\ \\vdots \\\\ u_n \\end{pmatrix} \\quad (2)
Proposition 2.1. Row exchanges of matrix \u03b7 do not change \u03a3U. Proof. Multiply both sides of Equation 2 by a matrix P, as shown in Equation 3, where P is a product of elementary matrices that perform row exchanges. This operation merely reorders the elements of vector U and therefore does not change \u03a3U.
P(\\eta X + \\gamma) = P(\\delta + \\gamma) = PU \\quad (3)
Proposition 2.2. Column transformations of matrix \u03b7 do not change \u03a3U. Proof. Let Q_{m \\times m} be a full-rank matrix with QZ = X. Then
\\eta X + \\gamma = \\eta Q Z + \\gamma = (\\eta Q)Z + \\gamma = \\epsilon Z + \\gamma = \\delta + \\gamma = U \\quad (4)
Q is the full-rank transition matrix between X and Z. Hence, whenever some X minimizes \u03a3U, we can find the corresponding Z = Q^{-1}X that yields exactly the same U. We can transform \u03b7 into column echelon form using row exchanges and column transformations, as shown in the following equation, with a complexity of O(m^2 n). The entries below the pivots, written \u03f5_{ij} in Equation 5, are uncertain and may be 0 or 1. We can divide the matrix into m + 1 parts based on the echelon structure and assume the last line of the i-th part is line k_i (i = 0, 1, \u00b7 \u00b7 \u00b7 , m), which is well defined because the rank of matrix \u03b7 is always m.
Part 0 is the most special: all of its rows are entirely zero. To ensure that Equation 4 holds, we must have (u1, u2, \u00b7 \u00b7 \u00b7 , uk0) = (\u03b31, \u03b32, \u00b7 \u00b7 \u00b7 , \u03b3k0). \u03b7Q = \u03f5, where the first k0 rows of \u03f5 are all zero, the rows of part 1 (rows k0+1 to k1) have the form (1, 0, 0, \u00b7 \u00b7 \u00b7 , 0), the rows of part 2 (rows k1+1 to k2) have the form (\u03f5j1, 1, 0, \u00b7 \u00b7 \u00b7 , 0), and in general every row j of part i has a 1 in column i, entries \u03f5j1, \u00b7 \u00b7 \u00b7 , \u03f5j(i\u22121) in the first i \u2212 1 columns, and 0 in all later columns. (5) In the following m parts, we use a greedy algorithm to solve for the value of Z on the echelon of each part. Part 1 of the linear system in Equation 5 is shown in Equation 6: (\u03b3k0+1, \u03b3k0+2, \u00b7 \u00b7 \u00b7 , \u03b3k1) is known and (\u03b4k0+1, \u03b4k0+2, \u00b7 \u00b7 \u00b7 , \u03b4k1) is unknown, and it is important to make each \u03b4i as similar to \u03b3i as possible. At this point z1 has only two possible values, 0 and 1, so a greedy choice is adopted: if there are more 0\u2019s than 1\u2019s in the range from \u03b3k0+1 to \u03b3k1, then z1 is set to 0; if there are more 1\u2019s than 0\u2019s, then z1 is set to 1. Therefore, the value of z1 is obtained directly, while ensuring that \u03a3_{i=k0+1}^{k1} ui \u2264 (k1 \u2212 k0)/2. z1 = \u03b4k0+1, z1 = \u03b4k0+2, \u00b7 \u00b7 \u00b7 , z1 = \u03b4k1, where each \u03b4i is compared to the corresponding \u03b3i. (6) The value of z2 can then be calculated from z1. Part 2 of Equation 5 can be written as shown in Equation 7. (\u03b3k1+1, \u03b3k1+2, \u00b7 \u00b7 \u00b7 , \u03b3k2) is known, and (\u03b4k1+1, \u03b4k1+2, \u00b7 \u00b7 \u00b7 , \u03b4k2) needs to satisfy Equation 5 while being as similar to (\u03b3k1+1, \u03b3k1+2, \u00b7 \u00b7 \u00b7 , \u03b3k2) as possible. The variables in Equation 7 are z1 and z2, and z1 has already been fixed by the greedy step, so the only unknown is z2. Since the terms \u03f5i1z1 are constants, we can move them from the left side of the equation to the right side, and the two equation systems are obviously equivalent.
Then, we need to ensure that \u03f5(k1+i)1z1 + \u03b4k1+i is as similar to \u03b3k1+i as possible. Another transformation can be carried out, which is equivalent to making \u03b4k1+i as similar to \u03f5(k1+i)1z1 + \u03b3k1+i as possible. In this way, the variables are separated: the left side of the equation is the variable z2, the right side is the variable \u03b4k1+i (with \u03b4k1+i = z2), and the terms \u03f5(k1+i)1z1 + \u03b3k1+i are constants. At this point, part 2 of Equation 5 has been transformed into the same shape as part 1. Therefore, if there are more 0\u2019s than 1\u2019s in the range from \u03f5(k1+1)1z1 + \u03b3k1+1 to \u03f5(k2)1z1 + \u03b3k2, then z2 is set to 0; if there are more 1\u2019s than 0\u2019s, then z2 is set to 1. Thus the value of z2 is obtained, and \u03a3_{i=k1+1}^{k2} ui \u2264 (k2 \u2212 k1)/2 is ensured. The three equivalent systems are: \u03f5(k1+i)1z1 + z2 = \u03b4k1+i compared to \u03b3k1+i for i = 1, \u00b7 \u00b7 \u00b7 , k2 \u2212 k1; equivalently z2 = \u03b4k1+i + \u03f5(k1+i)1z1 compared to \u03b3k1+i; and equivalently z2 = \u03b4k1+i compared to \u03b3k1+i + \u03f5(k1+i)1z1. (7) After obtaining the value of z2, the value of \u03f5i1z1 + \u03f5i2z2 can be calculated, and z3 can be determined in the same way. Following this pattern, the values of Z = (z1, z2, \u00b7 \u00b7 \u00b7 , zm) are obtained; then \u03f5Z + \u03b3 = U gives U. The complete algorithm is shown in Algorithm 1. Algorithm 1: Approximation Algorithm of the Minimum All-Ones Problem. Data: an adjacency matrix An\u2217n and an initial state B1\u2217n. Result: answer U. 1 (\u03b7, \u03b3, m) = solveEquations(A, B); 2 if m == 0 and \u03b3 is null then 3 return null; 4 end 5 if m == 0 and \u03b3 is not null then 6 return \u03b3; 7 end 8 (P, \u03f5, Q) = matrixEchelon(\u03b7, \u03b3); 9 K = calculatePart(\u03f5); 10 for i from 1 to m do 11 cnt = 0; 12 for j from K[i \u2212 1] + 1 to K[i] do 13 tmp = 0; for p from 1 to i \u2212 1 do 14 tmp = tmp \u2295 (\u03f5[j][p] \u2217 X[p]); 15 end 16 cnt = cnt + (tmp \u2295 \u03b3[j]); 17 end 18 if cnt \u2264 (K[i] \u2212 K[i \u2212 1])/2 then 19 X[i] = 0; 20 end 21 else 22 X[i] = 1; 23 end 24 end 25 U = P \u2217 (\u03f5 \u2217 X + \u03b3); 26 return U;
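As a companion to the pseudocode, the following sketch spells out the greedy loop (steps 10\u201324 of Algorithm 1) in Python. It is illustrative only and uses 0-based indices; it assumes \u03f5 is already in the column echelon form of Equation 5 (every row of part i has a 1 in column i and 0 in later columns), that \u03b3 has been permuted consistently, and that part boundaries are supplied so that part i consists of rows K[i-1] to K[i]-1.

```python
def greedy_select(eps, gamma, K):
    """Greedy choice of Z, mirroring steps 10-24 of Algorithm 1 (0-based).

    eps   -- n x m 0/1 matrix assumed to be in the column echelon form of Eq. (5)
    gamma -- length-n 0/1 particular solution, permuted consistently with eps
    K     -- part boundaries: part i consists of rows K[i-1] .. K[i]-1
    """
    m = len(K) - 1
    z = [0] * m
    for i in range(1, m + 1):
        cnt = 0
        for j in range(K[i - 1], K[i]):
            delta = 0
            for p in range(i - 1):          # contribution of the already fixed z_1 .. z_{i-1}
                delta ^= eps[j][p] & z[p]
            cnt += delta ^ gamma[j]         # value of u_j if z_i were set to 0
        # majority vote: keep at most half of the u_j in this part equal to 1
        z[i - 1] = 0 if cnt <= (K[i] - K[i - 1]) / 2 else 1
    # recover U = eps * Z + gamma over GF(2)
    u = [(sum(e & w for e, w in zip(row, z)) + g) % 2 for row, g in zip(eps, gamma)]
    return z, u
```

Because setting zi flips every uj in part i simultaneously, the majority rule leaves at most half of each part equal to 1, which is the property used in Propositions 3.2 and 3.3 below.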
3. Algorithm performance evaluation In this section, we present the complexity of Algorithm 1 and analyze its approximation guarantees. Proposition 3.1. Algorithm 1 has a complexity of O(n^3); if the fundamental solution set \u03b7 of the equation AU = B has already been obtained and is in column echelon form, the complexity reduces to O(mn). Proof. In Algorithm 1, step 1 solves a system of linear equations, which has a complexity of O(n^3). Step 8 transforms the matrix \u03b7 into column echelon form, which has a complexity of O(m^2 n), where m \u2264 n. Steps 2 to 7 take O(1). Step 9 calculates the location of the pivots in the column echelon matrix \u03f5, which has a complexity of O(mn). Steps 10 to 24 form a nested loop with three layers; however, each element of the matrix \u03f5 is accessed only once, so their complexity is O(mn). Proposition 3.2. If a given instance I of the min generalized all-ones problem has a solution, the value sol of the solution obtained by Algorithm 1 satisfies sol \u2264 r, where r is the rank of the matrix A. Proof. In Equations 6 and 7, if \u03b4i = \u03b3i, then the resulting ui is 0. In each part, the greedy choice makes at least half of the ui equal to 0, so each part contains at least one ui with value 0. Furthermore, the rank of \u03b7 is m = n \u2212 r because \u03b7 is the fundamental solution set of the system AU = B. Therefore, at least m values of ui are 0, so \u03a3U \u2264 n \u2212 m = r. Proposition 3.3. If a given instance I of the min generalized all-ones problem has a solution, the value sol of the solution obtained by Algorithm 1 satisfies sol \u2264 (n + opt)/2, where opt is the value of an optimal solution of I. Proof. In Subsection 2.2, we partitioned the matrix \u03b7 into m + 1 parts and proved that, for parts i = 1 to m, \u03a3_{j=k(i\u22121)+1}^{ki} uj \u2264 (ki \u2212 k(i\u22121))/2. Only the 0th part remains to be discussed. The 0th part is special in that it contains no variables and is determined entirely by \u03b3. Let the number of 0\u2019s in \u03b3 in the 0th part be g0 and the number of 1\u2019s be g1. g0 counts the switches that must not be pressed, since otherwise the conditions of the all-ones problem cannot be satisfied; similarly, g1 counts the switches that must be pressed. Now we have: sol \u2264 g1 + (n \u2212 g1 \u2212 g0)/2 = (n + g1 \u2212 g0)/2 (8) Next we bring in the parameter opt. We can easily see that sol \u2265 opt \u2265 g1, because the switches counted by g1 must be pressed in any solution. So we have g1 \u2264 opt \u2264 sol \u2264 (n + g1 \u2212 g0)/2 (9) Finally, bounding sol by replacing g1 with opt and g0 with 0 yields: sol \u2264 (n + opt)/2 (10) 4. Conclusion and future work This article presents an approximation algorithm for the min generalized all-ones problem on arbitrary graphs, making it possible to process the problem in batches. The algorithm has a complexity of O(n^3). If the equation system AU = B has already been solved and the solution is in column echelon form, the complexity is reduced to O(n(n \u2212 r)), which is the lowest complexity for general graphs. The upper bound of the solution value sol obtained by this algorithm satisfies sol \u2264 (n + opt)/2 and sol \u2264 r, which ensures that the obtained solution, as illustrated in Figure 1 (the range of possible values for sol), always lies in the better half of the possible range. In future work, there still remain two questions to be solved.
One of them is whether there is a polynomialtime algorithm for the min generalized all-ones problem which always finds a solution of size at most c \u02d9 opt for some constant c? The other one is whether we can get such an algorithm for the minimum all-colors problem?" +} \ No newline at end of file diff --git a/intro_8K/test_introduction_short_2404.16563v1.json b/intro_8K/test_introduction_short_2404.16563v1.json new file mode 100644 index 0000000000000000000000000000000000000000..a52e6048e3098111d2dc7ed71c1bb59d802e2349 --- /dev/null +++ b/intro_8K/test_introduction_short_2404.16563v1.json @@ -0,0 +1,16 @@ +{ + "url": "http://arxiv.org/abs/2404.16563v1", + "title": "Evaluating Large Language Models on Time Series Feature Understanding: A Comprehensive Taxonomy and Benchmark", + "abstract": "Large Language Models (LLMs) offer the potential for automatic time series\nanalysis and reporting, which is a critical task across many domains, spanning\nhealthcare, finance, climate, energy, and many more. In this paper, we propose\na framework for rigorously evaluating the capabilities of LLMs on time series\nunderstanding, encompassing both univariate and multivariate forms. We\nintroduce a comprehensive taxonomy of time series features, a critical\nframework that delineates various characteristics inherent in time series data.\nLeveraging this taxonomy, we have systematically designed and synthesized a\ndiverse dataset of time series, embodying the different outlined features. This\ndataset acts as a solid foundation for assessing the proficiency of LLMs in\ncomprehending time series. Our experiments shed light on the strengths and\nlimitations of state-of-the-art LLMs in time series understanding, revealing\nwhich features these models readily comprehend effectively and where they\nfalter. In addition, we uncover the sensitivity of LLMs to factors including\nthe formatting of the data, the position of points queried within a series and\nthe overall time series length.", + "authors": "Elizabeth Fons, Rachneet Kaur, Soham Palande, Zhen Zeng, Svitlana Vyetrenko, Tucker Balch", + "published": "2024-04-25", + "updated": "2024-04-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "Time series analysis and reporting play a crucial role in many areas like healthcare, finance, climate, etc. With the recent advances in Large Language Models (LLMs), integrating them in time series analysis and reporting processes presents a huge po- tential for automation. Recent works have adapted general-purpose LLMs for time series understand- ing in various specific domains, such as seizure localization in EEG time series (Chen et al., 2024), cardiovascular disease diagnosis in ECG time se- ries (Qiu et al., 2023), weather and climate data understanding (Chen et al., 2023), and explainable financial time series forecasting (Yu et al., 2023). Despite these advancements in domain-specific LLMs for time series understanding, it is crucial to conduct a systematic evaluation of general-purpose LLMs\u2019 inherent capabilities in generic time se- ries understanding, without domain-specific fine- tuning. This paper aims to uncover the pre-existing strengths and weaknesses in general-purpose LLMs regarding time series understanding, such that prac- titioners can be well informed of areas where the general-purpose LLMs are readily applicable, and focus on areas for improvements with targeted ef- forts during fine-tuning. 
To systematically evaluate the performance of general-purpose LLMs on generic time series understanding, we propose a taxonomy of time series features for both univariate and multivariate time series. This taxonomy provides a structured categorization of core characteristics of time series across domains. Building upon this taxonomy, we have synthesized a diverse dataset of time series covering different features in the taxonomy. This dataset is pivotal to our evaluation framework, as it provides a robust basis for assessing LLMs\u2019 ability to interpret and analyze time series data accurately. Specifically, we examine the state-of-the-art LLMs\u2019 performance across a range of tasks on our dataset, including time series features detection and classification, data retrieval as well as arithmetic reasoning. Our contributions are three-fold: \u2022 Taxonomy - we introduce a taxonomy that provides a systematic categorization of important time series features, an essential tool for standardizing the evaluation of LLMs in time series understanding. \u2022 Diverse Time Series Dataset - we synthesize a comprehensive time series dataset, ensuring a broad representation of the various types of time series, encompassing the spectrum of features identified in our taxonomy. \u2022 Evaluations of LLMs - our evaluations provide insights into what LLMs do well when it comes to understanding time series and where they struggle, including how they deal with the format of the data, where the query data points are located in the series and how long the time series is.", "main_content": "2.1 Large Language Models Large Language Models (LLMs) are characterized as pre-trained, Transformer-based models endowed with an immense number of parameters, spanning from tens to hundreds of billions, and crafted through the extensive training on vast text datasets (Zhang et al., 2024; Zhao et al., 2023). Notable examples of LLMs include Llama2 (Touvron et al., 2023), PaLM (Chowdhery et al., 2023), GPT3 (Brown et al., 2020), GPT4 (Achiam et al., 2023), and Vicuna-13B (Chiang et al., 2023). These models have surpassed expectations in numerous language-related tasks and extended their utility to areas beyond traditional natural language processing. For instance, Wang et al. (2024) have leveraged LLMs for the prediction and modeling of human mobility, Yu et al. (2023) for explainable financial time series forecasting, and Chen et al. (2024) for seizure localization. This expansive application of LLMs across diverse domains sets the stage for their potential utility in the analysis of time series data, a domain traditionally governed by statistical and machine learning models. 2.2 Language models for time series Recent progress in time series forecasting has capitalized on the versatile and comprehensive abilities of LLMs, merging their language expertise with time series data analysis. This collaboration marks a significant methodological change, underscoring the capacity of LLMs to revolutionize conventional predictive methods with their advanced information processing skills. In the realm of survey literature, comprehensive overviews provided by Zhang et al. (2024) and Jiang et al. (2024) offer valuable insights into the integration of LLMs in time series analysis, highlighting key methodologies, challenges, and future directions. Notably, Gruver et al.
(2023) have set benchmarks for pretrained LLMs such as GPT-3 and Llama2 by assessing their capabilities for zero-shot forecasting. Similarly, Xue and Salim (2023) introduced PromptCast, which adopts a novel approach by treating forecasting as a question-answering activity, utilizing strategic prompts. Further, Yu et al. (2023) delved into the potential of LLMs for generating explainable forecasts in financial time series, tackling inherent issues like cross-sequence reasoning, integration of multi-modal data, and interpretation of results, which pose challenges in conventional methodologies. Additionally, Zhou et al. (2023) demonstrated that leveraging frozen pre-trained language models, initially trained on vast corpora, for time series analysis could achieve comparable or even state-of-the-art performance across various principal tasks in time series analysis, including imputation, classification and forecasting. 2.3 LLMs for arithmetic tasks Despite their advanced capabilities, LLMs face challenges with basic arithmetic tasks, crucial for time series analysis involving quantitative data (Azerbayev et al., 2023; Liu and Low, 2023). Research has identified challenges such as inconsistent tokenization and token frequency as major barriers (Nogueira et al., 2021; Kim et al., 2021). Innovative solutions, such as Llama2\u2019s approach to digit tokenization (Yuan et al., 2023), highlight ongoing efforts to refine LLMs\u2019 arithmetic abilities, enhancing their applicability in time series analysis. 3 Time Series Data 3.1 Taxonomy of Time Series Features Our study introduces a comprehensive taxonomy for evaluating the analytical capabilities of Large Language Models (LLMs) in the context of time series data. This taxonomy categorizes the intrinsic characteristics of time series, providing a structured basis for assessing the proficiency of LLMs in identifying and extracting these features. Furthermore, we design a series of datasets following the proposed taxonomy and we outline an evaluation framework, incorporating specific metrics to quantify model performance accurately across various tasks. The proposed taxonomy encompasses critical aspects of time series data that are frequently analyzed for different applications. Table 1 shows the selected features in increasing complexity, together with their sub-features. We evaluate the LLM on this taxonomy in a two-step process: in the first step, we evaluate whether the LLM can detect the feature, and in a second step, we evaluate whether the LLM can identify the sub-category of the feature. A detailed description of the process is given in Sec. 6.1.2. Table 1: Taxonomy of time series characteristics. Univariate \u2013 Trend: directional movements over time (sub-categories: up, down). Seasonality and Cyclical Patterns: patterns that repeat over a fixed or irregular period (fixed period \u2013 constant amplitude, fixed period \u2013 varying amplitude, shifting period, multiple seasonality). Volatility: degree of dispersion of a series over time (constant, increasing, clustered, leverage effect). Anomalies: significant deviations from typical patterns (spikes, step-spikes, level shifts, temporal disruptions). Structural Breaks: fundamental shifts in the series data, such as regime changes or parameter shifts (regime changes, parameter shifts). Statistical Properties: characteristics like fat tails and stationarity versus non-stationarity (fat tails, stationarity). Multivariate \u2013 Correlation: measures the linear relationship between series, useful for predicting one series from another if they are correlated (positive, negative).
Cross-Correlation: measures the relationship between two series at different time lags, useful for identifying lead or lag relationships (positive direct, positive lagged, negative direct, negative lagged). Dynamic Conditional Correlation: assesses situations where correlations between series change over time (correlated first half, correlated second half). 3.2 Synthetic Time Series Dataset Leveraging our taxonomy, we construct a diverse synthetic dataset of time series, covering the features outlined in the previous section. We generated in total 9 datasets with 200 time series samples each. Within each dataset the time series length is randomly chosen between 30 and 150 to encompass a variety of both short and long time series data. In order to make the time series more realistic, we add a time index, using predominantly daily frequency. Fig. 1 showcases examples of our generated univariate time series (Figure 1: Example synthetically generated time series). Each univariate dataset showcases a unique single-dimensional pattern, whereas the multivariate data explore series interrelations to reveal underlying patterns. Please see Table 4 in the appendix for examples of each univariate dataset, and Table 5 for visual examples of the multivariate cases. For a detailed description of the generation of each dataset, refer to Sec. A in the Appendix. 4 Time Series Benchmark Tasks Our evaluation framework is designed to assess the LLMs\u2019 capabilities in analyzing time series across the dimensions in our taxonomy (Sec. 3.1). The evaluation includes four primary tasks: Feature Detection This task evaluates the LLMs\u2019 ability to identify the presence of specific features within a time series, such as trend, seasonality, or anomalies. For instance, given a time series dataset with an upward trend, the LLM is queried to determine if a trend exists. Queries are structured as yes/no questions to assess the LLMs\u2019 ability to recognize the presence of specific time series features, such as \"Is a trend present in the time series?\" Feature Classification Once a feature is detected, this task assesses the LLMs\u2019 ability to classify the feature accurately. For example, if a trend is present, the LLM must determine whether it is upward, downward, or non-linear. This task involves a QA setup where LLMs are provided with definitions of sub-features within the prompt. Performance is evaluated based on the correct identification of sub-features, using the F1 score to balance precision and recall. This task evaluates the models\u2019 depth of understanding and ability to distinguish between similar but distinct phenomena. Information Retrieval Evaluates the LLMs\u2019 accuracy in retrieving specific data points, such as values on a given date. Arithmetic Reasoning Focuses on quantitative analysis tasks, such as identifying minimum or maximum values. Accuracy and Mean Absolute Percentage Error (MAPE) are used to measure performance, with MAPE offering a precise evaluation of the LLMs\u2019 numerical accuracy. Additionally, to account for nuanced aspects of time series analysis, we propose in Sec. 5.2 to study the influence of multiple factors, including the time series formatting, the location of the queried data point in the time series, and the time series length.
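To make the synthetic data of Section 3.2 concrete before turning to the metrics, the sketch below generates a single univariate series with a daily time index and a length drawn from [30, 150], as described above. The specific functional forms, amplitudes, and the start date are illustrative assumptions for this sketch, not the paper's actual generator (which is detailed in its Appendix A).

```python
import numpy as np
import pandas as pd

def make_series(kind="trend_up", seed=0):
    """Generate one synthetic univariate series in the spirit of Sec. 3.2 (illustrative only)."""
    rng = np.random.default_rng(seed)
    n = rng.integers(30, 151)                      # series length between 30 and 150
    t = np.arange(n)
    noise = rng.normal(0, 0.5, n)

    if kind == "trend_up":                         # taxonomy feature: upward trend
        values = 0.1 * t + noise
    elif kind == "seasonal":                       # fixed-period seasonality, constant amplitude
        values = 5 * np.sin(2 * np.pi * t / 7) + noise
    elif kind == "spike_anomaly":                  # anomaly: a single large spike
        values = noise.copy()
        values[rng.integers(5, n - 5)] += 15
    else:
        raise ValueError(f"unknown kind: {kind}")

    index = pd.date_range("2023-01-01", periods=n, freq="D")   # daily time index
    return pd.Series(values, index=index, name=kind)

series = make_series("seasonal")
print(series.head().to_string())                   # a text rendering like this can be fed to the LLM
```

A series produced this way can then be rendered as text, for example in the CSV-style format discussed in Section 5.2, and embedded in the detection, classification, retrieval, or arithmetic reasoning prompts.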
5 Performance Metrics and Factors 5.1 Performance Metrics We employ the following metrics to report the performance of LLMs on various tasks. F1 Score Applied to feature detection and classification, reflecting the balance between precision and recall. Accuracy Used for assessing the information retrieval and arithmetic reasoning tasks. Mean Absolute Percentage Error (MAPE) Employed for numerical responses in the information retrieval and arithmetic reasoning tasks, providing a measure of precision in quantitative analysis. 5.2 Performance Factors We identified various factors that could affect the performance of LLMs on time series understanding, for each we designed deep-dive experiments to reveal the impacts. Time Series Formatting Extracting useful information from raw sequential data as in the case of numerical time series is a challenging task for LLMs. The tokenization directly influences how the patterns are encoded within tokenized sequences (Gruver et al., 2023), and methods such as BPE separate a single number into tokens that are not aligned. On the contrary, Llama2 has a consistent tokenization of numbers, where it splits each digit into an individual token, which ensures consistent tokenization of numbers (Liu and Low, 2023). We study different time series formatting approaches to determine if they influence the LLMs performance to capture the time series information. In total we propose 9 formats, ranging from simple CSV to enriched formats with additional information. Time Series Length We study the impact that the length of the time series has in the retrieval task. Transformer-based models use attention mechanisms to weigh the importance of different parts of the input sequence. Longer sequences can dilute the attention mechanism\u2019s effectiveness, potentially making it harder for the model to focus on the most relevant parts of the text (Vaswani et al., 2017). Position Bias Given a retrieval question, the position of where the queried data point occurs in the time series might impact the retrieval accuracy. Studies have discovered recency bias (Zhao et al., 2021) in the task of few-shot classification, where the LLM tends to repeat the label at the end. Thus, it\u2019s important to investigate whether LLM exhibits similar bias on positions in the task of time series 4 understanding. 6 Experiments 6.1 Experimental setup 6.1.1 Models We evaluate the following LLMs on our proposed framework: 1) GPT4. (Achiam et al., 2023) 2) GPT3.5. 3) Llama2-13B (Touvron et al., 2023), and 4) Vicuna-13B (Chiang et al., 2023). We selected two open-source models, Llama2 and Vicuna, each with 13 billion parameters, the version of Vicuna is 1.5 was trained by fine-tuning Llama2. Additionally we selected GPT4 and GPT3.5 where the number of parameters is unknown. In the execution of our experiments, we used an Amazon Web Services (AWS) g5.12xlarge instance, equipped with four NVIDIA A10G Tensor Core GPUs, each featuring 24 GB of GPU RAM. This setup was essential for handling both extensive datasets and the computational demands of LLMs. 6.1.2 Prompts The design of prompts for interacting with LLMs is separated into two approaches: retrieval/arithmetic reasoning and detection/classification questioning. Time series characteristics To evaluate the LLM reasoning over time series features, we use a two-step prompt with an adaptive approach, dynamically tailoring the interaction based on the LLM\u2019s responses. 
The first step involves detection, where the model is queried to identify relevant features within the data. If the LLM successfully detects a feature, we proceed with a follow-up prompt, designed to classify the identified feature between multiple sub-categories. For this purpose, we enrich the prompts with definitions of each subfeature (e.g. up or down trend), ensuring a clearer understanding and more accurate identification process. An example of this two-turn prompt is shown in Fig. 2. The full list can be found in Sec. F of the supplementary. Information Retrieval/Arithmetic Reasoning We test the LLM\u2019s comprehension of numerical data represented as text by querying it for information retrieval and numerical reasoning, as exemplified in Fig. 3 and detailed in the supplementary Sec. F. Trend Prompts \"Input: