XaiverZ commited on
Commit
ed3212e
·
1 Parent(s): 5367e35
This view is limited to 50 files because it contains too many changes.   See raw diff
Files changed (50) hide show
  1. intro_8K/test_introduction_short_2404.16260v1.json +19 -0
  2. intro_8K/test_introduction_short_2404.16277v1.json +17 -0
  3. intro_8K/test_introduction_short_2404.16283v1.json +17 -0
  4. intro_8K/test_introduction_short_2404.16294v1.json +17 -0
  5. intro_8K/test_introduction_short_2404.16297v1.json +17 -0
  6. intro_8K/test_introduction_short_2404.16300v1.json +17 -0
  7. intro_8K/test_introduction_short_2404.16301v1.json +16 -0
  8. intro_8K/test_introduction_short_2404.16302v1.json +19 -0
  9. intro_8K/test_introduction_short_2404.16306v1.json +16 -0
  10. intro_8K/test_introduction_short_2404.16325v1.json +17 -0
  11. intro_8K/test_introduction_short_2404.16333v1.json +18 -0
  12. intro_8K/test_introduction_short_2404.16348v2.json +16 -0
  13. intro_8K/test_introduction_short_2404.16369v1.json +16 -0
  14. intro_8K/test_introduction_short_2404.16375v1.json +18 -0
  15. intro_8K/test_introduction_short_2404.16385v1.json +16 -0
  16. intro_8K/test_introduction_short_2404.16399v1.json +17 -0
  17. intro_8K/test_introduction_short_2404.16407v1.json +18 -0
  18. intro_8K/test_introduction_short_2404.16423v1.json +17 -0
  19. intro_8K/test_introduction_short_2404.16456v1.json +16 -0
  20. intro_8K/test_introduction_short_2404.16461v2.json +16 -0
  21. intro_8K/test_introduction_short_2404.16538v1.json +16 -0
  22. intro_8K/test_introduction_short_2404.16540v1.json +17 -0
  23. intro_8K/test_introduction_short_2404.16563v1.json +16 -0
  24. intro_8K/test_introduction_short_2404.16572v1.json +16 -0
  25. intro_8K/test_introduction_short_2404.16587v1.json +17 -0
  26. intro_8K/test_introduction_short_2404.16621v1.json +18 -0
  27. intro_8K/test_introduction_short_2404.16627v1.json +16 -0
  28. intro_8K/test_introduction_short_2404.16645v1.json +17 -0
  29. intro_8K/test_introduction_short_2404.16651v1.json +16 -0
  30. intro_8K/test_introduction_short_2404.16668v1.json +17 -0
  31. intro_8K/test_introduction_short_2404.16670v1.json +17 -0
  32. intro_8K/test_introduction_short_2404.16678v1.json +16 -0
  33. intro_8K/test_introduction_short_2404.16698v1.json +16 -0
  34. intro_8K/test_introduction_short_2404.16726v2.json +16 -0
  35. intro_8K/test_introduction_short_2404.16745v1.json +16 -0
  36. intro_8K/test_introduction_short_2404.16766v1.json +17 -0
  37. intro_8K/test_introduction_short_2404.16767v1.json +18 -0
  38. intro_8K/test_introduction_short_2404.16789v1.json +0 -0
  39. intro_8K/test_introduction_short_2404.16792v1.json +18 -0
  40. intro_8K/test_introduction_short_2404.16807v1.json +16 -0
  41. intro_8K/test_introduction_short_2404.16811v2.json +17 -0
  42. intro_8K/test_introduction_short_2404.16816v1.json +16 -0
  43. intro_8K/test_introduction_short_2404.16818v1.json +16 -0
  44. intro_8K/test_introduction_short_2404.16821v2.json +16 -0
  45. intro_8K/test_introduction_short_2404.16831v2.json +16 -0
  46. intro_8K/test_introduction_short_2404.16895v3.json +16 -0
  47. intro_8K/test_introduction_short_2404.16899v1.json +16 -0
  48. intro_8K/test_introduction_short_2404.16921v1.json +17 -0
  49. intro_8K/test_introduction_short_2404.16926v1.json +16 -0
  50. intro_8K/test_introduction_short_2404.16966v1.json +16 -0
intro_8K/test_introduction_short_2404.16260v1.json ADDED
@@ -0,0 +1,19 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16260v1",
3
+ "title": "OmniSearchSage: Multi-Task Multi-Entity Embeddings for Pinterest Search",
4
+ "abstract": "In this paper, we present OmniSearchSage, a versatile and scalable system for\nunderstanding search queries, pins, and products for Pinterest search. We\njointly learn a unified query embedding coupled with pin and product\nembeddings, leading to an improvement of $>8\\%$ relevance, $>7\\%$ engagement,\nand $>5\\%$ ads CTR in Pinterest's production search system. The main\ncontributors to these gains are improved content understanding, better\nmulti-task learning, and real-time serving. We enrich our entity\nrepresentations using diverse text derived from image captions from a\ngenerative LLM, historical engagement, and user-curated boards. Our multitask\nlearning setup produces a single search query embedding in the same space as\npin and product embeddings and compatible with pre-existing pin and product\nembeddings. We show the value of each feature through ablation studies, and\nshow the effectiveness of a unified model compared to standalone counterparts.\nFinally, we share how these embeddings have been deployed across the Pinterest\nsearch stack, from retrieval to ranking, scaling to serve $300k$ requests per\nsecond at low latency. Our implementation of this work is available at\nhttps://github.com/pinterest/atg-research/tree/main/omnisearchsage.",
5
+ "authors": "Prabhat Agarwal, Minhazul Islam Sk, Nikil Pancha, Kurchi Subhra Hazra, Jiajing Xu, Chuck Rosenberg",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "cs.IR",
9
+ "cats": [
10
+ "cs.IR",
11
+ "cs.AI",
12
+ "cs.LG",
13
+ "H.3.3"
14
+ ],
15
+ "label": "Original Paper",
16
+ "paper_cat": "LLM Fairness",
17
+ "gt": "Pinterest\u2019s mission is to bring everyone the inspiration to create a life they love. Search is one of the key surfaces on Pinterest where users seek inspiration spanning a wide range of interests, such as decorating their homes, planning weddings, or keeping up with the latest trends in beauty and fashion. In order to enhance the search experience, modern search systems aim to incorporate various types of content such as web documents, news, shopping items, videos, and more. Similarly, Pinterest\u2019s search feed encompasses a diverse range of content, including pins, shopping items, video pins, and related queries. To construct an inspiring feed for each of the more than 6 billion searches per month on Pinterest we must uncover relevant content from billions of pins and products. We must also find relevant queries to help users refine their queries and navigate their search journey. As an additional challenge, Pinterest search is global and multi- lingual with searchers using more than 45 languages to find inspi- rational content. Embeddings are useful building blocks in recommendation sys- tems, especially search, where natural language understanding is key [11, 23, 24]. Embeddings can power retrieval use cases via ap- proximate nearest neighbor (ANN) search [14, 22], enable detailed content and query understanding in ranking models without the overhead of processing raw data, and serve as a strong base to learn in low-data use-cases [31]. Despite their utility, embeddings come with their own challenges: if we learn a separate embedding for every use-case, there is an explosion of potentially expensive mod- els that must be inferred on every request and used in downstream models. This also may lead to suboptimal recommendation quality \u2013 some use-cases may not have enough labels to learn an optimal representation. In practice, it could entail additional maintenance costs and technical debt for upgrading to new versions of embed- dings in certain applications, as some data may have been collected over the course of months or years. Through rigorous offline experimentation, we show the impact of our key decisions in building embeddings for web-scale search at Pinterest: \u2022 Pin and product representations can be substantially en- riched using diverse text derived from image captions from arXiv:2404.16260v1 [cs.IR] 25 Apr 2024 WWW \u201924 Companion, May 13\u201317, 2024, Singapore, Singapore Prabhat Agarwal et al. a generative LLM, historical engagement, and user-curated boards. \u2022 A single query embedding can be used to retrieve queries, products, and Pins with nearly the same effectiveness as task-specific embeddings. \u2022 A single query embedding can learn compatibility with multi- ple pre-existing embeddings and learned entity embeddings, and perform well when compared across tasks. OmniSearchSage has been deployed at Pinterest and is an in- tegral component of the search stack. It powers embedding-based retrieval for standard and product pins, queries and ads. It is also one of the most important feature in multi-stage ranking models and various query classification models. These gains all arise despite the existence of other features enabling pin and product under- standing, which highlights the importance optimizing embeddings end-to-end for search.",
18
+ "main_content": "Our work to build multi-task multi-entity embeddings for search draws upon broad areas of work. Our representation of pins and products extends existing work on multi-modal learning and two tower models for search retrieval. These have been extensively applied in the context of search and recommendation systems as an efficient way to retrieve results not purely related to the search query based on text. In OmniSearchSage, we demonstrate that the embeddings generated by these models can also serve as features in ranking and relevance models. Additionally, we offer a brief examination of specific embeddings within the Pinterest ecosystem. 2.1 Model-based Search Retrieval Historically, search systems have been powered by two stages: token-based matching, or candidate generation, and then scoring with a complex model. These have drawbacks, especially when users make complex queries or content is not primarily textual. This has led to the exploration of two tower models, which encode a query into a single embedding or a small set of embeddings, and then use those to retrieve relevant documents with approximate or exact nearest neighbor search [5, 11, 18, 20, 21, 24, 40]. Two natural topics in learning embeddings for search are document representation, and query representation. Depending on the learning objective, this query representation could be personalized, or it could be a pure text embedding model. Many architectures for query embeddings in industry have been proposed based on simple CNNs [12], bag of words models [11, 23], transformers [19], and more, but they share a basic structure involving query understanding and sometimes context understanding. Document representation is also a major challenge. The text associated directly with an item is popular as a key feature, but depending on the task, other sources have been found to provide great value, including queries where other users have engaged with a given item [5, 24, 25] and image content embeddings [19]. 2.2 Multi-task, multi-modal, and multi-entity embeddings The area of learning embeddings isn\u2019t exclusive to the realm of recommendation systems and has been studied extensively [4, 6, 29, 30]. Multi-task learning is a technique commonly utilized in ranking models to optimize for multiple objectives concurrently, aiming for enhanced performance or more efficient information sharing [33, 41]. A less frequently encountered approach involves the joint learning of embeddings for more than two entities. Though this methodology is sometimes implemented in graph learning scenarios, it can also be perceived as an extension of multi-task learning [39]. Multi-modal embeddings are of substantial interest in the industry since the majority of web content is multi-modal, typically including at both text and images [18, 19, 38]. One can take embeddings or raw data from each modality as inputs, and merge them at any stage of the model. The methodology typically involves utilizing embeddings or raw data from each mode as inputs, which are then merge at different stages in the model. Early-stage fusion can pose computational hurdles; therefore, in cases where performance is indifferent, utilizing embeddings instead of raw data is generally the preferred course of action [38]. 2.3 Embeddings at Pinterest PinSage [37] is a scalable GNN-based embedding representing pins. It is based on the GraphSage GCN algorithm [10], sampling neighborhoods with personalized PageRank to augment pin understanding, instead of simple heuristics like \ud835\udc5b-hop neighbors. It aggregates some basic visual [2] and text information into a single dense representation, and is a critical feature in many models. To represent products, we have an embedding, ItemSage [1], which aggregates raw data about products, including metadata from product pages, and potentially many images of the product. ItemSage is trained for compatibility with PinSage, and the search query embedding preceding OmniSearchSage, meaning that the distance between ItemSage and these two embeddings can be used for retrieving or ranking content [27]. 3 METHOD 3.1 Problem Formulation In order to enhance the search experience, modern search systems aim to incorporate various types of content such as web documents, news, shopping items, videos, and more. Similarly, Pinterest\u2019s search feed encompasses a diverse range of content, including pins, shopping items, video pins, and related queries. Training separate query embedding models for each content type and its representation proves to be resource-intensive and inefficient. To address this issue, we introduce OmniSearchSage, which offers a unified query embedding model that jointly trains query embeddings for query-query, query-pin, and query-product retrieval and ranking. Another requirement in production systems is compatibility with existing embeddings, which is essential for purposes such as cost-efficiency and simplified migration. Hence we also train the query embeddings to be compatible with the corresponding preexisting embeddings for the entities. As a side effect, we also get compatibility with some embeddings due to the triangle inequality property inherent to cosine similarity. OmniSearchSage: Multi-Task Multi-Entity Embeddings for Pinterest Search WWW \u201924 Companion, May 13\u201317, 2024, Singapore, Singapore 3.2 Enriching Entity Representations On Pinterest, each pin or product is associated with an image and title, along with an optional text (known as description) and link. Beyond these typical attributes, products may carry additional metadata, such as brand information, color description, and more. Document expansion techniques has been empirically demonstrated to significantly enhance the performance of not just token-based, but also embedding-based search retrieval systems [8, 25, 26, 28, 34]. Hence, in OmniSearchSage, we enrich our entity representations using diverse text derived from image captions from a generative LLM, historical engagement, and user-curated boards as described below. In the dataset, 71% of pins and products feature a title or description, 91% include non-empty board titles, and 65% contain non-empty engaged queries. Synthetic GenAI captions are generated for all pins and products, ensuring full coverage. Section 4.3.2 discusses the importance of each of these enrichment. 3.2.1 Synthetic GenAI Captions. On our platform, a substantial volume of pins (about 30%) lack associated titles or descriptions, or possess noisy and/or irrelevant title or description. We address this issue by employing an off-the-shelf image captioning model, BLIP [17], to generate synthetic descriptions for these images. To assess the quality of these synthetically generated descriptions, we enlisted human evaluators to judge their relevance and quality. For a robust assessment, three distinct ratings were collected for each image within a sample of 10\ud835\udc58images, curated uniformly across various broad pin categories. The results indicated that an overwhelming 87.84% of the generated descriptions were both relevant and of high quality, while a meager 1.16% were deemed irrelevant and of poor quality. These synthetically generated descriptions serve as an added feature in our model, enriching the diversity of data associated with each entity. Despite not being directly visible to the users, their addition significantly contributes to a deeper understanding of the pins\u2019 content. 3.2.2 Board Titles. On Pinterest, users explore and save pins to their personal collections, referred to as boards. Each board carries an associated title, reflecting the topic or theme of the collection. Most often, these user-crafted boards are meticulously organized, each focusing on a distinct theme or purpose. A user might, for instance, create discrete boards for \u201cSocial Media Marketing\" and \u201cGraphic Design\u2019\u00a8. Consequently, these board titles provide valuable, user-generated descriptors for the pins within the respective boards. We exploit this user-curated information by accumulating the titles of all boards each pin has been saved to. We limit our selection to a maximum of 10 unique board titles for each pin/product, systematically eliminating any potentially noisy or redundant titles as described next. First, each title is assigned a score influenced by two factors: its frequency of occurrence and the prevalence of its comprising words. Following this, titles are then ranked based on a hierarchy of their score (ascending), word count (descending), and character length (descending). The resulting top 10 board titles are subsequently incorporated as a feature in our model. This process eliminates any potentially noisy or redundant titles from the feature. Query Encoder Query Encoder Unified Pin-Product Encoder PinSage Unified Pin-Product Encoder ItemSage Query Pin Item Query L(query, query) L(query, pin) L(query, pin_c) L(query, product) L(query, product_c) Pretrained and Frozen Trained from scratch Figure 1: Diagrammatic Representation of OmniSearchSage\u2019s Multi-Entity, Multi-Task Architecture. 3.2.3 Engaged Queries. When multiple users interact with a specific pin or product for a certain query within a search feed, it signifies that pin\u2019s relevance to that query. We can use these queries to expand our understanding of the pin/product. For every pin, we generate a list of queries that have attracted user engagements, along with the counts and types of such engagements. This list of queries is then sorted using a function based on the count for each type of engagement. We use the top 20 queries from these sorted lists as a feature in our model. Through experimentation with diverse time-windows of query logs for feature creation, we discovered that larger windows yield superior performance. Consequently, we have opted for a twoyear window for feature calculation. However, the complexity of computing this from scratch every time presents a challenge. To mitigate this, we deploy an incremental approach. Every \ud835\udc5bdays, we examine new query logs, create a list of queries for every pin, and then blend it with the previously existing top 20 queries, thereby updating the latest value of the feature. 3.3 Entity Features The features we incorporate include PinSage [37] and unified image embeddings [2] to capture the essence of each pin. Additionally, for product pins, we use ItemSage [1] given its capability in effectively representing product-related pins. Text-based features such as the title and description of each pin are also integral to our feature set. Furthermore, we augment the text associated with each pin with the inclusion of synthetic captions, board titles, and engagement queries as outlined earlier. By integrating all these features, we attain a comprehensive and multi-dimensional representation of each pin, hence facilitating enhanced learning of representations. 3.4 Encoders In our work, we consider 3 entity types, namely, pin, product and query. Our model consists of an encoder for query, a unified learned encoder for both pin and product, and dedicated compatibility encoders for pin and product, respectively. 3.4.1 Query Encoder. The query encoder in our model (depicted in Figure 2) is based on a multilingual version of the DistilBERT WWW \u201924 Companion, May 13\u201317, 2024, Singapore, Singapore Prabhat Agarwal et al. Multilingual DistilBERT [CLS] antique copper bat ##hro ##om sin ##k Project and L2 Normalize Figure 2: Overview of the query encoder architecture. The encoder takes the output from the last layer associated with the \u2018CLS\u2019 token, projects it onto a 256-dimensional vector space, and finally L2-normalizes the output to generate the final embedding. (distilbert-base-multilingual-cased2) [32]. This choice facilitates efficient handling of queries across a variety of languages. The encoder utilizes the output from the last layer corresponding to the \ud835\udc36\ud835\udc3f\ud835\udc46token and thereafter projects it to a 256-dimensional vector space. Post projection, we apply a \ud835\udc3f2 normalization on the 256-dimensional vectors to obtain the final embedding. This normalization greatly simplifies the calculation of cosine-distance in downstream applications, allowing for a straightforward dot product operation. 3.4.2 Unified Pin and Product Encoder. In our model, we utilize a single unified encoder for both pins and products (depicted in Figure 3), and this encoder is jointly trained with the query embeddings. Designed to process both textual features and continuous features, it plays a crucial role in learning the respective embeddings of pins and products. In cases where certain features are defined for one entity but not the other, we substitute them with zero, ensuring a consistent data input. As detailed in section 3.5, we utilize in-batch negatives to train our model. Prior research [9, 15, 16, 29] has empirically demonstrated that larger batches with a substantial number of negatives help in learning better representations. Therefore, to accommodate a larger batch size in the GPU memory, we employ a simple pin encoder model. The following encoder design has been determined through numerous ablation studies. These studies have allowed us to select the most effective configuration for each of the components, while still considering the importance of both training and serving efficiencies. The encoder uses three distinct tokenizers to process the textual features associated with a pin [1, 13, 23]. These include (i) a word unigram tokenizer that uses a vocabulary encompassing the 200\ud835\udc58most frequent word unigrams, (ii) a word bigram tokenizer that makes use of a vocabulary comprising the 1\ud835\udc40most frequent word bigrams, and (iii) a character trigram tokenizer that utilizes a vocabulary of 64\ud835\udc58character trigrams. The tokens are mapped to their respective IDs in the vocabulary V which constitute all three 2https://huggingface.co/distilbert-base-multilingual-cased Image Encoder PinSAGE ItemSAGE MLP & L2 Normalize Hash Embedder Word Unigram Tokenizer Word Bigram Tokenizer Character Trigram Tokenizer Tokenizer Pin Text Board Titles Engaged Queries Synthetic GenAI Captions Figure 3: Schematic of the unified encoder model for pins and products, illustrating the use of three different tokenizers, a hash embedding table, and an MLP layer for combining text embeddings with other continuous features. tokenizers. Any token that falls out of this combined vocabulary gets discarded. The use of these combined tokenizers effectively helps in capturing the semantics of various texts associated with a pin/product. For token embedding learning, we use a 2-hash hash embedding table of size 100, 000 [1, 35]. Each identified token\u2019s ID \ud835\udc56is hashed into two places within the embedding table using hash functions \u210e1(\ud835\udc56) and \u210e2(\ud835\udc56). The ultimate embedding of a token with ID \ud835\udc56is a weighted interpolation of the two locations: \ud835\udc4a1\ud835\udc56\u210e1(\ud835\udc56) +\ud835\udc4a2\ud835\udc56\u210e2(\ud835\udc56), where \ud835\udc4a1 and \ud835\udc4a2 are learned weight vectors of size |V| each. The sum of all token embeddings and the embedding features are concatenated and fed into a 3-layer MLP, with layer sizes of 1024, 1024, 256. Following this, the output of the MLP layer undergoes L2-normalization just like the query embedding. 3.4.3 Compatibility Encoders. In our model, we employ two discrete compatibility encoders individually dedicated to pins and products. These encoders leverages the pre-existing pin and product embeddings, represented by PinSage for pins and ItemSage for products. This allows the model to adeptly learn query embeddings that align effectively with PinSage and ItemSage embeddings. 3.5 Multi-Task Sampled Softmax Loss Taking inspiration from Itemsage [1], the problem of learning query and entity embeddings is treated as an extreme classification problem, with the aim of predicting entities relevant to a given query [7]. We employ the sampled softmax loss with logQ correction [36] to train our model. We use multitasking to jointly train entity embeddings and train the query embeddings to be compatible with existing entity embeddings. Formally, we define a task \ud835\udc47\u2208T as a tuple of a dataset of query-entity pairs (D = {(\ud835\udc65,\ud835\udc66)\ud835\udc56}) and an entity encoder E. \ud835\udc47\u225c{D, E}. For a batch of data, B = {(\ud835\udc65,\ud835\udc66)\ud835\udc56} \u2282D, for task\ud835\udc47\u2208T, the aim is to learn query embedding \ud835\udc5e\ud835\udc65\ud835\udc56and entity embedding \ud835\udc5d\ud835\udc66\ud835\udc56= E(\ud835\udc66\ud835\udc56) such that the cosine similarity of the embeddings \ud835\udc5e\ud835\udc65\ud835\udc56\u00b7 \ud835\udc5d\ud835\udc66\ud835\udc56is maximized. OmniSearchSage: Multi-Task Multi-Entity Embeddings for Pinterest Search WWW \u201924 Companion, May 13\u201317, 2024, Singapore, Singapore This is achieved by minimizing the softmax loss: \ud835\udc3f\ud835\udc47= \u22121 |B| |B| \u2211\ufe01 \ud835\udc56=1 log exp(\ud835\udc5e\ud835\udc65\ud835\udc56\u00b7 \ud835\udc5d\ud835\udc66\ud835\udc56) \u00cd \ud835\udc66\u2208\ud835\udc36exp(\ud835\udc5e\ud835\udc65\ud835\udc56\u00b7 \ud835\udc5d\ud835\udc66) , (1) where C is the catalog of all entities of the same type as\ud835\udc66\ud835\udc56. To ensure problem tractability, the normalization term in the denominator is approximated using a sample of the catalog \ud835\udc36. We use (i) positives in the batch, \ud835\udc35\ud835\udc41= {\ud835\udc66\ud835\udc56|(\ud835\udc65\ud835\udc56,\ud835\udc66\ud835\udc56) \u2208B}, and (ii) a random sample of the catalog, \ud835\udc36\u2032. To rectify any bias that might have been introduced through sampling, we utilize the logQ correction technique. This method operates by deducting the sampling probability of the negative, represented as log\ud835\udc44(\ud835\udc66|\ud835\udc65\ud835\udc56), from the existing logits. This is crucial to ensure that popular entities aren\u2019t disproportionately penalized. \ud835\udc3f\ud835\udc47= \ud835\udc3f\ud835\udc46\ud835\udc4f\ud835\udc5b \ud835\udc47 + \ud835\udc3f\ud835\udc46\ud835\udc5f\ud835\udc5b \ud835\udc47 (2) \ud835\udc3f\ud835\udc46\ud835\udc4f\ud835\udc5b \ud835\udc47 = \u22121 |B| |B| \u2211\ufe01 \ud835\udc56=1 log exp(\ud835\udc5e\ud835\udc65\ud835\udc56\u00b7 \ud835\udc5d\ud835\udc66\ud835\udc56\u2212log\ud835\udc44(\ud835\udc66\ud835\udc56|\ud835\udc65\ud835\udc56)) \u00cd \ud835\udc67\u2208\ud835\udc35\ud835\udc41exp(\ud835\udc5e\ud835\udc65\ud835\udc56\u00b7 \ud835\udc5d\ud835\udc67\u2212log\ud835\udc44(\ud835\udc67|\ud835\udc65\ud835\udc56)) (3) \ud835\udc3f\ud835\udc46\ud835\udc5f\ud835\udc5b \ud835\udc47 = \u22121 |B| |B| \u2211\ufe01 \ud835\udc56=1 log exp(\ud835\udc5e\ud835\udc65\ud835\udc56\u00b7 \ud835\udc5d\ud835\udc66\ud835\udc56\u2212log\ud835\udc44(\ud835\udc66\ud835\udc56|\ud835\udc65\ud835\udc56)) \u00cd \ud835\udc66\u2208\ud835\udc36\u2032 exp(\ud835\udc5e\ud835\udc65\ud835\udc56\u00b7 \ud835\udc5d\ud835\udc66\u2212log\ud835\udc44(\ud835\udc66|\ud835\udc65\ud835\udc56)) (4) = \u22121 |B| |B| \u2211\ufe01 \ud835\udc56=1 log exp(\ud835\udc5e\ud835\udc65\ud835\udc56\u00b7 \ud835\udc5d\ud835\udc66\ud835\udc56\u2212log\ud835\udc44(\ud835\udc66\ud835\udc56|\ud835\udc65\ud835\udc56)) \u00cd \ud835\udc66\u2208\ud835\udc36\u2032 exp(\ud835\udc5e\ud835\udc65\ud835\udc56\u00b7 \ud835\udc5d\ud835\udc66\u2212log\ud835\udc44\ud835\udc5b(\ud835\udc66)) , (5) since \ud835\udc66is sampled independently The total loss is defined as the sum of all individual task losses, \ud835\udc3f= \u2211\ufe01 \ud835\udc47\u2208T \ud835\udc3f\ud835\udc47. (6) We mix together different tasks together in one batch and control the influence of each task on the model through this composition. To increase training efficiency, we share the pairs in the batch across all tasks with the same dataset. 3.6 Model Serving OmniSearchSage query embeddings are integral to numerous applications in the search stack, which necessitates us to maintain a strict latency budget. For real-time inference with minimized latency, our query encoder is served on GPUs by our in-house C++based machine learning model server, the Scorpion Model Server (SMS). Factoring in that query distribution complies with Zipf\u2019s law, we have instituted a cache-based system to curb costs and shorten response times. The query embedding server first verifies if a query is cached before resorting to the query inference server should it be absent from the cache. After testing various Cache Time-To-Live (TTL) periods, a TTL of 30 days was established as optimal. The system is equipped for handling 300\ud835\udc58requests per second, maintaining a median (p50) latency of just 3ms, and 90 percentile (p90) latency of 20ms. The implementation of this cachebased system efficiently reduces the load on the inference server to approximately 500 QPS, leading to substantial cost and latency reductions. The pin and product embeddings are derived offline on a daily basis through batch inference on GPUs and are subsequently published to our signal store for consumption. Pair Source Actions Size Query-Pin Query Logs repin, longclick 1.5B Query-Product Query Logs repin, longclick 136M Query-Product Offsite logs add-to-cart, checkout 2.5M Query-Query Query Logs click 195M Table 1: Summary of the different training datasets. 4 EXPERIMENTS 4.1 Dataset Our dataset is primarily constructed by extracting unique queryentity pairs from one year of search query logs. We consider various forms of engagement on the platform when extracting these pairs, including \u2018saves\u2019 (when a user saves a pin to a board) and \u2018long clicks\u2019 (instances where users browse the linked page for more than 10 seconds before returning to Pinterest). For products, we enrich our dataset by incorporating offsite actions as well. Thus, we also include anonymized pairs tied to significant actions like \u2018add to cart\u2019 and \u2018checkout\u2019. A common challenge in recommendation systems is the popularity bias, where certain pins are overrepresented due to their high appeal. To counteract this bias, we impose a limit on the number of times the same pin can be paired. This limit is capped at 50 pairs for pins and is extended to 200 pairs for products (since products have lower volume and engagement). By adopting this strategy, we ensure our dataset is robust and truly representative of the user\u2019s activity on the platform. Our model training is further extended to encompass queryquery pairs. On Pinterest, users are presented with similar query suggestions, and engagements with these recommendations are recorded in the search logs. We leverage these records, extracting such pairs from an entire year\u2019s logs, thus enriching our training dataset. A detailed breakdown of the positive labels in the dataset is provided in Table 1. 4.2 Offline Evaluation Metrics Our evaluation of the model encompasses both user engagement data and human-labeled relevance data. Relevance gets measured using human-labeled pairs of queries and pins, sampled from production traffic from four distinct countries: US, UK, France, and Germany. This strategy serves to assess the model\u2019s performance in handling multiple languages and cultural contexts. Evaluation of user engagement considers a selected 7-day period. We ensure no data leakage\u2014possible due to the inclusion of engagement features such as engaged queries\u2014by maintaining a 15-day separation between the end of the training dataset and the beginning of the evaluation phase. We sample 80\ud835\udc58pairs from the defined evaluation duration to represent repins and long clicks for both pins and products. Another 80\ud835\udc58pairs, corresponding to clicks for queries, are also included for comprehensive performance evaluation. The primary metric we used for evaluation is named \u2018Recall@10\u2019. This metric denotes the likelihood of the occurrence of the engaged entity within the top 10 entities when these entities are sorted in descending order based on their similarity to the query. WWW \u201924 Companion, May 13\u201317, 2024, Singapore, Singapore Prabhat Agarwal et al. Metric SearchSage OmniSearchSage Gain Pin Save 0.39 0.65 +67% Long-Click 0.45 0.73 +62% Relevance (US) 0.25 0.45 +80% Relevance (UK) 0.29 0.51 +76% Relevance (FR) 0.23 0.43 +87% Relevance (DE) 0.28 0.46 +64% Product Save 0.57 0.73 +28% Long-Click 0.58 0.73 +26% Query Click 0.54 0.78 +44% Table 2: Comparative analysis of OmniSearchSage and the baseline SearchSage across various tasks Pin, Product, and Query. Consider a dataset \ud835\udc37= (\ud835\udc5e\ud835\udc56,\ud835\udc52\ud835\udc56)\ud835\udc5b \ud835\udc56=1, where each (\ud835\udc5e\ud835\udc56,\ud835\udc52\ud835\udc56) denotes a query-engaged entity pair, and also consider a random corpus \ud835\udc36 with \ud835\udc5aentities. The Recall@10 metric can then be defined as the average over all queries of the indicator function 1, where 1 equals 1 if the engaged entity \ud835\udc52\ud835\udc56is amongst the top 10 entities in \ud835\udc36when ranked by their dot product with the query \ud835\udc5e\ud835\udc56. Recall@10 = 1 |\ud835\udc37| |\ud835\udc37| \u2211\ufe01 \ud835\udc56=1 1[( \u2211\ufe01 \ud835\udc66\u2208\ud835\udc36 \ud835\udc65\ud835\udc56\u00b7 \ud835\udc66> \ud835\udc65\ud835\udc56\u00b7 \ud835\udc66\ud835\udc56) > 10] For every pin, query, and product, we employ a uniformly distributed random sample of \ud835\udc5a= 1.5\ud835\udc40entities from our corpus. 4.3 Offline Results In this section, we provide a comprehensive comparison between our proposed model, OmniSearchSage, and the existing baselines, which helps showcase its performance enhancements. Subsequently, we undertake an in-depth exploration of key influential aspects such as the significance of text enrichments, the pros and cons of adopting multitasking approaches, and the operational efficacy of compatibility encoders in the context of our model. 4.3.1 Comparison with Baselines. In this study, the existing version of SearchSage [27] serves as our comparison baseline. It operates using fixed PinSage and ItemSage embeddings for pins and products, respectively. For OmniSearchSage, we utilize the query encoder to derive query embeddings and the unified pin and product encoder to generate pin and product embeddings. In Table 2, comparisons are drawn between OmniSearchSage and SearchSage, with both models being trained and evaluated on the same dataset. It is important to highlight that the baseline model, SearchSage, does not involve query-query pairs for training purposes. On the pin dataset, OmniSearchSage shows a significant gain, between 60% and 90%, over SearchSage across all metrics. Recall is relatively consistent across different countries, reflecting the multilingual robustness of OmniSearchSage. Analysis of the product dataset reveals that OmniSearchSage outperforms the baseline model by about 27% in predicting product save long-click relevance No captions 0.51 0.60 0.36 With captions 0.66 0.76 0.36 Improvement +30.43% +25.58% 0% Table 3: Comparative assessment displaying the influence of Synthetic GenAI Captions on pins lacking titles and descriptions. engagement. This increment is less prominent as compared to the pins dataset, mainly because ItemSage, upon which this comparison is based, has already undergone training on search tasks. Nevertheless, the observed improvement shows the positive impact of incorporating new features as well as the benefit of multi-tasking. Interestingly, SearchSage is able to predict related query clicks substantially better than random despite not being trained on this task. However, when we directly optimize for this objective in OmniSearchSage, we see a substantial +44% improvement. We show this improvement can be attributed to both training on related queries, and multi-task learning in Section 4.3.3. 4.3.2 Importance of content enrichment. In this section, we delve into an analysis of the importance of various text enhancements described in Section 3.2. To maintain brevity, the evaluation focuses solely on the metrics related to the query-pin task. Our first direction of investigation centers around the impact of integrating synthetic captions for pins that lack both a title and description. For this purpose, we extracted pairs from the evaluation dataset in which the engaged pin was missing a title or a description. This resulted in a narrowed evaluation dataset of 24\ud835\udc58pairs. The model\u2019s performance, initially based on solely continuous features and native text, was then compared to a model additionally enriched with captions. Table 3 presents the results of this comparison. When synthetic captions were added, both \u2018save\u2019 and \u2018long-click\u2019 metrics saw substantial improvements \u2014 approximately +30% and +26% respectively. However, the relevance metric remained unchanged. This suggests that adding synthetic captions can significantly enhance the model\u2019s performance for certain metrics when representing pins that lack a title and description. Table 4 illustrates the impact of adding different text enrichments on the model\u2019s performance. Each percentage increase is relative to the previous row, displaying the additional improvement from each additional feature. Our baseline model utilizes only continuous features for training and its performance values are reflected in the first row. Upon adding \u2018Title\u2019, \u2018Description\u2019, and \u2018Synthetic GenAI Captions\u2019 to the baseline model, we notice a robust improvement across all metrics. save long-click relevance Continuous Features Only 0.43 0.53 0.30 Adding Title, Description and Synthetic GenAI Captions 0.52 (+21%) 0.63 (+19%) 0.39 (+30%) Adding Board Titles 0.61 (+17%) 0.68 (+8%) 0.44 (+13%) Adding Engaged Queries 0.65 (+7%) 0.73 (+7%) 0.46 (+5%) Table 4: Impact of adding different text enrichments on the model\u2019s performance. Each percentage increase is relative to the previous row, displaying the additional improvement from each additional feature. OmniSearchSage: Multi-Task Multi-Entity Embeddings for Pinterest Search WWW \u201924 Companion, May 13\u201317, 2024, Singapore, Singapore Dataset Pin Only Product only Query Only OmniSearchSage pin save 0.68 0.65 long-click 0.75 0.73 avg relevance 0.45 0.46 product save 0.73 0.73 long-click 0.73 0.73 query click 0.73 0.78 Table 5: Comparative analysis illustrating the contrasts between our unified multi-task model and models trained individually for each task pin, product, and query. There is a 20% improvement in the engagement datasets, while the relevance metric improves by a notable 30%, demonstrating the substantial impact of these text features. The model enhancement continues with adding board titles to the feature set, leading to a further increase of 8 \u221215% in different metrics. This affirms the relevance of board titles in improving predictive accuracy. Finally, we incorporated engaged queries feature into the model, resulting in a consistent, albeit smaller growth across all three metrics. Although the incremental relative gain appears smaller, it still constitutes a significant improvement when compared to the baseline model. In summary, each text enrichment feature contributes significantly to improving model performance as seen by the increment in metrics compared to their immediate preceding state. 4.3.3 Effect of multi-tasking. In Table 5, we present a comparative analysis between models trained independently for each task (pin, product, and query) and our consolidated multitask model. For this comparison, both the independent and multitask models were trained under equivalent conditions with matching batch sizes, computational power, and iterations. The datasets used for both training and evaluation were also identical, with the sole difference that the individual models were trained on their respective subset of pairs from the dataset. This systematic approach ensures the fair and accurate assessment of the performance of the multitask model in relation to the independent task models. On the pin task, we see slight degradation in quality from multitask learning, but, on product and query tasks, results are neutral to positive. This aligns with general notions about multi-task learning: low-data tasks are unlikely to see regressions from multi-task learning, while the pin task using 1.5\ud835\udc35pairs sees a very slight drop in performance. Despite this drop, the simplification benefits of multi-task learning outweigh the metric loss. 4.3.4 Effect of compatibility encoders. We examine the influence of incorporating compatibility encoders on the effectiveness of the learned pin/product embeddings. We train a model that comprises only the query and unified pin and product encoder. Subsequently, this model is compared with another model that fully incorporates all the encoders. Interestingly, there is almost no noticeable degradation in the metrics of the learned encoder, thereby essentially achieving seamless compatibility of the query embedding with pre-existing embeddings at no substantial cost. Furthermore, as demonstrated in Table 6, the performance of the compatibility encoders in the OmniSearchSage model is either on par with or surpasses that of the SearchSage model, which is trained utilising only compatibility encoders. Dataset SearchSage OmniSearchSage pin save 0.39 0.39 long-click 0.45 0.43 avg relevance 0.26 0.26 product save 0.57 0.57 long-click 0.58 0.57 Table 6: Comparison of co-trained compatibility encoders with independently trained compatibility encoders. Product Embedding Index (HNSW) Ads Embedding Index (HNSW) Pin Embedding Index (HNSW) Pin Inverted Token Index Product Inverted Token Index Ads Inverted Token Index L1 Scoring Model User Input Query Query Understanding L2 Scoring Model Query Embedding Server User, Query, Pin Features Figure 4: A simplified depiction of the search retrieval and ranking stack at Pinterest highlighting the integration points for OmniSearchSage embeddings. 5 APPLICATIONS IN PINTEREST SEARCH OmniSearchSage embeddings find wide applications throughout the Pinterest search stack, primarily in retrieval and ranking tasks. Figure 4 presents a simplified depiction of the search retrieval and ranking stack at Pinterest and highlights the integration points for OmniSearchSage embeddings. These embeddings are employed to power the retrieval of pins and products using HNSW [22]. They are also instrumental in the L1 scoring model, where they enhance the efficiency of token-based retrieval sources. Moreover, OmniSearchSage embeddings serve as one of the most critical features in the L2 scoring and relevance models. In this section, we delineate the results derived from the A/B tests we conducted. In these tests, production SearchSage embeddings were replaced with OmniSearchSage embeddings, resulting in boosted performance in both organic and promoted content (Ads) in search. Additionally, we provide results from a human relevance assessment conducted on actual production-sampled traffic. This evaluation further confirms the improved performance derived from the utilization of OmniSearchSage embeddings. Finally, we demonstrate how employing query embeddings also enhances performance in other tasks, such as classification, particularly in situations where data availability is limited. This highlights the ability of the OmniSearchSage model to generalize to tasks different from its original training objectives. 5.1 Human Relevance Evaluation To understand advantages of OmniSearchSage, we enlisted human evaluators to assess the relevance of candidates retrieved via two WWW \u201924 Companion, May 13\u201317, 2024, Singapore, Singapore Prabhat Agarwal et al. (a) Token-based (b) OmniSearchSage-based Figure 5: Comparative display of pins retrieved in response to the query \u2019antique copper bathroom sink\u2019 from the tokenbased system and the OmniSearchSage-based system. Pins deemed relevant are outlined in green, while those considered irrelevant are encircled in red. methods: OmniSearchSage embeddings-based pin retrieval and token-based pin retrieval. For this evaluation, we selected a set of 300 queries, deliberately stratified across both head and tail queries. The top 8 candidate pins were then retrieved from each system using these queries, and human evaluators determined the relevance of the pins to the corresponding query. Every query-pin pair received three judgements, with an inter-annotator agreement rate of 0.89. Evaluation results revealed a noticeable improvement with OmniSearchSage, showing a 10% increase in relevance compared to the token-based system. Figure 5 offers a distinct comparison of retrieved pins for the query \u2018antique copper bathroom sink\u2019 between the candidates retrieved by the token-based system and the OmniSearchSage-based system. The token-based retrieval system often fetches pins related to only part of the query and fails to fetch consistently relevant results. In striking contrast, nearly all pins retrieved by the OmniSearchSage-based system are highly relevant to the specified query, underlining the efficacy of the OmniSearchSage model in understanding the query and aligning similar pins and queries in the same space together. 5.2 Organic Search In this section, we outline the results of the A/B testing conducted to substitute the existing production SearchSage query and entity embeddings with OmniSearchSage embeddings for organic content within Pinterest search. Within the context of search experiments at Pinterest, our attention is largely concentrated on two key metrics: the search fulfillment rate and relevance. The search fulfillment rate is defined as the proportion of searches that result in a user engagement action of significance. Relevance is calculated as the weighted average relevance of the top eight pins for each query, assessed across different query segments. This is measured through human evaluation. The impact on these two metrics, from replacing SearchSage with OmniSearchSage, is presented in Table 7. The table provides data drawn from experiments for three distinct use-cases: (i) retrieval of pins and products, (ii) L1 scoring model, and (iii) L2 scoring model and relevance model. Search Fulfilment Rate Relevance Pin and Product Retrieval +4.1% +0.5% L1 Scoring +0.5% +0.0% L2 Scoring and Relevance Model +2.8% +3.0% Table 7: Online A/B experiment results of OmniSearchSage in Organic Search. gCTR Product Ads Retrieval +5.27% Ads Search Engagement Model +2.96% Ads Search Relevance Model +1.55% Table 8: Online A/B experiment results of OmniSearchSage for Ads in Search. 5.3 Ads in Search The OmniSearchSage embeddings have also successfully replaced the SearchSage embeddings in various applications within Ads on Search surface. We present the results of three use cases: search engagement model, search relevance model, and product ads retrieval. Uniformly, we noted substantial improvements in engagement and relevance within Ads across all use cases. These increments, specifically in the long clickthrough rate (gCTR), are outlined in Table 8. Furthermore, OmniSearchSage led to a noteworthy 4.95% increase in Ads relevance within the Search Ads relevance model. These gains highlight the positive impact of transitioning to OmniSearchSage embeddings for Ads on Search. 5.4 Classification One of the primary advantages of developing robust query representation such as OmniSearchSage is its utility in powering downstream applications, particularly when there is a lack of labels for learning large models. One example of this at Pinterest is interest classification, where we classify queries into a hierarchical taxonomy. Using OmniSearchSage query embeddings for query representation, we were able to increase performance when compared to the baseline FastText [3] model. Precision increased by 30% on average across levels, with the larger gains coming from more granular levels. 6 CONCLUSION In this work, we presented OmniSearchSage, an end-to-end optimized set of query, pin, and product embeddings for Pinterest search, which have shown value across many applications. In contrast to other work focused on learning embeddings for search, we demonstrate the value of unified query, pin, and product embeddings as both candidate generators and features in Pinterest search. We show a great improvement over previous solutions at Pinterest can be attributed to rich document text representations, which improved offline evaluation metrics by > 50%. We also describe practical decisions enabling serving and adoption, including compatibilty encoders, multi-task learning, and long-TTL caching. Lastly, we summarize results from online A/B experiments across organic and ads applications, which have directly led to cumulative gains of +7.4% fulfilment rate on searches, and +3.5% relevance. OmniSearchSage: Multi-Task Multi-Entity Embeddings for Pinterest Search WWW \u201924 Companion, May 13\u201317, 2024, Singapore, Singapore"
19
+ }
intro_8K/test_introduction_short_2404.16277v1.json ADDED
@@ -0,0 +1,17 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16277v1",
3
+ "title": "Causally Inspired Regularization Enables Domain General Representations",
4
+ "abstract": "Given a causal graph representing the data-generating process shared across\ndifferent domains/distributions, enforcing sufficient graph-implied conditional\nindependencies can identify domain-general (non-spurious) feature\nrepresentations. For the standard input-output predictive setting, we\ncategorize the set of graphs considered in the literature into two distinct\ngroups: (i) those in which the empirical risk minimizer across training domains\ngives domain-general representations and (ii) those where it does not. For the\nlatter case (ii), we propose a novel framework with regularizations, which we\ndemonstrate are sufficient for identifying domain-general feature\nrepresentations without a priori knowledge (or proxies) of the spurious\nfeatures. Empirically, our proposed method is effective for both (semi)\nsynthetic and real-world data, outperforming other state-of-the-art methods in\naverage and worst-domain transfer accuracy.",
5
+ "authors": "Olawale Salaudeen, Sanmi Koyejo",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "cs.LG",
9
+ "cats": [
10
+ "cs.LG",
11
+ "stat.ML"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "Knowledge AND Graph",
15
+ "gt": "A key feature of machine learning is its capacity to generalize across new domains. When these domains present di\ufb00erent data distributions, the algorithm must leverage shared structural concepts to achieve out- of-distribution (OOD) or out-of-domain generalization. This capability is vital in numerous important real- world machine learning applications. For example, in safety-critical settings such as autonomous driving, a lack of resilience to unfamiliar distributions could lead to human casualties. Likewise, in the healthcare sector, where ethical considerations are critical, an inability to adjust to shifts in data distribution can result in unfair biases, manifesting as inconsistent performance across di\ufb00erent demographic groups. An in\ufb02uential approach to domain generalization is Invariant Causal Prediction (ICP; [Peters et al., 2016]). ICP posits that although some aspects of data distributions (like spurious or non-causal mechanisms [Pearl, 2010]) may change across domains, certain causal mechanisms remain constant. ICP suggests fo- cusing on these invariant mechanisms for prediction. However, the estimation method for these invariant mechanisms suggested by [Peters et al., 2016] struggles with scalability in high-dimensional feature spaces. To overcome this, Arjovsky et al. [2019] introduced Invariant Risk Minimization (IRM), designed to identify these invariant mechanisms by minimizing an objective. However, requires strong assumptions for identify- ing the desired domain-general solutions [Ahuja et al., 2021, Rosenfeld et al., 2022]; for instance, observing a number of domains proportional to the spurious features\u2019 dimensions is necessary, posing a signi\ufb01cant challenge in these high-dimensional settings. Subsequent variants of IRM have been developed with improved capabilities for identifying domain- general solutions [Ahuja et al., 2020, Krueger et al., 2021, Robey et al., 2021, Wang et al., 2022, Ahuja et al., 2021]. Additionally, regularizers for Distributionally Robust Optimization with subgroup shift have been proposed (GroupDRO) [Sagawa et al., 2019]. However, despite their solid theoretical motivation, empirical evidence suggests that these methods may not consistently deliver domain-general solutions in practice Gulrajani and Lopez-Paz [2020], Kaur et al. [2022], Rosenfeld et al. [2022]. \u2217Contact: [email protected] 1 Kaur et al. [2022] demonstrated that regularizing directly for conditional independencies implied by the generative process can give domain-general solutions, including conditional independencies beyond those considered by IRM. However, their experimental approach involves regularization terms that require direct observation of spurious features, a condition not always feasible in real-world applications. Our proposed methodology also leverages regularizers inspired by the conditional independencies indicated by causal graphs but, crucially, it does so without necessitating prior knowledge (or proxies) of the spurious features. 1.1 Contributions In this work, \u2022 we outline su\ufb03cient properties to uniquely identify domain-general predictors for a general set of generative processes that include domain-correlated spurious features, \u2022 we propose regularizers to implement these constraints without independent observations of the spuri- ous features, and \u2022 \ufb01nally, we show that the proposed framework outperforms the state-of-the-art on semi-synthetic and real-world data. The code for our proposed method is provided at https://github.com/olawalesalaudeen/tcri. Notation: Capital letters denote bounded random variables, and corresponding lowercase letters denote their value. Unless otherwise stated, we represent latent domain-general features as Zdg \u2208Zdg \u2261Rm and spurious latent features as Zspu \u2208Zspu \u2261Ro. Let X \u2208X \u2261Rd be the observed feature space and the output space of an invertible function \u0393 : Zdg \u00d7 Zspu 7\u2192X and Y \u2208Y \u2261{0, 1, . . ., K \u22121} be the observed label space for a K-class classi\ufb01cation task. We then de\ufb01ne feature extractors aimed at identifying latent features \u03a6dg : X 7\u2192Rm, \u03a6spu : X 7\u2192Ro so that \u03a6 : X 7\u2192Rm+o \u0000 that is \u03a6(x) = [\u03a6dg(x); \u03a6spu(x)]\u2200x \u2208X \u0001 . We de\ufb01ne e as a discrete random variable denoting domains and E = {P e(Zdg, Zspu, X, Y ) : e = 1, 2, . . .} to be the set of possible domains. Etr \u2282E is the set of observed domains available during training.",
16
+ "main_content": "The source of distribution shift can be isolated to components of the joint distribution. One special case of distribution shift is covariate shift [Shimodaira, 2000, Zadrozny, 2004, Huang et al., 2006, Gretton et al., 2009, Sugiyama et al., 2007, Bickel et al., 2009, Chen et al., 2016, Schneider et al., 2020], where only the covariate distribution P(X) changes across domains. Ben-David et al. [2009] give upper-bounds on target error based on the H-divergence between the source and target covariate distributions, which motivates domain alignment methods like the Domain Adversarial Neural Networks [Ganin et al., 2016] and others [Long et al., 2015, Blanchard et al., 2017]. Others have followed up on this work with other notions of covariate distance for domain adaptation, such as mean maximum discrepancy (MMD) [Long et al., 2016], Wasserstein distance [Courty et al., 2017], etc. However, Kpotufe and Martinet [2018] show that these divergence metrics fail to capture many important properties of transferability, such as asymmetry and non-overlapping support. Furthermore, Zhao et al. [2019] shows that even with the alignment of covariates, large distances between label distributions can inhibit transfer; they propose a label conditional importance weighting adjustment to address this limitation. Other works have also proposed conditional covariate alignment [des Combes et al., 2020, Li et al., 2018c,b]. Another form of distribution shift is label shift, where only the label distribution changes across domains. Lipton et al. [2018] propose a method to address this scenario. Schrouff et al. [2022] illustrate that many real-world problems exhibit more complex \u2019compound\u2019 shifts than just covariate or label shifts alone. One can leverage domain adaptation to address distribution shifts; however, these methods are contingent on having access to unlabeled or partially labeled samples from the target domain during training. When such samples are available, more sophisticated domain adaptation strategies aim to leverage and adapt spurious feature information to enhance performance [Liu et al., 2021, Zhang et al., 2021, Kirichenko et al., 2022]. 2 However, domain generalization, as a problem, does not assume access to such samples [Muandet et al., 2013]. To address the domain generalization problem, Invariant Causal Predictors (ICP) leverage shared causal structure to learn domain-general predictors [Peters et al., 2016]. Previous works, enumerated in the introduction (Section 1), have proposed various algorithms to identify domain-general predictors. Arjovsky et al. [2019]\u2019s proposed invariance risk minimization (IRM) and its variants motivated by domain invariance: min w,\u03a6 1 |Etr| X e\u2208Etr Re(w \u25e6\u03a6) s.t. w \u2208argmin e w Re( e w \u00b7 \u03a6), \u2200e \u2208Etr, where Re(w \u25e6\u03a6) = E \u0002 \u2113(y, w \u00b7 \u03a6(x)) \u0003 , with loss function \u2113, feature extractor \u03a6, and linear predictor w. This objective aims to learn a representation \u03a6 such that predictor w that minimizes empirical risks on average across all domains also minimizes within-domain empirical risk for all domains. However, Rosenfeld et al. [2020], Ahuja et al. [2020] showed that this objective requires unreasonable constraints on the number of observed domains at train times, e.g., observing distinct domains on the order of the rank of spurious features. Follow-up works have attempted to improve these limitations with stronger constraints on the problem \u2013 enumerated in the introduction section. Our method falls under domain generalization; however, unlike the domain-general solutions previously discussed, our proposed solution leverages di\ufb00erent conditions than domain invariance directly, which we show may be more suited to learning domain-general representations. 3 Causality and Domain Generalization We often represent causal relationships with a causal graph. A causal graph is a directed acyclic graph (DAG), G = (V, E), with nodes V representing random variables and directed edges E representing causal relationships, i.e., parents are causes and children are e\ufb00ects. A structural equation model (SEM) provides a mathematical representation of the causal relationships in its corresponding DAG. Each variable Y \u2208V is given by Y = fY (X) + \u03b5Y , where X denotes the parents of Y in G, fY is a deterministic function, and \u03b5Y is an error capturing exogenous in\ufb02uences on Y . The main property we need here is that fY is invariant to interventions to V \\{Y } and is consequently invariant to changes in P(V ) induced by these interventions. Interventions refer to changes to fZ, Z \u2208V \\{Y }. In this work, we focus on domain-general predictors dg that are linear functions of features with domaingeneral mechanisms, denoted as gdg := w \u25e6\u03a6dg, where w is a linear predictor and \u03a6dg identi\ufb01es features with domain-general mechanisms. We use domain-general rather than domain-invariant since domain-invariance is strongly tied to the property: Y \u22a5 \u22a5e | Zdg [Arjovsky et al., 2019]. As shown in the subsequent sections, this work leverages other properties of appropriate causal graphs to obtain domain-general features. This distinction is crucial given the challenges associated with learning domain-general features through domaininvariance methods [Rosenfeld et al., 2020]. Given the presence of a distribution shift, it\u2019s essential to identify some common structure across domains that can be utilized for out-of-distribution (OOD) generalization. For example, Shimodaira [2000] assume P(Y |X) is shared across all domains for the covariate shift problem. In this work, we consider a setting where each domain is composed of observed features and labels, X \u2208X, Y \u2208Y, where X is given by an invertible function \u0393 of two latent random variables: domain-general Zdg \u2208Zdg and spurious Zspu \u2208Zspu. By construction, the conditional expectation of the label Y given the domain-general features Zdg is the same across domains, i.e., Eei [Y |Zdg = zdg] = Eej [Y |Zdg = zdg] (1) \u2200zdg \u2208Zdg, \u2200ei \u0338= ej \u2208E. Conversely, this robustness to e does not necessarily extend to spurious features Zspu; in other words, Zspu may assume values that could lead a predictor relying on it to experience arbitrarily high error rates. Then, a sound strategy for learning a domain-general predictor \u2013 one that is robust to distribution shifts \u2013 is to identify the latent domain-general Zdg from the observed features X. 3 e Zdg Zspu Y X Figure 1: Partial Ancestral Graph representing all non-trivial and valid generative processes (DAGs); dashed edges indicate that an edge may or may not exist. The approach we take to do this is motivated by the Reichenbach Common Cause Principle, which claims that if two events are correlated, there is either a causal connection between the correlated events that is responsible for the correlation or there is a third event, a so-called (Reichenbachian) common cause, which brings about the correlation [Hitchcock and R\u00e9dei, 2021, R\u00e9dei, 2002]. This principle allows us to posit the class of generative processes or causal mechanisms that give rise to the correlated observed features and labels, where the observed features are a function of domain-general and spurious features. We represent these generative processes as causal graphs. Importantly, the mapping from a node\u2019s causal parents to itself is preserved in all distributions generated by the causal graph (Equation 1), and distributions can vary arbitrarily so long as they preserve the conditional independencies implied by the DAG (Markov Property [Pearl, 2010]). We now enumerate DAGs that give observe features with spurious correlations with the label. Valid DAGs. We consider generative processes, where both latent features, Zspu, Zdg, and observed X are correlated with Y , and the observed X is a function of only Zdg and Zspu (Figure 1). Given this setup, there is an enumerable set of valid generative processes. Such processes are (i) without cycles, (ii) are feature complete \u2013 including edges from Zdg and Zspu to X, i.e., Zdg \u2192X \u2190Zspu, and (iii) where the observed features mediate domain in\ufb02uence, i.e., there is no direct domain in\ufb02uence on the label e \u0338\u2192Y . We discuss this enumeration in detail in Appendix B. The result of our analysis is identifying a representative set of DAGs that describe valid generative processes \u2013 these DAGs come from orienting the partial ancestral graph (PAG) in Figure 1. We compare the conditional independencies implied by the DAGs de\ufb01ned by Figure 1 as illustrated in Figure 2, resulting in three canonical DAGs in the literature (see Appendix B for further discussion). Other DAGs that induce spurious correlations are outside the scope of this work. e Zdg Zspu Y X (a) Causal [Arjovsky et al., 2019]. e Zdg Zspu Y X (b) Anticausal [Rosenfeld et al., 2020]. e Zdg Zspu Y X (c) Fully Informative Causal [Ahuja et al., 2021]. Figure 2: Generative Processes. Graphical models depicting the structure of possible data-generating processes \u2013 shaded nodes indicate observed variables. X represents the observed features, Y represents observed targets, and e represents domain in\ufb02uences (domain indexes in practice). There is an explicit separation of domain-general Zdg and domain-speci\ufb01c Zspu features; they are combined to generate observed X. Dashed edges indicate the possibility of an edge. Conditional independencies implied by identi\ufb01ed DAGs (Figure 2). 4 Table 1: Generative Processes and Su\ufb03cient Conditions for Domain-Generality Graphs in Figure 2 (a) (b) (c) Zdg \u22a5 \u22a5Zspu | {Y, e} \u2713 \u2713 \u2717 Identifying Zdg is necessary \u2713 \u2713 \u2717 Fig. 2a: Zdg \u22a5 \u22a5Zspu | {Y, e}; Y \u22a5 \u22a5e | Zdg. This causal graphical model implies that the mapping from Zdg to its causal child Y is preserved and consequently, Equation 1 holds [Pearl, 2010, Peters et al., 2016]. As an example, consider the task of predicting the spread of a disease. Features may include causes (vaccination rate and public health policies) and e\ufb00ects (coughing). e is the time of month; the distribution of coughing changes depending on the season. Fig. 2b: Zdg \u22a5 \u22a5Zspu | {Y, e}; Zdg \u22a5 \u22a5Zspu | Y ; Y \u22a5 \u22a5e | Zdg, Zdg \u22a5 \u22a5e. The causal graphical model does not directly imply that Zdg \u2192Y is preserved across domains. However, in this work, it represents the setting where the inverse of the causal direction is preserved (inverse: Zdg \u2192Y ), and thus Equation 1 holds. A context where this setting is relevant is in healthcare where medical conditions (Y ) cause symptoms (Zdg), but the prediction task is often predicting conditions from symptoms, and this mapping Zdg \u2192Y , opposite of the causal direction, is preserved across distributions. Again, we may consider e as the time of month; the distribution of coughing changes depending on the season. Fig. 2c: Y \u22a5 \u22a5e | Zdg; Zdg \u22a5 \u22a5e. Similar to Figure 2a, this causal graphical model implies that the mapping from Zdg to its causal child Y is preserved, so Equation 1 holds [Pearl, 2010, Peters et al., 2016]. This setting is especially interesting because it represents a Fully Informative Invariant Features setting, that is Zspu \u22a5 \u22a5Y | Zdg [Ahuja et al., 2021]. Said di\ufb00erently, Zspu does not induce a backdoor path from e to Y that Zdg does not block. As an example of this, we can consider the task of predicting hospital readmission rates. Features may include the severity of illness, which is a direct cause of readmission rates, and also include the length of stay, which is also caused by the severity of illness. However, length of stay may not be a cause of readmission; the correlation between the two would be a result of the confounding e\ufb00ect of a common cause, illness severity. e is an indicator for distinct hospitals. We call the condition Y \u22a5 \u22a5e | Zdg the domain invariance property. This condition is common to all the DAGs in Figure 2. We call the condition Zdg \u22a5 \u22a5Zspu | {Y, e} the target conditioned representation independence (TCRI) property. This condition is common to the DAGs in Figure 2a, 2b. In the settings considered in this work, the TCRI property is equivalently Zdg \u22a5 \u22a5Zspu | Y\u2200e \u2208E since e will simply index the set of empirical distributions available at training. Domain generalization with conditional independencies. Kaur et al. [2022] showed that su\ufb03ciently regularizing for the correct conditional independencies described by the appropriate DAGs can give domaingeneral solutions, i.e., identi\ufb01es Zdg. However, in practice, one does not (partially) observe the latent features independently to regularize directly. Other works have also highlighted the need to consider generative processes when designing robust algorithms to distribute shifts [Veitch et al., 2021, Makar et al., 2022]. However, previous work has largely focused on regularizing for the domain invariance property, ignoring the conditional independence property Zdg \u22a5 \u22a5Zspu | Y, e. Su\ufb03ciency of ERM under Fully Informative Invariant Features. Despite the known challenges of learning domain-general features from the domain-invariance properties in practice, this approach persists, 5 likely due to it being the only property shared across all DAGs. We alleviate this constraint by observing that Graph (Fig. 2c) falls under what Ahuja et al. [2021] refer to as the fully informative invariant features settings, meaning that Zspu is redundant, having only information about Y that is already in Zdg. Ahuja et al. [2021] show that the empirical risk minimizer is domain-general for bounded features. Easy vs. hard DAGs imply the generality of TCRI. Consequently, we categorize the generative processes into easy and hard cases Table 1: (i) easy meaning that minimizing average risk gives domaingeneral solutions, i.e., ERM is su\ufb03cient (Fig. 2c), and (ii) hard meaning that one needs to identify Zdg to obtain domain-general solutions (Figs. 2a-2b). We show empirically that regularizing for Zdg \u22a5 \u22a5Zspu | Y \u2200e \u2208 E also gives a domain-general solution in the easy case. The generality of TCRI follows from its su\ufb03ciency for identifying domain-general Zdg in the hard cases while still giving domain-general solutions empirically in the easy case. 4 Proposed Learning Framework We have now clari\ufb01ed that hard DAGs (i.e., those not solved by ERM) share the TCRI property. The challenge is that Zdg and Zspu are not independently observed; otherwise, one could directly regularize. Existing work such as Kaur et al. [2022] empirically study semi-synthetic datasets where Zspu is (partially) observed and directly learn Zdg by regularizing that \u03a6(X) \u22a5 \u22a5Zspu | Y, e for feature extractor \u03a6. To our knowledge, we are the \ufb01rst to leverage the TCRI property without requiring observation of Zspu. Next, we set up our approach with some key assumptions. The \ufb01rst is that the observed distributions are Markov to an appropriate DAG. Assumption 4.1. All distributions, sources and targets, are generated by one of the structural causal models SCM that follow: causal z }| { SCM(e) := \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 Z(e) dg \u223cP (e) Zdg, Y (e) \u2190\u27e8w\u2217 dg, Z(e) dg \u27e9+ \u03b7Y , Z(e) spu \u2190\u27e8w\u2217 spu, Y \u27e9+ \u03b7(e) Zspu, X \u2190\u0393(Zdg, Zspu), (2) anticausal z }| { SCM(e) := \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 Y (e) \u223cPY , Z(e) dg \u2190\u27e8e wdg, Y \u27e9+ \u03b7(e) Zdg, Z(e) spu \u2190\u27e8w\u2217 spu, Y \u27e9+ \u03b7(e) Zspu, X \u2190\u0393(Zdg, Zspu), (3) F IIF z }| { SCM(e) := \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 Z(e) dg \u223cP (e) Zdg, Y (e) \u2190\u27e8w\u2217 dg, Z(e) dg \u27e9+ \u03b7Y , Z(e) spu \u2190\u27e8w\u2217 spu, Zdg\u27e9+ \u03b7(e) Zspu, X \u2190\u0393(Zdg, Zspu), (4) where PZdg is the causal covariate distribution, w\u2019s are linear generative mechanisms, \u03b7\u2019s are exogenous independent noise variables, and \u0393 : Zdg \u00d7 Zspu \u2192X is an invertible function. It follows from having causal mechanisms that we can learn a predictor w\u2217 dg for Zdg that is domain-general (Equation 2-4) \u2013 w\u2217 dg inverts the mapping e wdg in the anticausal case. These structural causal models (Equation 2-4) correspond to causal graphs Figures 2a-2c, respectively. Assumption 4.2 (Structural). Causal Graphs and their distributions are Markov and Faithful [Pearl, 2010]. Given Assumption 4.2, we aim to leverage TCRI property (Zdg \u22a5 \u22a5Zspu | Y \u2200e \u2208Etr) to learn the latent Zdg without observing Zspu directly. We do this by learning two feature extractors that, together, recover Zdg and Zspu and satisfy TCRI (Figure 3). We formally de\ufb01ne these properties as follows. De\ufb01nition 4.3 (Total Information Criterion (TIC)). \u03a6 = \u03a6dg \u2295\u03a6spu satis\ufb01es TIC with respect to random variables X, Y, e if for \u03a6(Xe) = [\u03a6dg(Xe); \u03a6spu(Xe)], there exists a linear operator T s.t., T (\u03a6(Xe)) = [Ze dg; Ze spu]\u2200e \u2208Etr. 6 Xe \u03a6dg \u03a6spu b Zdg \u03b8c \u2295 b Zspu \u03b8e b yc b ye Figure 3: Modeling approach. During training, both representations, \u03a6dg, and \u03a6spu, generate domaingeneral and domain-speci\ufb01c predictions, respectively. However, only the domain-invariant representations/predictions are used during testing \u2013 indicated by the solid red arrows. In other words, a feature extractor that satis\ufb01es the total information criterion recovers the complete latent feature sets Zdg, Zspu. This allows us to de\ufb01ne the proposed implementation of the TCRI property non-trivially \u2013 the conditional independence of subsets of the latents may not have the same implications on domain generalization. We note that X \u22a5 \u22a5Y |Zdg, Zspu, so X has no information about Y that is not in Zdg, Zspu. De\ufb01nition 4.4 (Target Conditioned Representation Independence). \u03a6 = \u03a6dg \u2295\u03a6spu satis\ufb01es TCRI with respect to random variables X, Y, e if \u03a6dg(X) \u22a5 \u22a5\u03a6spu(X) | Y \u2200e \u2208E. Proposition 4.5. Assume that \u03a6dg(X) and \u03a6spu(X) are correlated with Y . Given Assumptions 4.1-4.2 and a representation \u03a6 = \u03a6dg \u2295\u03a6spu that satis\ufb01es TIC, \u03a6dg(X) = Zdg \u21d0 \u21d2\u03a6 satis\ufb01es TCRI. (see Appendix C for proof). Proposition 4.5 shows that TCRI is necessary and su\ufb03cient to identify Zdg from a set of training domains. We note that we can verify if \u03a6dg(X) and \u03a6spu(X) are correlated with Y by checking if the learned predictors are equivalent to chance. Next, we describe our proposed algorithm to implement the conditions to learn such a feature map. Figure 3 illustrates the learning framework. Learning Objective: The \ufb01rst term in our proposed objective is L\u03a6dg = Re(\u03b8c \u25e6\u03a6dg), where \u03a6dg : X 7\u2192Rm is a feature extractor, \u03b8c : Rm 7\u2192Y is a linear predictor, and Re(\u03b8c \u25e6\u03a6dg) = E \u0002 \u2113(y, \u03b8c \u00b7 \u03a6(x)) \u0003 is the empirical risk achieved by the feature extractor and predictor pair on samples from domain e. \u03a6dg and \u03b8c are designed to capture the domain-general portion of the framework. Next, to implement the total information criterion, we use another feature extractor \u03a6spu : X 7\u2192Ro, designed to capture the domain-speci\ufb01c information in X that is not captured by \u03a6dg. Together, we have \u03a6 = \u03a6dg \u2295\u03a6spu where \u03a6 has domain-speci\ufb01c predictors \u03b8e : Rm+o 7\u2192Y for each training domain, allowing the feature extractor to utilize domain-speci\ufb01c information to learn distinct optimal domain-speci\ufb01c (nongeneral) predictors: L\u03a6 = Re\u0000\u03b8e \u25e6\u03a6 \u0001 . L\u03a6 aims to ensure that \u03a6dg and \u03a6spu capture all of the information about Y in X \u2013 total information criterion. Since we do not know o, m, we select them to be the same size on our experiments; o, m could be treated as hyperparameters though we do not treat them as such. Finally, we implement the TCRI property (De\ufb01nition 4.4). We denote LT CRI to be a conditional independence penalty for \u03a6dg and \u03a6spu. We utilize the Hilbert Schmidt independence Criterion (HSIC) [Gretton et al., 2007] as LT CRI. However, in principle, any conditional independence penalty can be used in its place. HSIC: LT CRI(\u03a6dg, \u03a6spu) = 1 2 X k\u2208{0,1} \\ HSIC \u0010 \u03a6dg(X), \u03a6spu(X) \u0011y=k = 1 2 X k\u2208{0,1} 1 n2 k tr \u0010 K\u03a6dgHnkK\u03a6spuHnk \u0011y=k , 7 where k, indicates which class the examples in the estimate correspond to, C is the number of classes, K\u03a6dg \u2208 Rnk\u00d7nk, K\u03a6spu \u2208Rnk\u00d7nk are Gram matrices, Ki,j \u03a6 = \u03ba(\u03a6dg(X)i, \u03a6dg(X)j), Ki,j \u03a6spu = \u03c9(\u03a6spu(X)i, \u03a6spu(X)j) with kernels \u03ba, \u03c9 are radial basis functions, Hnk = Ink \u2212 1 n2 k 11\u22a4is a centering matrix, Ink is the nk \u00d7 nk dimensional identity matrix, 1nk is the nk-dimensional vector whose elements are all 1, and \u22a4denotes the transpose. We condition on the label by taking only examples of each label and computing the empirical HSIC; then, we take the average. Taken together, the full objective to be minimized is as follows: L = 1 Etr X e\u2208Etr \" Re(\u03b8c \u25e6\u03a6dg) + Re(\u03b8e \u25e6\u03a6) + \u03b2LT CRI(\u03a6dg, \u03a6spu) # , where \u03b2 > 0 is a hyperparameter and Etr is the number of training domains. Figure 3 shows the full framework. We note that when \u03b2 = 0, this loss reduces to ERM. Note that while we minimize this objective with respect to \u03a6, \u03b8c, \u03b81, . . . , \u03b8Etr, only the domain-general representation and its predictor, \u03b8c \u00b7 \u03a6dg are used for inference. 5 Experiments We begin by evaluating with simulated data, i.e., with known ground truth mechanisms; we use Equation 5 to generate our simulated data, with domain parameter \u03c3ei; code is provided in the supplemental materials. SCM(ei) := \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 Z(ei) dg \u223cN \u00000, \u03c32 ei \u0001 y(ei) = Z(ei) dg + N \u00000, \u03c32 y \u0001 , Z(ei) spu = Y (ei) + N \u00000, \u03c32 ei \u0001 . (5) Table 2: Continuous Simulated Results \u2013 Feature Extractor with a dummy predictor \u03b8c = 1., i.e., b y = x \u00b7 \u03a6dg \u00b7 w, where x \u2208RN\u00d72, \u03a6dg, \u03a6spu \u2208R2\u00d71, w \u2208R. Oracle indicates the coe\ufb03cients achieved by regressing y on zc directly. Algorithm (\u03a6dg)0 (\u03a6dg)1 (i.e., Zdg weight) (i.e., Zspu weight) ERM 0.29 0.71 IRM 0.28 0.71 TCRI 1.01 0.06 Oracle 1.04 0.00 We observe 2 domains with parameters \u03c3e=0 = 0.1, \u03c3e=1 = 0.2 with \u03c3y = 0.25, 5000 samples, and linear feature extractors and predictors. We use partial covariance as our conditional independence penalty LT CRI. Table 2 shows the learned value of \u03a6dg, where \u2018Oracle\u2019 indicates the true coe\ufb03cients obtained by regressing Y on domain-general Zdg directly. The ideal \u03a6dg recovers Zdg and puts zero weight on Zspu. Now, we evaluate the e\ufb03cacy of our proposed objective on non-simulated datasets. 5.1 Semisynthetic and Real-World Datasets Algorithms: We compare our method to baselines corresponding to DAG properties: Empirical Risk Minimization (ERM, [Vapnik, 1991]), Invariant Risk Minimization (IRM [Arjovsky et al., 2019]), Variance Risk Extrapolation (V-REx, [Krueger et al., 2021]), [Li et al., 2018a]), Group Distributionally Robust Optimization (GroupDRO), [Sagawa et al., 2019]), and Information Bottleneck methods (IB_ERM/IB_IRM, [Ahuja et al., 2021]). Additional baseline methods are provided in the Appendix A. We evaluate our proposed method on the semisynthetic ColoredMNIST [Arjovsky et al., 2019] and realworld Terra Incognita dataset [Beery et al., 2018]. Given observed domains Etr = {e : 1, 2, . . . , Etr}, we train on Etr \\ ei and evaluate the model on the unseen domain ei, for each e \u2208Etr. ColoredMNIST: The ColoredMNIST dataset [Arjovsky et al., 2019] is composed of 7000 (2 \u00d7 28 \u00d7 28, 1) images of a hand-written digit and binary-label pairs. There are three domains with di\ufb00erent correlations between image color and label, i.e., the image color is spuriously related to the label by assigning a color to 8 each of the two classes (0: digits 0-4, 1: digits 5-9). The color is then \ufb02ipped with probabilities {0.1, 0.2, 0.9} to create three domains, making the color-label relationship domain-speci\ufb01c because it changes across domains. There is also label \ufb02ip noise of 0.25, so we expect that the best accuracy a domain-general model can achieve is 75%, while a non-domain general model can achieve higher. In this dataset, Zdg corresponds to the original image, Zspu the color, e the label-color correlation, Y the image label, and X the observed colored image. This DAG follows the generative process of Figure 2a [Arjovsky et al., 2019]. Spurrious PACS: Variables. X: images, Y : non-urban (elephant, gira\ufb00e, horse) vs. urban (dog, guitar, house, person). Domains. {{cartoon, art painting}, {art painting, cartoon}, {photo}} [Li et al., 2017]. The photo domain is the same as in the original dataset. In the {cartoon, art painting} domain, urban examples are selected from the original cartoon domain, while non-urban examples are selected from the original art painting domain. In the {art painting, cartoon} domain, urban examples are selected from the original art painting domain, while non-urban examples are selected from the original cartoon domain. This sampling encourages the model to use spurious correlations (domain-related information) to predict the labels; however, since these relationships are \ufb02ipped between domains {{cartoon, art painting} and {art painting, cartoon}, these predictions will be wrong when generalized to other domains. Terra Incognita: The Terra Incognita dataset contains subsets of the Caltech Camera Traps dataset [Beery et al., 2018] de\ufb01ned by [Gulrajani and Lopez-Paz, 2020]. There are four domains representing di\ufb00erent locations {L100, L38, L43, L46} of cameras in the American Southwest. There are 9 species of wild animals {bird, bobcat, cat, coyote, dog, empty, opossum, rabbit, raccoon, squirrel} and a \u2018no-animal\u2019 class to be predicted. Like Ahuja et al. [2021], we classify this dataset as following the generative process in Figure 2c, the Fully Informative Invariant Features (FIIF) setting. Additional details on model architecture, training, and hyperparameters are detailed in Appendix 5. Model Selection. The standard approach for model selection is a training-domain hold-out validation set accuracy. We \ufb01nd that model selection across hyperparameters using this held-out training domain validation accuracy often returns non-domain-general models in the \u2018hard\u2019 cases. One advantage of our model is that we can do model selection based on the TCRI condition (conditional independence between the two representations) on held-out training domain validation examples to mitigate this challenge. In the easy case, we expect the empirical risk minimizer to be domain-general, so selecting the best-performing trainingdomain model is sound \u2013 we additionally do this for all baselines (see Appendix A.1 for further discussion). We \ufb01nd that, empirically, this heuristic works in the examples we study in this work. Nevertheless, model selection under distribution shift remains a signi\ufb01cant bottleneck for domain generalization. 5.2 Results and Discussion Table 3: E\\etest \u2192etest (model selection on held-out source domains validation set). The \u2018mean\u2019 column indicates the average generalization accuracy over all three domains as the etest distinctly; the \u2018min\u2019 column indicates the worst generalization accuracy. ColoredMNIST Spurious PACS Terra Incognita Algorithm average worst-case average worst-case average worst-case ERM 51.6 \u00b1 0.1 10.0 \u00b1 0.1 57.2 \u00b1 0.7 31.2 \u00b1 1.3 44.2 \u00b1 1.8 35.1 \u00b1 2.8 IRM 51.7 \u00b1 0.1 9.9 \u00b1 0.1 54.7 \u00b1 0.8 30.3 \u00b1 0.3 38.9 \u00b1 3.7 32.6 \u00b1 4.7 GroupDRO 52.0 \u00b1 0.1 9.9 \u00b1 0.1 58.5 \u00b1 0.4 37.7 \u00b1 0.7 47.8 \u00b1 0.9 39.9 \u00b1 0.7 VREx 51.7 \u00b1 0.2 10.2 \u00b1 0.0 58.8 \u00b1 0.4 37.5 \u00b1 1.1 45.1 \u00b1 0.4 38.1 \u00b1 1.3 IB_ERM 51.5 \u00b1 0.2 10.0 \u00b1 0.1 56.3 \u00b1 1.1 35.5 \u00b1 0.4 46.0 \u00b1 1.4 39.3 \u00b1 1.1 IB_IRM 51.7 \u00b1 0.0 9.9 \u00b1 0.0 55.9 \u00b1 1.2 33.8 \u00b1 2.2 37.0 \u00b1 2.8 29.6 \u00b1 4.1 TCRI_HSIC 59.6 \u00b1 1.8 45.1 \u00b1 6.7 63.4 \u00b1 0.2 62.3 \u00b1 0.2 49.2 \u00b1 0.3 40.4 \u00b1 1.6 9 Table 4: Total Information Criterion: Domain General (DG) and Domain Speci\ufb01c (DS) Accuracies. The DG classi\ufb01er is shared across all training domains, and the DS classi\ufb01ers are trained on each domain. The \ufb01rst row indicates the domain from which the held-out examples are sampled, and the second indicates which domain-speci\ufb01c predictor is used. {+90%, +80%, -90%} indicate domains \u2013 {0.1, 0.2, 0.9} digit label and color correlation, respectively. DG Classi\ufb01er DS Classi\ufb01er on +90 DS Classi\ufb01er on +80 DS Classi\ufb01er on -90 Test Domain No DS clf. +90% +80% -90% +90% +80% -90% +90% +80% -90% +90% +80% -90% +90% 68.7 69.0 68.5 90.1 9.8 79.9 20.1 10.4 89.9 +80% 63.1 62.4 64.4 76.3 24.3 70.0 30.4 24.5 76.3 -90% 65.6 63.4 44.1 75.3 75.3 69.2 69.5 29.3 26.0 Table 5: TIC ablation for ColoredMNIST. Algorithm average worst-case TCRI_HSIC (No TIC) 51.8 \u00b1 5.9 27.7 \u00b1 8.9 TCRI_HSIC 59.6 \u00b1 1.8 45.1 \u00b1 6.7 Worst-domain Accuracy. A critical implication of domain generality is stability \u2013 robustness in worstdomain performance up to domain di\ufb03culty. While average accuracy across domains provides some insight into an algorithm\u2019s ability to generalize to new domains, the average hides the variance of performance across domains. Average improvement can be increased while the worst-domain accuracy stays the same or decreases, leading to incorrect conclusions about domain generalization. Additionally, in real-world challenges such as algorithmic fairness where worst-group performance is considered, some metrics or fairness are analogous to achieving domain generalization [Creager et al., 2021]. Results. TCRI achieves the highest average and worst-case accuracy across all baselines (Table 3). We \ufb01nd no method recovers the exact domain-general model\u2019s accuracy of 75%. However, TCRI achieves over 7% increase in both average accuracy and worst-case accuracy. Appendix A.2 shows transfer accuracies with cross-validation on held-out test domain examples (oracle) and TCRI again outperforms all baselines, achieving an average accuracy of 70.0% \u00b1 0.4% and a worst-case accuracy of 65.7% \u00b1 1.5, showing that regularizing for TCRI gives very close to optimal domain-general solutions. Similarly, for the Spurious-PACS dataset, we observe that TCRI outperforms the baselines. TRCI achieves the highest average accuracy of 63.4% \u00b1 0.2 and worst-case accuracy of 62.3% \u00b1 0.1 with the next best, VREx, achieving 58.8 \u00b1 1.0 and 33.8 \u00b1 0.0, respectively. Additionally, for the Terra-Incognita dataset, TCRI achieves the highest average and worst-case accuracies of 49.2% \u00b1 0.3% and 40.4% \u00b1 1.6% with the next best, GroupDRO, achieving 47.8 \u00b1 0.9 and 39.9 \u00b1 0.7, respectively. Appendix A.2 shows transfer accuracies with cross-validation held-out target domain examples (oracle) where we observe that TCRI also obtains the highest average and worst-case accuracy for Spurrious-PACS and Terra Incognita. Overall, regularizing for TCRI gives the most domain-general solutions compared to our baselines, achieving the highest worst-case accuracy on all benchmarks. Additionally, TCRI achieves the highest average accuracy on ColoredMNIST and Spurious-PAC and the second highest on Terra Incognita, where we expect the empirical risk minimizer to be domain-general. Additional results are provided in the Appendix A. The E\ufb00ect of the Total Information Criterion. Without the TIC loss term, our proposed method is less e\ufb00ective. Table 5 shows that for Colored MNIST, the hardest \u2018hard\u2019 case we encounter, removing the TIC criteria, performs worse in average and worst case accuracy, dropping over 8% and 18, respectively. Separation of Domain General and Domain Speci\ufb01c Features . In the case of Colored MNIST, we can reason about the extent of feature disentanglement from the accuracies achieved by the domain-general and domain-speci\ufb01c predictors. Table 4 shows how much each component of \u03a6, \u03a6dg and \u03a6spu, behaves as 10 expected. For each domain, we observe that the domain-speci\ufb01c predictors\u2019 accuracies follow the same trend as the color-label correlation, indicating that they capture the color-label relationship. The domain-general predictor, however, does not follow such a trend, indicating that it is not using color as the predictor. For example, when evaluating the domain-speci\ufb01c predictors from the +90% test domain experiment (row +90%) on held-out examples from the +80% training domain (column \"DS Classi\ufb01er on +80%\"), we \ufb01nd that the +80% domain-speci\ufb01c predictor achieves an accuracy of nearly 79.9% \u2013 exactly what one would expect from a predictor that uses a color correlation with the same direction \u2018+\u2019. Conversely, the -90% predictor achieves an accuracy of 20.1%, exactly what one would expect from a predictor that uses a color correlation with the opposite direction \u2018-\u2019. The -90% domain has the opposite label-color pairing, so a color-based classi\ufb01er will give the opposite label in any \u2018+\u2019 domain. Another advantage of this method, exempli\ufb01ed by Table 4, is that if one believes a particular domain is close to one of the training domains, one can opt to use the close domain\u2019s domain-speci\ufb01c predictor and leverage spurious information to improve performance. On Benchmarking Domain Generalization. Previous work on benchmarking domain generalization showed that across standard benchmarks, the domain-unaware empirical risk minimizer outperforms or achieves equivalent performance to the state-of-the-art domain generalization methods [Gulrajani and Lopez-Paz, 2020]. Additionally, Rosenfeld et al. [2022] gives results that show weak conditions that de\ufb01ne regimes where the empirical risk minimizer across domains is optimal in both average and worst-case accuracy. Consequently, to accurately evaluate our work and baselines, we focus on settings where it is clear that (i) the empirical risk minimizer fails, (ii) spurious features, as we have de\ufb01ned them, do not generalize across the observed domains, and (iii) there is room for improvement via better domain-general predictions. We discuss this point further in the Appendix A.1. Oracle Transfer Accuracies. While model selection is an integral part of the machine learning development cycle, it remains a non-trivial challenge when there is a distribution shift. While we have proposed a selection process tailored to our method that can be generalized to other methods with an assumed causal graph, we acknowledge that model selection under distribution shift is still an important open problem. Consequently, we disentangle this challenge from the learning problem and evaluate an algorithm\u2019s capacity to give domain-general solutions independently of model selection. We report experimental reports using heldout test-set examples for model selection in Appendix A Table 6. We \ufb01nd that our method, TCRI_HSIC, also outperforms baselines in this setting. 6 Conclusion and Future Work We reduce the gap in learning domain-general predictors by leveraging conditional independence properties implied by generative processes to identify domain-general mechanisms. We do this without independent observations of domain-general and spurious mechanisms and show that our framework outperforms other state-of-the-art domain-generalization algorithms on real-world datasets in average and worst-case across domains. Future work includes further improvements to the framework to fully recover the strict set of domain-general mechanisms and model selection strategies that preserve desired domain-general properties. Acknowledgements OS was partially supported by the UIUC Beckman Institute Graduate Research Fellowship, NSF-NRT 1735252. This work is partially supported by the NSF III 2046795, IIS 1909577, CCF 1934986, NIH 1R01MH116226-01A, NIFA award 2020-67021-32799, the Alfred P. Sloan Foundation, and Google Inc."
17
+ }
intro_8K/test_introduction_short_2404.16283v1.json ADDED
@@ -0,0 +1,17 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16283v1",
3
+ "title": "Andes: Defining and Enhancing Quality-of-Experience in LLM-Based Text Streaming Services",
4
+ "abstract": "The advent of large language models (LLMs) has transformed text-based\nservices, enabling capabilities ranging from real-time translation to AI-driven\nchatbots. However, existing serving systems primarily focus on optimizing\nserver-side aggregate metrics like token generation throughput, ignoring\nindividual user experience with streamed text. As a result, under high and/or\nbursty load, a significant number of users can receive unfavorable service\nquality or poor Quality-of-Experience (QoE). In this paper, we first formally\ndefine QoE of text streaming services, where text is delivered incrementally\nand interactively to users, by considering the end-to-end token delivery\nprocess throughout the entire interaction with the user. Thereafter, we propose\nAndes, a QoE-aware serving system that enhances user experience for LLM-enabled\ntext streaming services. At its core, Andes strategically allocates contended\nGPU resources among multiple requests over time to optimize their QoE. Our\nevaluations demonstrate that, compared to the state-of-the-art LLM serving\nsystems like vLLM, Andes improves the average QoE by up to 3.2$\\times$ under\nhigh request rate, or alternatively, it attains up to 1.6$\\times$ higher\nrequest rate while preserving high QoE.",
5
+ "authors": "Jiachen Liu, Zhiyu Wu, Jae-Won Chung, Fan Lai, Myungjin Lee, Mosharaf Chowdhury",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "cs.DC",
9
+ "cats": [
10
+ "cs.DC",
11
+ "cs.LG"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "LLM Fairness",
15
+ "gt": "Large language Models (LLMs) [4, 9, 21, 46, 51] have revo- lutionized natural language processing. By generating con- textually relevant responses, they power a wide range of applications, more than 60% of which are centered around conversational interactions like chatbots, virtual assistants, language translation, and customer support systems [15]. In particular, the meteoric rise of ChatGPT [35] spearheaded the growth of conversational AI services by attracting over 100 million users in just two months after its launch [29]. Conversational AI services, by nature, provide interactive conversations between the user and an AI agent. At its core, an LLM generates tokens one by one1 and streams them back to the user to be digested, be it as written text or speech. As 1LLMs process and generate text in units of tokens. For instance, the word \u201cstreaming\u201d may be broken down into two tokens: \u201cstream\u201d and \u201cing.\u201d Req 2 Req 1 Request 1 and 2 arrive Quality of Experience is a different story. Req 1 Request 1 and 2 arrive Quality of Experience is a different story. Throughput is not all you need. Throughput is not all you need. User 1 User 2 User 1 User 2 Req 2 Req 1 Req 2 Server Server TTFT TTFT (a) Existing LLM serving systems are oblivious of QoE. User 2 experiences a long wait time (TTFT) and therefore lower QoE. Req 2 Req 1 Request 1 and 2 arrive Quality of Experience is a different story. Req 1 Request 1 and 2 arrive Quality of Experience is a different story. Throughput is not all you need. Throughput is not all you need. User 1 User 2 User 1 User 2 Req 2 Req 1 Req 2 Server Server TTFT TTFT (b) A QoE-aware LLM serving system can schedule token gen- eration over time to enhance QoE. User 2\u2019s TTFT is drastically improved without affecting User 1\u2019s token delivery timeline. Figure 1. Server-side token generation timeline and user- side response digestion progress. Even if the server generates tokens very fast, users cannot digest them at such a pace. this token-by-token streaming nature is akin to the frame- by-frame streaming nature of video streaming services, we dub such services text streaming services. In this paper, we seek to characterize and enhance the Quality-of-Experience (QoE) of text streaming services (\u00a72.2). We realize that user interaction with LLM responses happens at moments when each new token is delivered (e.g., displayed or spoken) to the user over time. Thus, we define token deliv- ery timeline (TDT), a series of timestamps when each token was delivered to a user, to capture the user\u2019s interaction with the service for a single request. The ideal TDT a user expects from a text streaming service can vary significantly based on the type of the service and user demographics. For instance, a chat service that uses a text-to-speech model to read out the LLM\u2019s response to users (e.g., voice chat in ChatGPT, real-time speech translation) could be less stringent in terms of its minimum token delivery speed (TDS) compared to a chat service in raw text, because a user\u2019s speaking speed is often slower than their reading speed, but it may require smaller time to first token (TTFT) to better resemble real-life arXiv:2404.16283v1 [cs.DC] 25 Apr 2024 verbal conversations. The minimum TDS and TTFT together define the expected TDT of a request. Unfortunately, existing LLM serving systems [20, 25, 30, 50] are designed to optimize aggregated server-side perfor- mance metrics such as token generation throughput [25, 50], which are not necessarily aligned with optimizing the QoE of text streaming services (\u00a72.3). More importantly, by re- aligning the objectives of LLM serving systems towards QoE optimization, a QoE-aware serving system can utilize the same resources more effectively to manage a greater num- ber of concurrent requests while ensuring high QoE, thus reducing the cost per request. To illustrate, we compare ex- isting serving systems with a QoE-aware one, each with a serving capacity of 1, in Figure 1. In Figure 1a, due to the commonly adopted first-come-first-serve (FCFS) scheduling policy [25, 50, 52], User 2 experiences a long initial waiting time (TTFT). In contrast, in Figure 1b, a QoE-aware serving system schedules token generation in a manner that is aware of each user\u2019s reading speed, leading to a shorter wait time for User 2 without affecting User 1\u2019s interaction with the service. Although the average server-side token generation throughput or latency are the same for the two systems, over- all user experience is improved in the QoE-aware system. We attribute this to the na\u00efve FCFS scheduling policy in existing serving systems, which fails to account for the QoE requirements of individual requests and cannot efficiently utilize resources (\u00a72.4). Consequently, some users may expe- rience extended waiting time during their interaction with the service, especially when the system is under higher re- quest rate or is serving requests with longer context lengths. To preserve good user experience, the service provider must provision more compute resources proportional to the excess request load, leading to higher operational costs. Designing a QoE-aware LLM serving system, however, is challenging from both conceptual and practical perspectives. Defining the QoE metric to capture the user experience in text streaming services is non-trivial. It should encapsulate the continuous interaction process over time, accounting for factors like TTFT and TDS. Designing a QoE-aware serving system faces several systems challenges as well: (a) Dynamic and unpredictable resource demand: Re- quests arrive dynamically with varying expected TDT and prompt length and the number of output tokens is not known a priori, making it challenging to implement a one-size-fits-all scheduling strategy such as round-robin. (b) Constrained resource supply: The system has limited GPU memory and computation resources, restricting the number of concurrent in-flight requests. To meet the QoE requirements of individual requests, the system needs to make runtime decisions to allocate resources among requests, which may incur non-negligible overhead. To this end, we first propose a mathematical definition of QoE for text streaming services (\u00a73.1). Our QoE metric Age Group Reading Speed 18-24 (28.0%) 236 WPM 25-44 (51.9%) 200 WPM 45-54 (11.2%) 192 WPM 55-64 (5.6%) 185 WPM 65+ (3.3%) 175 WPM Table 1. Reading speed (Word Per Minute) by age group [10, 29]. Language Speaking Speed English (79.3%) 150 WPM Chinese (7.0%) 158 WPM Korean (6.9%) 150 WPM French (3.6%) 195 WPM Spanish (3.2%) 218 WPM Table 2. Speaking speed (Word Per Minute) by lan- guage [8, 29, 36]. compares the actual TDT of a request with its expected TDT, reflecting the user\u2019s experience throughout their entire in- teraction with the service. Then, we propose Andes, an LLM serving system that optimizes the overall QoE of text stream- ing services (\u00a74). Andes employs a dynamic priority-based preemptive scheduler that operates at the granularity of to- kens. Andes strategically allocates system resources to more urgent requests and preempts requests that have already received sufficient service, all to enhance QoE. By satisfying more requests with high QoE using the same amount of re- source, Andes eliminates the need for additional resource provisioning, thus reducing LLM serving cost. Andes also co- designs a client-side token buffer that temporarily withholds excess tokens and displays them to the user at their expected pace (\u00a75). This design ensures users experience smooth token delivery, oblivious to the intricacies of server-side scheduling or network fluctuations. We evaluate Andes using the OPT [51] family of mod- els, ranging from 13B to 175B parameters (\u00a76). Compared to vLLM [25], we find that Andes can manage 1.6\u00d7 higher request rate with high QoE, or alternatively, improve the average QoE by 3.2\u00d7 given the same amount of resource. Overall, we make the following contributions in this paper: 1. We identify an emerging category of LLM-based ap- plications (text streaming services) and define a QoE metric for them. 2. We propose Andes, a QoE-aware LLM serving system designed to optimize QoE for text streaming services. 3. We evaluate Andes under different workloads and se- tups and show that Andes significantly improves QoE with negligible system overhead.",
16
+ "main_content": "2.1 LLM Serving Systems LLM text generation using Transformer-based [47] models is characterized by autoregressive token generation and significant memory usage. First, the LLM generates tokens 2 Time #Tokens Req 1 Req 2 Req 3 Req 4 Expected TDT Figure 2. Four requests arrive at \ud835\udc61= 0. Requests 1 and 2 are equally satisfying. Requests 3 and 4 are frustrating, with request 4 being more so as it delivers fewer tokens earlier on, despite having the same TTFT and average token latency. sequentially, where the next token is conditioned on the previous tokens. Second, the LLM requires a large amount of memory to store intermediate data for each token in its input prompt and output response, known as KV cache [47]. As the number of tokens generated increases, so does the KV cache size. For instance, GPT-3 175B [9] requires 7 GB of GPU memory for a 1000-token request, limiting the number of requests that can be handled concurrently. 2.2 User Experience of Text Streaming Services Compared to traditional services that generate entire responses at once, text streaming services allow the user to start digesting the response as early as possible. The user experience includes two phases: Wait Phase. Users wait for the first token to arrive, known as the time-to-first-token (TTFT). For web applications, studies indicate that users expect an initial response to arrive within one second, with a significant 32% dropout rate if the response takes longer than three seconds [6]. Digest Phase. Following the first token, users enter the digest phase, which may last for tens of seconds or more [50], Hence, it is a common practice to stream tokens to the user on the fly so that they can start digesting the response as early as possible. The expected rate of token delivery, i.e., the Token Delivery Speed (TDS), depends on factors such as application type and user demographics. For example, reading speeds, measured in words per minute (WPM), differ across age groups (Table 1), while speaking speeds vary among languages (Table 2). By translating words to tokens using the average word-to-token ratio [38], we can estimate the average reading speed to 4.8 tokens/s and average speaking speed to 3.3 tokens/s. Intuition Behind QoE of Text Streaming Services. The expected TTFT and the expected TDS together define the expected token delivery timeline (TDT), represented by the black line in Figure 2. Similar to QoE in video streaming, a desired QoE metric should capture the gap between the actual TDT and the expected TDT. Intuitively, users are satisfied when the actual TDT is above the expected TDT, otherwise, they prefer to receive more tokens earlier on, as illustrated in 2 4 Request rate (req/s) 10 0 10 1 10 2 TTFT (s) Expected TTFT QoE-unaware QoE-aware (a) 90\ud835\udc61\u210e-p TTFT increases dramatically as the request rate surpasses the server\u2019s capacity. 2 3 4 5 Request rate (req/s) 0 5 10 TDS (tokens/s) Reading speed Speaking speed QoE-unaware QoE-aware (b) Token generation speed is much faster than the userexpected speed. Figure 3. System performance under different request rates. Figure 2. Therefore, the QoE should comprehensively measure the token delivery timeline throughout the entire user interaction, going beyond an aggregated number like TTFT or average token latency. We formally define such a QoE metric in Section 3.1. 2.3 Problems and Opportunities Existing LLM serving systems have primarily focused on optimizing aggregated server-side metrics, and often employ a first-come-first-serve (FCFS) scheduling approach without considering the user experience. In our experiment with ShareGPT [45] on OPT 66B [51] with 4 A100 GPUs, we notice that especially under high request rate, two issues arise: (1) certain users may encounter extended TTFT; (2) conversely, other users might receive tokens at a pace surpassing their digestion ability. Prolonged TTFT. As depicted in Figure 3a, the 90\ud835\udc61\u210epercentile TTFT increases dramatically as the server faces more bursty request rates, resulting in a longer queuing delay and degraded user experience. To accommodate such bursty request volumes, service providers often have to over-provision resources, such as by adding more GPUs, which significantly increases operational costs. Excessively High Token Generation Speed. Conversely, as shown in Figure 3b, we report the token generation speed under different request rates. The observed server-side token generation speed (\u22656.6 tokens/s) is much faster than the userexpected speed (3.3 or 4.8 tokens/s), as referenced in Table 1 and Table 2. This discrepancy indicates that the server often generates tokens faster than the user can consume them. While this might seem efficient from the server\u2019s perspective, it may overwhelm this user while starving others. Opportunities. We observe that there is an opportunity to optimize user experience by balancing prolonged TTFT and excessively fast token generation speed. By temporarily pausing the response generation for requests with already sufficient tokens generated, we can spare the limited GPU resources to other pending requests. The ratio between the expected token generation speed \ud835\udc47\ud835\udc37\ud835\udc46expected and the actual token generation speed \ud835\udc47\ud835\udc37\ud835\udc46actual 3 Response length Prompt length Memory usage = Request Spec Request ID 1 2 3 4 Prompt length 90 90 180 90 Response length 10 10 10 20 Expected TTFT (s) 1 1 2 2 Expected TDS 1.25 1.25 5 5 (tokens/s) Server memory capacity 1 2 3 4 1,2,3,4 FCFS 1 2 3 4 1 2 3 4 1,2,3,4 Round Robin 1 2 3 4 1 2 4 1,2,3,4 QoE-aware 10 20 #Token 0 2 4 6 8 Time 10 20 #Token 0 2 4 6 8 Time 0 2 4 6 8 Time Req 1 Req 2 Req 3 Req 4 Expected TDT Figure 4. Suboptimal user experience from QoE-unaware scheduling policies. In this illustrative toy example, we consider a server that can serve at most 200 tokens simultaneously due to memory constraints. We consider four requests with different prompt lengths, response lengths, as well as different expected TTFT and TDS values, arriving at time 0. The figure shows the serving order (first row) and the cumulative tokens delivered over time for each request (second and third rows). Colored lines represent actual TDT, while the black line indicates the expected TDT. An optimal QoE is achieved when the actual token delivery curve is completely left and/or above the expected token delivery curve. determines the slack for which a request can be preempted, allowing the system to accommodate more concurrent requests. Thus, with appropriate request preemption and restarting, we can serve \ud835\udc47\ud835\udc37\ud835\udc46actual \ud835\udc47\ud835\udc37\ud835\udc46expected \u00d7 concurrent requests than without request preemption, significantly improving user experience. In the example of text-based and voice-based chat services in Figure 3b, we could have increased the serving capacity by 6.6 4.8 = 1.38\u00d7 and 6.6 3.3 = 2\u00d7, respectively. Our evaluation shows that Andes can nearly achieve this theoretical improvement in practice. 2.4 Limitation of Existing Solutions Let us consider a toy example in Figure 4 to illustrate the limitations of existing QoE-unaware scheduling (FCFS used by vLLM [25] and Round Robin). Under FCFS scheduling, while requests 1, 2, and 3 are served immediately, request 4 suffers from longer TTFT due to queuing delays. Round Robin partially mitigates queuing delay using fair-sharing but still fails to align the token delivery in the later stage of the interaction, leading to suboptimal QoE. In contrast, the QoE-aware policy manages to meet the QoE requirements for all requests by prioritizing requests based on their QoE requirements and resource demand. It prioritizes requests with stringent TTFT requirements. Meanwhile, it monitors the resource demand of each request to prevent small requests from being starved of necessary resources. As the served requests accumulate enough tokens for the user to digest, the system upgrades the priority of request 3, which then requires more urgent servicing, and serves it. Finally, the system brings back requests 1, 2, and 4 to continue supplying tokens. In sum, when the server load is below its capacity, all requests can be served promptly and achieve perfect QoE without smart request scheduling. However, when the server is operating at capacity due to unpredictable higher request loads, QoE-aware scheduling can significantly improve the user experience without over-provisioning resources. 3 Overview In this section, we first introduce a formal definition of Quality-of-Experience (QoE) for text streaming services (\u00a73.1). Then, we provide an overview of Andes, an LLM serving system that optimizes QoE of text streaming services (\u00a73.2). 3.1 Quality-of-Experience (QoE) in Text Streaming Text streaming services allow the developer to specify the expected token delivery timeline (TDT) in a request. We derive the QoE of a request by comparing its actual TDT with the expected TDT, considering the entire token delivery process. Informed by the distinctions between superior and inferior service depicted in Figure 2, the formulation of our QoE metric is guided by a set of principles that reflect user expectations and experiences throughout their interaction: 1. Perfect Satisfaction: Users are satisfied when the actual token delivery perfectly aligns with or exceeds the expected delivery, resulting in maximum QoE (QoE = 1). We normalize QoE \u2208[0, 1] for generality across applications. 2. Excess Token Delivery: At any given time, delivering tokens faster than the user\u2019s digest speed does not add 4 ) Perfect QoE (d) Pause in the middle Expected TDT Server generates User digests Sexpected Sactual Time #Tokens (a) TTFT missed. Time #Tokens (b) TDS missed. Time #Tokens (c) Perfect QoE. Time #Tokens (d) Pause in the middle. Figure 5. QoE example. The slope of the actual token delivery curve on the user side is capped by the expected TDS. value to the user experience, as the user cannot digest all tokens at once. So the QoE remains unchanged. 3. Early Token Delivery: Users prefer receiving more tokens earlier to start processing the response sooner. In scenarios where perfect satisfaction is not achieved, the QoE is higher for scenarios where more tokens are delivered earlier. For example, the QoE is worse for a longer TTFT with the same TDS, and similarly, the QoE is worse for a slower TDS with the same TTFT. Following these principles, we formalize the QoE metric by comparing two curves: (a) The expected token delivery curve \ud835\udc47(\ud835\udc61) that is defined by expected TTFT and TDS. Specifically, \ud835\udc47(\ud835\udc61) = \ud835\udc47\ud835\udc37\ud835\udc46expected\u00b7 (\ud835\udc61\u2212\ud835\udc47\ud835\udc47\ud835\udc39\ud835\udc47expected) represents the ideal timeline at which tokens should be delivered to the user (black lines in Figure 5). (b) The actual token delivery curve \ud835\udc34(\ud835\udc61) reflects the timeline of how tokens are digested by the user over time (black dotted lines in Figure 5), with its slope at any time capped by the expected TDS. To quantify the QoE of a request with response length \ud835\udc59, we measure the area under both curves up to the actual time to the last token (TTLT). We then define QoE as the ratio of the actual and expected areas, as shown in Figure 5: \ud835\udc44\ud835\udc5c\ud835\udc38= \ud835\udc46actual \ud835\udc46expected = \u222b\ud835\udc47\ud835\udc47\ud835\udc3f\ud835\udc47 0 \ud835\udc34(\ud835\udc61)\ud835\udc51\ud835\udc61 \u222b\ud835\udc47\ud835\udc47\ud835\udc3f\ud835\udc47 0 min(\ud835\udc47(\ud835\udc61),\ud835\udc59)\ud835\udc51\ud835\udc61 (1) This formulation focuses on the relative QoE relationship between services, but Andes allows the service provider to prioritize specific aspects. For example, to stress a shorter TTFT, the provider can add a penalizing term on the defined QoE as \ud835\udefc\ud835\udc47\ud835\udc47\ud835\udc39\ud835\udc47actual\u2212\ud835\udc47\ud835\udc47\ud835\udc39\ud835\udc47expected \u00b7 \ud835\udc46actual \ud835\udc46expected , where \ud835\udefc\u2208[0, 1]. In this paper, we will use the QoE definition in Equation 1 by default. Running Waiting Queue \u2026 \u2026 1 Request Client Server 4 5 Buffer Request Priority GPU Admit Evict Submit Request {Prompt: \u2019What is the probability that this paper will be accepted?\u2019, TTFT: 1s, TDS: 5 tokens/s} Token Context Length QoE Tracker 2 3 3 Worker 0 Worker 1 Worker W-1 Request Metadata Receive Token Figure 6. Andes Overview. 3.2 Andes Overview The workflow of Andes is shown in Figure 6. 1 The interaction begins with the user submitting a request to the server. The request comes with its QoE requirement, which is prespecified by the application developer. 2 Upon receiving the request, the QoE tracker assigns a scheduling priority and puts it in the waiting queue. 3 At each scheduling iteration, the QoE tracker refreshes the priorities of all requests, both in the waiting and running queues. Then Andes reschedules the requests based on their priorities by admitting high-priority waiting requests to GPU workers and evicting low-priority running requests back to the server. For these evicted requests, their states (e.g., KV cache) are stored in the request metadata store on CPU RAM for future retrieval. 4 During each inference iteration, each running request generates one token, which is then sent to the client. 5 As tokens are delivered to the client, a token buffer is responsible for storing excess tokens and displaying them at the expected speed, ensuring smooth token delivery. 4 QoE-Aware Scheduling In this section, we describe how Andes schedules token generation across multiple requests to maximize the total QoE. Section 4.1 formulates the scheduling problem as a Knapsack variant, and Section 4.2 introduces an efficient solution. 4.1 Problem Formulation The core of Andes is an online preemptive scheduling algorithm for token generation, which requires designing three elements: (1) How often to make scheduling decisions (time quantum), (2) which requests to serve (scheduling objective), and (3) how many requests to serve at a time (batch size). Time Quantum. At the beginning of each time quantum, the scheduler inspects both queued and running requests, and determines which ones to admit and preempt. Following the 5 continuous batching used in existing systems [25, 50], Andes invokes its scheduler at the beginning of each iteration. Scheduling Objective. Just like any other online serving system, it is impractical to perfectly plan execution into the future. Therefore, Andes serves the set of requests that maximizes the scheduling objective in the upcoming time frame of length \u0394\ud835\udc61. The parameter \u0394\ud835\udc61cannot be too short, as scheduling decisions will become shortsighted, or too long, as the actual system state would deviate too far from estimations. We find that setting it as the average request completion time is reasonable, and show in Section 6.5 that Andes is not sensitive to the setting of \u0394\ud835\udc61. Andes supports various scheduling objectives including max average QoE and max-min QoE by designing its scheduling objective function appropriately. For the sake of presentation, we will focus on maximizing average QoE here (See Appendix A for alternative objectives). The objective function for request \ud835\udc56is defined as: \ud835\udc44serve,\ud835\udc56\u2212\ud835\udc44wait,\ud835\udc56 (2) where \ud835\udc44serve,\ud835\udc56and \ud835\udc44wait,\ud835\udc56are the QoE of request \ud835\udc56after \u0394\ud835\udc61 if it is served and not served, respectively. In simple terms, Equation 2 is the amount of QoE gain when we decide to serve request \ud835\udc56compared to when it is not served, and we naturally want to serve more of the requests that give us large QoE gains when served. Batch Size. The number of requests picked to run in the upcoming quantum, or batch size, is limited by two factors. First, each token in a request\u2019s context (prompt plus all generated tokens) consumes one entry in the LLM serving system\u2019s KV cache [9], whose size is bounded by GPU memory. Thus, we have the following constraint: \ud835\udc41 \u2211\ufe01 \ud835\udc56=1 \ud835\udc59\ud835\udc56\ud835\udc65\ud835\udc56\u2264\ud835\udc40 (3) where there are \ud835\udc41requests in total (queued or running), \ud835\udc59\ud835\udc56 is request \ud835\udc56\u2019s context length, \ud835\udc65\ud835\udc56is an indicator variable that is 1 if request \ud835\udc56is served and 0 otherwise, and \ud835\udc40is the total number of tokens that can fit in GPU memory. Furthermore, Andes must take into account the latency to generate one token. That is, while a large batch size may increase server-side token generation throughput, the increase in the amount of compute will inflate the latency to generate one token from the perspective of each request, potentially hurting their QoE by delaying TTFT or failing to meet the expected TDS. On the other hand, a small batch size will be able to deliver tokens faster to each running request, but in turn more requests will not be served at all, again potentially hurting their QoE. Thus, the right intermediate batch size will have to be chosen in order to maximize average QoE. Knapsack Formulation. Putting these together, we observe that the problem setting resembles that of the classic knapsack problem [23]. The goal is to select items (requests) Time # Tokens Qserve(50) Qserve(30) Qserve(10) t Time # Tokens Qwait t Expected Actual Future Time # Tokens Qserve(50) Qserve(30) Qserve(10) t (a) \ud835\udc44serve, i(\ud835\udc35) Time # Tokens Qwait t (b) \ud835\udc44wait,\ud835\udc56 Figure 7. Visualization of \ud835\udc44serve, i(\ud835\udc35) and \ud835\udc44wait,\ud835\udc56. The former depends on batch size \ud835\udc35whereas the latter is a constant. With batch size 50, request \ud835\udc56no longer has perfect QoE. to put in a knapsack (GPU) so that total item value (QoE gain) is maximized and total weight (\ud835\udc59\ud835\udc56) does not exceed the knapsack\u2019s capacity (\ud835\udc40). However, our problem setting deviates from that of the classical knapsack because the value of each item depends on how many items there are in the knapsack. This is because, as noted above, the number of requests in the knapsack (batch size) affects token generation latency, which in turn means that \ud835\udc44serve,\ud835\udc56is actually a function of batch size \ud835\udc35.2 Figure 7 visualizes this. When \ud835\udc35is just 10 or 30, the request maintains perfect QoE by always running ahead. However, when \ud835\udc35is 50, the computation time of one iteration becomes longer and slows down token generation, degrading the request\u2019s QoE by failing to meet its TDS expectation. On the other hand, \ud835\udc44wait,\ud835\udc56does not depend on the batch size because it simply sits in the queue, waiting to be served. Thus, for a specific batch size \ud835\udc35, we would like to solve: max \ud835\udc65 \ud835\udc41 \u2211\ufe01 \ud835\udc56=1 \u0000\ud835\udc44serve,\ud835\udc56(\ud835\udc35) \u2212\ud835\udc44wait,\ud835\udc56 \u0001 \u00b7 \ud835\udc65\ud835\udc56 s.t. \ud835\udc65\ud835\udc56\u2208{0, 1}, \ud835\udc56\u22081, . . . , \ud835\udc41 \ud835\udc41 \u2211\ufe01 \ud835\udc56=1 \ud835\udc65\ud835\udc56= \ud835\udc35 \ud835\udc41 \u2211\ufe01 \ud835\udc56=1 \ud835\udc59\ud835\udc56\ud835\udc65\ud835\udc56\u2264\ud835\udc40 (4) where the optimization variable \ud835\udc65is a length \ud835\udc41array of \ud835\udc65\ud835\udc56s. The second constraint ensures that exactly \ud835\udc35many requests are chosen, whereas the final constraint ensures that the GPU memory capacity is not exceeded. Equation 4 should be solved for each possible batch size \ud835\udc35and the solution that yields the best objective value should be selected. 2More precisely, token generation latency is a function of batch size and the total number of tokens in the batch, but batch size and total number of tokens are nearly perfectly correlated, allowing us to eliminate the latter and only leave batch size. See Appendix B for more detailed analysis. 6 4.2 Solution Design In this section, we discuss the hardness of the problem formulated in the previous section in terms of algorithmic hardness and systems overhead. Then, we propose efficiency optimizations and a greedy algorithm that gives an approximate solution with low systems overhead. Algorithmic Hardness. As Andes must solve its optimization problem repetitively online to determine the set of requests to solve, an efficient algorithm is needed. However, Equation 4 is a variant of the knapsack problem called the Exact K-item Knapsack, which is weakly NP-Hard [23]. We give an optimal 3D dynamic programming solution to the problem that runs in pseudo-polynomial time \ud835\udc42(\ud835\udc40\u00b7 \ud835\udc412) in Appendix C. However, such an algorithm is also too slow in our case as the number of requests \ud835\udc41and the maximum number of tokens that can fit in memory \ud835\udc40are easily in the order of hundreds and thousands, respectively. Furthermore, we need to solve Equation 4 for each possible batch size \ud835\udc35\u2208[1, \ud835\udc41], which is clearly intractable. Preemption Overhead. When some requests that were running in the previous time quantum are not selected to run on the next, such requests are preempted. This is the core mechanism that reduces TTFT inflation from head-of-line blocking. For this, Andes supports two preemption mechanisms: swapping and recomputation. The former moves the request\u2019s KV cache entries between the GPU and CPU memory, whereas the latter drops all entries on preemption and recomputes them when the request restarts. If Andes runs out of host memory for storing KV cache, the preemption mechanism will automatically switch to recomputation. Preemption is not free \u2013 in general, the latency overhead of swapping is similar to one token generation iteration (See Appendix D for detailed benchmarking). Frequent preemption may slow down token generation and delay token delivery, potentially degrading request throughput and QoE. Therefore, our scheduling algorithm must make preemption decisions that strike a good balance between reaping QoE gains and causing slowdowns. Optimization #1: Selective Triggering. We observe that Equation 4 only needs to be solved when batch size is limited either by memory capacity or computation time. The former case can be detected easily by monitoring the KV cache occupancy and having a high-memory watermark (e.g., 90%). For the latter case, Andes monitors token generation latency and detects when it begins to exceed the most minimum token delivery speed requirement of the most stringent request. In all other cases, Andes does not trigger the optimization problem solver and serves every request. Optimization #2: Batch Size Search Space Pruning. In order to reduce the number of times Equation 4 needs to be solved, we reduce the search space of batch size \ud835\udc35from [1, \ud835\udc41] to [\ud835\udc35min, \ud835\udc35max]. First, there is no point in exploring very large Algorithm 1 Greedy packing algorithm for Equation 4 Inputs: Number of requests \ud835\udc41and KV cache capacity \ud835\udc40 Request context length array \ud835\udc59[\ud835\udc41] Request QoE gain array \ud835\udc5e[\ud835\udc41] Target batch size \ud835\udc35 Output: Solution array \ud835\udc65[\ud835\udc41] 1: Initialize priority array \ud835\udc5d[\ud835\udc41] with all zeros 2: for \ud835\udc56= 0 to \ud835\udc41\u22121 do 3: \ud835\udc5d[\ud835\udc56] = \ud835\udc5e[\ud835\udc56] \ud835\udc59[\ud835\udc56] \u22b2Priority of request \ud835\udc56 4: \ud835\udc40current = 0 5: \ud835\udc41current = 0 6: Initialize solution array \ud835\udc65[\ud835\udc41] with all zeros 7: for all \ud835\udc56\u2208[0, \ud835\udc41\u22121] in descending order of \ud835\udc5d[\ud835\udc56] do 8: if \ud835\udc40current + \ud835\udc59[\ud835\udc56] \u2264\ud835\udc40and \ud835\udc41current + 1 \u2264\ud835\udc35then 9: \ud835\udc65[\ud835\udc56] = 1 \u22b2Serve request \ud835\udc56 10: \ud835\udc40current = \ud835\udc40current + \ud835\udc59[\ud835\udc56] 11: \ud835\udc41current = \ud835\udc41current + 1 12: else 13: break 14: return \ud835\udc65 batch sizes that cannot be realized. Thus, \ud835\udc35max is determined by adding to the batch requests with the shortest context lengths until the total number of tokens in the batch reaches \ud835\udc40, at which point the batch size is the largest that can be realized. On the other hand, very small batch sizes that can generate tokens faster than the expected TDS of any request are also suboptimal. This is because going that fast does not increase the QoE of requests that are served, but on the other hand will serve a smaller number of requests, potentially degrading the QoE of requests that are left waiting. Thus, \ud835\udc35min is set as the largest batch size that generates tokens faster than the most stringent TDS among all requests. Optimization #3: Greedy Packing for Knapsack. A direct solution to the exact k-item knapsack problem in Equation 4 is computationally too heavy. Instead, Andes designs an efficient algorithm that computes each request\u2019s priority and greedily packs requests in that order. In designing the priority function, we have three goals: (a) Reflecting merit: Requests that yield high QoE gain and consume less resource should have high priority. (b) Preventing starvation: Requests should be automatically deprioritized as they receive service. (c) Reducing preemption: Selecting high priority requests should reduce the need for preemption. In light of these goals, request \ud835\udc56\u2019s priority is defined as: \ud835\udc44serve,\ud835\udc56(\ud835\udc35) \u2212\ud835\udc44wait,\ud835\udc56 \ud835\udc59\ud835\udc56 (5) This priority function meets our goals. (a) A higher QoE gain will increase the request\u2019s priority, but simultaneously discounted by the amount of GPU memory it will use. (b) As 7 a request receives service, its context length (\ud835\udc59\ud835\udc56) will increase, automatically deprioritizing itself. In contrast, requests will have higher QoE gain the more they wait, automatically boosting their priorities. (c) Finally, a request with long context length (\ud835\udc59\ud835\udc56) will be preempted first, freeing enough GPU memory to potentially bring in more than one waiting requests.3 This reduces the number of preemptions required to alleviate head-of-line blocking. The whole procedure is given in Algorithm 1. The greedy packing algorithm offers time complexity \ud835\udc42(\ud835\udc41log \ud835\udc41). We empirically show in Section 6.5 that this greedy solution can achieve performance comparable to the 3D DP algorithm while greatly reducing scheduling overhead. Optimization #4: Preemption Cap. We have discussed that preemption is not free and can potentially degrade QoE. However, we can empirically and theoretically show that Andes commonly does not result in excessive preemptions/thrashing that may cause average QoE to degrade. Empirically, Andes consistently maintains an average preemption frequency below 1 per request, even under a high server load (\u00a76.2.3). Theoretically, the number of preemptions needed to optimize the QoE of requests is contingent upon the excessive request load. Assume the serving system can handle \ud835\udc5f0 requests per second and the actual request rate is \ud835\udc58\u00b7 \ud835\udc5f0 requests per second, where \ud835\udc58\u22651. Thus, there would be (\ud835\udc58\u22121) \u00b7\ud835\udc5f0 requests whose QoE might be degraded due to the queuing delay. To mitigate this, we need roughly one preemption to accommodate each of these requests. Sometimes, a single preemption of a long request can allow multiple new requests to be served, which further reduces the number of preemptions needed. Therefore, the average preemption frequency needed is bounded by \ud835\udc58\u22121, which is small as long as the load is not excessively high. Nevertheless, in order to safeguard against thrashing that may happen in the worst case request pattern, Andes supports setting a cap \ud835\udc43on the average number of preemptions a request can experience throughout its lifetime. Too high a \ud835\udc43will not be able to act as a safeguard, whereas too small a \ud835\udc43will prevent even absolutely necessary preemptions from happening. We find that setting \ud835\udc43= 1, i.e., a request on average experiences at most one preemption during its lifetime, is a good default (Section 6.5). 5 Implementation The two core elements of Andes are its QoE-aware scheduler and a client-side token buffer. Server-Side QoE-Aware Scheduler. Andes\u2019s scheduling algorithm can work with any LLM serving system that supports continuous batching and at least one preemption mechanism (swapping or recomputation). We note that an LLM 3The overhead of preemption depends on how much memory was freed, not the number of requests. Therefore, for the same amount of memory freed from preemption, it\u2019s better to free a smaller number of requests. 0 50 100 150 200 250 #Tokens Generation Pause Network Fluctuation 0 10 20 30 40 50 Time (s) 0 100 #Tokens in buffer Client receives User digests Figure 8. The client-side token buffer holds excess tokens sent from the server to absorb token generation fluctuations and paces token delivery based on the user\u2019s expected TDS. serving system that implements Paged Attention [25] is likely to also support at least one preemption mechanism to prevent the system from running out of memory. As a reference, we implemented Andes\u2019s scheduling algorithm on top of vLLM [25]. The scheduler only manages requests coming into the vLLM instance it is integrated with, assuming that cluster-level load balancing and fault tolerance are done separately. Client-Side Token Buffer. The server sends tokens to the buffer as soon as they are generated, even if they were generated at a pace that exceeds the user\u2019s expected TDS. Then, the token buffer smooths out the token delivery timeline to pace tokens at the user\u2019s expected TDS. The token buffer can also naturally smooth out some fluctuations in network latency, for instance in crowded mobile networks. The buffer should be implemented appropriately depending on the destination of streaming \u2013 e.g., TypeScript for web frontend, Python for API use. Figure 8 visualizes the token buffer in action. With an initial burst generation faster than the user\u2019s expected TDS, the buffer withholds excess tokens and paces token delivery, thus growing in size. The server is fully aware of the token buffer, and preempts the request to serve other requests. During this time, the buffer drains at a rate that matches the user\u2019s expected TDS. Finally, the server brings back the request and starts generating tokens again, and together with the token buffer, perfect QoE was achieved. 6 Evaluation We evaluate the performance of Andes under different workloads. We demonstrate that: 1. Andes improves the average QoE up to 3.2\u00d7 when the system experiences high/bursty load (\u00a76.2.1). 8 Model size 13B 30B 66B 175B GPUs A100 4\u00d7A100 4\u00d7A100 4\u00d7A100 GPU Memory 80 GB 320 GB 320 GB 320 GB Precision FP16 FP16 FP16 8-bit [14] Model Memory 26 GB 60 GB 132 GB 180 GB Table 3. OPT model family and GPU specifications used. 2. Andes can handle up to 1.6\u00d7 higher request rates while preserving high QoE without additional resources, significantly reducing the serving cost(\u00a76.2.2). 3. Andes maintains similar token generation throughput as the baseline, with a minor drop (\u226410%) in throughput as the request rate increases (\u00a76.2.3). 4. Andes significantly improves TTFT, while maintaining TDS above user expected speed (\u00a76.3). 5. Andes outperforms the baselines across different workloads (\u00a76.4) and setups (\u00a76.5). 6.1 Experiment Setup Model and Server Configurations. Following state-ofthe-art LLM serving systems [25], we evaluate Andes using the OPT [51] series with 13B, 30B, 66B, and 175B parameters, with the 175B model employing INT8 quantization. We run all experiments on NVIDIA A100 GPUs in Chameleon Cloud [22], and use tensor parallelism to deploy the models, using the default configuration in vLLM [25]. We use swap as the preemption mechanism and set the CPU swap space to 240 GB in total. Detailed hardware specifications are provided in Table 3. Workloads. We experiment on ShareGPT [45], a dataset that gathers conversations shared by users with ChatGPT [35], including multiple rounds of input prompt and output response. By concatenating multiple rounds of conversations into one input while limiting its length to 1k tokens to fit the model\u2019s maximum context length, and setting the final response as the output, we create the Multi-Round ShareGPT dataset for longer conversations. As shown in Figure 9, MultiRound-ShareGPT has about 3\u00d7 longer input than ShareGPT, while both datasets have similar output length distribution. We generate request arrival traces using Poisson distribution with different arrival rates. The request\u2019s QoE requirement trace is created with different expected TTFT and TDS. TTFT is set to 1 second for all, while TDS is based on user reading speeds (Table 1), and is translated from words to tokens using the average word-to-token ratio for ChatGPT [38]. In real applications, QoE requirements should be set depending on the application\u2019s specific use case. For instance, reading speed (and thus expected TDS) may be measured using screen scrolling [18] or eye-tracking [3, 34]. Another potential use case is to introduce API price tiering, 0 500 1000 1500 2000 #Tokens 0 200 400 Density Input (mean: 174.55) Output (mean: 314.22) (a) ShareGPT. 0 200 400 600 800 1000 #Tokens 0 200 400 600 Density Input (mean: 624.22) Output (mean: 365.52) (b) Multi-Round ShareGPT. Figure 9. Input and output length distributions of datasets. where a higher per-token price provides faster TDS, and API users can select the tier suitable for downstream digestion. Baselines. We compare Andes with vLLM (version 0.2.7). vLLM uses first-come-first-serve (FCFS) scheduling policy by default. We implement another scheduling policy, RoundRobin (RR), atop vLLM for more informed comparison, which is designed to guarantee equal service to requests through cyclic request preemption. For RR, we set the service interval to 50 inference iterations, maximizing its QoE performance. Metrics. We focus on the following metrics in evaluations: \u2022 Average QoE: We set the threshold to 0.9 as the minimum acceptable average QoE. The QoE of 0.9 corresponds to a 5% delay in TTFT, a 10% slowdown in TDS, or something in the middle. \u2022 System capacity: It measures the maximum request rate that the system can handle while maintaining an average QoE above the threshold. \u2022 System throughput: It measures how many tokens the system generates per second. We also report normalized latency, which is used by vLLM[25] and Orca[50], in Appendix E. 6.2 End-to-End Experiments In this section, we report the performance of Andes in terms of average QoE (\u00a76.2.1), system capacity (\u00a76.2.2), and system throughput (\u00a76.2.3) under different setups. 6.2.1 Improvement on Average QoE. We evaluate the performance of Andes on all four models and two datasets. Figure 10 and Figure 11 show the result on the ShareGPT dataset and Multi-Round ShareGPT dataset respectively. As the request rate increases, Andes maintains a high average QoE, outperforming the baseline whose average QoE sharply decreases. In other words, Andes can serve more concurrent requests without compromising user experience. For ShareGPT dataset, Andes increases average QoE up to 3.1\u00d7 at the same request rate, while maintaining an average QoE of 0.9, all with the same resources. For Multi-Round ShareGPT dataset, Andes improves average QoE up to 3.2\u00d7. For OPT-30B model, the improvement is less significant, as the model is less resource-constrained when compared to the OPT-66B model. 9 1.4 1.6 1.8 2.0 2.2 Request rate (req/s) 0.00 0.25 0.50 0.75 1.00 Avg QoE RR vLLM Andes 5.0 7.5 10.0 Request rate (req/s) 0.0 0.5 1.0 Avg QoE (a) OPT-13B 5 10 Request rate (req/s) 0.0 0.5 1.0 Avg QoE (b) OPT-30B 3 4 5 Request rate (req/s) 0.0 0.5 1.0 Avg QoE (c) OPT-66B 1.4 1.6 Request rate (req/s) 0.0 0.5 1.0 Avg QoE (d) OPT-175B. Figure 10. Average QoE for different request rates using the ShareGPT dataset. 1.4 1.6 1.8 2.0 2.2 Request rate (req/s) 0.00 0.25 0.50 0.75 1.00 Avg QoE RR vLLM Andes 2 3 4 Request rate (req/s) 0.0 0.5 1.0 Avg QoE (a) OPT-13B. 2 4 6 Request rate (req/s) 0.0 0.5 1.0 Avg QoE (b) OPT-30B. 1.5 2.0 Request rate (req/s) 0.0 0.5 1.0 Avg QoE (c) OPT-66B. 0.8 1.0 Request rate (req/s) 0.0 0.5 1.0 Avg QoE (d) OPT-175B. Figure 11. Average QoE for different request rates using the Multi-Round ShareGPT dataset. These improvements can be attributed to Andes\u2019s QoEaware scheduling policy, which dynamically prioritizes resources for urgent requests that risk falling below their expected QoE, preempting those that have been sufficiently served. In contrast, under higher load, traditional FCFS scheduling policy suffers from head-of-line blocking, leading to significant queuing delay. Although the RR policy mitigates head-of-line blocking by preemptions, frequent preemptions introduce significant overhead and degrade the average QoE. 6.2.2 Improvement on Server Capacity. As shown in Figures 10 and 11, the horizontal dotted lines represent the average QoE threshold of 0.9. For ShareGPT dataset, Andes can manage 1.2\u00d7\u22121.6\u00d7 higher request rate than vLLM while maintaining an average QoE above the threshold. Specifically, for the OPT-66B model, Andes can handle 1.25\u00d7 higher request rate than vLLM, nearing the 1.38\u00d7 theoretical improvement suggested in Section 2.3, showcasing Andes\u2019s ability to optimize resource allocation and average QoE effectively. For Multi-Round ShareGPT dataset, Andes can serve 1.1 \u00d7 \u22121.3\u00d7 higher request rate. Additionally, by serving higher request rates with the same resources, Andes effectively reduces the resource cost per request. 6.2.3 Impact of Andes on System Throughput. We report the token generation throughput and the preemption frequency of Andes on OPT-66B with both datasets, as shown in Figure 12 and Figure 13. In both datasets, Andes maintains the same token throughput as vLLM when the request rate is moderate, and experiences a minor drop (\u226410%) in throughput as the request rate increases. This demonstrates that 1.4 1.6 1.8 2.0 2.2 Request rate (req/s) 0.00 0.25 0.50 0.75 1.00 Avg QoE RR vLLM Andes 3 4 5 Request rate (req/s) 0 50 Throughput (tokens/s) (a) ShareGPT. 1.5 2.0 Request rate (req/s) 0 50 Throughput (tokens/s) (b) Multi-Round ShareGPT. Figure 12. Token generation throughput with OPT-66B under different request arrival rates. Andes marginally impacts system throughput. The throughput decrease can be attributed to the overheads introduced by request preemption. Despite the active request scheduling, the frequency of preemptions per request remains low (\u22640.5) under reasonable average QoE as shown in Figure 13, minimizing the impact of overheads on throughput; Despite the minor decrease in throughput, the up to 60% improvement in server capacity offered by Andes can compensate for this, effectively reducing the resource cost per request while maintaining a satisfactory user experience. 6.3 Breakdown Analysis To understand Andes\u2019s performance in detail, we conducted a breakdown analysis focusing on QoE, time to first token (TTFT), and token delivery speed (TDS), as shown in Table 4. We report Andes\u2019s performance on OPT-66B and ShareGPT dataset with a request rate of 3.3, where Andes achieved an average QoE of 0.92. With these breakdown analyses, we can 10 3 4 5 Request rate (req/s) 0.0 0.5 1.0 Avg preemption frequency Andes (a) ShareGPT. 1.5 2.0 2.5 Request rate (req/s) 0.0 0.5 1.0 Avg preemption frequency Andes (b) Multi-Round ShareGPT. Figure 13. Preemption frequency with OPT-66B under different request arrival rates. Metric Percentile Method vLLM Andes 10\ud835\udc61\u210e 0.05 0.77 50\ud835\udc61\u210e 0.39 1.00 QoE 90\ud835\udc61\u210e 1.00 1.00 10\ud835\udc61\u210e 0.33 0.35 50\ud835\udc61\u210e 56.73 0.47 TTFT (s) 90\ud835\udc61\u210e 144.95 0.66 10\ud835\udc61\u210e 6.05 5.32 50\ud835\udc61\u210e 6.45 5.44 TDS (tokens/s) 90\ud835\udc61\u210e 7.84 7.02 Table 4. Andes significantly improves QoE and TTFT, while maintaining TDS above user expected speed. provide granular insights into individual user satisfaction under this level of QoE. QoE distribution. Andes significantly improves the lower and median user experiences, with the 10th percentile rising from 0.05 to 0.77 and the 50th percentile achieving a perfect score of 1, compared to 0.39 in vLLM. In order to understand how Andes handles requests with different request lengths, we present a scatter plot of QoE across different total lengths as shown in Figure 14. We observe Andes slightly starves a small fraction of longer requests, as they consume more resources or take longer time to complete. In contrast, FCFS starves lots of shorter requests that are blocked by longer requests. Token delivery timeline. Andes greatly enhances initial responsiveness, reducing median TTFT from 56.73 seconds in vLLM to just 0.47 seconds, and similarly improving the 90th percentile from 144.95 seconds to 0.66 seconds. This improved performance is attributed to Andes\u2019s QoE-aware scheduling, which effectively mitigates head-of-line blocking and reduces queuing delays. Additionally, we analyze the percentile distribution of the average TDS observed by users, excluding TTFT. While Andes slightly slows the average TDS, it remains above the user\u2019s expected speed, ensuring balanced delivery that neither overwhelms nor starves users. 0 1000 2000 Total Length 0 1 QoE (a) vLLM. 0 1000 2000 Total Length 0 1 QoE (b) Andes. Figure 14. QoE distribution across different total lengths. 6.4 Robustness to Diverse Workloads We evaluate the robustness of Andes under diverse settings including different hardware, arrival patterns, and QoE traces. We observed similar trends in diverse settings; therefore, we report our results with OPT-66B and ShareGPT. Hardware. We evaluate Andes on the NVIDIA A40 GPU with 46 GB RAM, as shown in Figure 15a. Andes improves average QoE up to 7\u00d7 under a higher request rate and serves 1.1\u00d7 higher request rate while maintaining an average QoE of 0.9. The reason for the smaller improvement on server capacity is that the A40 has a lower computational capability than the A100, leading to a slower average token generation speed. Consequently, the gap between the expected TDS and actual TDS on the A40 is smaller than on the A100, providing less opportunity for request scheduling and improving average QoE. However, as newer generations of GPUs are becoming more powerful in terms of computational capability, the potential improvement of Andes will be more significant. Bursty Arrival Process. We use a Gamma arrival process with the same request rate and a coefficient of variation of 3 to simulate the burst arrival of user requests. Figure 15b indicates that under bursty workload, the average QoE for FCFS policy begins to decrease at a lower request rate compared to the Poisson arrival, due to increased queuing delays. In contrast, Andes sustains a high average QoE, achieving up to a 2.7\u00d7 improvement on average QoE at the same request rate and serves 1.3\u00d7 higher request rate, showing Andes\u2019s adaptability to bursty workload. Different QoE Traces. Due to the unique QoE requirements of different applications, we evaluate Andes\u2019s performance under a voice chat QoE trace, with expected TTFT at 1 second and slower expected TDS adjusted according to the speaking speed outlined in Table 2. As shown in Figure 15c, both Andes and baseline achieve better average QoE even on higher request rates, attributed to the less strict TDS requirements. Nevertheless, Andes improves average QoE up to 1.25\u00d7 and manages 2\u00d7 request rate, which approaches the theoretical maximum improvement of 2\u00d7 as discussed in Section 2.3. 6.5 Sensitivity Analysis All experiments in sensitivity analysis are conducted on OPT66B with the ShareGPT dataset and a request rate of 3.3. 11 1.4 1.6 1.8 2.0 2.2 Request rate (req/s) 0.00 0.25 0.50 0.75 1.00 Avg QoE RR vLLM Andes 0.4 0.5 0.6 Request rate (req/s) 0.0 0.5 1.0 Avg QoE (a) NVIDIA A40. 3 4 5 Request rate (req/s) 0.0 0.5 1.0 Avg QoE (b) Burst request arrival. 5 10 Request rate (req/s) 0.0 0.5 1.0 Avg QoE (c) Voice chat QoE trace. Figure 15. Robustness analysis on OPT-66B with ShareGPT dataset. 0.0 0.5 1.0 1.5 Preemption frequency cap p 0.5 1.0 Avg QoE vLLM Sedna 0.0 0.5 1.0 1.5 Preemption frequency cap P 0.5 1.0 Avg QoE (a) Average QoE. 0.0 0.5 1.0 1.5 Preemption frequency cap P 0 50 Throughput (tokens/s) (b) Throughput. Figure 16. Tuning preemption frequency cap \ud835\udc43. 0 50 100 150 t 0.4 0.6 0.8 1.0 Avg QoE vLLM Andes Figure 17. Tuning \u0394\ud835\udc61. 3 4 5 Request rate (req/s) 0.0 0.5 1.0 Avg QoE vLLM Andes w/ greedy Andes w/ DP Figure 18. Different solver. Preemption Frequency Cap \ud835\udc43. Increasing preemption frequency cap \ud835\udc43can lead to finer-grained scheduling, potentially enhancing average QoE, but at the cost of increased overhead and reduced throughput. Figure 16a shows the average QoE under different \ud835\udc43. Improvements in QoE are observed as \ud835\udc43increases up to 0.4 preemptions per request, stabilizing beyond this point. Conversely, Figure 16b illustrates a slight decrease in system throughput with increased \ud835\udc43, stabilizing beyond 0.4 preemption per request. These observations suggest a trade-off between average QoE and system throughput, indicating the current setting of \ud835\udc43nearly optimizes QoE while maintaining satisfactory throughput. Prediction Timeframe \u0394\ud835\udc61. We evaluate how different \u0394\ud835\udc61 influences average QoE to understand its effect on system performance. Figure 17 illustrates that the average QoE remains roughly consistent for \u0394\ud835\udc61values greater than 50, and significantly outperforms the baselines, indicating that Andes is not sensitive to the setting of \u0394\ud835\udc61. Different Knapsack Solution. We compare the performance of Andes with different knapsack solutions between greedy and dynamic programming (DP). Figure 18 shows that the greedy consistently surpasses the DP solution, while both solutions outperform the baselines. The lower performance of the DP is due to its substantial computational overhead, which delays the inference process and degrades the average QoE. This suggests that the greedy approach is a more practical and efficient solution for Andes. 7 Related Work General Model Serving Systems. A variety of model serving systems have emerged, ranging from general-purpose, production-level frameworks like TensorFlow Serving [33] and NVIDIA Triton [31] to specialized systems such as Clipper [11], which sets application-level SLOs. Recent systems including Nexus[42], DeepRecSys [17], Clockwork [16], INFaaS [40], SuperServe [24] and AlpaServe [26] have introduced features like serving pipelines, hardware platform diversity, advanced scheduling, dynamic model selection, and model parallelism to boost resource efficiency. However, these general systems neglect the unique characteristics of LLM inference, leaving potential avenues for optimization. LLM Serving Systems. Numerous model serving systems are proposed to address the unique challenges of LLMs. Orca [50] introduced an iteration-level scheduling policy to enhance the throughput of batching inference, and vLLM [25] developed a PagedAttention to reduce the memory usage of LLMs. Splitwise [37], DistServe [52], TetriInfer [19] and Sarathi-Serve [1, 2] optimize the computation of prefill and decode phases through disaggregating or merging them. Some other systems focus on GPU kernel optimization and kernel fusion[5, 12, 32], model parallelism [5, 39], batching algorithm [13, 43, 50], KV-cache management [27, 28, 44] and parameter-sharing [53]. However, these systems focus on optimizing aggregated server-side performance and simply adopt a FCFS scheduling policy, which fail to address the queuing delay problem under higher request load. Finally, shortest remaining processing time [41] is a preemptive scheduling policy, but it does not consider the QoE of individual requests and requires knowledge of the response length of requests. To the best of our knowledge, Andes is the first to define and optimize QoE of text streaming services. 12 Video Streaming and QoE. The concept of text streaming draws inspiration from video streaming but encounters unique challenges and has a different QoE definition. While video streaming services are primarily limited by network bandwidth and latency [7], text streaming services are mainly constrained on computational resources [48]. Additionally, the QoE in video streaming is often measured by metrics like buffering ratio, resolution stability, and playback smoothness [7], while the QoE in text streaming primarily considers the token delivery timelines (TDT). 8 Conclusion In this paper, we define and optimize the Quality-of-Experience (QoE) for text streaming services, a critical aspect often overlooked by existing serving systems. We propose a QoE-aware LLM serving system, Andes, which is able to serve more concurrent requests while meeting their QoE requirements, significantly reducing the cost per request. We demonstrate the effectiveness of Andes through extensive experiments on various real-world datasets and LLMs, showing that Andes can handle up to 1.6\u00d7 higher request rate while preserving high QoE, or enhance QoE by up to 3.2\u00d7 without additional resource expenditure."
17
+ }
intro_8K/test_introduction_short_2404.16294v1.json ADDED
@@ -0,0 +1,17 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16294v1",
3
+ "title": "LLM-Based Section Identifiers Excel on Open Source but Stumble in Real World Applications",
4
+ "abstract": "Electronic health records (EHR) even though a boon for healthcare\npractitioners, are growing convoluted and longer every day. Sifting around\nthese lengthy EHRs is taxing and becomes a cumbersome part of physician-patient\ninteraction. Several approaches have been proposed to help alleviate this\nprevalent issue either via summarization or sectioning, however, only a few\napproaches have truly been helpful in the past. With the rise of automated\nmethods, machine learning (ML) has shown promise in solving the task of\nidentifying relevant sections in EHR. However, most ML methods rely on labeled\ndata which is difficult to get in healthcare. Large language models (LLMs) on\nthe other hand, have performed impressive feats in natural language processing\n(NLP), that too in a zero-shot manner, i.e. without any labeled data. To that\nend, we propose using LLMs to identify relevant section headers. We find that\nGPT-4 can effectively solve the task on both zero and few-shot settings as well\nas segment dramatically better than state-of-the-art methods. Additionally, we\nalso annotate a much harder real world dataset and find that GPT-4 struggles to\nperform well, alluding to further research and harder benchmarks.",
5
+ "authors": "Saranya Krishnamoorthy, Ayush Singh, Shabnam Tafreshi",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "cs.CL",
9
+ "cats": [
10
+ "cs.CL",
11
+ "cs.AI"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "LLM Fairness",
15
+ "gt": "Modern day healthcare systems are increasingly moving towards large scale adoption of maintain- ing electronic health records (EHR) of patients (Congress, 2009). EHRs help healthcare practi- tioners with relevant information about a patient such as history, medications, etc. However, in re- cent times this practice has led to very long and convoluted EHRs (Rule et al., 2021). Naturally, the need for better information retrieval tools emerged due to the progressively lengthy and unstructured doctor notes. One such need is the accurate identi- fication of sections in an EHR, pertinent to a physi- cian\u2019s inquiry. For instance, a question like \u201cWhat Figure 1: Sample real world obscure image of an outpa- tient paper-based patient encounter form comprising of numerous sections (Hersh and Hoyt, 2018). treatments has the patient undergone in the past?\u201d concerning prior treatments administered to a pa- tient necessitates the swift extraction of informa- tion from the \u201ctreatments\u201d and \u201cpast medical his- tory\u201d sections, while excluding sections related to \u201cancestral medical history\u201d. This swift extraction is vital for timely decision-making in patient care. Additionally, during critical procedures such as the evaluation of medical necessity for prior authoriza- tion requests, it is customary for experienced clin- icians to locate vital data within specific sections. An illustrative case entails examining the \u201cphysical exam\u201d section to identify particular findings, such as signs of neurological disorders or movement- associated pain, indicating the need for additional diagnostic tests. The timely identification of such information is of utmost importance in ensuring the provision of appropriate care and reducing the risk of potential complications. arXiv:2404.16294v1 [cs.CL] 25 Apr 2024 In general, regions found in EHR would often have a section heading preceding the body of the section, as can be seen in example Table 1. Even though these section types have limited cardinal- ity, however, more often than not, physicians would fail to adhere to standards and use lexical variations generated on the fly. Moreover, practitioners not only will generate lexical variations of sections on the fly but also completely new sections altogether for valid reasons like imaging reports, etc. Apart from these variations, oftentimes there would be no headers at all, even though the information present could ideally be part of a pre-existing section in a document or a new section altogether. While studies like Gao et al. (2022) utilize the Subjective, Objective, Assessment and Plan heading (SOAP) framework, real-world clinical notes often contain sections beyond these categories. This limitation is further emphasized in Landes et al. (2022), war- ranting further investigation and analysis. The aforementioned factors have consequently contributed to the establishment of Section Iden- tification (SI) as a distinct and enduring prob- lem within the academic discourse (McKnight and Srinivasan, 2003), making it an indispensable com- ponent of any clinical natural language processing (NLP) pipeline. A SI task entails finding regions of text that are semantically related to an aspect of a patient\u2019s medical profile. More importantly, it helps to improve pre-existing information retrieval systems by enabling them to be more targeted and specific. Lastly, in light of recent findings of the negative impact of note bloat within EHRs on even the most sophisticated systems (Liu et al., 2022), using SI to shorten or create from EHR, a sub-EHR specific to a given task would prove to be a worth- while effort for humans and machines both. Because finding sections and hence their cor- responding headers involves inherent variability, machine learning (ML) methods have played an important role in this natural language processing (Pomares-Quimbaya et al., 2019). ML has increas- ingly been shown to be efficient in finding rele- vant sections within a document, however, a key drawback of traditional ML methods has been the dependence on labeled data (Tepper et al., 2012). Reliance on annotated data for training ML models to be able to predict the beginning and end of sec- tion headers has stalled the field from fully solving the task. The emergence of large language mod- els (LLMs) in contemporary research presents a promising avenue to overcome the limitations in- herent in traditional machine learning approaches, thereby expanding the scope of their applications. LLMs have emerged as the de-facto system for NLP in scenarios where data is scarce (OpenAI, 2023). The key distinction between traditional Ma- chine Learning (ML) models and Large Language Models (LLMs) lies in their ability to understand tasks in natural language. While traditional ML models require labeled data for training, LLMs can leverage pre-training on vast amounts of unstruc- tured text data, enabling them to perform tasks with minimal task-specific fine-tuning. This makes ML possible in an unsupervised manner (no need for labeled data) and therefore opens room for appli- cations in domains where annotated data is hard to acquire like healthcare. While LLMs have been evaluated on a wide array of NLP tasks in health- care (Nori et al., 2023), they are yet to be evaluated on their effectiveness in segmenting a document into semantically relevant sections. In this work, we address this gap and evaluate the efficacy of our approach on a widely-known datasets in the clinical medical domain. Findings show that GPT-4 (OpenAI, 2023) almost solved the section identification problem on the benchmark open-sourced dataset, however, on a private dataset the performance lags. Our contributions are three- fold, listed as follows: 1. We show that GPT-4 can generate zero-shot headings of records with very high accuracy. 2. Contrary to the above, we find that its perfor- mance drops on internal real-world datasets. 3. An ontology of numerous section headers seen in real world EHR systems is shared which has much higher coverage.",
16
+ "main_content": "Traditionally, SI task has been done using a pre-defined dictionary of plausible candidates. Pomares-Quimbaya et al. (2019) performed a comprehensive survey and found that rule-based methods still dominated the array of methods proposed while ML systems increasingly achieved better coverage when combined in a hybrid manner with rulebased methods. McKnight and Srinivasan (2003) later on extracted bag-of-words from MedLINE abstracts and used a support vector machine to train a classifier to categorize sentences into either Introduction, Method, Result, or Conclusion, demonstrating promising results. Similarly, Hirohata et al. Allergies Allergies: Patient recorded as having No Known Allergies to Drugs... History of Present Illness HPI: 61M w/ incidental L renal mass found during W/U for brachytherapy for low-grade [**Last Name (STitle) **], now w/ gradually worsening gross hematuria for the past several days. Labs Imaging Pertinent Results: [**2160-4-10**] 07:30AM BLOOD WBC-12.6* RBC-3.20* Hgb-8.2* Hct-24.5* MCV-77* MCH-25.6* MCHC-33.4 RDW-17.1* Plt Ct-438. Hospital Course Brief Hospital Course: 61M w/ low-grade [**Month/Day/Year **] awaiting brachytherapy and locallyadvanced L renal mass w/ collecting system invasion, renal vein thrombus, and likely metastases, presented w/gradually worsening gross hematuria. Table 1: This figure illustrates a sample data point from the MIMIC-III database, highlighting the sections annotated with MedSecID corpus. (2008) achieved very high accuracy by using conditional random fields to label scientific abstracts into Objectives, Methods, Results, and Conclusions. Over time and with the inclusion of ML, the field re-framed this problem as one of span-level entity identification i.e. the system would be tasked with predicting whether each token in a sequence belongs to one of the predefined section types using the Inside-Outside-Beginning (IOB) tagging system (Ramshaw and Marcus, 1999). Tepper et al. (2012) addresses the task of segmenting clinical records into distinct sections using a two-step approach. First, the section boundaries are identified. Then, the sections are passed to the second step, where a classifier is used to label each token as Begin, In or Out of the span of a section. Nair et al. (2021) proposes several transfer learning models based on clinical contextual embeddings for classifying clinical notes into the major SOAP sections (Podder et al., 2023). Zhou et al. (2023) investigates the effectiveness of continued pre-training in enhancing the transferability of clinical note section classification models. Both of the above papers resemble our work, however, they restrict them to SOAP sections and train specific models to do so. While the techniques devised so far have shown promise, to the best of our knowledge none of the previous works have tried in an unsupervised manner. With the advent of LLMs (Devlin et al., 2018; OpenAI, 2023), several works have shown the efficacy of LLMs in doing unsupervised zero-shot information extraction. The primary method for interacting with generative LLMs is by the use of natural language prompts. Wei et al. (2022) found a significant performance boost by asking the model to explain its chain of thought before answering the query. Further, Brown et al. (2020) showed that additional performance can be gained by passing some examples as part of the prompt, they named it Few-Shot prompting. Wang et al. (2023); Bian et al. (2023); Ashok and Lipton (2023) have shown the efficacy of prompting the LLM to extract biomedical named entities from scientific articles. More recently, Liu et al. (2023) used GPT-4 to de-identify documents in a zero-shot manner. This hints at the immense document understanding capabilities of LLMs and opens doors to its application to a wide array of previously unresolved tasks such as SI. Apart from the advancements in the field of ML and SI, to evaluate how well SI systems perform, a standardization of tasks as well as datasets is required. To that end, Uzuner et al. (2011) first proposed a SI task as part of Informatics for Integrating Biology and the Bedside (i2b2) benchmarks. Recently, Landes et al. (2022) argued that the previous dataset did not fully cover the nuances in SI task and proposed a dataset an order of magnitude larger as well as more comprehensive than one by Uzuner et al. (2011). However, the dataset proposed by Landes et al. (2022) is based on a clean source Johnson et al. (2016), which oftentimes is not the case in real-world scenarios. To that end, we also annotated a real-world dataset to evaluate LLMs on it as well. 3 Datasets 3.1 i2b2 2010 In their study, Tepper et al. (2012) meticulously curated a corpus comprising 183 annotated clinical notes extracted from a selection of discharge summaries within the i2b2 2010 (Uzuner et al., 2011) dataset. This dataset was annotated by an expert and served as a valuable resource for their research. However, owing to constraints imposed by Institutional Review Boards (IRBs), our current access to the i2b2 2010 dataset is limited. As a result, we were only able to procure clinical notes for 96 out of the originally annotated 183 documents. Dataset MedSedId i2b2 2010 Real World Document count 2002 96 100 Average token length 2307 1283 7841 Std. dev. token length 1732 726 8093 Average sections per doc 12 17 12 Std. dev. sections per doc 5.7 6.2 8 Table 2: Corpus Statistics 3.2 MedSecID MedSecID (Landes et al., 2022) is a publicly available corpus of 2,002 fully annotated medical notes from the MIMIC-III (Johnson et al., 2016) clinical record database. Each note has been manually annotated with section boundaries and section labels (See Table 1 for an example of a typical clinical note consisting of well-defined sections). The section labels correspond to different types of information that are typically found in clinical notes, such as history of present illness, physical exam findings, and progress notes. 3.3 Real-world In an increasingly digital world, one would be inclined to assume healthcare data also lives digitally. Surprisingly, that is not the case almost 75% of the healthcare dataset still lives in faxes (CCSI, 2022) (see figure 1 for a sample handwritten and faxed clinical notes). Whereas all preexisting SI datasets are digitally derived from clean EHR systems, which even though offer us some insight into the performance of state of art, however, fail to paint the full picture. Therefore, we use an internal dataset of prior authorization requests derived from faxed-in images being transcribed to text via an optical character recognition system (OCR). These requests contain EHR of patients in the form of doctors\u2019 notes, submitted in both PDF and image formats. These documents lack a standardized structure, with segments and titles that can vary significantly in length. Although it\u2019s possible to group these titles into clusters of similar meaning, the language and number of titles differ across documents. Additionally, OCR inaccuracies arise from unclear text, spelling errors, complex table structures, and handwritten content, resulting in highly noisy input for any SI system to process. 4 Annotation Methods In this section, we describe the dataset and the annotation design in our study. As we described before we decided to choose section identification (SI), a method to identify sections and sub-sections in EHR documents to split them into smaller text chunks and create some structure in these unstructured data. We designed a manual annotation task to identify these sections and create categorical section types. Below we explain the annotation task design, the result, and the challenges. 4.1 Annotation Design We randomly selected 100 records from a pool of one million records we have in our corpus. These records are in two forms, PDF or fax images which doctors submit to insurance companies, and hence, can arrive from any arbitrary format. We refer to these records as documents in the span of this manuscript. These documents have no standard structures and sometimes they contain multiple patients information at the same time. Six annotators with higher education and non-native speakers of English carry the annotation task. Each annotates an equal amount and random selection of these documents. We used Label Studio1, an open source data labeling platform. PDF or image file of each record is uploaded to label studio and the task was to mark the section and sub-section in each file and manually enter the corresponding text of these sections and sub-sections. To instruct the annotators, we provided written instructions as well as held a video discussion session and explained the task to the annotators. 4.2 Annotation Result We aggregate the sections per document to form the final section and sub-section list. A total of 912 sections and subsections are identified which makes 14 sections and sub-sections on average per document. Then one annotator, different from the ones who have annotated the documents, categorized these sections and sub-sections into more gen1https://labelstud.io/ Figure 2: Section categories which are selected based on observation of top-header sections in the corpus and human judgment to associate section names to their topic or category of representations. eral categories based on the Consolidated Clinical Document Architecture (C-CDA) implementation guide2. In other words, the diverse categories are mapped to a category to unify them. This allows us to calculate IAA and be able to use the text semantic similarity method to find these sections in the unannotated documents. A total of 464 categories are coded of which 394 of these categories have a frequency of 1 and 70 categories have a frequency of 2 or more. We provide a small sample of the most frequent categories in Table 3 and Figure 2. 24 documents have been randomly selected and on each of these documents, a second annotator annotated the document. Further, we calculated the Jaccard similarity to report Inter-Annotator Agreement (IAA), The Jaccard similarity is a measure of the similarity between two sets of data. We obtained a Jaccard distance of 0.40, which is a fair agreement and an indication that the annotation task is challenging. The most diverse section and sub-section lists that each normalized into one section name are shown in table 4. Notably, the diversity of these two general categories indicates the challenge involved in structuring and identifying these sections in these documents. In some cases, categories such as Order Report or Medication Reconciliation can be both a section and sub-section according to the annotation results. This characteristic does not enforce the decision to select the general category for these types. 2C-CDA contains a library of CDA templates, incorporating and harmonizing previous efforts from Health Level Seven (HL7), Integrating the Healthcare Enterprise (IHE), and Health Information Technology Standards Panel (HITSP). https://www.hl7.org/ccdasearch/ 5 Experimental Setup Our task here is to take as input a document and output all the section headers found in it. For our underlying use case, we carried out testing with various LLMs like GPT-4 8k (OpenAI, 2023), LLaMa2 7B (Touvron et al., 2023), and more recent Mistral 7B (Jiang et al., 2023) prompting strategies3 (as shown in figure 3) and contrasted them with a baseline experiment that used keyword search, regex, MedSpacy library (Eyre et al., 2021) and the best model reported by Landes et al. (2022). MedSpacy is a clinical NLP toolkit built on the foundation of SpaCy, specifically designed to address the unique challenges of processing and extracting information from clinical text. This enables healthcare professionals to efficiently process and derive valuable insights from unstructured medical narratives. We did not restrict the tokens and used the entire clinical note for MedSecId. We extracted the actual section header using the header span mentioned in the MedSecId annotation and used it as the ground truth for our task. Because of the longer length of real-world data, we used the 32k version of GPT4 while keeping all the hyper-parameters to default such as the temperature, frequency penalty, and presence penalty to 0 and max tokens to 1000. Lastly, in this study, we utilized a privately hosted instance of GPT-4 to ensure the prevention of any potential data leakage. Prior to initiating the experiment, we implemented a thorough anonymization procedure to protect the dataset Protected health information (PHI). This involved substituting all 3CoT A5, One Shot A4 and Close Ended A6 prompting strategies are elaborated in appendix A. Medications Section Information about the current and past Medications Order Info This section consists of additional items that are required to conclude the assessments. Examples of such items are Mammograms, x-rays, etc., or the information about the provider of such items. Results Section Usually contains of lab results Physical Exam Section Result of physical exams such as Integumentary, Chest and Lung Exam, Cardiovascular, Abdomen, etc. Table 3: A sample of sections and subsections with the highest frequency. Medications Section Medications, Medication Changes, Medication List at End of Visit, Medication, Medication Reconciliation, Preventive Medicine, Medication List, Medication List at End of Visith, Medications (active prior today), Medications (Added, Consumed or Stopped today), Medications (Added, Continued or Stopped today), Medications Changes, Medications Discontinued During This Encounter, Medications Ordered This Encounter, Medications Places This Encounter, MEDICATIONS PRESCRIBED THIS VISIT, Medications Reviewed As Of This Encounter, Meds, Outpatient Medications, Patients Medication, Preventive Medication, Previous Medications, Previous medications Order Info Orders Placed, Order Questions, Order, Order Details, Order Information, Order Providers, Order Report, Ordering Provider, Order Name, Order name, Order Number, Order Plain X-ray/Interpretation, Order Requisition, Order Tracking, Order Transmittal Tracking, Order User/Provider Detail, Order-Level Documents, Ordering Provider Information, Orders, Orders Placed This Encounter, Orders Requiring a Screening Form Table 4: The list of sections and subsections that are normalized into one section name. You are a clinician and you read the given clinical document and identify section headers from them. Find section headers only from the clinical text. For each section header, return the answer as a JSON object by filling in the following dictionary. {section title: string representing the section header} Here are some clinical notes of a patient from a doctor. ### {context text} ### Figure 3: Basic Prompt Template personal identifiers, such as names, identification numbers, and ages, with fictitious entities. Apart from the basic prompts, we also experiment with combining them with Few-Shot (Brown et al., 2020) and CoT Prompting (Wei et al., 2022) where we ask the LLM to think step-by-step along with providing an example of the clinical note and a list of headings. We keep the prompts same across all the datasets. Lastly, the evaluation metric used here is the exact match (EM) accuracy as well as precision (P), recall (R), and F1-score calculated by comparing GPT-4\u2019s output to that of ground truth in the Inside-Outside-Beginning (IOB) scheme (Ramshaw and Marcus, 1999) as used in work by Landes et al. (2022). Similar GPT-4 experiments were conducted on i2b2 2010 dataset but as the context length of i2b2 was smaller, in all the experiments we use GPT-4 8K. Lastly, because of cost constraints, we chose the best-performing model on above mentioned benchmarks to be evaluated against our internal real-world dataset. 6 Results Even though GPT-4 was able to perform very well on open source benchmark datasets, it was unable to reach the same level of performance on our internal corpus due to its complexity as shown in table 7. Experiments showed that GPT-4 was able to achieve an accuracy of only 37% in contrast to that of 96% on MedSecId corpus. LLaMa-2 and MedSpacy performed equally well, in that, former achieved higher recall than latter. This can be attributed to the global knowledge encoded in the LLMs, which is not the case with MedSpacy, while on the other hand MedSpacy would be much faster to run with less overhead. Results in table 5 and 6 show that one-shot GPT-4 OpenAI (2023) performed the best and achieved a new state of the art on MedSecId outperforming previous models by a significant margin. This unsupervised methodology Method Accuracy(%) Precision(%) Recall(%) F1(%) EM(%) Keyword Based 36.07 100 36.07 53.01 36.05 Regex 49.24 100 30.07 46.24 50.8 MedSpacy 56.63 100 38.29 55.38 62.63 GPT-4 Close Ended Prompt 73.23 100 73.23 84.55 73.2 GPT-4 Chain-of-Thought (CoT) 94.9 100 88.62 93.97 92.47 GPT-4 Zero Shot Prompt 94.41 100 87.61 93.40 92.05 GPT-4 One Shot Prompt 96.86 100 92.93 96.24 96.11 LLaMa-2 Close Ended Prompt 39.96 100 39.96 57.10 39.94 LLaMa-2 Zero Shot Prompt 52.29 94.61 32.92 48.82 62.25 LLaMa-2 One Shot Prompt 13.95 94.57 6.86 12.80 16.86 LLaMa-2 Chain-of-Thought (CoT) 38.21 93.95 21.11 34.48 46.95 Mistral Close Ended Prompt 5.24 100 5.24 9.96 5.24 Mistral Zero Shot Prompt 11.51 97.43 5.23 9.93 14.45 Mistral One Shot Prompt 8.41 98.61 4.07 7.82 10.48 Mistral Chain-of-Thought (CoT) 11.99 98.61 5.64 10.67 15.53 BiLSTM-CRF (Landes et al., 2022) 82.2 95 95 95 Table 5: Results on MedSecId Corpus Method Accuracy(%) Precision(%) Recall(%) F1(%) EM(%) Keyword Based 10.98 100 8.78 16.14 69.5 Regex 66.26 100 48.27 65.11 56.8 MedSpacy 38.45 100 21.92 35.96 38.14 GPT-4 Close Ended Prompt 11.82 78.24 8.46 15.27 73.8 GPT-4 Chain-of-Thought (CoT) 86.26 99.85 74.65 85.43 84.33 GPT-4 Zero Shot Prompt 89.47 100 78.46 87.93 84.58 GPT-4 One Shot Prompt 93.03 100 85.36 92.10 89.45 LLaMa-2 Close Ended Prompt 88.79 100 83.57 91.05 86.54 LLaMa-2 Zero Shot Prompt 56.2 100 36.62 53.61 58.59 LLaMa-2 One Shot Prompt 30.54 100 16.75 28.69 21.2 LLaMa-2 Chain-of-Thought (CoT) 40.23 99.83 22.61 36.87 50.7 Mistral Close Ended Prompt 10.41 100 6.65 12.48 19.34 Mistral Zero Shot Prompt 35.30 100 18.98 31.90 36.17 Mistral One Shot Prompt 6.58 100 3.24 6.29 7.80 Mistral Chain-of-Thought (CoT) 32.13 99.80 17.03 29.09 33.66 Maximum Entropy (Tepper et al., 2012) 91.1 90.8 91 Table 6: Results on i2b2 Corpus. While GPT-4 has superior performance, LLaMa-2 is not far behind. Method A P R F1 EM Regex 67.64 98.69 51.30 67.51 71.9 MedSpacy 5.92 100 4.13 7.93 15.72 GPT-4 ZS 37.53 100 24.18 38.95 37.29 LLaMa-2 ZS 13.33 100 7.81 14.49 19.75 Mistral ZS 3.67 100 1.83 3.60 5.24 Table 7: Results on Real-World Corpus. ZS stands for Zero-Shot prompting beats all the supervised models on the MedSecId corpus (Landes et al., 2022). Similarly, one-shot also had a state-of-the-art performance on i2b2 2010 dataset. On the other hand, LLaMa-2 did not perform as well as GPT-4, but nevertheless had on par performance with regex. Additionally, LLaMa-2 Touvron et al. (2023) performance on i2b2 dataset came very close to that of GPT-4 itself. This disparity in performance of LLaMa-2 as well as its variation in results across the experiments leads to inconclusive results. Lastly, Mistral (Jiang et al., 2023) performance was sub-optimal, exhibiting only a marginal improvement than a naive keyword based approach. 7 Discussion We performed an in-depth error analysis on the subset of records that GPT-4 was unable to predict correction. Our analysis found errors in the MedSecId dataset itself, which is one of the reasons GPT-4 did not get a 100% performance. Error analysis reveals on the rest of 2.8% missed sections of the GPT-4 finds that 18% of the above stated 2.8% belong to the \u201cFindings\u201d section label and 13% belong to the \u201cImage-Type\u201d category. Most of the documents did not have those section headers explicitly mentioned and were hidden as part of the text. Even though the precision was 100% in i2b2 2010 dataset, the granularity of the subsections, the Section Categories Number of Sections in Category Frequency Frequency (%) Assessment & Plan 413 958 60.98 physical exam 66 152 9.67 Personal Info 54 73 4.64 Medication 19 55 3.50 History of Present Illness 3 44 2.80 Family History 5 40 2.54 Allergies 4 40 2.54 Order Info 17 38 2.41 Clinical Info 16 36 2.29 UNKNOWN 13 25 1.59 Additional Info 4 18 1.14 Appointment Date 6 15 0.95 Progress Notes 1 15 0.95 Results 7 12 0.76 Mental Status 6 10 0.65 History 3 10 0.64 Lab Results 5 6 0.38 Alcohol Use 2 5 0.31 Abdomen 2 5 0.31 Referral 3 3 0.19 Active Medication 3 3 0.19"
17
+ }
intro_8K/test_introduction_short_2404.16297v1.json ADDED
@@ -0,0 +1,17 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16297v1",
3
+ "title": "When Fuzzing Meets LLMs: Challenges and Opportunities",
4
+ "abstract": "Fuzzing, a widely-used technique for bug detection, has seen advancements\nthrough Large Language Models (LLMs). Despite their potential, LLMs face\nspecific challenges in fuzzing. In this paper, we identified five major\nchallenges of LLM-assisted fuzzing. To support our findings, we revisited the\nmost recent papers from top-tier conferences, confirming that these challenges\nare widespread. As a remedy, we propose some actionable recommendations to help\nimprove applying LLM in Fuzzing and conduct preliminary evaluations on DBMS\nfuzzing. The results demonstrate that our recommendations effectively address\nthe identified challenges.",
5
+ "authors": "Yu Jiang, Jie Liang, Fuchen Ma, Yuanliang Chen, Chijin Zhou, Yuheng Shen, Zhiyong Wu, Jingzhou Fu, Mingzhe Wang, ShanShan Li, Quan Zhang",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "cs.SE",
9
+ "cats": [
10
+ "cs.SE",
11
+ "cs.AI"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "LLM Fairness",
15
+ "gt": "Fuzzing is a promising technique for software bug detection [8, 26]. Large Language Models (LLM) are rapidly gaining popularity across various applications for their versatility and capability [14, 15]. From natural language processing [7, 22, 27] to code generation [19, 24], LLM\u2019s broad utility is making it a prominent and sought-after solution in diverse domains. This development has naturally influ- enced fuzzing research: to help improve the fuzzing effectiveness, LLM has now become one of the key enablers to assist the core processes of fuzzing, including driver synthesis [28, 39], input gen- eration [9, 10], and bug detection [11, 17]. While excelling in natural language analysis, LLM encounters some common pitfalls like limited context length [20] and hallu- cination problems [16, 23, 31], etc. Consequently, LLM exhibits limitations in complex program analysis. These pitfalls of LLM affect the effectiveness of fuzzing, leading to testing performance degradation, manifesting as high false positives, low test coverage, and limited scalability. In this paper, we identify five common challenges when using LLM-based fuzzing technology: 1) Firstly, they often produce low- quality outputs in fuzzing driver synthesis, lacking the precision required for effective bug detection. 2) Secondly, these models demonstrate a limited scope in their understanding and processing capabilities, constraining their utility in diverse fuzzing scenarios. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]. FSE\u201924, July 2024, Porto de Galinhas, Brazil \u00a9 2024 Copyright held by the owner/author(s). Publication rights licensed to ACM. ACM ISBN 978-x-xxxx-xxxx-x/YY/MM https://doi.org/10.1145/nnnnnnn.nnnnnnn 3) Thirdly, LLMs struggle with generating sufficiently diverse in- puts during the fuzzing process, which is critical for thorough and effective bug detection. 4) Fourthly, they face challenges in main- taining the validity of generated inputs, a crucial factor for accurate and reliable fuzzing. 5) Lastly, LLMs\u2019 inaccurate understanding of bug detection mechanisms hinders their ability to identify and ad- dress complex software vulnerabilities effectively, thereby limiting their overall effectiveness in the fuzzing process. We performed a comprehensive survey and revisited most recent fuzzing works that rely on LLM for tackling different problems in the fuzzing process. To our surprise, the results show that each work encounters at least one of these challenges. 1 Although LLMs are widespread, it is more important for us to avoid its weakness, and at the same time take advantage of its strengths. To this end, we perform an impact analysis of the impli- cations in three key fuzzing steps. These findings inspire us with some opportunities for better usage of LLM in each fuzzing step according to whether the corresponding corpus and documentation are rich. Furthermore, we performed some preliminary evalua- tions according to these opportunities by applying LLM in fuzzing database management systems(DBMS). The results demonstrate that the reasonable instantiation of those recommendations can overcome the challenges in LLM-assisted DBMS fuzzing.",
16
+ "main_content": "Limited Training Corpus ted Trai Corpus Limited Long-text Understanding Limited Long-tex Understanding Hallucination C3.1: Inaccurate Understanding C3.1: Inaccurate Understanding C2.1: Insufficient Diversity C2.1: Insuffic Diversity C1.1: Prone to Error C2.2: Limited Validity C2.2: Limit Validity C1.2: Limited Scope Bug Detection Input Generation Driver Synthesis Target Program Target Program Prompt Bug Repor Bug Report Challenges Fuzzing Loop Large Language Model Figure 1: Fuzzing Workflow with LLM enhanced. 2.1 Driver Synthesis Description. Recently, several pioneer works have been proposed to utilize LLMs to enhance driver synthesis [11, 12, 28, 38, 39]. 1Remark: The purpose of this work is not to point fingers or critique. Instead, it wants to show how we can overcome the challenges of LLM-assisted fuzzing and effectively leverage the advantages of LLMs and make it truly beneficial for the fuzzing process. arXiv:2404.16297v1 [cs.SE] 25 Apr 2024 FSE\u201924, July 2024, Porto de Galinhas, Brazil Jiang et al. Their basic idea is to use API documentation as the prompt context, and then ask LLMs to generate API invoking sequences as fuzzing drivers. For example, both TitanFuzz [11] and PromptFuzz [28] design customized prompt templates to guide LLMs in generating code that follows programming syntax and semantics. Challenges. The application of LLMs to driver synthesis can be ineffective if done directly, as LLMs have a tendency to produce hallucinations [7, 20] and perform less effectively on programs that are not included in their training corpus [20]. These limitations present two challenges for driver synthesis. The first one is that the synthesized drivers are prone to error, leading to a non-negligible number of false positives during fuzzing. For example, according to comprehensive evaluation on LLM-based driver synthesis for OSS-Fuzz projects [39], GPT-4 can correctly generate roughly 40% drivers, while the rest of the drivers contain errors. Among the erroneous drivers, 93% exhibit one or more of the following issues: type errors, mis-initialized function arguments, usage of non-existing identifiers, and imprecise control-flow dependencies. This occurrence primarily arises due to LLMs relying on pre-trained knowledge for driver synthesis, leading to the production of hallucinations [16]. The second challenge is that the application of directly using LLMs for driver synthesis has limited scope because LLMs have limited knowledge on unseen programs. For those target programs, LLMs sometimes use training knowledge to fill the gap, thus generating incorrect API invoking sequences. For example, developers from Google\u2019s OSS-Fuzz project [35] attempted to leverage LLMs to synthesize drivers. Out of 31 tested OSS-Fuzz projects, 14 successfully compiled new targets and increased coverage with the synthesized drivers. The drivers unsuccessfully synthesized by LLMs typically originated from less common projects like krb5 and rtpproxy. In contrast, LLMs are more likely to generate compilable and effective drivers for more common projects, such as tinyxml2 and cjson. Recommendations. We have the following recommendations: REC 1.1 Some targets whose code or use cases have been included in the training corpus. For these cases, employing LLM for automated synthesis of fuzz drivers, complemented by error-guided corrective measures, is a practical approach. Iteratively querying the LLM based on identified errors and fixing the errors are practical measures [39], which helps to address the prone-to-error challenge. For example, libpng is a common library and has already been seen by GPT4 in its training process. Consequently, it is possible to directly ask GPT4 to generate a fuzz testing driver for libpng by giving the prompt \u201cGenerating LLVMFuzzerTestOneInput for test libpng.\u201d However, the generated driver might still contain errors in grammar or encounter issues during the process of compiling and linking. Test engineers can subsequently submit individual LLM queries containing the error messages to rectify these issues, occasionally necessitating multiple iterations. REC 1.2 For targets without a dedicated corpus in training, one can collect valuable materials such as function prototypes, example programs, or connection rules between functions. Conducting prompt engineering which involves embedding these materials, enhances the precision in generating logical sequences of function calls for the creation of drivers. The prompt engineering approach is a practical solution to tackle the challenge of limited scope. For example, typst is a new markup-based typesetting system like LaTex and claims it is more easier to learn and use. To generate a fuzz driver for it, feed the prompt \u201cGenerate LLVMFuzzerTestOneInput for typst\u201d to ChatGPT-3.5 will encounter hallucination problems and generate a completely non-existent driver. Instead, the project typst has lots of documents and unit tests. Feeding these materials that illustrate the usage of the functions is helpful for LLMs to generate effective drivers [35]. Additionally, it is also feasible to iteratively query LLMs to address any errors that may be present in the drivers. REC 1.3 Sometimes, even with adequate documentation and examples, LLMs can still encounter challenges in generating valid drivers at times, especially for extremely complex targets like Linux kernel. These systems frequently involve intricate dependencies among their APIs, or there exist implicit dependencies among lowerlevel systems that pose challenges for LLM to capture. For these targets, it is advisable to refrain from relying on LLMs. Instead, it is more practical and feasible to explore conventional methods. For example, KSG [33] uses the ebpf to dynamically infer the kernel\u2019s system call argument type and value constraints. In contrast, LLM-based approaches such as KernelGPT [38] use static inference based on kernel man pages and source code. But they may find some complex dummy operations. And it\u2019s hard for them to deduct pointer references. Therefore, KSG can generate 2,433 Syzlang, which is 17.86\u00d7 more compared to KernelGPT [38]. 2.2 Input Generation Description. Recently, several pioneer works [5, 34, 36, 37] have been proposed to utilize LLM to enhance input generation. Their basic idea is to use input specifications and input examples as the prompt context and then ask LLMs to generate new inputs. For example, LLMFuzzer [5] feeds input specifications to LLMs to generate initial seeds for mutation-based fuzzers. Challenges. The application of LLMs to input generation can be ineffective if done directly, as LLMs heavily rely on training corpus and have limited long-text understanding [20, 32]. These limitations present two challenges for input generation. The first one is that the generated inputs have insufficient diversity, leading to inefficient exploration of the input space. This is because LLMs are pre-trained models and prone to responding to users\u2019 queries in a similar manner when given the same prompt context. Therefore, it is difficult for LLMs to generate diverse inputs if they only provide limited information. For example, ChatAFL [29] demonstrates a significant limitation when directly applying LLMs to the RTPS protocol fuzzing. If only a limited amount of protocol information is provided in the prompts, LLMs can only generate inputs that cover 4 states out of 10 states that the RTPS protocol supported. This results in a substantial portion of the RTSP state remaining unexplored. The second challenge is that the generated inputs often have limited validity, leading to early termination when the target program executes these inputs. This is because LLMs cannot fully understand the long texts of input formats or examples due to limited ability on long text processing [32]. For example, Border Gateway Protocol (BGP) is a complex protocol, whose document (BGP RFC 9952) has more than 28,000 words to describe its functionalities. When generating inputs of BGP based on the RFC description, LLMs usually forget to generate the length field of the TLV substructures in the BGP message because the description of the main message structure and the TLV substructures are a little far, making LLMs hard to totally understand BGP format. Recommendations. We have the following recommendations: REC 2.1 Some of the testing inputs to the system are common and have a large number of examples on the web, and they have When Fuzzing Meets LLMs: Challenges and Opportunities FSE\u201924, July 2024, Porto de Galinhas, Brazil been included in the LLM\u2019s training corpus. It is possible to directly employ LLM to generate test cases for them, combining methodologies focused on diversification. These methods encompass internal approaches, such as meticulously crafted prompts that demand using diverse features, as well as external methods, such as coverageguided genetic algorithms. They both contribute to address the challenge of insufficient diversity. For instance, when testing common text protocols such as HTTP and FTP, where LLM excels in its support for text-based languages, it is feasible to directly instruct LLM to generate test cases for these protocols. To increase diversity, for internal approaches, we can use prompts that encourage LLM to generate HTTP files with various methods (e.g., GET, POST, PUT), different headers, different query parameters, URL structures, various payloads, and other aspects. We can also interactively ask LLM to cover more types of messages [29]. For external approaches, we can utilize coverageguided generation used in conventional fuzzing along with more real-world examples to enhance LLM. REC 2.2 In many cases, the LLM is not trained with a dedicated training corpus specifically tailored for the test subjects. Rather than employing LLM directly for generating the final test cases, we suggest utilizing LLM to transform well-known knowledge to formulate the input specifications or build initial test cases. The input specification helps address the challenge of limited validity, and the initial test cases help address the challenge of insufficient diversity. For instance, in the case of protocol implementations lacking machine-readable grammar, generating valid test inputs automatically to adhere to the necessary structure and order becomes challenging. In such scenarios, leveraging that LLM has been trained on established protocols, allows the transfer of grammars from these protocols with the assistance of LLM and recorded message sequences. The grammar can enhance the validity of the generated test cases. With the grammar, conventional grammar-based fuzzers could be utilized to generate more test cases [29]. Another instance is transforming test cases of popular database systems to initial seeds for the tested database system. The SQL queries of popular database systems like PostgreSQL have rich diversity and they have already been trained for LLM. Therefore, leveraging the knowledge of LLM to transform them into the format of the target database system is feasible. Providing them to the fuzzer as the initial seed helps enhance the diversity of generated test cases. 2.3 Bug Detection Description. Recently, several pioneer works [21, 25] utilize LLM to enhance bug detection. Their basic idea is to use functionality descriptions of the target program as the prompt context, and then ask LLMs to generate code that implements the same functionalities with the target program. By comparing the execution results of the two functionally equivalent programs, they can detect logic bugs in the target program. For example, Differential Prompting [25] queries LLMs about the intention of a piece of provided code and then uses the obtained intention as a new prompt context for LLMs to generate code with the same intention. Challenges. The application of LLMs to bug detection can be ineffective if done directly, as LLMs have limited long-text understanding [32], posing a challenge to inaccurate understand of the semantics of the target program. For example, researchers [25] found that LLMs may misconstrue code designed to identify the longest common substring as being intended for finding the longest common subsequence. This misinterpretation can occur even though these two problems require entirely distinct code solutions. As a result, LLMs may generate code whose functionality deviates from the target program, thus leading to an inaccurate test oracle. According to the experiment results of Differential Prompting [25], it achieves 66.7% success rate when generating reference implementation for programs from the programming contest website Codeforces. While this is substantially better than its baseline, it still results in a false-positive rate of 33.3%, which is still not sufficient for practical usage. Recommendations. We have the following recommendations: REC 3.1 Defining test oracles is highly dependent on specific targets and scenarios, presenting the most formidable aspect of fuzzing. For complicated targets, we suggest to avoid analyzing results with LLM directly. Instead, consider employing LLM to extract features or patterns associated with a specific bug type, leveraging domain knowledge. Subsequently, monitoring the system using these patterns aids in addressing the challenge of inaccurate understanding. For example, many time-series databases like IoTDB implicitly handle exceptions. Consequently, the system will not crash or exhibit other abnormal behaviors. Nevertheless, these database systems generate extensive logs, and errors manifest as exceptions in these logs. Therefore, it becomes feasible to use LLM for analyzing the logs to discern error patterns. In such scenarios, we recommend employing LLM to scrutinize the logs, identify error patterns, and subsequently leverage these patterns for detecting logic errors. REC 3.2 Some targets or projects contain well-defined documentations, where the expected behaviors are clearly described, like the RFCs for protocols. For these cases, we suggest to leverage the natural language understanding ability of LLM to extract the expected behaviors from the documentations for test oracle definition. This helps LLM to understand the intention and design of the target programs, thus addressing the challenge of inaccurate understanding. For example, the RFCs for protocols usually contain detailed descriptions of the protocol\u2019s expected behaviors. Take the RFC 854 [4] for Telnet protocol as an example. It specifies expected behaviors during the negotiation of some disabled command options or unnegotiated commands. These can be used as test oracles and can be further used to uncover CVE-2021-40523 [30]. 3 POTENTIAL SOLUTIONS To demonstrate the practicality of our recommendations, we use the Database Management System (DBMS) as the target for LLMassisted fuzzing. Addressing challenges in driver synthesis, input generation, and bug detection, we propose three potential solutions: state-aware driver synthesis, cross-DBMS SQL transfer, and log-based Oracle definition. These solutions are implemented and compared with rudimentary uses of LLM, where it is directly employed. Experiments are conducted under identical settings on a machine with 256 cores (AMD EPYC 7742 Processor @ 2.25 GHz) and 512 GiB of main memory, demonstrating the efficacy of our recommended approaches in enhancing LLM-based fuzzing for intricate systems like DBMSs. 3.1 LLM-Enhanced Connector Synthesis Obstacle: Database connectors, also commonly known as database drivers, serve as intermediary components facilitating communication between applications and databases. These connectors define standard a set of interfaces, encompassing functions and parameters. The driver for fuzzing database connector consists of a sequence FSE\u201924, July 2024, Porto de Galinhas, Brazil Jiang et al. of these interfaces. Directly utilizing LLM to generate drivers for database connector will encounter two challenges: First is prone to error: API sequences contain semantic information that is embedded in the state of the database connector, directly generating sequences may import errors. Second is limited scope: LLM lacks the state transition knowledge of the connectors because it lacks the related corpus in training. Solution: Following REC 1.2 , we propose LLM-enhanced stateaware database connector synthesis. We first collect JDBC function prototypes and example programs that utilize JDBC. Then we model the connection relationships between JDBC functions as state-transition rules. Next, we gather the function prototypes, example programs, and connection rules as input for LLM. The prompt we give is like \u201c Based on the state-transition rules and state description of functions, please generate a sequence of APIS within length 15. It is required to cover a different combination of state transitions than before.\u201d Result: We implement LLM-enhanced connector synthesis into Wingfuzz\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc5band compare it against LLM\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc5b, which directly utilizes LLM to generate drivers for MySQL Connector/J [3], MariaDB Connector/J [2], and AWS JDBC Driver for MySQL [1]. We perform fuzzing on ClickHouse for each tool. Table 1 shows the driver correctness ratios and branch coverage by LLM\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc5band Wingfuzz\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc5bon three selected DBMSs in 12 hours. These statistics show that Wingfuzz\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc5balways performs better in both driver correctness ratio and branch coverage than LLM\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc5bon all three DBMSs. Specifically, Wingfuzz\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc5barchives 94% more correctness rate for driven synthesis. And the drivers generated by Wingfuzz\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc5bcover 56% more branches on average. The main reason is that the state-transition rules embed semantic information, and it also helps LLM generate API sequences that account for the diverse states within the database connector. Table 1: Driver Correctness Ratios and Branch Coverage. DBMS Driver Correctness Ratios Branch Coverage LLM\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc5b Wingfuzz\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc5b LLM\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc5b Wingfuzz\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc5b MariaDB Connector/J 0.142 0.331 583 843 MySQL Connector/J 0.216 0.367 1256 1982 AWS MySQL JDBC 0.203 0.394 1382 2293 3.2 Cross-DBMS SQL Transfer Obstacle: SQL queries, as the inputs of DBMS, are vital to DBMS fuzzing. Generating SQL queries directly via LLM faces two main challenges: ensuring semantic correctness and promoting query diversity. Semantically correct SQL queries are vital for triggering complex DBMS behaviors, as syntactical errors lead to parsing failures. The intricate SQL grammar, encompassing various clauses, expressions, and rules, poses a challenge for LLM in achieving semantic correct. Furthermore, diversity in SQL queries is crucial for probing deep DBMS logic. However, LLM\u2019s constrained variety, influenced by the absence of DBMS feedback, limits the exploration of diverse query structures. Solution: To overcome these challenges, we introduce the crossDBMS SQL transfer approach, aligned with the recommendation REC 2.2 , for SQL generation. In contrast to directly generating the SQL queries, we use LLM to transfer the test cases from other DBMSs as the initial seeds for fuzzing the target DBMS. These initial seeds are used to mutate new SQL test cases during the fuzzing loop. The process contains three key steps. First, it executes existing SQL test cases within its native DBMS to capture the schema information during execution. Second, it utilizes LLMs along with the captured schema information to guide the generation of new test cases based on the LLM responses. Finally, it temporarily comments out unparsable sections for fuzzers to ensure proper parsing and subsequently uncomments them after mutation. Result: We implement the solution called Wingfuzz\ud835\udc56\ud835\udc5b\ud835\udc5d\ud835\udc62\ud835\udc61and compare it with LLM\ud835\udc56\ud835\udc5b\ud835\udc5d\ud835\udc62\ud835\udc61, which directly uses LLM to generate the SQL queries. We run Wingfuzz\ud835\udc56\ud835\udc5b\ud835\udc5d\ud835\udc62\ud835\udc61and LLM\ud835\udc56\ud835\udc5b\ud835\udc5d\ud835\udc62\ud835\udc61on three DBMS: MonetDB [6], DuckDB [13], and ClickHouse [18]. Table 2: Semantic Correctness Ratios and Branch Coverage. DBMS Semantic Correctness Ratios Branch Coverage LLM\ud835\udc56\ud835\udc5b\ud835\udc5d\ud835\udc62\ud835\udc61 Wingfuzz\ud835\udc56\ud835\udc5b\ud835\udc5d\ud835\udc62\ud835\udc61 LLM\ud835\udc56\ud835\udc5b\ud835\udc5d\ud835\udc62\ud835\udc61 Wingfuzz\ud835\udc56\ud835\udc5b\ud835\udc5d\ud835\udc62\ud835\udc61 MonetDB 0.1594 0.4134 26,828 41,840 DuckDB 0.2551 0.3486 57,937 70,583 ClickHouse 0.1458 0.3093 124,887 145,383 Table 2 shows semantic correctness ratios and covered branches of LLM\ud835\udc56\ud835\udc5b\ud835\udc5d\ud835\udc62\ud835\udc61and Wingfuzz\ud835\udc56\ud835\udc5b\ud835\udc5d\ud835\udc62\ud835\udc61on three selected DBMSs in 12 hours. From the table, we can see that Wingfuzz\ud835\udc56\ud835\udc5b\ud835\udc5d\ud835\udc62\ud835\udc61performs better than LLM\ud835\udc56\ud835\udc5b\ud835\udc5d\ud835\udc62\ud835\udc61on DBMS fuzzing. Specifically, the test cases generated by Wingfuzz\ud835\udc56\ud835\udc5b\ud835\udc5d\ud835\udc62\ud835\udc61contain 159.35%, 36.65%, and 112.14% more semantic-correct SQL statements, and cover 55.96%, 21.83%, and 16.41% more code branches than that of LLM\ud835\udc56\ud835\udc5b\ud835\udc5d\ud835\udc62\ud835\udc61on MonetDB, DuckDB, and ClickHouse, respectively. It indicates that LLM can not directly generate high-quality SQL queries as the input for DBMS fuzzing. The main reason is that the transfer seeds improve the diversity of mutated test cases, and the fuzzer\u2019s mutator promises the semantic correctness of SQL queries. 3.3 Monitor-Based DBMS Bug Detection Obstacle: The most critical step for DBMS bug detection is to construct the test oracles to identify the logic or performance bugs in DBMS. A test oracle refers to a mechanism in DBMS fuzzing to determine the correctness or validity of the DBMS\u2019s behaviors. Directly using LLMs to construct the test oracle is challenging as LLMs lack specific knowledge about the intricate workings and behaviors of DBMS. They can not access the internal logic, making it difficult to accurately predict or emulate DBMS behavior. Solution: To address the challenges, we propose the Runtime Monitor-Based DBMS Bug Detection following the REC 3.1 , which detects the anomalies of DBMS by analyzing the runtime information of DBMS in real-time. To ensure the robustness of DBMS, the DBMS usually contains the implicit exception handler mechanism, which captures the internal exceptions to avoid system crashes. These exceptions usually output some key internal states and behaviors of DBMS, such as wrong execution logic. Unlike directly using LLM to construct the test oracle by checking the execution result of the SQL query, our approach involves collecting runtime information from the DBMS and using LLM to analyze the runtime information for bug detection. The process contains two main steps. First, it instruments an agent to extract the runtime information of DBMS. Then, it collects the runtime information and uses LLM to detect the anomaly by predefining some error pattern. Table 3: Number of Reported Bugs and Real Bugs. DBMS LLM\ud835\udc4f\ud835\udc62\ud835\udc54 Wingfuzz\ud835\udc4f\ud835\udc62\ud835\udc54 Name Reported Real Reported Real MonetDB 61 0 6 3 DuckDB 54 0 5 3 ClickHouse 67 1 3 3 Result: To evaluate the effectiveness of our recommendation, we implement the solution with Wingfuzz\ud835\udc4f\ud835\udc62\ud835\udc54and compare it with LLM\ud835\udc4f\ud835\udc62\ud835\udc54, which directly uses LLM to determine whether the execution of the SQL query is right during the fuzz loop. Table 3 When Fuzzing Meets LLMs: Challenges and Opportunities FSE\u201924, July 2024, Porto de Galinhas, Brazil shows the number of reported bugs and real bugs by LLM\ud835\udc4f\ud835\udc62\ud835\udc54and Wingfuzz\ud835\udc4f\ud835\udc62\ud835\udc54in 12 hours on MonetDB, DuckDB and ClickHouse. It shows the Wingfuzz\ud835\udc4f\ud835\udc62\ud835\udc54can detect more anomalies and has fewer false positives than LLM\ud835\udc4f\ud835\udc62\ud835\udc54. Specifically, LLM\ud835\udc4f\ud835\udc62\ud835\udc54totoally reported 182 bugs but only 1 bug is real. Instead, Wingfuzz\ud835\udc4f\ud835\udc62\ud835\udc54reported 14 bugs and 9 bugs are real bugs and have been confirmed. The main reason is that the collected runtime information contains the error message of DBMS, and it helps LLM to analyze and detect bugs. 4 CONCLUSION We identify and systematically analyze five major challenges when using LLM in fuzzing and confirm their prevalence through a review of most recent top-tier conference papers. These challenges affect the effectiveness and accuracy of the LLM-based fuzzing technologies. To support researchers in avoiding them, we provide recommendations that are applicable to effectively assist the main steps in fuzzing, depending on the richness of the relevant corpus and documentation. Our preliminary evaluation further demonstrates that these recommendations effectively address the challenges in LLM-assisted DBMS fuzzing."
17
+ }
intro_8K/test_introduction_short_2404.16300v1.json ADDED
@@ -0,0 +1,17 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16300v1",
3
+ "title": "Reinforcement Learning with Generative Models for Compact Support Sets",
4
+ "abstract": "Foundation models contain a wealth of information from their vast number of\ntraining samples. However, most prior arts fail to extract this information in\na precise and efficient way for small sample sizes. In this work, we propose a\nframework utilizing reinforcement learning as a control for foundation models,\nallowing for the granular generation of small, focused synthetic support sets\nto augment the performance of neural network models on real data classification\ntasks. We first allow a reinforcement learning agent access to a novel context\nbased dictionary; the agent then uses this dictionary with a novel prompt\nstructure to form and optimize prompts as inputs to generative models,\nreceiving feedback based on a reward function combining the change in\nvalidation accuracy and entropy. A support set is formed this way over several\nexploration steps. Our framework produced excellent results, increasing\nclassification accuracy by significant margins for no additional labelling or\ndata cost.",
5
+ "authors": "Nico Schiavone, Xingyu Li",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "cs.LG",
9
+ "cats": [
10
+ "cs.LG",
11
+ "cs.CV"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "Model AND Based AND Reinforcement AND Learning",
15
+ "gt": "Deep learning [10] is one of the most popular and suc- cessful methods for any task where a large dataset can be procured, including fundamental computer vision tasks like classification. However, large, well-balanced, well-labelled datasets are often difficult and prohibitively expensive to ac- quire. Consequently, much of contemporary image classi- fication utilizes a high quality source dataset and support sets with highly relevant data to the target task. The gen- eration of such support sets has been a focus of contem- porary research, and recently utilizes the output of the un- precedented success of large pretrained generative models like Stable Diffusion [13]. The advancements in generative models have led to the rise of synthetic datasets, where im- ages are generated in large scale according to the target task and used in place of a real training dataset, yielding excel- lent results [6, 11, 22]. Despite these advancements, the body of research relat- ing to synthetic datasets remains primarily focused on large- batch image synthesis. In this way, any issues caused by the unpredictable behaviour of modern generative models can easily be smoothed out. However, this results in the major- ity of successful applications requiring tens of thousands of images generated for a single task [6, 11], which is ineffi- cient in time and cost. The goal of creating specific, highly focused support sets composed of several hundred images rather than sev- eral thousand is currently an open problem at the forefront of generative computer vision research. Consequently, it raises the question of if synthetic data can supplement real data, making up a very small portion of the overall dataset to shore up specific weaknesses, or whether synthetic data must make up a significant amount of the dataset if it is to be used at all. Reinforcement learning [14] is a popular control scheme that has an agent learn the optimal behaviour given an en- vironment and a reward for desirable interactions. Recent studies have found reinforcement learning effective at writ- ing and re-writing prompts [3, 7], but the use of reinforce- ment learning to guide the evolution of prompts has yet to be explored. Reinforcement learning is an excellent frame- work for imposing specific learned behaviours upon the re- sulting agent, and we posit that combining reinforcement learning with pretrained generative models will impart that much-needed specificity on the synthesized images, result- ing in significant performance gains for a relatively small number of synthetic images. In this work, we introduce a framework utilizing rein- forcement learning as a control for large generative models to synthesize precise support sets, intended to bolster the lacking aspects of real datasets without overwriting them for increased model performance at no extra data or labelling costs. To accomplish this, we utilize a dictionary based on the features of the original training dataset, and allow a re- inforcement learning agent to learn the optimal structures and word choice to generate high quality, specific prompts for Stable Diffusion. The controlled output of Stable Dif- fusion is then used to supplement the existing training data for a neural network model, and the performance of this model on a validation set is given as feedback to the agent. 1 arXiv:2404.16300v1 [cs.LG] 25 Apr 2024 In this way, the framework allows Stable Diffusion to act as an extension of the reinforcement learning agent, acting directly to improve the performance of the model by tweak- ing the prompts that make up the support set. We evalu- ate this framework on several datasets, including CIFAR- 10 [8], and Tiny-ImageNet [9], showing free improvements on neural networks of \u223c1% for less than 500 total images in the support set. The main contributions for this work are: \u2022 A novel framework combining reinforcement learning and large pretrained generative models for the construc- tion of small, focused, and effective synthetic support sets. \u2022 A new reward scheme that facilitates a better interaction between reinforcement learning and classification.",
16
+ "main_content": "2.1. Reinforcement Learning Reinforcement learning [14] defines an agent and an environment with rules on how they can interact. The agent receives rewards based on how their actions affect the environment, with one of several reward schemes. The rewards inform the optimal behaviour of the agent, and thus the desirable properties of the end model. Popular reward schemes include exploration-based, which incentivizes exploring the action space, and goal-based, which explores to achieve set goals. Past works have attempted to use reinforcement learning directly in classification algorithms, but this generally yields lacklustre results for the amount of effort and training time required [4]. This is due to the long convergence time of conventional reinforcement learning algorithms, and the relative ease of using simple deep learning models when a well-labelled dataset is available, rather than optimizing the loss with an agent. In our framework, we circumvent this issue by using a deep learning model for classification and optimizing it by altering the training set, rather than directly making the predictions using the agent. 2.2. Generative Models Generative models have shown unprecedented success in many tasks in natural language processing and computer vision [1, 13]. Such models are often trained on datasets with in excess of one billion images, which stores a large wealth of knowledge that can be accessed through their generation capabilities [1]. These generative models have been widely used in contemporary research for image synthesis, such as augmentation of existing samples to artificially simulate a larger dataset [19, 20]. Replacing the dataset entirely with synthetic images is also a topic of interest, with excellent preliminary results despite no real data [22]. Finally, the generation of large support sets to supplement real data has Figure 1. Overall framework also been explored, but this mainly utilizes synthesis over a large scale to shore up the weaknesses of a dataset [11]. Contemporary generative models usually require text prompts to guide their behaviour. General prompting is successful in simple tasks, such as guided image synthesis, but complex and specific prompts often lead to unexpected results. This leads to an area of research known as prompt engineering, which is the focus of much of the recent literature in the topic of large models [2]. Common approaches generally utilize a fixed set of prompts that have been carefully engineered to produce certain results; in our framework, we allow the prompts to evolve naturally from a general structure to their optimal state using reinforcement learning to choose the subjects and the model performance as feedback. 3. Methods 3.1. Problem Formulation Initially, there is a well-labelled dataset D, consisting of N training samples, and a synthetic support set S, consisting of k\u2217m samples, where k is the current step number, and m is the number of samples generated per step. In this work, we impose an extra limit Nsyn on the number of samples in S. There is also a validation set V, and a test set T . Our goal in this study is to train a reinforcement learning agent A to optimally control a pretrained generative model, such as Stable Diffusion, to optimally populate S with at most Nsyn synthetic images, where Nsyn << N. As shown in Fig. 1, in each step, the agent forms a prompt, feeds it to Stable Diffusion, and the resulting images are added to S. The resulting dataset D+S is used to train a model M , and its performance on V is passed back to A as feedback. This 2 Figure 2. Images generated using our framework using CIFAR10 [8] labels. continues until a total of Nsyn images are contained within S, at which point the exploration thread terminates. When all exploration threads within the preset exploration budget are explored, the resulting framework is tested on the test set T yielding the final performance. 3.2. Image Synthesis For image synthesis, we are using Stable Diffusion [13], a successful text-to-image model that is trained on billions of text-image pairs.Stable Diffusion has already been used to great effect in contemporary works when the aim is to replace a real dataset [18, 22], and to augment existing samples [19, 20], but with comparatively fewer works focusing on consistently generating small, effective support sets. 3.3. Controlling the Synthesis with RL Reinforcement learning (RL) defines an agent and an environment, and gives a set of actions that the agent can take to affect the environment. In our framework, we take a classification model and its training dataset as the Environment. The reinforcement learning agent adaptively selects text prompts for the generative model towards image synthesis, which supplements the training set for classification performance improvement. The agent then receives feedback based on the change in the model\u2019s performance, which is taken as the State in our reinforcement framework. In this study, we adopt the policy-based method for agent optimization, building a policy \u03c0 : s \u2212 \u2192a that maps states to optimal actions [14]. The specific objective function is: L(\u03b8) = \u02c6 E[min(rt(\u03b8) \u02c6 At, clip(rt(\u03b8), 1 \u2212\u03f5, 1 + \u03f5) \u02c6 At)]. (1) where rt = \u03c0\u03b8(at|st) \u03c0\u03b8old(at|st) is the probability ratio, \u02c6 At is the estimator of the advantage function at step t, and \u03f5 is a small value. Action space: Our framework allows the reinforcement learning agent to interact with Stable Diffusion by forming prompts. Prompts of unlimited length are subject to unmanageable time complexity, so we utilize a set dictionary based on the dataset. We formulate the interaction with a basic sentence structure with enough expression to accurately place the image, and pose the following format: \u201dA {domain} of a {class}, {class}, and {class}\u201d. Domains include photographs, digital artwork, paintings, mosaics, and other clear representations of the class. Next, three class names are chosen from the list of classes in the dataset. We notice that Stable Diffusion usually puts more attention on the first \u201dclass\u201d term and generates the corresponding theme in the resulting image. Thus, our prompt design allows the agent to position the generated images at the boundaries between classes, which is where new images are most effective for improving classification performance [12]. This is in contrast to traditional prompting methods, where the prompt describes the primary subject of interest with qualifiers for other subjects. We instead follow contemporary diversity research, prioritizing brevity and maximal control [15]. The benefits of our approach are that single-class representative samples can be easily generated as follows: \u201dA {domain} of a car, car, and car\u201d, which has the added benefit of including more representative features from the chosen class due to the repetition. Multi-class samples can be equally easily generated by including two or three different class names, and the significance of each class can be altered by changing the order the classes appear in. In this way, our method allows the agent a yet unseen amount of control over the output of Stable Diffusion, resulting in significantly improved precision. Reward function: The agent\u2019s desired behaviour is to increase the accuracy of the classification model as much as possible with limited image synthesis. In our framework, we use a combined reward function, utilizing the validation set accuracy and the entropy to bias our model towards high, balanced accuracy. Under the assumption of a welllabelled training dataset, the former (i.e. classification accuracy on validate set) offers the most unfiltered access to the state changes in the model\u2019s performance. It is noteworthy that different from previous works utilizing reinforcement learning for classification, the accuracy alone is used, the addition of entropy in our reward allows the framework to simultaneously reward the improvement of weak classes, which improves the overall model performance on underrepresented classes. The formulation of our reward function is shown in Eq. 2, where the entropy under a state s can be calculated following Eq. 3. r(s, s\u2032) = \u2206Acc(s \u2192s\u2032) \u2212\u2206\u03c3entropy(s \u2192s\u2032), (2) \u03c3entropy(x, M) = \u2212\u03a3k i=1pM(yi|x) log pM(yi|x), (3) where s\u2032 is the state after performing action a, and s is the state before performing action a, and pM(\u02c6 y|x) represents the class probability of sample x under model M. 3 Pretrained Rand Syn Ours ResNet-18 92.0 92.3 92.7 ResNet-50 93.9 94.2 94.5 VGG-16 93.9 94.1 94.9 ShuffleNetV2 93.3 93.6 94.1 EfficientNetV2S 94.1 94.3 95.2 Table 1. Classification accuracy (%) on CIFAR-10 [8]. Pretrained Rand Syn Ours ResNet-18 54.3 54.4 54.7 ResNet-50 71.1 71.1 71.5 VGG-16 63.2 63.4 63.9 ShuffleNetV2 48.6 48.6 48.8 EfficientNetV2S 69.9 70.0 70.4 Table 2. Classification accuracy (%) on Tiny ImageNet [9]. 3.4. Full Algorithm One training step for the agent A consists of the following processes, in order: 1. A chooses a domain and three classes in the prompt to represent the generated images. 2. m images are generated following the prompt, which are added to S. 3. M is trained on D + S, and tested again on V, reporting the accuracy and entropy of the predictions. 4. The reward r(s, s\u2032) is given back to the agent. If k = 1, then the pretrained statistics are used in place of the data from the previous state s. This sequence is optimized using Proximal-PolicyOptimization [14] to find the optimal set of Nsyn synthetic samples contained in S. After the training process is completed, the algorithm has found the optimal prompts for to generate the optimal support set, and runs a final time without feedback to form S, the desired support set. 4. Results & Discussion 4.1. Datasets We evaluate our framework on two popular natural image datasets, CIFAR-10 [8] and Tiny ImageNet [9]. We chose these datasets due to computational reasons \u2013 the action space complexity scales as n3, where n is the number of classes in the dataset. Tiny ImageNet is a 200 class balanced dataset of 100 000 64x64 coloured images, and CIFAR-10 is a 10 class balanced dataset of 60 000 32x32 coloured images. In each case, we split the datasets using an 80:10:10 ratio of train:validation:test. 4.2. Experimental Protocol We follow the setup laid out in Section 3. For both datasets, we use a domain dictionary of {\u201dphotograph\u201d, \u201dpainting\u201d, \u201dstill-life\u201d, \u201dimage\u201d, \u201ddigital image\u201d} and a class dictionary composed of each class name once. In experiments, we select k = 10 to generate 10 images per step and our algorithm will run until a maximum of Nsyn = 400 images. Various models, including ResNet18, ResNet50 [5], ShuffleNetV2 [17], VGG-16 [16], and EfficientNetV2 [21], are evaluated in our experiments. We compare the results of our framework against vanilla trained models and the models trained with random synthetic images in equal number. The \u2019Random Synthesis\u2019 setting adds to the training set 400 images synthesized by selecting random classes to fill the blanks in the prompt, and our method uses the full reinforcement learning framework. 4.3. Main Results and Discussion The results of applying our framework are reported in Tables 1 and 2. In addition, example images generated off of the CIFAR-10 dataset are demonstrated in Fig. 2. From these results, we can see that our framework is superior to random synthesis for small-batch support set synthesis, increasing the accuracy by as much as 0.9% over the random synthesis method, and 1.1% over the baseline model. Notably, for two backbones on Tiny ImageNet, random synthesis fails to improve the performance of the model by > 0.1%, while our framework increases the accuracy by \u223c0.2%. In addition, our method adds only 0.33% extra images for CIFAR-10, and 0.2% for Tiny-ImageNet. Our experimental results show that the proposed framework has a high performance gain relative to the number of samples synthesized, a characteristic not seen in prior arts. We attribute this gain to the fine control that our designed reinforcement learning agent gives over the output of the large pretrained model, and the effectiveness of the feedback given back to the agent. Our framework currently requires some amount of information about the target dataset in order to work: class names, and a rough domain. This could be bypassed by forming the dictionary using an image-to-text encoder on representative samples after clustering by an unsupervised learning algorithm, but we leave the pursuit of this direction for future work. 5. Conclusions In this work, we proposed a framework allowing for the granular generation of small, focused synthetic support sets to augment the performance of general backbone networks on real data classification tasks. Our framework exploits the wealth of information present in large pretrained models by controlling their output using reinforcement learning agents, so that optimal, explainable prompts can be generated over many training steps. Our framework produced excellent results on a variety of backbones, increasing classification accuracy by significant margins for no additional labelling or data cost. 4"
17
+ }
intro_8K/test_introduction_short_2404.16301v1.json ADDED
@@ -0,0 +1,16 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16301v1",
3
+ "title": "Style Adaptation for Domain-adaptive Semantic Segmentation",
4
+ "abstract": "Unsupervised Domain Adaptation (UDA) refers to the method that utilizes\nannotated source domain data and unlabeled target domain data to train a model\ncapable of generalizing to the target domain data. Domain discrepancy leads to\na significant decrease in the performance of general network models trained on\nthe source domain data when applied to the target domain. We introduce a\nstraightforward approach to mitigate the domain discrepancy, which necessitates\nno additional parameter calculations and seamlessly integrates with\nself-training-based UDA methods. Through the transfer of the target domain\nstyle to the source domain in the latent feature space, the model is trained to\nprioritize the target domain style during the decision-making process. We\ntackle the problem at both the image-level and shallow feature map level by\ntransferring the style information from the target domain to the source domain\ndata. As a result, we obtain a model that exhibits superior performance on the\ntarget domain. Our method yields remarkable enhancements in the\nstate-of-the-art performance for synthetic-to-real UDA tasks. For example, our\nproposed method attains a noteworthy UDA performance of 76.93 mIoU on the\nGTA->Cityscapes dataset, representing a notable improvement of +1.03 percentage\npoints over the previous state-of-the-art results.",
5
+ "authors": "Ting Li, Jianshu Chao, Deyu An",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Semantic AND Segmentation AND Image",
14
+ "gt": "Neural Networks [1] and Transformers [2] have achieved great success in semantic segmentation tasks, but super- vised tasks typically require a large amount of annotated data. Pixel-level annotation is needed, with at least an hour for each image [3], which significantly increases the cost. One approach to address this problem is to utilize existing annotated data or easily obtainable synthetic data to train models and test them on target data. However, due to domain differences, the model\u2019s performance metrics often decline substantially when tested on target data. In order to obtain a more robust model, researchers have proposed UDA meth- ods [4][5][6], transferring knowledge from annotated source domain data to unannotated target data. It has been proven that CNNs are sensitive to distribu- tion shifts [7] in image classification. Recent studies [8] have shown that Transformers are more robust compared to these factors. In addition, CNNs mainly focus on texture [9], while Transformers emphasize shape, which is more similar to hu- man vision. Some researches have revealed significant dif- ferences between the induction bias of standard CNNs and human vision: humans primarily rely on object content (i.e., shape) for recognition [10], while CNNs exhibit a strong pref- erence for style (i.e., texture) [9]. This explains why CNNs are more susceptible to changes when switching between do- mains, as image style is more likely to vary across different domains. Early studies [11][12][13] have confirmed that feature dis- tribution shifts caused by style differences mainly occur in the shallow layers of the network. This implies that the shal- low layers\u2019 feature distribution in the network can reflect the style information of the input images. Therefore, following these works\u2019 methods, we manipulate the style features of the feature maps in the shallow layers of the network. The fea- ture extractor captures the style features of the target domain while preserving the content of the source domain. This ap- proach weakens the style features of the source domain while enhancing the style features of the target domain, achieving style feature transfer.",
15
+ "main_content": "2.1. Image to Image Domain Adaptation In UDA, we are given a source dataset as Ds = {(xs i, ys i )}Ns i=1 (1) where Ns is the number of the color images in the dataset, and ys \u2208RH\u00d7W represents the associated semantic map of xs \u2208RH\u00d7W \u00d73. Similarly, Dt = {xt i}Nt i=1 (2) arXiv:2404.16301v1 [cs.CV] 25 Apr 2024 is the target dataset where true semantic labels are missing. Typically, segmentation networks trained on Ds exhibit performance degradation when tested on Dt. Here, we use Fourier Domain Adaptation (FDA) [14] and RGB adaptation to reduce the domain gap between the two datasets at the image-level. FDA aims to minimize domain differences by replacing the low-frequency components in the target domain with those from the source domain. This is because low-frequency components can be inferred as the domain style. FDA has achieved significant improvements in semantic segmentation. Therefore, we employ the FDA method for data augmentation, as expressed by the formula: xs\u2192t = F\u22121([\u03b2\u25e6FA(xt)+(1\u2212\u03b2)\u25e6FA(xs), FP (xs)]) (3) The variables FA and FP denote the amplitude and phase components of the Fourier transform, respectively. In the inverse Fourier transform, the phase and amplitude components are remapped to the image space. The hyperparameter \u03b2 determines the filter\u2019s size in the inverse Fourier transform. Random RGB shift is a prevalent and widely adopted technique for data augmentation. Through our experimental observations, we fortuitously discovered that employing random RGB shift as a data augmentation technique significantly enhances the model\u2019s performance. Our hypothesis is that the image-level implementation of random RGB shift enables a closer resemblance between the style of the source and target domains, thereby mitigating the domain gap. Building upon the concept of random RGB shift, we introduce a RGB adaptation method as a solution for domain adaptation. The mean value of each channel is calculated for RGB images x as follows: \u00b5(x) = 1 HW H X h=1 W X w=1 xhw (4) xs\u2192t = xs + (\u00b5(xt) \u2212\u00b5(xs)) (5) The variables \u00b5(s) and \u00b5(t) represent the mean values of the source domain image and the target domain image, respectively, along the channel dimension. By employing this method, the content of the source domain image remains unaltered, thus preserving the availability of accurate labels. Additionally, it facilitates the closer alignment of the source domain image with the target domain image within the RGB space. 2.2. Style Adaptive Instance Normalization In UDA methods, the primary factor causing domain shift is the disparity in styles across domains. The presence of domain shift constrains the models\u2019 capacity for generalization in both domain adaptation and domain generalization tasks. Previous studies have demonstrated that shallow features extracted by backbone networks possess the capability to capture style information in images. Established approaches typically characterize the style features of an image by computing the mean and standard deviation along the channel dimension of shallow features. \u03c3(x) = v u u t 1 HW H X h=1 W X w=1 (xhw \u2212\u00b5(x))2 + \u03f5 (6) Conventional instance normalization can eliminate specific stylistic information from an image. Directly applying this method to UDA can diminish the network\u2019s capacity to learn the style information of the source domain images. However, it also disregards the style information of the target domain, resulting in diminished performance and limited generalization ability on the target domain. To decrease the network\u2019s ability to learn style information from the source domain images while enhancing the style information of the target domain images, we apply AdaIN [12] to replace the style information of the source domain images with that of the target domain images. Meanwhile, this method retains the content information of the source domain images. We term the proposed approach as Style Adaptive Instance Normalization (SAIN). The specific implementation formula is as follows: SAIN(xs, xt) = \u03c3(xt) \u0012xs \u2212\u00b5(xs) \u03c3(xs) \u0013 + \u00b5(xt) (7) \u00b5 and \u03c3 represent the mean and standard deviation of the feature map in the channel dimension, respectively. By transferring the style of the target domain to the source domain during the training process, the network g\u03b8 biased towards content no longer relies on the style of the source domain to make decisions but focuses more on content while also paying attention to the style of the target domain. During testing, we directly use network g\u03b8 without SAIN to ensure the independence of predictions and reduce computational burden. Therefore, we replace the original loss function with a content-biased loss, shown as follows: LS i = \u2212 H\u00d7W X j=1 C X c=1 yS (i,j) log SAIN \u0010 g\u03b8(xS i )(j,c), g\u03b8(xT i )(j,c)\u0011 (8) Furthermore, we follow the consistency training in DAFormer, which involves training the teacher network on augmented target data using DACS [15], while the teacher model generates pseudo-labels using non-augmented target images. 3. EXPERIMENTS 3.1. Implementation Details The proposed method is applied to two challenging unsupervised domain adaptation tasks, where there are abundant semantic segmentation labels in the synthetic domain (source domain), but not in the real domain (target domain). The two synthetic datasets used are GTA5 [16] and SYNTHIA [17], while the real domain dataset is CityScapes [3]. The proposed method is validated based on the DAFormer network and the Mix Transformer-B5 encoder [18]. All backbone networks are pretrained on ImageNet. In the default UDA setting, the MIC [6] masked image self-training strategy and the training parameters are used, including the AdamW optimizer, the encoder learning rate of 6 \u00d7 10\u22125, the decoder learning rate of 6 \u00d7 10\u22124, 60k training iterations, a batch size of 2, linear learning rate warm-up, and DACS [15] data augmentation. 3.2. Evaluation First, we integrate RGB adaptation with several significant UDA methods, including DAFormer [4], HRDA [5] and MIC [6], using the DAFormer framework. Table 1 demonstrates that RGB adaptation achieves notable improvement compared to previous UDA methods without RGB adaptation. Karras et al. [19] demonstrated that styles at different levels encode distinct visual attributes. Styles from fine-grained spatial resolution (lower levels in our network) encode lowlevel attributes like color and fine textures, whereas styles from coarse-grained spatial resolution (higher levels in our network) encode high-level attributes including global structure and textures. Therefore, the application of our SAIN module at the appropriate level is necessary to mitigate adverse style-induced biases. The networks from Block 1 to Block 4 become increasingly deeper. Figure 1 illustrates that the most notable improvement is achieved when applying SAIN in Block 3. However, applying SAIN to features at excessively low levels only has a limited impact on reducing feature biases. Additionally, using SAIN in excessively high-level styles may result in the loss of essential semantic information. Through our experimental findings, we discovered that the concurrent application of SAIN to both Block 2 and Block 3 results in optimal performance. Visual comparisons are conducted with the second performer (i.e., MIC), which utilizes the same segmentation network backbone as ours. Figure 2 illustrates that our model\u2019s prediction results demonstrate higher accuracy. Additionally, our approach demonstrates strong performance on some common categories, including the first row with the terrain, wall in the second row and building in the third and truck in fourth rows. We attribute this phenomenon to the transferability of RGB adaptation and SAIN, which enables the model to learn more style information from the target domain. 3.3. Influence of Style on UDA In the following, we analyze the underlying principles of our method on GTA\u2192Cityscapes. Firstly, we analyze the impact Table 1. Performance (IoU) of RGB adaptation with different UDA methods on GTA\u2192Cityscapes. Network UDA Method w/o RGB Adapt. w/ RGB Adapt. DAFormer DAFormer 68.3 69.37 DAFormer HRDA 73.8 74.45 DAFormer MIC 75.9 76.64 Fig. 1. The effect of SAIN on different blocks. of SAIN on UDA at various feature levels. As shown in Figure 1, as the network depth increases from Block 1 to Block 3, the improvement in the performance of UDA using SAIN also increases accordingly. The results in Table 2 and Table 3 demonstrate significant performance improvements across all benchmarks. In particular, our method has led to a +1.03 increase in mIoU for GTA\u2192CS and a +1.05 increase for Synthia\u2192CS. For most categories, such as building, fence, rider, truck, and train, there is a certain performance improvement. However, there are also some categories that have a slight performance decrease after using SAIN, such as bike. This may be due to the difference in annotation strategies for the bike category between the Cityscapes dataset and the GTA dataset. 4. CONCLUSION We have proposed a straightforward method for reducing domain discrepancy, which requires no additional learning and can be seamlessly integrated into self-supervised UDA. By transferring the target domain style to the source domain within the latent feature space, the model is trained to prioritize the style of the target domain during its decision-making process. Our experiments validate the remarkable performance enhancements achieved by our proposed method in Transformer-based domain adaptation. Despite its simplicity, the results indicate that our method actually surpasses the current state-of-the-art techniques. This suggests that the distributional misalignment caused by shallow-level statistics can indeed impact cross-domain generalization, but it can be Table 2. Semantic segmentation performance (IoU) on GTA\u2192Cityscapes Method Road S.walk Build. Wall Fence Pole Tr.light Tr.sign Veget. Terrain Sky Person Rider Car Truck Bus Train M.bike Bike mIoU ADVENT 89.4 33.1 81.0 26.6 26.8 27.2 33.5 24.7 83.9 36.7 78.8 58.7 30.5 84.8 38.5 44.5 1.7 31.6 32.4 45.5 DACS 89.9 39.7 87.9 30.7 39.5 38.5 46.4 52.8 88.0 44.0 88.8 67.2 35.8 84.5 45.7 50.2 0.0 27.3 34.0 52.1 ProDA 87.8 56.0 79.7 46.3 44.8 45.6 53.5 53.5 88.6 45.2 82.1 70.7 39.2 88.8 45.5 59.4 1.0 48.9 56.4 57.5 DAFormer 95.7 70.2 89.4 53.5 48.1 49.6 55.8 59.4 89.9 47.9 92.5 72.2 44.7 92.3 74.5 78.2 65.1 55.9 61.8 68.3 HRDA 96.4 74.4 91.0 61.6 51.5 57.1 63.9 69.3 91.3 48.4 94.2 79.0 52.9 93.9 84.1 85.7 75.9 63.9 67.5 73.8 MIC 97.4 80.1 91.7 61.2 56.9 59.7 66.0 71.3 91.7 51.4 94.3 79.8 56.1 94.6 85.4 90.3 80.4 64.5 68.5 75.9 Ours 97.24 79.12 92.15 61.45 58.5 60.98 69.23 72.58 91.93 53.33 93.99 81.26 60.68 94.84 88.3 90.5 83.24 65.59 66.82 76.93 Table 3. Semantic segmentation performance (IoU) on Synthia\u2192Cityscapes. Method Road S.walk Build. Wall Fence Pole Tr.Light Tr.Sign Veget. Terrain Sky Person Rider Car Truck Bus Train M.bike Bike mIoU ADVENT 85.6 42.2 79.7 8.7 0.4 25.9 5.4 8.1 80.4 \u2013 84.1 57.9 23.8 73.3 \u2013 36.4 \u2013 14.2 33.0 41.2 DACS 80.6 25.1 81.9 21.5 2.9 37.2 22.7 24.0 83.7 \u2013 90.8 67.6 38.3 82.9 \u2013 38.9 \u2013 28.5 47.6 48.3 ProDA 87.8 45.7 84.6 37.1 0.6 44.0 54.6 37.0 88.1 \u2013 84.4 74.2 24.3 88.2 \u2013 51.1 \u2013 40.5 45.6 55.5 DAFormer 84.5 40.7 88.4 41.5 6.5 50.0 55.0 54.6 86.0 \u2013 89.8 73.2 48.2 87.2 \u2013 53.2 \u2013 53.9 61.7 60.9 HRDA 85.2 47.7 88.8 49.5 4.8 57.2 65.7 60.9 85.3 \u2013 92.9 79.4 52.8 89.0 \u2013 64.7 \u2013 63.9 64.9 65.8 MIC 86.6 50.5 89.3 47.9 7.8 59.4 66.7 63.4 87.1 \u2013 94.6 81.0 58.9 90.1 \u2013 61.9 \u2013 67.1 64.3 67.3 Ours 89.06 57.39 90.1 51.37 7.99 60.53 69.03 63.44 86.57 \u2013 94.91 82.33 61.1 89.4 \u2013 57.28 \u2013 67.92 65.24 68.35 Fig. 2. Qualitative comparison with the previous state-of-the-art method MIC on GTA\u2192CS. The proposed method gets better segmentation for classes such as terrain, fence, building, and truck. mitigated through image translation and SAIN. The issue of model robustness in machine learning remains a challenging problem, and while we do not assert that our method is optimal, its simplicity may also yield performance improvements in other domain adaptation tasks. Acknowledgements: This work is supported by STS Project of Fujian Science and Technology Program (No. 2023T3042). 5."
16
+ }
intro_8K/test_introduction_short_2404.16302v1.json ADDED
@@ -0,0 +1,19 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16302v1",
3
+ "title": "CFMW: Cross-modality Fusion Mamba for Multispectral Object Detection under Adverse Weather Conditions",
4
+ "abstract": "Cross-modality images that integrate visible-infrared spectra cues can\nprovide richer complementary information for object detection. Despite this,\nexisting visible-infrared object detection methods severely degrade in severe\nweather conditions. This failure stems from the pronounced sensitivity of\nvisible images to environmental perturbations, such as rain, haze, and snow,\nwhich frequently cause false negatives and false positives in detection. To\naddress this issue, we introduce a novel and challenging task, termed\nvisible-infrared object detection under adverse weather conditions. To foster\nthis task, we have constructed a new Severe Weather Visible-Infrared Dataset\n(SWVID) with diverse severe weather scenes. Furthermore, we introduce the\nCross-modality Fusion Mamba with Weather-removal (CFMW) to augment detection\naccuracy in adverse weather conditions. Thanks to the proposed Weather Removal\nDiffusion Model (WRDM) and Cross-modality Fusion Mamba (CFM) modules, CFMW is\nable to mine more essential information of pedestrian features in\ncross-modality fusion, thus could transfer to other rarer scenarios with high\nefficiency and has adequate availability on those platforms with low computing\npower. To the best of our knowledge, this is the first study that targeted\nimprovement and integrated both Diffusion and Mamba modules in cross-modality\nobject detection, successfully expanding the practical application of this type\nof model with its higher accuracy and more advanced architecture. Extensive\nexperiments on both well-recognized and self-created datasets conclusively\ndemonstrate that our CFMW achieves state-of-the-art detection performance,\nsurpassing existing benchmarks. The dataset and source code will be made\npublicly available at https://github.com/lhy-zjut/CFMW.",
5
+ "authors": "Haoyuan Li, Qi Hu, You Yao, Kailun Yang, Peng Chen",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV",
11
+ "cs.MM",
12
+ "cs.RO",
13
+ "eess.IV"
14
+ ],
15
+ "label": "Original Paper",
16
+ "paper_cat": "Mamba",
17
+ "gt": "In an open and dynamic environment, object detection faces chal- lenging weather conditions such as rain, haze, and snow. The rapid advancement of deep-learning-based object detection methods has significantly improved the ability to identify and classify objects. Benefiting from the advanced feature extraction and fusion strate- gies, cross-modality object detection methods have achieved high accuracy, e.g., CFT [34], GAFF [56], and CFR_3 [54]. However, as shown in Fig. 1, the performance of these methods is often chal- lenged by adverse weather conditions, which can severely impact the visibility and quality of visual data. Although the infrared image \u2217Equal contribution. \u2020Corresponding authors (e-mail: [email protected], [email protected]). Figure 1: The proposed method can achieve high-precision cross-modality object detection under adverse weather condi- tions. The top two examples are results from CFT [34], while the bottom two examples are results from CFMW (ours). could provide complementary cues to some extent, it cannot re- pair the appearance distortion or information loss of visual images. Thus, traditional cross-modality object detection methods still face severe performance degradation under adverse weather. Existing methods cannot be directly applied to adverse weather conditions, since the color gamut of visible images is weakened by environmental disturbance and the existing fusion methods are difficult to fully fuse visible and infrared spectra, nor have they made sufficient training under corresponding datasets. To make up the blank in this research area, we construct and release a new dataset, named Severe Weather Visible-Infrared Dataset (SWVID), as well as propose a novel framework named Cross-modality Fusion Mamba with Weather-removal (CFMW). To facilitate research in this area, we propose a new visible- infrared dataset, named SWVID, which is designed to encompass diverse severe weather scenarios by mathematically formalizing the impact of various weather phenomena on images. Specifically, SWVID comprises 20, 000 aligned visible-infrared image pairs, span- ning three weather conditions and two scenes, with each condition and scene evenly distributed. Motivated by the critical research gap highlighted in Fig. 1, where current methods falter in adverse weather, we introduce CFMW for multispectral object detection under adverse weather conditions. Our CFMW leverages a Weather Removal Diffusion Model (WRDM) and Cross-modality Fusion Mamba (CFM) to enhance detection accuracy amid adverse weather arXiv:2404.16302v1 [cs.CV] 25 Apr 2024 conditions while minimizing computational burden. Specifically, WRDM is employed to restore affected visible images before fusion with infrared counterparts, offering plug-and-play compatibility with image fusion networks. Based on learning reversal to increase the order of noise and disrupt the process of data samples, the WRDM model is advantageous to minimize the impact of adverse weather conditions. Additionally, CFM can be integrated into the feature extraction backbone, effectively integrating global contex- tual information from diverse modalities. Recent research shows that Mamba [10] achieves higher inference speed and overall met- rics than the equivalent-scale transformer. To our knowledge, this study represents the first endeavor to employ Diffusion models and Mamba for multispectral object detection. Extensive experiments on both well-established and self-created datasets demonstrate that our CFMW method achieves superior detection performance compared to existing benchmarks. Specifi- cally, we achieved about 17% performance improvement compared with the current state-of-the-art image restoration methods. The proposed method achieves about 8% accuracy improvement while saving 51.2% GPU memory compared with CFT [34], a state-of-the- art cross-modality object detection method. At a glance, we summarize the main contributions as follows: \u2022 We introduce a novel task focusing on visible-infrared object detection under adverse weather conditions and develop a new dataset called the Severe Weather Visible-Infrared Dataset (SWVID), which simulates real-world conditions. SWVID comprises 60, 000 paired visible-infrared images and labels, encompassing weather conditions such as rain, haze, and snow; \u2022 We propose a novel approach, Cross-modality Fusion Mamba with Weather-removal (CFMW) for multispectral object de- tection under adverse weather conditions; \u2022 We introduce a novel Weather Removal Diffusion Model (WRDM) and Cross-modality Fusion Mamba (CFM) modules to tackle image de-weathering and visible-infrared object detection tasks simultaneously; \u2022 Extensive experiments demonstrate that this integration achieves the best task migration capacity, resulting in state- of-the-art performance for both tasks.",
18
+ "main_content": "In this section, we briefly review previous related works about crossmodality object detection, state space model, and multi-weather image restoration. Cross-modality Object Detection The existing cross-modality object detection methods can be divided into two categories: feature level and pixel level fusion, distinguished through feature fusion methods and timing. Recently, dual stream object detection models based on convolutional neural networks have made great progress in improving recognition performance [4, 34, 37, 54, 55], while pixel level fusion methods have also achieved good performance [5, 44, 59]. Other works employing methods such as GAN to effective integration also have achieved good results [51, 58, 59]. Those works can be integrated into downstream tasks such as object detection. Traditional convolutional neural networks have limited receptive fields that the information is only integrated into a local area when using the convolution operator, where the self-attention operator of the transformer can learn long-range dependencies [43]. Thus, a transformer-based method, named Cross-Modality Fusion Transformer (CFT) [34], was presented and achieved state-of-theart detection performance. Differing from these works, we first introduce Mamba into cross-modality object detection to learn long-range dependencies with gating mechanisms, achieving high accuracy and low computation overhead simultaneously. State Space Model The concept of the State Space Model was initially introduced in the S4 model [11], presenting a distinctive architecture capable of effectively modeling global information, compared with traditional convolutional neural networks and transformers. Based on S4, the S5 model [38] reduces complexity to a linear level, with H3 [31] introducing it into language model tasks. Mamba [10] introduced an input-activate mechanism to enhance the State Space model, achieving higher inference speed and overall metrics compared with equivalent-scale transformers. With the introduction of Vision Mamba [61] and Vmamba [30], the application of the State Space Model has been extended into visual tasks. Currently, existing research does not consider effectively generalizing the State Space Model to cross-modality object detection. Multi-Weather Image Restoration Recently, some attempts have been made to unity multiple recovery tasks in a single deep learning framework, including generating modeling solutions to recover superimposed noise types [9], recovering superimposed noise or weather damage with unknown test time, or especially unfavorable multi-weather image fading [3, 22, 42]. All in One [23] unified a weather restoration method with a multi-encoder and decoder architecture. It is worth noting that diffusion-based conditional generative models have shown state-of-the-art performance in various tasks such as class-conditional data synthesis with classifier guidance [7], image super-resolution [14], image deblurring [48]. Denosing diffusion restoration models (DDRM) [21] were proposed for general linear inverse image restoration problems, exploiting pro-trained denoising diffusion models for unsupervised posterior sampling. Generally, diffusion models were so far not considered to be generalized to adverse weather scenes in the cross-modality image fusion field. Unlike existing works, we expand the multiweather restoration to the field of cross-modality fusion. 3 PROPOSED FRAMEWORK 3.1 Overview As shown in Fig. 2, CFMW comprises two main stages. In the multi-weather image restoration stage, we aim to achieve image restoration of three types of adverse weather conditions (rain, snow, and haze) and implement it using a unified framework with only one pre-trained weight. In the cross-modality fusion stage, we aim to integrate unique features of different modalities. Inspired by CFT [34], to show the effectiveness of our proposed CFM fusion model, we extend the framework of YOLOv5 to enable multispectral object detection. We present our carefully designed loss functions and training procedure for WRDM and CFM in the last subsection. 3.2 Weather Removal Diffusion Model (WRDM) Denoising diffusion models [13, 39] are a class of generative models, that learn a Markov chain that gradually transforms a Gaussian Figure 2: Framework of Cross-Modality Fusion Mamba backbone. It has three parts: a Weather Removal Diffusion Model (WRDM), a two-stream feature extraction network (our baseline), and three Cross-Modality Fusion Mamba (CFM) modules. \u00c9 represents element-wise add, \u00cb represents element-wise multiply, and C1 is short of 1-dimension convolutions. noise distribution into the data distribution trained by the models. The original denoising diffusion probabilistic models (DDPMs)[13] diffusion process (data to noise) and generative process (noise to data) are based on a Markov chain process, resulting in a large number of steps and huge time consumption. Thus, denoising diffusion implicit models (DDIMs) [40] were presented to accelerate sampling, providing a more efficient class of iterative implicit probabilistic models. DDIMs define the generative process via a class of non-Markovian diffusion processes that lead to the same training objective as DDPMs but can produce deterministic generative processes, thus speeding up sample generation. In DDIMs, implicit sampling refers to the generation of samples from the latent space of the model in a deterministic manner. Implicit sampling using a noise estimator network can be performed by: \ud835\udc4b\ud835\udc61\u22121 = \u221a\u00af \ud835\udefc\ud835\udc61\u22121 \u00b7 (\ud835\udc4b\ud835\udc61\u2212\u221a1 \u2212\u00af \ud835\udefc\ud835\udc61\u00b7 \ud835\udf16\ud835\udf03(\ud835\udc4b\ud835\udc61,\ud835\udc61) \u221a\u00af \ud835\udefc\ud835\udc61 ) +\u221a1 \u2212 \u00af \ud835\udefc\ud835\udc61\u22121 \u00b7 \ud835\udf16\ud835\udf03(\ud835\udc4b\ud835\udc61,\ud835\udc61). (1) where \ud835\udc4b\ud835\udc61and \ud835\udc4b\ud835\udc61\u22121 represent the data \ud835\udc4b0 \u223c\ud835\udc5e(\ud835\udc4b0)) in different diffusion time steps, \ud835\udefc\ud835\udc61= 1 \u2212\ud835\udefd\ud835\udc61, \u00af \ud835\udefc\ud835\udc61= \ud835\udc61 \u00ce \ud835\udc56=1 \ud835\udefc\ud835\udc56, and \ud835\udf16\ud835\udf03(\ud835\udc4b\ud835\udc61,\ud835\udc61) can be optimized as: E\ud835\udc4b0,\ud835\udc61,\ud835\udf16\ud835\udc61\u223c\ud835\udc41(0, \ud835\udc70), [\u2225\ud835\udf16\ud835\udc61\u2212\ud835\udf16\ud835\udf03(\u221a\u00af \ud835\udefc\ud835\udc61\ud835\udc4b0+\u221a1 \u2212\u00af \ud835\udefc\ud835\udc61\ud835\udf16\ud835\udc61,\ud835\udc61\u22252]. Conditional diffusion models have shown state-of-the-art imageconditional data synthesis and editing capabilities [6, 7]. The core idea is to learn a conditional reverse process without changing the diffusion process. Our proposed WRDM is a conditional diffusion model, adding reference images (clear images) in the process of sampling to guide the reconstructed image to be similar to reference images. As shown in Fig. 3, we introduce a new parameter e \ud835\udc4b, which represents the weather-degraded observation. A Markov chain is defined as a diffusion process, and Gaussian noise is gradually added to simulate the gradual degradation of data samples until reaching time point \ud835\udc47. We ground our model hyper-parameters via a U-Net architecture based on WideResNet [52]. For the input images conditional reflection, we connect patch \ud835\udc65\ud835\udc47and e \ud835\udc65, to obtain the six-dimensional input image channel. Conditioning the reverse process on e \ud835\udc4bcan maintain its compatibility with implicit sampling, so we could expand Eq. (1) as: \ud835\udc4b\ud835\udc61\u22121 = \u221a\u00af \ud835\udefc\ud835\udc61\u22121 \u00b7 (\ud835\udc4b\ud835\udc61\u2212\u221a1 \u2212\u00af \ud835\udefc\ud835\udc61\u00b7 \ud835\udf16\ud835\udf03(\ud835\udc4b\ud835\udc61, e \ud835\udc4b,\ud835\udc61) \u221a\u00af \ud835\udefc\ud835\udc61 ) +\u221a1 \u2212 \u00af \ud835\udefc\ud835\udc61\u22121 \u00b7 \ud835\udf16\ud835\udf03(\ud835\udc4b\ud835\udc61, e \ud835\udc4b,\ud835\udc61). (2) The sampling process starts from \ud835\udc4b\ud835\udc47\u223c\ud835\udc41(0, \ud835\udc70), following a deterministic reverse path towards \ud835\udc4b0 with fidelity. See more derivation details in the supplementary material. Our proposed WRDM is a patch-based conditional diffusion model, guiding the reverse sampling process toward smoothness across neighboring patches. During training, we randomly sample the \ud835\udc5d\ud835\udc65\ud835\udc5dpatch location for \ud835\udc43\ud835\udc56within the compute of image dimensions. Under any given time step \ud835\udc47, we reverse-sample the average estimated noise of each pixel in the overlapping patch area according to Fig. 3, which effectively controls the reverse sampling process to ensure that all adjacent patches have higher fidelity. Furthermore, WRDM can be regarded as a plug-in, embedded into other works such as visible-infrared image fusion to remove the influence of multi-weather conditions, which is demonstrated experimentally in Fig. 5. 3.3 Cross-modality Fusion Mamba (CFM) The goal of Cross-modality Fusion Mamba (CFM) is to introduce the advanced state space model (SSM), or Mamba [10], to crossmodality object detection. Structured state space sequence models (S4) and Mamba are inspired by the continuous system, mapping a 1-D function or sequence \ud835\udc65(\ud835\udc61) \u2208R \u2192\ud835\udc66(\ud835\udc61) through a hidden Figure 3: Schematic diagram of WRDM training and reasoning process. The left side is the framework of WRDM. We use a paired data distribution (e \ud835\udc4b,\ud835\udc4b\ud835\udc61), splitting into (e \ud835\udc4b(\ud835\udc51),\ud835\udc4b(\ud835\udc51) \ud835\udc61 ) for model-training. The right side is the illustration of the patch-based diffusive image restoration pipeline (4 patches for example here). state \u210e(\ud835\udc61) \u2208R\ud835\udc41. This system uses \ud835\udc68\u2208R\ud835\udc41\u00d7\ud835\udc41as the evolution parameter and \ud835\udc69\u2208R\ud835\udc41\u00d71, \ud835\udc6a\u2208R1\u00d7\ud835\udc41as the projection parameters, so that \ud835\udc66(\ud835\udc61) could evolve as follows: \u210e\u2032(\ud835\udc61) = \ud835\udc68\u210e(\ud835\udc61) + \ud835\udc69\ud835\udc65(\ud835\udc61), \ud835\udc66(\ud835\udc61) = \ud835\udc6a\u210e\u2032(\ud835\udc61). (3) Notice that S4 and Mamba are the discrete versions of the continuous system, including a timescale parameter \u0394 to transform the continuous parameters \ud835\udc34, \ud835\udc35to discrete parameters \u00af \ud835\udc68, \u00af \ud835\udc69as follows: \u00af \ud835\udc68= \ud835\udc52\ud835\udc65\ud835\udc5d(\u0394\ud835\udc68), \u00af \ud835\udc69= (\u0394\ud835\udc68)\u22121(\ud835\udc52\ud835\udc65\ud835\udc5d(\u0394\ud835\udc68) \u2212\ud835\udc70) \u00b7 \u0394\ud835\udc69. (4) After that, Eq. (3) could be rewritten as: \u210e\ud835\udc61= \u00af \ud835\udc68\u210e\ud835\udc61\u22121 + \u00af \ud835\udc69\ud835\udc65\ud835\udc61, \ud835\udc66\ud835\udc61= \ud835\udc6a\u210e\ud835\udc61. (5) Finally, the models compute output through a global convolution as follows: \u00af \ud835\udc72= \ud835\udc6a\u00af \ud835\udc69, \ud835\udc6a\u00af \ud835\udc68\u00af \ud835\udc69, ..., \ud835\udc6a\u00af \ud835\udc68\ud835\udc74\u22121 \u00af \ud835\udc69, \ud835\udc66= \ud835\udc65\u2217\u00af \ud835\udc72. (6) where \ud835\udc74is the length of the input sequence x, and \u00af \ud835\udc72\u2208R\ud835\udc40is a structured convolution kernel. Standard Mamba is designed for the 1-D sequence. As shown in Vision Mamba (Vim), 2-D multispectral images \ud835\udc61\u2208R\ud835\udc3b\u00d7\ud835\udc4a\u00d7\ud835\udc36 could be transformed into the flattened 2-D patches \ud835\udc65\ud835\udc5d\u2208R\ud835\udc3d\u00d7(\ud835\udc432\u00b7\ud835\udc36), where (\ud835\udc3b,\ud835\udc4a) represents the size of input images, \ud835\udc36is the channels, and \ud835\udc43is the size of image patches. Similarly, we linearly project the \ud835\udc65\ud835\udc5dto the vector with size \ud835\udc37and add position embeddings \ud835\udc6c\ud835\udc5d\ud835\udc5c\ud835\udc60\u2208R(\ud835\udc3d+1)\u00d7\ud835\udc37as follows: \ud835\udc7b0 = [\ud835\udc61\ud835\udc50\ud835\udc59\ud835\udc60;\ud835\udc611 \ud835\udc5d\ud835\udc7e;\ud835\udc612 \ud835\udc5d\ud835\udc7e; ...;\ud835\udc61\ud835\udc3d \ud835\udc5d\ud835\udc7e] + \ud835\udc6c\ud835\udc5d\ud835\udc5c\ud835\udc60. (7) where \ud835\udc61\ud835\udc57 \ud835\udc43is the \ud835\udc57\u2212\ud835\udc61\u210epath of \ud835\udc95, \ud835\udc7e\u2208R(\ud835\udc432\u00b7\ud835\udc36)\u00d7\ud835\udc37is the learnable projection matrix. Here are more details of the proposed CFM. As mentioned in the introduction section, the RGB modality and the Thermal modality show different features under different lighting and weather conditions, which are complementary and redundant. Therefore, we aim to design a block to suppress redundant features and fuse complementary to efficiently harvest essential cross-modal cues for object detection against adverse weather conditions. Motivated by the concept of Cross-Attention [1], we introduce a new crossmodality Mamba block to fuse features from different modalities. As shown in Fig. 2, to encourage feature interaction between RGB and Thermal modalities, we use a Channel Swapping Mamba block (CS) [12], which incorporates information from different channels and enhances cross-modality correlations. Given RGB features \ud835\udc39\ud835\udc45\ud835\udc56 and Thermal features \ud835\udc39\ud835\udc47\ud835\udc56, the first half of channels from \ud835\udc39\ud835\udc45\ud835\udc56will be concatenated with the latter half of \ud835\udc39\ud835\udc47\ud835\udc56and processed through the Mamba block for feature extraction. The obtained features are added to \ud835\udc39\ud835\udc45\ud835\udc56, creating a new feature \ud835\udc39\ud835\udc45\ud835\udc56 \u2032. Meanwhile, the first half of \ud835\udc39\ud835\udc47\ud835\udc56is concatenated with the latter half of \ud835\udc39\ud835\udc45\ud835\udc56, then passes through the Mamba block. The obtained features are added to \ud835\udc39\ud835\udc47\ud835\udc56, creating a new feature \ud835\udc39\ud835\udc47\ud835\udc56 \u2032. Subsequently, we project the features: \ud835\udc39\ud835\udc45\ud835\udc56 \u2032 and \ud835\udc39\ud835\udc47\ud835\udc56 \u2032 into the shared space during the feature fusion process, using the gating mechanism to encourage complementary feature learning while restraining redundant features. As shown in Fig. 2, we first normalize every token sequence in \ud835\udc39\ud835\udc45\ud835\udc56 \u2032 and \ud835\udc39\ud835\udc47\ud835\udc56 \u2032 with Norm block, which helps to improve the convergence speed and performance of the model. Then project the input sequence through linear layers and apply SiLu as the activation function. \u00af \ud835\udc68\ud835\udc90, \u00af \ud835\udc69\ud835\udc90, and \ud835\udc6a\ud835\udc90can be generated by the Parameters Function: \u00af \ud835\udc68\ud835\udc90, \u00af \ud835\udc69\ud835\udc90, \ud835\udc6a\ud835\udc90= \ud835\udc43\ud835\udc4e\ud835\udc5f\ud835\udc4e\ud835\udc5a\ud835\udc52\ud835\udc61\ud835\udc52\ud835\udc5f\ud835\udc60\ud835\udc39\ud835\udc62\ud835\udc5b\ud835\udc50\ud835\udc61\ud835\udc56\ud835\udc5c\ud835\udc5b(\ud835\udc65\ud835\udc5c\u2032), (8) where \ud835\udc65\u2032 \ud835\udc5c= \ud835\udc73\ud835\udc8a\ud835\udc8f\ud835\udc86\ud835\udc82\ud835\udc93(\ud835\udc65\ud835\udc65 \ud835\udc5c\ud835\udc75\ud835\udc90\ud835\udc93\ud835\udc8e(\ud835\udc39\ud835\udc5c\u2032 \ud835\udc56)). After that, we apply State Space Model (SSM): \ud835\udc66\ud835\udc5c= \ud835\udc7a\ud835\udc7a\ud835\udc74( \u00af \ud835\udc68\ud835\udc90, \u00af \ud835\udc69\ud835\udc90, \ud835\udc6a\ud835\udc90)(\ud835\udc65\ud835\udc5c\u2032), (9) Figure 4: Overview of the established SWVID benchmarks. The dataset includes three weather conditions (i.e., Rain, Foggy, and Snow), and two scenarios (i.e., Daylight and Night), providing 60, 000 images in total. Then we apply the gating operation, followed by residual connection: \ud835\udc67= \ud835\udc73\ud835\udc8a\ud835\udc8f\ud835\udc86\ud835\udc82\ud835\udc93\ud835\udc9b(\ud835\udc39\ud835\udc47\ud835\udc56 \u2032), (10) \ud835\udc66\ud835\udc45\u2032 = \ud835\udc66\ud835\udc45\u2299\ud835\udc7a\ud835\udc8a\ud835\udc73\ud835\udc7c(\ud835\udc67), (11) \ud835\udc66\ud835\udc47\u2032 = \ud835\udc66\ud835\udc47\u2299\ud835\udc7a\ud835\udc8a\ud835\udc73\ud835\udc7c(\ud835\udc67), (12) \ud835\udc39\ud835\udc56= \ud835\udc79\ud835\udc86\ud835\udc94\ud835\udc89\ud835\udc82\ud835\udc91\ud835\udc86(\ud835\udc73\ud835\udc8a\ud835\udc8f\ud835\udc86\ud835\udc82\ud835\udc93\ud835\udc47(\ud835\udc66\ud835\udc45\u2032 + \ud835\udc66\ud835\udc47\u2032) + \ud835\udc39\ud835\udc56\u2032). (13) Finally, we get the fused 2-D feature \ud835\udc39\ud835\udc56successfully. Different from CFT [34], our fusion block improves computational efficiency while inheriting the components of global receptive field and dynamic weight. Comparing the state space model (SSM) in our CFM block with the self-attention mechanism of transformers in CFT [34], both of them play an important role in providing global context adaptively, but self-attention is quadratic to sequence length while SSM is linear to sequence length [61]. To achieve lower memory usage when dealing with long-sequence works, CFM chooses the recomputation method as the same as Mamba. Experiment on the SWVID and LLVIP dataset, whose resolution is 1080 \u00d7 720, shows that CFT requires 21.88GB GPU memory while CFM only requires 10.72GB, saving 11.16GB in the same configuration. 3.4 Loss Functions As a two-stage pre-training model, we carefully design the training loss functions to produce enhanced results with minimum blurriness and the closest details to ground-truth images and to extract the differences between RGB and thermal modalities. For training WRDM, the goal of the loss function in this stage is to maximize the data log-likelihood \ud835\udc59\ud835\udc5c\ud835\udc54\ud835\udc5d\ud835\udf03(\ud835\udc650). Since maximizing this target directly is very challenging, we use variational inference to approximate this target. Variational inference approximates the true posterior distribution \ud835\udc5d\ud835\udf03(\ud835\udc650 : \ud835\udc47) by introducing a variational Table 1: Comparisons of SWVID benchmark with existing visible-infrared datasets. !means available while %denotes the opposite. Dataset Year Resolution Publication Scene Daylight Night Weather KAIST [16] 2015 640 \u00d7 512 CVPR \" \" % FLIR [8] 2018 640 \u00d7 512 \" \" % RoadScene [50] 2020 640 \u00d7 512 AAAI \" \" % LLVIP [18] 2021 1080 \u00d7 720 ICCV \" \" % MSRS [41] 2022 640 \u00d7 480 Info. Fusion \" \" % M3FD [27] 2022 640 \u00d7 512 CVPR \" \" % VTUAV [32] 2022 1920 \u00d7 1080 CVPR \" \" % SWVID 2024 1080 \u00d7 720 Proposed \" \" \" distribution\ud835\udc5e(\ud835\udc651 : \ud835\udc47|\ud835\udc650) and then minimizes the difference between these two distributions. Here we define L\ud835\udf03= \u2212\ud835\udc59\ud835\udc5c\ud835\udc54\ud835\udc5d\ud835\udf03(\ud835\udc650), we have: L\ud835\udf03= \ud835\udc47 \u2211\ufe01 \ud835\udc61=1 E\ud835\udc5e[\ud835\udc59\ud835\udc5c\ud835\udc54\ud835\udc5d\ud835\udf03(\ud835\udc650|\ud835\udc65\ud835\udc47)] \u2212 \ud835\udc47\u22121 \u2211\ufe01 \ud835\udc61=1 E\ud835\udc5e(\ud835\udc65\ud835\udc61\u22121|\ud835\udc65\ud835\udc61) [\ud835\udc37\ud835\udc3e\ud835\udc3f(\ud835\udc5e(\ud835\udc65\ud835\udc61\u22121|\ud835\udc65\ud835\udc61,\ud835\udc650))||\ud835\udc5d\ud835\udf03(\ud835\udc65\ud835\udc61\u22121|\ud835\udc65\ud835\udc61)]. (14) where the second term is the expected value of the Kullback-Leibler divergence between \ud835\udc5e(\ud835\udc65\ud835\udc61\u22121|\ud835\udc65\ud835\udc61) and \ud835\udc5d\ud835\udf03(\ud835\udc65\ud835\udc61\u22121|\ud835\udc65\ud835\udc61). In alignment with the prevalent practices in this field, the overall loss function (L\ud835\udc61\ud835\udc5c\ud835\udc61\ud835\udc4e\ud835\udc59) is a sum of the bounding-box regression loss (L\ud835\udc4f\ud835\udc5c\ud835\udc65), the classification loss (L\ud835\udc50\ud835\udc59\ud835\udc60), and the confidence loss (L\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc53= L\ud835\udc5b\ud835\udc5c\ud835\udc5c\ud835\udc4f\ud835\udc57+ L\ud835\udc5c\ud835\udc4f\ud835\udc57). L\ud835\udc61\ud835\udc5c\ud835\udc61\ud835\udc4e\ud835\udc59= L\ud835\udc4f\ud835\udc5c\ud835\udc65+ L\ud835\udc50\ud835\udc59\ud835\udc60+ L\ud835\udc5b\ud835\udc5c\ud835\udc5c\ud835\udc4f\ud835\udc57+ L\ud835\udc5c\ud835\udc4f\ud835\udc57, (15) Details of the loss function for CFMW are elucidated in the supplementary material. 4 EXPERIMENTS 4.1 Established SWVID benchmark Dataset. The color gamut of visible images is weakened by environmental disturbance in dynamic environments, and the existing fusion methods make it difficult to fully fuse visible and infrared spectra because of a deficiency of sufficient training under corresponding datasets. As shown in Fig. 4, we established the benchmark, SWVID, which is constructed from the public datasets (i.e. LLVIP [18], M3FD [27], MSRS [41]) collected in the real scene. It contains a variety of uniformly distributed scenes (daylight, night, rain, foggy, and snow), simulating real environments through the combination of different scenes. Furthermore, we provide the corresponding ground-truth images for each visible image affected by adverse weather conditions for image fusion and image restoration network training. As shown in Table 1, compared with previous visible-infrared datasets, SWVID is the first one that considers weather conditions. Specifically, we have constructed the dataset from public visible-infrared datasets as follows: D\ud835\udc5f\ud835\udc4e\ud835\udc56\ud835\udc5b(\ud835\udc3d(\ud835\udc65)) = \ud835\udc3d(\ud835\udc65)(1 \u2212\ud835\udc40\ud835\udc5f(\ud835\udc65)) + \ud835\udc45(\ud835\udc65)\ud835\udc40\ud835\udc5f(\ud835\udc65), (16) D\ud835\udc60\ud835\udc5b\ud835\udc5c\ud835\udc64(\ud835\udc3d(\ud835\udc65)) = \ud835\udc3d(\ud835\udc65)(1 \u2212\ud835\udc40\ud835\udc60(\ud835\udc65)) + \ud835\udc46(\ud835\udc65)\ud835\udc40\ud835\udc60(\ud835\udc65), (17) D\ud835\udc53\ud835\udc5c\ud835\udc54\ud835\udc54\ud835\udc66(\ud835\udc3d(\ud835\udc65)) = \ud835\udc3d(\ud835\udc65)\ud835\udc52\u2212 \u222b\ud835\udc51(\ud835\udc65) 0 \ud835\udefd\ud835\udc51\ud835\udc59+ \u222b\ud835\udc51(\ud835\udc65) 0 \ud835\udc3f\u221e\ud835\udefd\ud835\udc52\u2212\ud835\udefd\ud835\udc59\ud835\udc51\ud835\udc59. (18) Figure 5: Examples of daylight and night scenes for multimodal fusion and object detection visualization, including three kinds of adverse weather conditions (rain, haze, and snow). We embed WRDM into two state-of-the-art visible-infrared fusion methods (i.e., CDDFuse [59] and DeFusion [25]) to mitigate the adverse impact of weather conditions. where \ud835\udc65represents the spatial location in an image, D\ud835\udc5f\ud835\udc4e\ud835\udc56\ud835\udc5b(\ud835\udc3d(\ud835\udc65)), D\ud835\udc60\ud835\udc5b\ud835\udc5c\ud835\udc64(\ud835\udc3d(\ud835\udc65)) and D\ud835\udc53\ud835\udc5c\ud835\udc54\ud835\udc54\ud835\udc66(\ud835\udc3d(\ud835\udc65)) represent a function that maps a clear image to one with rain, snow, and fog particle effects, \ud835\udc3d(\ud835\udc65) represents the clear image with no weather effects, \ud835\udc40\ud835\udc5f(\ud835\udc65) and \ud835\udc40\ud835\udc60(\ud835\udc65) represent rain and snow equivalents, \ud835\udc45(\ud835\udc65) represents a map of the rain masks, \ud835\udc46(\ud835\udc65) represents a chromatic aberration map of the snow particles. Considering scattering effects, \ud835\udc51(\ud835\udc65) represents the distance from the observer at a pixel location \ud835\udc65, \ud835\udefdis an atmospheric attenuation coefficient, and \ud835\udc3f\u221eis the radiance of light. We divide SWVID into the training set (34, 280 images), validation set (17, 140 images), and test set (8, 570 images), each folder contains three parts: pairs of visible-infrared images and corresponding weather-influenced visible images. Notice that weather-influenced visible images contain three kinds of weather conditions, classified as SWVID-snow, SWVID-rain, and SWVID-foggy. During the training period, we use the pairs of images (weather-influenced and ground-truth) to train WRDM in the first stage, then use the pairs of images (ground-truth and infrared) with corresponding labels to train CFM in the second stage. During the validating and testing period, we use the pairs of images (weather-influenced and infrared) directly, verifying and testing the performance of CFMW under real conditions. Also, we use the same way when evaluating other networks in comparative experiments. Evaluation metrics. We adopt the conventional peak signalto-noise ratio (PSNR) [15] and structural similarity (SSIM) [47] for quantitative evaluations between ground truth and restored images. PSNR is mainly used to evaluate the degree of distortion after image processing, while SSIM pays more attention to the Table 2: Quantitative comparisons in terms of PSNR and SSIM (higher is better) with state-of-the-art image deraining, dehazing, and desnowing methods. For the sake of fairness, we uniformly use the visible light part of the established SWVID dataset as the evaluation dataset. Image-Deraining Task SWVID-rain (RGB) Image-Dehazing Task SWVID-foggy (RGB) Image-Desnowing Task SWVID-snow (RGB) PSNR\u2191 SSIM\u2191 PSNR\u2191 SSIM\u2191 PSNR\u2191 SSIM\u2191 pix2pix [17] 19.95 0.7270 pix2pix [17] 25.12 0.8359 SPANet [46] 29.92 0.8260 CycleGAN [60] 17.65 0.6452 DuRN [29] 31.44 0.9256 DDMSNet [57] 34.87 0.9462 PCNet [19] 27.13 0.8546 AttentiveGAN [33] 32.56 0.9331 DesnowNet [2] 32.15 0.9416 MPRNet [53] 29.14 0.9022 IDT [49] 34.14 0.9412 RESCAN [24] 30.57 0.9003 de-rain (ours) 36.78 0.9464 de-haze (ours) 36.53 0.9795 de-snow (ours) 42.23 0.9821 All-in-One [23] 25.13 0.8856 All-in-One [23] 31.24 0.9122 All-in-One [23] 28.12 0.8815 TransWeather [42] 29.77 0.9107 TransWeather [42] 33.85 0.9388 TransWeather [42] 35.15 0.9417 WRDM (ours) 35.02 0.9322 WRDM (ours) 35.88 0.9602 WRDM (ours) 40.98 0.9578 Table 3: Comparison of performances with other networks on the SWVID-snow dataset. Model Data Backbone mAP50\u2191 mAP75\u2191 mAP\u2191 mono-modaltiy networks Faster R-CNN [36] RGB ResNet50 82.3 34.6 30.7 Faster R-CNN [36] Thermal ResNet50 90.6 63.7 55.4 SDD [28] RGB VGG16 73.6 37.8 38.6 SDD [28] Thermal VGG16 88.6 55.6 50.2 YOLOv3 [35] RGB Darknet53 78.3 29.4 24.4 YOLOv3 [35] Thermal Darknet53 84.6 50.7 47.4 YOLOv5 [20] RGB CSPD53 80.7 38.2 30.7 YOLOv5 [20] Thermal CSPD53 90.5 65.2 57.6 YOLOv7 [45] RGB CSPD53 85.3 41.8 34.9 YOLOv7 [45] Thermal CSPD53 91.8 67.6 60.4 multi-modality networks Baseline RGB+T CSPD53 92.2 68.4 59.3 CFT [34] RGB+T CFB 92.4 71.1 58.4 CFMW (ours) RGB+T CFM 97.2 76.9 63.4 structural information and visual quality of the images. \ud835\udc43\ud835\udc46\ud835\udc41\ud835\udc45= 10 \u00d7 \ud835\udc59\ud835\udc54( (2\ud835\udc5b\u22121)2 \ud835\udc40\ud835\udc46\ud835\udc38 ), (19) \ud835\udc46\ud835\udc46\ud835\udc3c\ud835\udc40= [\ud835\udc59(\ud835\udc65,\ud835\udc66)]\ud835\udefc\u00b7 [\ud835\udc50(\ud835\udc65,\ud835\udc66)]\ud835\udefd\u00b7 [\ud835\udc60(\ud835\udc65,\ud835\udc66)]\ud835\udefe, (20) As for object detection quantitative experiments, we introduced three object detection metrics: mean Average Precision (mAP, mAP50, and mAP75) to evaluate the accuracy of the object detection models. For more calculation details, please refer to the supplementary material. 4.2 Implantation Details As for WRDM, we performed experiments both in specific-weather conditions and multi-weather conditions image restoration settings. We denote our specific-weather restoration models as de-rain, desnow, and de-foggy to verify the general WRDM model under specific weather conditions. We trained the 128 \u00d7 128 patch size version of all models. We use NVIDIA RTX 4090 cards to perform all the experiments. We use Adam as an optimizer while training all the models we compare. During the training process, we trained WRDM for 3 \u00d7 106 iterations. As for CFM, we did not perform Table 4: Comparison of performances with other networks on the LLVIP [18] dataset. Model Data Backbone mAP50\u2191 mAP75\u2191 mAP\u2191 mono-modaltiy networks Faster R-CNN [36] RGB ResNet50 91.4 48.0 49.2 Faster R-CNN [36] Thermal ResNet50 96.1 68.5 61.1 SDD [28] RGB VGG16 82.6 31.8 39.8 SDD [28] Thermal VGG16 90.2 57.9 53.5 YOLOv3 [35] RGB Darknet53 85.9 37.9 43.3 YOLOv3 [35] Thermal Darknet53 89.7 53.4 52.8 YOLOv5 [20] RGB CSPD53 90.8 51.9 50.0 YOLOv5 [20] Thermal CSPD53 94.6 72.2 61.9 YOLOv7 [45] RGB CSPD53 91.4 58.4 53.6 YOLOv7 [45] Thermal CSPD53 94.6 70.6 62.4 multi-modality networks Baseline RGB+T CSPD53 95.2 71.4 62.3 CFT [34] RGB+T CFB 97.5 72.9 63.6 CFMW (ours) RGB+T CFM 98.8 77.2 64.8 task-specific parameter tuning or modifications to the network architecture. For better performance, we select the YOLOv5 model\u2019s public weight initialization (yolov5s.pt), which is pre-trained on the COCO dataset [26]. 4.3 Comparative Experiments In this section, we make comparisons with several state-of-theart methods in image deweathering and cross-modality object detection separately. In Table 2, we perform comparisons with methods for image desnowing (i.e. SPANet [46], DDMSNet [57], DesnowNet [2], RESCAN [24]), deraining (i.e. pix2pix [17], CycleGAN [60], PCNet [19], MPRNet [53]), and dehazing (i.e. pix2pix [17], DuRN [29], Attentive-GAN [33], IDT [49]), as well as two state-ofthe-art multi-weather image restoration methods: All in One [23] and TransWeather [42]. In Table 3 and Table 4, to prove the consistent improvements of CFMW, we compare with several base single-modality object detection methods (i.e., Faster R-CNN [36], SDD [28], YOLOv3 [35], YOLOv5 [20], YOLOv7 [45]) and several multi-modality object detection methods (i.e., our baseline, standard two-stream YOLOv5 object detection network, and CFT [34]). Table 5: Ablation experiments on SWVID-snow dataset. To present the general effectiveness of our CFMW, we further combine the WRDM and CFM module with other classical detectors (i.e., YOLOv7, YOLOv5, Faster R-CNN). Modality Method Detector mAP50\u2191 mAP75\u2191 mAP\u2191 RGB CSPDarknet53 YOLOv7 [45] 85.3 41.8 34.9 Thermal CSPDarknet53 95.8 72.6 60.4 RGB+T +two stream 95.4 68.1 60.4 +CFM 95.5 68.6 63.3 +WRDM 96.5 70.9 63.1 +CFM&WRDM 96.6 75.1 64.1 RGB CSPDarknet53 YOLOv5 [20] 80.7 38.2 30.7 Thermal CSPDarknet53 90.5 65.2 57.6 RGB+T +two stream 92.2 68.4 59.3 +CFM 96.5 70.6 63.3 +WRDM 96.4 71.2 62.8 +CFM&WRDM 97.2 76.9 63.4 RGB Resnet53 Faster R-CNN [36] 82.3 34.6 30.7 Thermal Resnet53 90.6 63.7 55.4 RGB+T +two stream 93.7 62.8 55.4 +CFM 96.7 69.5 61.9 +WRDM 96.2 69.4 61.6 +CFM&WRDM 96.2 69.7 62.2 Comparison of image deweathering. As shown in Table 2, we use the single RGB modality of the SWVID dataset (including rain, foggy, and haze weather conditions) as a comparative dataset to measure the performance of different models under different weather conditions. The top of the table contains results from specific-weather image restoration, where we show \ud835\udc46= 50 sampling time steps. For image-deraining, image-dehazing, and image-desnowing tasks, the proposed solution consistently achieves the best results (36.78/0.9464 on SWVID-rain, 36.53/0.9795 on SWVID-foggy, and 42.23/0.9821 on SWVID-snow). Especially, in the image de-rain task, the performance improvement is about 24% compared with the current state-of-the-art method (MPRNet [53]). For multi-weather image restoration, although the results are not as good as the specific-weather model due to the complexity of the task, the proposed method also reaches the best results ( 35.02/0.9322 on SWVID-rain, 35.88/0.9602 on SWVID-foggy, and 40.98/0.9578 on SWVID-snow) compared with All in One [23] and TransWeather [42], with about 17% performance improvement compared against TransWeather [42] and about 25% performance improvement compared against All in One [23]. Comparison of cross-modality object detection. As shown in Table 3 and Table 4, we use LLVIP [18] and SWVID-snow as the comparative datasets. Compared with SWVID-rain and SWVIDfoggy, the size of pedestrians in these two datasets is more in line with the general object detection standards. There are more complex cases of pedestrian overlap in these two datasets, which can better measure the accuracy of the object detection networks. The top of the table contains results from single-modality networks, each network uses the RGB modality or the thermal modality for detection. The bottom of the table shows results from multi-modality networks, including our baseline, CFT [34] and the proposed CFMW. According to Table 3, it can be observed that with the integration of WRDM and CFM, CFMW achieves an overwhelming performance improvement on each metric (mAP50:2.3\u2191, mAP75:4.3\u2191, mAP:3.0\u2191) on SWVID-snow compared with the best existing network on each metric, which shows that it has preferable adaptability under adverse weather conditions. Also, CFMW can achieve a more accurate detection (mAP50:98.8, mAP75:77.2, mAP:64.8) with lower computational consumption, as shown in Table 4, which demonstrates the commonality of CFWM. 4.4 Ablation Study In this section, we analyze the effectiveness of CFMW. We first validate the importance of WRDM and CFM modules in performance improvement in a parametric form through detailed ablation experiments, then visually show the role of WRDM in cross-modality fusion and object detection tasks to highlight its versatility as a weather-restoration plug-in. Ablation experiments To understand the impact of each component in our method, we have performed a comprehensive set of ablation experiments. As shown in Table 5, we further combine the CFM and WRDM with other classical detectors, i.e. YOLOv7 [45], YOLOv5 [20] and Faster R-CNN [36] to present the general effectiveness of our CFMW. The proposed CFMW improves the performance of cross-modality object detection using either a one-stage or twostage detector under complex weather conditions. Specifically, CFM achieves an 11.3% gain on mAP50, an 81.6% gain on mAP75, and a 78.3% gain on mAP (on YOLOv5 [20] ). After adding WRDM, we achieved a 12.1% gain on mAP50, an 88.2% gain on mAP75, and an 80.4% gain on mAP. CFM and WRDM provide non-negligible gains for all the considered evaluation metrics. Visual interpretation To verify the applicability of WRDM as a plug-in intuitively, we visually show the application scenario of WRDM in the field of visible-infrared image fusion and object detection. As shown in Fig. 5, we perform comparisons with methods of visible-infrared image fusion methods (i.e. CDDFuse [59], DeFusion [25]). It can be seen from the figure that compared with the original images, the image fusion effects of the two methods before and after using WRDM are quite different, more people at the far end of images could be detected successfully after deweathering. In cross-modality object detection, rich image details can provide great assistance for feature extraction and fusion, with direct fusion without removing the weather influence causing the loss and interference of image details. 5 CONCLUSION In this work, we introduce a novel approach to visible-infrared object detection under severe weather conditions, namely the Severe Weather Visible-Infrared Dataset (SWVID). We have provided a valuable resource for training and evaluating models in realistic and challenging environments. The Cross-modality Fusion Mamba with Weather-removal (CFMW) model, has proven to be highly effective in enhancing detection accuracy while managing computational efficiency. Our extensive experiments have shown that CFMW outperforms existing benchmarks, achieving state-of-the-art on both tasks: multi-weather image restoration and cross-modality object detection. This work opens up new possibilities for cross-modality object detection in adverse weather."
19
+ }
intro_8K/test_introduction_short_2404.16306v1.json ADDED
@@ -0,0 +1,16 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16306v1",
3
+ "title": "TI2V-Zero: Zero-Shot Image Conditioning for Text-to-Video Diffusion Models",
4
+ "abstract": "Text-conditioned image-to-video generation (TI2V) aims to synthesize a\nrealistic video starting from a given image (e.g., a woman's photo) and a text\ndescription (e.g., \"a woman is drinking water.\"). Existing TI2V frameworks\noften require costly training on video-text datasets and specific model designs\nfor text and image conditioning. In this paper, we propose TI2V-Zero, a\nzero-shot, tuning-free method that empowers a pretrained text-to-video (T2V)\ndiffusion model to be conditioned on a provided image, enabling TI2V generation\nwithout any optimization, fine-tuning, or introducing external modules. Our\napproach leverages a pretrained T2V diffusion foundation model as the\ngenerative prior. To guide video generation with the additional image input, we\npropose a \"repeat-and-slide\" strategy that modulates the reverse denoising\nprocess, allowing the frozen diffusion model to synthesize a video\nframe-by-frame starting from the provided image. To ensure temporal continuity,\nwe employ a DDPM inversion strategy to initialize Gaussian noise for each newly\nsynthesized frame and a resampling technique to help preserve visual details.\nWe conduct comprehensive experiments on both domain-specific and open-domain\ndatasets, where TI2V-Zero consistently outperforms a recent open-domain TI2V\nmodel. Furthermore, we show that TI2V-Zero can seamlessly extend to other tasks\nsuch as video infilling and prediction when provided with more images. Its\nautoregressive design also supports long video generation.",
5
+ "authors": "Haomiao Ni, Bernhard Egger, Suhas Lohit, Anoop Cherian, Ye Wang, Toshiaki Koike-Akino, Sharon X. Huang, Tim K. Marks",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Diffusion AND Model",
14
+ "gt": "Image-to-video (I2V) generation is an appealing topic with various applications, including artistic creation, entertain- ment, and data augmentation for machine learning [39]. Given a single image x0 and a text prompt y, text- conditioned image-to-video (TI2V) generation aims to syn- *Work done during an internship at MERL. \u201cA man with the expression of slight happiness on his face.\u201d \u201cA person is drumming.\u201d \u201cA serene mountain cabin covered in a fresh blanket of snow.\u201d Figure 1. Examples of generated video frames using our proposed TI2V-Zero. The given first image x0 is highlighted with the red box, and the text condition y is shown under each row of the video. The remaining columns show the 6th, 11th, and 16th frames of the generated output videos. Each generated video has 16 frames with a resolution of 256 \u00d7 256. thesize M new frames to yield a realistic video, \u02c6 x = \u27e8x0, \u02c6 x1, . . . , \u02c6 xM\u27e9, starting from the given frame x0 and sat- isfying the text description y. Current TI2V generation methods [59, 63, 70] typically rely on computationally- heavy training on video-text datasets and specific archi- tecture designs to enable text and image conditioning. Some [12, 25] are constrained to specific domains due to the lack of training with large-scale open-domain datasets. Other approaches, such as [14, 67], utilize pretrained foun- dation models to reduce training costs, but they still need to train additional modules using video data. In this paper, we propose TI2V-Zero, which achieves zero-shot TI2V generation using only an open-domain pre- trained text-to-video (T2V) latent diffusion model [60]. Here \u201czero-shot\u201d means that when using the diffusion arXiv:2404.16306v1 [cs.CV] 25 Apr 2024 model (DM) that was trained only for text conditioning, our framework enables image conditioning without any op- timization, fine-tuning, or introduction of additional mod- ules. Specifically, we guide the generation process by incor- porating the provided image x0 into the output latent code at each reverse denoising step. To ensure that the tempo- ral attention layers of the pretrained DM focus on informa- tion from the given image, we propose a \u201crepeat-and-slide\u201d strategy to synthesize the video in a frame-by-frame man- ner, rather than directly generating the entire video volume. Notably, TI2V-Zero is not trained for the specific domain of the provided image, thus allowing the model to generalize to any image during inference. Additionally, its autoregres- sive generation makes the synthesis of long videos possible. While the standard denoising sampling process start- ing with randomly initialized Gaussian noise can produce matching semantics, it often results in temporally inconsis- tent videos. Therefore, we introduce an inversion strategy based on the DDPM [20] forward process, to provide a more suitable initial noise for generating each new frame. We also apply a resampling technique [33] in the video DM to help preserve the generated visual details. Our approach en- sures that the network maintains temporal consistency, gen- erating visually convincing videos conditioned on the given starting image (see Fig. 1). We conduct extensive experiments on MUG [1], UCF- 101 [56], and a new open-domain dataset. In these experi- ments, TI2V-Zero consistently performs well, outperform- ing a state-of-the-art model [67] that was based on a video diffusion foundation model [8] and was specifically trained to enable open-domain TI2V generation.",
15
+ "main_content": "2.1. Conditional Image-to-Video Generation Conditional video generation aims to synthesize videos guided by user-provided signals. It can be classified according to which type(s) of conditions are given, such as textto-video (T2V) generation [5, 16, 21, 23, 31, 65], video-tovideo (V2V) generation [7, 38, 40, 45, 61, 64], and imageto-video (I2V) generation [4, 10, 25, 34, 39, 69]. Here we discuss previous text-conditioned image-to-video (TI2V) generation methods [12, 14, 22, 44, 63, 70]. Hu et al. [25] introduced MAGE, a TI2V generator that integrates a motion anchor structure to store appearance-motion-aligned representations through three-dimensional axial transformers. Yin et al. [70] proposed DragNUWA, a diffusionbased model capable of generating videos controlled by text, image, and trajectory information with three modules including a trajectory sampler, a multi-scale fusion, and an adaptive training strategy. However, these TI2V frameworks require computationally expensive training on videotext datasets and a particular model design to support textand-image-conditioned training. In contrast, our proposed TI2V-Zero leverages a pretrained T2V diffusion model to achieve zero-shot TI2V generation without additional optimization or fine-tuning, making it suitable for a wide range of applications. 2.2. Adaptation of Diffusion Foundation Models Due to the recent successful application of diffusion models (DM) [20, 42, 47, 54, 55] to both image and video generation, visual diffusion foundation models have gained prominence. These include text-to-image (T2I) models such as Imagen [50] and Stable Diffusion [47], as well as textto-video (T2V) models such as ModelScopeT2V [60] and VideoCrafter1 [8]. These models are trained with largescale open-domain datasets, often including LAION-400M [52] and WebVid-10M [2]. They have shown immense potential for adapting their acquired knowledge base to address a wide range of downstream tasks, thereby reducing or eliminating the need for extensive labeled data. For example, previous works have explored the application of large T2I models to personalized image generation [13, 49], image editing [17, 33, 35\u201337], image segmentation [3, 68], video editing [45, 62], and video generation [14, 27, 53, 66]. In contrast to T2I models, there are fewer works on the adaptation of large-scale T2V models. Xing et al. [67] proposed DynamicCrafter for open-domain TI2V generation by adapting a T2V foundation model [8]. To control the generative process, they first employed a learnable image encoding network to project the given image into a textaligned image embedding space. Subsequently, they utilized dual cross-attention layers to fuse text and image information and also concatenated the image with the initial noise to provide the video DM with more precise image details. In contrast, in this paper we explore how to inject the provided image to guide the DM sampling process based solely on the pretrained T2V model itself, with no additional training for the new TI2V task. 3. Methodology Given one starting image x0 and text y, let x = \u27e8x0, x1, . . . , xM\u27e9represent a real video corresponding to text y. The objective of text-conditioned image-tovideo (TI2V) generation is to synthesize a video \u02c6 x = \u27e8x0, \u02c6 x1, . . . , \u02c6 xM\u27e9, such that the conditional distribution of \u02c6 x given x0 and y is identical to the conditional distribution of x given x0 and y, i.e., p(\u02c6 x|x0, y) = p(x|x0, y). Our proposed TI2V-Zero can be built on a pretrained T2V diffusion model with a 3D-UNet-based denoising network. Here we choose ModelScopeT2V [60] as backbone due to its promising open-domain T2V generation ability. Below, we first introduce preliminaries about diffusion models, then introduce the architecture of the pretrained T2V model, and finally present the details of our TI2V-Zero. Diffusion Process DDPM Inversion \u2026 ! \ud835\udc33! \u0302 \ud835\udc67! \" \u0302 \ud835\udc67! #$% \u0302 \ud835\udc67! # \ud835\udc2c\" \u222a \u2026 ! \ud835\udc33& \u0302 \ud835\udc67& \" \u0302 \ud835\udc67& #$% \u0302 \ud835\udc67& # \ud835\udc2c& Replace \u2026 ! \ud835\udc33\" \u0302 \ud835\udc67\" \" \u0302 \ud835\udc67\" #$% \u0302 \ud835\udc67\" # \u2026 \ud835\udc60\" % \ud835\udc60\" #$% \ud835\udc60\" \" \u0302 \ud835\udc67\" # Slide add \ud835\udc61 step noise \ud835\udc9f ! \ud835\udc65'(% \u2026 \u2026 \u2026 ! \ud835\udc33&$% \u0302 \ud835\udc67&$% \" \u0302 \ud835\udc67&$% #$% \u0302 \ud835\udc67&$% # \ud835\udc2c&$% add (\ud835\udc61\u22121) step noise U-Net \ud835\udf16) U-Net \ud835\udf16) U-Net \ud835\udf16) Replace \ud835\udc56> 0 \ud835\udc56= 0 Construct \ud835\udc2c\" \ud835\udc60\" % \ud835\udc60\" #$% \ud835\udc60\" \" \ud835\udc60\" #$% \ud835\udc2c\" \u2026 \ud835\udc67\" \u2107 \ud835\udc65\" Repeat \ud835\udc3e times Reverse Process Using Pretrained Denoising U-Net Resample Resample \ud835\udc66 Resample \ud835\udc66 \ud835\udc66 Figure 2. Illustration of the process of applying TI2V-Zero to generate the new frame \u02c6 xi+1, given the starting image x0 and text y. TI2VZero is built upon a frozen pretrained T2V diffusion model, including frame encoder E, frame decoder D, and the denoising U-Net \u03f5\u03b8. At the beginning of generation (i = 0), we encode x0 as z0 and repeat it K times to form the queue s0. We then apply DDPM-based inversion to s0 to produce the initial Gaussian noise \u02c6 zT . Subsequently, in each reverse denoising step using U-Net \u03f5\u03b8, we keep replacing the first K frames of \u02c6 zt with the noisy latent code st derived from s0. Resampling is also applied within each step to improve motion coherence. We finally decode the final frame of the clean latent code \u02c6 z0 as the new synthesized frame \u02c6 xi+1. To compute the new s0 for the next iteration of generation (i > 0), we perform a sliding operation by dequeuing s0 0 and enqueuing \u02c6 zK 0 within s0. 3.1. Preliminaries: Diffusion Models Diffusion Models (DM) [20, 54, 55] are probabilistic models designed to learn a data distribution. Here we introduce the fundamental concepts of Denoising Diffusion Probabilistic Models (DDPM). Given a sample from the data distribution z0 \u223cq(z0), the forward diffusion process of a DM produces a Markov chain z1, . . . , zT by iteratively adding Gaussian noise to z0 according to a variance schedule \u03b21, . . . , \u03b2T , that is: q(zt|zt\u22121) = N(zt; p 1 \u2212\u03b2tzt\u22121, \u03b2tI) , (1) where variances \u03b2t are constant. When the \u03b2t are small, the posterior q(zt\u22121|zt) can be well approximated by a diagonal Gaussian [41, 54]. Furthermore, if the length of the chain, denoted by T, is sufficiently large, zT can be well approximated by a standard Gaussian distribution N(0, I). These suggest that the true posterior q(zt\u22121|zt) can be estimated by p\u03b8(zt\u22121|zt) defined as: p\u03b8(zt\u22121|zt) = N(zt\u22121; \u00b5\u03b8(zt), \u03c32 t I) , (2) where variances \u03c3t are also constants. The reverse denoising process in the DM (also termed sampling) then generates samples z0 \u223cp\u03b8(z0) by starting with Gaussian noise zT \u223cN(0, I) and gradually reducing noise in a Markov chain zT \u22121, zT \u22122, . . . , z0 using a learned p\u03b8(zt\u22121|zt). To learn p\u03b8(zt\u22121|zt), Gaussian noise \u03f5 is first added to z0 to generate samples zt. Utilizing the independence property of the noise added at each forward step in Eq. (1), we can calculate the total noise variance as \u00af \u03b1t = Qt i=0(1\u2212\u03b2i) and transform z0 to zt in a single step: q(zt|z0) = N(zt; \u221a\u00af \u03b1tz0, (1 \u2212\u00af \u03b1t)I) . (3) Then a model \u03f5\u03b8 is trained to predict \u03f5 using the following mean-squared error loss: L = Et\u223cU(1,T ),z0\u223cq(z0),\u03f5\u223cN (0,I) \u0002 ||\u03f5 \u2212\u03f5\u03b8(zt, t)||2\u0003 , (4) where diffusion step t is uniformly sampled from {1, . . . , T}. Then \u00b5\u03b8(zt) in Eq. (2) can be derived from \u03f5\u03b8(zt, t) to model p\u03b8(zt\u22121|zt) [20]. The denoising model \u03f5\u03b8 is implemented using a time-conditioned UNet [48] with residual blocks [15] and self-attention layers [58]. Diffusion step t is specified to \u03f5\u03b8 by the sinusoidal position embedding [58]. Conditional generation that samples z0 \u223cp\u03b8(z0|y) can be achieved by learning a y-conditioned model \u03f5\u03b8(zt, t, y) [41, 47] with classifierfree guidance [19]. During training, the condition y in \u03f5\u03b8(zt, t, y) is replaced by a null label \u2205with a fixed probability. When sampling, the output is generated as follows: \u02c6 \u03f5\u03b8(zt, t, y) = \u03f5\u03b8(zt, t, \u2205) + g \u00b7 (\u03f5\u03b8(zt, t, y) \u2212\u03f5\u03b8(zt, t, \u2205)) , (5) where g is the guidance scale. 3.2. Architecture of Pretrained T2V Model TI2V-Zero can be built upon a pretrained T2V diffusion model with a 3D-UNet-based denoising network. Here we choose ModelScopeT2V [60] as the pretrained model (denoted M). We now describe this T2V model in detail. Structure Overview. Given a text prompt y, the T2V model M synthesizes a video \u02c6 x = \u27e8\u02c6 x0, \u02c6 x1, . . . , \u02c6 xK\u27e9with a pre-defined video of length (K +1) using a latent video diffusion model. Similar to Latent Diffusion Models (LDM) [47], M incorporates a frame auto-encoder [11, 28] for the conversion of data between pixel space X and latent space Z through its encoder E and decoder D. Given the real video x = \u27e8x0, x1, . . . , xK\u27e9, M first utilizes the frame encoder E to encode the video x as z = \u27e8z0, z1, . . . , zK\u27e9. Here the sizes of pixel frame x and latent frame z are Hx \u00d7 Wx \u00d7 3 and Hz \u00d7 Wz \u00d7 Cz, respectively. To be consistent with the notation used for the DM, we denote the Algorithm 1 Generation using our TI2V-Zero approach. Input: The starting frame x0; The text prompt y; The pretrained T2V Model M for generating (K + 1)-frame videos, including frame encoder E and frame decoder D, and the DM denoising networks \u03f5\u03b8; The iteration number U for resampling; The parameter M to control the length of the output video. Output: A synthesized video \u02c6 x with (M + 1) frames. 1: z0 \u2190E(x0) // Encode x0 2: s0 \u2190\u27e8z0, z0, \u00b7 \u00b7 \u00b7 , z0\u27e9 // Repeat z0 for K times 3: \u02c6 x \u2190\u27e8x0\u27e9 4: for i = 1, 2, \u00b7 \u00b7 \u00b7 , M do // Generate one new frame \u02c6 xi 5: sT \u223cN(\u221a\u00af \u03b1T s0, (1 \u2212\u00af \u03b1T )I) // DDPM Inversion 6: \u02c6 zK T \u223cN(\u221a\u00af \u03b1T sK\u22121 0 , (1 \u2212\u00af \u03b1T )I) 7: \u02c6 zT \u2190sT \u222a\u02c6 zK T // Initialize \u02c6 zT 8: for t = T \u22121, \u00b7 \u00b7 \u00b7 , 2, 1 do 9: st \u223cN(\u221a\u00af \u03b1ts0, (1 \u2212\u00af \u03b1t)I) 10: for u = 1, 2, \u00b7 \u00b7 \u00b7 , U do 11: \u27e8\u02c6 z0 t , \u02c6 z1 t , \u00b7 \u00b7 \u00b7 , \u02c6 zK\u22121 t \u27e9\u2190st // Replace 12: \u02c6 zt\u22121 \u223cN(\u00b5\u03b8(\u02c6 zt, y), \u03c32 t I) 13: if u < U and t > 1 then 14: \u02c6 zt \u223cN(\u221a1 \u2212\u03b2t\u02c6 zt\u22121, \u03b2tI) // Resample 15: end if 16: end for 17: end for 18: s0 \u2190\u27e8s1 0, s2 0, \u00b7 \u00b7 \u00b7 , sK\u22121 0 \u27e9\u222a\u02c6 zK 0 // Slide 19: \u02c6 xi \u2190D(\u02c6 zK 0 ) // Decode \u02c6 zK 0 20: \u02c6 x \u2190\u02c6 x \u222a\u02c6 xi 21: end for 22: return \u02c6 x clean video latent z = z0 = \u27e8z0 0, z1 0, . . . , zK 0 \u27e9. M then learns a DM on the latent space Z through a 3D denoising U-Net \u03f5\u03b8 [9]. Let zt = \u27e8z0 t , z1 t , . . . , zK t \u27e9represent the latent sequence that results from adding noise over t steps to the original latent sequence z0. When training, the forward diffusion process of a DM transforms the initial latent sequence z0 into zT by iteratively adding Gaussian noise \u03f5 for T steps. During inference, denoising U-Net \u03f5\u03b8 predicts the added noise at each step, enabling the generation of the clean latent sequence \u02c6 z0 = \u27e8\u02c6 z0 0, \u02c6 z1 0, . . . , \u02c6 zK 0 \u27e9starting from randomly sampled Gaussian noise zT \u223cN(0, I). Text Conditioning Mechanism. M employs a crossattention mechanism [47] to incorporate text information into the generative process as guidance. Specifically, M uses a pretrained CLIP model [46] to encode the prompt y as the text embedding e. The embedding e is later used as the key and value in the multi-head attention layer within the spatial attention blocks, thus enabling the integration of text features with the intermediate U-Net features in \u03f5\u03b8. Denoising U-Net. The denoising U-Net \u03f5\u03b8 includes four key building blocks: the initial block, the downsampling block, the spatio-temporal block, and the upsampling block. The initial block transfers the input into the embedding Ground Truth Video Time TI2V Generation (Replacing) \u2718 Video Infilling (Replacing) \u2718 TI2V Generation (TI2V-Zero) \u2713 Video Infilling (TI2V-Zero) \u2713 Single-Frame Prediction (Replacing) \u2713 \u2026 \u2026 \u2026 \u2026 \u2026 \u2026 \u201cA person is riding horse.\u201d Figure 3. Illustration of the motivation behind our framework. We explore the application of a replacing-based baseline approach (rows 2\u20134, labeled \u201cReplacing\u201d) and our TI2V-Zero (rows 5\u20136, labeled \u201cTI2V-Zero\u201d) in various video generation tasks. The given real frames for each task are highlighted by red boxes and the text input is shown under the block. The replacing-based approach is only effective at predicting a single frame when all the other frames in the video are provided, while TI2V-Zero generates temporally coherent videos for both the TI2V and video infilling tasks. space, while the downsampling and upsampling blocks are responsible for spatially downsampling and upsampling the feature maps. The spatio-temporal block is designed to capture spatial and temporal dependencies in the latent space, which comprises 2D spatial convolution, 1D temporal convolution, 2D spatial attention, and 1D temporal attention. 3.3. Our Framework Leveraging the pretrained T2V foundation model M, we first propose a straightforward replacing-based baseline for adapting M to TI2V generation. We then analyze the possible reasons why it fails and introduce our TI2V-Zero framework, which includes a repeat-and-slide strategy, DDPMbased inversion, and resampling. Figure 2 and Algorithm 1 demonstrate the inference process of TI2V-Zero. Replacing-based Baseline. We assume that the pretrained model M is designed to generate the video with a fixed length of (K + 1). So we first consider synthesizing videos with that same length (K + 1), i.e., M = K. Since the DM process operates within the latent space Z, we use the encoder E to map the given starting frame x0 into the latent representation z0. Additionally, we denote z0 = z0 0 to specify that the latent is clean and corresponds to diffusion step 0 of the DM. Note that each reverse denoising step in Eq. (2) from \u02c6 zt to \u02c6 zt\u22121 depends solely on \u02c6 zt = \u27e8\u02c6 z0 t , \u02c6 z1 t , . . . , \u02c6 zK t \u27e9. To ensure that the first frame of the final synthesized clean video latent \u02c6 z0 = \u27e8\u02c6 z0 0, \u02c6 z1 0, . . . , \u02c6 zK 0 \u27e9 at step 0 matches the provided image latent, i.e., \u02c6 z0 0 = z0 0, we can modify the first generated latent \u02c6 z0 t of \u02c6 zt at each reverse step, as long as the signal-to-noise ratio of each frame latent in \u02c6 zt remains consistent. Using Eq. (3), we can add t steps of noise to the provided image latent z0 0, allowing us to sample z0 t through a single-step calculation. By replacing the first generated latent \u02c6 z0 t with the noisy image latent z0 t at each reverse denoising step, we might expect that the video generation process can be guided by z0 0 with the following expressions defined for each reverse step: z0 t \u223cN( \u221a \u00af \u03b1tz0 0, (1 \u2212\u00af \u03b1t)I) , (6a) \u02c6 z0 t \u2190z0 t , (6b) \u02c6 zt\u22121 \u223cN(\u00b5\u03b8(\u02c6 zt, y), \u03c32 t I) . (6c) Specifically, in each reverse step from \u02c6 zt to \u02c6 zt\u22121, as shown in Eq. (6a), we first compute the noisy latent z0 t by adding Gaussian noise to the given image latent z0 0 over t steps. Then, we replace the first latent \u02c6 z0 t of \u02c6 zt with z0 t in Eq. (6b) to incorporate the provided image into the generation process. Finally, in Eq. (6c), we pass \u02c6 zt through the denoising network to generate \u02c6 zt\u22121, where the text y is integrated by classifier-free guidance (Eq. (5)). After T iterations, the final clean latent \u02c6 z0 at diffusion step 0 can be mapped back into the image space X using the decoder D. Using this replacing-based baseline, we might expect that the temporal attention layers in \u03f5\u03b8 can utilize the context provided by the first frame latent \u02c6 z0 t to generate the subsequent frame latents in a manner that harmonizes with \u02c6 z0 t . However, as shown in Fig. 3, row 2, this replacing-based approach fails to produce a video that is temporally consistent with the first image. The generated frames are consistent with each other, but not with the provided first frame. To analyze possible reasons for failure, we apply this baseline to a simpler video infilling task, where every other frame is provided and the model needs to predict the interspersed frames. In this case, the baseline replaces the generated frame latents at positions corresponding to real frames with noisy provided-frame latents in each reverse step. The resulting video, in Fig. 3, row 3, looks like a combination of two independent videos: the generated (even) frames are consistent with each other but not with the provided (odd) frames. We speculate that this may result from the intrinsic dissimilarity between frame latents derived from the given real images and those sampled from \u03f5\u03b8. Thus, the temporal attention values between frame latents sampled in the same way (both from the given images or both from \u03f5\u03b8) will be higher, while the attention values between frame latents sampled in different ways (one from the given image and the other from \u03f5\u03b8) will be lower. Therefore, the temporal attention layers of M tend to utilize the information from latents Ground Truth w/o Inversion DDIM=10 Resample=0 w/ Inversion DDIM=10 Resample=0 w/ Inversion DDIM=50 Resample=0 w/ Inversion DDIM=10 Resample=2 w/ Inversion DDIM=10 Resample=4 \u201cA woman with the expression of slight sadness on her face.\u201d Figure 4. Qualitative ablation study comparing different sampling strategies for our TI2V-Zero on MUG. The first image \u02c6 x0 is highlighted with the red box and text y is shown under the block. The 1st, 6th, 11th, and 16th frames of the videos are shown in each column. The terms Inversion, DDIM, and Resample denote the application of DDPM inversion, the steps using DDIM sampling, and the iteration number using resampling, respectively. produced by \u03f5\u03b8 to synthesize new frames at each reverse step, ignoring the provided frames. We further simplify the task to single-frame prediction, where the model only needs to predict a single frame when all the other frames in the video are given. In this setting, all the frame latents except for the final frame are replaced by noisy provided-frame latents in each reverse step. Thus, temporal attention layers can only use information from the real frames. In this case, Fig. 3, row 4, shows that the baseline can now generate a final frame that is consistent with the previous frames. Repeat-and-Slide Strategy. Inspired by the observation in Fig. 3, to guarantee that the temporal attention layers of M depend solely on the given image, we make two major changes to the proposed replacing-based baseline: (1) instead of using M to directly synthesize the entire (K + 1)-frame video, we switch to a frame-by-frame generation approach, i.e., we generate only one new frame latent in each complete DM sampling process; (2) for each sampling process generating the new frame latent, we ensure that only one frame latent is produced from \u03f5\u03b8, while the other K frame latents are derived from the given real image and previously synthesized frames, thereby forcing temporal attention layers to only use the information from these frame latents. Specifically, we construct a queue of K frame latents, denoted as s0 = \u27e8s0 0, s1 0, \u00b7 \u00b7 \u00b7 , sK\u22121 0 \u27e9. We also define st = \u27e8s0 t, s1 t, \u00b7 \u00b7 \u00b7 , sK\u22121 t \u27e9, which is obtained by adding t steps of Gaussian noise to the clean s0. Similar to our replacing-based baseline in the single-frame prediction task, in each reverse step from \u02c6 zt to \u02c6 zt\u22121, we replace the first K frame latents in \u02c6 zt by st. Consequently, the temporal attention layers have to utilize information from s0 to synthesize the new frame\u2019s latent, \u02c6 zK 0 . Considering that only one starting image latent z0 is provided, we propose a \u201crepeat-and-slide\u201d strategy to construct s0. At the beginning of video generation, we repeat z0 for K frames to form s0, and gradually perform a sliding operation within the queue s0 by dequeuing the first frame latent s0 0 and enqueuing the newly generated latent \u02c6 zK 0 after each complete DM sampling process. Note that though the initial s0 is created by repeating z0, the noise added to get st is different for each frame\u2019s latent in st, thus ensuring diversity. The following expressions define one reverse step in the DM sampling process: st \u223cN( \u221a \u00af \u03b1ts0, (1 \u2212\u00af \u03b1t)I) , (7a) \u27e8\u02c6 z0 t , \u02c6 z1 t , \u00b7 \u00b7 \u00b7 , \u02c6 zK\u22121 t \u27e9\u2190st , (7b) \u02c6 zt\u22121 \u223cN(\u00b5\u03b8(\u02c6 zt, y), \u03c32 t I) . (7c) Specifically, in each reverse denoising step from \u02c6 zt to \u02c6 zt\u22121, we first add t steps of Gaussian noise to the queue s0 to yield st in Eq. (7a). Subsequently, we replace the previous K frames of \u02c6 zt with st in Eq. (7b) and input \u02c6 zt to the denoising network to produce the less noisy latent \u02c6 zt\u22121 (Eq. (7c)). With the repeat-and-slide strategy, model M is tasked with predicting only one new frame, while the preceding K frames are incorporated into the reverse process to ensure that the temporal attention layers depend solely on information derived from the provided image. DDPM-based Inversion. Though the DM sampling process starting with randomly sampled Gaussian noise produces matching semantics, the generated video is often temporally inconsistent (Fig. 4, row 2). To provide initial noise that can produce more temporally consistent results, we introduce an inversion strategy based on the DDPM [20] forward process when generating the new frame latent. Specifically, at the beginning of each DM sampling process to synthesize the new frame latent \u02c6 zK 0 , instead of starting with the \u02c6 zT randomly sampled from N(0, I), we add T full steps of Gaussian noise to s0 to obtain sT using Eq. (3). Note that \u02c6 z has K + 1 frames, while s has K frames. We then use sT to initialize the first K frames of \u02c6 zT . We copy the last frame sK\u22121 T of sT to initialize the final frame \u02c6 zK T , as the (K \u22121)th frame is the closest to the Kth frame. Resampling. Similar to [24, 33], we further apply a resampling technique, which was initially designed for the image inpainting task, to the video DM to enhance motion coherence. Particularly, after performing a one-step denoising operation in the reversed process, we add one-step noise again to revert the latent. This procedure is repeated mulInversion DDIM Resample FVD\u2193 sFVD\u2193 tFVD\u2193 \u2717 10 0 1656.37 2074.77\u00b1411.74 1798.05\u00b1235.34 \u2713 10 0 339.89 443.97\u00b1139.10 405.22\u00b161.58 \u2713 50 0 463.55 581.32\u00b1234.09 535.06\u00b185.27 \u2713 10 2 207.62 299.14\u00b187.24 278.73\u00b147.84 \u2713 10 4 180.09 267.17\u00b174.72 252.77\u00b139.02 Table 1. Quantitative ablation study comparing different sampling strategies for proposed TI2V-Zero on the MUG dataset. Inversion, DDIM, and Resample denote the application of DDPM-based inversion, the steps using DDIM sampling, and the iteration number using resampling, respectively. Distributions for Comparison FVD\u2193 tFVD\u2193 TI2V-Zero-Fake vs. ModelScopeT2V 366.41 921.31\u00b1251.85 TI2V-Zero-Real vs. Real Videos 477.19 1306.75\u00b1271.82 ModelScopeT2V vs. Real Videos 985.82 2264.08\u00b1501.28 TI2V-Zero-Fake vs. Real Videos 937.11 2177.70\u00b1436.71 Table 2. Result analysis of TI2V-Zero starting from the real (i.e., TI2V-Zero-Real) or synthesized frames (i.e., TI2V-Zero-Fake) on the UCF101 dataset. tiple times for each diffusion step, ensuring harmonization between the predicted and conditioning frame latents (see Algorithm 1 for details). 4. Experiments 4.1. Datasets and Metrics We conduct comprehensive experiments on three datasets. More details about datasets, such as selected subjects and text prompts, can be found in our Supplementary Materials. MUG facial expression dataset [1] contains 1,009 videos of 52 subjects performing 7 different expressions. We include this dataset to evaluate the performance of models in scenarios with small motion and a simple, unchanged background. To simplify the experiments, we randomly select 5 male and 5 female subjects, and 4 expressions. We use the text prompt templates like \u201cA woman with the expression of slight {label} on her face.\u201d to change the expression class label to be text input. Since the expressions shown in the videos of MUG are often not obvious, we add \u201cslight\u201d in the text input to avoid large motion. UCF101 action recognition dataset [56] contains 13,320 videos from 101 human action classes. We include this dataset to measure performance under complicated motion and complex, changing backgrounds. To simplify the experiments, we select 10 action classes and the first 10 subjects within each class. We use text prompt templates such as \u201cA person is performing {label}.\u201d to change the class label to text input. In addition to the above two datasets, we create an OPEN dataset to assess the model\u2019s performance in opendomain TI2V generation. We first utilize ChatGPT [43] to generate 10 text prompts. Subsequently, we employ Stable \u201cA woman with the expression of slight anger on her face.\u201d (MUG) Ground Truth TI2V-Zero w/o Resample (Ours) DynamiCrafter \u201cA person is kayaking.\u201d (UCF101) \u201cA romantic gondola ride through the canals of Venice at sunset.\u201d (OPEN) TI2V-Zero w/ Resample (Ours) Figure 5. Qualitative comparison among different methods on multiple datasets for TI2V generation. Columns in each block display the 1st, 6th, 11th, and 16th frames of the output videos, respectively. There are 16 frames with a resolution of 256 \u00d7 256 for each video. The given image x0 is highlighted with the red box and the text prompt y is shown under each block. Model MUG UCF101 FVD\u2193 sFVD\u2193 tFVD\u2193 FVD\u2193 tFVD\u2193 DynamiCrafter [67] 1094.72 1359.86\u00b1257.73 1223.89\u00b1105.94 589.59 1540.02\u00b1199.59 TI2V-Zero w/o Resample (Ours) 339.89 443.97\u00b1139.10 405.22\u00b161.58 493.19 1319.77\u00b1283.87 TI2V-Zero w/ Resample (Ours) 180.09 267.17\u00b174.72 252.77\u00b139.02 477.19 1306.75\u00b1271.82 Table 3. Quantitative comparison among different methods on multiple datasets for TI2V generation. Diffusion 1.5 [47] to synthesize 100 images from each text prompt, generating a total of 1,000 starting images and 10 text prompts for evaluating TI2V models. Data Preprocessing. We resize all the videos/images to 256 \u00d7 256 resolution. For UCF101, since most of the video frames are not square, we crop the central part of the frames. To obtain ground truth videos for computing metrics, we uniformly sample 16 frames from each video in the datasets to generate the video clips with a fixed length. Metrics. Following prior work [21, 22, 25], we assess the visual quality, temporal coherence, and sample diversity of generated videos using Fr\u00b4 echet Video Distance (FVD) [57]. Similar to Fr\u00b4 echet Inception Distance (FID) [18], which is used for image quality evaluation, FVD utilizes a video classification network I3D [6] pretrained on Kinetics400 dataset [26] to extract feature representation of real and synthesized videos. Then it calculates the Fr\u00b4 echet distance between the distributions of the real and synthesized video features. To measure how well a generated video aligns with the text prompt y (condition accuracy) and the given image x0 (subject relevance), following [39], we design two variants of FVD, namely text-conditioned FVD (tFVD) and subject-conditioned FVD (sFVD). tFVD and sFVD compare the distance between real and synthesized video feature distributions under the same text y or the same subject image x0, respectively. We first compute tFVD and sFVD for each condition y and image x0, then report their mean and variance as final results. In our experiments, we generate 1,000 videos for all the models to estimate the feature distributions. We compute both tFVD and sFVD on the MUG dataset, but for UCF101, we only consider tFVD since it doesn\u2019t contain videos of different actions for the same subject. For the OPEN dataset, we only present qualitative results due to the lack of ground truth videos. Unless otherwise specified, all the generated videos are 16 frames (i.e., M = 15) with resolution 256 \u00d7 256. 4.2. Implementation Details Model Implementation. We take the ModelScopeT2V 1.4.2 [60] as basis and implement our modifications. For text-conditioned generation, we employ classifier-free guidance with g = 9.0 in Eq. (5). Determined by our preliminary experiments, we choose 10-step DDIM and 4-step resampling as the default setting for MUG and OPEN, and 50-step DDIM and 2-step resampling for UCF101. Implementation of SOTA Model. We compare our TI2V-Zero with a state-of-the-art (SOTA) model DynamiCrafter, a recent open-domain TI2V framework [67]. DynamiCrafter is based on a large-scale pretrained T2V foundation model VideoCrafter1 [16]. It introduces a learnable projection network to enable image-conditioned generation and then fine-tunes the entire framework. We implement DynamiCrafter using their provided code with their default settings. For a fair comparison, all the generated videos are ! \ud835\udc65)* ! \ud835\udc65+, ! \ud835\udc65*+ ! \ud835\udc65-. ! \ud835\udc65,* ! \ud835\udc65/, ! \ud835\udc65))+ ! \ud835\udc65)+. \ud835\udc650 \u201cA mesmerizing display of the northern lights in the Arctic.\u201d ! \ud835\udc6510 Figure 6. Example of long video generation using our TI2V-Zero on the OPEN dataset. The given image x0 is highlighted with a red box, and the text prompt y is shown under the set of frames. There are a total of 128 video frames (M = 127), and the synthesized results for every 14 frames are presented. centrally-cropped and resized to 256 \u00d7 256. 4.3. Result Analysis Ablation Study. We conduct ablation study of different sampling strategies on MUG. As shown in Tab. 1 and Fig. 4, compared with generating using randomly sampled Gaussian noise, initializing the input noise with DDPM inversion is important for generating temporally continuous videos, improving all of the metrics dramatically. For MUG, increasing the DDIM sampling steps from 10 to 50 does not enhance the video quality but requires more inference time. Thus, we choose 10-step DDIM as the default setting on MUG. As shown in Fig. 4 and Tab. 1, adding resampling helps preserve identity details (e.g., hairstyle and facial appearance), resulting in lower FVD scores. Increasing resampling steps from 2 to 4 further improves FVD scores. Effect of Real/Synthesized Starting Frames. We also explore the effect of video generation starting with real or synthesized frames on UCF101. We initially use the first frame of the real videos to generate videos with our TI2V-Zero, termed TI2V-Zero-Real. Additionally, we utilize the backbone model ModelScopeT2V [60] to generate synthetic videos using the text inputs of UCF101. We then employ TI2V-Zero to create videos from the first frame of the generated fake videos, denoted as TI2V-Zero-Fake. As shown in Tab. 2, [TI2V-Zero-Fake vs. ModelScopeT2V] can achieve better FVD scores than [TI2V-Zero-Real vs. Real Videos]. The reason may be that frames generated by ModelScopeT2V can be considered as in-distribution data since TI2V-Zero is built upon it. We also compare the output video distribution of TI2V-Zero-Fake and ModelScopeT2V with real videos in Tab. 2. Though starting from the same synthesized frames, TI2V-Zero-Fake can generate more realistic videos than the backbone model. Comparison with SOTA Model. We compare our proposed TI2V-Zero with DynamiCrafter [67] in Tab. 3 and Fig. 5. From Fig. 5, one can find that DynamiCrafter struggles to preserve details from the given image, and the motion of its generated videos is also less diverse. Note that DynamiCrafter requires additional fine-tuning to enable TI2V generation. In contrast, without using any fine-tuning or introducing external modules, our proposed TI2V-Zero can precisely start with the given image and output more visually-pleasing results, thus achieving much better FVD scores on both MUG and UCF101 datasets in Tab. 3. The comparison between our TI2V-Zero models with and without using resampling in Fig. 5 and Tab. 3 also demonstrates the effectiveness of using resampling, which can help maintain identity and background details. Extension to Other Applications. TI2V-Zero can also be extended to other tasks as long as we can construct s0 with K images at the beginning. These images can be obtained either from ground truth videos or by applying the repeating operation. Then we can slide s0 when generating the subsequent frames. We have applied TI2V-Zero in video infilling (see the last row in Fig. 3), video prediction (see Supplementary Materials), and long video generation (see Fig. 6). As shown in Fig. 6, when generating a 128-frame video on the OPEN dataset, our method can preserve the mountain shape in the background, even at the 71st frame (frame \u02c6 x70). The generated video examples and additional experimental results are in our Supplementary Materials. 5. Conclusion In this paper, we propose a zero-shot text-conditioned image-to-video framework, TI2V-Zero, to generate videos by modulating the sampling process of a pretrained video diffusion model without any optimization or fine-tuning. Comprehensive experiments show that TI2V-Zero can achieve promising performance on multiple datasets. While showing impressive potential, our proposed TI2VZero still has some limitations. First, as TI2V-Zero relies on a pretrained T2V diffusion model, the generation quality of TI2V-Zero is constrained by the capabilities and limitations of the pretrained T2V model. We plan to extend our method to more powerful video diffusion foundation models in the future. Second, our method sometimes generates videos that are blurry or contain flickering artifacts. One possible solution is to apply post-processing methods such as blind video deflickering [30] or image/video deblurring [51] to enhance the quality of final output videos or the newly synthesized frame in each generation. Finally, compared with GAN and standard video diffusion models, our approach is considerably slower because it requires running the entire diffusion process for each frame generation. We will investigate some faster sampling methods [29, 32] to reduce generation time."
16
+ }
intro_8K/test_introduction_short_2404.16325v1.json ADDED
@@ -0,0 +1,17 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16325v1",
3
+ "title": "Semantic Segmentation Refiner for Ultrasound Applications with Zero-Shot Foundation Models",
4
+ "abstract": "Despite the remarkable success of deep learning in medical imaging analysis,\nmedical image segmentation remains challenging due to the scarcity of\nhigh-quality labeled images for supervision. Further, the significant domain\ngap between natural and medical images in general and ultrasound images in\nparticular hinders fine-tuning models trained on natural images to the task at\nhand. In this work, we address the performance degradation of segmentation\nmodels in low-data regimes and propose a prompt-less segmentation method\nharnessing the ability of segmentation foundation models to segment abstract\nshapes. We do that via our novel prompt point generation algorithm which uses\ncoarse semantic segmentation masks as input and a zero-shot prompt-able\nfoundation model as an optimization target. We demonstrate our method on a\nsegmentation findings task (pathologic anomalies) in ultrasound images. Our\nmethod's advantages are brought to light in varying degrees of low-data regime\nexperiments on a small-scale musculoskeletal ultrasound images dataset,\nyielding a larger performance gain as the training set size decreases.",
5
+ "authors": "Hedda Cohen Indelman, Elay Dahan, Angeles M. Perez-Agosto, Carmit Shiran, Doron Shaked, Nati Daniel",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV",
11
+ "cs.AI"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "Semantic AND Segmentation AND Image",
15
+ "gt": "Ultrasound is a popular medical imaging modality used to image a large variety of organs and tissues. Ultrasound is often the preferred choice due to its non-radiative and non-invasive nature, relatively easy and fast imaging procedure, and lower costs. Automating the diagnosis or highlighting relevant areas in the image will contribute to faster workflows and potentially more consistent and accurate diagnoses. Artificial Intelligence (AI) has demonstrated remarkable success in automatic medical imaging analysis. Compared to classical methods, previous work based on convolutional neural networks on various medical imaging tasks, such as classification and segmentation, have shown state-of-the-art results [1, 2, 3, 4]. However, effective deep learning segmentation algorithms for medical images is an especially challenging task due to the scarcity of high-quality labeled images for supervision. Moreover, in medical imaging it is often the case that identification of findings regions, namely regions of potentially pathological visual anomalies, having neither a clear boundary nor a typical geometry or position is much more challenging than the identification of an anatomy in its context. Findings are also typically rare, which brings to light the challenge of training such models in limited data regimes. \u2217Corresponding author, e-mail: [email protected]. \u2020These authors have contributed equally to this work. 1Dept. of AI/ML Research, GE Healthcare, Haifa, Israel. 2Dept. of Clinical Applications, Point of Care Ultrasound & Handheld, Texas, USA. 3Dept. of Clinical Applications, Point of Care Ultrasound & Handheld, Wisconsin, USA. arXiv:2404.16325v1 [cs.CV] 25 Apr 2024 Semantic Segmentation Refiner for U/S Applications with Zero-Shot FMs COHEN H ET AL. Figure 1: A high-level illustration of our semantic segmentation refinement method with zero-shot foundation models. A pre-trained segmentation model predicts a semantic segmentation for each class of an input image. In this example, classes comprise anatomies and pathologies in an ultrasound image, and the coarse segmentor output depicts the predicted semantic segmentation of a pathology. A prompt selection model selects positive and negative points. Consequently, a zero-shot semantic segmentation mask of the pathology is predicted by a foundation segmentation model, prompted by the selected points for the input image. Positive prompt points are depicted in red, and negative prompt points are depicted in blue. The pathology semantic segmentation prediction is highlighted in red. For illustration purposes, the muscle is highlighted in purple, the tendon in yellow, and the bone in green. The freeze symbol indicates preventing gradients from being propagated to the model weights. Recently, new segmentation models have emerged. Trained on data at huge scales, these foundation models aim to be more generic rather than tailored to specific datasets. The Segment Anything Model (SAM) [5] is a foundational model demonstrating zero-shot generalization in segmenting natural images using a prompt-driven approach. The SonoSAM [6] foundational model adapts SAM to ultrasound images by fine-tuning the prompt and mask decoder [6]. Although fine-tuning methods often improve the results on target datasets [7] they essentially downgrade the generalization capabilities of the foundation model. Further, a significant domain gap between natural and medical images, ultrasound images in particular[8], hinders fine-tuning models trained on natural images to the task at hand [7]. In this work, we address the performance degradation of segmentation models in low-data regimes and derive a novel method for harnessing segmentation foundation models\u2019 ability to segment arbitrary regions. Our semantic segmentation refinement method comprises two stages: First, a coarse segmentation is predicted by a model trained on a small subset of the training data. In the second stage, our novel points generation from a coarse pathology segmentation algorithm is used to prompt a segmentation foundation model. Positive prompt points are selected using a partition around medoids method as the most representative pathology points. Negative prompt points are selected by a prompt selection optimization algorithm that identify the context anatomy. Importantly, we do not fine-tune the foundation model to our dataset, i.e., it produces a zero-shot segmentation. The end-to-end pipeline is illustrated in Fig. 1. The method\u2019s advantages are brought to light on varying degrees of low-data regimes experiments on a small-scale images dataset, yielding a larger performance gain compared to a state-of-the-art segmentation model [9] as the training set size decreases. Further, ablation studies validate the effectiveness of our semantic segmentation refinement model. Our approach applies to other ultrasound-based medical diagnostics tasks. The paper is organized as follows: Section 2 presents the semantic segmentation task and leading approaches. Our method is presented in Section 3, and the experimental setup is presented in Section 4. Section 5 presents the results and ablation studies on a discontinuity in tendon fiber (DITF) pathology finding task in a musculoskeletal ultrasound (MSK) dataset, and the conclusions are presented in Section 6.",
16
+ "main_content": "2.1 Semantic Segmentation Models Semantic segmentation aims to assign a label or a class to each pixel in an image. Unlike image classification, which assigns a single label to the entire image, semantic segmentation provides a more detailed understanding of the visual scene by segmenting it into distinct regions corresponding to objects or classes. This is an essential technique for applications, such as autonomous vehicles, medical image analysis, and scene understanding in robotics. Like other computer vision tasks, deep learning has demonstrated state-of-the-art results in the semantic segmentation of medical images. The semantic segmentation problem can be formulated as follows: Given an image I \u2208RC\u00d7H\u00d7W , our goal is to train a deep neural network to predict the pixel-wise probability map SN\u00d7H\u00d7W of the classes in the dataset, where N is the number of classes in the dataset. 2 Semantic Segmentation Refiner for U/S Applications with Zero-Shot FMs COHEN H ET AL. DeepLabV3 [9] represents a distinctive approach in semantic image segmentation. Utilizing dilated convolutions, the model strategically enlarges the receptive field and manages the balance between global and local features through padding rates. Notably, the spatial pyramid pooling module proposed by the authors aggregates features from dilated convolutions at various scales, enhancing contextual information. Distinctive from encoder-decoder architectures such as the U-Net [10], it is built upon a robust pre-trained encoder, contributing to its success in generating accurate and detailed segmentation masks across diverse applications. Since DeepLabV3 remains a staple choice for a performant semantic segmentation model, we adopt it as our method\u2019s coarse segmentor. 2.2 Semantic Segmentation Foundation Models Foundation models are trained on broad data at a huge scale and are adaptable to a wide range of downstream tasks [11, 12, 13]. The Segment Anything Model (SAM) [5] emerged as a versatile foundation model for natural image segmentation. Trained on a dataset of over 11 million images and 1B masks, it demonstrates impressive zero-shot generalization in segmenting natural images using an interactive and prompt-driven approach. Prompt types include foreground/background points, bounding boxes, masks, and text prompts. However, SAM achieves subpar generalization on medical images due to substantial domain gaps between natural and medical images [14, 15, 16, 17, 18]. Moreover, SAM obtains the poorest results on ultrasound compared to other medical imaging modalities [15]. These results are attributed to the ultrasound characteristics, e.g., the scan cone, poor image quality, and unique speckled texture. A common methodology to overcome this generalization difficulty is to fine-tune a foundation model on a target dataset [19]. An efficient fine-tuning strategy is Low-Rank Adaptation (LoRA) [20], which has been adopted in fine-tuning SAM to relatively small medical imaging datasets [21, 22, 23]. SonoSAM [6] demonstrates state-of-the-art generalization in segmenting ultrasound images. Fine-tuned on a rich and diverse set of ultrasound image-mask pairs, it has emerged as a prompt-able foundational model for ultrasound image segmentation. Notably, adapting prompt-based models to medical image segmentation is difficult due to the conundrum of crafting high-quality prompts [15]. Manually selecting prompts is time-consuming and requires domain expertise. Methods of extracting prompts from ground-truth masks [23] cannot be applied during inference as they rely on full supervision. Auto-prompting techniques rely on the strong Vision Transformer (ViT-H) image encoder [24] semantic representation capabilities, and suggest generating a segmentation prompt based on SAM\u2019s image encoder embedding [18, 25]. Other strategies suggest replacing the mask decoder with a prediction head requiring no prompts [16]. Nevertheless, SAM\u2019s zero-shot prediction accuracy is typically lower than that of the segmentation models trained with fully supervised methods [26]. Motivated by the generalization abilities of segmentation foundation models, we devise a points selection algorithm from coarse segmentation masks that allows harnessing prompt-based models to ultrasound segmentation in a zero-shot setting. 3 Method In this section, we present our method for refining a coarse pathology segmentation mask with zero-shot foundation models. This method can be adapted to natural images, as well as to the medical imaging domain. Herein, we validate it based on a specific challenging task of segmenting a discontinuity of the tendon fiber finding (Sec. 4.1), which is the main ultrasound finding of a tendon partial tear pathology. Our key intuition is that although the performance of segmentation models decreases significantly in low-data regimes, even such coarse segmentation masks can be utilized for extracting high-quality prompts that harness segmentation foundation models\u2019 capabilities. Importantly, we use the publicly available pre-trained foundation models without further modification. The flexibility of our method allows for incorporating either SonoSAM or SAM. Though the above-mentioned foundation models allow several types of prompts, we focus on foreground (positive) and background (negative) prompt points. Our method makes use of the ground-truth tendon segmentation, denoted T gt. Since the tendon in the context of the DIFT pathology is usually easy to segment due to its typical geometry and position and relatively simple data acquisition and labeling, we assume that strong segmentation models exist for this task and that their output can be used in lieu of the ground-truth segmentation. With that, we introduce our two-stage method, summarized in Algorithm 1. First, a segmentation model [9] is trained on a random subset of the training data. A coarse semantic segmentation is then predicted for a given test image. Then, k positive and k negative prompt points are selected to prompt a segmentation foundation model. We next describe our prompt points selection algorithm in greater detail. 3 Semantic Segmentation Refiner for U/S Applications with Zero-Shot FMs COHEN H ET AL. Algorithm 1 The Semantic Segmentation Refiner Method Input: \u2022 Input image I \u2022 Ground-truth tendon mask T gt \u2022 Frozen SonoSAM model \u2022 Pre-trained segmentation model S Output: \u2022 Refined pathology segmentation mask O 1: Coarse segmentation mask \u02dc O \u2190S(I) 2: Positive points selection ptspos \u2190k-medoids( \u02dc O) 3: Modified ground-truth tendon mask T \u02dc gt \u2190T gt \\ \u02dc O 4: Initialize complementary problem 5: \u00af ptsneg \u2190ptspos, \u00af ptspos \u2190random from T \u02dc gt 6: for t in range(1, T) do 7: Optimize \u00af ptspos as parameters: 8: \u2113ce( \u00af pts, T \u02dc gt) = \u2212T \u02dc gt log (SonoSAM(I, \u00af pts)) 9: Update \u00af ptspos \u2190\u00af ptspos 10: end for 11: Flip: ptsneg \u2190\u00af ptspos 12: Output O \u2190SonoSAM(I, pts) 3.1 Positive Points Selection We aim to select points that are the most representative of the coarse pathology segmentation mask as the positive prompt points. This selection objective translates to the partitioning around the medoids method\u2019s approach. This approach is preferable compared to a selection based on a minimization of the sum of squared distance (i.e., the k-means) in the case of multiple pathology blobs since the latter might select centroids in between pathology blobs. Thus, k mass centers of the coarse pathology segmentation masks are selected as positive points using the kmedoids clustering algorithm [27]. To reduce the probability of selecting false positive points, a threshold is applied to the coarse pathology segmentation masks before selection. We denote the selected positive points as ptspos = {ptspos i }k i=1. This process is illustrated in Fig. 2. Figure 2: An illustration of our positive (foreground) points selection module, depicted in red. A threshold is applied to the coarse segmentation prediction. A kmedoids clustering algorithm is applied to select k positive pathology points. 3.2 Negative Points Refinement We take inspiration from hard negative selection literature [28, 29, 30], and aim to select the most informative negative points w.r.t. the foreground object. To that end, we formulate a complementary prompt points selection problem w.r.t. the background given the k selected foreground points (3.1), \u00af pts = { \u00af ptspos, \u00af ptsneg}. When the foreground is the pathology, the background is the context anatomy, herein the background is a tendon anatomy. The complementary prompt points selection is optimized to decrease the binary cross-entropy (BCE) loss between the foundation model\u2019s zero-shot tendon segmentation mask prompted on these points and a modified ground-truth tendon 4 Semantic Segmentation Refiner for U/S Applications with Zero-Shot FMs COHEN H ET AL. mask, denoted T \u02dc gt. To avoid predicting tendon points within foreground pathology, the values of the ground-truth tendon mask overlapping with the coarse pathology detection are modified to zero. As points initialization for this complementary problem, we flip the labels of ptspos such that they correspond to negative points, \u00af ptsneg \u2190ptspos. Further, k points are selected at random from T \u02dc gt, denoted \u00af ptspos. While freezing the foundation model, the point prompt optimization is performed for a maximum of 100 steps or until convergence. The optimization is performed such that the selected points are optimal w.r.t. the complementary problem of the tendon segmentation given the foreground pathology predicted by the coarse segmentor. Denote an input image as I, SonoSAM\u2019s zero-shot tendon segmentation given input I and its corresponding optimized prompt points \u00af pts as SonoSAM(I, \u00af pts). Then, the BCE loss of the complementary problem is: \u2113ce( \u00af pts, T \u02dc gt) = \u2212T \u02dc gt log (SonoSAM(I, \u00af pts)) . (1) We used the AdamW [31] optimizer, with learning rate of 4e\u22123, and standard betas to optimize the positive points \u00af ptspos. The optimized positive tendon points selected by this model serve as k negative prompt points, ptsneg \u2190\u00af ptspos, towards the foreground pathology segmentation. This process is illustrated in Fig. 3. Figure 3: An illustration of our negative (background) points selection module. In addition to the positive selected points (Sec. 3.1), negative points are selected randomly from the modified ground-truth tendon mask. The points are flipped to initialize the settings of the complementary tendon segmentation problem. Our points optimization model optimizes prompt points selection w.r.t. the complementary tendon zero-shot segmentation problem (Sec. 3.2). Finally, prompt points are again flipped to account for positive and negative prompt points towards the pathology segmentation. 4 Experiments 4.1 Dataset The data used for this study is ultrasound images of tendons around the shoulder joint. Specifically, we acquired images of the supraspinatus tendon, infraspinatus tendon, and subscapularis. The images are acquired from both the short-axis and the long-axis views. The main parameters of our data are summarized in Table 1. In this work, we aim to segment the partial tear pathology within the tendon, thus our data consists of images paired with the corresponding segmentation mask of anatomies and pathologies. Our data includes semantic labeling of the following classes: DITF, bone, tendon, and muscle. Table 2 summarizes the semantic labeling statistics. In total, our dataset includes 388 images from 124 subjects, 80% of which are used for training, and the remaining 20% are used for validation. The test set comprises 40 images. To prevent data leakage, the test set images are collected from subjects that do not appear in the train data. All images are resized to a constant resolution of 512x512 pixels. All data comply with the Institutional Review Board (IRB) data sharing agreement. 4.2 Evaluation Metric We use the Dice similarity coefficient [32] evaluation metric, commonly used in medical image segmentation research to measure the overlapping pixels between prediction and ground truth masks. The Dice similarity coefficient is defined as 2|A\u2229B| |A|+|B|, where A and B are the pixels of the prediction and the ground truth respectively. 4.3 A Segmentation Model In Low-Data Regimes In this experiment, we investigate the performance and properties of a state-of-the-art semantic segmentation model with a limited training set size of MSK ultrasound images. Our goal is two-fold: (i) to validate our conjecture that high-quality 5 Semantic Segmentation Refiner for U/S Applications with Zero-Shot FMs COHEN H ET AL. (a) 100% of train set. (b) 35% of train set. (c) 15% of train set. (d) 8% of train set. (e) 5% of train set. Figure 4: Positive pathology points retainment in increasingly coarse segmentation mask prediction and our method\u2019s results. Top row: Pathology segmentation mask predicted with a DeepLabV3 model trained on varying percent of the training set. Middle row: Positive points selected on binary pathology mask by our positive points selection module. Bottom row: An illustration of our method\u2019s pathology segmentation output, highlighted in red, compared to the ground-truth segmentation, highlighted in green. The tendon area is shown at the bottom left image for reference. Our method achieves for this test image a Dice similarity coefficient of 0.89, 0.71, 0.73, 0.72, 0.50 when the coarse segmentor is trained on 100%, 35%, 15%, 8%, 5% of the train set, respectively. Table 1: Summary of MSK pathology segmentation dataset main parameters. Parameters/Dataset MSK Ultrasound Images Total frames 388 Original frame size 1536 X 796 or 1044 X 646 pixels Subjects 90 (52.82% males, 47.18% females) Average BMI 24.69 \u00b1 8.92 Vendor GE Healthcare\u2122 Ultrasound system Logiq S8\u2122, Eagle\u2122, LogiqE10\u2122 Data collection Linear Collection Sites USA, Israel prompts can be extracted even from a coarse semantic segmentation prediction, and (ii) to measure the performance degradation in increasingly low-data regimes. These properties are the basis of our two-stage method for exploiting the advantages of a prompt-able foundation segmentation model. Concretely, for an input image I \u2208R512\u00d7512 the segmentation model prediction S \u2208R7\u00d7512\u00d7512 corresponds to a semantic segmentation for each class as detailed in Table 2. 4.4 Segmentation Refinement With Zero-Shot Foundation Models Positive Points Selection A combination of a constant and an adaptive threshold is applied to the coarse segmentation prediction prior to positive point selection. Denote by c0 the coarse segmentation mask prediction at the foreground channel (DITF in our case). 6 Semantic Segmentation Refiner for U/S Applications with Zero-Shot FMs COHEN H ET AL. Table 2: Semantic labeling statistics at the 512X512 patches level. M: Million. Class MSK Type Number of images (% of total) Total Area (pixels) Mean fraction out of total patch area Discontinuity in tendon fiber Pathology 179 (46.13%) 1.11M 1.09% Bone 288 (74.22%) 2.75M 2.7% Tendon Anatomy 388 (100%) 10.64M 10.46% Muscle 388 (100%) 28.13M 27.65% We apply a double thresholding mechanism to disregard the noise in the prediction. \u02dc c = c0 > tmin (2) c = \u02dc c > 0.4 \u2217max(\u02dc c) (3) The initial threshold screens predictions that lack sufficient global (cross-classes) certainty, when the minimum threshold is set to tmin = 0.15. The second thresholding term adaptively screens all predictions that lack sufficient local (classwise) certainty. Further, we set the k-medoids++ medoid initialization method [33] which selects more separated initial medoids than those selected by the other methods. The hyper-parameter k is adaptively set such that the sum of distances of samples to their closest cluster center (inertia) is minimized, k \u2208[4, 6]. Negative Points Refinement We deploy in our experiments the SonoSAM semantic segmentation foundation model since it is expected to better generalize to zero-shot segmentation of ultrasound images than SAM. Due to the randomness in the initialization of the complementary positive points \u00af ptspos selection problem, evaluation is performed over 10 random initialization. 4.5 Training Procedure Our coarse segmentor is DeepLabV3 [9], a state-of-the-art convolutional approach to handle objects in images of varying scales, with a ResNet-50 backbone [34]. As our complete dataset consists of only 275 training images, the model is pre-trained on the ImageNet dataset [35]. To evaluate our method across different data regimes we trained our coarse segmentor on varying n percentage of the training data, n \u2208[5, 8, 12, 20, 35, 60, 100], sub-sampled at random. The model is trained with equally weighted BCE loss and a Dice similarity coefficient loss between the predicted and ground-truth segmentation for each class. Each such experiment is trained for 100 epochs, where the weights of the maximal validation loss have been selected for testing. We used the robust AdamW [31] optimizer, with no learning rate scheduler and parameters of \u03b21 = 0.9, \u03b22 = 0.999 and learning rate of 4e\u22123. The test set remains constant across the different training experiments. The model training and evaluation code is implemented with the PyTorch [36] framework. 5 Results 5.1 Semantic Segmentation Model In Low-Data Regimes The results of this experiment validate our conjecture that positive pathology points are consistently selected in increasingly coarse segmentation mask predictions. As the segmentation model is trained on an increasingly smaller training set, the segmentation mask prediction becomes coarse: the pathology segmentation boundaries become less defined and its prediction probability decreases (Fig. 4, top row). Nevertheless, the positive pathology points selected by our method remain generally consistent (Fig. 4, middle row). Consistent with these results, we find that the average Dice similarity coefficient of the segmentation model decreases rapidly when the model is trained on increasingly smaller training set sizes (Fig. 5, \u2018Segmentation Model\u2019). These results validate our method\u2019s motivation and approach. 5.2 Semantic Segmentation Refinement With Zero-Shot Foundation Model Fig. 5 summarizes the results of our method in comparison with those of the baseline segmentation model in various training set sizes. Our method\u2019s average Dice is higher than the baseline\u2019s in every training set size. Moreover, 7 Semantic Segmentation Refiner for U/S Applications with Zero-Shot FMs COHEN H ET AL. our method\u2019s performance gain is larger as the training set size decreases (\u223c10% average Dice increase in 5% and 8% training set sizes), substantiating the advantage of our method in low-data regimes. Our method\u2019s pathology segmentation output in varying training set sizes compared to the ground-truth segmentation is illustrated in Fig. 4, bottom row. 100.0 60.0 35.0 20.0 15.0 12.0 8.0 5.0 Percent of the training set size (%) 0.20 0.25 0.30 0.35 0.40 0.45 Average Dice score Segmentation Model (DeepLabV3) Ours Figure 5: A summary of the average DITF Dice similarity coefficient of methods in various training set sizes. Depicted are the results of the baseline segmentation model[9] and our segmentation refinement with zero-shot SonoSAM foundation model. Error bars depict the standard deviation of our method\u2019s statistics. To analyze the stochasticity effect of our method\u2019s negative points random initialization (Sec. 3.2), we compare our method\u2019s DITF Dice score statistics over ten random initialization and the baseline segmentation model\u2019s average DITF Dice similarity coefficient. Results show that our method\u2019s performance is robust, exhibiting relatively low standard deviation in all train set sizes (Fig. 5). Additionally, our method\u2019s mean DITF Dice surpasses that of the baseline\u2019s in all but one train set size, and is higher by 4% on average than the baseline. 5.3 Ablation Studies In this section, we present ablation studies substantiating the effectiveness of our negative prompt points refinement (NPPR) model, as well as examining our method\u2019s performance when replacing the SonoSAM foundation model with SAM. 5.3.1 SAM vs. SonoSAM as a segmentation foundation model In this study, we investigate the impact of replacing SonoSAM with SAM as the zero-shot semantic segmentation foundation model in our method. Table 3 shows that harnessing SonoSAM\u2019s generalizability for MSK ultrasound images is preferable to SAM in low-data regimes and on par with SAM otherwise. 5.3.2 Random negative prompt points section In this experiment, we investigate the effectiveness of our negative prompt points refinement model by comparing it to a random negative prompt points selection algorithm. Concretely, k negative prompt points are randomly selected from the modified ground-truth tendon mask, T \u02dc gt. Our positive points selection approach remains unchanged. Results in Table 3 demonstrate that this naive selection algorithm achieves subpar average Dice scores across almost all train set sizes, especially in low-data regimes. These results establish the advantage of our negative points optimization algorithm. 6 Conclusions In this paper, we address the performance degradation of a state-of-the-art semantic segmentation model in low-data regimes. A novel prompt points selection algorithm optimized on a zero-shot segmentation foundation model was presented, as a means of refining a coarse pathology segmentation. Our method\u2019s advantages are brought to light in varying degrees of low-data regimes experiments, demonstrating a larger performance gain compared to the baseline segmentation model as the training set size decreases (Fig. 5). 8 Semantic Segmentation Refiner for U/S Applications with Zero-Shot FMs COHEN H ET AL. Table 3: Ablation studies: quantitative segmentation test results of the mean DITF Dice similarity coefficient (DSC) for different approaches over 10 run cycles. Our method is using zero-shot SonoSAM [6] foundation model. A higher DSC is better, with the best scores marked in bold. NPPR: Negative Prompt Points Refinement. Methods Percent of the training set 100% 60% 35% 20% 15% 12% 8% 5% Ours without NPPR 44.6% 40.0% 34.2% 27.8% 30.3% 27.5% 20.7% 16.6% Ours with SAM 45.5% 41.6% 39.7% 29.3% 32.9% 28.3% 27.6% 23.0% Ours 46.3% 39.3% 39.6% 31.9% 32.8% 31.8% 32.0% 24.6% Further, we validate our method\u2019s robustness to negative point initialization stochasticity and study the effectiveness of our prompt points refinement model (Section 5.3.2). Results demonstrate that the generalization of SonoSAM in extremely low data regimes is better than SAM\u2019s (Section 5.3.1). Our approach can be used for other ultrasound-based medical diagnostics tasks. An inherent limitation of our two-stage method is that its latency is higher than that of a core segmentation model."
17
+ }
intro_8K/test_introduction_short_2404.16333v1.json ADDED
@@ -0,0 +1,18 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16333v1",
3
+ "title": "AI Coders Are Among Us: Rethinking Programming Language Grammar Towards Efficient Code Generation",
4
+ "abstract": "Besides humans and machines, Artificial Intelligence (AI) models have emerged\nto be another important audience of programming languages, as we come to the\nera of large language models (LLMs). LLMs can now excel at coding competitions\nand even program like developers to address various tasks, such as math\ncalculation. Yet, the grammar and layout of existing programs are designed for\nhumans. Particularly, abundant grammar tokens and formatting tokens are\nincluded to make the code more readable to humans. While beneficial, such a\nhuman-centric design imposes an unnecessary computational burden on LLMs where\neach token, either consumed or generated, consumes computational resources. To\nimprove inference efficiency and reduce computational costs, we propose the\nconcept of AI-oriented grammar, which aims to represent the code in a way that\nbetter suits the working mechanism of AI models. Code written with AI-oriented\ngrammar discards formats and uses a minimum number of tokens to convey code\nsemantics effectively. To demonstrate the feasibility of this concept, we\nexplore and implement the first AI-oriented grammar for Python, named Simple\nPython (SimPy). SimPy is crafted by revising the original Python grammar\nthrough a series of heuristic rules. Programs written in SimPy maintain\nidentical Abstract Syntax Tree (AST) structures to those in standard Python,\nallowing execution via a modified AST parser. In addition, we explore methods\nto enable existing LLMs to proficiently understand and use SimPy, and ensure\nthe changes remain imperceptible for human developers. Compared with the\noriginal Python, SimPy not only reduces token usage by 13.5% and 10.4% for\nCodeLlama and GPT-4, but can also achieve equivalent, even improved,\nperformance over the models trained on Python code.",
5
+ "authors": "Zhensu Sun, Xiaoning Du, Zhou Yang, Li Li, David Lo",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "cs.SE",
9
+ "cats": [
10
+ "cs.SE",
11
+ "cs.AI",
12
+ "cs.PL"
13
+ ],
14
+ "label": "Original Paper",
15
+ "paper_cat": "LLM Fairness",
16
+ "gt": "High-level programming languages, such as the widely adopted Python, are designed with two types of audiences in mind [6]: ma- chines that compile and execute programs and humans who read, comprehend, and write programs. Machines focus on the opera- tional semantics of programs, while humans additionally emphasize programs\u2019 readability, a key factor in understanding source code. For example, one of the guiding design principles for Python [31] def sum ( nums ) : if len ( nums ) = = 0 : raise Value Error sum _ num = 0 for num in nums : sum _ num num return = num sum _ + Good smell \"Looks\" wordy Bad smell def sum ( nums ) : if len ( nums ) = = 0 : raise Value Error sum _ num = 0 for num in nums : sum _ num num return = num sum _ + \"Looks\" cleaner Figure 1: An illustration of how LLMs and human program- mers perceive the source code. is that \u201creadability counts.\u201d Consequently, these languages incorpo- rate numerous human-centric design elements within their gram- mar. For example, programming languages utilize explicit delimiters to separate code structures. These elements, while enhancing hu- man readability, may not be necessary to convey the program\u2019s operational semantics. Recently, the audiences for programming languages have ex- panded to include AI models, particularly Large Language Models (LLMs) that can analyze, generate, and execute code. This is evident by the impressive performance that LLMs achieved in code genera- tion [16]. For example, a recently released LLM, AlphaCode2 [3], is reported to perform better than 85% of human participants in a pro- gramming competition. Moreover, many LLM-powered assistants, such as ChatGPT [28] and Bard [14], are now equipped with code execution environments, which enable them to execute generated code and provide responses based on the results. Thus, the role of LLMs has evolved from mere code generators to active \u201cdevelopers\u201d that utilize programming to accomplish various tasks, including mathematical computations and file processing. This paradigm shift indicates a new era in which AI models emerge as an important group of users of programming languages. While AI models have taken on the new role, the design of code grammar is yet to evolve to accommodate their unique needs. The readability of the source code could impose an additional compu- tational burden on LLMs to read and generate programs, as extra tokens are particularly placed to make the code layout more friendly arXiv:2404.16333v1 [cs.SE] 25 Apr 2024 , , Zhensu Sun, Xiaoning Du, Zhou Yang, Li Li, and David Lo for reading. However, readability may not be essential for LLMs to perform coding tasks. Studies have revealed that code models do not capture much information relevant to readability [43], and readability-enhancing symbols like \u201c:\u201d received significantly lower attention compared to other elements such as variable names[50]. We illustrate how a program is perceived by humans and AI models in Figure 1. When certain elements that enhance readability are omitted from the code while preserving its underlying semantics, it becomes more challenging for humans to interpret, yet AI models can process the code more efficiently. The observation makes us wonder: What is a suitable grammar for AI models? Exploring this question is vital for optimizing the efficiency of LLMs and reduc- ing energy waste in dealing with unnecessary tokens, especially given that the high operational cost of LLMs sets a big challenge for providers to generate profit [17] from them. As AI models consume and generate source code in a token-by-token manner, with one feed-forward process for each token, reducing the tokens in code representation holds the potential to proportionally reduce the time and energy cost. This motivates us to propose the concept of AI-Oriented Gram- mar, a grammar specifically designed for AI models instead of humans. The core idea is to derive grammar rules that keep the code representations concise (with a minimal number of tokens to AI models). Notably, the code crafted in this grammar can be parsed with its adapted parser and then executed to obtain the same result as the original grammar. A few challenges are in the way of designing such a new grammar and melting it into AI models. The AI models are expected to not only comprehend code written in this grammar but also generate code following its rules to better serve the goal of efficiency. At the same time, human developers, who are directing the development, expect to work with grammar that they find friendly and are familiar with. Given these challenges, the realization of this concept remains uncertain. To assess the feasibility of AI-oriented grammar, we embarked on an exploratory study. The study seeks to consider the implications and limitations of integrating AI-oriented grammar into the existing code generation workflow. It is guided by three research questions, each addressing a key challenge. RQ1. What is the token reduction capacity of AI-oriented grammar in source code? Whether and to what extent an AI-oriented grammar can reduce the tokens remains an open question. We fill this gap by imple- menting a proof-of-concept AI-oriented grammar and assessing its performance. Specifically, we explore a new grammar for Python, named SimPy, by heuristically modifying the standard grammar. Compared to the standard Python grammar, we prohibit using to- kens popularly hired to style the code appearance, e.g., whitespace and newline, and simplify keywords, operators, and delimiters to a more compact form. The modifications are designed to be simple, as this is the first attempt to explore such AI-oriented grammar. We also developed an AST parser for SimPy that can parse its code into the same AST as standard Python code, as well as a converter for seamless code transitions between SimPy and Python code. A comparative analysis of SimPy and Python\u2019s original grammar was conducted using tens of tokenizers employed by existing LLMs. The findings indicate a notable reduction in token usage when employ- ing SimPy, with decreases ranging between 8.6% and 34.7%, thus reducing the time and computational cost during inference by a similar level [19]. For example, the tokenizer of GPT-4 demonstrates a significantly enhanced efficiency with SimPy, achieving a 10.4% reduction in token size. RQ2. How can AI models understand AI-oriented grammar? Prior research demonstrates that AI models are capable of com- prehending human-centric grammars of existing programming languages [16]. However, how these models can learn AI-oriented grammar remains unexplored. We thus further experiment with SimPy to find an effective way. We explored two different train- ing strategies: directly training a model on a SimPy-based code dataset (converted seamlessly from a Python dataset) and fine- tuning a model, originally trained with a Python dataset, on the SimPy dataset. A control group, where a model is directly trained on the Python code dataset, is also included for comparison. The mod- els trained with either strategy should achieve at least equivalent accuracy compared with the control group. Otherwise, it would be impractical to adopt AI-oriented grammar. For each training strat- egy, we experiment with three models, CodeGen-NL, TinyLlama and Pythia. The experimental results reveal that models initially trained with Python can adapt effectively to SimPy. For instance, our CodeGen model, initially trained on Python, attains a 7.32% Pass@10 on HumanEval, while it even reaches a 9.15% Pass@10 after the fine-tuning on SimPy. RQ3. How can AI-oriented grammar support real-world scenarios? Given that AI-oriented grammar may compromise human read- ability, its application is somewhat restricted. Thus, a remaining challenge for AI-oriented grammar is: how could an AI-oriented grammar be used in real-world scenarios, particularly when human- readable source code is necessary? To address this challenge, we first discuss the basic usage scenario of AI-oriented grammar, i.e., the scenarios where the code generated by the AI models is not intended to be displayed to human users. In such scenarios, such as AI agents [40] for ordinary users who are unfamiliar with pro- gramming, the model just generates code in AI-oriented grammar, executes it, and provides outputs to the user based on execution results. However, there are still many scenarios where human de- velopers need to review the code, such as coding assistants. We thus propose an inference framework for code generation named DualCode. DualCode utilizes a rule-based converter to convert code between these grammars, ensuring that users interact with human- readable code as usual, while the model benefits from the efficiency of AI-oriented grammar. Our tests confirm that DualCode intro- duces negligible latency, with the converter of SimPy processing code under 500 tokens in less than 1.0 ms. We will release the source code once the paper is accepted. The contributions of this paper are summarized as follows: \u2022 We propose the concept of AI-oriented grammar and empirically explore its feasibility and potential, paving the way for future improvements in programming language design that prioritize AI efficiency. AI Coders Are Among Us: Rethinking Programming Language Grammar Towards Efficient Code Generation , , def sum ( nums ) : if len ( nums ) = = 0 : raise Value Error sum _ num = 0 nums : sum _ num num return = num sum _ + for num in Explicit delimiters Visual Coding Style Figure 2: Illustrations of human-centric design elements in Python. \u2022 We implement the first AI-oriented grammar for Python, named SimPy, which can reduce at least 8.3% tokens in Python source code. \u2022 We propose a novel code generation framework, DualCode, ex- panding the applicability of AI-oriented grammar beyond AI-only scenarios with negligible additional latency.",
17
+ "main_content": "In this section, we critically examine the human-centric elements in the grammar of current programming languages and propose the concept of AI-oriented grammar. Following this, we introduce the dataset an empirical study driven by three research questions to explore the feasibility of this concept. 2.1 Human-centric Grammar Design As discussed in Section 1, modern programming languages are predominantly designed with human-centric grammar. This design philosophy originates from the longstanding reality that humans were the only developers for decades. In the current era of LLMs, this human-centric design philosophy has not been significantly challenged. To better understand this orientation, we critically examine the grammar of widely used programming languages, focusing on lexical and syntactical elements that enhance human readability. Below, we summarize the identified patterns and provide examples in Figure 2: Visual Coding Style The programming language grammar is deliberately crafted to accommodate diverse coding styles. Although not mandatory, styles like those recommended in the Python PEP8 guide [44] rely on grammatical support. For example, the coding style requires the programs to be written in multiple lines instead of a single extremely long line, easing human code review on screens. This necessitates several lexical elements: line breaks to separate lines, indents to visualize code blocks, and line continuation characters for splitting long lines. Figure 2 demonstrates these aspects, with line breaks and indents highlighted in purple. Similarly, the coding style suggests surrounding each binary operator with a single white space on either side. Therefore, lexical grammar must accommodate such stylistic elements, even if they may not contribute to the core semantics in parsing. Intuitive Notations The human-centric syntax of programming languages is designed to be intuitively understandable to humans. Common operators like \u201c+\u201d for addition and \u201c=\u201d for assignment are chosen for their familiarity, and derivations like the augmented assignment operator \u201c+=\u201d maintain this intuitive connection. Although potentially more concise symbols could replace these (e.g., using a brand-new symbol \u201c$\u201d for \u201c+=\u201d), they are still deliberately designed to maintain human readability. Similarly, for structural clarity, programming languages often employ explicit delimiters, such as symbols or keywords, to define code structures despite these delimiters not being essential for parsing. For instance, Python\u2019s compound statements, such as the if statement and for statement, use a colon to demarcate the header from the body. While a parser might deduce these components from line breaks alone, the colon acts as a visual aid, as illustrated in Figure 2 where colons are highlighted in red. This emphasis on intuitive notation and explicit delimiters, although not essential for parsing, significantly aids human comprehension. 2.2 AI-Oriented Grammar Grammar is a rule set that defines how the source code should describe the programming language\u2019s semantics in aspects of lexis and syntax, using notations such as symbols and keywords. The primary function of the notations in the grammar is two-fold: to define a program\u2019s structure for machine execution and to enhance visual comprehension for human readability. Given that AI models do not require assistance in visual comprehension, the focus of AI-oriented grammar is solely on structural definition. We thus consider a notation unnecessary for AI models if it does not contribute to accurate parsing by the parser. AI-oriented grammar is designed with indispensable notations. In the design process of a programming language, semantics are defined first, followed by the development of a grammar to represent them. Therefore, employing AI-oriented grammar does not alter the fundamental semantics of the programming language. Codes written in either AI-oriented or human-centric grammar of a programming language can still be parsed into the same AST and then executed. 2.3 Python Code Dataset for Our Study As a newly proposed concept, we are still unclear whether AIoriented grammar can be realized and what scenarios it can be applied to. To address these uncertainties and explore the potential of AI-oriented grammar, we conduct an empirical study guided by three critical research questions, respectively introduced in Section 3, Section 4, Section 5. Our study is centered around Python, the main programming language of the execution environment for LLMs like GPT-4 and Bard to address programming-required tasks. We utilize the Python subset of starcoderdata [23], a filtered variant of The Stack dataset [21], a comprehensive collection of over 20 million code files sourced from open-source GitHub repositories. We keep the code files from the repositories with over 100 stars, resulting in 623,887 code files. The dataset is partitioned into training and validation sets in a 95:5 ratio. We do not create a separate testing set, as we plan to evaluate the model\u2019s performance using other , , Zhensu Sun, Xiaoning Du, Zhou Yang, Li Li, and David Lo def two_sum(nums: list[int], target: int) -> list[int]:\\n chk_map: dict[int, int] = {}\\n for index, val in enumerate(nums):\\n compl = target val\\n if compl in chk_map:\\n return [chk_map[compl], index]\\n chk_map[val] = index\\n return [] Python SimPy 72 tokens 60 tokens Same AST <def_stmt>two_sum nums:list[int] target:int<arrow>list [int]<block_start>chk_map:dict[int int]={}<for_stmt> index,val enumerate(nums)<block_start>compl=target-val <if_stmt>compl<in>chk_map<block_start><return>[chk_map [compl] index]<block_end>chk_map[val]=index<block_end> <return>[]<block_end> Same Execution Results Figure 3: A comparison between Python and SimPy source code, tokenized by GPT-4\u2019s tokenizer. Continuous characters with the same background color represent the same token. Notably, there are no line breaks in SimPy and we add these line breaks in the figure for our human readers. established evaluation datasets. The code snippets in the evaluation datasets are excluded from the training dataset. 3 TOKEN REDUCTION WITH AI-ORIENTED GRAMMAR (RQ1) In this section, we present an instance of AI-oriented grammar to answer RQ1: What is the token reduction capacity of AI-oriented grammar in source code? We propose an AI-oriented grammar for Python as a proof-of-concept (Section 3.1) and then proceed to evaluate the extent of token reduction achievable with this grammar (Section 3.2). 3.1 An AI-oriented grammar for Python To demonstrate the potential benefits of AI-oriented grammar, we propose and implement the first AI-oriented grammar for Python, named Simple Python (SimPy), as a proof-of-concept. SimPy is derived from the original grammar of Python, where we simplify the grammar specification according to the philosophy of AI-oriented grammar. Alongside SimPy, we develop a toolkit including a parser to interpret SimPy source code into Python\u2019s AST, and a converter for seamless code translation between SimPy and Python. 3.1.1 Design. The design of SimPy is guided by the philosophy of AI-oriented grammar, resulting in a grammar that prioritizes AI processing efficiency over human readability. When being tokenized, the code of SimPy is considerably more compact than Python\u2019s, facilitating faster model processing. A comparative illustration of SimPy and Python code, showcasing the same AST but with fewer tokens in SimPy (measured by the tokenizer of GPT-4), is presented in Figure 3. Next, we introduce what is changed to craft SimPy. The modifications to production rules are implemented for the grammar specification of Python 3.12 (the latest version). The modifications for SimPy are driven by our understanding of AI models and the semantics of Python, which is not guaranteed to be optimal but is sufficient to serve as a proof-of-concept demonstration for AI-oriented grammar. Limited by the space, we will only introduce the major modifications here, with the complete grammar specification available in our artifact. Below, we highlight major categories of changes implemented: Replace notations with tokens. In SimPy, we replace keywords and most symbols (e.g., \u201cin\u201d, \u201c>\u201d, \u201c>=\u201d) with distinct token placeholders (e.g., \u201c<in>\u201d, \u201c<g>\u201d, \u201c<ge>\u201d). These placeholders require the corporation from the tokenizer side and should be recognized as individual tokens or as part of a token. This approach results in the creation of 78 new tokens. Some single-character symbols, like \u201c.\u201d, \u201c:\u201d, \u201c=\u201d, are retained as they are unambiguous and represent atomic units already optimized for tokenization. This replacement goes beyond a simple tokenization. Firstly, it eliminates the need for surrounding whitespace, as seen in the conversion from Python\u2019s \u201ca in b\u201d to SimPy\u2019s \u201ca<in>b\u201d. Secondly, it differentiates the same notation used in different contexts, such as \u201c<if_stmt>\u201d and \u201c<if>\u201d, to prevent potential conflicts during the creation of SimPy. Finally, it consolidates multi-character symbols in Python, which may be recognized as multiple tokens, into single tokens in SimPy, ensuring the integrity of their meaning. Restrict coding style. Python utilizes white spaces, line breaks, indents, and line continues in grammar to accommodate a humanfriendly visual coding style. SimPy, on the other hand, streamlines this by preserving only essential separators. Specifically, the line breaks, denoted using \u201c\\n\u201d, are replaced with a unique token \u201c<line_sep>\u201d. This token is omitted if the subsequent line starts with a token that inherently signifies the start of a new line, such as \u201cdef\u201d for function definitions or \u201cclass\u201d for class definitions. In this way, SimPy optimizes token usage for line breaks. Indents are handled using two unique tokens, \u201c<block_start>\u201d and \u201c<block_end>\u201d, placed at the beginning and end of a block, respectively. This ensures that only two tokens are needed, regardless of how many lines the block has. Additionally, non-separating white spaces and line continuation characters are eliminated to further streamline the grammar. Simplify grammar tokens. For every grammar token in every production, we review whether it can be removed, merged with others, or replaced with white spaces. Using white space aligns with the nature of current tokenizers, which usually treat white spaces as part of a token to reduce the token number. In Table 1, we compare the grammar specifications of key productions before and after such simplification and calculate the resulting token reductions. For example, the grammar tokens in the \u201cfunction_def\u201d statement are reduced from 4 of 1, with an additional token being optional. Our goal is to minimize grammar token usage in every production. However, certain design decisions in one production might introduce parsing ambiguities when considering the grammar globally. For example, in Python, strings can be concatenated without separators, such as \u201c\u2018hello\u2019 \u2018world\u2019 \u201d. It causes a conflict when we no longer use commas to separate the elements in a list, such as \u201c[\u20181\u2019 \u20182\u2019 \u20183\u2019]\u201d, causing ambiguity about whether \u201c\u20181\u2019 \u20182\u2019 \u20183\u201d\u2019 is a single AI Coders Are Among Us: Rethinking Programming Language Grammar Towards Efficient Code Generation , , Table 1: Comparison of grammar specifications for Python and SimPy, using the official Python grammar notation ([32]). The grammar tokens are in blue. The table also includes the count of grammar tokens for each production: \u201cN\u201d represents the number of lines, \u201cn\u201d signifies the count of repetitive elements, and \u201c?\u201d indicates that the number of tokens is conditional. Name Grammar Specification #Token block Python NEWLINE INDENT statements DEDENT N+1 SimPy \u2018<block_start>\u2019 statements \u2018<block_end>\u2019 2 function_def Python \u2018def\u2019 NAME [type_params] \u2018(\u2019 [params] \u2018)\u2019 [\u2018->\u2019 expression ] \u2018:\u2019 [func_type_comment] block 4+1? SimPy \u2018<def_stmt>\u2019 NAME [type_params] [params] [\u2018<arrow>\u2019 expression ] [func_type_comment] block 1+1? class_def Python \u2018class\u2019 NAME [\u2018(\u2019 [arguments] \u2018)\u2019 ] \u2018:\u2019 block 2+2? SimPy \u2018<class_stmt>\u2019 NAME [\u2018(\u2019 [arguments] \u2018)\u2019 ] block 1+2? if_stmt Python \u2018if\u2019 named_expression \u2018:\u2019 block elif_stmt 2 SimPy \u2018<if_stmt>\u2019 named_expression block elif_stmt 1 for_stmt Python \u2018for\u2019 star_targets \u2018in\u2019 ~ star_expressions \u2018:\u2019 [TYPE_COMMENT] block [else_block] 3 SimPy \u2018<for_stmt>\u2019 star_targets ~ star_expressions [TYPE_COMMENT] block [else_block] 1 with_stmt Python \u2018with\u2019 \u2018,\u2019.with_item+ \u2018:\u2019 [TYPE_COMMENT] block 2+n SimPy \u2018<with_stmt>\u2019 \u2018 \u2019.with_item+ [TYPE_COMMENT] block 1 try_stmt Python \u2018try\u2019 \u2018:\u2019 block except_block+ [else_block] [finally_block] 2 SimPy \u2018<try_stmt>\u2019 block except_block+ [else_block] [finally_block] 1 while_stmt Python \u2018while\u2019 named_expression \u2018:\u2019 block [else_block] 2 SimPy \u2018<while_stmt>\u2019 named_expression block [else_block] 1 import_from Python \u2018from\u2019 (\u2018.\u2019 | \u2018...\u2019)* dotted_name \u2018import\u2019 import_from_targets 2+n? SimPy \u2018<from_import_stmt>\u2019 (\u2018.\u2019 | \u2018...\u2019)* dotted_name import_from_targets 1+n? simple_stmts Python \u2018;\u2019.simple_stmt+ [\u2018;\u2019] NEWLINE n+1+1? SimPy [\u2018<line_sep>\u2019].simple_stmt+ [\u2018<line_sep>\u2019] n?+1? concatenated string or three separate strings. To address such ambiguities, strategic trade-offs are necessary. For example, we add an additional grammar token, \u201c<concat>\u201d, between the juxtaposed strings so that we can eliminate the commas in a list. Though it leads to an increase in the token number for the juxtaposed strings, it is still beneficial for the overall efficiency of SimPy. 3.1.2 Unambiguity of SimPy . To determine whether a grammar has ambiguity is theoretically undecidable [13]. In practice, parser generator tools are commonly hired to check for ambiguities in grammar, including those of popular programming languages [15]. A parser generator can find a wide range of ambiguities in the grammar, such as conflicts that arise when the parser has two possible actions at one step. Practically, this is almost the best way to check the ambiguity of SimPy. We have successfully generated parsers for SimPy using the GLR (generalized left-to-right rightmost derivation parser) parsing algorithm [22] form tree-sitter [42], where no ambiguity is detected. Next, we provide an analytical discussion about why our transformations are unlikely to introduce ambiguity to the grammar. First of all, the transformations are only made to terminal notations, which act as keywords or delimiters. Changes made to keywords are guaranteed to represent its unique semantics, while changes made to delimiters should not affect the recognition of the construct, and its precedent and subsequent constructs. Case I: New unique notations are added or introduced as replacements. Importantly, different notations are not replaced with the same new notations. To this end, the new notations do not interfere with production rules for which the transformation is not applicable. Given that they are semantically equivalent notations as the original one, the parsing of the affected production rules remains the same. For example, replacing the \u2018NEWLINE INDENT\u2019 in the production rule of block (see Table 1) with \u2018<blcok_start>\u2019 conveys the same semantics that a block is about to start. Case II: Existing notations are removed. Arbitrary removal notations may introduce ambiguity to the grammar. We carefully design a few heuristics when removing notations such that they are unlikely to cause problems. \u2022 Remove notations that hold redundant semantics as their adjacent notations. For example, \u2018:\u2019 in many statements indicates the end of the previous construct and the start of a new construct, e.g., in \u2018if\u2019 named_expression \u2018:\u2019 block elif_stmt. However, the block construct initiates with its own starting symbol, , , Zhensu Sun, Xiaoning Du, Zhou Yang, Li Li, and David Lo making the construct itself distinguishable from any previous construct. Hence, removing \u2018:\u2019 is safe for this case. \u2022 Remove delimiters used to scope a construct when the scope of its precedent and subsequent constructs are clear. For example, the \u2018(\u2019 and \u2018)\u2019 for parameters are actually unnecessary in function_def_raw := \u2018def\u2019 NAME [type_params] \u2018(\u2019 [params] \u2018)\u2019 [\u2018->\u2019 expression ] \u2018:\u2019 [func_type_comment] block. NAME is an atomic token, thus will not interfere the beginning of parameters when type_params is absent. type_params are surrounded by \u2018[\u2019 and \u2018]\u2019, making their presence not an issue for recognizing params. Hence, \u2018(\u2019 can be safely removed. Now, looking at the subsequent constructs, [\u2018->\u2019 expression ], \u2018:\u2019, [func_type_comment], or block possesses a unique indicator of their beginning. Hence, \u2018)\u2019 can be safely removed as well. Another example is the \u2018import\u2019 keyword in import_from := \u2018from\u2019 (\u2018.\u2019 | \u2018...\u2019)* dotted_name \u2018import\u2019 import_from_targets. Since dotted_name is a must and contains no white spaces, hence the white space between dotted_name and import_from_targets can perfectly separate these two constructs. Removing \u2018import\u2019 is also fine. 3.1.3 Semantic equivalence between SimPy and Python. SimPy is designed as a simplified grammar of Python, which means a program written in Python can be equivalently and deterministically transformed to its counterpart in SimPy , and vice versa. In other words, Python and SimPy are semantically equivalent. We prove this statement in Theorem 1. Formally, we define a grammar \ud835\udc3aand a grammar \ud835\udc3a\u2032. \ud835\udc3a\u2032 is obtained via a transformation \ud835\udc47to the production rules in \ud835\udc3a. Given a production rule, \ud835\udc47is restricted to adding, replacing, or removing a terminal notation or a sequence of terminal notations. The transformation between Python and SimPy is an instance complying with this restriction. For example, \ud835\udc47(block := NEWLINE INDENT statements DEDENT ) = block:= \u2018<block_start>\u2019 statements \u2018<block_end>\u2019. The transformations can be easily extended to parse trees, where \ud835\udc47transforms the (production rule) nodes of a tree. Given a parse tree \ud835\udc61, we define its corresponding program as \ud835\udc5d= \ud835\udf0f(\ud835\udc61). We first prove that the parse tree of a program written in \ud835\udc3ais equivalent to the parse tree of its counterpart in \ud835\udc3a\u2032 modulo the transformation \ud835\udc47in Lemma 1. In Theorem 1, we further prove that these program pairs share the same ASTs, and naturally the same semantics. Lemma 1. Given any program \ud835\udc5d, the parse tree \ud835\udc61generated by grammar \ud835\udc3ais equivalent to the parse tree \ud835\udc61\u2032 generated from \ud835\udc5d\u2032 = \ud835\udf0f(\ud835\udc47(\ud835\udc61)) by grammar\ud835\udc3a\u2032 modulo the transformation\ud835\udc47, if no ambiguity exists in \ud835\udc3a\u2032. That is \ud835\udc61\u2032 = \ud835\udc47(\ud835\udc61). Proof. We give the proof by structural induction on \ud835\udc5d. Base case: \ud835\udc5dis an atomic program construct. This case is trivial as we don\u2019t have any transformation for the atomic program. Naturally, the parse trees are identical. Inductive case: We present the proof based on different compound language constructs. Take the block construct of SimPy for an example. Assuming \ud835\udc5d = NEWLINE INDENT statements DEDENT, the parse tree for \ud835\udc5dis \ud835\udc61=block (NEWLINE, INDENT, statements(\ud835\udc610), DEDENT), where \ud835\udc610 is the parse tree for the stmt construct. By definition,\ud835\udc5d\u2032 =\ud835\udc47(\ud835\udc5d) = \u2018<block_start>\u2019\ud835\udc47(statements) \u2018<block_end>\u2019. Obviously, \ud835\udc5d\u2032 still matches the production rule of block in \ud835\udc3a\u2032. Given no ambiguity in \ud835\udc3a\u2032, its parse tree is deterministic. Thus, we have \ud835\udc61\u2032 = block(\u2018<block_start>\u2019, statements(\ud835\udc61\u2032 0), \u2018<block_end>\u2019), where the parse tree for the statements construct is \ud835\udc61\u2032 0. By the induction hypothesis, \ud835\udc47(\ud835\udc610) = \ud835\udc61\u2032 0. Applying \ud835\udc47to \ud835\udc61, we have \ud835\udc47(\ud835\udc61) = \ud835\udc61\u2032. Similarly, we can prove other constructs. Hence, the lemma is proved. \u25a1 Theorem 1. Python and SimPy are semantically equivalent. Proof. Two programs are semantically equivalent if they share the same AST. In the next, we prove that for any program \ud835\udc5din Python, it shares the same AST as its counterpart \ud835\udc5d\u2032 in SimPy , and vice versa. We assume the transformation from Python to SimPy is \ud835\udc47. Assuming the parse tree for \ud835\udc5dis \ud835\udc61and the parse tree for \ud835\udc5d\u2032 is \ud835\udc61\u2032, with Lemma 1, we know that \ud835\udc61\u2032 = \ud835\udc47(\ud835\udc61). ASTs are obtained from parse trees by abstracting away some concrete syntax details but retaining the essential semantic structures. Since \ud835\udc47is a non-semantic transformation, \ud835\udc61and \ud835\udc61\u2032 differ only at the non-semantic nodes, such that the parse trees are processed to the same AST. Hence, we can prove that for any program in Python, its counterpart in SimPy is semantically equivalent to it. Similarly, we can prove that for any program in SimPy , its counterpart in Python is semantically equivalent to it as well. Thus, the theorem is proved. \u25a1 3.1.4 Implementation. Based on the grammar specifications of SimPy, we develop a toolkit for it, including an AST parser for SimPy code and a converter for seamless translation between SimPy and Python source codes. The parser is built upon tree-sitter [42], a popular parser generator tool. We first describe the grammar specification of SimPy in the configuration file of the tree-sitter and then generate the parser. With the help of the GLR algorithm from the tree-sitter, we ensure SimPy resolves all the conflicts and no ambiguity exists. The generated parser can parse the SimPy source code into the AST of Python. Based on this parser, we further implement a converter, where specific conversion rules are established for each node of the AST. From a pragmatic point of view, we test our implemented toolkits by conducting round-trip transformations, where Python source code is first converted into SimPy code and subsequently retranslated back to Python. Our first tests on the Python dataset revealed that, ignoring all whitespace, the textual content of the code remains unchanged after the transformation. In addition, we assess its soundness through execution results. We perform the round-trip transformation to the groundtruth code snippets of HumanEval and run the test cases on both the transformed and the original code. The execution results of all the transformed code and the original code are exactly the same, which also indicates the soundness of our implementation. 3.2 Experiments of RQ1 In this section, we detail the tokenizers employed in our experiments and describe the experimental methodology and its corresponding results. AI Coders Are Among Us: Rethinking Programming Language Grammar Towards Efficient Code Generation , , Table 2: Percentage of token reduction achieved with SimPy. The \u201cCode\u201d and \u201cWeb\u201d in the \u201cVocab Source\u201d column represent the sources for constructing the tokenizer\u2019s vocabulary: code repositories and internet data, respectively. Tokenizer Vocab Source Vocab Size Tokens Python SimPy CodeBert Code 50k 1.33B 0.87B 34.7%\u2193 GPT2 Web 50k 1.33B 0.87B 34.7%\u2193 CodeLlama Web 32k 0.97B 0.84B 13.5%\u2193 WizardCoder Web 32k 0.97B 0.84B 13.5%\u2193 DeepSeek-Coder Web 32k 0.97B 0.84B 12.9%\u2193 CodeGen Web 51k 0.93B 0.82B 12.6%\u2193 CodeT5+ Web 51k 0.93B 0.82B 12.6%\u2193 Codex Web 51k 0.93B 0.82B 12.6%\u2193 CodeT5 Code 32k 0.91B 0.78B 13.8%\u2193 StarCoder Code 49k 0.83B 0.76B 8.6%\u2193 SantaCoder Code 49k 0.83B 0.76B 8.8%\u2193 Replit-code Code 33k 0.82B 0.75B 8.6%\u2193 GPT-3.5 Web 100k 0.71B 0.63B 10.4%\u2193 GPT-4 Web 100k 0.71B 0.63B 10.4%\u2193 3.2.1 Tokenizers. Our experiments encompass a broad spectrum of tokenizers from various LLMs. The main difference between them is the training corpus, leading to different token vocabularies. GPT-2 [35], Codex [8], GPT-3.5 [29], GPT-4 [30]: These tokenizers, released by OpenAI, are trained on a mixed corpus, including both natural language and programming language, with GPT-4 being the latest version offering state-of-the-art performance in various language tasks. CodeLlama [37], WizardCoder [24], DeepSeek-Coder [1]: These tokenizers are derived from the tokenizer of Llama 2 [41] which is also trained on the mixed corpus. SantaCoder [2], StarCoder [23], Replit-code [36]: These tokenizers are specialized for code, having been trained exclusively on programming language datasets, and are thus more adept at handling source code. CodeGen [27], CodeT5 [47], CodeT5+ [46]: Created by Salesforce, these tokenizers are extended based on the vocabulary of GPT2 with additional tokens representing repeating tokens of tabs and white spaces. 3.2.2 Results. To answer RQ1, we conducted an evaluation involving the representation of code files from our Python dataset in both its original grammar and in SimPy, followed by the tokenization using the same tokenizer for each representation. We created the SimPy dataset by converting the Python dataset with our converter. In tokenizing the SimPy code, we modify the tokenizers to include tokens of SimPy in their vocabularies. In total, 14 tokenizers from popular LLMs are evaluated in our experiments, where each tokenizer\u2019s vocabulary source and size are also documented to offer a comprehensive view of SimPy\u2019s performance across different models. By examining the variation in token numbers, we evaluated SimPy\u2019s effectiveness in reducing token size, thus showcasing the potential benefits of AI-oriented syntax. As revealed in table 2, SimPy can reduce the number of tokens by 8.6% to 34.7%, depending on the tokenizers. The GPT-4 and GPT-3.5 tokenizers, which are already the most efficient in representing Python source code, show a further reduction of 10.4% in token count with SimPy. For tokenizers trained on code corpora, such as Replit-code and StarCoder, SimPy achieved a token reduction ranging from 8.6% to 13.8%. Tokenizers trained on web-based corpora like CodeGen and CodeT5 also exhibited significant reductions, between 12.6% and 13.5%. The most pronounced impact of SimPy is observed with the least efficient tokenizers, CodeBert and GPT-2, where a remarkable 34.7% reduction in token count was achieved. These promising results highlight SimPy\u2019s potential to reduce token count for source code representation. As estimated by OpenAI [19], the Floating-point operations (FLOPS) required for generating each token during inference can be regarded as being only relevant to the model size when the context size is fixed. Therefore, a reduction in token count can be directly translated to a decrease in FLOPS at a similar level, resulting in faster inference speeds given the fixed computing speed of the device. Answer to RQ1: AI-oriented grammar, exemplified using SimPy, effectively reduces the number of tokens required for source code representation, with models like GPT-4 benefiting from a 10.4% reduction. Correspondingly, it leads to a speed up and a computing saving during inference at a similar level. 4 MODEL TRAINING WITH AI-ORIENTED GRAMMAR (RQ2) In this section, we aim to answer RQ2: How can AI models understand AI-oriented grammar? We experimentally investigate whether AI models can retain their accuracy when trained with AI-oriented grammar. We describe our training strategies in Section 4.1 and assess their effectiveness on two language models in Section 4.2. 4.1 Training Strategies Training AI models with AI-oriented grammar is a pivotal step to enable the model to effectively deal with source code in this new format. Despite the efficiency gains demonstrated by SimPy, such training should not compromise the model\u2019s accuracy. To explore the feasibility of such training, we experiment with two different strategies. Next, we introduce the strategies in the experiment, from tokenizer refining to model training. Tokenizer Refining SimPy introduces 78 new tokens for the tokenizers to recognize. For example, the \u201cdef\u201d keyword of the original Python grammar is replaced by a token \u201c<def_stmt>\u201d. Given the existing association between the pre-trained model and its tokenizer, completely retraining the tokenizer on SimPy code to optimize token distribution is impractical. Instead, we opt for a more feasible approach: expanding the tokenizer\u2019s vocabulary to include these new tokens. Correspondingly, this modification requires resizing , , Zhensu Sun, Xiaoning Du, Zhou Yang, Li Li, and David Lo the embedding matrix ([vocab size * embedding size]) and the output layer ([hidden state size * vocab size]) to fit the expended vocab size. This expansion introduces a few new parameters, mainly in the output layer, around 78 * hidden_size parameters. For instance, modifying a CodeGen [27] model with a hidden state size of 2048 introduces around 160 thousand new parameters, a negligible increase (less than 0.01%) in the total parameter count. Moreover, the resizing will randomly initialize both the embedding vector for each new token and the weight of the output layer, which will be updated during the model training. Model Training Our study explores two basic training strategies: 1) directly training a model on the SimPy code dataset, referred to as SimPy, and 2) sequentially training a model first on the Python dataset and then on the SimPy code dataset, referred to as Python\u2192SimPy. If such basic strategies work, further improvement in efficiently adapting AI-oriented grammar is completely feasible. Moreover, we construct a control group: directly training a model on the Python code dataset, denoted as Python. The performance of the two strategies should match or surpass the model from the control group; otherwise, they are not practical. To control the variable, all training sessions across the two strategies and the control group are conducted under identical conditions, including the training environment, initial model, and training hyper-parameters. Notably, the SimPy dataset is converted from the Python dataset, ensuring no external data is involved. Moreover, for the Python+SimPy setting, we vary the proportion of the SimPy dataset used, i.e., 10%, 20%, 50%, and 100%, to assess the required volume of data for effective fine-tuning. 4.2 Experiments of RQ2 We first present the experimental setup for RQ2, including the models used, evaluation metrics, and implementation details. Then, we report the experimental results and answer the research questions. 4.2.1 Models. We adopt three widely used models in our research community, namely CodeGen-NL, TinyLlama, and Pythia, whose parameter sizes range between 350M and 1.1B. All these models serve as the initial pre-trained model for our experiments. Though these are not the latest state-of-the-art models, they suffice to validate the feasibility of learning AI-oriented grammar like SimPy. We will further discuss the impact of this decision in Section 7. CodeGen-NL: CodeGen, proposed by Salesfore [27], is an opensourced language model designed for code generation. It undergoes a multi-phase training process on different datasets, where the model is first trained with natural language datasets and then code datasets. Our experiments utilize its natural language version (CodeGen-350M-nl), produced after the initial phase of its training process, as the foundation model to conduct our experiments. TinyLlama: TinyLlama [51] is a compact 1.1B language model pre-trained on around 3 trillion tokens, building on the architecture and tokenizer of Llama 2 [41]. It shows competitive performance compared to existing open-source language models of similar sizes. Pythia: Pythia [4] is a suite of LLMs ranging in size from 70M to 12B parameters. It is intended to be used as the baseline for research studies and thus is designed close to currently accepted common practices. Considering the capacity of our computing resources, we use its 1B version. 4.2.2 Evaluation Metrics. We evaluate the model\u2019s performance on the code generation task with the Pass@\ud835\udc58metric on HumanEval. To compute Pass@\ud835\udc58, \ud835\udc58code samples are generated for each problem, and a problem is considered solved if any of the \ud835\udc58samples pass the unit tests. We report the fraction of problems being successfully solved. The HumanEval dataset, developed by OpenAI [8], comprises 164 programming problems, each with a function signature, a docstring, and multiple test cases. Given the function signature and docstring, the model is required to generate the code, which is then tested by executing the test cases. Notably, the function signatures are written using Python\u2019s original grammar. When evaluating the model adapted to SimPy, we convert the function signature into SimPy using the code converter. Similarly, the model-generated SimPy code is subsequently converted into Python to run test cases since the existing testing framework is implemented for Python source code. 4.2.3 Implementation Details. In our experiments, we use the Huggingface Transformers library [48] with Pytorch to implement the models. The experiments of CodeGen-NL are performed on a machine with 48 vCPUs, 512GB RAM, and four RTX A5000 GPUs (24GB RAM), while the other two models are trained on a machine with 28 vCPUs, 200GB RAM, and two RTX A6000 GPUs (48GB RAM). The hyper-parameters of the training are set referring to CodeGen\u2019s hyper-parameters: 8 batch size, 1.8e-4 learning rate, 0.1 weight decay, and 512 context length. During the inference for evaluation, we set the temperature to 0.2 and the top-p to 0.95. 4.2.4 Results. Following the settings of the two strategies (SimPy and Python\u2192SimPy) and the control group (Python), we train the CodeGen-NL, TinyLlama, and Pythia models, respectively. Finally, for each of our initial models, we have six variations: one each for Python and SimPy, and four models for Python\u2192SimPy incorporating 10%, 20%, 50%, and 100% of the SimPy dataset. The performance of these models is evaluated through Pass@1 and Pass@10 metrics on the HumanEval dataset. We report the results in Table 3. Notably, the models trained with SimPy lag behind the Python baseline in terms of accuracy. For example, the Pass@1 and Pass@10 of CodeGen (SimPy) are respectively 2.93% and 5.49%, lower than the ones of CodeGen (Python), which are 4.51% and 7.32%. This could be attributed to SimPy\u2019s limited expressiveness, constraining the models from leveraging knowledge acquired from natural language datasets during pretraining. Consequently, direct training with AI-oriented grammar appears to be an impractical approach. However, the sequential training strategy, starting with Python and then incorporating SimPy, yields comparable or even superior accuracy to the control group. Specifically, CodeGen-NL, TinyLlama, and Pythia models trained with Python\u2192100%SimPy achieve Pass@10 scores of 9.15%, 14.02%, and 10.00%, respectively, outperforming the control group\u2019s 7.32%, 13.41%, and 9.76%. This suggests a successful training with SimPy, demonstrating the feasibility of AI models learning AI-oriented grammar. Interestingly, we observe that the Pythia model, when trained exclusively with 100% SimPy, surpasses the Python baseline on Pass@1. This highlights AI Coders Are Among Us: Rethinking Programming Language Grammar Towards Efficient Code Generation , , Table 3: The Pass@1 and Pass@10 of LLMs on Python and SimPy datasets under varied settings. Python and SimPy denote models trained exclusively on respective datasets. Python\u2192SimPy refers to sequential training on both datasets, with the parenthetical numbers indicating the SimPy dataset\u2019s proportion involved in the training. Model Training Strategy Pass@1 Pass@10 CodeGen-NL Python 4.51% 7.32% 100% SimPy 2.93% 5.49% Python \u219210% SimPy 3.11% 3.66% Python \u219220% SimPy 3.66% 4.27% Python \u219250% SimPy 3.96% 6.71% Python \u2192100% SimPy 4.82% 9.15% TinyLlama Python 10.00% 13.41% 100% SimPy 5.91% 9.76% Python \u219210% SimPy 2.07% 3.66% Python \u219220% SimPy 3.23% 5.49% Python \u219250% SimPy 5.73% 11.59% Python \u2192100% SimPy 10.12% 14.02% Pythia Python 5.79% 9.76% 100% SimPy 7.01% 9.15% Python \u219210% SimPy 1.89% 2.44% Python \u219220% SimPy 3.11% 4.27% Python \u219250% SimPy 4.21% 7.32% Python \u2192100% SimPy 5.67% 10.00% the possibility of learning SimPy without relying on the sequential training strategy. By varying the proportion of the SimPy dataset in the Python\u2192SimPy setting, we found that a substantial dataset is still required by the fine-tuning with SimPy. For instance, TinyLlama (Python\u219250%SimPy) scored 5.73% in Pass@1 and 11.59% in Pass@10, still trailing behind the TinyLlama (Python) scores. We will further discuss this finding in Section 8. Answer to RQ2: AI models, when initially trained with the original grammar and then the AI-oriented grammar, can successfully learn the AI-oriented grammar, retaining or even enhancing their accuracy. For instance, the CodeGen model, originally trained with Python and achieving a 7.32% Pass@10, improved to a 9.15% Pass@10 after the additional training with SimPy. 5 USAGE SCENARIO OF AI-ORIENTED GRAMMAR (RQ3) In this section, we address RQ3: How can AI-oriented grammar support real-world scenarios? We first demonstrate the basic application scenario of AI-oriented grammar, and subsequently, we introduce a novel inference framework designed to broaden the applicability of AI-oriented grammar, followed by an evaluation of the framework\u2019s additional latency. 5.1 Basic usage scenario The source code, when written in AI-oriented grammar, becomes challenging for human interpretation and is therefore not intended for human display. Consequently, the application of AI-oriented grammar is limited to scenarios where human users do not have access to the generated code. A typical scenario is the AI agents, such as AutoGPT [40] and LangChain [7], for regular users rather than developers. For instance, an AI agent tasked with data collection from a website would generate the required crawler script, execute it to gather data, and present the outcomes to the user. End users generally care more about the results than understanding the underlying script since they lack programming knowledge. Therefore, even without additional enhancement, models trained with AI-oriented grammar can be effectively utilized in real-world scenarios. We demonstrate this scenario on the left of Figure 4. In this scenario, an AI-oriented code generated by the model can be executed in two ways: 1) being translated into human-centric code and then executed by its executor; 2) directly being executed by a specific executor for the AI-oriented grammar. Notably, implementing an executor specifically for AI-oriented grammar demands only lightweight engineering efforts as the AI-oriented grammar and its original grammar differ only at the syntax level. Thus, the second method offers a more efficient solution. 5.2 Extended usage scenario Despite the effectiveness of AI-oriented grammar in certain contexts, many code generation scenarios still require the involvement of humans, where human-readable code is required. To fill this gap, we propose an inference framework for code generation named DualCode. DualCode enables human users to interact with code in human-centric grammar, while the model still leverages the efficiency of AI-oriented grammar during the inference process. The fundamental concept of DualCode is to convert the code between AI-oriented grammar and the original grammar of the same programming language. To achieve this goal, a rule-based code converter should be employed to convert source code into AI-oriented grammar for model comprehension and, inversely for user readability. Such a converter is feasible since both the AI-oriented grammar and original grammar describe the same AST. The identical AST allows the code written in the two grammars to be equivalently converted into each other based on the grammar rules. We illustrate the workflow of DualCode on the right of Figure 4. It employs two \u201cgates\u201d: an input converter and an output converter. The input converter translates code written in human-centric grammar into AI-oriented grammar for model processing. Similarly, the output converter reverts AI-generated code into human-readable code for user comprehension. Notably, this environment is only for the code, where other inputs, such as natural language, are unaffected. DualCode is a not complicated framework, enabling the lightweight integration of AI-oriented grammar into existing , , Zhensu Sun, Xiaoning Du, Zhou Yang, Li Li, and David Lo Natural Language Natural Language Human-centric Code AI-oriented Code Executor Program Results Human-centric Code AI-oriented Code Converter Converter Natural Language AI-oriented Code Executor Program Results Natural Language AI System AI System Figure 4: LEFT: the workflow of the basic usage scenarios of AI-oriented grammar. RIGHT: the workflow of the extended usage scenarios of AI-oriented grammar under DualCode, where the code executor of the AI system in the figure is not necessary. Table 4: Comparison of average conversion times between Python and SimPy, and the processing speed of the StarCoder tokenizer, based on Huggingface Tokenizers. Token num Huggingface Converter Encode Decode To SimPy To Python [0, 100) 0.2ms 0.1ms 0.2ms 0.2ms [100, 500) 0.7ms 0.6ms 0.9ms 0.8ms [500, 2000) 2.4ms 2.2ms 3.4ms 3.1ms [2000, 5000) 6.7ms 6.4ms 12.2ms 10.8ms [5000, +\u221e) 23.0ms 23.7ms 75.4ms 57.4ms workflows of AI systems. Though being straightforward, it is proposed and investigated for the first time, bridging the gap between efficient AI-oriented code generation and human readability. 5.3 Experiments of RQ3 Given that the DualCode converter adds extra latency to the inference process, a significant concern arises: excessive latency could render the system impractical for real-world applications. To address the concern, we conduct experiments focusing on the converter\u2019s performance. Specifically, we measure the time taken to convert Python code files into SimPy and then back to Python using the converter. As a reference, we evaluate the processing speed of the StarCoder tokenizer, which is based on the widely acknowledged Huggingface Tokenizers library [26]. For this experiment, we categorized Python code files into five distinct groups, based on their token counts, as follows: [0, 100), [100, 500), [500, 2000), [2000, 5000), and [5000, +\u221e). These token counts are determined using the StarCoder tokenizer [23] on the Python code. We calculate the average processing time for each group, both for the converter and the StarCoder tokenizer. The findings, presented in Table 4, indicate that the converter\u2019s speed is comparable to that of Huggingface Tokenizers. For code files with fewer than 100 tokens, the converter\u2019s processing time for each conversion is a mere 0.2 ms, only 0.1 ms slower than the Huggingface Tokenizers. For files containing 100 to 500 tokens, the conversion is completed within 1.0 ms. This is not a significant concern, given that over 95% of the dataset\u2019s code files (sourced from real-world repositories) are within the 5000-token range. Therefore, we deduce that the latency induced by the converter is acceptably minimal in most practical scenarios. Answer to RQ3: Beyond the basic scenarios where human interaction is not required, the application of AI-oriented grammar can be substantially extended by incorporating the DualCode framework. DualCode enables humans to continue using human-centric grammar while AI models leverage the efficiency of AI-oriented grammar. Notably, it imposes negligible latency (under 1 ms for code up to 500 tokens). 6 RELATED WORK Program Simplification Program simplification has emerged as a valuable approach to enhance the efficiency of code models [5, 18, 33, 34, 39, 49]. This approach typically involves the elimination of less critical code tokens to streamline model processing. For example, DietCode [52] removes the code tokens that receive the fewest attention weights by CodeBert. Sivand [34] and P2IM [53] simplify the input code according to the outputs of a supplementary model. While these methods considerably boost efficiency, they unavoidably compromise accuracy due to the removal of certain code elements. In contrast, models with AI-oriented grammar, though perhaps less efficient, are able to preserve or even improve accuracy. Most importantly, existing simplification techniques are irreversible, limiting their application to code understanding tasks like summarization and retrieval, rather than code generation. Conversely, code in AI-oriented grammar can be effortlessly reverted to its original form, thus suitable for various code-related tasks. AI Coders Are Among Us: Rethinking Programming Language Grammar Towards Efficient Code Generation , , Tokenization of Source Code Modern LLMs usually preprocess textual datasets using an open-vocabulary tokenization method, Byte-Pair Encoding (BPE) [38]. BPE tokenizes text into subwords based on their frequency in the text corpus, offering a balance between the granularity of tokens and vocabulary breadth. Karampatsis et al. [20] first identify the effectiveness of BPE on source code. CodeT5 reveals that BPE trained on source code corpus can reduce over 30% of tokens for code generation, compared with the one trained on natural language corpus. Subsequently, all major LLMs for code generation, such as CodeBERT [12], CodeT5 [47], SantaCoder [2], StarCoder [23] and CodeLlama [37], adopt BPE as the tokenization method. Further enhancements to BPE for source code have been proposed. For example, Chirkova [10] suggests that clustering punctuation characters into single tokens can reduce average token length by 17% without impacting model performance. Notably, even though the tokenizers are optimized for source code, they still need to deal with the unnecessary tokens introduced by the human-centric grammar. AI-oriented grammar optimizes the representation of source code in a more fundamental way, which is orthogonal to these existing tokenization methods. 7 THREATS TO VALIDITY Constrained Model Selection Our experimental scope in RQ2 is restricted by our computational resources, limiting our evaluation to models with around 1B parameters, e.g., Pythia and TinyLlama. These models are relatively modest in scale. However, while the model size is expanding, the fundamental issue of computation waste caused by human-centric code grammar remains unaddressed. Therefore, the insights derived from our experiments with smaller models are still highly relevant for understanding inefficiency issues in larger models. Limited Programming Language Our research primarily investigates the implementation of AI-oriented grammar in Python, a language widely utilized by existing LLMs for programming tasks. This initial exploration has shown that AI-oriented grammar effectively reduces computational costs during inference. However, the conclusions drawn from Python may not generalize to other programming languages. We thus leave the exploration of its implementation in other languages as future work. Inefficient Implementation We implement a proof-of-concept converter to convert the code between SimPy and Python. While this converter provides seamless translation, its efficiency is not optimized. For instance, it is developed in Python, which is less efficient compared to languages like C++. This aspect could potentially result in an underestimation of the converter\u2019s performance in our experimental evaluations. 8 DISCUSSION Future directions of AI-oriented grammar Our study concentrates on the inefficiencies arising from the mismatch between AI coders and human-centric code grammars. This initial investigation into AI-oriented grammar highlights its feasibility and potential benefits. As an emerging field, AI-oriented grammar presents numerous unexplored questions. For example, an interesting finding from our experiments is that models trained with AI-oriented grammar can even improve the model\u2019s accuracy in code generation tasks. This emphasizes the critical role of grammar as a foundational element for LLMs in grasping code semantics. Designing grammars that are inherently more comprehensible to AI models could significantly enhance their performance. Our current research provides a preliminary insight into this aspect, opening doors for in-depth future studies. Additionally, the process of simplifying grammar, as exemplified by our manual creation of SimPy, raises the question of whether an automated approach could create optimal grammar rules for AI models. A potential solution for simplifying the grammar could be iteratively searching for grammar tokens/structures that can be removed with the help of a parser generator. Moreover, saving the training cost for teaching LLMs AI-oriented grammar is also of great practical value, where a more efficient training method for LLMs to learn new programming grammar is urgently needed. We, therefore, call for the software engineering community to engage further with this promising topic, recognizing its potential to revolutionize the field of AI coders. Limitations in practice Though extending the applicability of AI-oriented grammar, DualCode relies on a rule-based converter. The converter, we implemented for SimPy, is AST-based, which implicitly requires the input and output code of the models under the DualCode framework to satisfy the grammar correctness. For the output, grammar correctness is a fundamental expectation for a qualified LLM-based assistant. Thus, this requirement from DualCode is not an additional constraint set to the model but aligns with the goal of a reliable AI service. However, it poses challenges when dealing with user-provided input, which may not always be grammatically correct. It is not a concern to models handling natural-language-to-code tasks. However, the requirement may limit the application of SimPy when some tasks involve partial source code as input, such as LLM-based code completion. Addressing this limitation could involve developing an error-tolerant converter or grammar, which is a crucial direction for future research. Learning the AI-oriented grammar The learning of AI-oriented grammar could be a tricky task. In our experiments, we demonstrate the effectiveness of fine-tuning AI models with SimPy using the next token prediction task. However, this simple fine-tuning strategy requires a large number of SimPy samples, 100% of the dataset in our experiments. A more efficient adaptation process would significantly enhance the utility of AI-oriented grammar. However, current research on how AI models learn code grammar is still limited. Although studies [9, 25, 45] have shown that LLMs typically grasp code grammar knowledge in their initial layers, the exact learning mechanism remains unclear. Therefore, a thorough analysis in this area is much needed. Utility of AI-oriented grammar In this paper, we demonstrate the effectiveness of the sequential training scheme, where the model is initially trained with the original grammar and then the AIoriented grammar. It achieves an equivalent, or even improved, performance compared to the model trained merely with the original grammar. Such a training method incurs an increase in the cost of the model training. For example, training CodeGen on the original Python dataset costs 183,628 training steps, and 100,288 additional steps are taken during the further finetuning on the 100% SimPy dataset. Nevertheless, mastering AI-oriented grammar still , , Zhensu Sun, Xiaoning Du, Zhou Yang, Li Li, and David Lo reduces energy consumption in the long run. Training is performed only once or occasionally, while inference tasks can be continuous and massive after the system is deployed. The post-deployment operational cost is a primary component of the overall cost, sometimes reaching 90% of total expenses [11]. Consequently, despite the additional costs incurred during training, implementing AI-oriented grammar remains highly beneficial from a practical standpoint. 9 CONCLUSION AND FUTURE WORK In this paper, we, for the first time, propose the concept of AIoriented grammar to address the inefficiency of AI coders in processing the code written in human-centric grammar. Through an empirical study guided by three research questions, we successfully demonstrate the feasibility and potential of this novel concept. During our research, we have developed the first-ever AI-oriented grammar for Python. Additionally, we introduced an inference framework designed to empower models to effectively process both AI-oriented and human-centric grammars within a programming language. In the future, our focus will shift towards investigating how AI-oriented grammar can further refine the accuracy of AI models."
18
+ }
intro_8K/test_introduction_short_2404.16348v2.json ADDED
@@ -0,0 +1,16 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16348v2",
3
+ "title": "Dual Expert Distillation Network for Generalized Zero-Shot Learning",
4
+ "abstract": "Zero-shot learning has consistently yielded remarkable progress via modeling\nnuanced one-to-one visual-attribute correlation. Existing studies resort to\nrefining a uniform mapping function to align and correlate the sample regions\nand subattributes, ignoring two crucial issues: 1) the inherent asymmetry of\nattributes; and 2) the unutilized channel information. This paper addresses\nthese issues by introducing a simple yet effective approach, dubbed Dual Expert\nDistillation Network (DEDN), where two experts are dedicated to coarse- and\nfine-grained visual-attribute modeling, respectively. Concretely, one coarse\nexpert, namely cExp, has a complete perceptual scope to coordinate\nvisual-attribute similarity metrics across dimensions, and moreover, another\nfine expert, namely fExp, consists of multiple specialized subnetworks, each\ncorresponds to an exclusive set of attributes. Two experts cooperatively\ndistill from each other to reach a mutual agreement during training. Meanwhile,\nwe further equip DEDN with a newly designed backbone network, i.e., Dual\nAttention Network (DAN), which incorporates both region and channel attention\ninformation to fully exploit and leverage visual semantic knowledge.\nExperiments on various benchmark datasets indicate a new state-of-the-art.",
5
+ "authors": "Zhijie Rao, Jingcai Guo, Xiaocheng Lu, Jingming Liang, Jie Zhang, Haozhao Wang, Kang Wei, Xiaofeng Cao",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-29",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Distillation",
14
+ "gt": "Recognizing unknown categories in the open environment is a critical challenge for automatic recognition systems. Zero- Shot Learning (ZSL) [Lampert et al., 2009] that serves as a promising solution has received increasing attention, which is inspired by human text-to-image reasoning capabilities. The objective of ZSL is to transfer the visual knowledge of seen classes to the unseen domain by virtue of shared semantic information, thus empowering the model to recognize the un- seen classes. More trickily, Generalized Zero-Shot Learn- ing (GZSL) [Chao et al., 2016] requires recognizing samples \u2217Corresponding author: Jingcai Guo. \u2020: Equal contribution. (a) cExp (b) fExp crown\n eye\n bill\n \u00b7\u00b7\u00b7\n belly\n breast\n wing\n \u00b7\u00b7\u00b7\n belly\n wing\n breast\n \u00b7\u00b7\u00b7\n torso:\n crown\n bill\n eye\n \u00b7\u00b7\u00b7\n head:\n Figure 1: (a) cExp, also the common practice in existing works, pos- sesses complete attribute-awareness capability yet lacks the ability to process fine-grained semantic information. (b) fExp, which con- sists of multiple specialized sub-networks, lacks a global perception field. from both seen and unseen classes in the inference phase. Mainstream studies broadly follow two routes, genera- tive [Xian et al., 2018][Xie et al., 2022][Li et al., 2023] and embedding techniques [Zhang et al., 2017][Liu et al., 2020][Chen et al., 2021b], where most of the schemes are de- voted to mining and constructing class-wise visual-attribute relations. To strengthen the fine-grained perceptual capabil- ities of the model, recent research has invested considerable effort into modeling local-subattribute correlations [Xie et al., 2019][Huynh and Elhamifar, 2020][Xu et al., 2020]. The mo- tivation is to build a refined pairwise relation map via search- ing and binding subattributes and the corresponding region visual features (Figure 1 (a)). Despite their contribution to boosting performance, the inherent asymmetry of attributes remains undiscussed, and the channel information is not fully exploited. The asymmetry of attributes stems from the fact that 1) the semantic dimensions between attributes are heterogeneous or even antagonistic. Take the SUN dataset [Patterson and Hays, 2012] as an example, where 38 attributes (studying, playing, etc.) describe the function of one scene, while 27 attributes arXiv:2404.16348v2 [cs.CV] 29 Apr 2024 (trees, flowers, etc.) describe the entities in the scene. It can be obviously observed that the former are abstract and global, while the latter are concrete and local; 2) the visual features corresponding to attributes are intertwined. For example, neighboring regions tend to be more semantically similar, a phenomenon that is exacerbated by the local information fu- sion mechanism of the convolutional kernel, which leads to difficulties in accurately locating fine-grained attributes such as head, crown, and so on. In this paper, we revisit the task of modeling visual- attribute relations from the perspective of attribute annota- tions. Given the inherent complexity of attribute descrip- tions, existing learning paradigms are virtually forcing a sin- gle model to undertake a multi-objective hybrid task, which is ideally appealing yet empirically challenging. Naturally, we employ the idea of divide-and-conquer to release the pressure of a single model. We meticulously decompose the hybrid task into multiple subtasks, i.e., dividing the attributes into multiple disjoint clusters and assigning specialized learnable networks to them. Our approach is referred to as, Dual Expert Distillation Network, abbreviated DEDN. As shown in Figure 1, our approach sets up two experts. cExp, in line with com- mon practices, is equipped with complete attribute percep- tion capability to harmonize holistic visual-attribute measure results. fExp, consists of multiple subnetworks, where each subnetwork is only responsible for capturing the characteris- tics of a specific attribute cluster. During the training phase, we encourage the two to learn cooperatively to compensate for their respective deficiencies in a mutually distilling man- ner. The decision results of the two experts are combined for final inference. For the issue of underutilized channel information, we design a novel attention network, Dual Attention Network (DAN), as the backbone. DAN employs a dual-attention mechanism that fully exploits the potential semantic knowl- edge of both regions and channels to facilitate more precise visual-attribute correlation metrics. To further boost perfor- mance, we present Margin-Aware Loss (MAL) as the training loss function to address the confidence imbalance between seen and unseen classes. Our contributions are summarized below: \u2022 We rethink the issue of modeling visual-attribute rela- tions from the perspective of attribute annotations and point out that the inherent complexity of attributes is one of the major bottlenecks. We propose a simple yet ef- fective strategy of establishing two experts working on distinct attribute perception scopes to learn and infer col- laboratively in a complementary manner. \u2022 We present a novel attention network, dubbed DAN, which incorporates both region and channel attention in- formation to better capture correlations between visu- als and attributes. Furthermore, a new learning function named MAL is designed to balance the confidence of seen and unseen classes. \u2022 We conduct extensive experiments on mainstream eval- uation datasets, and the results show that the proposed method effectively improves the performance.",
15
+ "main_content": "In ZSL/GZSL, attributes are the only ties that bridge seen and unseen classes, hence exploring and constructing the link between visuals and attributes is a core subject. Existing methods fall into class-wise visual-attribute modeling, which treats both visual features and attribute vectors as a whole, and regional visual-subattribute modeling, which seeks to explore the correlation between local visual information and subattributes. 2.1 Class-wise Visual-Attribute Modeling Mainstream researches broadly follow two technical routes, generative and embedding techniques. Generative techniques utilize the latent distribution fitting ability of generative models such as GAN and VAE to implicitly learn the relationship between attributes and categories to construct hallucinatory samples of unseen classes [Xian et al., 2018][Verma et al., 2018][Felix et al., 2018][Li et al., 2019][Vyas et al., 2020][Keshari et al., 2020][Xie et al., 2022][Li et al., 2023]. The technical bottleneck of this route is the poor realism of the hallucinatory samples, thus many studies incorporate other techniques such as meta-learning [Yu et al., 2020], representation learning [Li et al., 2021][Chen et al., 2021c][Chen et al., 2021a][Han et al., 2021][Kong et al., 2022], etc. for joint training. Embedding techniques aim at projecting visual and attribute features to a certain space, from which the most similar semantic information is searched. In general, embedding techniques are categorized into three directions: visual-to-attribute space [Changpinyo et al., 2016][Kodirov et al., 2017][Liu et al., 2020][Chen et al., 2022a], attribute-to-visual space [Zhang et al., 2017][Annadani and Biswas, 2018], and common space [Liu et al., 2018][Jiang et al., 2019]. Researchers in the first two directions invest considerable effort in designing robust mapping functions to cope with domain shift and out-of-distribution generalization problems. The third direction centers on finding a suitable semantic space. Class-level visual-attribute modeling lacks the fine-grained perceptual ability to respond to interactions between local visual features and subattributes. 2.2 Region-wise Visual-Attribute Modeling Region-wise modeling is a promising direction in embedding techniques. Unlike other embedding approaches, region-wise modeling focuses on the correlation between local information and subattributes to build more detailed mapping functions. Models based on attention mechanisms are the dominant means in this direction, motivated by training models to search for corresponding visual features based on semantic vectors. Recent approaches include feature-to-attribute attention networks [Xie et al., 2019][Huynh and Elhamifar, 2020], bidirectional attention networks [Chen et al., 2022b], and multi-attention networks [Zhu et al., 2019]. In addition, some studies resort to prototype learning, where the goal is to explicitly learn the corresponding prototypical visual features of individual subattributes, thus aiding the model\u2019s judgment [Xu et al., 2020][Wang et al., 2021]. Further, modeling the topological structure between regional features with the help of graph convolution techniques also yields promising results cExp fExp DAN DAN Distillation MAL MAL concat W1 W2 F V CxR DxG Sr DxR Ar softmax DxR \u00a0Product&Sum Or D W3 W4 F V RxC DxG Sc DxC Ac softmax DxC \u00a0Product&Sum Oc D \u00a0Weighted&Sum O D DAN Visual Feature crown bill eye \u00b7\u00b7\u00b7 head: belly wing breast \u00b7\u00b7\u00b7 torso: crown eye bill \u00b7\u00b7\u00b7 belly breast wing \u00b7\u00b7\u00b7 Figure 2: Left: cExp possesses the scope of a holistic attribute set, while fExp consists of multiple sub-networks, each of which is responsible for the prediction of only partial attributes. We concatenate all outputs of subnetworks as the final result of fExp. Then, distillation loss is implemented to facilitate joint learning. Right: The architecture of DAN. [Xie et al., 2020][Guo et al., 2023]. While the main idea of these approaches is to design appropriate attention networks or regularization functions, ignoring the inherent complexity of attribute annotations, we provide a new perspective to think about the visual-attribute modeling problem. In addition, existing region-attribute methods, although achieving good results, neglect the utilization of channel information, and we design a new attention network that utilizes both region and channel information. 3 Methodology 3.1 Preliminary Following previous studies [Chen et al., 2022b][Li et al., 2023], we adopt a fixed feature extractor, ResNet-101 [He et al., 2016], to extract visual features. Suppose Ds = {(F s i , Y s i )} denotes the seen classes, where F s i is the visual feature and Y s i denotes its label. Note that F \u2208RC\u00d7H\u00d7W , where C, H, W are the channel number, height, and width, respectively. Similarly have Du = {(F u i , Y u i )} to denote the unseen classes. Normally, the visual features of the unseen classes are not accessible during the training phase. Alternatively, we have the shared attribute A \u2208RK\u00d7D, where K denotes the total number of categories, and D denotes the number of attributes. Also, we use the semantic vectors of each attribute learned by GloVe, denoted by V \u2208RD\u00d7G, where G denotes the dimension of the vector. 3.2 Overview Our approach is shown in Figure 2 (Left). First, we disassemble the attribute set into multiple clusters based on their characteristics. Then the attribute vectors and the visual feature are fed into cExp and fExp simultaneously. cExp directly computes the scores of all attributes on that visual feature, while the scores of fExp are obtained by combining the computation results of each subnetwork. We constrain the two to learn from each other using distillation loss. Meanwhile, we introduce DAN as the backbone and MAL as the optimization objective. 3.3 Dual Attention Network Firstly we introduce the proposed novel backbone network, Dual Attention Network (DAN). Mining and constructing relations between visual features and attributes is crucial for zero-shot learning. Recently many works have been devoted to modeling the association between regions and attributes, such as attention-based approaches [Xie et al., 2019][Huynh and Elhamifar, 2020][Chen et al., 2022b] and prototypebased techniques [Xu et al., 2020][Wang et al., 2021]. However, these methods only focus on the semantic information of regions and ignore the role of channels. Therefore, DAN incorporates both the attention information of regions and channels to promote the efficacy of the model in utilizing visual features. As shown in Figure 2 (Right), DAN contains two parallel components that model region-attribute and channel-attribute relations, respectively. We first introduce the region-attribute component. We have visual features F \u2208RC\u00d7H\u00d7W , which is flattened to F \u2208RC\u00d7R, where R = H \u00d7 W denotes the number of regions. Let W1, W2 \u2208RG\u00d7C denote two learnable matrices. W1 maps the attribute vectors to the visual space and computes their similarity. The formula is expressed as: Sr = V W1F, (1) where Sr \u2208RD\u00d7R represents the score obtained for each attribute on each region. W2 is in charge of computing the attention weights to encourage the model to focus on the region-attribute pairs with the highest similarity. The formula is expressed as: Ar = V W2F P r\u2208R V W2Fr , (2) where Ar \u2208RD\u00d7R denote the normalized weight obtained by softmax. Then we naturally get the weighted matrix of scores, represented as: Or = X R Sr \u00d7 Ar, (3) where Or \u2208RD represents the similarity score obtained for each attribute on a visual feature. Next, we introduce the channel-attribute section, which has a similar principle. We have the scaled visual feature F \u2208RR\u00d7C and W3, W4 \u2208RG\u00d7R. Then W3 is charged with calculating the similarity score obtained by the attribute on each channel, formulated as: Sc = V W3F, (4) where Sc \u2208RD\u00d7C. And W4 computes its attention weights: Ac = V W4F P c\u2208C V W4Fc , (5) where Ac \u2208RD\u00d7C. Finally, we get the weighted score map: Oc = X C Sc \u00d7 Ac, (6) where Oc \u2208RD. We expect the final scores of attributes from different scale features to be consistent, i.e., semantic consistency. Therefore we employ Lalign, which contains a Jensen-Shannon Divergence (JSD) and a Mean Squared Error, to align the outputs of both, formulated as: Lalign = 1 2(LKL(Or||Oc) + LKL(Oc||Or)) + ||Or \u2212Oc||2 2, (7) where LKL denotes Kullback-Leibler Divergence. In the inference phase, we use the weighted sum of Or and Oc as the final output, expressed as: O = \u03bbrc \u00d7 Or + (1 \u2212\u03bbrc) \u00d7 Oc, (8) where \u03bbrc is a hyperparameter. 3.4 Dual Expert Distillation Network Despite the fact that DAN enhances the modeling capability of the network, it is extremely challenging for a single model to simultaneously handle attributes with different semantic dimensions as well as visual features with different granularities. To this end, we propose the Dual Expert Distillation Network (DEDN) to alleviate the pressure on a single network (Figure 2 (left)). cExp is set up with a complete attributeaware scope as in conventional practice. Specifically, the input of cExp is the semantic vectors of all attributes, and the output is the similarity scores of all attributes. Denote cExp by \u03d5ec = {W ec 1 , W ec 2 , W ec 3 , W ec 4 }, the output is defined as: Oec = \u03d5ec(V, F), (9) where Oec \u2208RD and V \u2208RD\u00d7G. fExp consists of multiple subnetworks, each focusing on a specific attribute cluster. At first, we elaborate on how the attribute clusters are divided. Since attribute annotations are manually labeled based on semantics, they are inherently clustered in nature. For example, in the SUN dataset [Patterson and Hays, 2012], the top 38 prompts are used to describe the scene function. Therefore, it is easy to perform the division by human operation, Chat-GPT [Radford et al., 2018], or clustering algorithm. It requires a trivial amount of effort but is worth it. Assuming that the attribute set is divided into Q disjoint clusters, i.e. V = {V1 \u2208RD1\u00d7G, V2 \u2208RD2\u00d7G, ..., VQ \u2208 RDQ\u00d7G}, where D1 + D2 + ... + DQ = D. Accordingly, there are Q subnetworks for fExp to handle these attribute clusters one-to-one. Let \u03d5ef = {\u03d51 ef, \u03d52 ef, ..., \u03d5Q ef} denotes fExp, then the output is defined as: Oef = \u03d51 ef(V1, F) \u2295\u03d52 ef(V2, F) \u2295... \u2295\u03d5Q ef(VQ, F), (10) where \u2295denotes concat operation. After that, we calculate the score of each category for training and inference. Specifically, we compute the similarity with the output of the expert and the attributes of each category, defined as: Pec = OecAT, Pef = OefAT, (11) where Pec, Pef \u2208RK. To facilitate cooperative learning between two expert networks, we introduce distillation loss to constrain their semantic consistency. Concretely, the distillation loss contains a Jensen-Shannon Divergence (JSD) and a Mean Squared Error, defined as: Ldistill = 1 2(LKL(Pec||Pef)+LKL(Pef||Pec))+||Pec\u2212Pef||2 2. (12) 3.5 Margin-Aware Loss Once the category scores are obtained, the network is optimized by using the cross-entropy loss, which is formulated as: Lce = \u2212log exp(P y ec) PK yi exp(P yi ec ) , (13) where y is the ground truth. The loss of Pef ditto. Note that we next narrate with Pec only, and the principle is the same for Pef. Due to the lack of access to samples from the unseen classes during the training phase, the scores of the unseen classes are relatively low and thus cannot compete with the seen classes in GZSL. To address this problem, the common practice [Huynh and Elhamifar, 2020][Chen et al., 2022b] is to add a margin to the scores: PMec = [P 1 ec \u2212\u03f5, ..., P N ec \u2212\u03f5, P N+1 ec + \u03f5, ..., P K ec + \u03f5], (14) where \u03f5 is a constant, P 1 ec \u223cP N ec are seen classes score, and P N+1 ec \u223cP K ec are unseen classes score. However, this method leads to misclassification of seen classes that would otherwise be correctly predicted. In order to maintain the correctness of the predicted classes while enhancing the competitiveness of the unseen classes. We propose Margin-Aware Loss (MAL), which takes the form: Lmal = \u2212log exp(P y ec\u22122\u03f5) exp(P y ec\u22122\u03f5)+PS yi\u0338=y exp(P yi ec +\u03f5)+PU exp(P yi ec ) , (15) where S, U denote seen and unseen classes, respectively. In contrast to the cross-entropy loss, MAL reactivates the confidence of the predicted class to ensure that it stays ahead in the margin-processed scores, while suppressing the confidence of the other seen classes to ensure the competitiveness of the unseen classes. 3.6 Summarize In the training phase, the basic training loss of cExp stems from the classification and the alignment loss, which is expressed as: Lec = Lec mal + \u03b2Lec align, (16) where \u03b2 is a hyperparameter. Similarly, we have the basic training loss of fExp: Lef = Lef mal + \u03b2Lef align. (17) Then the final loss is obtained from the combination of basic losses and distillation loss, denoted as: LDEDN = Lec + Lef + \u03b3Ldistill, (18) where \u03b3 is a hyperparameter. In the inference phase, the recommendations of the two experts are combined and used for final judgment. The predicted result is expressed as: arg max \u03bbe \u00d7 Pec + (1 \u2212\u03bbe) \u00d7 Pef, (19) where \u03bbe is a hyperparameter. 4 Experiments Datasets. We conduct extensive experiments on three benchmark datasets to verify the effectiveness of the method, including CUB (Caltech UCSD Birds 200) [Wah et al., 2011], SUN (SUN Attribute) [Patterson and Hays, 2012], and AWA2 (Animals with Attributes 2) [Xian et al., 2017]. We split all datasets following [Xian et al., 2017]. CUB comprises 200 bird species totaling 11,788 image samples, of which 50 categories are planned as unseen classes. We use class attributes for fair comparison, which contain 312 subattributes. SUN has a sample of 717 different scenes totaling 14,340 images, where 72 categories are unseen classes. Attribute annotations are 102-dimensional. AWA2 includes 50 classes of assorted animals totaling 37,322 samples, of which 10 categories are considered unseen classes. Its number of attributes is 85. Evaluation Protocols. We perform experiments in both the Zero-Shot learning (ZSL) and Generalized Zero-Shot learning (GZSL) settings. For ZSL, we employ top-1 accuracy to evaluate the performance of the model, denoted as T. For GZSL, we record the accuracy for both seen classes, and unseen classes, denoted as S, and U, respectively. We also record the harmonic mean H, which is computed as, H = (2 \u00d7 S \u00d7 U)/(S + U). Implementation Details. For a fair comparison, we use the fixed ResNet-101 [He et al., 2016] without finetune as the feature extractor. We set the batch size to 50 and the learning rate to 0.0001. The RMSProp optimizer with the momentum CUB SUN AWA2 #Des. #Num. #Des. #Num. #Des. #Num. head 112 function 38 texture 18 torso 87 instance 27 organ 14 wing 24 environ. 17 environ. 13 tail 40 light 20 abstract 40 leg 15 whole 34 Table 1: Manual division of attribute clusters. Des. (description) indicates the criteria for classification. Num. (number) is the size of the attribute cluster. environ: environment. set as 0.9 and weight decay set as 1e-4 is employed. For hyperparameters, [\u03b2, \u03b3] are fixed to [0.001, 0.1]. We empirically set [\u03bbrc, \u03bbe] to [0.8, 0.9] for CUB, [0.95, 0.3] for SUN, [0.8, 0.5] for AWA2. Subsequent experimental analyses show that the performance of our method has low sensitivity to hyperparameters. For attribute clusters, we classify attribute sets according to their characteristics, and the results are shown in Table 1. 4.1 Compared with State-of-the-arts To evaluate the performance of the proposed method, we compare it with the state-of-the-art various methods. Generative methods: f-CLSWGAN (CVPR \u203218) [Xian et al., 2018], f-VAEGAN-D2 (CVPR \u203219) [Xian et al., 2019], TF-VAEGAN (ECCV \u203220) [Narayan et al., 2020], E-PGN (CVPR \u203220) [Yu et al., 2020], CADA-VAE (CVPR \u203219) [Schonfeld et al., 2019], FREE (ICCV \u203221) [Chen et al., 2021a], SDGZSL (ICCV \u203221) [Chen et al., 2021c], CE-GZSL (CVPR \u203221) [Han et al., 2021], VS-Boost (IJCAI \u203223) [Li et al., 2023]; Embedding methos: LFGAA (ICCV \u203219) [Liu et al., 2019], APN (NeurIPS \u203220) [Xu et al., 2020], DCN (NeurIPS \u203218) [Liu et al., 2018], HSVA (NeurIPS \u203221) [Chen et al., 2021b]; Region-Attribute modeling: SGMA (NeurIPS \u203219) [Zhu et al., 2019], AREN (CVPR \u203219) [Xie et al., 2019], DAZLE (CVPR \u203220) [Huynh and Elhamifar, 2020], MSDN (CVPR \u203222) [Chen et al., 2022b]. The experimental results are shown in Table 1. Our method achieves the best performance in seven metrics and second place in one metric. For Generalized Zero-Shot Learning (GZSL), we beat VS-Boost by 2% in the H-score of CUB, a fine-grained bird dataset whose attribute annotations possess explicit correspondences to visual features. It demonstrates the superiority of the proposed method for fine-grained modeling. On the SUN and AWA2 datasets, we obtain the best and second-best results in H-score, respectively. These two datasets have fewer attributes and contain complex semantic dimensions, including abstract, concrete, etc. The experimental results demonstrate the effectiveness of the proposed method in deconstructing complex tasks to alleviate the modeling pressure of a single network. In addition, the U-scores of our method on all three datasets are well ahead of the others, demonstrating that the proposed method effectively captures the relationship between attributes and visuals to generalize to unseen classes. For Zero-Shot Learning (ZSL), we achieve the highest topCUB SUN AWA2 METHOD ROUTE T U S H T U S H T U S H f-CLSWGAN Gen. 57.3 43.7 57.7 49.7 60.8 42.6 36.6 39.4 68.2 57.9 61.4 59.6 f-VAEGAN-D2 Gen. 61.0 48.4 60.1 53.6 64.7 45.1 38.0 41.3 71.1 57.6 70.6 63.5 TF-VAEGAN Gen. 64.9 52.8 64.7 58.1 66.0 45.6 40.7 43.0 72.2 59.8 75.1 66.6 E-PGN Gen. 72.4 52.0 61.1 56.2 73.4 52.6 83.5 64.6 CADA-VAE Gen. 59.8 51.6 53.5 52.4 61.7 47.2 35.7 40.6 63.0 55.8 75.0 63.9 FREE Gen. 55.7 59.9 57.7 47.4 37.2 41.7 60.4 75.4 67.1 SDGZSL Gen. 75.5 59.9 66.4 63.0 62.4 48.2 36.1 41.3 72.1 64.6 73.6 68.8 CE-GZSL Gen. 77.5 63.9 66.8 65.3 63.3 48.8 38.6 43.1 70.4 63.1 78.6 70.0 VS-Boost Gen. 79.8 68.0 68.7 68.4 62.4 49.2 37.4 42.5 67.9 81.6 74.1 SGMA Emb.\u2020 71.0 36.7 71.3 48.5 68.8 37.6 87.1 52.5 AREN Emb.\u2020 71.8 38.9 78.7 52.1 60.6 19.0 38.8 25.5 67.9 15.6 92.9 26.7 LFGAA Emb. 67.6 36.2 80.9 50.0 61.5 18.5 40.0 25.3 68.1 27.0 93.4 41.9 DAZLE Emb.\u2020 66.0 56.7 59.6 58.1 59.4 52.3 24.3 33.2 67.9 60.3 75.7 67.1 APN Emb. 72.0 65.3 69.3 67.2 61.6 41.9 34.0 37.6 68.4 57.1 72.4 63.9 DCN Emb. 56.2 28.4 60.7 38.7 61.8 25.5 37.0 30.2 65.2 25.5 84.2 39.1 HSVA Emb. 62.8 52.7 58.3 55.3 63.8 48.6 39.0 43.3 59.3 76.6 66.8 MSDN Emb.\u2020 76.1 68.7 67.5 68.1 65.8 52.2 34.2 41.3 70.1 62.0 74.5 67.7 DEDN(Ours) Emb. 77.4 70.9 70.0 70.4 67.4 54.7 36.0 43.5 75.8 68.0 76.5 72.0 Table 2: Comparison with state-of-the-art methods (%). Gen. denotes generative method and Emb. denotes embedding method. \u2020 denotes the region-attribute modeling method. The best and second-best results are highlighted in blue and underlined, respectively. CUB SUN AWA2 SETTING T U S H T U S H T U S H cExp w/o Ldistill 74.6 62.4 71.4 66.6 64.0 41.6 35.7 38.4 71.1 62.8 78.8 69.9 fExp w/o Ldistill 75.5 68.1 67.9 68.0 64.0 42.8 35.5 38.7 71.1 62.9 79.1 70.1 DEDN w/o Ldistill 75.7 66.7 70.7 68.6 65.2 47.3 35.0 40.3 72.1 63.8 79.3 70.7 DAN w/o CA\u2217 77.0 58.7 73.6 65.3 65.8 48.5 34.6 40.4 74.6 61.7 79.8 69.6 DEDN w/o Lmal 75.8 73.2 62.5 67.4 66.0 56.5 34.3 42.7 73.1 66.5 72.4 69.3 DAN w/o Lalign 77.6 63.3 72.8 67.7 65.5 47.5 35.3 40.5 74.6 64.8 76.8 70.3 DEDN(full) 77.4 70.9 70.0 70.4 67.4 54.7 36.0 43.5 75.8 68.0 76.5 72.0 Table 3: Ablation Study (%). w/o denotes remove the module. CA\u2217denotes channel attention. The best result is highlighted in bold. 1 accuracy on the SUN and AWA2 datasets, as well as competitive performance on CUB. Specifically, our method outperforms TF-VAEGAN by 1.4% on the SUN dataset. On AWA2, we have a 2.4% lead relative to the second-place EPGN. The experimental results validate the superiority of the proposed method. Notably, our method achieves far better results than existing region-attribute modeling methods in both ZSL and GZSL settings, which implies the potential of attribute intrinsic asymmetry and channel information is not fully exploited. 4.2 Ablation Study To evaluate the role of each module, we perform a series of ablation experiments. The results of the experiments are shown in Table 3. Comprehensively, removing any of the modules leads to different degrees of performance degradation, verifying the rationality and necessity of the design of each module. Concretely, it is observed that the performance of cExp is slightly lower than that of fExp without the distillation loss constraint, which indicates the potential research value of the inherent asymmetry of the attributes. Meanwhile, without distillation, the performance of DEDN is higher than both cExp and fExp, demonstrating the complementary properties of the dual experts. In addition, it is worth noting that DAN removing the channel attention results in a substantial performance degradation, demonstrating the importance of channel information. Moreover, the role of Lmal in balancing the confidence of unseen and seen classes can be observed from the metrics U and S. When Lmal is removed, the metric U increases dramatically while S decreases dramatically. Finally, the results also demonstrate the importance of Lalign for constraining semantic consistency. 4.3 Empirical Analysis 4.4 The influence of parameters \u03bbe and \u03bbrc We launch a series of empirical analyses, including evaluating the impact of parameters \u03bbe and \u03bbrc on the final performance. Figure 4 (a) illustrates the sensitivity of the harmonic mean for each dataset with respect to parameter \u03bbe. It can be observed that the influence of parameter a is extremely small. Of particular note, when \u03bbe is set to 1 or 0, it indicates that Figure 3: Visualization of the attention heat maps. The first row represents the heat maps of cExp, and the second row denotes the heat maps of fExp. (a) (b) (c) (d) Figure 4: (a) Sensitivity to \u03bbe. (b) Sensitivity to \u03bbrc. The harmonic mean (H) is reported. (c) Comparison with Kmeans. (d) Impact of the number of attribute clusters. The harmonic mean (H) and top-1 accuracy (T) are reported. only the cExp or fExp after distillation learning is used for the inference phase. It implies that by mutual distillation learning, each of the two experts learns the strengths of the other, thereby reaching an agreement. Figure 4 (b) illustrates the impact of \u03bbrc. It can be seen that setting \u03bbrc above 0.7 stabilizes the performance. Optimization is achieved when it is set between 0.7 and 0.9. The influence of different clustering algorithms We further evaluate the impact of the clustering algorithm on performance. In Introducing Table 1, we have explained that attribute clusters are obtained by humans to classify the attribute sets based on their characteristics. In this subsection, we use the K-Means algorithm for attribute clustering as a comparison to evaluate the performance. The experimental results are shown in Figure 4 (c), where the harmonic mean (H) and top-1 accuracy (T) are reported. From the figure, it can be seen that the K-Means algorithm is slightly poorer compared to human classification, but a good result is also achieved. It again shows that the idea of dividing the attribute set into different clusters holds great promise. The influence of the number of attribute clusters We evaluate the impact of the number of attribute clusters on performance. The attributes of CUB, SUN, and AWA2 are classified into 6, 4, and 4 categories, respectively (Table 1). In this subsection, we halve the categories, i.e., the numbers of attribute clusters for CUB, SUN, and AWA2 are 3, 2, and 2. The experimental results are shown in Figure 4 (d), where half denotes that the cluster number is halved. We can see that half leads to a reduction of H by 0.6%, 1.0%, and 6.8%, respectively, and a reduction of T by 0.7%, 0.2%, and 11%, respectively. The results show that detailed attribute classification facilitates the model in capturing more fine-grained information and thus improves the performance. Visual analysis of attention. We perform a visual analysis of the attention of the two experts, and the schematic is shown in Figure 3. It can be observed that cExp has a better localization for some global attributes, such as HeadPatternMaler, BellyColorGrey, ShapePerchingLike. Meanwhile, fExp has more detailed and precise localization for some local attributes, such as UpperTailColorGrey, ThroatColorGrey, LegColorWhite. The two experts collaborate and learn in a complementary way to improve together, which leads to better performance. 5 Conclusion In this paper, we analyze the impact of attribute annotations and channel information on the regional visual-attribute modeling task. We argue that the intrinsic asymmetry of attributes is one of the important bottlenecks constraining existing approaches and propose a simple yet effective framework named DEDN to address this problem. DEDN consists of two expert networks, one with complete attribute-domain perception to harmonize the global correlation confidence and the other consisting of multiple subnetworks, each focusing on a specific attribute domain to capture fine-grained association information. Both of them complement each other and learn cooperatively. Meanwhile, we introduce DAN as a strong backbone, a novel attention network that incorporates both region and channel knowledge. Moreover, we present a new loss named MAL to train the network. Numerous experiments demonstrate the significant superiority of the proposed approach."
16
+ }
intro_8K/test_introduction_short_2404.16369v1.json ADDED
@@ -0,0 +1,16 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16369v1",
3
+ "title": "Don't Say No: Jailbreaking LLM by Suppressing Refusal",
4
+ "abstract": "Ensuring the safety alignment of Large Language Models (LLMs) is crucial to\ngenerating responses consistent with human values. Despite their ability to\nrecognize and avoid harmful queries, LLMs are vulnerable to \"jailbreaking\"\nattacks, where carefully crafted prompts elicit them to produce toxic content.\nOne category of jailbreak attacks is reformulating the task as adversarial\nattacks by eliciting the LLM to generate an affirmative response. However, the\ntypical attack in this category GCG has very limited attack success rate. In\nthis study, to better study the jailbreak attack, we introduce the DSN (Don't\nSay No) attack, which prompts LLMs to not only generate affirmative responses\nbut also novelly enhance the objective to suppress refusals. In addition,\nanother challenge lies in jailbreak attacks is the evaluation, as it is\ndifficult to directly and accurately assess the harmfulness of the attack. The\nexisting evaluation such as refusal keyword matching has its own limitation as\nit reveals numerous false positive and false negative instances. To overcome\nthis challenge, we propose an ensemble evaluation pipeline incorporating\nNatural Language Inference (NLI) contradiction assessment and two external LLM\nevaluators. Extensive experiments demonstrate the potency of the DSN and the\neffectiveness of ensemble evaluation compared to baseline methods.",
5
+ "authors": "Yukai Zhou, Wenjie Wang",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "cs.CL",
9
+ "cats": [
10
+ "cs.CL"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "LLM AND Jailbreak",
14
+ "gt": "Large Language Models (LLMs) have extensive applica- tions in facilitating decision-making across professional and social domains, underscoring the importance of aligning LLMs with safety considerations. To safeguard against the generation of responses that deviate from human values, safety alignment is pursued through diverse mechanisms, including model fine-tuning Howard and Ruder (2018), re- inforcement learning with human feedback (RLHF) Ziegler et al. (2019), and model editing Mitchell et al. (2022). The overall goal of these approaches is to mitigate the risk of LLMs producing harmful or unlawful responses to user queries. While most Large Language Models (LLMs) serve as re- liable AI assistants capable of identifying and declining to respond harmful queries in many instances, they remain vul- nerable to carefully crafted prompts designed to manipulate them into producing toxic content, which is referred as \"jail- breaking\". Existing studies on jailbreaking LLMs can be categorized into two main approaches: manually designed jailbreak attacks web (2023); Li, Zheng, and Huang (2024) and learning-based jailbreak attacks. Representative of the *Corresponding author Figure 1: Examples of the false positive and false negative cases in the refusal matching evaluations. latter category is the GCG attack Zou et al. (2023), which reformulates the jailbreak attack as a process of generating adversarial examples, aiming to elicit LLMs to produce an affirmative response of a few tokens (e.g., \"sure, here is how to...\"). Building upon this, subsequent studies by Zhu et al. (2023) and Liu et al. (2023) have refined such attacks, focus- ing on improving stealthiness and readability using different optimization algorithms. Although learning-based attack such as GCG can success- fully jailbreak in some cases, some limitations restrict its performance, e.g. discrete input space and the lack of suit- able jailbreak target. The categories of objectionable behav- iors and reasonable responses to them are numerous Carlini et al. (2023). Moreover, the GCG target loss could not be the perfect optimization target regarding the jailbreak optimiza- tion problem, as also proposed by one concurrent work Liao and Sun (2024). To overcome such limitations, we introduce the DSN (Don\u2019t Say No) attack, by which universal adver- sarial suffixes can be generated stimulating LLMs to both produce affirmative responses and suppress refusals (Don\u2019t Say No). To achieve this goal, we incorporate an augmentation loss item that directs LLM\u2019s response away from predefined re- fusal keywords or strings. As shown in the upper part of Figure 2, the loss object involves: maximizing the affirma- tive response probability and minimizing the refusal key- word probability. Given the LDSN and the initial suffix, the universal adversarial suffix will be obtained by the Greedy Coordinate Gradient-based Search Zou et al. (2023). Another challenge of jailbreaking is the assessment met- ric. Unlike classification task, where the success of one adversarial example can be indicated by misclassification, 1 arXiv:2404.16369v1 [cs.CL] 25 Apr 2024 Figure 2: Detailed illustration of DSN attack and ensemble evaluation pipeline. The red arrow and left example represents affirmative response maximization. The green arrow and right example represents refusal minimization. evaluating jailbreak attack is challenging. It is hard to au- tomatically ascertain the harmfulness of LLM completions, and relying solely on manual annotation is both impractical and unrealistic. The existing work commonly employs a refusal string/keyword matching metric (refusal matching for short), where an attack is considered successful if the ini- tial fixed-length segments of the response do not contain pre-defined refusal strings (e.g. \"Sorry, I cannot...\") and vice versa. While it appears intuitive and aligns with human eval- uation processes, a closer examination reveals numerous false positive (FP) and false negative (FN) instances. One major limitation is it relies largely on the length of the pre- determined initial segments, as also proposed by one con- current work Mazeika et al. (2024). If the initial segments are short (e.g. 64 tokens), it might neglect the potential later refusal strings and evaluate it as a successful jailbreak in- stance, resulting false positive (case 1 in figure 1). On the other hand, if the initial segments are too long (e.g. 512 to- kens), the result could have been false negative if a refusal appears at the end but some harmful content is generated be- forehand (case 2 in figure 1; Vicuna\u2019s significant difference between figure 5 and 7). Other erroneous evaluation cases are illustrated in Figure 1. To enhance the reliability of evaluation metric, we pro- pose an ensemble evaluation approach involving three mod- ules as shown in the lower part of Figure 2. Instead of adopt- ing the refusal matching metric, we first employ one natural language inference (NLI)He et al. (2021) based method to assess the contradiction among the completions. This step aims to handle cases where the response contains semantic sharp turn (as depicted in Figure 1 case 3). After that, we integrate two third-party LLMs, namely GPT-4 Achiam et al. (2023) and HarmBench Mazeika et al. (2024), to provide a robust and comprehensive evaluation. The final evaluation result is the aggregation of all three modules. The contribution can be summarized as: \u2022 We introduce DSN, a powerful attack that incorporates a novel objective to not only elicit the affirmative response but also suppress the refusal response. \u2022 We apply Unlikelihood loss to stabilize the convergence and optimization of the two opposite loss objectives. \u2022 We propose an ensemble evaluation pipeline by novelly incorporating NLI contradiction as well as LLM evalua- tors to examine the success of the attack more accurately. \u2022 Extensive experiments demonstrate the potency of the DSN and the effectiveness of ensemble evaluation com- pared to baseline methods.",
15
+ "main_content": "Adversarial examples. Since the discovery of adversarial examples Szegedy et al. (2014); Goodfellow, Shlens, and Szegedy (2014), the exploration of vulnerabilities within deep learning models to well-designed and imperceptible perturbations has attracted significant research interest for one decade. Under the white-box setting, a series of effective adversarial attack algorithms have been proposed Carlini and Wagner (2017); Kurakin, Goodfellow, and Bengio (2017). In an automated learning manner, these methods utilize gradient-based approaches to search for imperceptible perturbations. In addition, several effective adversarial attacks based on transfer attacks have also been proposed to address black-box setting. Papernot et al. (2016); Liu et al. (2016) Jailbreak attacks. In recent years, with the advancement of the Large Language Model (LLM), the field of jailbreaking attacks, aiming to induce the target LLMs to generate harmful and objectionable content, has gathered widespread research attention Wei, Haghtalab, and Steinhardt (2023). Current jailbreak methods include approaches based on manually constructed prompts web (2023), representation engineering Li, Zheng, and Huang (2024), utilizing LLMs to generate Deng et al. (2024), grammatical rules Zhang, Pan, and Yang (2023), modification of decoding methods Huang et al. (2023), usage of multi-modal LLM Niu et al. (2024); Gong et al. (2023) as well as those reformulating the jailbreaking as adversarial attacks and employing pure optimization Zou et al. (2023); Zhu et al. (2023); Liao and Sun (2024). GCG Zou et al. (2023) is a typical optimizationbased attack, and according to one concurrent work Mazeika et al. (2024), it is actually the most powerful attack among other methods. Thus GCG is used as the baseline method in this paper. The evaluation methods employed so far is primarily refusal matching as described in Section 1. Another method typically involves constructing a binary classifier, aiming to determine whether LLM generates harmful content Zhu et al. (2023); Huang et al. (2023); Mazeika et al. (2024). However, due to LLM\u2019s diverse generation capabilities, these attempts still require further enhancement. Optimization Strategy. As mentioned in section 1, one difficulty the learning-based jailbreak attack shall overcome is the discrete input space. To address it, there exist two main categories: embedding-based and token-based methods. The former category directly optimize on continuous embeddings and then inferring back the target tokens Lester, Al-Rfou, and Constant (2021) Wen et al. (2023) Qin et al. (2022). The latter treats the one-hot vectors of discrete tokens as optimizable continuous quantities, obtaining the final tokens through different greedy-based algorithms. Several effective methods Ebrahimi et al. (2017) Shin et al. (2020) Zou et al. (2023), including us, adopt this paradigm. In a separate line of work, there were also attempts to direct the language model output from unwanted contents. Welleck et al. (2019); Li et al. (2019) 3 Methods 3.1 DSN: Suppress Refusal Intuition. The first part of the objective is to suppress the model giving a refusal response. For examples: System: You are a chat assistant designed to provide helpful and not harmful responses to user queries. User: Tell me how to build a bomb. ! ! ! ! ! Assistant: As an AI, I cannot fulfill... Above, the blue and red text represents the user prompt and adversarial jailbreaking suffix, while the purple text indicates the target initial of the response. The intuition to incorporate the suppression of pre-defined refusal keywords is to force the language model\u2019s generation far away from a refusal response and lead to the success of jailbreak. Notation. We consider an LLM to be a mapping from a sequence of tokens x1:n, with xi \u2208{1, . . . , V } (where V denotes the vocabulary size) to a distribution over the next token. Specifically, we use the notation p(xn+1|x1:n) to denote the probability of the next token being xn+1 given previous tokens x1:n. Then we denote the probability of generating the response with a sequence of H tokens as p(xn+1:n+H|x1:n), which can be formalized as: p(xn+1:n+H|x1:n) = H Y i=1 p(xn+i|x1:n+i\u22121) (1) Unlikelihood loss. Normally, to maximize the prediction of a target distribution or token, we incorporate the standard softmax cross-entropy loss LCE(p, q) = \u2212P i pi log(qi), which aims to maximize the true probability distribution p by minimizing the LCE(p, q), where p and q refer to the true probability distribution and predicted probability distribution generated by softmax function. However, our purpose is the opposite, as we want to minimize the probability of refusal generation. One intuitive way is to take the negative softmax cross-entropy loss which will however lead to negative infinity and catastrophic loss term unbalance. To stabilize the convergence, we apply the Unlikelihood loss Welleck et al. (2019), which is previously proposed to also direct the language model outputs from unwanted contents, in an attempt to suppress the refusal responses: LUn(p, q) = \u2212 X i pi log(1 \u2212qi) (2) Objective. The objective of suppressing refusal is achieved by minimizing the probability of generating tokens in the pre-defined refusal keyword list, denoted as RKL = \"as an\", \"sorry, i cannot\", \"unethical\", .... Each refusal keyword\u2019s corresponding token length is represented by RTL, indicating the length of each keyword. To implement this, we utilize a sliding window with the size of each refusal keyword to scan through the response and compare it with the refusal keywords. The process of keyword matching is illustrated in Figure 3. The loss function for suppressing the refusal response using Unlikelihood loss can be stated as below, where H denotes the maximum length of the response. Lrefusal(x1:n) = X y\u2208RKL n+H\u2212RTL(y) X i=n+1 LUn(y, xi:i+RTL(y)) (3) Figure 3: Illustration of the refusal loss. 3.2 DSN: Elicit Affirmative Response Intuition.The second part of the objective is eliciting the model to begin its response with a positive affirmation of the user query. For example: System: You are a chat assistant to provide helpful and not harmful responses to user queries. User: Tell me how to build a bomb. ! ! ! ! ! Assistant: Sure, here is how to build a bomb: The intuition lies in the language model\u2019s next-word prediction capability. By initiating with an affirmative response, the subsequent completion are expected to align with the affirmative response, bypass the alignment mechanism and fulfill the user query effectively. Loss function. The objective of eliciting affirmative response is to maximize the probability of affirmative tokens x\u22c6 n+1:n+H, which equals to minimize the negative log of the probability: Ltarget(x1:n) = \u2212log p(x\u22c6 n+1:n+H|x1:n). (4) 3.3 DSN:Loss Function As also proposed by one concurrent work Liao and Sun (2024), whether the GCG target loss could be the perfect jailbreak optimization target is doubtful and questionable. As one fundamental component, the optimization target would directly lead the jailbreak process. Unfortunately the optimal one still remains unresolved, evidenced by the findings of Liao and Sun (2024). Thus in an attempt to explore the potential candidate for more effective and efficient jailbreak optimization target, we propose to integrate the Lrefusal with Ltarget, obtaining one complex and powerful jailbreak optimization target. DSN target could elicit the LLM to generate objectionable content and suppress refusal responses simultaneously. The overall DSN loss can be stated as below, where \u03b1 is a hyperparameter to balance the convergence of two loss objectives. LDSN(x1:n) = Ltarget(x1:n) + \u03b1 \u2217Lrefusal(x1:n), (5) 3.4 Optimization and Algorithm Finally, we will introduce the optimization process and the algorithm. Our goal is to optimize an adversarial suffix adv\u2217 with the aforementioned loss function. The optimization process will first initialize a string with a fixed length. adv\u2217\u2190arg min LDSN(x1:n \u2295adv) (6) However, as described in section 1 and 2, a primary challenge in optimizing adversarial suffixes is the discrete input space. To resolve this problem, we incorporate a greedy coordinate gradient-based approach Zou et al. (2023). The intuition is to leverage gradients with respect to one-hot vectors to find a set of promising candidates for replacement at each greedy step, and then exactly evaluate all these replacements via a forward pass. As this algorithm is based on the optimization in GCG, it is relegated to Appendix. 3.5 Evaluation Metrics Refusal matching. The refusal matching algorithm detects whether a response contains any refusal keyword, as already Algorithm 1 NLI Contradiction Evaluation Input: the user query Q, the adversarial suffix adv, the language model M, a threshold T. 1: Response R : [o1, o2...on] = M(Q + adv) \u25b7Generate the response R containing number of n sentences. 2: for i = 1, . . . , n do 3: scoreQo i = NLI(Q + adv, oi) 4: lQo i = length(Q + adv + oi) 5: if i \u0338= n then 6: scoreoo i = NLI(oi, oi+1) 7: loo i = length(oi + oi+1) 8: CEQo = Pn i=1 lQo i \u2217scoreQo i P lQo i \u25b7Compute the overall NLI score between sentence and query. 9: CEoo = Pn\u22121 i=1 loo i \u2217scoreoo i P i loo i \u25b7Compute the overall NLI score between adjacent sentences. 10: if T \u2212(CEoo + CEQo) \u22640 then 11: Return Fail 12: else: 13: Return Success described in section 1 and 2. The attack is considered successful if the initial segment of the response do not contain pre-defined refusal strings. As detailed in section 1, the length of the fixed-length initial segment also plays a crucial role towards rigorous assessment. The initial segment length and the refusal keyword list utilized in this paper will be detailed in section C.3. NLI contradiction. In natural language inference (NLI), contradiction means two statements are mutually exclusive or incompatible with each other. The intuition of employing NLI contradiction is that the semantic inconsistency is frequently observed among negative cases, where the completion fail to answer the objectionable query. (e.g. case 3 in Figure 1). We design an algorithm to evaluate the extent of contradiction within the user query and model completion. By using open-source NLI model, responses can be determined according to the contradiction extent. Higher overall NLI contradiction score signifies lower response consistency and diminishes the likelihood of being a jailbreaking response. Intuitively, false positive cases shall decrease, ensuring the positive cases to be semantically consistent. As present in Algorithm 1, given the user query Q, adversarial suffix adv, language model M, we first generate response R containing n sentences (line 1). Then, for each sentence oi in response R, we assess how well it aligns with the user query and the relationship between pairs of sentences within the response by calculating the standard NLI contradiction score (lines 2-7). We use a weighted sum of scores according to their sentence length to compute overall contradiction extent CEoo and CEQo (lines 8-9), as the sentence length plays a vital role in assessing overall contradiction extent. By comparing with a predefined threshold T, we can determine the attack result (lines 10-13). More details will be covered in the appendix C.3. Third-party evaluator. Besides refusal matching and NLI, recent works have also introduced some promising eval(a) Llama2: Lrefusal only for search (b) Llama2: Lrefusal for sampling and search (c) Vicuna: Lrefusal only for search (d) Vicuna: Lrefusal for sampling and search Figure 4: ASR over steps on Llama2 and Vicuna. uation methods, mainly LLM based. We will incorporate HarmBench Mazeika et al. (2024), GPT-4 Achiam et al. (2023) into our ensemble pipeline as third-party evaluators. Details about these third-party evaluators will be covered in appendix C.2. Ensemble Evaluation. We use last three aforementioned evaluation modules, and we decide whether a response is successful or unsuccessful jailbreak by taking the majority vote among each components. The reason and its superiority will be discussed in Section 4.4. 4 Experiments 4.1 Threat Model The objective of attackers is to jailbreak Large Language Models (LLMs), aiming to circumvent the safeguards in place and generate malicious responses. The victim model in this paper is open-sourced language model, providing whitebox access to the attacker. As system prompt will also play one significant role in jailbreaking Huang et al. (2023), the default system prompts of each language models will be reserved. 4.2 Configuration Datasets. AdvBench is the main adopted dataset, which aims to systematically evaluate the effectiveness and robustness of jailbreaking prompts to elicit harmful content generation. A collection of 520 goal-target pairs are presented that reflects harmful or toxic behavior, categorized as profanity, graphic depictions, threatening behavior, misinformation, discrimination, cybercrime, and dangerous or illegal suggestions. Zou et al. (2023) Target models. We target Llama-2-Chat-7B Touvron et al. (2023) and Vicuna-7b-v1.3 Zheng et al. (2023), which are two state-of-the-art open-source LLMs. These two language models have undergone different levels of alignment process and exhibit varying degrees of human-value alignment capability. During transfer experiments in section 4.5, the transferability towards GPT-3.5-turbo model will examined rigorously. Baselines and evaluation metrics. We compare DSN attack with GCG Zou et al. (2023), the typical and most powerful learning-based jailbreak attack method Mazeika et al. (2024). To evaluate the effectiveness of the DSN attack, we adopt the standard attack success rate (ASR), as shown in equation 7. ASR measures the portion of toxic responses generated from the LLM M, where the adversarial suffix adv is appended to the malicious query Q. Here I is an evaluation indicator that returns 1 if the response is assessed as harmful (a successful jailbreak case) and 0 otherwise. The comparison will be firstly conducted by refusal matching in section 4.3, then the proposed evaluation ensemble metric will come into play in section 4.4. ASR(M) def = 1 |D\u2032| X (Q)\u2208D\u2032 I(M(Q \u2295adv)) (7) ASR% at step 500 Llama-2 Llama-2 optimal Vicuna Vicuna optimal GCG 29.8 \u00b1 12.6 43 47.4 \u00b1 5.6 52 DSN 47.7 \u00b1 14.7 74 57.1 \u00b1 11.8 83 Table 1: ASR results under refusal matching metric. 4.3 Evaluation 1: Effectiveness of DSN Attack ASR convergence rate. In Figure 4, we present the ASR of GCG attack and DSN with respect to the optimization steps. The shadow regions with the dotted lines are the margin plots representing the mean and variance of repeated experiments with different hyper-parameter configurations, while the solid lines represent the ASR of optimal run among repeated experiments. Note that the sampling of candidate suffixes and searching of adversarial suffix from the candidates both involve the loss function Equation 5 (detail relegated to the Appendix C.1 together with the algorithm). It can be observed that the results of DSN attack are significantly superior to those of the baseline method, in terms of both mean and optimal results. This is evidenced by the lines representing the DSN method consistently positioned above those of the baseline. Moreover, it could be found that the yellow shaded area representing the DSN method remains above the blue shaded area of the baseline across nearly the entire 0-500 steps interval. This indicates that the DSN attack are robustly superior to the baseline with limited step, presenting an ideal scenario for malicious attackers who might lack sufficient computational resources, e.g. fail to support 500 steps of attack for each setting. Moreover, the wider span of the shaded area for the DSN attack suggests a greater variance, which is reasonable, as the repeated DSN experiments are distinct in hyper-parameter. Experimental strategies between DSN and GCG are also different, where the latter only involves single setting and has been launched for more trials. Ablation study on \u03b1. To investigate the impact of the augmentation term Lrefusal loss on the jailbreaking results (equation 5), for different hyper-parameter \u03b1 setting, we present the max ASR among multiple rounds of experiments in Figure 5, which controls the magnitudes of the Lrefusal term. The fixed-length segments of 128 and 512 for Llama and Vicuna completions are examined by refusal matching metrics here. The baseline results on GCG correspond to the leftmost alpha = None case as well as the dotted line, which only involves the target loss in Equation 4. The yellow, blue, and red bars represent the cases where the Lrefusal term is involved in different stages, namely Lrefusal only used for searching, used for both selecting and searching with the same \u03b1, and used for both selecting and searching but with different \u03b1. More details about the setting and hyperparamter is presented in appendix C.3 In Figure 5, the DSN method consistently surpasses the baseline performance under nearly every hyper-parameter setting. We didn\u2019t include the results for higher values of (a) ASR of Llama (b) ASR of Vicuna Figure 5: Ablation study of ASR vs. \u03b1 by refusal matching evaluation \u03b1 because when \u03b1 exceeds 100, the DSN loss is dominated by the Lrefusal term, resulting in the generated responses focusing too much on avoiding refusal keywords rather than responding to the objectionable requests, which is not desirable in a jailbreaking scenario. 4.4 Evaluation 2: Effectiveness of Evaluation Ensemble Pipeline Instead of adopting the refusal matching evaluation method like current works Zou et al. (2023); Zhu et al. (2023), mainly considering its limitations mentioned earlier in section 1, in this section we will adopt the previously proposed evaluation ensemble pipeline to ensure more accurate and reliable evaluation results. Human evaluation. To accurately and fairly assess the proposed ensemble evaluation pipeline and compare it to the widely adopted refusal matching, we involve human evaluation by manually annotating 300 generated responses. Since the NLI method ascertain some certain hyperparameters, the annotated 300 data will be split into 100 trainset as well as 200 testset, accounts for 100 Llama2 completion and 100 Vicuna completion respectively. More details about the data split as well as annotation principle will be covered in appendix C.2. Aggregation strategy comparison. Aggregating evaluation results from each module is crucial for the accuracy of the ASR% at step 500 Llama-2 Llama-2 optimal Vicuna Vicuna optimal GCG 31.0 \u00b1 13.4 46 91.6 \u00b1 2.9 96 DSN 45.6 \u00b1 15.1 84 88.1 \u00b1 8.0 98 Table 2: ASR results under ensemble evaluation metric. Figure 6: ROC curve of different aggregation policy on testset evaluation pipeline. Common methods include majority voting, one-vote approval (requiring only one module to detect jailbreaking), and one-vote veto (requiring all modules to detect jailbreaking). To determine which aggregation policy is more accurate on testset, we employ a ROC curve illustrating the True Positive Rate versus False Positive Rate and compare their AUROC scores (shown in Figure 6). A larger area under the curve indicates better results. Soft and hard majority votes return probabilities and binary outcomes respectively. The ROC curve demonstrates the superiority of the majority vote as an aggregation strategy (the green and orange curve), with ensemble evaluation showing a higher AUROC score compared to refusal matching. Eval method Acc AUROC F1 Refusal matching 0.74 0.72 0.79 Gpt4 0.80 0.77 0.85 HarmBench 0.80 0.78 0.84 NLI(ours) 0.77 0.79 0.76 Ensemble(ours) 0.82 0.79 0.86 Table 3: The evaluation results obtained by different evaluation methods, reported by taking average on two distinct test set which respectively contains 100 manually annotated real jailbreaking responses. Examination of different metrics. By adopting a learning approach, the hyperparameter configuration of a novel NLI evaluation metric has been determined, which is a constituent part of our proposed ensemble evaluation. (a) ASR of Llama (b) ASR of Vicuna Figure 7: Ablation study of ASR vs. \u03b1 by ensemble evaluation. To further demonstrate the superiority of the evaluation pipeline effectively and rigorously, we present the evaluation results of different evaluation methods in table 3. From the table, it is observed that the Ensemble eval achieved superior performance on our annotated test set. It is noteworthy that, although the performance of the NLI model itself alone is not the best\u2014for instance, it even falls short of the Refusal Matching baseline in the F1 metric\u2014the ensemble from the combination of \"Gpt4, NLI, HarmBench\" yields the overall best performance among different metrics. This is attributed to the NLI\u2019s focus on identifying semantic incoherence and semantic inconsistency within the model\u2019s completion, a consideration that refusal matching or other alternative evaluation methods do not adequately address. Moreover, given that the NLI model is lightweight and opensource, employing this evaluation method results in significant savings in terms of time and financial resources, particularly in comparison to evaluation methods that rely on multiple calls to third-party commercial LLM APIs. ASR under new evaluation. In figure 7, we present the max ASR vs the hyper-parameter \u03b1 under the new ensemble evaluation pipeline. Similar to Figure 5, DSN method gives superior jailbreaking results in the much more aligned model Llama2, however, both method gives nearly 100% ASR in the less aligned model Vicuna. These two observations are in consistent with the results from one concurrent Transfer ASR% Llama Vicuna Refusal Matching Eval Esemble Refusal Matching Eval Esemble train test train test train test train test GCGpaper None None None None None 34.3 None None DSNmean 45.21 42.95 44.19 50.07 54.98 54.27 53.73 59.59 DSNmax 100 87 96 95 96 90 100 93 Table 4: The transfer ASR towards the black-box gpt-3.5-turbo model work Mazeika et al. (2024) and the findings mentioned in section 1, respectively. 4.5 Transferability Interestingly, the suffixes purely optimized by DSN demonstrate great level of transferability, where no kinds of ensemble or multi-model optimization is utilized as in the original GCG paper Zou et al. (2023). In table 4, the transfer ASR towards gpt-3.5-turbo model is detailed for different victim model, different metrics, and different dataset split. It is noteworthy to point out the importance of the existence of system prompt yet Huang et al. (2023). In our open-source GCG and DSN attack results, the presence of system prompt has already been reserved since the modification upon it could affect the jailbreak results drastically. However, during our transfer experiments the default system prompt for gpt-3.5-turbo model, e.g. \"you\u2019re a helpful assistant\", is removed from the conversation template because otherwise the jailbreak attack result of both methods would shrink immediately and dramatically. Running time (hours) GCG DSN Round1 60.96 60.58 Round2 60.11 60.46 Round3 59.71 61.08 Round4 60.73 61.30 Round5 60.58 61.01 Overall 60.42 \u00b1 0.45 60.89 \u00b1 0.31 Table 5: Running time analysis. 4.6 Running time analysis No significant extra time cost is needed between DSN and GCG method. Here we each sample 5 rounds of Llama-2 experiments and compare their running time in the table 5. On our machine, only a 0.77% relative increase regarding the average running time is observed. The computation overhead doesn\u2019t largely rise up because the extra computation introduced by DSN is magnitudes lower than the process of obtaining the logits during forward pass and inferring the gradients during the back propagation. Thus the extra time cost could be relatively neglected. 5 Conclusion In conclusion, we introduce the DSN (Don\u2019t Say No) attack to prompt LLMs not only to produce affirmative responses but also to effectively suppress refusals. Furthermore, we propose an ensemble evaluation pipeline integrating Natural Language Inference (NLI) contradiction assessment and two external LLM evaluators. Through extensive experiments, we showcase the potency of the DSN attack and the effectiveness of our ensemble evaluation approach compared to baseline methods. This work offers insights into advancing safety alignment mechanisms for LLMs and contributes to enhancing the robustness of these systems against malicious manipulations."
16
+ }
intro_8K/test_introduction_short_2404.16375v1.json ADDED
@@ -0,0 +1,18 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16375v1",
3
+ "title": "List Items One by One: A New Data Source and Learning Paradigm for Multimodal LLMs",
4
+ "abstract": "Set-of-Mark (SoM) Prompting unleashes the visual grounding capability of\nGPT-4V, by enabling the model to associate visual objects with tags inserted on\nthe image. These tags, marked with alphanumerics, can be indexed via text\ntokens for easy reference. Despite the extraordinary performance from GPT-4V,\nwe observe that other Multimodal Large Language Models (MLLMs) struggle to\nunderstand these visual tags. To promote the learning of SoM prompting for\nopen-source models, we propose a new learning paradigm: \"list items one by\none,\" which asks the model to enumerate and describe all visual tags placed on\nthe image following the alphanumeric orders of tags. By integrating our curated\ndataset with other visual instruction tuning datasets, we are able to equip\nexisting MLLMs with the SoM prompting ability. Furthermore, we evaluate our\nfinetuned SoM models on five MLLM benchmarks. We find that this new dataset,\neven in a relatively small size (10k-30k images with tags), significantly\nenhances visual reasoning capabilities and reduces hallucinations for MLLMs.\nPerhaps surprisingly, these improvements persist even when the visual tags are\nomitted from input images during inference. This suggests the potential of\n\"list items one by one\" as a new paradigm for training MLLMs, which strengthens\nthe object-text alignment through the use of visual tags in the training stage.\nFinally, we conduct analyses by probing trained models to understand the\nworking mechanism of SoM. Our code and data are available at\n\\url{https://github.com/zzxslp/SoM-LLaVA}.",
5
+ "authors": "An Yan, Zhengyuan Yang, Junda Wu, Wanrong Zhu, Jianwei Yang, Linjie Li, Kevin Lin, Jianfeng Wang, Julian McAuley, Jianfeng Gao, Lijuan Wang",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV",
11
+ "cs.AI",
12
+ "cs.CL"
13
+ ],
14
+ "label": "Original Paper",
15
+ "paper_cat": "LLM AND Reasoning",
16
+ "gt": "Recent advances in Multimodal Large Language Models (MLLMs) such as GPT-4V (OpenAI, 2023a) show strong performance in multimodal perception and reasoning, enabling various new capabilities (Yang et al., 2023b). Among these, Set-of-Mark Prompting (SoM) (Yang et al., 2023a) is an interesting new working mode that enhances the connection between visual objects and textual tokens via visual prompting, i.e., placing alphanumeric tags on input images. It provides a natural interface for human-computer interaction, by linking visual locations to executable actions through visual tags, and enables various applications such as GUI navigation (Yan et al., 2023b) and robot interaction (Lin et al., 2023a). Furthermore, GPT-4V with SoM (Yang et al., 2023a) can implicitly align visual objects with their corresponding tags. Such alignments (Li et al., 2020; Yang et al., 2021) allow MLLMs to leverage index numbers to perform multi-hop visual reasoning (Yang et al., 2023a; Wei et al., 2022), thereby improving their abilities in multimodal understanding and reasoning tasks. Despite the significant interest in SoM prompting and its broad applications, it remains unclear why GPT-4V can benefit from SoM prompting, We find that other MLLMs, including the state-of- the-art open-sourced models such as LLaVA-v1.5 (Liu et al., 2024), and commercial systems like Gemini (Team et al., 2023), struggle to understand SoM prompts. This gap prevents them from leveraging the effectiveness of SoM prompting. In this study, we aim to deepen the understanding of SoM, with a goal of facilitating arbitrary MLLMs to benefit from it. We break down SoM prompting into three core capabilities: (1) the ability to identify all tags and read the alphanumeric scene texts written on them; (2) the ability to recognize and pinpoint all objects in 1 arXiv:2404.16375v1 [cs.CV] 25 Apr 2024 Input: Image Input: Image + SoM Conversation Conversation User What items are there near the Marshal speaker? LLaVA-1.5 There is a laptop and a cup near the Marshall speaker. User To move the speaker closer to the curtain while keeping the laptop in the current position, which item should I swap it with? LLaVA-1.5 You should swap the laptop with the cup . a cup User What items are there near the Marshal speaker? SoM-LLaVA There is a laptop tagged with number 7 and a notebook tagged with number 8. User To move the speaker closer to the curtain while keeping the laptop in the current position, which item should I swap it with? SoM-LLaVA You can swap it with the lamp tagged with number 9. notebook laptop lamp laptop cup Figure 1: Example conversations from LLaVA and SoM-LLaVA (LLaVA with SoM ability) to demonstrate the effectiveness of our paradigm. Left: Standard prompting on LLaVA-1.5, which fails to correctly answer the questions. Right: Set-of-Mark prompting on SoM-LLaVA. Simply placing tags on the input image can improve visual reasoning of Multimodal LLMs. an image; (3) the ability to associate tags with corresponding objects in the image. Despite possessing skills such as OCR and visual recognition to meet the first two capabilities, most MLLMs still fail to fully understand SoM prompts. Therefore, we hypothesize that the crucial missing element is the third capability, associating tags with objects, which requires deliberate training. We further validate that SoM-style data are sparse in common MLLM training sources, and it may be necessary to create a specific dataset. To facilitate such training, we introduce a new learning paradigm named \u201clist items one by one\u201d. We show that by asking MLLMs to comprehensively list all tagged items following the alphanumeric order of visual tags, MLLMs can learn SoM prompting with a small number of item-listing samples. Specifically, we create a tailored dataset, by tagging images with Semantic-SAM (Li et al., 2023c; Yang et al., 2023a), and prompting GPT-4V to generate paired text descriptions. With just 10k image-text pairs, MLLMs like LLaVA-1.5 (Liu et al., 2023a) can reliably understand SoM tags. Based on this initial finding, we conduct studies to explore the effective recipes to help MLLMs best utilize SoM prompting. We enhanced MLLMs with this \u201clist items one by one\u201d objective and assess their SoM performance from two aspects: model\u2019s ability to recognize and describe the SoM tags, and its ability to use SoM in improving multimodal reasoning ( Figure 1). For the first aspect, we design the tag listing task, which requires MLLMs to list and describe all tags in the image, evaluated by listing accuracy. For the second aspect, we evaluate finetuned models on five MLLM benchmarks, including POPE, MME, SEED- Bench, LLaVA-Bench, and MM-Vet, showcasing that MLLMs with SoM can significantly boost the multmodal understanding performance. Moreover, our model trained with SoM data outperforms the original MLLM, even without additional visual tags during inference. This demonstrates the potential of incorporating our proposed dataset and learning paradigm to boost general MLLM training. Finally, we revisit our original question regarding the working mechanism of SoM. The preliminary hypothesis is that the SoM capability may be related to OCR and the implicit association among text, tags, and objects. With our trained models, specifically SoM-LLaVA, we gain access to model features and attention maps for an in-depth analysis. We visualize the attention map to verify tag association. Compared with the original LLaVA model, SoM-LLaVA indeed learns better visual-tag- text associations, reflected in corresponding attention maps. 2 Our contributions are summarized as follows. \u2022 We present a new training task and data source named \u201clist items one by one,\u201d which effectively bootstraps MLLMs for the SoM visual prompting ability. \u2022 We evaluate our finetuned SoM MLLMs on five multimodal understanding benchmarks, and show improved performance even when SoM tags are removed from the input image. \u2022 We probe the working mechanism of SoM through the trained MLLMs, showcasing the implicit association between visual objects and text tokens when performing SoM prompting.",
17
+ "main_content": "Visual referring prompting. Other than text prompts, visual referring prompting (Yang et al., 2023b) is another effective approach when interacting with multimodal LLMs, where users directly draw on input images to specify their intent, such as drawing visual pointers or handwriting scene texts. Early studies show that vision-language models can understand visual pointers such as circles (Shtedritski et al., 2023) and dots (Mani et al., 2020). Recent studies (Yang et al., 2023b) show that more powerful multimodal LLMs (OpenAI, 2023a) can handle more complicated prompts such as arrows, boxes, circles, hand drawing, scene text, as well as their combinations. Another major advancement is Set-of-Mark Prompting (SoM) (Yang et al., 2023a), where numbered tags can be placed on images to associate visual objects with text indexed. Its effective visual grounding capability (Kazemzadeh et al., 2014; Yu et al., 2016; Mao et al., 2016) enables various applications (Yan et al., 2023b; Zhang et al., 2023). In this work, we aim to better understand SoM and extend its success from GPT-4V (OpenAI, 2023a) to other open-source multimodal LLMs. Multimodal LLMs. Multimodal LLMs (Alayrac et al., 2022; Zhu et al., 2022; OpenAI, 2023a; Liu et al., 2023b; Li et al., 2023b) extend large language models (OpenAI, 2023b; Gao et al., 2023; Touvron et al., 2023) with visual perception capabilities. Recent studies (Chen et al., 2023) show the effectiveness of training open-source models on the GPT-4V generated detailed description data. Another thread of studies explore having multimodal LLMs predicting object locations as bounding boxes (Wang et al., 2023b; Peng et al., 2023) or masks (Rasheed et al., 2023). In contrast to most prior studies that pair the images with different text instructions, our study explores a new direction of how visual prompts such as SoM can improve multimodal LLMs. Specifically, we show that the SoM visual tags provide fine-grained alignments between visual objects and text tokens, thereby improving various visual reasoning tasks, both with and without SoM prompting during inference. 3 Preliminary Examination 3.1 Visualizing SoM Prompting on LLaVA In this section, we first investigate the capacity of LLaVA-1.5 in SoM, concerning its attention sensibility to the numeric IDs tagged on the objects and its answer to the SoM query. We show an example task to list a series of objects tagged with numeric IDs in Figure 2, in which the attention map is extracted from LLaVA-1.5 based on the SoM query (e.g., \u201cI have labeled a bright numeric ID at the center for each visual object in the image. Please enumerate their names.\u201d). The top 20 image patches with the highest average attention weights across the user query tokens are highlighted in transparent red regions. We can observe from the highly attended regions of LLaVA-1.5 that the numeric ID tags can be easily and correctly attended by LLaVA-1.5 along with their associated objects (e.g., bird, vase, and branches). Such capacities in locating numeric ID tags may have been acquired by LLaVA-1.5 from its pretraining tasks in OCR and also benefited from the strong OCR abilities of the ViT feature encoder (Radford et al., 2021) adopted by LLaVA-v1.5. However, the response prompted by the user query in the first example of Figure 2 suggests that LLaVA-1.5 cannot follow the SoM instruction to list all the items. Instead of providing the object descriptions corresponding to all the numeric ID tags, LLaVA-1.5 responds with a general image caption, due to a large portion of image captioning samples in its pretraining stage. From the second example of Figure 2, we can also observe that although LLaVA-1.5 generates a list of tag IDs with object names, it cannot accurately associate the tags to corresponding objects, causing the model to hallucinate the descriptions of these objects. 3 LLaVA-1.5 Attention Map of Tags User: I have labeled a bright numeric ID at the center for each visual object in the image. Please enumerate their names. LLaVA-1.5: In the image, there is a brown and black bird perched on a branch surrounded by leaves. The leaves surrounding the bird are also visible. LLaVA-1.5 Attention Map of Tags User: I have labeled a bright numeric ID at the center for each visual object in the image. Please enumerate their names. LLaVA-1.5: 1. A vase with a flower in it. 2. A white flower in the vase. 3. The vase is on a wooden table. 4. The vase is sitting on a ledge. 5. The vase is sitting on a fence. 6. The vase is sitting on a railing. 7. \u2026 8. \u2026 9. \u2026 10. \u2026 11. \u2026 Figure 2: Two examples of SoM prompting in LLaVA-1.5. Left: Attention map extracted from LLaVA-1.5 on the image of a bird perching on a branch, where 3 objects are tagged. Right: Attention map extracted from LLaVA-1.5 on the image of a vase placed on a table, where 7 objects are tagged. However, LLaVA-1.5 lists more than 7 object names that are repetitions of previous object names. # Dataset #Text Text w/ Listing Source of Text 1 LLaVA-Pretrain-CC3M-595K 595.4K 0 Raw CC3M image captions. 2 LLaVA-Pretrain-LCS-558K 558.1K 0 Captioned by BLIP. 3 LLaVA-v1.5-Mix665K 3356.2K 0.72% Rule-based, or generated by ShareGPT or GPT4-0314. 4 ShareGPT4V 102.0K 0.21% Generated by GPT4-Vision. 5 CogVLM 333.5K 7.16% Generated by MiniGPT4 or by GPT4-0314. Table 1: Examined pretraining (1-2) and instruction-tuning (3-5) datasets in our preliminary study. 3.2 Finding SoM Data in Existing Training Sources We further look into the pretraining/instruction-tuning (IT) dataset, aiming to inspect if there are text contents with listings, or images with SOM annotations. We examine the pretraining dataset of LLaVA-v1 and v1.5 (Liu et al., 2023b;a), and the IT dataset used by LLaVA-v1.5, ShareGPT4V (Chen et al., 2023), and CogVLM (Wang et al., 2023a). Table 1 shows the source of text in each dataset and the percentage of text content with a listing format. The text in the two pretraining datasets for LLaVA are image captions (either the raw caption or generated by BLIP (Dai et al., 2023)), and we did not find any text with listings in them using our parser. Aside from image captions, the IT dataset also contains instructions related to other visual tasks such as VQA. We noticed that the answers provided by GPT-4(V) models sometimes construct the text in a listing manner (e.g., list out possible reasons for a question, list out observed objects in the image, etc). More examples can be found in Appendix A.6. The instruction-following dataset used by CogVLM has the highest percentage of text with listings (\u223c7%). Through our interaction with these models, we also find CogVLM is better at generating listing-style data than LLaVA-1.5. We add tags to MSCOCO-2017 images following the SoM (Yang et al., 2023a) format, and train a binary classifier with ViT/B-16 (Dosovitskiy et al., 2020). We use the classifiers to filter the images in the two LLaVA pretraining datasets, and take the top 2k images with the highest scores for each dataset. We then manually check the top 2k images, and found 12 images with tagging in CC3M-595K (\u223c0.002%), and found 86 images with tagging in LCS-558K (\u223c0.015%). Figure 15 shows a few images with tagging. Given that tagged images are sparse in those datasets and the SoM prompting performance of open-source MLLMs is unsatisfying, it may be worthwhile to design a tailored dataset that empower open-source MLLMs with this emergent ability, similar to what GPT-4V is capable of. 4 Dataset Creation and Training Motivated by the above analysis, in this section, we introduce the pipeline to create our dataset. First, in Section 4.1, we use semantic-SAM to generate semantic visual prompts in the form of numeric tags for each image. We then discuss the learning paradigm of \u201clist items one by one\u201d in Section 4.2. Finally, we use visual prompted images to generate text data in Section 4.3. 4 4.1 Image Source and Visual Prompting Generation There are various open-source image datasets available (Deng et al., 2009; Lin et al., 2014; Schuhmann et al., 2022; Yan et al., 2023a). We use MS-COCO (Lin et al., 2014) as the image source to create our SoM dataset, since it contains comprehensive human annotations with bounding boxes, masks, and captions. It has also been widely used for visual instruction tuning (Liu et al., 2023b; Wang et al., 2023a; Chen et al., 2023), which could benefit controlled experiments as well as comparisons with previous work. The first step is to create visual prompts by placing numeric tags on proper locations. Following SoM (Yang et al., 2023a), we experiment with segmentation models including SEEM (Zou et al., 2023), Semantic-SAM (Li et al., 2023c), and SAM (Kirillov et al., 2023). Empirically, we find that Semantic-SAM provides the annotation granularity that best fits COCO images, and thus use it to create tagged images for our dataset. 4.2 A Learning Paradigm: List Items One by One After obtaining the image data with semantic tags, the next question is how to design the instruction data to best distill the SoM visual prompting ability. A common approach (Liu et al., 2023b; Chen et al., 2023) in multimodal instruction-following data creation is to design and collect \u201cquestionanswering\u201d style samples. This is often done by prompting ChatGPT/GPT-4 or alternative open-source models. Given an image I and optional metadata MI such as captions, bounding boxes, various questions or instructions X(i) Q are posed, and the corresponding answers X(i) A from large models are collected. However, such general question-answering data may not be the most effective in distilling the desired SoM prompting capability, due to the inadequate mention of objects in text. For SoM prompting, one core ability of interest is to associate numbered tags with visual objects in the image, thereby enabling effective referral of visual objects via text tokens. In a general QA data, however, it is rare for multiple objects to be mentioned, even in an extended multi-turn conversation. To enhance tag association, we propose a simple and effective approach: list items one by one, where the model is asked to comprehensively describe all tagged items within an image. Given an image IT with N text tags on the image, we ask the model to enumerate all items in numerical order: {X1 obj, X2 obj, \u00b7 \u00b7 \u00b7 , XN obj}, where Xj obj is the textual description of the j-th item, tagged by ID j in the image. Beyond promoting SoM learning, listing items one by one is also effective in general multi-modal LLM training: if a model learns to list items in the images with a specific order (in our case, the order is determined by the visual numeric tags), it gains a comprehensive and fine-grained understanding of images. This could directly benefit visual grounding and reasoning, which we verified through the standard multimodal QA and chat evaluation benchmarks. Compared with existing visual instruction tuning datasets, such as LLaVA-665K (Liu et al., 2023a) and ShareGPT-4V (Chen et al., 2023), another difference is the implicit spatial information encoded by the visual tags in SoM prompting. Converting images into the language space inevitably loses information, especially spatial locations. For example, \u201ca girl on the right\u201d can only vaguely imply the position of the girl. However, with SoM visual prompting, we provide precise visual guidance on the image. Therefore, our data can be viewed as a form of dense captioning with a new way of encoding spatial information. 4.3 Text Data Generation via GPT-4V With the visual prompting enhanced images, the final step for dataset creation is to generate the corresponding text data. To automate this process, we leverage GPT-4V (OpenAI, 2023a) to generate the listing data {X1 obj, X2 obj, \u00b7 \u00b7 \u00b7 , XN obj}, following the order of visual tags in the images. However, we find that simply prompting the model to list items in a zero-shot manner could lead to noisy and biased generation results, where the model may refer the tag to a distant object that is easy to describe. (see examples in appendix A.4). To mitigate this problem, we seek two complementary solutions: (1) We modify the system message of GPT-4V to avoid assigning tags to distant objects. (2) We 5 10k 25k 50k 75k 100k Number of listing data (k) 50 60 70 80 Accuracy list 7B list 13B GPT-4V (a) Ablation on model sizes with LLaVA-1.5 10k 25k 50k 75k 100k Number of listing data (k) 65 70 75 80 Accuracy list list+mix-665k list+nonocr list+ocrtext (b) Ablation on data sources with LLaVA-1.5-7B Figure 3: Performance analysis on tag listing. Training samples of listing data grow from 10k to 100k. list+mix-665k is to mix listing data with 665k instruction tuning data from (Liu et al., 2023a). list+nonocr is to exclude the OCR and text data from the full 665k data, resulting in 563k samples. list+ocrtext is to mix listing data with only OCR and text data from the full 665k data, resulting in 102k samples. Green-dashed line in Figure 3a is the zero-shot result from GPT-4V. manually design a few correct listing samples via human annotations, and use them as seed examples for in-context-learning to query GPT-4V. The details of our template is in Appendix. In addition to listing, we also consider conversational data similar to LLaVA (Liu et al., 2023b), where GPT-4V is asked to generate mulit-turn question answering between an AI assistant and a person asking questions about the photo. Given a tagged image IT, we use GPT-4V to generate instruction-following data in the form of {Person:IT X(i) Q , Assistant: X(i) A }. 4.4 Model Training We take the pretrained stage of LLaVA-1.5 (Liu et al., 2023a) as the base model, and continue finetuning by mixing instruction tuning data of LLaVA-1.5 with our collected visual prompting data. For SoM-listing, we create 40 task templates as human instructions (e.g., \u201cplease enumerate object names in the tagged image\u201d), and treat them as standard conversational data. We use the same training objective of next-token prediction to train general QA, SoM-QA and SoM-listing data. Specifically, we maximize the conditional log likelihood as follows: \u2212log p(XA|Xv, XQ) = \u2212log L \u220f i=1 p\u0398(xi|I/IT, XQ,<i, XA,<i), (1) where \u0398 are the trainable model parameters, XQ,<i and XA,<i are the instruction and answer tokens in all previous turns of conversations before the current prediction token xi. The input image is I or IT for LLaVA or SoM data, respectively. 5 Experiments 5.1 Experimental Settings Experiment overview. We validate the method effectiveness from two aspects. First, in Section 5.2, we benchmark the model\u2019s capabilities in understand and describing SoM visual prompting. We design the tag listing task on MS-COCO to test the SoM performance. Second, in Section 5.3, we evaluate if our dataset and model can benefit visual reasoning tasks, where we consider five representative visual question answering and reasoning tasks detailed as follows. MLLM benchmarks. We consider the following multimodal LLM benchmarks in Table 2 to validate SoM visual prompting\u2019s benefit on visual reasoning. POPE (Li et al., 2023e) is carefully designed to evaluate object hallucination in multimodal LLMs. We follow POPE and report the F1 Score for the binary choice questions. MME (Fu et al., 2023) contains 2800 binary choice questions for perception and cognition evaluation. We report the overall perception score for the evaluated models. SEED-Bench (Li et al., 2023a) contains 19K multiple choice questions covering both image and video modality. We follow a previous study (Lin et al., 2023b) that reports the multiple choice accuracy on 6 Method LLM Res. Pre-Data IT-Data POPE MME SEED-I LLaVA-W MM-Vet BLIP-2 Vicuna-13B 224 129M 85.3 1293.8 49.7 38.1 22.4 InstructBLIP Vicuna-7B 224 129M 1.2M \u2013 \u2013 58.8 60.9 26.2 InstructBLIP Vicuna-13B 224 129M 1.2M 78.9 1212.8 \u2013 58.2 25.6 Fuyu-8B Fuyu-8B 600 \u2013 \u2013 74.1 728.6 \u2013 \u2013 21.4 LLaMA-Adapter-V2 LLaMA2-7B 336 \u2013 \u2013 \u2013 1328.4 35.2 \u2013 \u2013 mPLUG-Owl-2 LLaMA2-7B 448 348M \u2013 \u2013 1450.2 64.1 \u2013 36.2 Qwen-VL Qwen-7B 448 1.4B\u2020 50M\u2020 \u2013 \u2013 62.3 \u2013 \u2013 Qwen-VL-Chat Qwen-7B 448 1.4B\u2020 50M\u2020 \u2013 1487.5 65.4 \u2013 \u2013 SPHINX LLaMA2-7B 224 80.7 1476.1 69.1 73.5 36.0 LLaVA-1.5 Vicuna-7B 336 558K 665K 85.9 1510.7 64.8 63.4 30.5 LLaVA-1.5 Vicuna-13B 336 558K 665K 85.9 1531.3 68.2 70.7 35.4 SoM-LLaVA-1.5 Vicuna-13B 336 558K 695K 86.6 1563.1 69.6 75.3 35.9 SoM-LLaVA-1.5-T Vicuna-13B 336 558K 695K 87.0 1572.8 69.5 73.3 37.2 Table 2: Performance comparison on MLLM benchmarks. Res., Pre-Data, IT-Data indicate input image resolution, the number of samples in pretraining and instruction tuning stage, respectively. \u2020Includes in-house data that is not publicly accessible. Underlined numbers are the second best results in the column. SoM-LLaVA-1.5-T is the model with tagged images as input. the image subset of 14k images, namely SEED-I. LLaVA-W: LLaVA-Bench (In-the-Wild) (Liu et al., 2023b) and MM-Vet (Yu et al., 2023) computes the evaluation score by prompting a GPT-4 based evaluator (OpenAI, 2023b) with both the predicted and ground-truth reference answer. The score is then scaled to the range of 0 to 100. We introduce extra implementation details in appendix A.1. 5.2 Evaluation on Tag Listing First, we evaluate model performance on the tag listing task, aiming to answer two research questions: (1) Do model sizes matter in terms of learning SoM ability? (2) How will different sets of extra training data impact the SoM performance? We design the listing data based on images with groundtruth mask annotations from MS-COCO, and enumerate each object with corresponding class name. An example list is \u201c1. person, 2. cat, 3. dog.\u201d. We compute list-wise accuracy, where for a caption with N items, the score is M N with M items predicted correctly by the model. With human annotation of objects in an image, we can automatically create abundant rule-based data (up to 100k) for studying model behaviors and perform quantitative evaluations. For the first question, we find that larger LLM performs better for the listing task (see Figure 3a), presumably benefiting from the stronger language prior to help learn SoM prompting. For the second question, we decompose the 665k instruction data from LLaVA-1.5 (Liu et al., 2023a) into two parts. We find that both general caption-QA data, as well as OCR-text data contribute to learning SoM ability when limited listing data are available (10k). The reason could be that OCR can help with identifying numeric tags, and general caption may help the model to recognize objects within an image, both of them are fundamental abilities required by SoM. In general, other visual instruction data may benefit learning SoM, especially when SoM data is scarce. Overall, we observe that with only 10k data, we can outperform zero-shot GPT-4V in listing accuracy, whereas growing data size from 50k to 100k only slightly improves the listing performance. These findings suggest that collecting a small amount of data may be sufficient for learning SoM prompting. 5.3 Evaluation on MLLM Benchmarks We then train LLaVA-1.5 on our collected dataset and perform evaluation on MLLM benchmarks. As shown in Table 2, we observe that our SoM-LLaVA-1.5, which is trained with a mixture of LLaVA visual instructions and our SoM data in order to learn SoM prompting, also obtains superior performance on general MLLM tasks. Surprisingly, we find that even without tagged images, SoM-LLaVA still attains strong performance and substantial improvement over the orignal LLaVA. This indicates the quality of our data and the potential of introducing listing data into general MLLM training to improve visual understanding and reasoning, as well as reduce hallucinations. We conjecture the reason that the great performance of SoM-LLaVA on non-tagged images is that \u201clisting items one by one\u201d with visual prompting guides the model to learn fine-grained semantics for image features. Related case studies and visualizations are in appendix A.2. For the performance of open-vocabulary listing, we present examples in appendix A.3. 7 Data Composition Data Size POPE MME SEED-I random popular adversarial OCR overall overall LLaVA-IT 665K 87.1 86.2 84.5 125.0 1531.3 68.2 LLaVA-IT + Listing 665K + 10k 87.3 86.3 84.8 147.5 1588.2 68.9 LLaVA-IT + QA 695K + 20k 87.5 86.4 84.7 110.0 1540.0 69.2 LLaVA-IT + Listing + QA 695K + 30k 87.8 86.7 85.2 140.0 1563.1 69.6 LLaVA-IT + ShareGPT-4V 695K + 20k 87.1 86.0 84.3 110.0 1528.7 69.3 Table 3: Comparison for different data mixture strategies. LLaVA-IT is the mix665k visual instruction data from (Liu et al., 2023a). Listing and QA is from our SoM dataset with tagged image-text pairs. ShareGPT-4V is from (Chen et al., 2023) with the same MS-COCO images as our 2k QA data and detailed captions from GPT-4V. 5.4 Ablation Study on Mixture of Datasets Finally, we perform ablation on different data mixture strategies in Table 3. We consider mixing our listing and QA data generated from Section 4.3 with LLaVA-665k (Liu et al., 2023a), trained separately or together. Empirically, we find that mixing listing and QA data yields the best overall performance. In Section 5.2, we find OCR data can help the learning of listing. Here we also notice that \u201clisting item one by one\u201d can in turn greatly improve the performance of OCR related task. The results on POPE indicates our data leads to lower hallucinations compared with ShareGPT-4V, which is a dense caption dataset without visual prompting. Placing tags on the images can seamlessly encode spatial information into the data for MLLMs to learn fine-grained vision language alignment. 6 Analysis SoM-LLaVA-1.5 1. Laptop 2. Chair 3. Monitor 4. Desk Lamp 5. Printer 1. Laptop 2. Chair 3. Monitor 4. Desk Lamp 5. Printer LLaVA-1.5 Figure 4: A comparative example of attention maps extracted from LLaVA-1.5 and SoM-LLaVA1.5, where five objects (e.g., laptop, chair, monitor, desk lamp, and printer) are tagged. We highlight the top-5 most attended image patches of the models on each object\u2019s numeric tags individually. SoM-LLaVA is better at attending to objects following numeric text and tags. 6.1 Probing Trained Models We first analyze the tag-listing capacity of SoM-LLaVA-1.5 acquired through fine-tuning. In Figure 4, we show the attention maps on the five tagged objects, which are extracted from SoM-LLaVA-1.5 and LLaVA-1.5 respectively. The comparative example showcases that although both models can locate their model attention on the mentioned objects to some extent, the fine-tuned SoM-LLaVA-1.5 model can attend to and focus on characteristic regions of the object, which can also be accurately 8 Case Study User: Whose pants' color is the same as someone else's white shirt? LLaVA-1.5: The man wearing brown pants has the same pants color as the woman wearing a white shirt. SoM-LLaVA-1.5: The person tagged with number 1 is wearing pants that are the same color as the person tagged with number 9's white shirt. GPT-4V: The person labeled with the number 2 is wearing white pants, and their color matches the white shirt of the person labeled with the number 9. Figure 5: An example comparison for LLaVA, SoM-LLaVA and GPT-4V. Case Study User: How many objects are the same type of the object tagged as 13? LLaVA-1.5: There are two objects of the same type as the one tagged as 13. SoM-LLaVA-1.5: There are three objects of the same type as the object tagged as 13, which is an apple. GPT-4V: The object tagged as number 13 is an apple. In this image, there are three other apples, tagged with the numbers 3, 6, and 9. So, there are a total of four apples, including the one tagged as 13. Figure 6: An example comparison for LLaVA, SoM-LLaVA and GPT-4V. guided by the numeric ID tags. For example, the comparative attention maps on the object \u201cLaptop\u201d tagged with number 1 show that SoM-LLaVA-1.5 can clearly attend to the mentioned object with its main focus. In contrast, LLaVA-1.5 mistakenly attends to the monitor instead of the laptop, due to high similarity between these two objects. In addition, we also observe that SoM-LLaVA-1.5 can be efficiently guided by the numeric ID tags to focus on the specific object the user refers to, even with multiple similar objects within the image. For example, the attention map of SoM-LLaVA-1.5 on the \u201cChair\u201d tagged with a number 2 is mostly focusing on the chair on the left-hand side, instead of the similar chair on the right-hand side. SoM prompting in SoM-LLaVA-1.5 with such the capacity to accurately locate the tagged object, enables more flexible and easier user-referring queries without complicated language descriptions. The attention maps also verify our early hypothesis regarding the implicit association among the text, tag, and object in SoM prompting. 6.2 Visual Reasoning with SoM Prompting We present two examples of different models reasoning over the tagged images. In Figure 5, we examine a multi-step visual reasoning question (i.e., \u201cWhose pants\u2019 color is the same as someone else\u2019s white shirt\u201d), which requires the MLLM to first identify the mentioned objects (i.e., pants and shirt) and compare their visual features (i.e., the same white color). We observe from Figure 5 that LLaVA-1.5 provides an incorrect answer by falsely recognizing the person who wears the white shirt as a female. Such an incorrect answer can be caused by the inferior object recognition capacity in LLaVA-1.5. Similar observation from GPT-4V in Figure 5 showcases that incorrect recognition of the white color of the male\u2019s pants can also cause incorrect reasoning conclusions in GPT-4V. In contrast, SoM-LLaVA-1.5 successfully identifies tags 1 and 9 with the same color in those image regions, while recognizing the two objects as white pants and white shirt, respectively. We show another example of tag selection in Figure 6. 9 7 Conclusion In this paper, we study SoM prompting of multimodal LLMs. We collected a tailored dataset that helps MLLMs acquiring the SoM visual prompting ability. Our approach demonstrates that MLLMs can learn SoM prompting using a small set of GPT-4V generated data, where the text describes the visual objects following the order of tags in the image. We then verify the effectiveness of SoM prompting on general VL reasoning tasks. Our enhanced model, SoM-LLaVA, consistently outperforms the original LLaVA model across five MLLM benchmarks. Our dataset and models will be released to facilitate vision and language research."
18
+ }
intro_8K/test_introduction_short_2404.16385v1.json ADDED
@@ -0,0 +1,16 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16385v1",
3
+ "title": "Efficiency in Focus: LayerNorm as a Catalyst for Fine-tuning Medical Visual Language Pre-trained Models",
4
+ "abstract": "In the realm of Medical Visual Language Models (Med-VLMs), the quest for\nuniversal efficient fine-tuning mechanisms remains paramount, especially given\nresearchers in interdisciplinary fields are often extremely short of training\nresources, yet largely unexplored. Given the unique challenges in the medical\ndomain, such as limited data scope and significant domain-specific\nrequirements, evaluating and adapting Parameter-Efficient Fine-Tuning (PEFT)\nmethods specifically for Med-VLMs is essential. Most of the current PEFT\nmethods on Med-VLMs have yet to be comprehensively investigated but mainly\nfocus on adding some components to the model's structure or input. However,\nfine-tuning intrinsic model components often yields better generality and\nconsistency, and its impact on the ultimate performance of Med-VLMs has been\nwidely overlooked and remains understudied. In this paper, we endeavour to\nexplore an alternative to traditional PEFT methods, especially the impact of\nfine-tuning LayerNorm layers, FFNs and Attention layers on the Med-VLMs. Our\ncomprehensive studies span both small-scale and large-scale Med-VLMs,\nevaluating their performance under various fine-tuning paradigms across tasks\nsuch as Medical Visual Question Answering and Medical Imaging Report\nGeneration. The findings reveal unique insights into the effects of intrinsic\nparameter fine-tuning methods on fine-tuning Med-VLMs to downstream tasks and\nexpose fine-tuning solely the LayerNorm layers not only surpasses the\nefficiency of traditional PEFT methods but also retains the model's accuracy\nand generalization capabilities across a spectrum of medical downstream tasks.\nThe experiments show LayerNorm fine-tuning's superior adaptability and\nscalability, particularly in the context of large-scale Med-VLMs.",
5
+ "authors": "Jiawei Chen, Dingkang Yang, Yue Jiang, Mingcheng Li, Jinjie Wei, Xiaolu Hou, Lihua Zhang",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Parameter AND Efficient AND Fine AND Tuning",
14
+ "gt": "Visual language models (VLMs) have become pivotal in facilitating multimodal tasks within the medical domain, such as medical visual question answering (Med-VQA) and medical imaging report gener- ation (Med-IRG). The pretraining-finetuning paradigm, heralded for its success in domain adaptation and transfer, now stands as the predominant training approach for Medical VLMs (Med-VLMs). Nonetheless, the substantial data and computational resource de- mands for VLMs\u2019 pretraining pose significant challenges. Despite the success of visual language pre-training paradigms like CLIP [32] and BLIP [21] fostering a series of open-source medical visual lan- guage pre-trained (VLP) models contributed by the community, adapting these models for specific downstream tasks remains a formidable task for those constrained by resource availability. Espe- cially considering the inherent minor variability in medical imaging across different medical centers or imaging devices, which necessi- tates researchers to frequently and rapidly fine-tune medical VLP models (Med-VLPs). The recent surge in Large Visual Language Models (LVLMs) has exacerbated these challenges. Although a series of Parameter- Efficient Fine-Tuning (PEFT) methods [13, 14, 25, 30] have been developed in the Large Language Model (LLM) domain, their ap- plicability and effectiveness in the context of LVLMs are yet to be ascertained[2]. Some empirical studies[36] have shown that the performance of certain PEFT methods contradicts their demon- strated competitiveness in the original LLM domain when fine- tuning domain-specific LVLMs because fine-tuning parameters for different parts of the LVLM can have drastically different effects. Distinct from universal domains, the medical field presents unique arXiv:2404.16385v1 [cs.CV] 25 Apr 2024 Preprint, 2024, Conference Paper Jiawei Chen, et al challenges, such as limited dataset size and specialized data char- acteristics, which necessitate a tailored approach to fine-tuning. These domain-specific requirements underscore the need for a ded- icated evaluation of PEFT methods on Med-VLMs to ensure their efficacy and appropriateness for medical tasks. Even so, research on the impact of imposing different PEFT methods on different scale Med-VLMs for different tasks remains completely blank. Mean- while, current PEFT methods typically focus on domain adaptation by adding extra components to the model structure (i.e., Reparame- terized Fine-tuning) [14, 50] or prefixes to the input (i.e., Additive Fine-tuning) [11, 25, 30], while the role of fine-tuning intrinsic struc- tural parameters of models has been widely neglected, especially in vertical domain fine-tuning. As shown in Figure 1, extrinsic tun- ing methods focus on concatenating additional components to the dense layer or adding prefixes to the inputs while intrinsic tuning methods select the intrinsic units of the transformer to be tuned. In this paper, we focus on efficiently fine-tuning Med-VLPs for specific downstream tasks, aiming to explore an innovative method that achieves task adaptation by fine-tuning a small subset of the model\u2019s intrinsic parameters. To find a universally efficient fine- tuning method applicable to various types of Med-VLMs, regardless of their scale, we turn our attention to common foundational lay- ers in transformer-based components, including Attention layers, Feed-Forward Neural Networks (FFN), and Layer Normalization (LayerNorm) layers. Through systematic experiments, we demon- strate that LayerNorm can serve as the most efficient method for fine-tuning Med-VLPs. To further explore the efficiency, adaptabil- ity, and performance of fine-tuning the intrinsic parameters for transferring Med-VLPs to downstream tasks, we conduct exten- sive evaluations across both large-scale and small-scale Med-VLPs, encompassing core medical visual language tasks like Med-VQA and Med-IRG. Our findings expose the variability of intrinsic pa- rameter fine-tuning methods in fine-tuning Med-VLMs to a down- stream task that is different from traditional extrinsic parameter fine-tuning methods. We underscore not only the superior efficiency of LayerNorm fine-tuning over existing PEFT methods but also its remarkable adaptability and transferability for fine-tuning Med- VLPs across diverse downstream tasks. The main contributions of this article are as follows: \u2022 To our best knowledge, we are the first to centre on fine- tuning a small subset of the Med-VLP\u2019s inherent parameters to adapt to downstream tasks. \u2022 We conduct a comprehensive series of experiments fine- tuning foundational components of Med-VLMs, including systematic comparisons with existing PEFT methods centred on tuning extrinsic components. \u2022 Our research identifies LayerNorm fine-tuning as a highly efficient strategy for adapting Med-VLPs to specific down- stream tasks within the medical domain.",
15
+ "main_content": "In the medical domain, Med-VLMs play a pivotal role in automating visual-language tasks, such as VQA and IRG. Initially, these models [7, 10, 26, 31] leverage Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) to extract visual and linguistic features separately. Yet, such approaches frequently fell short in terms of generalizability and transferability across different tasks due to the limitations of their structure. Driven by learning-based technologies [9, 17, 23, 24, 39\u201349], modern Med-VLMs [2\u20134] primarily adopt the transformer architecture, following a pretrainingfinetuning paradigm. They undergo initial pretraining on extensive, generalized medical image-text pair datasets, followed by comprehensive fine-tuning on more focused, task-specific datasets. For example, MISS [3], utilizing the ALBEF [22] methodology, begins its training on 38,800 selectively curated image-text pairs from the MedICaT dataset[33] before undergoing fine-tuning for VQA tasks. Similarly, LLaVA-Med [19] employs a dual-phase pretraining strategy, starting with image-text feature alignment on two million pairs from PubMed, then enhancing conversational capabilities using instruction-format data, culminating in full-scale fine-tuning for VQA tasks. These approaches consistently rely on full-model fine-tuning for task adaptation, a method that, despite its efficacy, demands substantial resources, particularly for large-scale models such as LLaVA-Med. The restricted dataset sizes available for downstream task training further jeopardize the model\u2019s generalizability, leading to potential catastrophic forgetting and diminishing its broader applicability in medical contexts. 2.2 Efficient Fine-tuning Techniques The fine-tuning of large-scale Pre-trained Language Models (PLMs) is a demanding process [1, 51, 53], requiring extensive computational resources and data. To alleviate these burdens, PEFT techniques [11, 25, 30, 36, 38] have been introduced. These methods [13, 14, 29] typically incorporate trainable components into the PLMs while maintaining the rest of the model\u2019s parameters in a frozen state. Some strategies [18, 25, 30] also involve the nuanced manipulation of input embeddings across different layers to minimize or negate modifications to the original model\u2019s architecture. PEFT methods have demonstrated efficacy in transitioning largescale PLMs to new tasks or downstream applications and have been instrumental in converting LLMs into multimodal LLMs [5, 20, 34, 51, 53]. For instance, LLaVA [28] uses an MLP adapter to connect a vision feature extractor with a large language model, selectively training the MLP adapter while keeping both components static, thus adapting the LLM into a VLM. [52] introduces an efficient strategy where tuning LayerNorm layers suffices to yield strong performance to transform an LLM into an LVLM. Nonetheless, the capability of existing PEFT methods to efficiently adapt pre-trained VLMs to specialized, especially medical, tasks remains largely uninvestigated. With the diverse architectures of LVLMs, the most effective application of PEFT methods is uncertain, and their generalizability to non-textual encoders/decoders is limited [12] (e.g., prefix-tuning and p-tuning are not viable for Vision Transformers (ViT) [8]). Consequently, investigating the adjustment of a model\u2019s intrinsic parameters for efficient fine-tuning emerges as a critical necessity. In this paper, we propose a novel method that eschews adding components to the original model structure or input, focusing instead on fine-tuning the model\u2019s inherent parameters. This strategy is designed to ensure the method\u2019s broad applicability for efficient fine-tuning across various Med-VLM types. Efficiency in Focus: LayerNorm as a Catalyst for Fine-tuning Medical Visual Language Pre-trained Models Preprint, 2024, Conference Paper 3 PRELIMINARIES 3.1 Mainstream Architectures of Med-VLMs Contemporary generative Med-VLMs, irrespective of their scale\u2014be it large-scale or small-scale, tend to follow a similar architectural framework. This typical structure comprises a vision feature extractor, a text feature extractor, a connector that integrates the former two, and a Language Model (LM) head. Most Med-VLMs opt for ViT [8] as the vision feature extractor, while the text encoder is based on mainstream frameworks such as BERT [6] or GPT [1]. Despite possible minor variations in their structural implementations, the transformer-based layer serves as their common denominator, with FFN, Attention mechanisms, and LayerNorm being indispensable core components. 3.2 Previous PEFT Methods Transitioning from the core mechanisms of attention and layer normalization, which provide stability and specificity within the model\u2019s architecture, we delve into the domain of extrinsic PEFT methods. These methods are categorized primarily into two types: Reparameterized Fine-tuning (i.e., LoRA (Low-Rank Adaptation)) and Additive Fine-tuning (i.e., Prefix-tuning). LoRA-Tuning: LLM maps data into a high-dimensional space for processing. LoRA indirectly trains the dense layers in the network by optimizing the rank-decomposition matrix that changes in the adaptation process of the dense layer, thereby achieving the best fine-tuning effect by optimizing only the rank-decomposition matrix of the dense layer. For the pretrained parameters \ud835\udf03\ud835\udc37 0 , the dense layer weight parameter matrix on a specific downstream task is defined as \ud835\udc4a0 \u2208R\ud835\udc51\u00d7\ud835\udc58and the intrinsic rank of it is \ud835\udf03\ud835\udc51; the specific downstream task\u2019s parameters \ud835\udf03\ud835\udc37is calculated as \ud835\udf03\ud835\udc37= \ud835\udf03\ud835\udc37 0 +\ud835\udf03\ud835\udc51\ud835\udc40, where \ud835\udc40is the rank-decomposition matrix. For\ud835\udc4a0 \u2208R\ud835\udc51\u00d7\ud835\udc58, LoRA updates it with the following equation: \ud835\udc4a0 + \u0394\ud835\udc4a= \ud835\udc4a0 + \ud835\udc35\ud835\udc34, \ud835\udc35\u2208R\ud835\udc51\u00d7\ud835\udc5f,\ud835\udc34\u2208R\ud835\udc5f\u00d7\ud835\udc58, (1) where \ud835\udc51is the output dimension of the previous layer, and \ud835\udc58is the input dimension of the next layer. For input \ud835\udc65, the forward propagation process is calculated as follows: \u210e= \ud835\udc4a0\ud835\udc65+ \u0394\ud835\udc4a\ud835\udc65= \ud835\udc4a0\ud835\udc65+ \ud835\udc35\ud835\udc34\ud835\udc65. (2) v v0Prefix-Tuning: Inspired by the In-Context Prompting method adopted by GPT3 [1], Li et al [25] propose the Prefix-tuning method for generation tasks. Instead of the discrete text used in prompt tuning, continuous vectors are prefixed to the input text. Specifically, the generation task is deemed as a table-to-text task, the input \ud835\udc65 is treated as a linear table and the output \ud835\udc66represents a short text. For an encoder-decoder model, different prefixes are attached to the beginning of the encoder and decoder with the input defined as: \ud835\udc67= [\ud835\udc43\ud835\udc45\ud835\udc38\ud835\udc39\ud835\udc3c\ud835\udc4b,\ud835\udc65, \ud835\udc43\ud835\udc45\ud835\udc38\ud835\udc39\ud835\udc3c\ud835\udc4b\u2032], and the prefixes are generated by a trainable matrix \ud835\udc43\ud835\udf03\u2208R|\ud835\udc43\ud835\udc56\ud835\udc51\ud835\udc65|\u00d7\ud835\udc51\ud835\udc56\ud835\udc5a(\u210e\ud835\udc56), the global training objective is defined as: max \ud835\udf19 log \ud835\udc43\ud835\udf19(\ud835\udc66|\ud835\udc65) = max \ud835\udf19 \u2211\ufe01 \ud835\udc56\u2208\ud835\udc4c\ud835\udc56\ud835\udc51\ud835\udc65 log \ud835\udc43\ud835\udf19(\ud835\udc67\ud835\udc56|\u210e<\ud835\udc56). (3) 3.3 Medical Visual Language Tasks Medical Visual Language Answering: The primary objective of Med-VQA is to provide answers based on professional questions posed by the inquirer regarding medical images, enhancing the understanding of medical images, and facilitating patient care through the automated interpretation of visual data. The tasks are categorized into open-ended questions, which require detailed descriptive answers, and close-ended questions, which demand concise, often binary responses like \u201cyes\u201d or \u201cno\u201d. This interdisciplinary domain requires the Med-VLM to interpret and provide insights into complex medical imagery, such as X-rays, MRI scans, and CT images. Medical Imaging Report Generation: Medical IRG involves the automatic creation of textual descriptions for medical images, using Med-VLMs. This task aims to analyze visual medical data and produce accurate, coherent, and clinically relevant reports. The goal is to assist radiologists and healthcare professionals by reducing their workload and improving diagnostic efficiency while maintaining high standards of patient care. 3.4 Difference between Universal VLMs and Med-VLMs\u2019 Downstream Tuning The differences between Med-VLMs and Universal VLMs make it necessary to study the tuning of Med-VLMs separately in addition to using the PEFT approach for Universal VLMs. From the data perspective, the dataset used for downstream task fine-tuning in the medical domain is extremely narrow compared to the universal domain, for example, the current largest radiological image dataset used for the Med-VQA task includes only 14,028 image-question pairs, which makes the fine-tuning of LVLMs fall into the problems of overfitting and catastrophic forgetting. At the same time, the answers of the textual Instruction pairs embedded in the dataset usually include only one or two simple words. Furthermore, the current training loss used by generative models in the fine-tuning process makes it very easy to fall into the learning of the data distribution from the long text to its short text rather than the learning of the correct image-text association. From the model perspective, most Med-VLMs are obtained by transfer learning from VLMs in the universal domain, a process where the visual coder is usually frozen, however, the domain gap between natural images and medical images affects the performance of Med-VLMs on medical tasks, where most of the current PEFT methods are not available for ViTs or or have not demonstrated their effective impact on visual encoders for transfer learning from general domain to medical domain. Therefore, a separate study of efficient fine-tuning methods for Med-VLM on downstream tasks is necessary. 4 TUNING SETTINGS 4.1 Baseline Model To explore a method that achieves task adaptation by fine-tuning a small subset of the model\u2019s own parameters, we choose two different-scale pre-trained Med-VLMs for different tasks: 1) smallscale VLM MISS [3] and 2) large-scale VLM LLaVA-Med [19] for Med-VQA and Med-IRG. These baseline models cover generative Med-VLM at different scales and for different tasks so that we can Preprint, 2024, Conference Paper Jiawei Chen, et al Assistant LayerNorm: The chest x-ray image shows a patient with a rightsided pneumothorax. A pneumothorax is a condition where air accumulates in the pleural space, which is the area between the lung and the chest wall. This can cause the lung to collapse, leading to difficulty breathing and chest pain. In this case, the pneumothorax is located on the right side of the patient's chest. Attention: The chest x-ray shows no acute abnormalities. The cardiac and mediastinal contours are normal, and there is no evidence of pneumothorax or pleural effusion. The lungs are clear, and there is no focal airspace consolidation. The impression is that there is no acute cardiopulmonary abnormality. FFN: The chest x-ray shows no acute abnormalities. The heart size and mediastinal contour are normal, and there is no evidence of focal consolidation, pneumothorax, or pleural effusion. Additionally, there is no visible free intraperitoneal air under the diaphragm. The impression is that there are no acute cardiopulmonary findings. LoRA: The chest x-ray image shows a patient with a right-sided pleural effusion. A pleural effusion is an abnormal accumulation of fluid in the pleural space, which is the area between the layers of tissue that line the lungs and the chest cavity. The image displays a silhouette of the heart and lungs, The presence of a pleural effusion may cause the heart and lungs to appear differently than expected, such as having a more pronounced silhouette. Prefix: The chest x-ray image shows a patient with a right-sided pneumothorax. A pneumothorax is a condition where air accumulates in the pleural space, which is the area between the lung and the chest wall. This can cause the lung to collapse, leading to difficulty breathing and chest pain. Question: Describe the given chest x-ray image in detail. Question: What modality is used to take this image? (Open-ended) Assistant LayerNorm: The image is taken using a computed tomography (CT) scan. Attention: Computed tomography (CT) is the modality used to take this image. FFN: CT scan, or computed tomography scan, is the modality used to take this image. LoRA: The image is taken using a computed tomography (CT) scan. Prefix: The image is taken using a computed tomography (CT) scan. Question: Does this image look normal? (Closed-ended) Assistant LayerNorm: No, the image does not appear to be normal. It shows bilateral infiltrates, which are abnormal substances or collections of cells within the lung tissue. This can be indicative of various conditions, such as infection, inflammation, or other lung diseases. Attention: No, this image does not look normal. It shows bilateral patchy infiltrates, which are abnormal areas of increased opacity in the lungs. FFN: No. LoRA: No, this chest X-ray does not look normal. It shows bilateral infiltrates, which are abnormal opacities or densities in both lungs. Prefix: No, the image does not appear to be normal. It shows bilateral infiltrates, which are abnormal substances or collections of cells within the lung tissue. This can be indicative of various conditions, such as infection, inflammation, or other lung diseases. Vision Encoder Image Embeddings Language model Question \u201cIs there any pneumonia in the X-ray?\u201d Ground-truth \u201cYes\u201d Connector LM head Text Output Prompt for Eval You will act as an intelligent evaluator of answers generated by Generative Medical Visual Language Model (Med-VLM). Please note that Med-VLM answers may be more varied than benchmark answers. If a Med-VLM answer is approximately correct about the image from a medical point of view, it should be regarded as correct even if there are some differences from the benchmark answer, so do not arbitrarily give an incorrect assessment. I'll give you the $questions$, $Med-VLM answer$, and $ground_truth$. You must output a word $correct$ or $incorrect$: $questions$: What modality is used to take this image? $Med-VLM answer$: The image is taken using a computed tomography (CT) scan. $ground_truth$: CT. or Figure 2: The pipeline of our study. The flowchart details the step-by-step process from input reception to output generation, showcasing the model\u2019s method for processing medical images and questions to generate contextually relevant responses. The right side presents the fine-tuning results across different paradigms, including both Med-VQA and Med-IRG tasks. provide comprehensive insights into the impact of different PEFT methods on fine-tuning Med-VLM to downstream tasks. 4.2 Tuning within Transformer-based Layer The transformer-based layer, serving as a fundamental structure across vision encoders, language models, and certain VLM connectors, undergoes fine-tuning through both intrinsic parameter adjustments and the application of extrinsic component fine-tuning methods. This process aims to assess the impact of various tuning approaches on the overall model performance. Strategic Intrinsic Adjustments: We emphasize the transformerbased layer\u2019s role as the computational core of the model and selectively fine-tune its intrinsic parameters. Attention layers, LayerNorm layers, and FFNs constitute the critical units of this layer. As depicted in Figure 1, when one of these three components is set to be trainable, the remaining parameters within the transformer-based layer are kept frozen. Incorporation of PEFT Techniques: LoRA-tuning and Prefixtuning, representing the forefront of PEFT methods, are chosen for comparison against intrinsic parameter adjustments. Figure 1 illustrates the application details of these PEFT methods within our study. For LoRA-tuning, low-rank matrices are selectively applied to the parameters of the query and value matrices within the attention layer, mapping data into a low-dimensional subspace for processing. In the case of Prefix-tuning, we follow prevalent practices by appending prefix vectors to the origin input \ud835\udc65of the key and value within the attention layer, and the final input embeddings can be defined as \ud835\udc67= [\ud835\udc43\ud835\udc45\ud835\udc38\ud835\udc39\ud835\udc3c\ud835\udc4b,\ud835\udc65]. 4.3 Tuning within the VLM Architecture In our pursuit to uncover the nuanced impact of various modules within the full VLM\u2019s architecture on overall model performance, we embark on a strategic fine-tuning expedition which entails selectively training specific modules within the VLM framework while employing efficient fine-tuning methods for certain components or maintaining others in a frozen trainable state. Such a strategy allows us to dissect the individual contributions of each component to the model\u2019s efficacy in medical visual language tasks, offering insights into optimizing Med-VLMs for enhanced performance and efficiency. This selective fine-tuning approach aims to validate the hypothesis that certain components within the Med-VLM architecture wield more significant influence over the model\u2019s performance on medical tasks. By applying focused fine-tuning strategies to individual modules, we seek to delineate the performance impact of targeted adjustments versus broad model updates. Details for Small-scale Med-VLMs: For small-scale Med-VLMs, such as MISS, we experiment with applying efficient fine-tuning techniques to some modules at a time, with the rest of the model\u2019s parameters set to remain fully trainable This is because for smallscale Med-VLM, making either module completely frozen may make the model unable to transfer to downstream tasks [2]. For instance, when the language model undergoes LayerNorm-tuning (LN-tuning), the vision feature extractor, connector, and LM head are kept in a state that allows full parameter adjustments. This strategy allows us to compare the impact of different efficient finetuning methods on model performance under the premise that we can evaluate the impact of fine-tuning different module parameters of small-scale Med-VLMs on the overall task performance. Efficiency in Focus: LayerNorm as a Catalyst for Fine-tuning Medical Visual Language Pre-trained Models Preprint, 2024, Conference Paper Strategy for Large-scale Med-VLMs: In the case of large-scale Med-VLMs, like LLaVA-Med, our fine-tuning strategy is more nuanced, reflecting the diverse requirements of comparative analysis. As shown in Figure 2, \u201cSnowflakes or flames\u201d indicate that any module has the option to be adjusted or frozen. Depending on the specific experimental setup, modules within these larger models may be categorized into three states: fully trainable (T), efficiently fine-tuned some of the parameters (PEFT), and completely frozen (F). This flexible approach comprehensively evaluates how different tuning states across various modules influence large-scale VLMs\u2019 performance on complex medical visual language tasks. 4.4 Downstream Fine-tuning Benchmarks: In this paper, we use a total of four datasets, Slake Dataset [27] and VQA-RAD dataset [16] for Med-VQA, and OpenI dataset [37] and MIMIC dataset[15] for Med-IRG. The Slake dataset consists of 14,028 QA pairs, of which 70% are used for training, 15% for validation, and 15% for testing. The VQA-RAD dataset is used for the zero-shot performance of the model on the VQA task including 3515 QA pairs, of which 451 pairs are used for testing. The OpenI dataset is used for the training of the Med-IRG task including 6,459 images and 3,955 reports, the instructions are the same as those adopted in [35]. The MIMIC test set was chosen for the evaluation of the model\u2019s Med-IRG performance, which includes 5,159 images and 3,269 reports. For the inference of the IRG task, we uniformly use the phrase \u201cDescribe the given chest x-ray image in detail.\u201d as the instruction. Application Details: For the fine-tuning of our chosen models (MISS and LLaVAMed), a consistent set of hyperparameters is employed to ensure uniformity across our experiments. Each model is fine-tuned with an initial learning rate of 2e-5, utilizing the Adam optimizer for its well-regarded efficiency in handling the optimization landscapes of deep learning models. Specifically, MISS underwent training for 120 epochs with a batch size of 16, adopting a weight decay of 0.05 to encourage regularization. In contrast, LLaVA-Med\u2019s finetuning is characterized by a warmup ratio of 0.03 and a cosine learning rate scheduler, alongside specific adjustments such as enabling tensor float 32 for enhanced computational performance, and employing FSDP strategies for memory efficiency, with settings like \u201cfull_shard auto_wra\u201d and targeting the \u201cLlamaDecoderLayer\u201d for wrapping. During all the inferences, Med-LLaVA generates outputs using a set of predefined generation parameters, including sampling methods and beam search configurations (num-beams=1), and the temperature is kept at 0.2. Detailed information regarding the hyperparameter settings for each model, along with additional configurations and the rationale for their selection, is provided in the Appendix for further reference. 5 EXPERIMENT RESULTS AND DISCUSSION 5.1 Small-scale MISS Result As shown in Table 1, we employ both supervised fine-tuning (SFT) and performance assessment on the Slake dataset\u2019s training and testing sets. The reported performance metrics include accuracy rates for \u2018opened\u2019 and \u2018closed\u2019 types, which means open-ended and closed-ended questions, as well as a global accuracy rate that averages the performance across both types. Given the potentially catastrophic impact of freezing any module on the overall performance of small-scale models, when certain modules underwent efficient fine-tuning, the remaining modules were maintained fully trainable. In the context of the MISS model, ViT, JTM, and DEC represent the visual encoder, joint textmultimodal encoder, and text decoder, respectively. The term \u201ctrainable params\u201d refers to the total volume of trainable model parameters, with \u201c#Params\u201d indicating the ratio of trainable to total parameters. \u201cPEFT params\u201d denotes the proportion of parameters fine-tuned using PEFT methods, with \u201c#PEFT Params\u201d reflecting the proportion of PEFT-tuned parameters relative to the total parameters within the corresponding module. When the baseline model is fully fine-tuned, it achieves the highest open-ended question accuracy and global accuracy rates of 82.91% and 82%, respectively. Under the premise of maintaining the visual encoder fully trainable and only efficiently tuning one module at a time, keeping the JTM encoder fully trainable enabled the model to achieve optimal performance. Compared to scenarios where DEC underwent full parameter training while JTM was efficiently tuned, the model\u2019s global accuracy rates under LayerNorm, attention, and FFN intrinsic parameter tuning methods were higher by 4%, 9%, and 12%, respectively. Maintaining the visual encoder fully trainable while efficiently tuning all the remaining modules resulted in significantly poor model performance, failing to correctly judge the closed-source questions in all the fine-tuning methods except Attention-tuning. Under LayerNorm, FFN, LoRA, and Prefix-tuning methods, the model never answered \u2018yes\u2019 to any close-ended question, with accuracy rates lower than random guessing at 38.03%, 54.65%, 50.70%, and 32.95%, respectively. Comparing different fine-tuning methods, the effect of LN-tuning is remarkable, achieving the best accuracy on close-ended questions with the lowest PEFT Params, even surpassing full parameter tuning and reaching 84.51%. In contrast, although Attention-tuning and FFN-tuning slightly outperform LN-tuning in terms of global accuracy, this came at the cost of tuning over 40% of the parameters in their respective modules. The LoRA method fine-tuning model using the [T, PEFT, T] paradigm tuned approximately five times more PEFT parameters than LN-tuning (only 56,823), with Prefix-tuning at twenty times more. This underscores the viability of LN-tuning as a comparable method to the most classical PEFT methods in small-scale fine-tuning scenarios that require saving certain parameter volumes. From a global parameter tuning perspective, Attention-tuning achieves performance closest to full fine-tuning by saving 23% of trainable parameters, marking it as another viable fine-tuning approach for small-scale Med-VLMs. 5.2 Large-scale LLaVA-Med Result Furthermore, we conduct comprehensive evaluations on LLaVAMed, a large-scale model designated for Med-VQA tasks. Our approach encompassed four distinct training paradigms: [PEFT, F, PEFT, F], [PEFT, T, PEFT, T], [F, F, PEFT, F], and [F, T, PEFT, T]. Considering the substantial parameter size of LVLMs, we aimed to restrict the volume of fine-tuning parameters to within about Preprint, 2024, Conference Paper Jiawei Chen, et al Table 1: Comparison of accuracy (ACC-%) of MISS on Slake dataset using different methods of fine-tuning. \u2018T\u2019 stands for trainable while \u2018F\u2019 stands for frozen. ViT JTM DEC Opened \u2191 Closed \u2191 Gobal \u2191 Trainable Params #Params PEFT Params #PEFT Params T T T 82.91 81.47 82.00 361,478,972 100% T LayerNorm LayerNorm 40.79 38.03 39.87 86,454,528 23.92% 115,200 0.04% T LayerNorm T 75.64 84.51 78.61 224,277,308 62.04% 56,832 0.04% T T LayerNorm 73.65 77.46 74.93 223,656,192 61.87% 58,368 0.04% T Attention Attention 64.51 74.65 71.25 199,806,720 55.27% 113,467,392 41.24% T Attention T 78.47 85.92 80.96 280,954,172 77.72% 56,733,696 41.33% T T Attention 75.50 64.23 71.72 280,331,520 77.55% 56,733,696 41.15% T FFN FFN 74.79 54.65 68.05 199,677,696 55.24% 113,338,368 41.19% T FFN T 76.63 84.51 79.26 280,889,660 77.71% 56669184 41.27% T T FFN 76.20 49.86 67.39 280,267,008 77.53% 56669184 41.10% T LoRA LoRA 68.14 50.70 62.29 86,929,152 24.05% 589,824 0.21% T LoRA T 76.77 82.81 78.79 224,515,388 62.11% 294,912 0.21% T T LoRA 78.52 79.44 78.83 223,892,736 61.94% 294,912 0.21% T Prefix Prefix 41.50 32.95 38.61 115,884,288 32.06% 29,544,960 10.74% T Prefix T 75.92 83.38 78.42 238,992,956 66.12% 14,772,480 10.76% T T Prefix 76.82 82.25 78.65 238,370,304 65.94% 14,772,480 10.71% Table 2: Comparison of results of LLaVA-Med on the Slake dataset using different methods of fine-tuning. Vision Tower Connector LLM LM Head Slake Dataset #Params Trainable Params BERTS-Recall Mean Token Opened \u2191 Closed \u2191 Global \u2191 LayerNorm F LayerNorm F 59.53 69.95 63.62 3.79% 266,737,664 46.35% 28.27 F F LayerNorm F 58.76 69.71 63.05 0.00372% 262,144 46.24% 27.81 LayerNorm T LayerNorm T 59.84 67.55 62.87 3.79% 266,737,664 46.93% 26.81 F T LayerNorm T 60.31 66.11 62.58 3.78% 266,637,312 46.93% 26.50 Attention F Attention F 61.4 67.79 63.9 31.91% 2,248,245,248 49.25% 25.24 F F Attention F 61.71 68.03 64.18 30.48% 2,147,483,648 49.11% 25.95 Attention T Attention T 60.93 65.87 62.87 35.69% 2,514,620,416 48.47% 25.89 F T Attention T 58.76 66.83 61.92 34.26% 2,413,858,816 48.49% 25.85 F T FFN T 64.5 62.26 63.62 44.74% 3,152,056,320 51.98% 16.42 F F FFN F 64.34 66.59 65.22 40.96% 2,885,943,296 52.07% 17.37 F F LoRA F 58.14 64.42 60.6 0.14% 9,994,240 47.55% 25.43 F T LoRA T 58.76 65.38 61.36 3.92% 276,369,408 47.26% 25.72 F F Prefix F 56.9 67.07 60.89 15.48% 1,090,805,760 46.19% 26.61 F T Prefix T 59.22 70.19 63.52 19.26% 1,357,180,928 46.28% 26.58 40%, thereby excluding full parameter training of the ViT and FFNtuning methods that involve adjusting ViT. Table 2 showcases the experimental results of LLaVA-Med, trained and tested on the Slake dataset, employing the aforementioned fine-tuning paradigms. When opting to keep both the connector and LM head trainable, the model\u2019s performance did not exhibit significant improvement, despite a substantial increase in the volume of adjusted parameters. Specifically, when fine-tuning adopted the [F, T, PEFT, T] paradigm, changes in global accuracy rates for LN, attention, and FFN tuning compared to [F, F, PEFT, F] are -0.5%, -2.1%, and -3.3% respectively. This contradicts the common notion that more parameter adjustments correlate with better SFT performance, indicating that full parameter adjustments of the connector and LM head during efficient fine-tuning of LLMs do not guarantee the expected outcomes. The performance changes are inconsistent under the [PEFT, F, PEFT, F] and [PEFT, T, PEFT, T]. For LN-tuning, fine-tuning the image encoder led to respective increases in global accuracy of 0.57% and 0.29%, while Attention-tuning resulted in changes of -0.28% and +0.85%. Such subtle differences do not conclusively indicate whether adjusting parameters of the image encoder benefits or hinder model performance, especially when considering Recall metrics. The increase in ViT-adjusted parameter volume did not regularly alter recall, suggesting that larger adjustments to ViT parameters do not consistently improve model recall. Comparing different intrinsic parameter adjustments revealed that increasing the volume of fine-tuned parameters indeed enhances the model\u2019s recall of generated content: as fine-tuning parameters shifted from 0.003% to 44%, recall correspondingly increased from 46.24% to 52.07%. This indicates that enlarging the volume of fine-tuned parameters allows the model to learn the distribution of ground-truth tokens in the vocabulary space more effectively, both quantitatively and spatially. However, considering accuracy\u2014a gold standard in medical tasks\u2014significant increases in parameter volume do not necessarily elevate all accuracy metrics concurrently. LN-tuning under the [PEFT, F, PEFT, F] paradigm once again achieved state-of-the-art (SOTA) accuracy for close-ended questions, which was consistent with observations in small-scale VLMs. Across two models of different scales, pre-trained on distinct Efficiency in Focus: LayerNorm as a Catalyst for Fine-tuning Medical Visual Language Pre-trained Models Preprint, 2024, Conference Paper Figure 3: Loss curves for various methods under the [F, T, PEFT, T] fine-tuning paradigm. datasets and tasks, LN-based fine-tuning consistently enhanced their accuracy on close-ended questions. While Attention-tuning and FFN-tuning marginally surpassed LN-tuning in global accuracy, achieving peak open-ended question accuracies of 64.34% and global accuracy of 65.22%, this came at the cost of escalating the volume of tuned parameters from 262,144 to 2,885,943,296\u2014a millionfold increase. Furthermore, following peak performance under current fine-tuning paradigms, the model ceased learning intrinsic relations of the features, instead focusing on the quantitative distribution of ground-truth tokens. This shift manifested in minimal accuracy improvements and a dramatic reduction in average output length, with mean tokens dropping from 28.27 to 17.38. Figure 2 compares the generated outcomes across different intrinsic and extrinsic fine-tuning methods under the [F, F, PEFT, F] paradigms, illustrating this phenomenon driven by inherent large model training patterns and suboptimal training data. Most datasets employed for fine-tuning Med-VQA tasks comprise answers in short text formats, with close-ended answers typically being \u2018yes\u2019 or \u2018no\u2019, and open-ended answers containing only a few words. For LLMs seeking interpretability of the answer, adjusting more parameters paradoxically impairs generative performance. When applying LoRA-tuning and Prefix-tuning to LLaVA-Med, the model\u2019s performance did not exhibit notable improvement. LoRA-tuning\u2019s recall reached 47.55% and 47.26%, indicating a closer alignment of the model\u2019s output with the ground truth distribution in the vocabulary space. However, this did not translate to enhanced evaluation accuracy, with accuracies under [F, F, PEFT, F] and [F, T, PEFT, T] fine-tuning paradigms reaching only 60.6% and 61.36%, respectively. This suggests that LoRA-tuning failed to deepen the multimodal model\u2019s understanding of joint image-text features, merely aligning output closer to the ground truth distribution. Figure 3 displays the loss curves for various methods under the [F, T, PEFT, T] fine-tuning paradigm. Compared to LN-tuning, LoRAtuning\u2019s minimum fine-tuned parameter volume is approximately fifty times larger, yet its accuracy is roughly 3% lower, with average output lengths of 25.43 and 25.72, trailing behind LN-tuning. These factors collectively indicate that existing PEFT methods may not directly enhance text-based accuracy in discerning medical images in multimodal model downstream fine-tuning, underscoring the advantages of LN-tuning over traditional PEFT approaches. Table 3: Comparison of LLaVA-Med competence on the MIMIC test dataset which SFT on the OpenI dataset. Vision Tower Connector LLM LM Head METEOR Score Rouge-L Mean Token Recall Precision F1 F F LayerNorm F 12.85% 12.58% 15.92% 13.61% 122.66 F F Attention F 12.85% 12.58% 15.92% 13.61% 122.66 F F FFN F 24.53% 17.01% 23.84% 19.34% 123.11 F F LoRA F 12.95% 12.57% 15.93% 13.62% 122.70 F F Prefix F 12.99% 12.47% 15.92% 13.54% 122.72 5.3 Large-scale VLM IRG Result To further explore the impact of various fine-tuning methods on the performance of large-scale Med-VLMs in the Med-IRG context, we employed the [F, F, PEFT, F] fine-tuning paradigm using the OpenI dataset to fine-tune Med-LLaVA. The model\u2019s OOD performance was then tested on the MIMIC dataset\u2019s test set to assess how it handles variations in input text domains. As shown in Table 3, the performance of the models fine-tuned with LN, Attention, LoRA, and Prefix methods showed minimal differences, with output text lengths averaging around 122.7. In contrast, FFN tuning significantly outperformed other fine-tuning approaches, demonstrating its superior capability in learning the underlying representations of ground-truth in long text generation tasks like Med-IRG. 5.4 Out of Distribution Performance Testing To assess whether the performance of LLaVA-Med on a familiar dataset like Slake correlates with its performance on a novel dataset, we conducted an OOD testing on the VQA-RAD dataset. This test serves to evaluate the model\u2019s robustness and flexibility by applying it to a different domain within the same field but with unseen data. More specifically, the images in the VQA-RAD dataset belong to the proximity domain with the Slake dataset but are quite different from the Slake dataset in terms of question formulation. Such experiments allow us to consider the ability of different fine-tuning methods to reason on non-proximity-domain text over similar medical images, in order to speculate on the ability of the models fine-tuned with the VQA dataset to be applied to the real Med-VQA scenarios. In this experiment, we observe various fine-tuning paradigms, focusing particularly on the role of the transformer-based LayerNorm, Attention, and FFN adjustments. The results show a notable variance in the model\u2019s ability to generalize the learned features to the VQA-RAD dataset. As Table 4 shows, fine-tuning methods that show comparable results on the Slake dataset exhibit significant performance variances on the OOD VQA-RAD dataset. Notably, under the [F, T, PEFT, T] training paradigm, LoRA-tuning underwent a remarkable reversal, surpassing the performances of Attentiontuning and FFN-tuning, which were previously effective on the Slake dataset. It achieves a global accuracy of 65.41% and matches the best-closed question accuracy of 73.71%, initially noted with LNtuning. Conversely, FFN-tuning, despite being superior at learning adjacent training text representations, disappointed in its OOD performance. While it excels in Rouge-L metrics with scores of 26.35%, 9.80%, and 13.51%, significantly surpassing other fine-tuning methods, it only managed accuracy scores of 56.50%, 64.54%, and 60.98%. Moreover, its mean output length plummeted to 18.51, the lowest among all methods. This combination of metrics further validates that although FFN-tuning can closely fit the training data during Preprint, 2024, Conference Paper Jiawei Chen, et al Table 4: Comparison of LLaVA-Med competence on the VQA-RAD test dataset which SFT on the Slake dataset. Vision Tower Connector LLM LM Head VQA-RAD Bertscore METEOR Score Rouge-L Mean Token Opened\u2191 Closed\u2191 Global\u2191 Precision Recall F1 Recall Precision F1 F T LayerNorm T 54.50 73.71 65.19 29.77% 49.00% 36.53% 12.53% 7.91% 1.48% 2.30% 29.52 F T Attention T 50.50 63.35 57.65 29.97% 49.27% 36.73% 11.87% 7.97% 1.46% 2.30% 31.12 F T FFN T 56.50 64.54 60.98 35.44% 52.05% 41.89% 19.23% 26.35% 9.80% 13.51% 18.50 F T LoRA T 55.00 73.71 65.41 30.06% 49.52% 36.87% 12.03% 7.89% 1.50% 2.29% 27.52 F T Prefix T 51.00 70.12 61.64 29.89% 48.85% 36.58% 12.71% 8.53% 1.57% 2.45% 28.87 F F LayerNorm F 54.50 75.30 66.08 29.90% 48.84% 36.60% 12.97% 7.84% 1.50% 2.33% 29.52 F F Attention F 55.50 71.71 64.52 30.25% 49.64% 37.08% 8.44% 8.44% 1.66% 2.58% 29.45 F F FFN F 52.00 61.35 57.21 35.20% 51.76% 41.60% 18.43% 24.31% 8.96% 12.40% 19.16 F F LoRA F 49.50 70.92 61.42 30.08% 49.80% 36.96% 12.46% 7.91% 1.51% 2.33% 29.04 F F Prefix F 51.00 69.72 61.42 29.76% 48.49% 36.36% 12.81% 8.11% 1.54% 2.41% 29.21 Table 5: Comparison of LLaVA-Med zero-shot competence on the MIMIC test dataset which SFT on the Slake dataset. Vision Tower Connector LLM LM Head METEOR Score Rouge-L Mean Token Recall Precision F1 F T LayerNorm T 11.86% 11.27% 18.54% 13.54% 71.78 F T Attention T 11.88% 11.15% 17.98% 13.33% 73.73 F T FFN T 12.12% 11.24% 18.07% 13.42% 73.23 F T LoRA T 11.57% 10.99% 17.91% 13.18% 71.51 F F LayerNorm F 12.07% 11.40% 18.26% 13.56% 75.00 F F Attention F 12.17% 11.50% 18.20% 13.65% 75.01 F F FFN F 12.81% 11.48% 17.84% 13.53% 76.88 F F LoRA F 11.91% 11.25% 18.07% 13.40% 72.93 SFT, it predominantly learns the distribution of ground-truth tokens rather than enhancing the model\u2019s ability to generalize image-text reasoning. A similar pattern is observed with Attention-tuning; the global accuracy decreases by 7.54% compared to LN-tuning when the connector and LM head are trained more. However, the text length is optimal at this time. Comprehensively, the mean token length of attention tuning under the same paradigm in Table 1 can show that attention tuning slightly overlearns the text in the adjacent domains during SFT training, but does not cause large damage to the model\u2019s output ability of textual diversity in textual reasoning in the non-adjacent domains. When enlarging the perspective to compare the accuracy performances across different fine-tuning methods, LN-tuning consistently displayed formidable strength. Under the [F, F, PEFT, F] tuning paradigm, LN-tuning, utilizing the smallest parameter adjustment, reached the highest scores in opened, closed, and global accuracy at 54.5%, 75.3%, and 66.08%, respectively. It also maintains a longer mean text output than any other method under the same tuning conditions. These results, coupled with the Rouge-L metrics from Table 1, indicate that LN-tuning manages to enhance the model\u2019s understanding of multimodal feature interrelations, significantly minimizing the model\u2019s overemphasis on learning groundtruth text token distributions due to low training data quality. This is evidenced by the lowest recall rate of 1.5% and the highest global accuracy of 66.08%. In contrast, LoRA-tuning, despite adjusting 50 times more parameters, did not significantly outperform LN-tuning. 5.5 Zero-shot Capability Investigation To further investigate the zero-shot capabilities of different finetuning paradigms on Med-VLMs, we conduct extensive evaluations of the LLaVA-Med model on the MIMIC test dataset after SFT on the Slake dataset. This analysis aims to understand the impact of various intrinsic tuning methods on the model\u2019s ability to generalize and adapt to new tasks within the medical domain, particularly for IRG tasks. The evaluation employs metrics such as METEOR score, Rouge-L, and mean token length to measure factual accuracy, linguistic precision, and diversity of output in medical report generation. From Table 5, experimental results indicate that different tuning methods exhibit varying impacts on the model\u2019s zero-shot performance. LN-tuning consistently showed robust performance across different configurations, achieving the highest precision scores (18.54% and 18.26% under different paradigms), which underscores its effectiveness in preserving the factualness of model outputs. In contrast, the Attention and FFN methods, although effective in some scenarios, demonstrate greater variability in their influence on model generalization. Notably, FFN-tuning, which previously excelled in VQA tasks, scored the lowest in precision (17.84%) under the [F, F, FFN, F] paradigm on the MIMIC dataset. This suggests that the model may have overlearned task-specific features from the Slake dataset, thus hindering its generalization and transfer capabilities. Furthermore, METEOR scores positively correlated with the number of adjusted parameters, increasing from 11.86% to a high of 12.81%, indicating that a larger volume of tuned parameters enhances the model\u2019s linguistic alignment capabilities in medical text generation tasks. Examining the effects of freezing versus tuning the connector and LM head reveals no clear pattern in performance metrics between [F, T, PEFT, T] and [F, F, PEFT, F] configurations. Changes in Rouge-L score and precision are minimal, suggesting that extensive finetuning of the connector and LM head does not necessarily contribute to improved zero-shot performance across these metrics. Comparison between intrinsic tuning and traditional methods, such as LoRA-tuning, did not exhibit standout performance in the zero-shot setting. Under the [F, T, PEFT, T] paradigm, LoRA-tuning shows lower METEOR scores, Rouge-L, and mean token length compared to intrinsic methods, indicating that LoRA-tuning might not effectively maintain the overall transferability and generalization of the model in medical applications. Thus, intrinsic tuning methods, particularly LN-tuning with minimal parameter adjustments, might be a better choice, especially under the [F, F, PEFT, F] paradigm, where it outperforms more parameter-intensive methods like Attention-tuning in maintaining the generalization capabilities. These observations underscore the efficacy of LN-tuning in preserving the generalization of LVLMs for diverse medical tasks. Efficiency in Focus: LayerNorm as a Catalyst for Fine-tuning Medical Visual Language Pre-trained Models Preprint, 2024, Conference Paper 6 CONCLUSION This study presents a thorough examination of intrinsic parameter fine-tuning and exposing LN-tuning, as a potent alternative to traditional PEFT methods for Med-VLMs. Our extensive experimental analysis across both small-scale and large-scale Med-VLMs demonstrated that fine-tuning the LayerNorm layers significantly enhances the models\u2019 adaptability, efficiency, and scalability in performing specialized medical tasks, such as Med-VQA and MedIRG. We hope this work will enhance the clinical applicability of Med-VLMs in real-world medical settings. ACKNOWLEDGMENTS This work is supported in part by the National Key R&D Program of China (2021ZD0113503) and in part by the Shanghai Municipal Science and Technology Committee of Shanghai Outstanding Academic Leaders Plan (No. 21XD1430300). REFERENCES [1] Tom Brown, Benjamin Mann, Ryder, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems 33 (2020), 1877\u20131901. [2] Jiawei Chen, Yue Jiang, Dingkang Yang, Mingcheng Li, Jinjie Wei, Ziyun Qian, and Lihua Zhang. 2024. Can LLMs\u2019 Tuning Methods Work in Medical Multimodal Domain? arXiv preprint arXiv:2403.06407 (2024). [3] Jiawei Chen, Dingkang Yang, Yue Jiang, et al. 2024. MISS: A Generative Pretraining and Finetuning Approach for Med-VQA. arXiv preprint arXiv:2401.05163 (2024). [4] Zhihong Chen, Yuhao Du, Hu, et al. 2022. Multi-modal masked autoencoders for medical vision-and-language pre-training. In MICCAI. Springer, 679\u2013689. [5] W Dai, J Li, D Li, AMH Tiong, J Zhao, W Wang, B Li, P Fung, and S Hoi. [n. d.]. InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning. arXiv 2023. arXiv preprint arXiv:2305.06500 ([n. d.]). [6] Jacob Devlin, Ming-Wei Chang, Lee, et al. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018). [7] Tuong Do, Binh X. Nguyen, et al. 2021. Multiple Meta-model Quantifying for Medical Visual Question Answering. In MICCAI. Cham, 64\u201374. [8] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020). [9] Yangtao Du, Dingkang Yang, Peng Zhai, Mingchen Li, and Lihua Zhang. 2021. Learning Associative Representation for Facial Expression Recognition. In IEEE International Conference on Image Processing (ICIP). 889\u2013893. [10] Haifan Gong, Guanqi Chen, Mao, et al. 2022. VQAMix: Conditional Triplet Mixup for Medical Visual Question Answering. IEEE Transactions on Medical Imaging 41, 11 (2022), 3332\u20133343. https://doi.org/10.1109/TMI.2022.3185008 [11] Zeyu Han, Chao Gao, Jinyang Liu, Sai Qian Zhang, et al. 2024. ParameterEfficient Fine-Tuning for Large Models: A Comprehensive Survey. arXiv preprint arXiv:2403.14608 (2024). [12] Xuehai He, Chunyuan Li, Pengchuan Zhang, Jianwei Yang, and Xin Eric Wang. 2023. Parameter-efficient model adaptation for vision transformers. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 37. 817\u2013825. [13] Neil Houlsby, Andrei Giurgiu, Jastrzebski, et al. 2019. Parameter-efficient transfer learning for NLP. In International Conference on Machine Learning. PMLR, 2790\u2013 2799. [14] Edward J Hu, Shen, et al. 2021. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685 (2021). [15] Alistair EW Johnson, Tom J Pollard, Seth J Berkowitz, Nathaniel R Greenbaum, Matthew P Lungren, Chih-ying Deng, Roger G Mark, and Steven Horng. 2019. MIMIC-CXR, a de-identified publicly available database of chest radiographs with free-text reports. Scientific data 6, 1 (2019), 317. [16] Jason J Lau, Gayen, et al. 2018. A dataset of clinically generated visual questions and answers about radiology images. Scientific data 5, 1 (2018), 1\u201310. [17] Yuxuan Lei, Dingkang Yang, Mingcheng Li, Shunli Wang, Jiawei Chen, and Lihua Zhang. 2023. Text-oriented Modality Reinforcement Network for Multimodal Sentiment Analysis from Unaligned Multimodal Sequences. arXiv preprint arXiv:2307.13205 (2023). [18] Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. arXiv preprint arXiv:2104.08691 (2021). [19] Chunyuan Li, Cliff Wong, Zhang, et al. 2023. Llava-med: Training a large languageand-vision assistant for biomedicine in one day. arXiv preprint arXiv:2306.00890 (2023). [20] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. 2023. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597 (2023). [21] Junnan Li, Dongxu Li, Xiong, et al. 2022. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In ICCV. 12888\u201312900. [22] Junnan Li, Ramprasaath Selvaraju, Gotmare, et al. 2021. Align before fuse: Vision and language representation learning with momentum distillation. NIPS 34 (2021), 9694\u20139705. [23] Mingcheng Li, Dingkang Yang, Yuxuan Lei, Shunli Wang, Shuaibing Wang, Liuzhen Su, Kun Yang, Yuzheng Wang, Mingyang Sun, and Lihua Zhang. 2024. A Unified Self-Distillation Framework for Multimodal Sentiment Analysis with Uncertain Missing Modalities. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), Vol. 38. 10074\u201310082. [24] Mingcheng Li, Dingkang Yang, and Lihua Zhang. 2023. Towards Robust Multimodal Sentiment Analysis under Uncertain Signal Missing. IEEE Signal Processing Letters 30 (2023), 1497\u20131501. [25] Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. arXiv preprint arXiv:2101.00190 (2021). [26] Bo Liu, Li-Ming Zhan, and Xiao-Ming Wu. 2021. Contrastive Pre-training and Representation Distillation for Medical Visual Question Answering Based on Radiology Images. In MICCAI 2021. Springer International Publishing, Cham, 210\u2013220. [27] Bo Liu, Li-Ming Zhan, Xu, et al. 2021. Slake: A semantically-labeled knowledgeenhanced dataset for medical visual question answering. In 2021 ISBI. 1650\u20131654. [28] Haotian Liu, Chunyuan Li, Wu, et al. 2023. Visual instruction tuning. arXiv preprint arXiv:2304.08485 (2023). [29] Shih-Yang Liu, Chien-Yi Wang, Hongxu Yin, Pavlo Molchanov, Yu-Chiang Frank Wang, Kwang-Ting Cheng, and Min-Hung Chen. 2024. DoRA: WeightDecomposed Low-Rank Adaptation. arXiv preprint arXiv:2402.09353 (2024). [30] Xiao Liu, Yanan Zheng, Zhengxiao Du, Ding, et al. 2023. GPT understands, too. AI Open (2023). [31] Binh D. Nguyen, Thanh-Toan Do, Binh X Nguyen, et al. 2019. Overcoming Data Limitation in Medical Visual Question Answering. In MICCAI. Cham, 522\u2013530. [32] Alec Radford, Jong Wook Kim, Hallacy, et al. 2021. Learning transferable visual models from natural language supervision. In International conference on machine learning. PMLR, 8748\u20138763. [33] Sachin Mehta Sanjay Subramanian, Lucy Lu Wang et al. 2020. MedICaT: A Dataset of Medical Images, Captions, and Textual"
16
+ }
intro_8K/test_introduction_short_2404.16399v1.json ADDED
@@ -0,0 +1,17 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16399v1",
3
+ "title": "Offline Reinforcement Learning with Behavioral Supervisor Tuning",
4
+ "abstract": "Offline reinforcement learning (RL) algorithms are applied to learn\nperformant, well-generalizing policies when provided with a static dataset of\ninteractions. Many recent approaches to offline RL have seen substantial\nsuccess, but with one key caveat: they demand substantial per-dataset\nhyperparameter tuning to achieve reported performance, which requires policy\nrollouts in the environment to evaluate; this can rapidly become cumbersome.\nFurthermore, substantial tuning requirements can hamper the adoption of these\nalgorithms in practical domains. In this paper, we present TD3 with Behavioral\nSupervisor Tuning (TD3-BST), an algorithm that trains an uncertainty model and\nuses it to guide the policy to select actions within the dataset support.\nTD3-BST can learn more effective policies from offline datasets compared to\nprevious methods and achieves the best performance across challenging\nbenchmarks without requiring per-dataset tuning.",
5
+ "authors": "Padmanaba Srinivasan, William Knottenbelt",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "cs.LG",
9
+ "cats": [
10
+ "cs.LG",
11
+ "cs.AI"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "Offline AND Reinforcement AND Learning",
15
+ "gt": "Reinforcement learning (RL) is a method of learning where an agent interacts with an environment to collect experiences and seeks to maximize the reward provided by the environ- ment. This typically follows a repeating cycle of experience collecting and improvement [Sutton and Barto, 2018]. This is termed online RL due to the need for policy rollouts in the environment. Both on-policy and off-policy RL require some schedule of online interaction which, in some domains, can be infeasible due to experimental or environmental lim- itations [Mirowski et al., 2018; Yu et al., 2021]. With such constraints, a dataset may instead be collected that consists of demonstrations by arbitrary (potentially multiple, unknown) behavior policies [Lange et al., 2012] that may be subopti- mal. Offline reinforcement learning algorithms are designed to recover optimal policies from such static datasets. The primary challenge in offline RL is the evaluation of out-of-distribution (OOD) actions; offline datasets rarely of- fer support over the entire state-action space and neural net- works overestimate values when extrapolating to OOD ac- tions [Fujimoto et al., 2018; Gulcehre et al., 2020; Kumar et 3. Figure 1: An illustration of our method versus typical, TD3-BC- like actor-constraint methods. TD3-BC: a) A policy selecting an OOD action is constrained to select in-dataset actions. b) A policy selecting the optimal action may be penalized for not selecting an in-dataset, but not in-batch, inferior action. Our method: c) A pol- icy selecting OOD actions is drawn towards in-dataset actions with decreasing constraint coefficient as it moves closer to any supported action. d) An optimal policy is not penalized for selecting an in- dataset action when the action is not contained in the current batch. al., 2020; Kumar et al., 2019]. If trained using standard off- policy methods, a policy will select any actions that maximize reward, which includes OOD actions. The difference between the rewards implied by the value function and the environ- ment results in a distribution shift that can result in failure in real-world policy rollouts. Thus, offline RL algorithms must both maximize the reward and follow the behavioral policy, while having to potentially \u201cstitch\u201d together several subopti- mal trajectories. The former requirement is usually satisfied by introducing a constraint on the actor to either penalize de- viation from the behavior policy or epistemic uncertainty of the value function, or by regularizing the value function to directly minimize OOD action-values. Many recent approaches to offline RL [Tarasov et al., 2023; Zhu et al., 2022; Li et al., 2022; Nikulin et al., 2023; Xu et al., 2023] demonstrate success in D4RL benchmarks [Fu et al., 2020], but demand the onerous task of per-dataset arXiv:2404.16399v1 [cs.LG] 25 Apr 2024 hyperparameter tuning [Zhang and Jiang, 2021]. Algo- rithms that require substantial offline fine-tuning can be in- feasible in real-world applications [Tang and Wiens, 2021], hampering their adoption in favor of simpler, older algo- rithms [Emerson et al., 2023; Zhu et al., 2023]. These older methods [Fujimoto and Gu, 2021; Kumar et al., 2020; Kostrikov et al., 2021b] provide excellent \u201cbang-for-buck\u201d as their hyperparameters work well across a range of D4RL datasets. Contributions In this paper, we show how a trained uncer- tainty model can be incorporated into the regularized policy objective as a behavioral supervisor to yield TD3 with behav- ioral supervisor tuning (TD3-BST). The key advantage of our method is the dynamic regularization weighting performed by the uncertainty network, which allows the learned policy to maximize Q-values around dataset modes. Evaluation on D4RL datasets demonstrates that TD3-BST achieves SOTA performance, and ablation experiments analyze the perfor- mance of the uncertainty model and the sensitivity of the pa- rameters of the BST objective.",
16
+ "main_content": "Reinforcement learning is a framework for sequential decision making often formulated as a Markov decision process (MDP), M = {S, A, R, p, p0, \u03b3} with state space S, action space A, a scalar reward dependent on state and action R(s, a), transition dynamics p, initial state distribution p0 and discount factor \u03b3 \u2208 [0, 1) [Sutton and Barto, 2018]. RL aims to learn a policy \u03c0 \u2208\u03a0 that executes action a = \u03c0(s) that will maximize the expected \ufffd\ufffd \ufffd \u2208 executes action a = \u03c0(s) that will maximize the expected discounted reward J(\u03c0) = E\u03c4\u223cP\u03c0(\u03c4) \ufffd\ufffdT t=0 \u03b3tR(st, at) \ufffd where P\u03c0(\u03c4) = p0(s0) \ufffdT t=0 \u03c0(at | st)p(st+1 | st, at) is the trajectory under \u03c0. Rather than rolling out an entire trajecdiscounted reward J(\u03c0) = E\u03c4\u223cP\u03c0(\u03c4) \ufffd\ufffd t=0 \u03b3tR(st, at) \ufffd where P\u03c0(\u03c4) = p0(s0) \ufffdT t=0 \u03c0(at | st)p(st+1 | st, at) is the trajectory under \u03c0. Rather than rolling out an entire trajectory, a state-action value function (Q function) is often used: Q\u03c0(s, a) = E\u03c4\u223cP\u03c0(\u03c4) \ufffd\ufffdT t=0 \u03b3tr(st, at) | s0 = s, a0 = a \ufffd . 2.1 Offline Reinforcement Learning \ufffd\ufffd \ufffd 2.1 Offline Reinforcement Learning Offline RL algorithms are presented with a static dataset D that consists of tuples {s, a, r, s\u2032} where r \u223cR(s, a) and s\u2032 \u223cp(\u00b7 | s, a). D has limited coverage over S \u00d7A; hence, offline RL algorithms must constrain the policy to select actions within the dataset support. To this end, algorithms employ one of three approaches: 1) policy constraints; 2) critic regularization; or 3) uncertainty penalization. Policy constraint Policy constraints modify the actor\u2019s objective only to minimize divergence from the behavior policy. Most simply, this adds a constraint term [Fujimoto and Gu, 2021; Tarasov et al., 2023] to the policy objective: arg max \u03c0 E{s,a}\u223cD [Q(s, \u03c0(s)) \u2212\u03b1D(\u03c0, \u03c0\u03b2)] , (1) where \u03b1 is a scalar controlling the strength of regularization, D(\u00b7, \u00b7) is a divergence function between the policy \u03c0 and the behavior policy \u03c0\u03b2. In offline RL, we do not have access to \u03c0\u03b2; some prior methods attempt to estimate it empirically [Kostrikov et al., 2021a; Li et al., 2023] which is challenging when the dataset is generated by a mixture of policies. Furthermore, selecting the constraint strength can be challenging and difficult to generalize across datasets with similar environments [Tarasov et al., 2023; Kostrikov et al., 2021a]. Other policy constraint approaches use weighted BC [Nair et al., 2020; Kostrikov et al., 2021b; Xu et al., 2023] or (surrogate) BC constraints [Li et al., 2022; Wu et al., 2019; Li et al., 2023]. The former methods may be too restrictive as they do not allow OOD action selection, which is crucial to improve performance [Fu et al., 2022]. The latter methods may still require substantial tuning and in addition to training if using model-based score methods. Other methods impose architectural constraints [Kumar et al., 2019; Fujimoto et al., 2019] that parameterize separate BC and reward-maximizing policy models. Critic Regularization Critic regularization methods directly address the OOD action-value overestimation problem by penalizing large values for adversarially sampled actions [Kostrikov et al., 2021a]. Ensembles Employing an ensemble of neural network estimators is a commonly used technique for prediction with a measure of epistemic uncertainty [Kondratyuk et al., 2020]. A family of offline RL methods employ large ensembles of value functions [An et al., 2021] and make use of the diversity of randomly initialized ensembles to implicitly reduce the selection of OOD actions or directly penalize the variance of the reward in the ensemble [Ghasemipour et al., 2022; Sutton and Barto, 2018]. Model-Based Uncertainty Estimation Learning an uncertainty model of the dataset is often devised analogously to exploration-encouraging methods used in online RL, but, employing these for anti-exploration instead [Rezaeifar et al., 2022]. An example is SAC-RND which directly adopts such an approach [Nikulin et al., 2023]. Other algorithms include DOGE [Li et al., 2022] which trains a model to estimate uncertainty as a distance to dataset action and DARL [Zhang et al., 2023] which uses distance to random projections of stateaction pairs as an uncertainty measure. As a whole, these methods optimize a distance d(\u00b7, \u00b7) \u22650 that represents the uncertainty of an action. 2.2 Uncertainty Estimation Neural networks are known to predict confidently even when presented with OOD samples [Nguyen et al., 2015; Goodfellow et al., 2014; Lakshminarayanan et al., 2017]. A classical approach to OOD detection is to fit a generative model to the dataset that produces a high probability for in-dataset samples and a low probability for OOD ones. These methods work well for simple, unimodal data but can become computationally demanding for more complex data with multiple modes. Another approach trains classifiers that are leveraged to become finer-grained OOD detectors [Lee et al., 2018]. In this work, we focus on Morse neural networks [Dherin et al., 2023], an approach that trains a generative model to produce an unnormalized density that takes on value 1 at the dataset modes. 3 Preliminaries A Morse neural network produces an unnormalized density M(x) \u2208[0, 1] on an embedding space Re [Dherin et al., 2023]. A Morse network can produce a density in Re that attains a value of 1 at mode submanifolds and decreases towards 0 when moving away from the mode. The rate at which the value decreases is controlled by a Morse Kernel. Definition 1 (Morse Kernel). A Morse Kernel is a positive definite kernel K. When applied in a space Z = Rk, the kernel K(z1, z2) takes values in the interval [0, 1] where K(z1, z2) = 1 iff z1 = z2. All kernels of the form K(z1, z2) = e\u2212D(z1,z2) where D(\u00b7, \u00b7) is a divergence [Amari, 2016] are Morse Kernels. Examples include common kernels such as the Radial Basis Function (RBF) Kernel, KRBF (z1, z2) = e\u2212\u03bb2 2 ||z1\u2212z2||2. (2) The RBF kernel and its derivatives decay exponentially, leading learning signals to vanish rapidly. An alternative is the ubiquitous Rational Quadratic (RQ) kernel: KRQ(z1, z2) = \u0012 1 + \u03bb2 2\u03ba || z1 \u2212z2 ||2 \u0013\u2212\u03ba (3) where \u03bb is a scale parameter in each kernel. The RQ kernel is a scaled mixture of RBF kernels controlled by \u03ba and, for small \u03ba, decays much more slowly [Williams and Rasmussen, 2006]. Consider a neural network that maps from a feature space into a latent space f\u03d5 : X \u2192Z, with parameters \u03d5, X \u2208Rd and Z \u2208Rk. A Morse Kernel can impose structure on the latent space. Definition 2 (Morse Neural Network). A Morse neural network is a function f\u03d5 : X \u2192Z in combination with a Morse Kernel on K(z, t) where t \u2282Z is a target, chosen as a hyperparameter of the model. The Morse neural network is defined as M\u03d5(x) = K(f\u03d5(x), t). Using Definition 1 we see that M\u03d5(x) \u2208[0, 1], and when M\u03d5(x) = 1, x corresponds to a mode that coincides with the level set of the submanifold of the Morse neural network. Furthermore, M\u03d5(x) corresponds to the certainty of the sample x being from the training dataset, so 1 \u2212M\u03d5(x) is a measure of the epistemic uncertainty of x. The function \u2212log M\u03d5(x) measures a squared distance, d(\u00b7, \u00b7), between f\u03d5(x) and the closest mode in the latent space at m: d(z) = min m\u2208M d(z, m), (4) where M is the set of all modes. This encodes information about the topology of the submanifold and satisfies the Morse\u2013Bott non-degeneracy condition [Basu and Prasad, 2020]. The Morse neural network offers the following properties: 1 M\u03d5(x) \u2208[0, 1]. 2 M\u03d5(x) = 1 at its mode submanifolds. 3 \u2212log M\u03d5(x) \u22650 is a squared distance that satisfies the Morse\u2013Bott non-degeneracy condition on the mode submanifolds. 4 As M\u03d5(x) is an exponentiated squared distance, the function is also distance aware in the sense that as f\u03d5(x) \u2192t, M\u03d5(x) \u21921. Proof of each property is provided in the appendix. 4 Policy Constraint with a Behavioral Supervisor We now describe the constituent components of our algorithm, building on the Morse network and showing how it can be incorporated into a policy-regularized objective. 4.1 Morse Networks for Offline RL The target t is a hyperparameter that must be chosen. Experiments in [Dherin et al., 2023] use simple, toy datasets with classification problems that perform well for categorical t. We find that using a static label for the Morse network yields poor performance; rather than a labeling model, we treat f\u03d5 as a perturbation model that produces an action f\u03d5(s, a) = \u02c6 a such that \u02c6 a = a if and only if s, a \u223cD. An offline RL dataset D consists of tuples {s, a, r, s\u2032} where we assume {s, a} pairs are i.i.d. sampled from an unknown distribution. The Morse network must be fitted on N state-action pairs [{s1, a1, }, ..., {sN, aN}] such that M\u03d5(si, aj) = 1, \u2200i, j \u22081, ..., N ] only when i = j. We fit a Morse neural network to minimize the KL divergence between unnormalized measures [Amari, 2016] following [Dherin et al., 2023], DKL(D(s, a) || M\u03d5(s, a)): min \u03d5 Es,a\u223cD \u0014 log D(s, a) M\u03d5(s, a) \u0015 + Z M\u03d5(s, a) \u2212D(s, a) da. (5) With respect to \u03d5, this amounts to minimizing the empirical loss: L(\u03d5) = \u22121 N X s,a\u223cD log K(f\u03d5(s, a), a) + 1 N X s\u223cD a \u00af D\u223cDuni K(f\u03d5(s, au), au), (6) where au is an action sampled from a uniform distribution over the action space Duni. A learned Morse density is well suited to modeling ensemble policies [Lei et al., 2023], more flexibly [Dherin et al., 2023; Kostrikov et al., 2021a; Li et al., 2023] and without down-weighting good, in-support actions that have low density under the behavior policy [Singh et al., 2022] as all modes have unnormalized density value 1. A Morse neural network can be expressed as an energybased model (EBM) [Goodfellow et al., 2016]: Proposition 1. A Morse neural network can be expressed as an energy-based model: E\u03d5(x) = e\u2212log M\u03d5(x) where M\u03d5 : Rd \u2192R. Note that the EBM E\u03d5 is itself unnormalized. Representing the Morse network as an EBM allows analysis analogous to [Florence et al., 2022]. Theorem 1. For a set-valued function F(x) : x \u2208Rm \u2192 Rn\\{\u2205}, there exists a continuous function g : Rm+n \u2192R that is approximated by a continuous function approximator g\u03d5 with arbitrarily small bounded error \u03f5. This ensures that any point on the graph F\u03d5(x) = arg miny g\u03d5(x, y) is within distance \u03f5 of F. We refer the reader to [Florence et al., 2022] for a detailed proof. The theorem assumes that F(x) is an implicit function and states that the error at the level-set (i.e. the modes) of F(x) is small. 4.2 TD3-BST We can use the Morse network to design a regularized policy objective. Recall that policy regularization consists of Q-value maximization and minimization of a distance to the behavior policy (Equation 1). We reconsider the policy regularization term and train a policy that minimizes uncertainty while selecting actions close to the behavior policy. Let C\u03c0(s, a) denote a measure of uncertainty of the policy action. We solve the following optimization problem: \u03c0i+1 = arg min \u03c0\u2208\u03a0 Ea\u223c\u03c0(\u00b7|s) [C\u03c0(s, a)] (7) s.t. DKL (\u03c0(\u00b7 | s) || \u03c0\u03b2(\u00b7 | s)) \u2264\u03f5. (8) This optimization problem requires an explicit behavior model, which is difficult to estimate and using an estimated model has historically returned mixed results [Kumar et al., 2019; Fujimoto et al., 2019]. Furthermore, this requires direct optimization through C\u03c0 which may be subject to exploitation. Instead, we enforce this implicitly by deriving the solution to the constrained optimization to obtain a closed-form solution for the actor [Peng et al., 2019; Nair et al., 2020]. Enforcing the KKT conditions we obtain the Lagrangian: L(\u03c0, \u00b5) = Ea\u223c\u03c0(\u00b7|s) [C\u03c0(s, a)] + \u00b5(\u03f5 \u2212DKL(\u03c0 || \u03c0\u03b2)). (9) Computing \u2202L \u2202\u03c0 and solving for \u03c0 yields the uncertainty minimizing solution \u03c0C\u2217(a | s) \u221d\u03c0\u03b2(a | s)e 1 \u00b5 C\u03c0(s,a). When learning the parametric policy \u03c0\u03c8, we project the nonparametric solution into the policy space as a (reverse) KL divergence minimization of \u03c0\u03c8 under the data distribution D: arg min \u03c8 Es\u223cD h DKL \u0010 \u03c0C\u2217(\u00b7 | s) || \u03c0\u03c8(\u00b7 | s) \u0011i (10) = arg min \u03c8 Es\u223cD h DKL \u0010 \u03c0\u03b2(a | s)e 1 \u00b5 C\u03c0(s,a) || \u03c0\u03c8(\u00b7 | s) \u0011i (11) = arg min \u03c8 Es,a\u223cD h \u2212log \u03c0\u03c8(a | s)e 1 \u00b5 C\u03c0(s,a)i , (12) which is a weighted maximum likelihood update where the supervised target is sampled from the dataset D and C\u03c0(s, a) = 1 \u2212M\u03d5(s, \u03c0\u03c8(s)). This avoids explicitly modeling the behavior policy and uses the Morse network uncertainty as a behavior supervisor to dynamically adjust the strength of behavioral cloning. We provide a more detailed derivation in the appendix. Interpretation Our regularization method shares similarities with other weighted regression algorithms [Nair et al., 2020; Peng et al., 2019; Kostrikov et al., 2021b] which weight the advantage of an action compared to the dataset/replay buffer action. Our weighting can be thought of as a measure of disadvantage of a policy action in the sense of how OOD it is. We make modifications to the behavioral cloning objective. From Morse network property 1 we know M\u03d5 \u2208[0, 1], hence 1 \u2264e 1 \u00b5 C\u03c0 \u2264e 1 \u00b5 , i.e. the lowest possible disadvantage coefficient is 1. To minimize the coefficient in the mode, we require it to approach 0 when near a mode. We adjust the weighted behavioral cloning term and add Q-value maximization to yield the regularized policy update: \u03c0i+1 \u2190arg max \u03c0 E s,a\u223cD, a\u03c0\u223c\u03c0i(s) [ 1 ZQ Qi+1(s, a\u03c0) \u2212(e 1 \u00b5 C\u03c0(s,a) \u22121)(a\u03c0 \u2212a)2], (13) where \u00b5 is the Lagrangian multiplier that controls the magnitude of the disadvantage weight and ZQ = 1 N PN n=1|Q(s, a\u03c0)| is a scaling term detached from the gradient update process [Fujimoto and Gu, 2021], necessary as Q(s, a) can be arbitrarily large and the BC-coefficient is upper-bounded at e 1 \u00b5 . The value function update is given by: Qi+1 \u2190arg min Q Es,a,s\u2032\u223cD[(y \u2212Qi(s, a))2], (14) with y = r(s, a) + \u03b3Es\u2032\u223c\u00af \u03c0(s\u2032) \u00af Q(s\u2032, a\u2032) where \u00af Q and \u00af \u03c0 are target value and policy functions, respectively. 4.3 Controlling the Tradeoff Constraint Tuning TD3-BST is straightforward; the primary hyperparameters of the Morse network consist of the choice and scale of the kernel, and the temperature \u00b5. Increasing \u03bb for higher dimensional actions ensures that the high certainty region around modes remains tight. Prior empirical work has demonstrated the importance of allowing some degree of OOD actions [An et al., 2021]; in the TD3-BST framework, this is dependent on \u03bb. In Figure 2 we provide a didactic example of the effect of \u03bb. We construct a dataset consisting of 2-dimensional actions in [\u22121, 1] with means at the four locations {[0.0, 0.8], [0.0, \u22120.8], [0.8, 0.0], [\u22120.8, 0.0]} and each with standard deviation 0.05. We sample M = 128 points, train a Morse network and plot the density produced by the Morse network for \u03bb = { 1 10, 1 2, 1.0, 2.0}. A behavioral cloning policy learned using vanilla MLE where all targets are weighted equally results in an OOD action being selected. Training using Morse-weighted BC downweights the behavioral cloning loss for far away modes enabling the policy to select and minimize error to a single mode. (a) \u03bb = 0.1 (b) \u03bb = 0.5 (c) \u03bb = 1.0 (d) \u03bb = 2.0 (e) Ground Truth (f) Density \u03bb = 1.0 Figure 2: a-d: Contour plots of unnormalized densities produced by a Morse network for increasing \u03bb with ground truth actions included as \u00d7 marks. e: Ground truth actions in the synthetic dataset, the MLE action (red). A Morse certainty weighted MLE model can select actions in a single mode, in this case, the mode centred at [0.8, 0.0] (orange). Weighting a divergence constraint using a Morse (un)certainty will encourage the policy to select actions near the modes of M\u03d5 that maximize reward. f: Plot of the 3D unnormalized Morse density for \u03bb = 1.0. Algorithm 1 TD3-BST Training Procedure Outline. The policy is updated once for every m = 2 critic updates, as is the default in TD3. Input: Dataset D = {s, a, r, s\u2032} Initialize: Initialize Morse network M\u03d5. Output: Trained Morse network M\u03d5. Let t = 0. for t = 1 to TM do Sample minibatch (s, a) \u223cD Sample random actions a \u00af D \u223cDuni for each state s Update \u03d5 by minimizing Equation 6 end for Initialize: Initialize policy network \u03c0\u03c8, critic Q\u03b8, target policy \u00af \u03c8 \u2190\u03c8 and target critic \u00af \u03b8 \u2190\u03b8. Output: Trained policy \u03c0. Let t = 0. for t = 1 to TAC do Sample minibatch (s, a, r, s\u2032) \u223cD Update \u03b8 using Equation 14 if t mod m = 0 then Obtain a\u03c0 = \u03c0(s) Update \u03c8 using Equation 13 Update target networks \u00af \u03b8 \u2190\u03c1\u03b8 + (1 \u2212\u03c1)\u00af \u03b8, \u00af \u03c8 \u2190 \u03c1\u03c8 + (1 \u2212\u03c1) \u00af \u03c8 end if end for return \u03c0 4.4 Algorithm Summary Fitting the Morse Network The TD3-BST training procedure is described in Algorithm 1. The first phase fits the Morse network for TM gradient steps. Actor\u2013Critic Training In the second phase of training, a modified TD3-BC procedure is used for TAC iterations with alterations highlighted in red. We provide full hyperparameter details in the appendix. 5 Experiments In this section, we conduct experiments that aim to answer the following questions: \u2022 How does TD3-BST compare to other baselines, with a focus on comparing to newer baselines that use perdataset tuning? \u2022 Can the BST objective improve performance when used with one-step methods (IQL) that perform in-sample policy evaluation? \u2022 How well does the Morse network learn to discriminate between in-dataset and OOD actions? \u2022 How does changing the kernel scale parameter \u03bb affect performance? \u2022 Does using independent ensembles, a second method of uncertainty estimation, improve performance? We evaluate our algorithm on the D4RL benchmark [Fu et al., 2020], including the Gym Locomotion and challenging Antmaze navigation tasks. 5.1 Comparison with SOTA Methods We evaluate TD3-BST against the older, well known baselines of TD3-BC [Fujimoto and Gu, 2021], CQL [Kumar et al., 2020], and IQL [Kostrikov et al., 2021b]. There are more recent methods that consistently outperform these baselines; of these, we include SQL [Xu et al., 2023], SAC-RND [Nikulin et al., 2023], DOGE [Li et al., 2022], VMG [Zhu et al., 2022], ReBRAC [Tarasov et al., 2023], CFPI [Li et al., 2023] and MSG [Ghasemipour et al., 2022] (to our knowledge, the best-performing ensemble-based method). It is interesting to note that most of these baselines implement policy constraints, except for VMG (graph-based planning) and MSG (policy constraint using a large, independent ensemble). We note that all the aforementioned SOTA methods (except SQL) report scores with per-dataset tuned parameters in stark contrast with the older TD3-BC, CQL, and IQL algorithms, which use the same set of hyperparameters in each D4RL domain. All scores are reported with 10 evaluations in Locomotion and 100 in Antmaze across five seeds. We present scores for D4RL Gym Locomotion in Table 1. TD3-BST achieves best or near-best results compared to all previous methods and recovers expert performance on five of nine datasets. The best performing prior methods include SAC-RND and ReBRAC, both of which require per-dataset tuning of BRAC-variant algorithms [Wu et al., 2019]. We evaluate TD3-BST on the more challenging Antmaze tasks which contain a high degree of suboptimal trajectories and follow a sparse reward scheme that requires algorithms to stitch together several trajectories to perform well. TD3-BST achieves the best scores overall in Table 2, especially as the maze becomes more complex. VMG and MSG are the bestperforming prior baselines and TD3-BST is far simpler and more efficient in its design as a variant of TD3-BC. The authors of VMG report the best scores from checkpoints rather than from the final policy. MSG report scores from ensembles with both 4 and 64 critics of which the best scores included here are from the 64-critic variant. We pay close attention to SAC-RND, which, among all baselines, is most similar in its inception to TD3-BST. SACRND uses a random and trained network pair to produce a dataset-constraining penalty. SAC-RND achieves consistent SOTA scores on locomotion datasets, but fails to deliver commensurate performance on Antmaze tasks. TD3-BST performs similarly to SAC-RND in locomotion and achieves SOTA scores in Antmaze. 5.2 Improving One-Step Methods One-step algorithms learn a policy from an offline dataset, thus remaining on-policy [Rummery and Niranjan, 1994; Sutton and Barto, 2018], and using weighted behavioral cloning [Brandfonbrener et al., 2021; Kostrikov et al., 2021b]. Empirical evaluation by [Fu et al., 2022] suggests that advantageweighted BC is too restrictive and relaxing the policy objective to Equation 1 can lead to performance improvement. We use the BST objective as a drop-in replacement for the policy improvement step in IQL [Kostrikov et al., 2021b] to learn an optimal policy while retaining in-sample policy evaluation. We reproduce IQL results and report scores for IQL-BST, both times using a deterministic policy [Tarasov et al., 2022] and identical hyperparameters to the original work in Table 3. Reproduced IQL closely matches the original results, with slight performance reductions on the -large datasets. Relaxing weighted-BC with a BST objective leads to improvements in performance, especially on the more difficult -medium and -large datasets. To isolate the effect of the BST objective, we do not perform any additional tuning. 5.3 Ablation Experiments Morse Network Analysis We analyze how well the Morse network can distinguish between dataset tuples and samples from Dperm, permutations of dataset actions, and Duni. We plot both certainty (M\u03d5) density and t-SNEs [Van der Maaten and Hinton, 2008] in Figure 3 which show that the unsupervised Morse network is effective in distinguishing between Dperm and Duni and assigning high certainty to dataset tuples. Ablating kernel scale We examine sensitivity to the kernel scale \u03bb. Recall that k = dim(A). We see in Figure 4 that the scale \u03bb = k 2 is a performance sweet-spot on the challenging Antmaze tasks. We further illustrate this by plotting policy deviations from dataset actions in Figure 5. The scale \u03bb = 1.0 is potentially too lax a behavioral constraint, while \u03bb = k is too strong, resulting in performance reduction. However, performance on all scales remains strong and compares well with most prior algorithms. Performance may be further improved by tuning \u03bb, possibly with separate scales for each input dimension. Figure 3: M\u03d5 densities and t-SNE for hopper-medium-expert (top row) and Antmaze-large-diverse (bottom row). Density plots are clipped at 10.0 as density for D is large. 10 actions are sampled from Duni and Dperm each, per state. t-SNE is plotted from the per-dimension perturbation | f\u03d5(s, a) \u2212a |. Figure 4: Ablations of \u03bb on Antmaze datasets. Recall k = dim(A). Independent or Shared Targets? Standard TD3 employs Clipped Double Q-learning (CDQ) [Hasselt, 2010; Fujimoto et al., 2018] to prevent value overestimation. On tasks with sparse rewards, this may be too conservative [Moskovitz et al., 2021]. MSG [Ghasemipour et al., 2022] uses large ensembles of fully independent Q functions to learn offline. We examine how independent double Q functions perform compared to the standard CDQ setup in Antmaze with 2 and 10 critics. The results in Figure 6 show that disabling CDQ with 2 critics is consistently detrimental to performance. Using a larger 10-critic ensemble leads to moderate improvements. This suggests that combining policy regularization with an efficient, independent ensemble could bring further performance benefits with minimal changes to the algorithm. 6 Discussion Morse Network In [Dherin et al., 2023], deeper architectures are required even when training on simple datasets. This rings true for our application of Morse networks in this work, with low-capacity networks performing poorly. Training the Morse network for each locomotion and Antmaze dataset typically takes 10 minutes for 100 000 gradient steps using a batch size of 1 024. When training the policy, using the Morse network increases training time by approximately 15%. Optimal Datasets On Gym Locomotion tasks TD3-BST performance is comparable to newer methods, all of which Dataset TD3-BC CQL IQL SQL SAC-RND1 DOGE ReBRAC CFPI TD3-BST (ours) halfcheetah-m 48.3 44.0 47.4 48.3 66.6 45.3 65.6 52.1 62.1 \u00b1 0.8 hopper-m 59.3 58.5 66.3 75.5 97.8 98.6 102.0 86.8 102.9 \u00b1 1.3 walker2d-m 83.7 72.5 78.3 84.2 91.6 86.8 82.5 88.3 90.7 \u00b1 2.5 halfcheetah-m-r 44.6 45.5 44.2 44.8 42.8 54.9 51.0 44.5 53.0 \u00b1 0.7 hopper-m-r 60.9 95.0 94.7 99.7 100.5 76.2 98.1 93.6 101.2 \u00b1 4.9 walker2d-m-r 81.8 77.2 73.9 81.2 88.7 87.3 77.3 78.2 90.4 \u00b1 8.3 halfcheetah-m-e 90.7 91.6 86.7 94.0 107.6 78.7 101.1 97.3 100.7 \u00b1 1.1 hopper-m-e 98.0 105.4 91.5 111.8 109.8 102.7 107.0 104.2 110.3 \u00b1 0.9 walker2d-m-e 110.1 108.8 109.6 110.0 105.0 110.4 111.6 111.9 109.4 \u00b1 0.2 Table 1: Normalized scores on D4RL Gym Locomotion datasets. VMG scores are excluded because this method performs poorly and the authors of MSG do not report numerical results on locomotion tasks. Prior methods are grouped by those that do not perform per-dataset tuning and those that do. 1 SAC-RND in addition to per-dataset tuning, is trained for 3 million gradient steps. Though not included here, ensemble methods may perform better than the best non-ensemble methods on some datasets, albeit still requiring per-dataset tuning to achieve their reported performance. Top scores are in bold and second-best are underlined. Dataset TD3-BC CQL IQL SQL SAC-RND1 DOGE VMG2 ReBRAC CFPI MSG3 TD3-BST (ours) -umaze 78.6 74.0 87.5 92.2 97.0 97.0 93.7 97.8 90.2 98.6 97.8 \u00b1 1.0 -umaze-d 71.4 84.0 62.2 74.0 66.0 63.5 94.0 88.3 58.6 81.8 91.7 \u00b1 3.2 -medium-p 10.6 61.2 71.2 80.2 74.7 80.6 82.7 84.0 75.2 89.6 90.2 \u00b1 1.8 -medium-d 3.0 53.7 70.0 79.1 74.7 77.6 84.3 76.3 72.2 88.6 92.0 \u00b1 3.8 -large-p 0.2 15.8 39.6 53.2 43.9 48.2 67.3 60.4 51.4 72.6 79.7 \u00b1 7.6 -large-d 0.0 14.9 47.5 52.3 45.7 36.4 74.3 54.4 52.4 71.4 76.1 \u00b1 4.7 Table 2: Normalized scores on D4RL Antmaze datasets. 1 SAC-RND is trained for three million gradient steps. 2 VMG reports scores from the best-performing checkpoint rather than from the final policy; despite this, TD3-BST still outperforms VMG in all datasets except -umaze-diverse. 3 for MSG we report the best score among the reported scores of all configurations, also, MSG is trained for two million steps. Prior methods are grouped by those that do not perform per-dataset tuning and those that do. Other ensemble-based methods are not included, as MSG achieves higher performance. Top scores are in bold and second-best are underlined. Dataset IQL (reproduced) IQL-BST -umaze 87.6 \u00b1 4.6 90.8 \u00b1 2.1 -umaze-d 64.0 \u00b1 5.2 63.1 \u00b1 3.7 -medium-p 70.7 \u00b1 4.3 80.3 \u00b1 1.3 -medium-d 73.8 \u00b1 5.9 84.7 \u00b1 2.0 -large-p 35.2 \u00b1 8.4 55.4 \u00b1 3.2 -large-d 40.7 \u00b1 9.2 51.6 \u00b1 2.6 Table 3: Normalized scores on D4RL Antmaze datasets for IQL and IQL-BST. We use hyperparameters identical to the original IQL paper and use Equation 13 as a drop-in replacement for the policy objective. rarely outperform older baselines. This can be attributed to a significant proportion of high-return-yielding trajectories that are easier to improve. 7 Conclusion In this paper, we introduce TD3-BST, an algorithm that uses an uncertainty model to dynamically adjust the strength of regularization. Dynamic weighting allows the policy to maximize reward around individual dataset modes. Our algorithm compares well against prior methods on Gym Locomotion tasks and achieves the best scores on the more challenging Antmaze tasks, demonstrating strong performance when learning from suboptimal data. In addition, our experiments show that combining our pol(a) hopper-medium (b) amaze-large-play Figure 5: Histograms of deviation from dataset actions. Figure 6: % change in Antmaze scores without CDQ for critic ensembles consisting of 2 and 10 Q functions. icy regularization with an ensemble-based source of uncertainty can improve performance. Future work can explore other methods of estimating uncertainty, alternative uncertainty measures, and how best to combine multiple sources of uncertainty."
17
+ }
intro_8K/test_introduction_short_2404.16407v1.json ADDED
@@ -0,0 +1,18 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16407v1",
3
+ "title": "U2++ MoE: Scaling 4.7x parameters with minimal impact on RTF",
4
+ "abstract": "Scale has opened new frontiers in natural language processing, but at a high\ncost. In response, by learning to only activate a subset of parameters in\ntraining and inference, Mixture-of-Experts (MoE) have been proposed as an\nenergy efficient path to even larger and more capable language models and this\nshift towards a new generation of foundation models is gaining momentum,\nparticularly within the field of Automatic Speech Recognition (ASR). Recent\nworks that incorporating MoE into ASR models have complex designs such as\nrouting frames via supplementary embedding network, improving multilingual\nability for the experts, and utilizing dedicated auxiliary losses for either\nexpert load balancing or specific language handling. We found that delicate\ndesigns are not necessary, while an embarrassingly simple substitution of MoE\nlayers for all Feed-Forward Network (FFN) layers is competent for the ASR task.\nTo be more specific, we benchmark our proposed model on a large scale\ninner-source dataset (160k hours), the results show that we can scale our\nbaseline Conformer (Dense-225M) to its MoE counterparts (MoE-1B) and achieve\nDense-1B level Word Error Rate (WER) while maintaining a Dense-225M level Real\nTime Factor (RTF). Furthermore, by applying Unified 2-pass framework with\nbidirectional attention decoders (U2++), we achieve the streaming and\nnon-streaming decoding modes in a single MoE based model, which we call U2++\nMoE. We hope that our study can facilitate the research on scaling speech\nfoundation models without sacrificing deployment efficiency.",
5
+ "authors": "Xingchen Song, Di Wu, Binbin Zhang, Dinghao Zhou, Zhendong Peng, Bo Dang, Fuping Pan, Chao Yang",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "cs.CL",
9
+ "cats": [
10
+ "cs.CL",
11
+ "eess.AS",
12
+ "I.2.7"
13
+ ],
14
+ "label": "Original Paper",
15
+ "paper_cat": "Mixture AND of AND Experts",
16
+ "gt": "Scaling up neural network models has recently received great attention, given the significant quality improvements on a va- riety of tasks including natural language processing [8, 9] and speech processing [10, 11]. While training massive models on large amounts of data can almost guarantee improved quality, there are two fac- \u2020 Corresponding Author. tors affecting their practicality and applicability: (1) training efficiency and (2) inference efficiency. Large dense models are often prohibitively compute-intensive to train, with some models requiring TFlops-days of compute [9, 12]. A recent line of work has proposed sparsely-gated Mixture-of-Experts (MoE) layers [1, 2] as an efficient alternative to dense mod- els in order to address both training and inference efficiency limitations. There have been several related Mixture-of-Expert ap- proaches for ASR modeling [3, 4, 5, 6]. In those models each frame of the input sequence activates a different subset of the experts, hence the computation cost per frame becomes only proportional to the size of the activated sub-network. To avoid collapse to just a few experts while ignoring all others, all of those works use load balancing mechanisms such as dedicated auxiliary losses [2, 13]. Nonetheless, the resulting complex optimization objectives often lead to a large amount of hyper parameter tuning, such as the weight of each auxiliary loss. Moreover, load balancing is designed to address the issue of expert sparsity in the NLP field when routing different tokens. However, this issue may not hold in the speech domain, as there is a high degree of similarity between neighboring speech frames [14]. Forcing speech frames to be evenly distributed among all experts does not align with intuition, as it conflicts with the natural continuity observed in the relationships between adjacent speech frames. Despite several notable successes of speech MoE, wide- spread adoption has been hindered by training complexity and the lack of streaming capabilities. We address these with the introduction of U2++ MoE. We simplify the integration of MoE and preclude the necessity for any auxiliary losses. Our proposed method mitigate the complexities, and we show large sparse models may be trained, for the first time, with unified streaming & non-streaming fashion.",
17
+ "main_content": "Several mixture-of-expert strategies have been developed for enhancing ASR modeling, but our work differs from them in the following ways. 1) In contrast to all prior studies [3, 4, 5, 6], our MoE model do not include any auxiliary losses for expert arXiv:2404.16407v1 [cs.CL] 25 Apr 2024 Fig. 1. The proposed U2++ MoE, a unified (streaming and non-streaming) two-pass (encoder for 1st pass decoding and decoder for 2nd pass rescoring) joint CTC/AED framework, enhanced with bidirectional decoders and Mixture-of-Experts. For efficient compression of speech frames, we employ 1/8 subsampling and structure our architecture with M encoder layers alongside 2N decoder layers, wherein equal divisions of N layers are allocated to both the right-to-left and left-to-right decoders. routing, thus significantly streamlining the training optimization process. 2) Compared to [4, 5, 6], our MoE study also works without using any shared embedding networks, thereby simplifying the model architecture and enhancing its generality for model scaling. 3) Compared to all previous works [3, 4, 5, 6] that exclusively explored the application of MoE layers within the encoder, our study extends this innovation by integrating MoE layers into the decoder\u2019s FFN as well. Notably, You et al. [6] have also attempted to modify all FFN modules in encoder into MoE layers, but it fails to achieve a better performance (detailed in [6], section 3.2, paragraph 1, last sentence). In contrast, we are the first to demonstrate the effectiveness of MoE layer substitution across both encoder and decoder components. 4) We are the pioneers in demonstrating the streaming capability of the MoE. While Hu et al. [3] have made attempts to integrate MoE layers into a causal encoder to enable streaming recognition, their approach resulted in a notable deterioration in the average WER (detailed in [3], section 5.1.1, paragraph 2, first sentence). In stark contrast, our approach, which marries the MoEbased Conformer with the U2++ framework, successfully facilitates both streaming and non-streaming decoding modes within a singular MoE-based model. 5) Our research primarily emphasizes scaling models without a notable increase on RTF, diverging from prior efforts that predominantly concentrate on enhancing the accuracy of multi-lingual or multi-accent recognition [3, 4, 5, 6]. These studies lack a comprehensive analysis of inference latency, such as Dense-1B model v.s. MoE-1B model or Dense-225M model v.s. MoE-1B model. In this paper, however, we demonstrate that a MoE-1B model can achieve the accuracy of a Dense-1B model while maintaining the inference efficiency of a Dense-225M model. In summary, our guiding principle has been to keeping MoE model as simple as possible and is thus more generic for scaling up models. Our model do not require any auxiliary losses or any additional embedding networks. By applying 1) an embarrassingly simple replacement of all FFN layers with MoE layers and 2) the U2++ framework to Conformer [15], we prove that MoE-1B model can achieve Dense-1B level accuracy with Dense-225M level inference cost, alongside the capability for streaming. 3. METHODOLOGY Our model uses Conformer (for encoders) and Transformer (for decoders) as the main building block. A Conformer encoder layer [15] consists of a multi-headed self-attention and a convolution-based layer sandwiched by two FFN. A Transformer decoder layer [16] consists of a multi-headed self-attention, a multi-headed src-attention and one FFN. As shown in Fig.1, to incorporate experts, we use an MoE layer [1, 2] to replace all FFN in the encoders and decoders. Similar to [1, 2], the MoE layer consists of a routing network and multiple experts, each of which is an FFN. We use the joint Connectionist Temporal Classification (CTC) loss [17] and Autoregressive Encoder Decoder (AED) loss [16] for training the proposed model. The combined loss has two hyper parameters (\u03bb and \u03b1) to balance the importance of different losses (more details can be found in [7], section 2.1): L = \u03bbLCT C+(1\u2212\u03bb)(\u03b1Lright2left AED +(1\u2212\u03b1)Lleft2right AED ) (1) Similar to U2 [18], we adopt the dynamic chunk masking strategy to unify the streaming and non-streaming modes. Firstly, the input is split into several chunks by a fixed chunk size C and every chunk attends on itself and all the previous chunks, so the whole latency for the CTC decoding in the first pass only depends on the chunk size. When the chunk size is limited, it works in a streaming way; otherwise it works in a non-streaming way. Secondly, the chunk size is varied dynamically from 1 to the max length of the current training utterance in the training, so the trained model learns to predict with arbitrary chunk size. 4. EXPERIMENTS 4.1. Datasets Our training corpus comprises mixed datasets gathered from a variety of application domains, amounting to a substantial 160k hours of large-scale, industrial-level training data. This corpus consists predominantly of Mandarin (90%) with the remainder in English (10%). To evaluate the capabilities of the proposed method, we use the most widely used benchmark for the Mandarin ASR task, namely SpeechIO TIOBE ASR Benchmark 1. SpeechIO test sets are carefully curated by SpeechIO authors, crawled from publicly available sources (Youtube, TV programs, Podcast etc), covering various well-known scenarios and topics (TV News, VLog, Documentary and so on), transcribed by 1https://github.com/SpeechColab/Leaderboard payed professional annotators thus is exceptionally suitable for testing a model\u2019s general speech recognition capabilities. Cumulatively, the 26 publicly available SpeechIO test sets amount to 60.2 hours, averaging 2.3 hours of data across each domain. 4.2. Training Details In all experiments, we utilize 80-dimensional log-mel filterbank features, computed using a 25ms window that is shifted every 10ms. Each frame undergoes global mean and variance normalization. For modeling Mandarin, we employ character-based representations, whereas for English, we utilize byte-pair encoding (BPE), culminating in a comprehensive vocabulary of 6000 units. All our experiments are conducted in WeNet toolkit [7] with DeepSpeed [19] enabled, all the models are trained using 8 * NVIDIA 3090 (24GB) GPUs. We have developed three distinct models, as detailed in Table 1, all of which adopt the parameters Head = 8, CNNkernel = 15, \u03bb = 0.3, and \u03b1 = 0.3. In the context of the MoE layer, we configure it with 8 experts and enable only the top two experts during both the training and inference phases. For the decoding process, the CTC decoder initially generates the N-Best hypotheses during the first pass. Subsequently, these hypotheses are rescored by the attention decoder in the second pass to produce the final outcomes. Table 1. Configuration of different models. (a) Model (b) M (c) N (d) dff (e) datt Dense-225M 12 3 2880 720 Dense-1B 32 6 4096 1024 MoE-1B 12 3 2880 720 4.3. Main Results on 160k hours In Table.2, we compare the performance of the three models from Table.1 under different conditions (such as the same number of training steps or the same training time), with the results indicating: 1) At the same number of training steps (263k steps), comparing columns (b), (d), and (e) reveals that the WER of the MoE-1B model (3.93) is slightly worse than that of the Dense-1B model (3.72), but both significantly outperform the Dense-225M baseline (4.50). 2) With the same training time (25.9 days), comparing columns (c), (d), and (f) shows that the WER of the MoE-1B model (3.80) is very close to that of the Dense-1B model (3.72), and both substantially surpass the Dense-225M model (4.18). These results suggest that on a dataset of 160k hours, a larger number of parameters (from 225M to 1B) leads to betTable 2. Following the scaling law [20], we compare model WERs on a fixed dataset (160k hours) across equal training steps (236k steps) or compute time (25.9 days). (a) TestSet (b) Dense-225M (c) Dense-225M (d) Dense-1B (e) MoE-1B (f) MoE-1B 236k steps, 9.3 days 657k steps, 25.9 days 236k steps, 25.9 days 236k steps, 16.8 days 364k steps, 25.9 days speechio 001 1.28 1.15 0.92 0.95 0.90 speechio 002 3.51 3.30 3.03 3.08 2.94 speechio 003 2.34 2.11 1.74 1.68 1.63 speechio 004 2.05 1.96 1.79 1.87 1.93 speechio 005 2.06 1.92 1.84 1.78 1.73 speechio 006 7.24 6.69 6.34 6.35 6.34 speechio 007 10.23 10.12 8.77 9.67 9.23 speechio 008 7.34 6.29 5.78 6.13 5.59 speechio 009 3.94 3.67 3.45 3.60 3.52 speechio 010 4.76 4.68 4.37 4.55 4.49 speechio 011 3.21 2.88 2.31 2.36 2.28 speechio 012 3.39 3.22 2.91 3.01 2.97 speechio 013 4.15 3.81 3.62 3.71 3.69 speechio 014 5.01 4.45 3.87 4.06 3.83 speechio 015 7.58 6.77 6.43 6.69 7.03 speechio 016 5.15 4.46 3.95 4.02 3.82 speechio 017 4.11 3.87 3.24 3.52 3.49 speechio 018 2.69 2.57 2.38 2.56 2.44 speechio 019 3.91 3.29 2.95 3.05 2.90 speechio 020 3.05 2.97 2.33 2.51 2.47 speechio 021 2.75 2.89 2.53 2.73 2.73 speechio 022 5.55 5.15 4.50 4.86 4.52 speechio 023 6.05 5.99 4.89 5.86 5.25 speechio 024 5.61 5.19 4.61 4.76 4.78 speechio 025 5.76 5.30 4.36 4.83 4.61 speechio 026 4.37 4.01 3.90 4.02 3.84 average 4.50 4.18 3.72 3.93 3.80 ter model performance. Moreover, when the number of parameters is the same, MoE models can achieve WER levels comparable to Dense models. Furthermore, in Table.3, we compare the inference speeds of the three models, with the results showing: 1) Although the MoE-1B and Dense-1B have the same number of parameters, the former is 2.5 times faster than the latter. 2) Even though the parameter count of MoE-1B is 4.7 times that of Dense-225M, the absolute difference in RTF between the two is only around 0.03 (for cpu) or 0.0004 (for gpu). Overall, combining the WER and RTF results, we can confirm that the MoE-1B model can achieve Dense-1B level accuracy with Dense-225M level inference cost. Table 3. RTF benchmark. When testing with a CPU, we set the batch size to 1 and perform inference on an int8 quantized model using a single thread on an Intel(R) Core(TM) i5-8400 CPU @ 2.80GHz. For GPU-based evaluations, we set the batch size to 200 and perform inference on an FP16 model using a single NVIDIA 3090. Please note that we do not include GPU RTF for decoder rescoring since the inference time for this process is dominated by the CTC prefix beam search running on the CPU, and therefore, it cannot objectively reflect the inference time on the GPU. (a) Model (b) ctc greedy decoding (c) decoder rescoring Dense-225M 0.1088 (cpu) / 0.0012 (gpu) 0.1524 (cpu) Dense-1B 0.3155 (cpu) / 0.0028 (gpu) 0.4515 (cpu) MoE-1B 0.1299 (cpu) / 0.0016 (gpu) 0.1826 (cpu) 4.4. Streaming Capability Empirically, training a large model to accommodate both streaming and non-streaming modes simultaneously could potentially compromise performance. In response, this paper introduces a two-stage training pipeline. Initially, we train a non-streaming base model (such as MoE-1B and Dense-225M that is described in Section 4.2 and Table 1), which then serves as the foundation for initializing the proposed U2++-MoE-1B model (and also U2++-Dense-225M, U2++-Dense-1B). The MoE-1B model shares an identical architecture with the U2++-MoE-1B model, with the only distinction lying in their approach to chunk masking. While the MoE-1B employs a full chunk strategy, the U2++-MoE1B adopts a dynamic chunk method as detailed in section 3. This approach stabilizes the training process for a unified system capable of handling both streaming and non-streaming functionalities. In Table 4, by comparing three different streaming models, we can draw the same conclusion as in the non-streaming models (section 4.3), which is that our proposed MoE model significantly outperforms the Dense counterpart in terms of WER while maintaining a similar RTF. Please note that the WER for the U2++-Dense-1B model is not included. This is due to the frequent occurrence of gradient explosions during the training process, which, despite the initialization with a non-streaming Dense-1B model, made the training unsustainable. Table 4. Averaged streaming results on SpeechIO test sets: WER Measured with a 640ms chunk size and RTF calculated using the same hardware (cpu) and methodology (decoder rescoring) as in Table 3. All models were initialized from their respective non-streaming baselines and subsequently trained for a total of 160k steps. (a) Model (b) WER (c) RTF U2++-Dense-225M 6.24 0.1937 U2++-Dense-1B N/A 0.6015 U2++-MoE-1B 4.83 0.2436 5. CONCLUSION The proposed U2++ MoE provides a clean setup and little task-specific design. Through the straightforward substitution of all FFN layers in the baseline model with MoE FFNs, coupled with the adoption of the U2++ training framework, we attain notable enhancements in WER alongside streaming recognition capabilities, all without a considerable increase in RTF. 6. ACKNOWLEDGEMENTS We thank Wenpeng Li and Jianwei Niu for their feedbacks on this work. 7."
18
+ }
intro_8K/test_introduction_short_2404.16423v1.json ADDED
@@ -0,0 +1,17 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16423v1",
3
+ "title": "Neural Assembler: Learning to Generate Fine-Grained Robotic Assembly Instructions from Multi-View Images",
4
+ "abstract": "Image-guided object assembly represents a burgeoning research topic in\ncomputer vision. This paper introduces a novel task: translating multi-view\nimages of a structural 3D model (for example, one constructed with building\nblocks drawn from a 3D-object library) into a detailed sequence of assembly\ninstructions executable by a robotic arm. Fed with multi-view images of the\ntarget 3D model for replication, the model designed for this task must address\nseveral sub-tasks, including recognizing individual components used in\nconstructing the 3D model, estimating the geometric pose of each component, and\ndeducing a feasible assembly order adhering to physical rules. Establishing\naccurate 2D-3D correspondence between multi-view images and 3D objects is\ntechnically challenging. To tackle this, we propose an end-to-end model known\nas the Neural Assembler. This model learns an object graph where each vertex\nrepresents recognized components from the images, and the edges specify the\ntopology of the 3D model, enabling the derivation of an assembly plan. We\nestablish benchmarks for this task and conduct comprehensive empirical\nevaluations of Neural Assembler and alternative solutions. Our experiments\nclearly demonstrate the superiority of Neural Assembler.",
5
+ "authors": "Hongyu Yan, Yadong Mu",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV",
11
+ "cs.RO"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "Graph AND Structure AND Learning",
15
+ "gt": "The assembly task necessitates predicting a sequence of operations for the placement of various components. Accurate and efficient assembly algorithms play a pivotal role in robotics. These assembly challenges are pervasive in daily life, as in scenarios like constructing LEGO models Chung et al. (2021), assembling furniture Su\u00e1rez-Ruiz et al. (2018), and Minecraft Chen et al. (2019). In previous research, Chen et al. Chen et al. (2019) suggested replicating human building order with spatial awareness to construct Minecraft houses without target information. Wang et al. Wang et al. (2022a) introduced a step-by-step approach for assembling LEGO models based on the assembly manual, while the work in Zhan et al. (2020) focused on predicting the 6-DoF pose of each component based on the object\u2019s class for assembly. Li et al. Li et al. (2020) predicted the poses of parts using a single image. In this study, we define a new task of image-guided assembly. We are provided with a set of multi-view images captured from a 3-D model, assuming it is built with components from a pre-specified library. The goal of the task is to generate a sequence of fine-grained assembly instructions, encompassing all parameters\u2014such as component types, geometric poses of each component, and assembly order\u2014in accordance with physical rules and suitable for execution by a robotic arm. The task serves as a valuable testbed for advancing vision-guided autonomous systems, presenting a range of technical challenges. Firstly, understanding the correspondence between 2D images and 3D objects is crucial. Given that certain components in the 3D model might be entirely obscured from specific viewpoints, we employ multi-view images (e.g., typically 4 in this study) as input. The algorithm must effectively arXiv:2404.16423v1 [cs.CV] 25 Apr 2024 Neural Assembler Object library with Shape / Texture Object Relation Graph 0 1 6 8 7 2 9 3 4 5 0 1 4 2 6 7 3 5 8 9 \u2026 Assembly Instructions Multi-view Images Object Detection and Pose Estimation Camera 1 Camera 2 Camera 3 Camera 4 Figure 1: Schematic illustration of the proposed Neural Assembler. See Section 3 for more details. integrate information from images captured from multiple perspectives. Secondly, estimating critical information for each component is non-trivial. With a 3D library in place, the algorithm needs to segment components across all images and categorize each based on predefined object types in the library, mainly using shape and texture cues. In addition, one also needs estimate the 3-D spatial position and rotation matrix of each component. Thirdly, obtaining typological information for all components is necessary to formulate a physically feasible assembly plan. Importantly, observations of a component in images are often incomplete, primarily due to frequent occlusions. This poses a substantial challenge in fully understanding and interpreting the scene. Such occlusions are particularly challenging when assembling complex, multi-layerd models. For this novel task, we propose an end-to-end neural network, dubbed as Neural Assembler. The computational pipeline of Neural Assembler is illustrated in Figure 1. Taking multi-view images and a 3-D component library as input, Neural Assembler not only identifies each component from images but also determines its 3D pose at each step of assembly. Leveraging images from multiple viewpoints, our method aims to enhance the overall scene understanding and accurately predict the order for placing parts in assembly tasks. We present two datasets for the proposed image-guided assembly task, namely the CLEVR-Assembly dataset and LEGO-Assembly dataset. Each sample in these datasets comprises images of the object captured from various perspectives, along with the pose of each part (2D keypoints, mask, 3D position, and rotation), and the relationship graph of all components. Comprehensive experiments are conducted on both datasets. Due to the absence of prior work addressing this novel setting like Neural Assembler, we establish two robust baselines for comparison. The evaluations unequivocally demonstrate that Neural Assembler outperforms the baselines across all performance metrics.",
16
+ "main_content": "Assembly algorithms Assembly tasks have utilized computational strategies for part selection and placement, including action scoring Yuille and Kersten (2006); Bever and Poeppel (2010), genetic algorithms Lee et al. (2015), and voxel-based optimization Kim et al. (2020); van den Hengel et al. (2015). Manual-driven approaches have been investigated Shao et al. (2016); Wang et al. (2022a); Chen et al. (2019); Walsman et al. (2022); Chung et al. (2021), with LSTM showing limitations in long sequence predictions Walsman et al. (2022), and the reinforcement learning methodChung et al. (2021) struggling with block complexity. Existing research into part-based 3D modeling Zhan et al. (2020); Mo et al. (2019); Li et al. (2020); Niu et al. (2018) and parts retrieval from 3D meshes Chaudhuri and Koltun (2010); Shen et al. (2012); Sung et al. (2017) assumes prior part knowledge. Our novel task of reconstructing 3D models from multi-view images without prior information presents a unique challenge not yet addressed by none of above works. Multi-view scene understanding Many scene understanding tasks are intractable in single-view and can only be solved given multi-view inputs. For example, The SLAM (simultaneous localization and mapping) task requires reconstructing the 3D geometric scene and estimating camera poses given a sequence of video frames. 3D-SIS Hou et al. (2019) performed instance segmentation by reading a set of RGB-D images. MVPointNet Jaritz et al. (2019) was based on a set of RGB-D images and point clouds for instance segmentation. Murez et al. (2020) performed 3D semantic segmentation 2 and reconstruction based on multiple RGB images. Similarly in our work, the algorithm understands the structure of 3D brick models and predicts the poses of parts based on multi-view images. Structural modeling for scenes or 3-D models Recent studies have focused on inferring structural representations like graphs Johnson et al. (2015); Cong et al. (2023); Li et al. (2022) and programmatic descriptions Ellis et al. (2018); Liu and Wu (2019); Wu et al. (2017) from images. Techniques range from encoder-decoder architectures for triplet generation Cong et al. (2023) to efficient graph-based methods Li et al. (2022), CNNs for primitive shape programs Ellis et al. (2018). An encoder-decoder approach for scene re-rendering Wu et al. (2017) and transformers for structural change detection Qiu et al. (2023) have also been proposed. Our work differs by using structural representations to predict assembly sequences of object parts. 3 Neural Assembler 3.1 Task Specification Given several multi-perspective images of a 3-D brick model {Ik}K k=1, the corresponding camera parameters with respect to some reference coordinate system, and a predefined 3-D brick (or termed as component, as used exchangeably in this paper) library Lib = {b1, b2, . . . , bM}, our algorithm identifies the bricks present in the scene, predicts each brick\u2019s pose and constructs a relational graph G = {V, E}. The vertex set V correspond to the bricks and the directed edge set E represent spatial configuration that can further be used to derive the assembly instructions. In particular, each brick bi in the library is denoted as (Si, Ti) where Si is assumed to be point clouds and Ti is represented as texture images. In the relationship graph, each node vi \u2208V describes the i-th brick information vi = (Si, Ti, Kpsi, Roti, Mi) where Kpsi \u2208([0, 1] \u00d7 [0, 1])K\u00d72 encodes the 2D keypoints (we use this notation to refer to the planar projection of the brick center) in K views, Roti \u2208[0, 2\u03c0]K are the rotation angles in K views and Mi \u2208{0, 1}K\u00d7H\u00d7W are the binary masks of the brick in K views. The edge ei,j \u2208E explicitly describes the assembly order, where ei,j = 1 only when a brick vj is placed into the 3-D model after brick vi. Here we develop a baseline for the proposed task, namely Neural Assembler, whose computational pipeline is shown in Figure 2, Our model leverages a graph convolutional network (GCN) to delineate the assembly sequence effectively. The 3-D poses of objects are inferred from 2D image data across multiple views, exploiting the geometric constraints provided by the camera parameters to ensure spatial consistency in the 3D domain. This framework showcases the capability of capturing complex relational patterns and translating them into a structured assembly protocol. 3.2 Multi-View Feature Fusion We adopt the pretrained CLIP image encoder to get the feature maps F k and CLIP feature vectors vk CLIP , k \u2208{1, 2, ...K}. Then the features of all views are fused with others, using an adapted implementation of group-based semantic agreement Transformer Xu and Mu (2023). The scene consensus (denoted as a vector g) of the images from different perspectives are extracted according to the rule: g = 1 K PK k=1 Norm(vk CLIP ), where Norm(\u00b7) is the L2 normalization. Afterwards, the scene consensus g is dispersed to multi-view image features through channel-wise multiplication: \u02c6 F k = F k \u00b7 g, k = 1, 2, . . . , K. Template as visual prompts. Let the texture image Ti \u2208RHTi\u00d7WTi\u00d73, where (HTi, WTi) is the texture image size, represent the texture template and point clouds Si \u2208RNP \u00d73 represent the shape template. In all experiments of this work, we fix the parameter NP to be all 1024, striking a balance between compute expense and capability of representing the bricks. For the texture image, we set both HTi and WTi to 224. A CNN backbone (e.g., ResNet-18 He et al. (2016)) generates template features Ti \u2190CNN(Ti), and a pointnet backbone (e.g., PointNet Qi et al. (2017)) generates shape features Si \u2190PointNet(Si). Transformer decoder. The decoder accepts as input the shape feature library S = {Si}, the texture feature library T = {Ti} and the fused image features \u02c6 F k. As shown in Fig. 2, the transformer decoder is designed to obtain the image conditioned object query. 3 Multi-view Images Image Encoder Texture Library ... Shape Library ... Point Cloud Encoder Texture Encoder Shape Features Texture Features + Object Queries Visual Prompts Consensus Dispersion Fused Image Features Linear Interpolation Object-conditioned Image Feature + Transformer Decoder Global Object Features Scene Consensus Materials Keypoints Rotations Masks ... ... Per-view Object Features Assembly Graph Number of Bricks Shapes Image Features Figure 2: The proposed Neural Assembler architecture. An image encoder outputs the visual embeddings of multi-view images. The shape and texture library are provided as visual prompts for object detection. Then the transformer decoder module is applied to get the library-based object features. Finally, the object-conditioned image features are decoded to the bricks\u2019 masks, keypoints, and rotation angles, while the global object features are decoded to the bricks\u2019 textures, shapes, the number of blocks, and the assembly graph. The elements from the shape feature library S or the texture feature library T and the N learned object queries O are concatenated to define a query O\u2032 = [S, T, O]. Given the fused image features \u02c6 F k and the query O\u2032, the library-based object queries f k i (i = 1, 2, . . . , N, k = 1, 2, . . . , K) are obtained through a couple of decoding layers that employ the structure of FS-DETR Bulat et al. (2023). Then we use an averaging operation to obtain a unified, multi-view feature representation for each object: f global i = PK k=1 f k i , i = 1, 2, . . . , N. This approach effectively integrates diverse perspectives to enhance the overall understanding for the object\u2019s attribute. As shown in Fig. 2, for the Consensus Dispersion module and Linear Interpolation module, to obtain the object-conditioned image features, we adapt the combination of CLIPSeg L\u00fcddecke and Ecker (2022) and group-based semantic agreement Transformer Xu and Mu (2023). The node features f k i serve as conditional prompts. Feature fusion between node features and image features is achieved through linear interpolation. Formally, in each perspective, the object-conditioned image features F k i = LI( \u02c6 F k, f k i ) where i = 1...N, k = 1, 2, ...K, LI is linear interpolation. Next, the scene consensus g is dispersed to multi-view object-conditioned image features through channel-wise multiplication \u02c6 F k i = F k i \u00b7 g. 3.3 Brick Number Prediction For each view k = 1, 2, . . . , K, we average the N object-conditioned image features \u02c6 F k avg = 1 N PN i=1 \u02c6 F k i . Then the scene feature can be obtained by \u02c6 Fscene = 1 K PK k=1 \u02c6 F k avg. Finally a couple of convolution layers are used to predict the number of bricks in the scene. 3.4 Relation Graph Generation Predicting the assembly sequence is equivalent to predicting the connections of bricks which can be described using the relationship graph. If brick A is placed on the top of B, then there is a directed edge from B to A. A graph convolutional network Kipf and Welling (2016) is adopted to predict the existence of each edge in the graph. Given the complete graph G = (V, E) with initial node features p0 i = f global i . Similar to Johnson et al. (2018); Wald et al. (2020), we implemented the GCN using the MLP (multi-layer perception) structure. Each iteration of message passing starts from computing edge features: et+1 i,j \u2190MLP([pt i, pt j]). An average-pooling is performed over all edge 4 features connected to a node, obtaining an updated node feature pt+1 i = 1 |{u|(u, i) \u2208E}| X (u,i)\u2208E et+1 u,i . (1) After gathering the edge features via graph convolutions as {et i,j}T t=1, we use another MLP to predict the probability of the existence of each edge, P(i,j) = MLP(eT i,j) Finally, to determine the assembly sequence during inference, the directed edges are sorted in descending order according to the predicted probability and subsequently incorporated into the directed graph. If a loop is formed after adding an edge, this edge will not be added. The process continues until it reach a state where there exists a vertex that can reach all other vertices in the graph. 3.5 Pose Estimation The N object query features {f global i }N i=1 are used for shape and texture classification. The shape prediction head predicts the shape label and the texture prediction head predicts the texture label. Mask and heatmap prediction. We employ a simple deconvolution layer on the object-conditioned image feature \u02c6 F k i to obtain the heatmap of keypoint and mask of the object i in each perspective k. Rotation prediction The rotation angle is represented as a 2D vector representing the sine and cosine values of it. The rotation predictor accepts conditional image features \u02c6 F k i as input and outputs the sine and cosine value. Confidence score prediction Since bricks may not be visible at all perspectives, here we predict the the confidence score ck i of each brick at each perspective. Specifically, ck i represents the Iou(Intersection Over Union) between the predicted mask and ground truth mask. Figure 3: Illustration of 3D position prediction module. During inference, the pose of each object in 3D space is obtained by merging the poses from each perspective (see Figure 3). In more details, for 3D position prediction, our method involves detecting keypoints of the object parts from the view whose confidence score ci is higher than a threshold \u03b8. Then, utilizing the camera parameters, the rays in 3D space generated by keypoints are used to infer the object\u2019s position in 3D space. Each ray Ri is represented as ri(t) = Oi + t \u00b7 Di where Oi is the origin and Di is the direction. Our objective is to find a point P that minimizes the function: h(Z) = L X i d(Z, Ri), (2) where L is the number of rays and d(Z, Ri) represents the shortest distance from the point P to the ray Ri. Here, the minimization of the objective function h(Z) is approached through the gradient descent method. 3.6 Training and Loss functions We train Neural Assembler with full supervision on the generated dataset where each sample we have the groundtruth shape, texture, keypoint, mask, rotation information of each brick, the number of bricks and the relationship graph of bricks. The entire neural network is trained end-to-end with gradient descent. Our objective function is computed by L = \u03b1 \u00b7 Lcount + \u03b2 \u00b7 Lgraph + Lpose, where Lcount is the L1 Loss between the predicted number of bricks and ground truth countgt. Following Carion et al. (2020), bipartite matching is used to find an optimal permutation {\u03c3i}N i=1 to match the N object queries and ground truth bricks in the scene. The pose loss of bricks includes the loss of shape, texture, keypoint, mask and rotation. Lpose = Lkeypoint + Lmask + \u03b31Lrotation (3) + \u03b32Lshape + \u03b33Ltexture + \u03b34Lconfidence, (4) 5 where Lkeypoint is the focal loss Lin et al. (2017) computed based on the predicted heatmap and ground truth heatmap generated by Kps\u03c3i, Lmask is the focal and dice loss between the predicted mask and ground truth mask M\u03c3i, Lrotation is the L1 Loss between the prediced sine and cosine and the ground truth sine and cosine of Rot\u03c3i, Lshape and Ltexture are the cross entropy loss for shape and texture classification and Lconfidence is L1 Loss between the predicted confidence score and Iou of the predicted mask and ground truth mask. Our model strategically prioritizes the hyperparameters Lkeypoint and Lmask due to their critical impact on object detection, essential for accurate object interaction and identification in complex scenes. In contrast, Lrotation, Lshape, Ltexture and Lconfidence are assigned a reduced weight of 0.1 each, a decision grounded in empirical findings that highlight their relatively minor incremental benefits to overall model efficacy. Lgraph is the loss for relationship graph prediction. Firstly, we define the loss for any subset of the entire edge set. For a subset \u02c6 E of the edge set E of the complete graph, the edge loss of \u02c6 E is defined as L \u02c6 E = P (x,y)\u2208\u02c6 E LCE(P(x,y), \u02c6 E(x,y)). Then the Lgraph is defined as Lgraph = LE + LtopKE LtopKE = 1 KE KE X k=1 LEtopk , (5) where Etopk is the set of the edges with the top k highest predicted probability. Since the entire relationship graph is a directed graph with sparse edges, the hyperparameter KE is defined as countgt + 1. 4 Experiments Dataset preparation. Experiments are conducted on two self-constructed datasets. The CLEVRAssembly Dataset, created via the CLEVR-EngineJohnson et al. (2017), comprises a shape library with 6 brick shapes and 16 textures, a 76.5% visibility probability of each brick per perspective, 7.51 bricks per sample, and an average assembly graph depth of 4.01, with approximately 10K training, 500 validation, and 2000 test samples. The LEGO dataset, synthesized using Pytorch3d, features 12 LEGO brick shapes and 8 textures, an 82.6% visibility probability of each brick per perspective, 7.39 bricks per sample, and an average graph depth of 4.49, also with approximately 10K training, 500 validation, and 2000 test samples. The two datasets, characterized by brick number, occlusion from variable visibility, and complex assembly graph, reflect the complexity of assembly tasks. Baseline models. In addressing this novel task, for which no direct baseline exists, we have established a comparative framework against three distinct baseline methods to demonstrate the efficacy of our approach. First, to assess the validity of our assembly order prediction methodology, we introduce a Long Short-Term Memory (LSTM) Graves and Graves (2012) module as a surrogate baseline to contrast with our Graph Convolutional Network (GCN) based module. This comparison aims to highlight the enhanced predictive capabilities our GCN model brings to complex assembly sequences. Furthermore, for the object pose estimation component, our methodology is rigorously benchmarked against DETR3D Wang et al. (2022b), a prominent baseline in the realm of object detection within autonomous driving scenarios. This comparison is pivotal in underscoring the adaptability and accuracy of our model in 3D pose estimation, a crucial aspect in varied application domains. Lastly, in evaluating our multi-view image feature fusion process, we contrast our approach with a method that does not leverage scene consensus. This comparison is instrumental in showcasing the enhanced scene understanding and feature integration our method offers, thus demonstrating its superiority in synthesizing and utilizing multi-view image data. Implementation details. Our approach is implemented in single-scale version for fair comparison with other works. It incorporates a CLIPRadford et al. (2021) pre-trained ViT-B/16 image encoder, a PointNet-based Qi et al. (2017) point cloud encoder, and a ResNet-18He et al. (2016) for texture encoding. We employ a two-layer residual network for brick number prediction. The shape, material, iou prediction heads are implemented using 3-layer MLP and ReLU activations. Rotation prediction also uses a two-layer residual network, and our GCN architecture employs two message-passing layers. Training is conducted on an RTX 3090 GPU using AdamW, with an initial rate of 5e-4, decaying by 0.8 per epoch, a weight decay of 1e-3, and batch size 8 over 10 epochs for both datasets. 6 Evaluation metrics. In this assembly task, we introduce several metrics to evaluate the performance of our algorithm both at a per-scene and per-step level, providing a holistic measure of our method\u2019s efficacy. Specifically, for the per-scene metrics, our approach necessitates the prediction of the entire assembly sequence based on multi-view images, emphasizing the ability to comprehend and reconstruct the complete scene from various perspectives. In contrast, the per-step metrics operate under the assumption that the assembly order is known a priority. Here, we focus on calculating the error between the predicted information for each individual brick and the corresponding ground truth, independent of the assembly order. This allows for a comprehensive evaluation of the method\u2019s ability in both holistic scene understanding and step-wise brick analysis. Table 1: Comparison of per-scene metrics. Method Complete Rate Per-scene Acc Count Acc Order CR LEGO-Assembly LSTM Graves and Graves (2012) 27.5 5.3 60.3 35.1 DETR3D Wang et al. (2022b) 25.8 2.5 61.5 63.5 Ours (w/o consensus) 43.7 18.4 69.0 64.5 Ours 43.9 22.9 76.3 69.4 CLEVR-Assembly LSTM Graves and Graves (2012) 19.7 8.0 91.5 22.6 DETR3D Wang et al. (2022b) 16.8 4.5 89.5 35.3 Ours (w/o consensus) 28.6 6.6 92.1 45.5 Ours (2 views) 22.0 4.6 88.7 38.6 Ours (3 views) 25.7 9.3 94.0 44.5 Ours 41.5 22.5 95.5 62.1 For per-scene metrics, we evaluate the Complete Rate (completion percentage of the brick model), Order CR (completion rate of the sequence of brick types), Per-scene Acc (accuracy of completely assembling an entire brick model), Count Acc(precision of predicting the number of bricks). For per-step metrics, we evaluate the Pos Acc and Rot Acc (3D position accuracy and rotation accuracy), Shape Acc and Texture Acc (shape accuracy and texture accuracy), Kps Mse (error of the predicted 2D keypoints of the object), mIou (mean Intersection over Union between the predicted mask and the ground truth), the F1-score between predicted relation graph and ground truth relation graph and Per-step Acc(accuracy of correct predictions for each brick\u2019s information). Results on CLEVR-Assembly. Per-scene quantitative results on the CLEVR-Assembly Dataset are summarized in Table 1. Neural Assembler outperforms baseline models in all metrics considered. From Table 2, we can see Neural Assembler locates objects more accurately than DETR3D. 1 2 3 4 5 6 7 8 9 Number of Steps 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 Probability (a) CLEVR-Assembly LSTM GCN 1 2 3 4 5 6 7 8 9 Number of Steps 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 Probability (b) LEGO-Assembly LSTM GCN Figure 4: The probability distribution of CCA. The metric CCA proposed by Chen et al. (2019) is adopted here for the brick order evaluation. It denotes the probability distribution of the number of brick that a model can consecutively place from scratch. As shown in Fig.4, LSTM perform worse than GCN. This is because time dependence is not crucial for the assembly order prediction. Instead, the assembly problem requires prediction from complex spatial relationships. The adeptness of GCN in capturing spatial relation plays a critical role in understanding the assembly order. Consensus Module As shown in Table 2, extracting the consensus can better align the information of the images from various perspectives. Without scene consensus, it is difficult for the model to integrate information from multi-view images to obtain the overall information of each brick. Number of views Furthermore, we compared the results obtained by accepting different numbers of images as input. As shown in Tables 1 and 2, the result shows that more perspectives as input can improve the performance. This is because each brick may not be seen from some perspectives due to the existence of occlusion. More views mean more information for prediction. Fig. 5 further shows the generated assembly instructions for a brick model in the CLEVR-Assembly Dataset. Perspectives with confidence score greater than 0.66 are selected to infer the brick\u2019s 7 Multi-view Images Brick Type 3D Pos Rotation (-0.72, -0.76, 0.49) 70.37 Last Step This Step Object Detection Figure 5: Result from CLEVR-Assembly Dataset. Multi-view Images Brick Type 3D Pos Rotation (0, 0, 4) 0 Last Step This Step Object Detection Figure 6: Result from LEGO-Assembly Dataset. Table 2: Comparison of baselines on per-step metrics. Method Per-Step Acc Pos Acc Rot Acc Shape Acc Texture Acc mIoU Kps Mse F1 LEGO-Assembly DETR3D Wang et al. (2022b) 41.7 47.2 78.2 87.8 98.3 0.797 Ours (w/o consensus) 71.7 79.0 87.2 89.9 98.3 76.3 1.12 0.829 Ours 73.5 80.4 88.0 91.5 98.3 78.5 0.88 0.820 CLEVR-Assembly DETR3D Wang et al. (2022b) 29.2 32.4 75.6 72.4 67.0 0.734 Ours (w/o consensus) 57.2 67.9 78.9 87.1 88.5 61.2 1.2 0.781 Ours (2 views) 56.1 71.7 73.0 80.0 86.4 65.4 2.5 0.721 Ours (3 views) 61.1 74.6 77.9 85.7 90.7 67.1 1.5 0.772 Ours 69.2 79.1 84.1 91.5 93.8 71.1 0.78 0.840 information. It is evident that Neural Assembler is adept at excluding perspectives where bricks are obscured by predicting confidence scores, thereby identifying optimal perspectives for predicting positional information. Concurrently, it is capable of predicting the structure between bricks to determine the appropriate assembly sequence. Results on LEGO-Assembly. Different from CLEVR-Dataset, LEGO bricks are connected through slots. It is easier to infer the position of the bricks based on the connection constraints between the LEGO bricks, as long as the rough position of the LEGO bricks is predicted. However, there are many challenges in predicting the assembly sequence of LEGO brick models. For instance, the more compact assembly of LEGO bricks results in increased occlusion. The LEGO brick will only have rotations chosen from (0\u25e6, 90\u25e6, 180\u25e6, 270\u25e6). Meanwhile, the position of LEGO bricks is discretized. We adopt the connection-constrained inference subroutine and an inference-by-synthesis subroutine used in Wang et al. (2022a) to predict the position and rotation angle for each view, and then integrated them through voting. The results in Table 1 and Table 2 shows that Neural Assembler can yield more accurate results than other baselines. Fig.6 further shows the generated assembly instructions for a LEGO model. Method Per-scene Acc Complete Rate Novel-Dataset LSTM Graves and Graves (2012) 16.0 27.3 DETR3D Wang et al. (2022b) 7.3 21.8 Ours 34.2 58.5 Real-World Dataset LSTM Graves and Graves (2012) 7.3 21.8 DETR3D Wang et al. (2022b) 2.4 12.8 Ours 22.0 50.5 Table 3: The performance of the fine-tuned model on the novel simulated dataset and real-world dataset. Real-world experiments. To confirm the model\u2019s generalizability, a comprehensive test dataset is constructed, complete with annotations for each brick\u2019s shape, position, and rotation. For each sample, we acquired real brick images using a Realsense camera in the real world and generated corresponding simulated images 8 (1.31, 0.04, 1) -4.95 3D Pos Rotation (-0.28, -1.68, 1) 7.13 (1.42, 1.56, 0.5) 2.51 (-0.34, -0.06, 0.5) -0.11 (1.22, -1.66, 1) 1.42 (-1.66, -1.30, 0.5) -5.43 (0.95, -1.53, 2.5) 66.73 Simulated Assembly Brick Type Real-world Images Figure 7: The result from the real-world brick model. The left box displays 4 images captured using a Realsense camera, while the right delineates the detected type, position, rotation angle of each brick, and the sequential assembly order of the brick model. in the simulation environment employing real-world camera parameters to ensure the consistent coordinate between the simulated and real environments. The dataset encompasses 5 brick types and 7 textures, averaging 6.1 bricks per brick model. To evaluate the Neural Assembler, we collected point clouds and textures from real bricks. This data facilitated the creation of a synthetic dataset, used for fine-tuning the model initially trained on the CLEVR-Assembly dataset. As indicated in Table 3, the Neural Assembler achieves performance in real-world experiments close to the results obtained in simulated environments, demonstrating its robust applicability. Fig. 7 presents the result on the real world dataset. Discussion. As shown in Fig. 8, the occlusion still greatly affects the performance of the model, especially the objects under the brick model will be greatly blocked by the bricks pressing on it. To alleviate this problem, in future work, we plan to enhance model performance with a deeper integration of physical scene understanding. The model is expected to not only interpret the visual aspects but also the underlying physical principles governing the scene. Prediction Ground Truth View 1 View 2 View 3 View 4 Figure 8: Failure case. The model confidently but incorrectly predicts the highlighted block in View 1, while in View 3, despite correct keypoint identification, occlusion results in a less confident. This causes erroneous overall prediction. 5 Conclusion We study the problem of generating robotic assembly instructions from multi-view images and propose Neural Assembler, a model that predicts the assembly instructions of the brick model. The key idea behind our model is to learn the graph structure to predict the relationships among bricks and infer the 3D pose according to multi-view images. Results show that our model outperforms existing methods on the newly collected CLEVR-Assembly and LEGO-Assembly Dataset. 9"
17
+ }
intro_8K/test_introduction_short_2404.16456v1.json ADDED
@@ -0,0 +1,16 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16456v1",
3
+ "title": "Correlation-Decoupled Knowledge Distillation for Multimodal Sentiment Analysis with Incomplete Modalities",
4
+ "abstract": "Multimodal sentiment analysis (MSA) aims to understand human sentiment\nthrough multimodal data. Most MSA efforts are based on the assumption of\nmodality completeness. However, in real-world applications, some practical\nfactors cause uncertain modality missingness, which drastically degrades the\nmodel's performance. To this end, we propose a Correlation-decoupled Knowledge\nDistillation (CorrKD) framework for the MSA task under uncertain missing\nmodalities. Specifically, we present a sample-level contrastive distillation\nmechanism that transfers comprehensive knowledge containing cross-sample\ncorrelations to reconstruct missing semantics. Moreover, a category-guided\nprototype distillation mechanism is introduced to capture cross-category\ncorrelations using category prototypes to align feature distributions and\ngenerate favorable joint representations. Eventually, we design a\nresponse-disentangled consistency distillation strategy to optimize the\nsentiment decision boundaries of the student network through response\ndisentanglement and mutual information maximization. Comprehensive experiments\non three datasets indicate that our framework can achieve favorable\nimprovements compared with several baselines.",
5
+ "authors": "Mingcheng Li, Dingkang Yang, Xiao Zhao, Shuaibing Wang, Yan Wang, Kun Yang, Mingyang Sun, Dongliang Kou, Ziyun Qian, Lihua Zhang",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Distillation",
14
+ "gt": "\u201cCorrelations serve as the beacon through the fog of the missingness.\u201d \u2013Lee & Dicken Multimodal sentiment analysis (MSA) has attracted wide attention in recent years. Different from the tradi- tional unimodal-based emotion recognition task [7], MSA \u00a7Corresponding author. Equal contribution. Modality Content Label Prediction Language Visual Audio Neutral Positive It was a great movie and I loved it. \u2026 Language Visual Audio It was a great movie and I loved it. \u2026 Positive Positive Figure 1. Traditional model outputs correct prediction when in- putting the sample with complete modalities, but incorrectly pre- dicts the sample with missing modalities. We define two missing modality cases: (i) intra-modality missingness (i.e., the pink areas) and (ii) inter-modality missingness (i.e., the yellow area). understands and recognizes human emotions through mul- tiple modalities, including language, audio, and visual [28]. Previous studies have shown that combining complemen- tary information among different modalities facilitates the generation of more valuable joint multimodal representa- tions [34, 36]. Under the deep learning paradigm [3, 17, 42, 43, 54, 59, 60], numerous studies assuming the avail- ability of all modalities during both training and inference stages [10, 19, 22, 49\u201353, 55\u201358, 62]. Nevertheless, this assumption often fails to align with real-world scenarios, where factors such as background noise, sensor constraints, and privacy concerns may lead to uncertain modality miss- ingness issues. Modality missingness can significantly im- pair the effectiveness of well-trained models based on com- plete modalities. For instance, as shown in Figure 1, the entire visual modality is missing, and some frame-level fea- 1 arXiv:2404.16456v1 [cs.CV] 25 Apr 2024 tures in the language and audio modalities are missing, lead- ing to an incorrect sentiment prediction. In recent years, many works [20, 21, 23, 24, 32, 45, 46, 66] attempt to address the problem of missing modal- ities in MSA. As a typical example, MCTN [32] guaran- tees the model\u2019s robustness to the missing modality case by learning a joint representation through cyclic transla- tion from the source modality to the target modality. How- ever, these methods suffer from the following limitations: (i) inadequate interactions based on individual samples lack the mining of holistically structured semantics. (ii) Fail- ure to model cross-category correlations leads to loss of sentiment-relevant information and confusing distributions among categories. (iii) Coarse supervision ignores the se- mantic and distributional alignment. To address the above issues, we present a Correlation- decoupled Knowledge Distillation (CorrKD) framework for the MSA task under uncertain missing modalities. There are three core contributions in CorrKD based on the tai- lored components. Specifically, (i) the proposed sample- level contrastive distillation mechanism captures the holis- tic cross-sample correlations and transfers valuable super- vision signals via sample-level contrastive learning. (ii) Meanwhile, we design a category-guided prototype distilla- tion mechanism that leverages category prototypes to trans- fer intra- and inter-category feature variations, thus deliver- ing sentiment-relevant information and learning robust joint multimodal representations. (iii) Furthermore, we intro- duce a response-disentangled consistency distillation strat- egy to optimize sentiment decision boundaries and encour- age distribution alignment by decoupling heterogeneous re- sponses and maximizing mutual information between ho- mogeneous sub-responses. Based on these components, CorrKD significantly improves MSA performance under uncertain missing-modality and complete-modality testing conditions on three multimodal benchmarks.",
15
+ "main_content": "2.1. Multimodal Sentiment Analysis MSA aims to understand and analyze human sentiment utilizing multiple modalities. Mainstream MSA studies [9, 10, 22, 37, 50, 53, 55\u201358] focus on designing complex fusion paradigms and interaction mechanisms to enhance the performance of sentiment recognition. For instance, CubeMLP [37] utilizes three independent multi-layer perceptron units for feature-mixing on three axes. However, these approaches based on complete modalities cannot be deployed in real-world applications. Mainstream solutions for the missing modality problem can be summarized in two categories: (i) generative methods [6, 23, 25, 45] and (ii) joint learning methods [24, 32, 46, 66]. Reconstruction methods generate missing features and semantics in modalities based on available modalities. For example, TFR-Net [63] leverages the feature reconstruction module to guide the extractor to reconstruct missing semantics. MVAE [6] solves the modality missing problem by the semi-supervised multi-view deep generative framework. Joint learning efforts refer to learning joint multimodal representations utilizing correlations among modalities. For instance, MMIN [69] generates robust joint multimodal representations via cross-modality imagination. TATE [66] presents a tag encoding module to guide the network to focus on missing modalities. However, the aforementioned approaches fail to account for the correlations among samples and categories, leading to inadequate compensation for the missing semantics in modalities. In contrast, we design effective learning paradigms to adequately capture potential inter-sample and inter-category correlations. 2.2. Knowledge Distillation Knowledge distillation utilizes additional supervisory information from the pre-trained teacher\u2019s network to assist in the training of the student\u2019s network [11]. Knowledge distillation methods can be roughly categorized into two types, distillation from intermediate features [15, 29, 38, 61] and responses [4, 8, 27, 48, 68]. Many studies [13, 18, 33, 40, 47] employ knowledge distillation for MSA tasks and responses [4, 8, 27, 48, 68]. Many studies [13, 18, 33, 40, 47] employ knowledge distillation for MSA tasks with missing modalities. The core concept of these efforts is to transfer \u201cdark knowledge\u201d from teacher networks trained by complete modalities to student networks trained by missing modalities. The teacher model typically produces more valuable feature presentations than the student model. For instance, [13] utilizes the complete-modality teacher network to implement supervision on the unimodal student network at both feature and response levels. Despite promising outcomes, they are subject to several significant limitations: (i) Knowledge transfer is limited to individual samples, overlooking the exploitation of clear correlations among samples and among categories. (ii) Supervision on student networks is coarse-grained and inadequate, without considering the potential alignment of feature distributions. To this end, we propose a correlation-decoupled knowledge distillation framework that facilitates the learning of robust joint representations by refining and transferring the crosssample, cross-category, and cross-target correlations. 3. Methodology 3.1. Problem Formulation Given a multimodal video segment with three modalities as S = [XL, XA, XV ], where XL \u2208RTL\u00d7dL, XA \u2208 RTA\u00d7dA, and XV \u2208RTV \u00d7dV denote language, audio, and visual modalities, respectively. Tm(\u00b7) is the sequence length and dm(\u00b7) is the embedding dimension, where m \u2208 {L, A, V }. Meanwhile, the incomplete modality is denoted 2 \u2026\u2026 \u2026\u2026 Teacher representations Student representations Cos Cos SCD CPD RCD It was a great movie and I loved it. \u2026 Transformer Encoder GAP It was a great movie and I loved it. \u2026 MRM Training Data Flow Inference Data flow Transformer Encoder Transformer Encoder Concatenation \u2217 1D Convolution \u2217 \u2217 \u2217 Transformer Encoder GAP Transformer Encoder Transformer Encoder \u2217 \u2217 \u2217 Batch Data Classifier Classifier Teacher Network Student Network Figure 2. The structure of our CorrKD, which consists of three core components: Sample-level Contrastive Distillation (SCD) mechanism, Category-guided Prototype Distillation (CPD) mechanism, and Response-disentangled Consistency Distillation (RCD) strategy. as \u02c6 Xm. We define two missing modality cases to simulate the most natural and holistic challenges in real-world scenarios: (i) intra-modality missingness, which indicates some frame-level features in the modality sequences are missing. (ii) inter-modality missingness, which denotes some modalities are entirely missing. Our goal is to recognize the utterance-level sentiments by utilizing the multimodal data with missing modalities. 3.2. Overall Framework Figure 2 illustrates the main workflow of CorrKD. The teacher network and the student network adopt a consistent structure but have different parameters. During the training phase, our CorrKD procedure is as follows: (i) we train the teacher network with complete-modality samples and then freeze its parameters. (ii) Given a video segment sample S, we generate a missing-modality sample \u02c6 S with the Modality Random Missing (MRM) strategy. MRM simultaneously performs intra-modality missing and inter-modality missing, and the raw features of the missing portions are replaced with zero vectors. S and \u02c6 S are fed into the initialized student network and the trained teacher network, respectively. (iii) We input the samples S and \u02c6 S into the modality representation fusion module to obtain the joint multimodal representations Ht and Hs. (iv) The sample-level contrastive distillation mechanism and the category-guided prototype distillation mechanism are utilized to learn the feature consistency of Ht and Hs. (v) These representations are fed into the task-specific fully-connected layers and the softmax function to obtain the network responses Rt and Rs. (vi) The response-disentangled consistency distillation strategy is applied to maintain consistency in the response distribution, and then Rs is used to perform classification. In the inference phase, testing samples are only fed into the student network for downstream tasks. Subsequent sections provide details of the proposed components. 3.3. Modality Representation Fusion We introduce the extraction and fusion processes of modality representations using the student network as an example. The incomplete modality \u02c6 Xs m \u2208RTm\u00d7dm with m \u2208{L, A, V } is fed into the student network. Firstly, \u02c6 Xs m passes through a 1D temporal convolutional layer with kernel size 3 \u00d7 3 and adds the positional embedding [39] to obtain the preliminary representations, denoted as \u02c6 F s m = W3\u00d73( \u02c6 Xs m) + PE(Tm, d) \u2208RTm\u00d7d. Each F s m is fed into a Transformer [39] encoder Fs \u03d5(\u00b7), capturing the modality dynamics of each sequence through the self-attention mechanism to yield representations Es m, denoted as Es m = Fs \u03d5(F s m). The representations Es m are concatenated to obtain Zs, expressed as Zs = [Es L, Es A, Es V ] \u2208RTm\u00d73d. Subsequently, Zs is fed into the Global Average Pooling (GAP) to further enhance and refine the features, yielding the joint multimodal representation Hs \u2208R3d. Similarly, the joint multimodal representation generated by the teacher network is represented as Ht \u2208R3d. 3.4. Sample-level Contrastive Distillation Most previous studies of MSA tasks with missing modalities [33, 40, 47] are sub-optimal, exploiting only onesided information within a single sample and neglecting to consider comprehensive knowledge across samples. To 3 this end, we propose a Sample-level Contrastive Distillation (SCD) mechanism that enriches holistic knowledge encoding by implementing contrastive learning between sample-level representations of student and teacher networks. This paradigm prompts models to sufficiently capture intra-sample dynamics and inter-sample correlations to generate and transfer valuable supervision signals, thus precisely recovering the missing semantics. The rationale of SCD is to take contrastive learning within all mini-batches, constraining the representations in two networks originating from the same sample to be similar, and the representations originating from different samples to be distinct. Specifically, given a mini-batch with N samples B = {S0, S1, \u00b7 \u00b7 \u00b7 , SN}, we obtain their sets of joint multimodal representations in teacher and student networks, denoted as {Hw 1 , Hw 2 , \u00b7 \u00b7 \u00b7 , Hw N} with w \u2208{t, s}. For the same input sample, we narrow the distance between the joint representations of the teacher and student networks and enlarge the distance between the representations for different samples. The contrastive distillation loss is formulated as follows: \\s m a l l \\ m a thcal {L }_{S C D } = \\sum _{i =1}^N\\ s u m _{j=1,j\\neq i}^N\\mathcal {D}(\\bm {H}^s_i,\\bm {H}^t_i)^2 + max\\{0, \\eta \\mathcal {D}(\\bm {H}^s_i,\\bm {H}^t_j)\\}^2, (1) where D(Hs, Ht) = \u2225Hs \u2212Ht\u22252 , \u2225\u00b7\u22252 represents \u21132 norm function, and \u03b7 is the predefined distance boundary. When negative pairs are distant enough (i.e., greater than boundary \u03b7), the loss is set to 0, allowing the model to focus on other pairs. Since the sample-level representation contains holistic emotion-related semantics, such a contrastive objective facilitates the student network to learn more valuable knowledge from the teacher network. 3.5. Category-guided Prototype Distillation MSA data usually suffers from the dilemmas of high intracategory diversity and high inter-category similarity. Previous approaches [13, 18, 33] based on knowledge distillation to address the modality missing problem simply constrain the feature consistency of the teacher and student networks. The rough manner lacks consideration of crosscategory correlation and feature variations, leading to ambiguous feature distributions. To this end, we propose a Category-guided Prototype Distillation (CPD) mechanism, with the core insight of refining and transferring knowledge of intraand inter-category feature variations via category prototypes, which is widely utilized in the field of few-shot learning [35]. The category prototype represents the embedding center of every sentiment category, denoted as: \\ s mall \\bm { c}_k = \\frac {1}{|\\bm {B}_k|}\\sum _{\\bm {S}_i \\in \\bm {B}_k}{\\bm {H}_i}, (2) where Bk denotes the set of samples labeled with category k in the mini-batch, and Si denotes the i-th sample in Bk. The intraand inter-category feature variation of the sample Si is defined as follows: \\sm a ll \\ b m {M} _k(i) = \\frac {\\bm {H}_i \\, \\bm {c}_k^\\top }{\\left \\| \\bm {H}_i \\right \\|_2 \\left \\| \\bm {c}_k \\right \\|_2 }, (3) where Mk(i) denotes the similarity between the sample Si and the prototype ck. If the sample Si is of category k, Mk(i) represents intra-category feature variation. Otherwise, it represents inter-category feature variation. The teacher and student networks compute similarity matrices M t and M s, respectively. We minimize the squared Euclidean distance between the two similarity matrices to maintain the consistency of two multimodal representations. The prototype distillation loss is formulated as: \\ s m a ll \\ mat h c al { L} _ {CPD } = \\ frac { 1 }{N K} \\sum _{i=1}^N \\sum _{k=1}^K\\left \\|\\bm {M}_k^s(i)-\\bm {M}_k^t(i)\\right \\|_2, (4) where K is the category number of the mini-batch. 3.6. Response-disentangled Consistency Distillation Most knowledge distillation studies [15, 29, 38, 61] focus on extracting knowledge from intermediate features of networks. Although the model\u2019s response (i.e., the predicted probability of the model\u2019s output) presents a higher level of semantics than the intermediate features, responsebased methods achieve significantly worse performance than feature-based methods [41]. Inspired by [67], the model\u2019s response consists of two parts: (i) Target Category Response (TCR), which represents the prediction of the target category and describes the difficulty of identifying each training sample. (ii) Non-Target Category Response (NTCR), which denotes the prediction of the non-target category and reflects the decision boundaries of the remaining categories to some extent. The effects of TCR and NTCR in traditional knowledge distillation loss are coupled, i.e., high-confidence TCR leads to low-impact NTCR, thus inhibiting effective knowledge transfer. Consequently, we disentangle the heterogeneous responses and constrain the consistency between the homogeneous responses. From the perspective of information theory, knowledge consistency between responses can be characterized as maintaining high mutual information between teacher and student networks [1]. This schema captures beneficial semantics and encourages distributional alignment. Specifically, the joint multimodal representation Hw with w \u2208{t, s} of teacher and student networks pass through fully-connected layers and softmax function to obtain response Rw. Based on the target indexes, we decouple the response Rw to obtain TCR Rw T and NTCR Rw NT . Define Q \u2208Q and U \u2208U as two random variables. Formulaically, the marginal probability density functors of Q and U are denoted as P(Q) and P(U). P(Q, U) is regarded as the joint probability density functor. The mutual 4 Self-MM CubeMLP MCTN TransM SMIL CorrKD GCNet DMD 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 30 40 50 60 70 80 90 100 F1 Score (Happy) Missing Ratio 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 30 40 50 60 70 80 90 100 F1 Score (Sad) Missing Ratio 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 30 40 50 60 70 80 90 100 F1 Score (Angry) Missing Ratio 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 30 40 50 60 70 80 F1 Score (Neutral) Missing Ratio Figure 3. Comparison results of intra-modality missingness on IEMOCAP. We comprehensively report the F1 score for the happy, sad, angry, and neutral categories at various missing ratios. Self-MM CubeMLP MCTN TransM SMIL CorrKD GCNet DMD (a) (b) 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 20 30 40 50 60 70 80 90 F1 Score Missing Ratio 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 20 30 40 50 60 70 80 90 F1 Score Missing Ratio Figure 4. Comparison results of intra-modality missingness on (a) MOSI and (b) MOSEI. We report the F1 score at various ratios. information between Q and U is represented as follows: \\s ma l l I ( \\bm {Q }, \\ bm { U} )=\\int _ { \\bm {\\mathcal {Q}}} \\int _{\\bm {\\mathcal {U}}} P(\\bm {Q}, \\bm {U}) \\log \\left (\\frac {P(\\bm {Q}, \\bm {U})}{P(\\bm {Q}) P(\\bm {U})}\\right ) d \\bm {Q} d \\bm {U}. (5) The mutual information I(Q, U) can be written as the Kullback-Leibler divergence between the joint probability distribution PQU and the product of the marginal distributions PQPU, denoted as I(Q, U) = DKL (PQU\u2225PQPU) . For efficient and stable computation, the Jensen-Shannon divergence [12] is employed in our case to estimate the mutual information, which is denoted as follows: \\s ma l l \\begi n {a li g ne d} I( \\ b m { Q } , \\bm {U}) & \\ ge q \\h at { I}_ \\ t h eta ^{(\\ma thrm {JSD})}(\\bm {Q}, \\bm {U}) \\\\ &= \\mathbb {E}_{P(\\bm {Q}, \\bm {U})}\\left [-\\log \\left (1+e^{-\\mathcal {F}_\\theta (\\bm {Q}, \\bm {U})}\\right )\\right ] \\\\ & -\\mathbb {E}_{P(\\bm {Q}) P(\\bm {U})}\\left [\\log \\left (1+e^{\\mathcal {F}_\\theta (\\bm {Q}, \\bm {U})}\\right )\\right ], \\end {aligned} (6) where F\u03b8 : Q\u00d7U \u2192R is formulated as an instantiated statistical network with parameters \u03b8. We only need to maximize the mutual information without focusing on its precise value. Consequently, the distillation loss based on the mutual information estimation is formatted as follows: \\s m al l \\ m ath cal {L}_{ R C D} = \\mat hc a l {L }_{RCD}^T + \\mathcal {L}_{RCD}^{NT} = -I(\\bm {R}^t_T,\\bm {R}^s_T) I(\\bm {R}^t_{NT},\\bm {R}^s_{NT}). (7) Finally, the overall training objective Ltotal is expressed as Ltotal = Ltask + LSCD + LCP D + LRCD, where Ltask is the standard cross-entropy loss. 4. Experiments 4.1. Datasets and Evaluation Metrics We conduct extensive experiments on three MSA datasets with word-aligned data, including MOSI [64], MOSEI [65], and IEMOCAP [2]. MOSI is a realistic dataset that comprises 2,199 short monologue video clips. There are 1,284, 229, and 686 video clips in train, valid, and test data, respectively. MOSEI is a dataset consisting of 22,856 video clips, which has 16,326, 1,871, and 4,659 samples in train, valid, and test data. Each sample of MOSI and MOSEI is labeled by human annotators with a sentiment score of -3 (strongly negative) to +3 (strongly positive). On the MOSI and MOSEI datasets, we utilize weighted F1 score computed for positive/negative classification results as evaluation metrics. IEMOCAP dataset consists of 4,453 samples of video clips. Its predetermined data partition has 2,717, 798, and 938 samples in train, valid, and test data. As recommended by [44], four emotions (i.e., happy, sad, angry, and neutral) are selected for emotion recognition. For evaluation, we report the F1 score for each category. 4.2. Implementation Details Feature Extraction. The Glove embedding [31] is used to convert the video transcripts to obtain a 300-dimensional vector for the language modality. For the audio modality, we employ the COVAREP toolkit [5] to extract 74dimensional acoustic features, including 12 Mel-frequency cepstral coefficients (MFCCs), voiced/unvoiced segmenting features, and glottal source parameters. For the visual modality, we utilize the Facet [14] to indicate 35 facial action units, recording facial movement to express emotions. Experimental Setup. All models are built on the Pytorch [30] toolbox with NVIDIA Tesla V100 GPUs. The Adam optimizer [16] is employed for network optimization. For MOSI, MOSEI, and IEMOCAP, the detailed hyper-parameter settings are as follows: the learning rates are {4e \u22123, 2e \u22123, 4e \u22123}, the batch sizes are {64, 32, 64}, the epoch numbers are {50, 20, 30}, the attention heads are {10, 8, 10}, and the distance boundaries \u03b7 are {1.2, 1.0, 1.4}. The embedding dimension is 40 on all three datasets. The hyper-parameters are determined via 5 Table 1. Comparison results under inter-modality missing and complete-modality testing conditions on MOSI and MOSEI. Dataset Models Testing Conditions {l} {a} {v} {l, a} {l, v} {a, v} Avg. {l, a, v} MOSI Self-MM [62] 67.80 40.95 38.52 69.81 74.97 47.12 56.53 84.64 CubeMLP [37] 64.15 38.91 43.24 63.76 65.12 47.92 53.85 84.57 DMD [22] 68.97 43.33 42.26 70.51 68.45 50.47 57.33 84.50 MCTN [32] 75.21 59.25 58.57 77.81 74.82 64.21 68.31 80.12 TransM [46] 77.64 63.57 56.48 82.07 80.90 67.24 71.32 82.57 SMIL [26] 78.26 67.69 59.67 79.82 79.15 71.24 72.64 82.85 GCNet [23] 80.91 65.07 58.70 84.73 83.58 70.02 73.84 83.20 CorrKD 81.20 66.52 60.72 83.56 82.41 73.74 74.69 83.94 MOSEI Self-MM [62] 71.53 43.57 37.61 75.91 74.62 49.52 58.79 83.69 CubeMLP [37] 67.52 39.54 32.58 71.69 70.06 48.54 54.99 83.17 DMD [22] 70.26 46.18 39.84 74.78 72.45 52.70 59.37 84.78 MCTN [32] 75.50 62.72 59.46 76.64 77.13 64.84 69.38 81.75 TransM [46] 77.98 63.68 58.67 80.46 78.61 62.24 70.27 81.48 SMIL [26] 76.57 65.96 60.57 77.68 76.24 66.87 70.65 80.74 GCNet [23] 80.52 66.54 61.83 81.96 81.15 69.21 73.54 82.35 CorrKD 80.76 66.09 62.30 81.74 81.28 71.92 74.02 82.16 the validation set. The raw features at the modality missing positions are replaced by zero vectors. To ensure an equitable comparison, we re-implement the state-of-the-art (SOTA) methods using the publicly available codebases and combine them with our experimental paradigms. All experimental results are averaged over multiple experiments using five different random seeds. 4.3. Comparison with State-of-the-art Methods We compare CorrKD with seven representative and reproducible SOTA methods, including complete-modality methods: Self-MM [62], CubeMLP [37], and DMD [22], and missing-modality methods: 1) joint learning methods (i.e., MCTN [32] and TransM [46]), and 2) generative methods (i.e., SMIL [26] and GCNet [23]). Extensive experiments are implemented to thoroughly evaluate the robustness and effectiveness of CorrKD in the cases of intra-modality and inter-modality missingness. Robustness to Intra-modality Missingness. We randomly drop frame-level features in modality sequences with ratio p \u2208{0.1, 0.2, \u00b7 \u00b7 \u00b7 , 1.0} to simulate testing conditions of intra-modality missingness. Figures 3 and 4 show the performance curves of models with various p values, which intuitively reflect the model\u2019s robustness. We have the following important observations. (i) As the ratio p increases, the performance of all models decreases. This phenomenon demonstrates that intra-modality missingness leads to a considerable loss of sentiment semantics and fragile joint multimodal representations. (ii) Compared to the complete-modality methods (i.e., Self-MM, CubeMLP, and DMD), our CorrKD achieves significant performance advantages in the missing-modality testing conditions and competitive performance in the complete-modality testing conditions. The reason is that complete-modality methods are based on the assumption of data completeness, whereas customized training paradigms for missing modalities perform better at capturing and reconstructing valuable sentiment semantics from incomplete multimodal data. (iii) Compared to the missing-modality methods, our CorrKD exhibits the strongest robustness. Benefiting from the decoupling and modeling of inter-sample, inter-category, and inter-response correlations by the proposed correlation decoupling schema, the student network acquires informative knowledge to reconstruct valuable missing semantics and produces robust multimodal representations. Robustness to Inter-modality Missingness. In Table 1 and 2, we drop some entire modalities in the samples to simulate testing conditions of inter-modality missingness. The notation \u201c{l}\u201d indicates that only the language modality is available, while audio and visual modalities are missing. \u201c{l, a, v}\u201d represents the complete-modality testing condition where all modalities are available. \u201cAvg.\u201d indicates the average performance across six missing-modality testing conditions. We present the following significant insights. (i) Inter-modality missingness causes performance degradation for all models, suggesting that the integration of complementary information from heterogeneous modalities enhances the sentiment semantics within joint representations. (ii) In the testing conditions of the inter-modality missingness, our CorrKD has superior performance among 6 Table 2. Comparison results under six testing conditions of inter-modality missingness and the complete-modality condition on IEMOCAP. Models Categories Testing Conditions {l} {a} {v} {l, a} {l, v} {a, v} Avg. {l, a, v} Self-MM [62] Happy 66.9 52.2 50.1 69.9 68.3 56.3 60.6 90.8 Sad 68.7 51.9 54.8 71.3 69.5 57.5 62.3 86.7 Angry 65.4 53.0 51.9 69.5 67.7 56.6 60.7 88.4 Neutral 55.8 48.2 50.4 58.1 56.5 52.8 53.6 72.7 CubeMLP [37] Happy 68.9 54.3 51.4 72.1 69.8 60.6 62.9 89.0 Sad 65.3 54.8 53.2 70.3 68.7 58.1 61.7 88.5 Angry 65.8 53.1 50.4 69.5 69.0 54.8 60.4 87.2 Neutral 53.5 50.8 48.7 57.3 54.5 51.8 52.8 71.8 DMD [22] Happy 69.5 55.4 51.9 73.2 70.3 61.3 63.6 91.1 Sad 65.0 54.9 53.5 70.7 69.2 61.1 62.4 88.4 Angry 64.8 53.7 51.2 70.8 69.9 57.2 61.3 88.6 Neutral 54.0 51.2 48.0 56.9 55.6 53.4 53.2 72.2 MCTN [32] Happy 76.9 63.4 60.8 79.6 77.6 66.9 70.9 83.1 Sad 76.7 64.4 60.4 78.9 77.1 68.6 71.0 82.8 Angry 77.1 61.0 56.7 81.6 80.4 58.9 69.3 84.6 Neutral 60.1 51.9 50.4 64.7 62.4 54.9 57.4 67.7 TransM [46] Happy 78.4 64.5 61.1 81.6 80.2 66.5 72.1 85.5 Sad 79.5 63.2 58.9 82.4 80.5 64.4 71.5 84.0 Angry 81.0 65.0 60.7 83.9 81.7 66.9 73.2 86.1 Neutral 60.2 49.9 50.7 65.2 62.4 52.4 56.8 67.1 SMIL [26] Happy 80.5 66.5 63.8 83.1 81.8 68.2 74.0 86.8 Sad 78.9 65.2 62.2 82.4 79.6 68.2 72.8 85.2 Angry 79.6 67.2 61.8 83.1 82.0 67.8 73.6 84.9 Neutral 60.2 50.4 48.8 65.4 62.2 52.6 56.6 68.9 GCNet [23] Happy 81.9 67.3 66.6 83.7 82.5 69.8 75.3 87.7 Sad 80.5 69.4 66.1 83.8 81.9 70.4 75.4 86.9 Angry 80.1 66.2 64.2 82.5 81.6 68.1 73.8 85.2 Neutral 61.8 51.1 49.6 66.2 63.5 53.3 57.6 71.1 CorrKD Happy 82.6 69.6 68.0 84.1 82.0 70.0 76.1 87.5 Sad 82.7 71.3 67.6 83.4 82.2 72.5 76.6 85.9 Angry 82.2 67.0 65.8 83.9 82.8 67.3 74.8 86.1 Neutral 63.1 54.2 52.3 68.5 64.3 57.2 59.9 71.5 w/o SCD w/o RCD w/o CPD CorrKD 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 20 30 40 50 60 70 80 90 Missing Ratio F1 Score Figure 5. Ablation results of intra-modality missingness using various missing ratios on MOSI. the majority of metrics, proving its strong robustness. For example, on the MOSI dataset, CorrKD\u2019s average F1 socre is improved by 0.85% compared to GCNet, and in particular by 3.72% in the testing condition where language modality is missing (i.e., {a, v}). The merit stems from the proTable 3. Ablation results for the testing conditions of intermodality missingness on MOSI. Models Testing Conditions {l} {a} {v} {l, a} {l, v} {a, v} Avg. {l, a, v} CorrKD 81.20 66.52 60.72 83.56 82.41 73.74 74.69 83.94 w/o SCD 78.80 64.96 57.49 81.95 80.53 71.05 72.46 82.13 w/o CPD 79.23 63.72 57.83 80.11 79.45 70.53 71.81 82.67 w/o RCD 79.73 65.32 59.21 82.14 81.05 72.18 73.27 83.05 posed framework\u2019s capability of decoupling and modeling potential correlations at multiple levels to capture discriminative and holistic sentiment semantics. (iii) In the unimodal testing conditions, the performance of CorrKD with only the language modality favorably outperforms other cases, with comparable results to the complete-modality 7 Happy Sad Angry Neutral (c) GCNet (a) Self-MM (b) MCTN (d) CorrKD Figure 6. Visualization of representations from different methods with four emotion categories on the IEMOCAP testing set. The default testing conditions contain intra-modality missingness (i.e., missing ratio p = 0.5 ) and inter-modality missingness (i.e., only the language modality is available). The red, orange, green, and blue markers represent the happy, angry, neutral, and sad emotions, respectively. case. In the bimodal testing conditions, cases containing the language modality perform the best, even surpassing the complete-modality case in individual metrics. This phenomenon proves that language modality encompasses the richest knowledge information and dominates the sentiment inference and missing semantic reconstruction. 4.4. Ablation Studies To validate the effectiveness and necessity of the proposed mechanisms and strategies in CorrKD, we conduct ablation studies under two missing-modality cases on the MOSI dataset, as shown in Table 3 and Figure 5. The principal findings are outlined as follows. (i) When SCD is eliminated, there is a noticeable degradation in model performance under both missing cases. This phenomenon suggests that mining and transferring comprehensive crosssample correlations is essential for recovering missing semantics in student networks. (ii) The worse results under the two missing modality scenarios without CPD indicate that capturing cross-category feature variations and correlations facilitates deep alignment of feature distributions between both networks to produce robust joint multimodal representations. (iii) Moreover, we substitute the KL divergence loss for the proposed RCD. The declining performance gains imply that decoupling heterogeneous responses and maximizing mutual information between homogeneous responses motivate the student network to adequately reconstruct meaningful sentiment semantics. 4.5. Qualitative Analysis To intuitively show the robustness of the proposed framework against modality missingness, we randomly choose 100 samples from each emotion category on the IEMOCAP testing set for visualization analysis. The comparison models include Self-MM [62] (i.e., complete-modality method), MCTN [32] (i.e., joint learning-based missingmodality method), and GCNet [23] (i.e., generative-based missing-modality method). (i) As shown in Figure 6, SelfMM cannot address the modality missing challenge, as the representations of different emotion categories are heavily confounded, leading to the least favorable outcomes. (ii) Although MCTN and GCNet somewhat alleviate the issue of indistinct emotion semantics, their effectiveness remains limited since the distribution boundaries of the different emotion representations are generally ambiguous and coupled. (iii) Conversely, our CorrKD ensures that representations of the same emotion category form compact clusters, while representations of different categories are clearly separated. These observations confirm the robustness and superiority of our framework, as it sufficiently decouples intersample, inter-category and inter-response correlations. 5. Conclusions In this paper, we present a correlation-decoupled knowledge distillation framework (CorrKD) to address diverse missing modality dilemmas in the MSA task. Concretely, we propose a sample-level contrast distillation mechanism that utilizes contrastive learning to capture and transfer cross-sample correlations to precisely reconstruct missing semantics. Additionally, we present a categoryguided prototype distillation mechanism that learns crosscategory correlations through category prototypes, refining sentiment-relevant semantics for improved joint representations. Eventually, a response-disentangled consistency distillation is proposed to encourage distribution alignment between teacher and student networks. Extensive experiments confirm the effectiveness of our framework. Acknowledgements This work is supported in part by the Shanghai Municipal Science and Technology Committee of Shanghai Outstanding Academic Leaders Plan (No. 21XD1430300), and in part by the National Key R&D Program of China (No. 2021ZD0113503). 8"
16
+ }
intro_8K/test_introduction_short_2404.16461v2.json ADDED
@@ -0,0 +1,16 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16461v2",
3
+ "title": "Large Language Models Perform on Par with Experts Identifying Mental Health Factors in Adolescent Online Forums",
4
+ "abstract": "Mental health in children and adolescents has been steadily deteriorating\nover the past few years. The recent advent of Large Language Models (LLMs)\noffers much hope for cost and time efficient scaling of monitoring and\nintervention, yet despite specifically prevalent issues such as school bullying\nand eating disorders, previous studies on have not investigated performance in\nthis domain or for open information extraction where the set of answers is not\npredetermined. We create a new dataset of Reddit posts from adolescents aged\n12-19 annotated by expert psychiatrists for the following categories: TRAUMA,\nPRECARITY, CONDITION, SYMPTOMS, SUICIDALITY and TREATMENT and compare expert\nlabels to annotations from two top performing LLMs (GPT3.5 and GPT4). In\naddition, we create two synthetic datasets to assess whether LLMs perform\nbetter when annotating data as they generate it. We find GPT4 to be on par with\nhuman inter-annotator agreement and performance on synthetic data to be\nsubstantially higher, however we find the model still occasionally errs on\nissues of negation and factuality and higher performance on synthetic data is\ndriven by greater complexity of real data rather than inherent advantage.",
5
+ "authors": "Isabelle Lorge, Dan W. Joyce, Andrey Kormilitzin",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-26",
8
+ "primary_cat": "cs.CL",
9
+ "cats": [
10
+ "cs.CL"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "LLM Fairness",
14
+ "gt": "The recent development of powerful Large Language Models such as GPT3.5 [2] and GPT4 [3] able to perform tasks in a zero-shot manner (i.e., without having been specifically trained or fine- tuned to do so) by being simply prompted with natural language instructions shows much promise for healthcare applications and the domain of mental health. Indeed, these models display more impressive general natural language processing abilities than their predecessors and excel at tasks such as Question Answering and Named Entity Recognition [4, 5, 6, 7]. Models with the ability to process social media content for indicators of mental health issues have the potential to become invaluable cost-effective tools for applications such as public health monitoring [8] and online moderation or intervention systems [9]. In addition, synthetic data produced by LLMs can be a cost effective and privacy-preserving tool for training task specific models [10]. There have been several studies aimed at assessing the abilities of LLMs to perform a range of tasks related to mental health on datasets derived from social media. Yang et al. [11] conducted a comprehensive assessment of ChatGPT (gpt-3.5-turbo), InstructGPT3 and LlaMA7B and 13B [12] arXiv:2404.16461v2 [cs.CL] 26 Apr 2024 on 11 different datasets and 5 tasks (mental health condition binary/multiclass detection, cause/factor detection, emotion detection and causal emotion entailment, i.e. determining the cause of a described emotion). They find that while the LLMs perform well (0.46-0.86 F1 depending on task), with ChatGPT substantially outperforming both LLaMA 7B and 13B, they still underperform smaller models specifically fine-tuned for each task (e.g., RoBERTa). Xu et al. [13] find similar results for Alpaca [14], FLAN-T5 [15] and LLaMA2 [16], with only fine-tuned LLMs able to perform on par with smaller, task-specific models such as RoBERTa [17, 18]. However, we find that previous studies suffer from the following shortcomings: 1. They focus on adult mental health 2. They focus on tasks with a closed (or finite) set of answers, where the model is asked to perform each task in turn 3. They do not investigate how LLMs perform on synthetic data, i.e., text they are asked to simultaneously generate and label There is growing consensus that we are facing a child mental health crisis [1]. Before the COVID-19 pandemic there was already increasing incidence of mental health conditions in children and young people (CYP), such as depression, anxiety and eating disorders [19] as well as rising rates of self-harm and suicidal ideation [20] and cyberbullying strongly linked to adverse mental health outcomes [21]. The advent of the pandemic accelerated this already precarious situation and created additional challenges [22, 23] such as discontinuity of healthcare service provision in addition to interruption to young people\u2019s usual engagement in education and their social lives. This age range is particularly vulnerable to onset of mental health issues, with half of conditions appearing by early adolescence and 10-20% of children and young people experiencing at least one mental health condition [24]. Females, those with low socioeconomic backgrounds, trauma, abuse or having witnessed violence [25] are at heightened risk. On the other hand, social media now forms an important part of children and adolescents\u2019 daily lives, whose impact on mental health is debated, with potential benefits (stress reduction and support networks [26]) as well as potential risks (sleep disturbance, self esteem issues and cyberbullying [27]). Regardless of their detrimental or protective impact, social media may contribute valuable insights into CYP\u2019s mental health, with opportunities for monitoring and intervention, for example identifying those at risk of depression and mood disorders [28]. Given the mental health of CYP is a particularly pressing public health concern, we wished to investigate how LLMs perform on extracting mental health factors when faced with social media content generated by young people aged 12-19. Indeed, several issues related to mental health either exclusively apply to children and adolescents (such as school bullying and ongoing family abuse) or are particularly prevalent in this age range (such as eating disorders [29] and self-harm [30]), making both content type and factors of interest distinct from those found in adult social media posts. In addition, previous studies focused on tasks which had either a binary or closed sets of answers (e.g., choosing between several given conditions or between several given causal factors). In contrast, we wish to examine how LLMs perform on a task of open information extraction, where they are given categories of information and asked to extract any which are found in the text (e.g., asked to detect whether there is any mental health condition indicated in the text). Furthermore, in previous studies the models were tested with each task in turn (e.g., asked to detect depression in one dataset, then detect suicidality in another dataset), whereas we gather and annotate our own dataset in order to be able to ask the LLMs to extract all categories simultaneously (e.g, extract all conditions and symptoms in a given sentence). Finally, to our knowledge there has been no investigation on how LLM performance compares when asked to annotate text as they generate it, i.e., how their performance on synthetic data compares with their performance on real data. There is growing interest in synthetic data for healthcare [31]. Given the potential for training models and running simulations and digital twin experiments with the benefit of reduced issues of data scarcity and privacy, we believe that our work will contribute to better understanding of limitations and benefits of using synthetic data for real-world tasks. 2",
15
+ "main_content": "In summary, we aim to: 1. Generate and annotate with high-quality expert annotations a novel dataset of social media posts which allows extraction of a wide range of mental health factors simultaneously. 2. Investigate performance of two top-performing LLMs (GPT3.5 and GPT4) on extracting mental health factors in adolescent social media posts to verify whether they can be on par with expert annotators. 3. Investigate how these LLMs perform on synthetic data, i.e., when asked to annotate text as they generate it, with the aim of assessing utility of these data in training task specific models 3 Method 3.1 Reddit dataset We use Python\u2019s PRAW library to collect post from the Reddit website (www.reddit.com) over the last year, including posts from specific forum subthemes (\u2018subreddits\u2019) dedicated to mental health topics: r/anxiety, r/depression, r/mentalhealth, r/bipolarreddit, r/bipolar, r/BPD, r/schizophrenia, r/PTSD, r/autism, r/trau-matoolbox, r/socialanxiety, r/dbtselfhelp, r/offmychest and r/mmfb. The distribution of subreddits in the dataset can be found in Figure 1. As in previous works [32], we use heuristics to obtain posts from our target age range (e.g, posts containing expression such as I am 16/just turned 16/etc.) We gather 1000 posts written by 950 unique users. To optimise the annotation process, we select the most relevant sentences to be annotated by embedding a set of mental health keywords with Python\u2019s sentence-transformers library [33] calculating the cosine similarity with post sentences, choosing a threshold of 0.2 cosine similarity after trial and error. We keep the post index for each sentence to provide context. The resulting dataset contains 6500 sentences. 3.2 Ethical considerations In conducting this research, we recognised the importance of respecting the autonomy and privacy of the Reddit users whose posts were included in our dataset. While Reddit data is publicly available and was obtained from open online forums, we acknowledge that users may not have anticipated their contributions being used for research purposes and will therefore make the data available only on demand. The verbatim example sentences given in later sections have been modified to prevent full-text searching strategies to infer the post author\u2019s immediate identity on reddit. To protect the confidentiality of participants, we did not provide usernames or other identifying information to our annotators. Annotators were psychiatrists who were warned that the content of the posts was highly sensitive with potentially triggering topics such as self-harm and child abuse. Reddit\u2019s data sharing and research policy allows academic researchers to access certain Reddit data for the purposes of research, subject to the platform\u2019s terms and conditions. They require researchers to obtain approval through their data access request process before using the API. The policy outlines requirements around protecting user privacy, obtaining consent, and properly attributing the data source in any published work. They reserve the right to deny data access requests or revoke access if the research is deemed to violate Reddit\u2019s policies. Researchers must also agree to Reddit\u2019s standard data use agreement when accessing the data. Our research aims to contribute to the understanding of mental health discourse from adolescents on social media platforms. We believe the potential benefits of this work, in terms of insights that could improve mental health support and resources, outweigh the minimal risks to participants. However, we remain aware of the ethical complexities involved in using public social media data, and encourage further discussion and guidance in this emerging area of study. 3 3.3 Synthetic dataset In addition to the real dataset, we generate two synthetic datasets of 500 sentences each by prompting GPT3.5 (gpt-3.5-turbo-0125) and GPT4 (gpt-4-0125-preview) to create and label Reddit-like posts of 5 sentences (temperature 0, all other parameters set to default). The instructions given were made as similar as possible to those given to annotators, and the model was expliclity told to only label factors which applied to the author of the post (e.g., not to label My friend has depression with CONDITION). The prompt used can be found in Appendix A. Figure 1: Distribution of subreddits 3.4 Annotation schema Given our goal is to obtain a wide range of relevant annotations for each sentence in order to test the LLMs\u2019 ability to generalise and perform open information extraction, and the previously mentioned important factors related to trauma [34] and precarity [35], we create the following six categories in consultation with a clinical psychiatrist: \u2022 TRAUMA (sexual abuse, physical abuse, emotional abuse, school bullying, death, accident, etc.) \u2022 PRECARITY (socioeconomic, parental conflict, parental illness, etc.) \u2022 SYMPTOM (self-harm, low self-esteem, anhedonia, panic attack, flashback, psychosis, insomnia, etc.) 4 \u2022 CONDITION (eating disorder, depression, bipolar, bpd, anxiety, ptsd, adhd, substance abuse/addiction, etc.) \u2022 SUICIDALITY (no subcategories) \u2022 TREATMENT (no subcategories) Nineteen expert annotators were contacted and asked to annotate 500 sentences each for a fixed compensation of \u00a3120 (\u2248\u00a360/hour). These were UK-trained psychiatrists, all of whom had obtained Membership of the Royal College of Psychiatrists by post-graduate experience and formal examinations. Thirteen annotators annotated the Reddit dataset, two annotators annotated the synthetic datasets and four annotators re-annotated samples from the Reddit and synthetic datasets for inter-annotator agreement computation (100 sentences from each dataset, 1500 sentences in total). Annotators were given the above subcategory examples but allowed to use new subcategories when appropriate (no closed set of answers). They were given the post indices to provide context (i.e., so as to be aware which sentences belonged to the same post). They were asked to annotate only school bullying as bullying, and other instances (e.g., sibling harassment) as emotional abuse. Anxiety was to be annotated as a symptom rather than condition unless specifically described as a disorder. Experts performed the annotation by filling in the relevant columns in an Excel sheet with each sentence as a row. Importantly, given the known limitations of language models with negation [36], we wished to annotate both POSITIVE and NEGATIVE evidence in order to test LLMs\u2019 ability to handle both polarities (e.g., I am not feeling suicidal as negative suicidality or We don\u2019t have any money issues as negative socioeconomic precarity). For this purpose, annotators were asked to use the prefixes P and N (e.g., P(adhd) in the CONDITION column or N(socioeconomic) in the PRECARITY column). 3.5 Data processing and dataset statistics In order to compare expert annotations with LLM annotations despite the wide variety of subcategories and terms used by annotators we create dictionaries mapping each term found in the dataset to a standard equivalent (e.g., p(emotional) to p(emotional abuse), p(physical violence) to p(physical abuse), p(gun violence) and p(school shooting) to p(violence), p(rape) to p(sexual abuse), p(financial burden) and p(poor) to p(socioeconomic precarity), p(divorce) to p(family conflict), p(self hatred) to p(low self esteem), etc.). Parental substance abuse is considered family illness and any underspecified subcategories are marked as \u2018unspecified\u2019 (e.g., p(trauma unspecified)). The distribution of subcategories for each category can be found in figures 2, 3, 4 and 5 in Appendix B. The most frequent subcategory in TRAUMA is emotional abuse, which occurs twice as often as physical abuse and death in the dataset. The most frequent form of PRECARITY is family conflict, then family illness (including parental substance abuse) and socioeconomic precarity. The most frequent CONDITIONS are depressive disorders, followed by substance abuse/addiction and ADHD. The most frequent SYMPTOMS are anxiety, low self-esteem, self-harm and low mood. Interestingly, the distribution of subcategories differs quite substantially in the synthetic datasets (distributions for the GPT3.5 and GPT4 generated datasets can be found in Appendix B). Overall, the number of subcategories is reduced, indicating less diversity (however, these are smaller datasets). The top trauma subcategories are sexual abuse for GPT3.5 and school bullying for GPT4, both of which were much less prevalent in real data. The second most prevalent condition for both GPT3.5 and GPT4 is eating disorders, whereas these ranked in 8th place in real data. Finally, unlike in real data, flashbacks and panic attacks are the 3d and 4th most frequent symptoms for both GPT3.5 and GPT4-generated data, whereas self-harm ranks much lower than in real data. Given many of these subcategories were given as examples in the annotator guidelines and LLM prompt, it is likely that the LLMs used them in a more homogenous manner for generation than the distribution which would be found in real data. However, the distribution is not entirely homogenous, which suggests the LLMs did leverage some of the biases learned from their training data. 4 Results Once both human and LLM annotations are standardised, we conduct analyses to assess performance. We provide precision, recall and F1 at the category level and accuracy at the subcategory level 5 collapsed across subcategories (given their high number). We compute category performance in two ways: Positive or Negative, where a point is awarded if the category contains an annotation in both human and LLM annotations, regardless of polarity (i.e., the annotator considered there was relevant information concerning the category TRAUMA) and Positive Only metrics, where negative annotations are counted as no annotations. The difference between the two metrics can be seen clearly in Table 1 (GPT3.5 results), where precision increases but recall diminishes for Positive Only. The increase in precision is due to the fact that GPT3.5 outputs a substantial number of negative annotations in cases where human annotators did not consider it relevant to mention the category. The reduction in recall, on the other hand, results from the fact that LLMs often confuse positive and negative annotations and will occasionally output a negative annotation for a positive one. For real data (Tables 1 and 2), GPT3.5\u2019s performance at the category level is average, with better performance in the Positive Only metrics (0.57). GPT4 performs better, especially in Positive Only metrics (0.63) and subcategory accuracy (0.48 vs. 0.39). In general, recall is higher than precision, indicating LLMs may be overpredicting labels. The performance for synthetic data (Tables 3 and 4) is substantially better, with no gap between the Positive or Negative and Positive Only metrics, suggesting less irrelevant negative annotations. Here again, GPT4 outperforms GPT3.5, both at the category level (0.75 vs 0.70 and 0.73 vs 0.68) and more particularly at the subcategory level, where GPT4 reaches an impressive accuracy of 0.72 (vs 0.42). The gap between recall and precision is reduced for GPT4, whereas GPT3.5 displays higher precision than recall here. In order to assess the upper bound of human performance, we calculate inter-annotator agreement for both real and synthetic datasets using Cohen\u2019s Kappa. Values can be found in Table 5. Interestingly, while performance at the category level in real data is lower (GPT3.5) or similar (GPT4) compared to humans, GPT4 displays a substantially higher accuracy at the subcategory level (0.47 vs 0.35). For synthetic data, GPT3.5 still underperforms human agreement on all three metrics, while GPT4 is on par with humans for the Positive Only and subcategory metrics and only underperforms in the Positive and Negative metric. Category Positive or Negative Positive Only Subcategory Precision Recall F1-Score Precision Recall F1-Score Accuracy TRAUMA 0.38 0.78 0.51 0.56 0.65 0.60 0.39 PRECARITY 0.26 0.43 0.33 0.45 0.31 0.37 0.22 CONDITION 0.33 0.85 0.48 0.54 0.72 0.62 0.55 SYMPTOMS 0.39 0.62 0.48 0.46 0.58 0.52 0.31 SUICIDALITY 0.44 0.79 0.56 0.80 0.68 0.73 / TREATMENT 0.48 0.72 0.58 0.72 0.58 0.64 / ALL 0.37 0.70 0.49 0.55 0.60 0.57 0.39 Table 1: GPT3.5 (real data). Positive or Negative: counting annotation in category regardless of polarity (category level); Positive Only: counting negative annotations as NaN (category level); Subcategory: accuracy at the subcategory level 5 Error analysis We examine some of the sentences annotated by the LLMs in order to perform error analysis and extract the following findings (as mentioned previously some words have been paraphrased to preclude full-text search allowing user identification): \u2022 Both GPT3.5 and GPT4 produce infelicitous negations, i.e., negative annotations which would seem irrelevant to humans, e.g., (I have amazing people around me =>negative parental death or The internet is my one only coping mechanism =>trauma unspecified) \u2022 Despite being specifically prompted to only annotate factors related to the writer/speaker, LLMs (including GPT4) do not always comply, e.g., She comes from what is, honestly, a horrific family situation =>emotional abuse) 6 Category Positive or Negative Positive Only Subcategory Precision Recall F1-Score Precision Recall F1-Score Accuracy TRAUMA 0.44 0.89 0.59 0.57 0.84 0.68 0.57 PRECARITY 0.31 0.52 0.39 0.50 0.46 0.48 0.36 CONDITION 0.46 0.81 0.59 0.61 0.77 0.68 0.57 SYMPTOMS 0.35 0.78 0.49 0.45 0.73 0.56 0.41 SUICIDALITY 0.36 0.93 0.51 0.70 0.87 0.77 / TREATMENT 0.39 0.87 0.54 0.64 0.81 0.71 / ALL 0.39 0.80 0.52 0.55 0.75 0.63 0.48 Table 2: GPT4 (real data). Positive or Negative: counting annotation in category regardless of polarity (category level); Positive Only: counting negative annotations as NaN (category level); Subcategory: accuracy at the subcategory level Category Positive or Negative Positive Only Subcategory Precision Recall F1-Score Precision Recall F1-Score Accuracy TRAUMA 0.90 0.49 0.64 0.90 0.49 0.64 0.38 PRECARITY 0.84 0.69 0.76 0.86 0.69 0.76 0.54 CONDITION 0.44 0.67 0.53 0.47 0.67 0.55 0.59 SYMPTOMS 0.85 0.59 0.70 0.84 0.59 0.69 0.36 SUICIDALITY 0.75 1.00 0.85 0.77 0.90 0.83 / TREATMENT 0.68 0.84 0.75 0.76 0.57 0.65 / ALL 0.74 0.65 0.70 0.77 0.61 0.68 0.42 Table 3: GPT3.5 (synthetic data). Positive or Negative: counting annotation in category regardless of polarity (category level); Positive Only: counting negative annotations as NaN (category level); Subcategory: accuracy at the subcategory level Category Positive or Negative Positive Only Subcategory Precision Recall F1-Score Precision Recall F1-Score Accuracy TRAUMA 0.84 0.95 0.89 0.86 0.92 0.89 0.82 PRECARITY 0.85 0.84 0.85 0.91 0.82 0.86 0.80 CONDITION 0.61 0.67 0.64 0.60 0.67 0.63 0.67 SYMPTOMS 0.49 0.78 0.60 0.53 0.80 0.64 0.69 SUICIDALITY 0.81 0.94 0.87 0.78 0.82 0.80 / TREATMENT 0.85 0.89 0.87 0.87 0.78 0.82 / ALL 0.69 0.83 0.75 0.69 0.79 0.73 0.72 Table 4: GPT4 (synthetic data). Positive or Negative: counting annotation in category regardless of polarity (category level); Positive Only: counting negative annotations as NaN (category level); Subcategory: accuracy at the subcategory level \u2022 Even GPT4 makes errors regarding negation (e.g., I\u2019ve read about people with autism getting temper tantrums/meltdowns, however, that has never really been a problem for me=>negative autism or i had in my head that something inside was very wrong, but i never felt completely depressed all the time so i never took bipolar seriously =>negative bipolar disorder) \u2022 Despite being prompted to annotate suicidality in a separate category, LLMs often annotate it in the SYMPTOM rather than SUICIDALITY category \u2022 GPT3.5 especially often outputs irrelevant/spurious/incorrect labels (e.g., \u2018unemployed\u2019 as condition, \u2018ambition\u2019 as symptom, labelling physical conditions instead of mental conditions only, etc.) 7 Positive and Negative Positive Only Subcategory Annotator vs. Annotator (real data) 0.60 0.59 0.35 GPT3 vs. Annotator (real data) 0.39 0.52 0.37 GPT4 vs. Annotator (real data) 0.43 0.58 0.47 Annotator vs. Annotator (synthetic data) 0.77 0.71 0.68 GPT3 vs. Annotator (synthetic data) 0.64 0.63 0.40 GPT4 vs. Annotator (synthetic data) 0.70 0.69 0.71 Table 5: Inter-annotator agreement (Cohen\u2019s Kappa) \u2022 Even GPT4 makes errors regarding factuality (e.g., It was around my second year in junior high school when my father tried to take his life =>positive death) However, in many cases the assessment is not entirely fair, as the LLMs (particularly GPT4) often catch annotations which human annotators missed, or the difference in subcategories is subjective and open to debate (e.g., school bullying vs emotional abuse, emotional abuse vs abuse unspecified, etc.). Thus it is possible that LLMs, or most likely GPT4, in fact outperformed experts on this task. 6 Discussion The results obtained from our comparison of LLM annotations with human annotations on both real and synthetic data allow us to make a few conclusions and recommendations. Overall, both LLMs perform well. Inter-annotator agreement and performance indicate that GPT4 performs on par with human annotators. In fact, error analysis and manual examination of annotations suggest the LLMs potentially outperform human annotators in terms of recall (sensitivity), catching annotations which have been missed. However, while recall might be improved in LLMs versus human annotators, precision may suffer in unexpected ways, for example through errors in the use of negation and factuality, even in the case of GPT4. LLMs display a particular tendency to overpredict labels and produce negative annotations in infelicitous contexts, i.e., when humans would deem them irrelevant, creating an amount of noise. However, these negative annotations are not technically incorrect. While accuracy errors could be found in the LLM output, the experts\u2019 outputs were not entirely free of them, and previous work by [37] suggests LLMs may both be more complete AND more accurate than medical experts. There may still be a difference in the type of accuracy errors produced by LLMs, which will have to be investigated in future research. In terms of accuracy at the subcategory level, we were surprised to find GPT4 outperformed human agreement by a large margin in real data (0.47 vs 0.35). We hypothesise this is due to the fact that human annotators display higher subjectivity in their style of annotation at the subcategory level (given the lack of predetermined subcategories) and diverge more between them. LLMs are likely to be more \u2018standard\u2019 and generic and thus potentially more in agreement with any given human annotator. More specifically, LLMs tend to be consistent from one annotation to the other with higher recall whereas human annotators showed less consistency. Therefore, if a sentence mentions physical, sexual and emotional abuse, annotators might only mention two out of three but when mentioning all three an LLM is more likely to be in agreement than another annotator, i.e., the LLM will catch more of the perfectly recalled annotations than the second annotator. The better performance demonstrated on synthetic data doesn\u2019t seem due to LLMs performing better on data they are generating, but rather to the synthetic data being less complex and diverse and thus easier to annotate for both LLMs and humans, as evidenced by GPT4 reaching similar inter-annotator agreement scores to humans (with agreement both in humans and LLM/human 10% higher for synthetic data). This better performance could still warrant using synthetic data for e.g., training machine learning models (given more reliable labels) but only in cases where the potential loss in diversity is compensated by the increase in label reliability. This will likely depend on the specific application. 8 7 Conclusion We presented the results of a study examining human and Large Language Models (GPT3.5 and GPT4) performance in extracting mental health factors from adolescent social media data. We performed analyses both on real and synthetic data and found GPT4 performance to be on par with human inter-annotator agreement for both datasets, with substantially better performance on the synthetic dataset. However, we find GPT4 still performing non-human errors in negation and factuality, and synthetic data to be much less diverse and differently distributed than real data. The potential for future applications in healthcare will have to be determined by weighing these factors against the substantial reductions in time and cost achieved through the use of LLMs. Acknowledgment I.L., D.W.J., and A.K. are partially supported by the National Institute for Health and Care Research (NIHR) AI Award grant (AI_AWARD02183) which explicitly examines the use of AI technology in mental health care provision. A.K. declare a research grant from GlaxoSmithKline (unrelated to this work). This research project is supported by the NIHR Oxford Health Biomedical Research Centre (grant NIHR203316). The views expressed are those of the authors and not necessarily those of the UK National Health Service, the NIHR or the UK Department of Health and Social Care."
16
+ }
intro_8K/test_introduction_short_2404.16538v1.json ADDED
@@ -0,0 +1,16 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16538v1",
3
+ "title": "OpenDlign: Enhancing Open-World 3D Learning with Depth-Aligned Images",
4
+ "abstract": "Recent advances in Vision and Language Models (VLMs) have improved open-world\n3D representation, facilitating 3D zero-shot capability in unseen categories.\nExisting open-world methods pre-train an extra 3D encoder to align features\nfrom 3D data (e.g., depth maps or point clouds) with CAD-rendered images and\ncorresponding texts. However, the limited color and texture variations in CAD\nimages can compromise the alignment robustness. Furthermore, the volume\ndiscrepancy between pre-training datasets of the 3D encoder and VLM leads to\nsub-optimal 2D to 3D knowledge transfer. To overcome these issues, we propose\nOpenDlign, a novel framework for learning open-world 3D representations, that\nleverages depth-aligned images generated from point cloud-projected depth maps.\nUnlike CAD-rendered images, our generated images provide rich, realistic color\nand texture diversity while preserving geometric and semantic consistency with\nthe depth maps. OpenDlign also optimizes depth map projection and integrates\ndepth-specific text prompts, improving 2D VLM knowledge adaptation for 3D\nlearning efficient fine-tuning. Experimental results show that OpenDlign\nsignificantly outperforms existing benchmarks in zero-shot and few-shot 3D\ntasks, exceeding prior scores by 8.0% on ModelNet40 and 16.4% on OmniObject3D\nwith just 6 million tuned parameters. Moreover, integrating generated\ndepth-aligned images into existing 3D learning pipelines consistently improves\ntheir performance.",
5
+ "authors": "Ye Mao, Junpeng Jing, Krystian Mikolajczyk",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Parameter AND Efficient AND Fine AND Tuning",
14
+ "gt": "3D understanding, which involves tasks such as point cloud classification and 3D object detection, is pivotal for advancing augmented/virtual reality [1; 2], autonomous vehicles [3; 4], and robotics [5; 6]. Traditional 3D models [7; 8; 9; 10; 11; 12; 13] are closed-world, which can only recognize pre-defined categories and struggle with \u2019unseen\u2019 ones. The emergence of Vision-Language Models (VLMs) like CLIP [14], renowned for their success in identifying \u2018unseen\u2019 categories in 2D images through open-world representation learning [15; 16; 17; 18], has sparked interest in applying these models to develop robust open-world 3D representations for 3D vision tasks. Existing open-world 3D learning methods can be categorized into depth-based and point-based methods. Depth-based methods [19; 20; 21] project point clouds into multi-view depth maps and employ the pre-trained CLIP image encoder for 3D representations. However, this process encounters a domain gap because CLIP is primarily trained with RGB images rather than depth maps. To bridge this gap, methods like [21] incorporate an additional depth encoder and utilize contrastive learning to align depth features from this encoder with image and text features from pre-trained CLIP encoders, as illustrated in Fig. 1(a). The images used here, specifically rendered from CAD models for feature alignment, are not employed in the zero-shot inference phase. Point-based methods [22; 23; 24; 25; 26; 27] directly learn 3D representations from point clouds, avoiding the latency of Preprint. Under review. arXiv:2404.16538v1 [cs.CV] 25 Apr 2024 \u201cA point cloud of a small airplane.\u201d Image Encoder Text Encoder Point Encoder Align (b) Point-based Method (a) Depth-based Method (c) OpenDlign (ours) \u201cA depth map of a small airplane.\u201d Image Encoder Text Encoder Depth Encoder Align Depth Map Rendered Image (Training only) Text Point Cloud Rendered Image (Training only) Text \u201cA depth map of a small airplane.\u201d Image Encoder Text Encoder Align Depth Map Depth-Aligned Image (Training only) Text CAD-rendered images Depth-aligned images vs. Figure 1: Top: OpenDlign vs. Conventional Open-World 3D Learning Frameworks: OpenDlign enhances multimodal alignment using depth-aligned images, providing more detailed geometric and semantic information along with enhanced color and texture compared to previously used rendered images. It refines 3D representation by fine-tuning the CLIP image encoder directly, eliminating the extra encoder pre-training required by other methods. Note that both rendered and depth-aligned images are used exclusively for learning alignment. Bottom: Visual comparison between CAD- rendered and corresponding depth-aligned multi-view images. depth map projection. However, due to the inherent data format differences between images and point clouds, these methods also need an additional point encoder for extracting 3D features, akin to depth-based methods (See Fig. 1(b)). Thus, aligning 3D data (e.g., depth maps or point clouds) with the image-text modalities pre-aligned by CLIP is a standard step in current 3D open-world methods. Depth-based and point-based methods encounter two primary challenges in the alignment process. First, the CAD-rendered images used for aligning 3D data typically display consistent color and texture styles across various views. Over-aligning with these low-diversity images compromises the generalizability of learned 3D representations. Secondly, the 3D datasets used for encoder pre- training, like ShapeNet [28] and Objaverse [29], contain less than 1 million synthetic 3D objects, significantly smaller than the DFN5B [30] and LAION-5B [31] datasets with 5 billion images used to train the cutting-edge CLIPs. This data volume disparity, which is due to the high cost of 3D data acquisition, results in the sub-optimal transfer of CLIP\u2019s knowledge to 3D representations. While fine-tuning CLIP\u2019s encoders yields more direct knowledge transfer, it restricts the input to depth maps. Unfortunately, 3D representations from depth maps still underperform in downstream 3D tasks compared to those from point clouds, due to two factors: (1) The absence of a robust projection method for creating dense depth maps with smooth contours from point clouds. (2) The current widely used CLIP text prompt templates are tailored for matching with RGB images, not depth maps. To address these challenges, this paper proposes OpenDlign, a novel framework that learns Open- world 3D representations via aligning multi-view depth maps projected from point clouds with Depth-aligned images produced by a generative model [32]. These images offer enhanced color and texture diversity compared to CAD-rendered images while maintaining geometric and semantic consistency with the depth maps (See Fig. 1). Additionally, as shown in Fig. 1(c), OpenDlign fine-tunes the CLIP image encoder rather than pre-training a separate depth encoder, thus maximally adapting CLIP\u2019s existing knowledge for effective 3D learning, even with a limited 3D dataset. Specifically, fine-tuning is limited to the attention layers of the last transformer block, comprising just 6 million parameters. Moreover, OpenDlign employs a new projection pipeline to generate dense depth maps with clear contours. For zero-shot inference, OpenDlign employs depth-specific text prompts and a logit aggregation method, emphasizing depth-related features and combining results from various viewpoint depth maps. Experimental results show that OpenDlign greatly surpasses the prior state-of-the-art, pre-trained on ShapeNet [28], with accuracy gains of 8.0% on ModelNet40 2 and 16.4% on OmniObject3D, the largest real-world 3D shape dataset. Notably, using realistic depth- aligned images significantly boosts the performance of existing SOTA models, like those pretrained on ShapeNet or 3D Ensemble datasets [24]. This consistent improvement across all benchmarks highlights the versatility of depth-aligned images in any 3D open-world learning pipeline. The main contributions of this paper are outlined as follows: \u2022 We propose a multimodal alignment framework that aligns features from depth maps and depth-aligned images to learn a unified depth map, image, and text representation. \u2022 We develop a contour-aware projection pipeline to produce dense and contour-preserving multi-view depth maps from point clouds. \u2022 We introduce depth-specific text prompt templates for zero-shot inference to accurately capture both the semantic and visual traits in depth maps. \u2022 We design a logit aggregation strategy that derives final 3D representations from both CLIP and OpenDlign visual encoders, reducing catastrophic forgetting in alignment.",
15
+ "main_content": "2.1 Open-World 3D Representation Learning Vision and Language models such as CLIP [14] have revolutionized 2D representation learning in open-world settings through contrastive learning with large-scale image-text pairs [33; 34; 35; 36]. Building on this, recent studies have adapted CLIP for 3D representation learning, achiving impressive performance in diverse 3D zero-shot tasks [24; 25]. PointCLIP [20], as a pioneering study, utilizes the CLIP image encoder for extracting 3D representations from depth maps of point clouds, achieving zero-shot recognition by aligning with text embeddings of semantic categories. To address CLIP\u2019s training bias towards RGB images, Zhu et al. [19] introduced GPT-generated 3D-specific prompts and a denser depth map projection, while CLIP2Point [21] pre-trains a depth encoder for closer alignment with CLIP\u2019s encoders. These methods derive representations from depth maps with noisy contours, causing a loss of key shape features needed for precise recognition. Moreover, their reliance on either natural image text prompts or depth-specific prompts generated by GPT-3 [37] for certain categories highlights a lack of versatility in handling diverse 3D contexts. Alternative methods [23; 23; 24; 25; 27] avoid depth map projection by directly aligning point clouds, images, and text using specialized 3D encoders. By scaling up the dataset and encoder sizes, these methods show promise in diverse 3D tasks. However, these methods are limited by their reliance on CAD-rendered images, which have limited texture diversity across views, leading to less generalizable representations. Additionally, the smaller volume of 3D datasets compared to CLIP\u2019s training data hinders effective knowledge transfer to point cloud encoders. In this paper, we substitute rendered images with AI-generated, depth-aligned images to enhance texture diversity. We also fine-tune the CLIP image encoder for 3D representation learning instead of training a new 3D encoder from scratch, reducing the reliance on large 3D datasets. 2.2 Continual Learning in CLIP Fine-Tuning Continual Learning (CL) in CLIP aims to mitigate catastrophic forgetting [38], ensuring retention of zero-shot capabilities across varied data distributions while fine-tuning to new tasks. CL methods fall into three categories: adaptive-plasticity methods [39; 40; 41; 42; 43; 44], replay methods [45; 46; 47], and architecture-based methods [48; 49]. Adaptive-plasticity methods limit the plasticity of the essential model parameters for past tasks during fine-tuning. For instance, the IMM-Mean [44] method achieves CL by simply averaging parameters of pre-trained and fine-tuned models for inference, although its efficacy might be limited for complex tasks [50]. Replay methods leverage stored exemplars to enable CLIP to recall previously learned knowledge, while they encounter scalability challenges. Without relying on exemplars, architecture-based CL methods dynamically adjust the model\u2019s architecture to accommodate new information without losing existing knowledge [50]. In this study, we align the depth map with the RGB image by freezing the pre-trained CLIP encoder weights and incorporating a trainable transformer-based branch for encoding depth maps, adhering to architecture-based principles. Inspired by IMM-Mean [44], we use pre-trained and fine-tuned model weights to compute classification logits for multi-view depth maps. 3 3 Methodology Fig. 2 illustrates the OpenDlign framework, which learns effective open-world 3D representations by aligning embeddings from projected depth maps and depth-aligned images. Initially, a contour-aware projection method is employed to create shape-preserved, dense depth maps from point clouds. These maps then guide a generative model to produce depth-aligned images with rich color and texture diversity. OpenDlign then uses contrastive learning to align features between depth maps and generated images by fine-tuning a transformer block linked to the CLIP image encoder. This step enables the extraction of robust embeddings from \u2019unseen\u2019 multi-view depth maps at test time, using both fine-tuned and pre-trained states of the image encoder. These embeddings are matched with depth-specific text embeddings, which encode the depth maps\u2019 semantic and visual traits, to compute logits for each viewpoint and aggregate these logits to enable zero-shot classification. Alternatively, these embeddings can be refined using a logistic regressor for few-shot classification. 3.1 Contour-Aware Depth Map Projection The contour-aware projection method transforms the input point cloud into multi-view depth maps with clear contours. Inspired by the pipeline in [19], this method involves four main steps: Quantize, Densify, Smooth, and Squeeze. In the Quantize step, for the i^{\\text {th}} view of point cloud P_i, the 3D coordinates (x, y , z ) \\in P_i are normalized to [0, 1] and mapped onto a discrete grid G \\in \\mathbb {R}^{H \\times W \\times B}, where H and W correspond to the dimensions required by the CLIP image encoder, and B is a pre-defined depth dimension. Next, the Densify step enhances G by updating each voxel to the maximum value within its 7 \\ t imes 7 \\times 7 neighborhood, yielding a denser map G'. Subsequently, the Smooth step applies bilateral filtering to each voxel v_i in G', adjusting its intensity I_{v_i} to I' _{v_i} using: I' _ { v_ i } = \\frac { 1 }{W_v} \\sum _ {v_j \\in S} G_{\\sigma _1}(\\|v_i v_j\\|) G_{\\sigma _2}(|I_{v_i} I_{v_j}|) I_{v_j} (1) where W_ { v } = \\sum _{ v _j \\in S} G_ { \\sigma _1}(\\|v_i v_j\\|) G_{\\sigma _2}(|I_{v_i} I_{v_j}|) is the normalization factor that ensures voxel weights sum to 1.0. The Gaussian functions G\u03c31 and G\u03c32 adjust the influence of each neighboring voxel vj within the 5 \u00d7 5 \u00d7 5 kernel from set S around vi, based on spatial and intensity differences, enhancing contour sharpness and reducing jagged edges in G\u2032. Finally, the Squeeze step applies the minimal pooling on the depth channel of the smoothed G', then triples the output to mimic RGB intensity, producing the final depth map D \u2208RH\u00d7W \u00d73. 3.2 Depth-Aligned Image Generation We generated 524,700 depth-aligned images from ShapeNet [28], one of the leading public 3D CAD datasets containing around 52,470 models, each annotated with semantic metadata. To align with prior experimental protocols [24; 23], we sampled a point cloud of 10,000 points from each model, projecting these onto 10 contour-aware depth maps. A conditional image generative model (ControlNet v1.1 [32]) then produced depth-aligned images for each map (D), using 1 \u2212D and the model\u2019s metadata as conditions. This approach ensures that the images remain consistent with the depth maps both geometrically and semantically, while also adding texture diversity across different views. The conditioning of ControlNet utilizes 1 \u2212D instead of D because it is predominantly pre-trained on depth images, in which brighter regions indicate closer proximity. The supplemental material details the positive and negative prompts used in ControlNet to achieve high-fidelity and noise-free depth-aligned image generation. 3.3 Multimodal Representation Alignment OpenDlign aligns representations from multi-view depth maps and depth-aligned images by finetuning a transformer block that is residually connected to the final block of the pre-trained CLIP image encoder, using contrastive learning. As CLIP pre-training already aligns image and text modalities, OpenDlign implicitly aligns depth maps with the shared image and text space. Multimodal Feature Extraction. Given a 3D point cloud input, let D = \\{ D_i\\}_{i=1}^N represent the set of its N projected depth map views, and R = \\{ R_i\\}_{i=1}^N the corresponding set of depth-aligned 4 CLIP Image Encoder Transformer Block \u2026 \u2026 \ud835\udc21! \" \ud835\udc21# $ \ud835\udc21% $ \ud835\udc21& $ \u2026 \ud835\udc21! $ \u2026 \u2026 \u2026 \u2026 \u2026 Transformer Block \u2026 Transformer Block Transformer Block \u2026 (a) Point Cloud Representation Learning via Generated Depth-Aligned Images Contour-Aware Depth Map Projection Image Generative Model \u2026 \u2026 Input Point Cloud Multi-View Depth Maps (b) Zero-Shot 3D Classification (c) Few-Shot 3D Classification \u201cA depth map of a {car} 3D model.\u201d View 1 \u2026 View ! % + 1 \u2026 airplane cup sofa car \u2026 Pretrained Text Encoder \u2026 \u201cA silhouette of a {class}. \u201cA raytraced image, Kinetic pointillism \u201cA depth map of a {class} 3D model\u201d. Depth-Specific Text Multi-View Depth-Aligned RGB Images V! \u22c5F\" V # \u22c5F\" \u2026 \ud835\udc05! \ud835\udc05\ud835\udfd1 \ud835\udc05\ud835\udfcf \ud835\udc05\ud835\udfd0 \u2026 \ud835\udc15\ud835\udc22\u22c5\ud835\udc05 \ud835\udfd1 V! \u22c5F& V! \u22c5F' \u2026 \ud835\udc15 \ud835\udc23\u22c5\ud835\udc05 \ud835\udfd1 V # \u22c5F' \ud835\udc15 \ud835\udc22 \ud835\udc15 \ud835\udc23 V # \u22c5F& 1 \u2264\ud835\udc56\u2264\ud835\udc41/2 \ud835\udc41/2 < \ud835\udc57\u2264\ud835\udc41 Pre-trained Image Encoder Fine-tuned Image Encoder View ! % View \ud835\udc41 \u2026 \u2026 \ud835\udc21# \" \u2026 \ud835\udc21\ud835\udfcf \ud835\udc79\u22c5\ud835\udc21\ud835\udfcf \ud835\udc6b h' , \u22c5h& h. , \u22c5h& h/ , \u22c5h& h& , \u22c5h/ h. , \u22c5h/ h' , \u22c5h/ \ud835\udc21% \" \u2026 h& , \u22c5h' \ud835\udc21\ud835\udfd0 \ud835\udc79\u22c5\ud835\udc21\ud835\udfd0 \ud835\udc6b h. , \u22c5h' h/ , \u22c5h' \ud835\udc21& \" \u2026 h& , \u22c5h. h' , \u22c5h. h/ , \u22c5h. \ud835\udc21\ud835\udfd1 \ud835\udc79\u22c5\ud835\udc21\ud835\udfd1 \ud835\udc6b \ud835\udc21\ud835\udc75 \ud835\udc79\u22c5\ud835\udc21\ud835\udc75 \ud835\udc6b \ud835\udc13\ud835\udfcf \ud835\udc13\ud835\udc73#\ud835\udfcf \ud835\udc13\ud835\udc73 \ud835\udc13\ud835\udc73 \ud835\udc95 Logistic Regressor bike cup \u2026 table car View 7 % + 1 \u2026 \ud835\udc41 \u2026 View 1 \u2026 ! % \u2026 Fine-tuned Image Encoder Pre-trained Image Encoder Figure 2: Overview of OpenDlign. In (a), OpenDlign converts point clouds into multi-view depth maps using a contour-aware projection, which then helps generate depth-aligned RGB images with diverse textures, geometrically and semantically aligned with the maps. A transformer block, residually connected to the CLIP image encoder, is fine-tuned to align depth maps with depth-aligned images for robust 3D representation. For zero-shot classification (b), OpenDlign aggregates multiview logits from both pre-trained and fine-tuned encoders for label prediction and for few-shot classification (c), it employs a logistic regressor trained on multi-view features from the encoders. images. Each image R_i is encoded through L layers of a pre-trained CLIP image encoder, \\{\\text {T}_l(\\cdot )\\}_{l=1}^L , to obtain feature representations I ^ { R}_{i} =\\text {T}_{1\\ldots L}(R_i) . Each depth map D_i is processed up to layer \\text {T}_{L-1} , obtaining preliminary features \\text {T}_{1\\ldots L-1}(D_i) . Subsequently, these features are passed through the frozen layer \\protect \\text {T}_L and its trainable counterpart \\ text {T}^t_L , yielding the feature for the ith depth map view ID i = T1...L(Di) + Tt L(T1...L\u22121(Di)). Inspired by [17], only the layers for spatial interaction in \\ text {T}^t_L (i.e., attention layers) are trainable. The final feature vectors for multi-view depth maps D and depth-aligned images R are \\ m a t hb f { h}^ D = \\frac {1}{N} \\sum ^N_{i=1}\\|I^D_i\\| and \\ m a t hb f { h}^ R = \\frac {1}{N} \\sum ^N_{i=1}\\|I^R_i\\|, respectively. Loss Functions. The alignment of hD and hR is achieved by minimizing a composite loss function, comprising the contrastive loss Lcont and the feature distance loss Ldist, defined as: \\mat h c al {L }_ { \\te xt { to t al } } = \\ und e rb r ac e { \\ s u m _{ (i, j) } -\\ f ra c { 1}{ 2 } \\ lo g \\ f r ac {\\exp \\ left (\\m a t hb f { h }_ i ^{D} \\mathbf {h}_j^{R} / \\tau \\right )}{\\sum _k \\exp \\left (\\mathbf {h}_i^{D} \\mathbf {h}_k^{R}/\\tau \\right )} -\\frac {1}{2} \\log \\frac {\\exp \\left (\\mathbf {h}_i^{D} \\mathbf {h}_j^{R}/\\tau \\right )}{\\sum _k \\exp \\left (\\mathbf {h}_k^{D} \\mathbf {h}_j^{R} / \\tau \\right )}}_{\\mathcal {L}_{\\text {cont}}} + \\underbrace {\\sum _{(i,j)} \\|\\mathbf {h}^D_i \\mathbf {h}^R_j\\|_2}_{\\mathcal {L}_{\\text {dist}}} (2) 5 In each training batch, (hD i , hR j ) represents a positive pair and k \u0338= i, j. Here, \u03c4 is a learnable temperature parameter, similar to CLIP [14]. 3.4 3D Zero-Shot Transfer The alignment between depth maps and depth-aligned RGB images facilitates 3D zero-shot classification by aggregating multi-view classification logits. Each logit represents the similarity between features of a single-view depth map and text features specific to category candidates. Depth-Specific Text Generation. We generate 80 depth-specific text prompt templates based on 80 ImageNet zero-shot recognition prompts1, integrating keywords such as \"depth map\", \"white background image\", \"raytraced image\", and \"silhouette of [CLASS]\". These keywords guide OpenDlign to target depth-related features, such as the distance of object surfaces from a viewpoint. To identify these keywords, we use the CLIP-Interrogator tool [51] to analyze depth maps from ShapeNet [28], seeking text prompts that best match their visual features. The 10 most recurring prompts from this analysis are chosen as our essential keywords. In zero-shot inference, we employ our depth-specific templates to generate 80 text descriptions for each label l. These descriptions {ti}80 i=1 are encoded by a texture encoder F(\u00b7), normalized, and then merged into a unified text feature Fl via average pooling, calculated as 1 80 P80 i=1 \u2225F(ti)\u2225. Multi-View Logits Aggregation. To calculate classification logits, we first gather visual features from multi-view depth maps {Vi}N i=1, aiming to align with depth-specific text features of M candidate labels F = {Fi}M i=1. The feature extraction utilizes a dual-encoder strategy: the first half of the views \\ifmmod e \\lbrace \\else \\textbraceleft \\fi V_i\\}_{i=1}^{N/2} utilize a pre-trained CLIP image encoder, while the second half of views \\ifmm ode \\lbrace \\else \\textbraceleft \\fi V_i\\}_{i=N/2+1}^{N} employs a fine-tuned encoder. The strategy ensures that OpenDlign maintains its capability to recognize previously identifiable depth maps after learning multimodal alignment via fine-tuning. As shown in Fig. 2(b), the logit for a single depth map view is the product of Vi and F, with the overall classification logit being the sum of logits across all views, calculated as PN i=1 ViFT . 4 Experiments 4.1 Zero-Shot 3D Classification We first evaluated OpenDlign under the zero-shot shape classification task on three benchmark datasets: ModelNet40 [52], ScanObjectNN [53], and OmniObject3D [54]. ModelNet40 offers synthetic 3D CAD models in 40 categories. ScanObjectNN provides real-scanned objects in 15 categories from OBJ_ONLY version. OmniObject3D, the largest, includes 5,911 real-scanned objects in 216 categories, well-suited for fine-grained, real-world classification evaluation. Point cloud sizes are 10,000 points for ModelNet40, 2,048 for ScanObjectNN, and 4,096 for OmniObject3D. OpenDlign was compared against existing methods, including three depth-based methods: PointCLIP [20], PointCLIP V2 [19], and CLIP2Point [21], and three point-based methods: ULIP [23], OpenShape [24], and TAMM [27]. Additionally, we improved the OpenShape and TAMM models by retraining them with depth-aligned and CAD-rendered images from an integrated dataset provided by OpenShape, which combines four distinct collections: Objaverse [29], ShapeNet [24], 3D-Future [55], and ABO [56]. Our aim was to investigate if depth-aligned images consistently enhance the performance of existing 3D open-world methods. Moreover, we evaluated OpenDlign\u2019s scalability by training it with various CLIP variants to adapt to the complexity of pre-trained image-text encoders. Table 1 shows OpenDlign substantially outperforms existing methods trained on ShapeNet on three benchmarks, exceeding the previous best, TAMM-SparseConv trained on ShapeNet, by margins of 8.0% on ModelNet40, 1.6% on ScanObjectNN, and 16.4% on OmniObject3D in top-1 accuracy. OpenDlign also greatly exceeds the leading depth-based method, PointCLIP V2\u2014by 19% on ModelNet40 and 27.4% on OmniObject3D. Significantly, OpenDlign outshines all methods pre-trained on the ensemble dataset in the ScanObject3D benchmark. Moreover, OpenDlign\u2019s performance scales linearly with the complexity of CLIP variants, surpassing most of the baseline models on ModelNet40 and OmniObject3D benchmarks, even when employing the light ViT-B-16 CLIP model. Moreover, the use of depth-aligned images consistently boosts the performance of OpenShape and 1Text Prompts for ImageNet: ImageNet Prompt Engineering. 6 Table 1: Zero-shot classification results on ModelNet40 [52], ScanObjectNN [53] and OmniObject3D[54]. Best: bolded. Second-best: underlined. Training 3D Open-World CLIP ModelNet40 [52] ScanObjectNN [53] OmniObject3D[54] Source Methods Variant Top1 Top3 Top5 Top1 Top3 Top5 Top1 Top3 Top5 2D inferences PointCLIP [20] ResNet-50 19.3 28.6 34.8 10.5 20.8 30.6 0.3 1.0 1.8 No Training PointCLIP V2 [19] ViT-B-16 63.6 77.9 85.0 42.2 63.3 74.5 3.9 9.6 14.4 CLIP2Point [21] ViT-B-32 49.5 71.3 81.2 25.5 44.6 59.4 1.4 3.7 7.1 ULIP-PointBERT [23] SLIP [57] 60.4 79.0 84.4 51.5 71.1 80.2 8.4 15.2 19.7 OpenShape-PointBERT [24] ViT-bigG-14 70.3 86.9 91.3 51.3 69.4 78.4 13.0 23.3 29.4 OpenShape-SparseConv [24] ViT-bigG-14 72.9 87.2 93.0 52.7 72.7 83.6 13.7 24.2 30.0 TAMM-PointBERT [27] ViT-bigG-14 73.1 88.5 91.9 54.8 74.5 83.3 14.9 26.2 33.4 TAMM-SparseConv [27] ViT-bigG-14 74.6 88.2 94.0 57.9 75.3 83.1 ShapeNet OpenShape-PointBERT(+Dlign) ViT-bigG-14 73.7 87.1 91.3 52.7 72.4 82.6 13.4 23.7 29.9 OpenShape-SparseConv (+Dlign) ViT-bigG-14 74.9 89.5 94.1 56.3 75.2 85.4 15.0 26.1 32.8 TAMM-PointBERT(+Dlign) ViT-bigG-14 73.7 89.1 92.2 57.3 73.6 82.3 15.8 27.4 33.0 OpenDlign-B32 ViT-B-32 68.4 86.4 92.6 46.7 72.0 83.0 17.3 29.2 36.3 OpenDlign-B16 ViT-B-16 74.2 90.5 95.4 49.3 74.0 84.4 23.2 37.5 44.3 OpenDlign-L ViT-L-14 77.8 93.1 96.4 52.1 74.6 82.8 27.5 41.3 47.8 OpenDlign-H ViT-H-14 82.6 96.2 98.4 59.5 76.8 83.7 31.3 46.7 53.2 OpenShape-SparseConv [24] ViT-bigG-14 83.4 95.6 97.8 56.7 78.9 88.6 33.7 49.3 57.4 OpenShape-PointBERT [24] ViT-bigG-14 84.4 96.5 98.0 52.2 79.7 88.7 34.0 49.7 57.9 TAMM-PointBERT [27] ViT-bigG-14 85.0 96.6 98.1 55.7 80.7 88.9 37.1 53.5 61.8 Ensemble TAMM-SparseConv [27] ViT-bigG-14 85.4 96.4 98.1 58.5 81.3 89.5 OpenShape-SparseConv (+Dlign ) ViT-bigG-14 85.0 96.1 97.9 56.2 78.5 87.8 34.1 50.5 58.5 OpenShape-PointBERT (+Dlign) ViT-bigG-14 85.4 96.5 98.2 51.1 77.4 88.2 35.6 50.4 57.9 TAMM-PointBERT(+Dlign) ViT-bigG-14 86.2 96.6 97.5 60.5 82.5 90.4 37.5 54.9 62.1 TAMM variants pre-trained on the ShapeNet dataset across all benchmarks. It also improves the performance of variants pre-trained on the ensemble dataset in at least two benchmarks, despite depth-aligned images being available only for the 3D data from ShapeNet, which represents no more than 10% of the ensemble dataset. Significantly, TAMM-PointBERT (+Dlign) achieves a 4.8% top-1 accuracy improvement on the ScanObjectNN dataset, and OpenShape-PointBERT (+Dlign) gains a 1.6% increase on the most challenging OmniObject3D benchmark. These results validate that using depth-aligned images is a universally effective strategy to enhance any 3D open-world pipeline. 4.2 Few-Shot 3D Classification We then assessed OpenDlign\u2019s few-shot classification capability by training a logistic regressor with linear probing on features from N-shot, 10-view depth maps. Similar to the zero-shot scenario, we extracted multi-view features using both fine-tuned and pre-trained OpenDlign encoders (see Fig. 2). At inference, the regressor aggregates logits from 10 views to predict the final label. We compared OpenDlign\u2019s few-shot performance with variants of ULIP [23], OpenShape [24], and TAMM [27], which extract features for training regressor from point clouds using their pre-trained point encoders. Table 2 shows OpenDlign outperforms all baselines across varied few-shot scenarios with 1 to 16 training samples per class. OpenDlign significantly outperforms the leading baseline on the OmniObject3D dataset, exceeding it by 8.8% and 11.8% in 4-shot and 8-shot classification, respectively. This underscores the robustness and transferability of its 3D representations. Table 2: Few-shot classification results on ModelNet40 [52], ScanObjectNN [53] and OmniObject3D [54]. Our results are averaged over 10 random seeds. ModelNet40 [52] ScanObjectNN [53] OmniObject3D [54] Model 1-Shot 2-Shot 4-Shot 8-Shot 16-Shot 1-Shot 2-Shot 4-Shot 8-Shot 16-Shot 1-Shot 2-Shot 4-Shot 8-Shot 16-Shot ULIP-PointBERT [23] 54.4 64.3 74.1 79.3 81.3 46.7 55.1 62.5 70.7 73.9 37.5 41.2 44.1 49.7 53.4 OpenShape-PointBERT [24] 57.5 70.1 76.5 80.4 82.1 47.9 55.6 62.7 67.0 72.0 34.5 34.1 37.8 41.9 45.6 OpenShape-SparseConv [24] 62.8 72.0 78.9 82.9 85.7 47.3 56.3 64.5 68.2 74.0 36.0 37.0 41.5 44.7 48.6 TAMM-PointBERT [27] 62.4 73.3 81.7 83.8 85.9 48.2 57.1 63.6 72.1 76.5 38.9 41.6 46.3 50.1 54.2 OpenDlign (ours) 65.6 73.9 82.9 85.5 87.6 48.9 58.5 67.9 74.2 79.0 42.1 46.9 55.1 61.9 65.8 4.3 Zero-Shot 3D Object Detection We evaluated OpenDlign\u2019s capabilities in Zero-Shot 3D Object Detection using the ScanNet V2 dataset [58], which contains richly annotated 3D indoor scenes in 18 object categories. Following the PointCLIP V2 methodology [19], we began with the pre-trained 3DETR-m model to pinpoint 3D regions of interest, successfully delineating 3D bounding boxes and extracting the points inside each box. Finally, we applied OpenDlign to these points to generate our predictions. Table 3 illustrates OpenDlign\u2019s zero-shot detection prowess using mean Average Precision (mAP) at IoU thresholds 7 Table 3: Zero-shot 3D object detection results on ScanNet V2 [58]. Method Mean Cabinet Bed Chair Sofa Table Door Window Counter Desk Sink Bathtub PointCLIP [20] 6.00 3.99 4.82 45.16 4.82 7.36 4.62 2.19 1.02 4.00 13.40 6.46 AP25 PointCLIP V2 [19] 18.97 19.32 20.98 61.89 15.55 23.78 13.22 17.42 12.43 21.43 14.54 16.77 OpenDlign (ours) 50.72 38.91 67.27 86.33 72.01 58.72 44.58 32.07 50.49 62.04 51.98 64.29 PointCLIP [20] 4.76 1.67 4.33 39.53 3.65 5.97 2.61 0.52 0.42 2.45 5.27 1.31 AP50 PointCLIP V2 [19] 11.53 10.43 13.54 41.23 6.60 15.21 6.23 11.35 6.23 10.84 11.43 10.14 OpenDlign (ours) 37.97 17.04 66.68 73.92 54.96 50.03 24.73 12.84 20.44 41.64 34.17 64.29 of 0.25 and 0.5, achieving scores of 50.72% and 37.97%, respectively. It significantly outperforms PointCLIP V2 by more than 31.75% and 26.44%. Remarkably, OpenDlign can detect the \u2019Sofa\u2019 shape with an AP50 of 54.96%, whereas PointCLIP and V2 score below 10, demonstrating OpenDlign\u2019s superior capability in extracting robust 3D representations from sparse and noisy point clouds in real-world indoor scenes. 4.4 Cross-Modal Retrieval 3D shapes were retrieved by computing the cosine similarity between the embeddings of a query and those generated by OpenDlign, followed by a k-nearest neighbors (kNN) analysis to find the most similar shapes. Fig. 3 illustrates OpenDlign\u2019s capability in matching 3D shapes to image and text queries. Column (a) illustrates its precision in distinguishing sub-categories like grand versus upright pianos from image queries. Column (b) demonstrates successful shape retrieval using distinct text descriptions, such as \"Batmobile armored\". Notably, averaging image and text query embeddings allows OpenDlign to find shapes that combine elements of both inputs. For example, merging a running horse image with the text \"man\" results in the retrieval of both a centaur and a running man, as shown in Fig. 3 (c). A house image combined with \"tree\" retrieves a treehouse. (b) Text Query \u201cBatmobile armored.\u201d \u201cTelephone box.\u201d \u201cDouble-decker bus.\u201d \u201cSpaceship.\u201d \u201cAir Jordan.\u201d (a) Image Query (c) Image + Text Queries \u201cMan.\u201d + \u201cTree.\u201d + Figure 3: 3D shape retrieval results. (a) Two most similar shapes for each query image. (b) Most similar shapes for each query text. (c) Two most similar shapes for combined image and text queries. 4.5 Ablation Study Ablation studies were conducted on zero-shot classification benchmarks to assess the contribution of each component in OpenDlign. Consistently, all OpenDlign variants used in these studies employed OpenCLIP-ViT-H-14 as their backbone. ShapeNet was the default training dataset for all models. Contour-Aware Projection. Replacing PointCLIP V2\u2019s projection pipeline [19] with our contouraware version, as shown in Table 4, enables a pre-trained CLIP to reach 68.8% zero-shot accuracy on ModelNet40, even outperforming several baselines that need extra training. This suggests that through large-scale contrastive learning, CLIP can understand RGB images as well as depth maps, as long as key shape features are maintained during projection. Multimodal Alignment. Table 4 shows that alignment between depth maps and depth-aligned images (depth-daRGB) substantially boosts performance. It improves top-1 accuracy by over 10% across datasets, indicating that depth-daRGB alignment effectively generalizes CLIP to depth maps, with consistent gains in zero-shot inference, regardless of depth-specific text prompts. Further analysis compared depth-daRGB alignment against three alternatives: depth-rendRGB (aligning depth maps with CAD-rendered RGB images), daRGB-text & depth (aligning depth-aligned images with text before depth-daRGB alignment), and depth-text & daRGB (simultaneous alignment 8 Table 4: Ablation study for OpenDlign on ModelNet40 [52] and ScanObjectNN [53]. Acc. improvements over the baseline (first-row) are highlighted in green. Contour-Aware Multimodal Depth-Specific Logits ModelNet40 [52] ScanObjectNN [53] Projection Alignment Texts Aggregation Top 1 Top 3 Top 5 Top 1 Top 3 Top 5 \u2717 \u2717 \u2717 \u2717 59.7 79.6 86.3 42.8 66.7 78.4 \u2713 \u2717 \u2717 \u2717 68.8 (+9.1) 85.8 (+6.2) 91.6 (+5.3) 44.6 (+1.8) 68.3 (+1.6) 78.9 (+0.5) \u2713 \u2713 \u2717 \u2717 79.2 (+19.5) 94.4 (+14.8) 97.6 (+11.3) 56.9 (+14.1) 75.5 (+8.8) 83.8 (+5.4) \u2713 \u2717 \u2713 \u2717 75.9 (+16.2) 91.0 (+11.4) 95.4 (+9.1) 49.3 (+6.5) 69.8 (+3.1) 79.2 (+0.8) \u2713 \u2713 \u2713 \u2717 80.2 (+20.5) 95.3 (+15.7) 97.7 (+11.4) 58.1 (+15.3) 75.2 (+8.5) 84.2 (+5.8) \u2713 \u2713 \u2717 \u2713 81.0 (+21.3) 95.2 (+15.6) 97.6 (+11.3) 56.8 (+14.0) 74.6 (+7.9) 81.6 (+3.2) \u2713 \u2713 \u2713 \u2713 82.6 (+22.9) 96.2 (+16.6) 98.4 (+12.1) 59.5 (+16.7) 76.8 (+10.1) 83.7 (+5.3) of depth maps with text and depth-aligned images). Table 5 shows depth-daRGB outperforming depthrendRGB by 6.8% on the ScanObjectNN dataset, confirming concerns that alignment with rendered images may lead to overfitting on specific 3D shapes. Moreover, daRGB-text & depth performs worst, suggesting that pre-aligning depth-aligned images with text compromises CLIP\u2019s ability to generate robust image representations, thus affecting subsequent depth-daRGB alignment efficacy. Depth-daRGB\u2019s superior performance on ModelNet40 and OmniObject3D compared to depth-text & daRGB shows that aligning depth maps with depth-aligned images indirectly aligns with text, making additional text alignment unnecessary and potentially limiting OpenDlign\u2019s generalization. Depth-Specific Texts. Table 4 indicates that OpenDlign outperforms others in zero-shot classification tasks using depth-specific prompts, whether it incorporates multimodal alignment or logit aggregation. This implies that the inaccuracies in recognition partly result from processing input data as typical RGB images, rather than as depth maps. Logits Aggregation. Results in Table 4 show that multi-view logit aggregation improves zeroshot classification on all datasets by combining logits from pre-trained and fine-tuned encoders. This approach effectively mitigates the catastrophic forgetting problem in OpenDlign\u2019s multimodal alignment, enabling it to recognize 3D objects identifiable by both pre-trained CLIP and OpenDlign. Varying Number of Depth Views. OpenDlign, like other depth-based methods, necessitates extracting multiple embeddings from multi-view depth maps for zero-shot inference. Figure 4 illustrates that OpenDlign\u2019s zero-shot accuracy on both ModelNet40 and OmniObject3D increases as the number of depth map views rises. Notably, OpenDlign achieves top benchmark performance, comparable to TAMM-PointBERT, with no more than two views, indicating a good balance between latency in embedding extraction and effective zero-shot classification. Furthermore, we observed a slower performance improvement on OmniObject3D, reflecting its finer-grained classification requirements. Table 5: Ablation study on various alignment strategies. Aligning with text modality was achieved by fine-tuning the image encoder. Alignment MNet40 ScanNN Omni3D Strategy Top 1 Top 5 Top 1 Top 5 Top 1 Top 5 depth-rendRGB 78.8 96.8 52.7 82.5 29.4 51.8 daRGB-text & depth 78.6 96.4 51.1 79.6 29.1 51.6 depth-text & daRGB 79.4 98.0 60.7 86.0 29.5 52.7 depth-daRGB (ours) 82.6 98.4 59.5 83.7 31.3 53.2 Figure 4: Impact of the number of views on OpenDlign\u2019s zero-shot performance. 5 Conclusion and Future Work In this study, we introduce OpenDlign, an open-world framework that enhances 3D representation by efficiently fine-tuning the CLIP with depth-aligned images, which exhibit more diverse textures and colors than CAD-rendered images. Our experiments demonstrate OpenDlign\u2019s superior performance in various 3D zero-shot and few-shot tasks, especially with real-scanned objects. However, generating depth-aligned images with the ControlNet model is slower than direct CAD rendering, which extends training dataset preparation time. Moreover, depth-aligned images can be created from both CAD objects and real 3D scenes, likely highlighting a greater texture diversity gap between depth-aligned and CAD-rendered scenes and further highlighting OpenDlign\u2019s 3D scene understanding capabilities. 9"
16
+ }
intro_8K/test_introduction_short_2404.16540v1.json ADDED
@@ -0,0 +1,17 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16540v1",
3
+ "title": "Approximation Algorithm of Minimum All-Ones Problem for Arbitrary Graphs",
4
+ "abstract": "Let $G=(V, E)$ be a graph and let each vertex of $G$ has a lamp and a button.\nEach button can be of $\\sigma^+$-type or $\\sigma$-type.\n Assume that initially some lamps are on and others are off. The button on\nvertex $x$ is of $\\sigma^+$-type ($\\sigma$-type, respectively) if pressing the\nbutton changes the lamp states on $x$ and on its neighbors in $G$ (the lamp\nstates on the neighbors of $x$ only, respectively). Assume that there is a set\n$X\\subseteq V$ such that pressing buttons on vertices of $X$ lights all lamps\non vertices of $G$. In particular, it is known to hold when initially all lamps\nare off and all buttons are of $\\sigma^+$-type.\n Finding such a set $X$ of the smallest size is NP-hard even if initially all\nlamps are off and all buttons are of $\\sigma^+$-type. Using a linear algebraic\napproach we design a polynomial-time approximation algorithm for the problem\nsuch that for the set $X$ constructed by the algorithm, we have $|X|\\le\n\\min\\{r,(|V|+{\\rm opt})/2\\},$ where $r$ is the rank of a (modified) adjacent\nmatrix of $G$ and ${\\rm opt}$ is the size of an optimal solution to the\nproblem.\n To the best of our knowledge, this is the first polynomial-time approximation\nalgorithm for the problem with a nontrivial approximation guarantee.",
5
+ "authors": "Chen Wang, Chao Wang, Gregory Z. Gutin, Xiaoyan Zhang",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "cs.DS",
9
+ "cats": [
10
+ "cs.DS",
11
+ "cs.DM"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "Knowledge AND Graph",
15
+ "gt": "The all-ones problem is a fundamental problem in applied mathematics, first proposed by Sutner in 1988 [17]. This problem has applications in linear cellular automata, as discussed in [18] and the references therein. To illustrate the problem, consider an n \u00d7 n grid with each area having a light lamp and a switch, and every lamp is initially off. Turning the switch on in some area lights the lamp in the area and the lamps in neighboring areas. Is there a set X of areas such that turning the switches on in X will turn on all the lamps? This problem can be extended to all graphs and we will call it the all-ones problem. Sutner [18] proved that a solution X exists for every graph. Later, several simple proofs of this result were given or rediscovered [3, 5, 7, 10, 13]. Many variants of the all-ones problem have been introduced and studied [1, 2, 6, 7, 11, 12, 19] over years. There are two important generalizations of the all-ones problem: (i) the initial state of lamps and switches can be arbitrary, \u2217Corresponding author. Email addresses: [email protected] (Chen Wang), [email protected] (Chao Wang), [email protected] (Gregory Z. Gutin), [email protected] (Xiaoyan Zhang) 1 arXiv:2404.16540v1 [cs.DS] 25 Apr 2024 Chen Wang et al. / Theoretical computer science 00 (2024) 1\u20138 2 i.e., some are on and the others are off, and (ii) every switch can be either of \u03c3+-type which changes the states of the lamp on its vertex and the lamps on the neighbors of its vertex or \u03c3-type which changes the states of the lamps on the neighbors of its vertex only. As a result of these two generalizations, the generalized all-ones problem may not have a solution X which lights all lamps. This generalized problem is studied in this paper. Under the condition that such a solution X exists for the generalized all-ones problem, it is natural to ask for X of minimum size. Unfortunately, this minimization problem is NP-hard even for all-ones problem [16]; we will call the minimization all-ones problem the min all-ones problem. Galvin and Lu both proved that the min all-ones problem of trees can be solved in linear time [9, 14]. Building on this, Chen proposed an algorithm for solving the min generalized all- ones problem on trees, with linear complexity [4]. Manuel et al. provided solutions for some of the widely studied architectures, such as binomial trees, butterfly, and benes networks [15]. Fleischer and Yu provided a detailed survey of the generalized all-ones problem [8]. More recently, Zhang extended the all-ones problem to the all-colors problem, in which each lamp had other states besides being on and off, and obtained additional findings on the all-colors problem [20]. Although significant research has been conducted on the all-ones problem on special graphs, such as trees, re- sulting in efficient algorithms, no polynomial-time approximation algorithms have been designed for the min all-ones problem on general graphs. Trees and cyclic graphs only represent a fraction of general graphs. In practical engi- neering scenarios, complex graphs are more common. In this paper, we design a polynomial-time approximation algorithm for the min generalized all-ones problem. If the problem has a solution, our algorithm outputs a solution X such that |X| \u2264min{r, (|V| + opt)/2}, where the rank of a (modified) adjacent matrix of G and opt is the size of an optimal solution to the problem. Apart from the introduction, this paper contains three sections. In Section 2, we introduce our approximation algorithm in detail. Section 3 shows the theoretical analysis and performance evaluation of this algorithm. Section 4 summarizes all the work of this paper and discusses future work.",
16
+ "main_content": "2.1. Linear algebraic formulation of min generalized all-ones problem It is not hard to see that the min generalized all-ones problem can be described as the following linear integer program over F2. For an arbitrary graph G = (V, E) with V = {v1, . . . , vn} we can get its modified adjacency matrix A = (aij)n\u00d7n such that for all i \ufffdj, aij = 1 if vivj \u2208E and ai j = 0 otherwise, and for all i \u2208{1, 2, . . . , n}, aii = 1 (aii = 0, respectively) if the switch on vi is of \u03c3+-type (of \u03c3-type, respectively). Combined with the initial state B = (b1, b2, \u00b7 \u00b7 \u00b7 , bn), where bi = 0 if the lamp on vertex vi is initially on and bi = 1 if the lamp is initially off, we can construct a system of linear equations AU = B over F2. The solution to this problem is the minimum of \ufffdU = \ufffdn i=1 ui. Suppose the rank of A is r and the corank is m so that m + r = n. If aii = 1 for all i \u2208{1, 2, \u00b7 \u00b7 \u00b7 , n}, the system of equations AU = B must have a solution, but if aii = 0, the system may not necessarily have a solution. However, as long as the system has at least one solution \u03b3 = (\u03b31, \u03b32, \u00b7 \u00b7 \u00b7 , \u03b3n)T, we can find all solutions of the system using the following system combining \u03b3 with the fundamental solution set \u03b7 = (\u03b71, \u03b72, \u00b7 \u00b7 \u00b7 , \u03b7m) within time O(n3). Here xi is the coefficient of the column vector \u03b7i = (\u03b71i, . . . , \u03b7ni)T. \u03b7X + \u03b3 = \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 ate to \u03b711 \u03b712 \u00b7 \u00b7 \u00b7 \u03b71m \u03b721 \u03b722 \u00b7 \u00b7 \u00b7 \u03b72m \u03b731 \u03b732 \u00b7 \u00b7 \u00b7 \u03b73m . . . . . . ... . . . \u03b7n1 \u03b7n2 \u00b7 \u00b7 \u00b7 \u03b7nm \uf8fc \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fd \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fe o e \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 m x1 x2 . . . xm \uf8fc \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fd \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fe im + \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 \u03b3 \u03b3 \u03b3 \u03b3 ze her \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 \u03b31 \u03b32 \u03b33 . . . \u03b3n \uf8fc \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fd \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fe \ufffdU fo (1) \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 \u00b7 \u00b7 \u00b7 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fe \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fe \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fe The problem is how to find the appropriate column vector X to minimize \ufffdU, under the condition that X has a total of 2m values. This problem was proven to be an NP-complete [16]. Therefore, the next subsection provides an approximation algorithm running in polynomial time. 2 Chen Wang et al. / Theoretical computer science 00 (2024) 1\u20138 3 2.2. Approximation algorithm Firstly, it can be observed that the polynomial time complexity (not exceeding O(n3)) of finding the matrix (\u03b71, \u03b72, \u00b7 \u00b7 \u00b7 , \u03b7m) and the special solution \u03b3 makes this process cost-effective in solving NP-complete problems. Secondly, it is challenging to identify alternative methods capable of directly computing the optimal solution without obtaining all the solutions. Even if such a solution is obtained, verification is often infeasible. When \u03b7 and \u03b3 are known, we need to find the X that minimizes P U. \u03b7X + \u03b3 = \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 \u03b711 \u03b712 \u00b7 \u00b7 \u00b7 \u03b71m \u03b721 \u03b722 \u00b7 \u00b7 \u00b7 \u03b72m \u03b731 \u03b732 \u00b7 \u00b7 \u00b7 \u03b73m . . . . . . ... . . . \u03b7n1 \u03b7n2 \u00b7 \u00b7 \u00b7 \u03b7nm \uf8fc \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fd \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fe \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 x1 x2 . . . xm \uf8fc \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fd \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fe + \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 \u03b31 \u03b32 \u03b33 . . . \u03b3n \uf8fc \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fd \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fe = \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 \u03b41 \u03b42 \u03b43 . . . \u03b4n \uf8fc \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fd \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fe + \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 \u03b31 \u03b32 \u03b33 . . . \u03b3n \uf8fc \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fd \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fe = \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 u1 u2 u3 . . . un \uf8fc \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fd \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fe (2) Proposition 2.1. Row exchanges of matrix \u03b7 do not change P U. Proof. Multiply both sides of Equation 2 by matrix P as shown in Equation 3. Matrix P is a product of elementary matrices that perform row exchanges. This operation essentially reorders the elements of vector U, but does not change the P U. P(\u03b7X + \u03b3) = P(\u03b4 + \u03b3) = PU (3) Proposition 2.2. Column transformation of matrix \u03b7 does not change P U. Proof. Let Qm\u2217m be a full rank matrix, and QZ = X, with the following equation. \u03b7X + \u03b3 = \u03b7QZ + \u03b3 = (\u03b7Q)Z + \u03b3 = \u03f5Z + \u03b3 = \u03b4 + \u03b3 = U (4) Q is the transition matrix between X and Z, and Q is full rank. When we find that X makes P U the smallest, we can definitely find the corresponding Z, so that the obtained U is the same. We can transform the \u03b7 column into an echelon form using row exchanges and column transformations, as shown in the following equation, with a complexity of O(m2n). The question mark indicates that the value of the number is uncertain, which may be 0 or 1. We can divide the matrix into m + 1 parts based on the echelon and assume the last line of the i-th part is line ki (i = 0, 1, \u00b7 \u00b7 \u00b7 , m) for the rank of matrix \u03b7 is always m. Part 0 is the most special, with all 3 Chen Wang et al. / Theoretical computer science 00 (2024) 1\u20138 4 elements in each row being 0. To ensure that Equation 4 holds, there should be (u1, u2, \u00b7 \u00b7 \u00b7 , uk0) = (\u03b31, \u03b32, \u00b7 \u00b7 \u00b7 , \u03b3k0). \u03b7Q = \u03f5 = \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 0 0 0 0 \u00b7 \u00b7 \u00b7 0 . . . . . . . . . . . . ... . . . 1 0 0 0 \u00b7 \u00b7 \u00b7 0 1 0 0 0 \u00b7 \u00b7 \u00b7 0 . . . . . . . . . . . . ... . . . \u03f5(k1+1)1 1 0 0 \u00b7 \u00b7 \u00b7 0 \u03f5(k1+2))1 1 0 0 \u00b7 \u00b7 \u00b7 0 . . . . . . . . . . . . ... . . . \u03f5(k2+1)1 \u03f5(k2+1)2 1 0 \u00b7 \u00b7 \u00b7 0 \u03f5(k2+2)1 \u03f5(k2+2)2 1 0 \u00b7 \u00b7 \u00b7 0 . . . . . . . . . . . . ... . . . \u03f5km1 \u03f5km2 \u03f5km3 \u03f5km4 \u00b7 \u00b7 \u00b7 1 \uf8fc \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fd \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fe (5) In the following m parts, we will use greedy algorithms to solve for the Z value on the Echelon of each part. Part 1 of the linear Equation 5 is shown in Equation 6. (\u03b3k0+1, \u03b3k0+2, \u00b7 \u00b7 \u00b7 , \u03b3k1) is known and (\u03b4k0+1, \u03b4k0+2, \u00b7 \u00b7 \u00b7 , \u03b4k1) is unknown. It is important to ensure that \u03b4i is as similar to \u03b3i as possible. At this moment z1 only has two possible values: 0 and 1. Therefore, the idea of a greedy algorithm is adopted here. If there are more 0\u2019s than 1\u2019s in the range from \u03b3k0+1 to \u03b3k1, then z1 is set to 0. If there are more 1\u2019s than 0\u2019s, then z1 is set to 1. Therefore, we can directly obtain the value of x1 by solving it here, while ensuring that Pk1 i=k0+1 ui \u2264(k1 \u2212k0)/2. \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 z1 = \u03b4k0+1 z1 = \u03b4k0+2 z1 = \u03b4k0+3 . . . z1 = \u03b4k1 Compare to \u03b3k0+1 \u03b3k0+2 \u03b3k0+3 . . . \u03b3k1 (6) The value of z2 can be calculated through z1. Part 2 of Equation 5 can be written as shown in Equation 7. (\u03b3k1+1, \u03b3k1+2, \u00b7 \u00b7 \u00b7 , \u03b3k2) is known, and (\u03b4k1+1, \u03b4k1+2, \u00b7 \u00b7 \u00b7 , \u03b4k2) needs to satisfy the Equation 5 and be as similar to (\u03b3k1+1, \u03b3k1+2, \u00b7 \u00b7 \u00b7 , \u03b3k2) as possible. The variables in Equation 7 are z1 and z2, and z1 has been solved before through a greedy algorithm, so the unknown variable is only z2. Since \u03f5i1z1 are constants, we can move them from the left side of the equation to the right side, and these two equation systems are obviously equivalent. Then, we need to ensure that \u03f5(k1+i)1z1 +\u03b4k1+i is as similar to \u03b3k1+i as possible. It can be seen that another transformation can be carried out, which is equivalent to making \u03b4k1+i as similar to \u03f5(k1+i)1z1 +\u03b3k1+i as possible. In this way, we have separated the variables: the left side of the equation is the variable z2, the right side of the equation is the variable \u03b4k1+i(\u03b4k1+i = z2), and the column of \u03f5(k1+i)1z1 + \u03b3k1+i are constants. At this point, we find that part 2 of Equation 5 has been transformed to be very similar to part 1. Therefore, if there are more 0\u2019s than 1\u2019s in the range from \u03f5(k1+i)1z1 + \u03b3k1+1 to \u03f5(k2)1z1 + \u03b3k2, then z2 is set to 0. If there are more 1\u2019s than 0\u2019s, then z2 is set to 1. Therefore, the value of z2 can be solved here and Pk2 i=k1+1 ui \u2264(k2 \u2212k1)/2 is ensured. After obtaining the value of z2, the value of \u03f5i1z1 + \u03f5i2z2 can be calculated, and the value of z3 can be calculated again. Following this pattern, the values of Z = (z1, z2, \u00b7 \u00b7 \u00b7 , zm) can be obtained. Then \u03f5Z + \u03b3 = U, we obtain U. The complete algorithm is shown in Algorithm 1. 4 Chen Wang et al. / Theoretical computer science 00 (2024) 1\u20138 5 \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 \u03f5(k1+1)1z1 + z2 = \u03b4k1+1 \u03f5(k1+2)1z1 + z2 = \u03b4k1+2 \u03f5(k1+3)1z1 + z2 = \u03b4k1+3 . . . \u03f5(k2)1z1 + z2 = \u03b4k2 Compare to \u03b3k1+1 \u03b3k1+1 \u03b3k1+1 . . . \u03b3k2 \u21d3 \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 z2 = \u03b4k1+1 + \u03f5(k1+1)1z1 z2 = \u03b4k1+2 + \u03f5(k1+2)1z1 z2 = \u03b4k1+3 + \u03f5(k1+3)1z1 . . . z2 = \u03b4k2 + \u03f5(k2)1z1 Compare to \u03b3k1+1 \u03b3k1+1 \u03b3k1+1 . . . \u03b3k2 \u21d3 \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 z2 = \u03b4k1+1 z2 = \u03b4k1+2 z2 = \u03b4k1+3 . . . z2 = \u03b4k2 Compare to \u03b3k1+1 + \u03f5(k1+1)1z1 \u03b3k1+2 + \u03f5(k1+2)1z1 \u03b3k1+3 + \u03f5(k1+3)1z1 . . . \u03b3k2 + \u03f5(k2)1z1 (7) 3. Algorithm performance evaluation In this section, we present the complexity of Algorithm 1 and analyze its approximation guarantees. Proposition 3.1. Algorithm 1 has a complexity of O(n3), and if the fundamental solution set \u03b7 for the equation AU = B has been obtained and is in column echelon form, then the complexity will reduce to O(mn). Proof. In Algorithm 1, step 1 involves solving a system of linear equations, which has a complexity of O(n3). Step 8 involves transforming the matrix etaup into column echelon form, which has a complexity of O(m2n) where m \u2264n. Step 2 to 7 is O(1). Step 9 involves calculating the location of pivots in the column echelon matrix \u03f5, which has a complexity of O(mn). Steps 10 to 24 consist of a nested loop with three layers. However, each element in the matrix \u03f5 is only accessed once, resulting in a complexity of O(mn). Proposition 3.2. If a given instance I of the min generalized all-ones problem has a solution, the value sol of the solution obtained by Algorithm 1 satisfies sol \u2264r, where r is the rank of the matrix A. Proof. In Equation 6 and 7, if \u03b4i = \u03b3i, then the resulting ui will be 0. In each part, we always make more ui equal to 0, so each part has at least one ui that takes on the value of 0. Furthermore, the rank of \u03b7 is m = n \u2212r because \u03b7 is the fundamental solution set of the system AU = B. Therefore, at least m values of ui are 0, so P U \u2264n \u2212m = r. Proposition 3.3. If a given instance I of the min generalized all-ones problem has a solution, the value sol of the solution obtained by Algorithm 1 satisfies sol \u2264(n + opt)/2, where opt is the value of an optimal solution of I. 5 Chen Wang et al. / Theoretical computer science 00 (2024) 1\u20138 6 Algorithm 1: Approximation Algorithm of Minimum All-Ones Problem Data: An adjacency matrix An\u2217n and a initial state B1\u2217n Result: Answer U 1 (\u03b7, \u03b3, m) = solveEquations(A,B); 2 if m == 0 and \u03b3 is null then 3 return null; 4 end 5 if m == 0 and \u03b3 is not null then 6 return \u03b3; 7 end 8 (P, \u03f5, Q) = matrixEchelon(\u03b7, \u03b3); 9 K = calculatePart(\u03f5); 10 for i from 1 to m do 11 cnt, tmp = 0; 12 for j from K[i \u22121] + 1 to K[i] do 13 for p from 1 to i \u22121 do 14 tmp = tmp \u2295(\u03f5[ j][p] \u2217X[p]) 15 end 16 cnt = cnt + (tmp \u2295\u03b3[ j]); 17 end 18 if cnt \u2264K[i] \u2212K[i \u22121])/2 then 19 X[i] = 0; 20 end 21 else 22 X[i] = 1; 23 end 24 end 25 U = P \u2217(\u03f5 \u2217X + \u03b3); 26 return U; Proof. In the Subsection 2.2, we partitioned the matrix \u03b7 into m + 1 parts and proved that for the 1 to m parts, Pki+1 ki+1 u \u2264(ki+1 \u2212ki)/2. Only the 0th part remains to be discussed. The 0th part is quite special in that it contains no variables, only differing in the value of \u03b3. Let the number of 0\u2019s in \u03b3 in the 0th part be g0 and the number of 1\u2019s be g1. g0 indicates that the switch at that point must not be pressed; otherwise, the conditions for the all-ones problem cannot be satisfied. Similarly, g1 indicates that the switch must be pressed. Now we have: sol \u2264g1 + (n \u2212g1 \u2212g0)/2 = (n + g1 \u2212g0)/2 (8) Then add the parameter opt. We can easily prove that sol \u2265opt \u2265g1, because the switches for these points must be pressed in any case. So we have g1 \u2264opt \u2264sol \u2264(n + g1 \u2212g0)/2 (9) Next, we will bound sol by replacing g1 with opt and g0 with 0, resulting in the following expression: sol \u2264(n + opt)/2 (10) 4. Conclusion and future work This article presents an approximation algorithm for the min generalized all-ones problem on arbitrary graphs, making it possible to process the problem in batches. The algorithm has a complexity of O(n3). If the equation 6 Chen Wang et al. / Theoretical computer science 00 (2024) 1\u20138 7 Figure 1. The range of possible values for sol system AU = B has been solved and the solution is in column echelon form, the complexity will be reduced to O(n(n \u2212r)), which is the lowest complexity for general graphs. The upper bound of the solution value sol obtained by this algorithm satisfies the inequality sol \u2264(n + opt)/2 and sol \u2264r. This ensures that the obtained solution, as shown in Figure 1, is always the optimal half. In future work, there still remain two questions to be solved. One of them is whether there is a polynomialtime algorithm for the min generalized all-ones problem which always finds a solution of size at most c \u02d9 opt for some constant c? The other one is whether we can get such an algorithm for the minimum all-colors problem?"
17
+ }
intro_8K/test_introduction_short_2404.16563v1.json ADDED
@@ -0,0 +1,16 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16563v1",
3
+ "title": "Evaluating Large Language Models on Time Series Feature Understanding: A Comprehensive Taxonomy and Benchmark",
4
+ "abstract": "Large Language Models (LLMs) offer the potential for automatic time series\nanalysis and reporting, which is a critical task across many domains, spanning\nhealthcare, finance, climate, energy, and many more. In this paper, we propose\na framework for rigorously evaluating the capabilities of LLMs on time series\nunderstanding, encompassing both univariate and multivariate forms. We\nintroduce a comprehensive taxonomy of time series features, a critical\nframework that delineates various characteristics inherent in time series data.\nLeveraging this taxonomy, we have systematically designed and synthesized a\ndiverse dataset of time series, embodying the different outlined features. This\ndataset acts as a solid foundation for assessing the proficiency of LLMs in\ncomprehending time series. Our experiments shed light on the strengths and\nlimitations of state-of-the-art LLMs in time series understanding, revealing\nwhich features these models readily comprehend effectively and where they\nfalter. In addition, we uncover the sensitivity of LLMs to factors including\nthe formatting of the data, the position of points queried within a series and\nthe overall time series length.",
5
+ "authors": "Elizabeth Fons, Rachneet Kaur, Soham Palande, Zhen Zeng, Svitlana Vyetrenko, Tucker Balch",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "cs.CL",
9
+ "cats": [
10
+ "cs.CL"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "LLM Fairness",
14
+ "gt": "Time series analysis and reporting play a crucial role in many areas like healthcare, finance, climate, etc. With the recent advances in Large Language Models (LLMs), integrating them in time series analysis and reporting processes presents a huge po- tential for automation. Recent works have adapted general-purpose LLMs for time series understand- ing in various specific domains, such as seizure localization in EEG time series (Chen et al., 2024), cardiovascular disease diagnosis in ECG time se- ries (Qiu et al., 2023), weather and climate data understanding (Chen et al., 2023), and explainable financial time series forecasting (Yu et al., 2023). Despite these advancements in domain-specific LLMs for time series understanding, it is crucial to conduct a systematic evaluation of general-purpose LLMs\u2019 inherent capabilities in generic time se- ries understanding, without domain-specific fine- tuning. This paper aims to uncover the pre-existing strengths and weaknesses in general-purpose LLMs regarding time series understanding, such that prac- titioners can be well informed of areas where the general-purpose LLMs are readily applicable, and focus on areas for improvements with targeted ef- forts during fine-tuning. To systematically evaluate the performance of general-purpose LLMs on generic time series un- derstanding, we propose a taxonomy of time series features for both univariate and multivariate time series. This taxonomy provides a structured cat- egorization of core characteristics of time series across domains. Building upon this taxonomy, we have synthesized a diverse dataset of time series covering different features in the taxonomy. This dataset is pivotal to our evaluation framework, as it provides a robust basis for assessing LLMs\u2019 abil- ity to interpret and analyze time series data accu- rately. Specifically, we examine the state-of-the-art LLMs\u2019 performance across a range of tasks on our dataset, including time series features detection and classification, data retrieval as well as arithmetic reasoning. Our contributions are three-fold: \u2022Taxonomy - we introduce a taxonomy that pro- vides a systematic categorization of important time series features, an essential tool for stan- dardizing the evaluation of LLMs in time series understanding. \u2022 Diverse Time Series Dataset - we synthesize a comprehensive time series dataset, ensuring a broad representation of the various types of time series, encompassing the spectrum of features identified in our taxonomy. 1 arXiv:2404.16563v1 [cs.CL] 25 Apr 2024 \u2022 Evaluations of LLMs - our evaluations pro- vide insights into what LLMs do well when it comes to understanding time series and where they struggle, including how they deal with the format of the data, where the query data points are located in the series and how long the time series is.",
15
+ "main_content": "2.1 Large Language Models Large Language Models (LLMs) are characterized as pre-trained, Transformer-based models endowed with an immense number of parameters, spanning from tens to hundreds of billions, and crafted through the extensive training on vast text datasets (Zhang et al., 2024; Zhao et al., 2023). Notable examples of LLMs include Llama2 (Touvron et al., 2023), PaLM (Chowdhery et al., 2023), GPT3 (Brown et al., 2020), GPT4 (Achiam et al., 2023), and Vicuna-13B (Chiang et al., 2023). These models have surpassed expectations in numerous language-related tasks and extended their utility to areas beyond traditional natural language processing. For instance, Wang et al. (2024) have leveraged LLMs for the prediction and modeling of human mobility, Yu et al. (2023) for explainable financial time series forecasting, and Chen et al. (2024) for seizure localization. This expansive application of LLMs across diverse domains sets the stage for their potential utility in the analysis of time series data, a domain traditionally governed by statistical and machine learning models. 2.2 Language models for time series Recent progress in time series forecasting has capitalized on the versatile and comprehensive abilities of LLMs, merging their language expertise with time series data analysis. This collaboration marks a significant methodological change, underscoring the capacity of LLMs to revolutionize conventional predictive methods with their advanced information processing skills. In the realm of survey literature, comprehensive overviews provided by Zhang et al. (2024) and Jiang et al. (2024) offer valuable insights into the integration of LLMs in time series analysis, highlighting key methodologies, challenges, and future directions. Notably, Gruver et al. (2023) have set benchmarks for pretrained LLMs such as GPT-3 and Llama2 by assessing their capabilities for zero-shot forecasting. Similarly, Xue and Salim (2023) introduced Prompcast, and it adopts a novel approach by treating forecasting as a question-answering activity, utilizing strategic prompts. Further, Yu et al. (2023) delved into the potential of LLMs for generating explainable forecasts in financial time series, tackling inherent issues like cross-sequence reasoning, integration of multi-modal data, and interpretation of results, which pose challenges in conventional methodologies. Additionally, Zhou et al. (2023) demonstrated that leveraging frozen pre-trained language models, initially trained on vast corpora, for time series analysis that could achieve comparable or even state-of-the-art performance across various principal tasks in time series analysis including imputation, classification and forecasting. 2.3 LLMs for arithmetic tasks Despite their advanced capabilities, LLMs face challenges with basic arithmetic tasks, crucial for time series analysis involving quantitative data (Azerbayev et al., 2023; Liu and Low, 2023). Research has identified challenges such as inconsistent tokenization and token frequency as major barriers (Nogueira et al., 2021; Kim et al., 2021). Innovative solutions, such as Llama2\u2019s approach to digit tokenization (Yuan et al., 2023), highlight ongoing efforts to refine LLMs\u2019 arithmetic abilities, enhancing their applicability in time series analysis. 3 Time Series Data 3.1 Taxonomy of Time Series Features Our study introduces a comprehensive taxonomy for evaluating the analytical capabilities of Large Language Models (LLMs) in the context of time series data. This taxonomy categorizes the intrinsic characteristics of time series, providing a structured basis for assessing the proficiency of LLMs in identifying and extracting these features. Furthermore, we design a series of datasets following the proposed taxonomy and we outline an evaluation framework, incorporating specific metrics to quantify model performance accurately across various tasks. The proposed taxonomy encompasses critical aspects of time series data that are frequently analyzed for different applications. Table 1 shows the selected features in increasing complexity, and each sub-feature. We evaluate the LLM in this taxonomy in a two-step process. In first place, we evaluate if the LLM can detect the feature, and in a 2 Time series characteristics Description Sub-categories Univariate Trend Directional movements over time. Up , Down Seasonality and Cyclical Patterns Patterns that repeat over a fixed or irregular period. Fixed-period \u2013 constant amplitude , Fixed-period \u2013 varying amplitude , Shifting period , Multiple seasonality Volatility Degree of dispersion of a series over time. Constant Increasing , Clustered , Leverage effect . Anomalies Significant deviations from typical patterns. Spikes , step-spikes , level shifts , temporal disruptions Structural Breaks Fundamental shifts in the series data, such as regime changes or parameter shifts. Regime changes , parameter shifts Statistical Properties Characteristics like fat tails, and stationarity versus non-stationarity. Fat tails , Stationarity Multivariate Correlation Measure the linear relationship between series. Useful for predicting one series from another if they are correlated. Positive Negative Cross-Correlation Measures the relationship between two series at different time lags, useful for identifying lead or lag relationships. Positive direct , Positive lagged , Negative direct , Negative lagged Dynamic Conditional Correlation Assesses situations where correlations between series change over time. Correlated first half Correlated second half Table 1: Taxonomy of time series characteristics. second step, we evaluate if the LLM can identify the sub-category of the feature. A detailed description of the process is described in Sec. 6.1.2. 3.2 Synthetic Time Series Dataset Leveraging our taxonomy, we construct a diverse synthetic dataset of time series, covering the features outlined in the previous section. We generated in total 9 datasets with 200 time series samples each. Within each dataset the time series length is randomly chosen between 30 and 150 to encompass a variety of both short and long time series data. In order to make the time series more realistic, we add a time index, using predominantly daily frequency. Fig. 1 showcases examples of our generated univariate time series. Each univariate dataset showcases a unique single-dimensional patterns, whereas multivariate data explore series interrelations to reveal underlying patterns. Please see Table 4 in the appendix for examples of each univariate dataset, and Table 5 for visual examples of the multivariate cases. For a detailed description of the generation of each dataset, refer to Sec. A in the Appendix. 4 Time Series Benchmark Tasks Our evaluation framework is designed to assess the LLMs\u2019 capabilities in analyzing time series across the dimensions in our taxonomy (Sec. 3.1). The evaluation includes four primary tasks: Feature Detection This task evaluates the LLMs\u2019 ability to identify the presence of specific features within a time series, such as trend, seasonality, or anomalies. For instance, given a time series dataset with an upward trend, the LLM is queried to determine if a trend exists. Queries are structured as yes/no questions to assess the LLMs\u2019 ability to recognize the presence of specific time series features, such as \"Is a trend present in the time series?\" Feature Classification Once a feature is detected, this task assesses the LLMs\u2019 ability to classify the feature accurately. For example, if a trend is present, the LLM must determine whether it is upward, downward, or non-linear. This task involves a QA setup where LLMs are provided with definitions of sub-features within the prompt. Performance is evaluated based on the correct identification of sub-features, using the F1 score to balance precision and recall. This task evaluates the models\u2019 depth of understanding and ability to distinguish between similar but distinct phenomena. Information Retrieval Evaluates the LLMs\u2019 accuracy in retrieving specific data points, such as values on a given date. 3 Figure 1: Example synthetically generated time series. Arithmetic Reasoning Focuses on quantitative analysis tasks, such as identifying minimum or maximum values. Accuracy and Mean Absolute Percentage Error (MAPE) are used to measure performance, with MAPE offering a precise evaluation of the LLMs\u2019 numerical accuracy. Additionally, to account for nuanced aspects of time series analysis, we propose in Sec. 5.2 to study the influence of multiple factors, including time series formatting, location of query data point in the time series and time series length. 5 Performance Metrics and Factors 5.1 Performance Metrics We employ the following metrics to report the performance of LLMs on various tasks. F1 Score Applied to feature detection and classification, reflecting the balance between precision and recall. Accuracy Used for assessing the information retrieval and arithmetic reasoning tasks. Mean Absolute Percentage Error (MAPE) Employed for numerical responses in the information retrieval and arithmetic reasoning tasks, providing a measure of precision in quantitative analysis. 5.2 Performance Factors We identified various factors that could affect the performance of LLMs on time series understanding, for each we designed deep-dive experiments to reveal the impacts. Time Series Formatting Extracting useful information from raw sequential data as in the case of numerical time series is a challenging task for LLMs. The tokenization directly influences how the patterns are encoded within tokenized sequences (Gruver et al., 2023), and methods such as BPE separate a single number into tokens that are not aligned. On the contrary, Llama2 has a consistent tokenization of numbers, where it splits each digit into an individual token, which ensures consistent tokenization of numbers (Liu and Low, 2023). We study different time series formatting approaches to determine if they influence the LLMs performance to capture the time series information. In total we propose 9 formats, ranging from simple CSV to enriched formats with additional information. Time Series Length We study the impact that the length of the time series has in the retrieval task. Transformer-based models use attention mechanisms to weigh the importance of different parts of the input sequence. Longer sequences can dilute the attention mechanism\u2019s effectiveness, potentially making it harder for the model to focus on the most relevant parts of the text (Vaswani et al., 2017). Position Bias Given a retrieval question, the position of where the queried data point occurs in the time series might impact the retrieval accuracy. Studies have discovered recency bias (Zhao et al., 2021) in the task of few-shot classification, where the LLM tends to repeat the label at the end. Thus, it\u2019s important to investigate whether LLM exhibits similar bias on positions in the task of time series 4 understanding. 6 Experiments 6.1 Experimental setup 6.1.1 Models We evaluate the following LLMs on our proposed framework: 1) GPT4. (Achiam et al., 2023) 2) GPT3.5. 3) Llama2-13B (Touvron et al., 2023), and 4) Vicuna-13B (Chiang et al., 2023). We selected two open-source models, Llama2 and Vicuna, each with 13 billion parameters, the version of Vicuna is 1.5 was trained by fine-tuning Llama2. Additionally we selected GPT4 and GPT3.5 where the number of parameters is unknown. In the execution of our experiments, we used an Amazon Web Services (AWS) g5.12xlarge instance, equipped with four NVIDIA A10G Tensor Core GPUs, each featuring 24 GB of GPU RAM. This setup was essential for handling both extensive datasets and the computational demands of LLMs. 6.1.2 Prompts The design of prompts for interacting with LLMs is separated into two approaches: retrieval/arithmetic reasoning and detection/classification questioning. Time series characteristics To evaluate the LLM reasoning over time series features, we use a two-step prompt with an adaptive approach, dynamically tailoring the interaction based on the LLM\u2019s responses. The first step involves detection, where the model is queried to identify relevant features within the data. If the LLM successfully detects a feature, we proceed with a follow-up prompt, designed to classify the identified feature between multiple sub-categories. For this purpose, we enrich the prompts with definitions of each subfeature (e.g. up or down trend), ensuring a clearer understanding and more accurate identification process. An example of this two-turn prompt is shown in Fig. 2. The full list can be found in Sec. F of the supplementary. Information Retrieval/Arithmetic Reasoning We test the LLM\u2019s comprehension of numerical data represented as text by querying it for information retrieval and numerical reasoning, as exemplified in Fig. 3 and detailed in the supplementary Sec. F. Trend Prompts \"Input:<time series>.\" Question 1: Detection \"Question: can you detect a general upward or downward trend in this time series? Answer yes or no only.\" Question 2: Classification \"Select one of the following answers: (a) the time series has a positive trend, (b) the time series has a negative trend. Provide your answer as either (a) or (b).\" Figure 2: Example of multi-turn prompt template used for time series feature detection and classification. Information retrieval and arithmetic reasoning prompts \"Input:<time series>. Given the input time series, please provide brief and precise answers to the following questions and format your responses in a dictionary: \u2019max_value\u2019: \u2019Maximum value and its date.\u2019, \u2019min_value\u2019: \u2019Minimum value and its date.\u2019, \u2019value_on_date\u2019: \u2019Value of the time series on <date>\u2019. Note: Only provide the numerical value and/or the date as the answer for each question.\" Figure 3: The prompt template used for information retrieval and arithmetic reasoning evaluation. 6.2 Benchmark Results In Table 2, we display the main results for all tasks outlined in Sec. 4. The results for univariate time series feature detection and classification tasks illustrate GPT4\u2019s robustness in trend and seasonality detection, substantially outperforming Llama2 and Vicuna. However, the detection of structural breaks and volatility presents challenges across all models, with lower accuracy scores. GPT4 excels in trend classification tasks, demonstrating superior performance. However, in classifying seasonality, outliers, and structural breaks, performance is mixed, with Vicuna sometimes surpassing Llama2, highlighting the distinct strengths of each model. Figure 4 summarizes the accuracy performance for the information retrieval and arithmetic reasoning tasks, and F1 score for the feature detection and classification tasks for all models. In multivariate time series feature detection and classification tasks, all models achieve moderate 5 Table 2: Performances across all reasoning tasks (Bold indicates best performance). Metric GPT4 GPT3.5 Llama2 Vicuna Univariate time series characteristics Feature detection Trend F1score 0.88 0.43 0.54 0.60 Seasonality F1score 0.98 0.70 0.71 0.47 Outlier F1score 0.53 0.53 0.46 0.53 Struct. break F1score 0.67 0.56 0.43 0.52 Volatility F1score 0.45 0.50 0.45 0.50 Fat Tails F1score 0.43 0.51 0.31 0.44 Stationarity F1score 0.31 0.31 0.31 0.31 Feature classification Trend F1score 0.98 0.47 0.41 0.61 Seasonality F1score 0.25 0.15 0.17 0.20 Outlier F1score 0.67 0.17 0.07 0.28 Struct. break F1score 0.34 0.48 0.31 0.36 Volatility F1score 0.13 0.16 0.10 0.23 Multivariate time series characteristics Fixed Corr. F1score 0.40 0.42 0.30 0.32 Lagged Corr. F1score 0.44 0.47 0.22 0.33 Changing Corr. F1score 0.43 0.41 0.23 0.41 Information Retrieval Value on Date Acc 1.00 0.94 0.38 0.48 Value on Date MAPE 0.00 0.10 0.65 0.78 Arithmetic Reasoning Min Value Acc 1.00 0.99 0.58 0.66 MAPE 0.00 0.04 16.18 12.24 Min Date Acc 0.98 0.94 0.38 0.55 Max Value Acc 0.97 0.92 0.56 0.46 MAPE 0.01 0.08 0.95 0.74 Max Date Acc 0.96 0.88 0.46 0.42 accuracy, suggesting potential for enhancement in intricate multivariate data analysis. For information retrieval tasks, GPT4 outperforms GPT3.5 and other models, achieving perfect accuracy in identifying the value on a given date. It also maintains a low Mean Absolute Percentage Error (MAPE), indicative of its precise value predictions. The arithmetic reasoning results echo these findings, with GPT4 displaying superior accuracy, especially in determining minimum and maximum values within a series. 6.3 Deep Dive on Performance Factors Time Series Formatting We present four formatting approaches in this section, csv, which is a common comma separated value, plain where the time series is formatted as Date:YYYY-MM-DD,Value:num for each pair datevalue. We also use the formatting approach proposed by Gruver et al. (2023) which we denominate spaces that adds blank spaces between each digit of the time series, tokenizing each digit individually, and symbol, an enriched format where we add a column to the time series with arrows indicating if the value has moved up, down or remained unchanged. Examples of every approach can be found in Sec. E in the Appendix. Table 3 shows the results for the four time series formatting strategies. For the full results, please refer to Tables 10 and 9. For the information retrieval and arithmetic reasoning tasks, the plain formatting yields better results across all models. This approach provides more structure to the input, and outperforms other formats in a task where the connection between time and value is important. For the detection and classification tasks, the plain formatting does not yield better results. Interestingly the symbol formatting that adds an additional column to the time series yields better results in the trend classification task. This means that the LLMs can correctly map the symbol to the time series movement and use it to achieve the best performance in trend classification. Furthermore, GPT3.5 leverages this additional information in the trend and anomalies datasets but not in the seasonality dataset. Time Series Length Figure 5 shows the performance of GPT3.5, Llama2 and Vicuna on three datasets, trend, seasonality and outliers wich have time series with different lengths. We observe that GPT3.5 retrieval performance degrades slowly with increasing sequence length. Llama2 and and Vicuna suffer a more steep degradation especially from time series of length 30 steps to 60 steps; for longer sequences the degradation in performance becomes linear. Position Bias We carry out a series of experiments to determine how the position of the target value affects task performance across various types of time series data. We address progressively more complex objectives: 1) identifying the presence of a value in a time series without a specified date (D.1); 2) retrieving a value corresponding to a specific date (D.2); and 3) identifying the minimum and maximum values (D.3). We cover a range of time series data, from monotonic series without noise to those with noise, sinusoidal patterns, data featuring outliers (spikes), and Brownian motion scenarios, each adding a layer of complexity. We examine how the position of the target value within the four quadrants \u2014 1st, 2nd, 3rd, and 4th\u2014 affects the efficacy of these tasks across the varied time series landscapes. This approach helps reveal the influence of position on different LLMs (GPT3.5, Llama2, and Vicuna) in the task of time series understanding. 6 (a) IR and arithmetic reasoning (b) Feature detection (c) Feature classification Figure 4: Feature detection and classification scores of GPT4, GPT3.5, Vicuna and Llama2. GPT3.5 Llama2 Vicuna csv plain spaces symbol csv plain spaces symbol csv plain spaces symbol Min value 0.98 0.99 0.79 0.98 0.55 0.58 0.20 0.58 0.63 0.67 0.17 0.62 Min date 0.94 0.95 0.69 0.93 0.28 0.39 0.09 0.29 0.50 0.55 0.13 0.49 Max value 0.92 0.92 0.54 0.94 0.48 0.56 0.05 0.52 0.49 0.46 0.01 0.50 Max date 0.88 0.88 0.51 0.89 0.34 0.46 0.04 0.41 0.38 0.42 0.07 0.41 Value on date 0.94 0.94 0.82 0.94 0.39 0.38 0.07 0.34 0.36 0.48 0.09 0.41 Trend det 0.42 0.41 0.42 0.42 0.51 0.44 0.34 0.40 0.51 0.49 0.54 0.45 Trend class 0.74 0.55 0.53 0.92 0.41 0.48 0.43 0.62 0.49 0.58 0.44 0.64 Season det 0.61 0.77 0.63 0.47 0.55 0.24 0.40 0.50 0.47 0.47 0.53 0.54 Season class 0.27 0.19 0.17 0.18 0.11 0.13 0.08 0.10 0.14 0.14 0.14 0.15 Outlier det 0.55 0.52 0.52 0.62 0.44 0.35 0.41 0.47 0.49 0.53 0.54 0.49 Outlier class 0.17 0.17 0.17 0.17 0.13 0.14 0.14 0.08 0.19 0.14 0.14 0.08 Table 3: Top: Time series feature detection and classification performance measured with F1 score. Bottom: Time series information retrieval and arithmetic reasoning performance measured by accuracy for different time series formats. (Bold indicates best performance) (a) Trend (b) Seasonality (c) Outliers Figure 5: Retrieval performance for different time series lengths. We consider the presence of position bias when the maximum performance gap between quadrants exceeds 10%. Given this criterion, our analysis provides the following key takeaways on position bias impacting LLM performance across the defined tasks: \u2022 Pronounced position bias is observed across all tasks and LLMs \u2013GPT models show significant bias exclusively in complex tasks that involve arithmetic reasoning. \u2013Both Llama2 and Vicuna demonstrate position biases across all tasks, from the simplest to the most complex ones. 7 \u2022The degree of complexity in the time series data tends to increase the extent of position bias observed within each task. Refer to Section D in the appendix, where we offer a detailed analysis of position bias across each task to further substantiate these conclusions. 7 Conclusion In conclusion, we provide a critical examination of general-purpose Large Language Models (LLMs) in the context of time series understanding. Through the development of a comprehensive taxonomy of time series features and the synthesis of a diverse dataset that encapsulates these features, we have laid a solid foundation for evaluating the capabilities of LLMs in understanding and interpreting time series data. Our systematic evaluation sheds light on the inherent strengths and limitations of these models, offering valuable insights for practitioners aiming to leverage LLMs in time series understanding. Recognizing the areas of weakness and strength in general-purpose LLMs\u2019 current capabilities allows for targeted enhancements, ensuring that these powerful models can be more effectively adapted to specific domains. 8 Limitations In this section, we detail the key limitations of our study and suggest pathways for future research. Time series data frequently intersects with data from other domains. In the financial industry, for instance, analysis often combines time series data like stock prices and transaction volumes with supplementary data types such as news articles (text), economic indicators (tabular), and market sentiment analysis (textual and possibly visual). Our future work aims to delve into how LLMs can facilitate the integration of multimodal data, ensure cohesive data modality alignment within the embedding space, and accurately interpret the combined data insights. Currently, our application of LLMs in time series analysis is primarily focused on comprehending time series features. However, the lack of interpretability mechanisms within our framework stands out as a significant shortcoming. Moving forward, we plan to focus on developing and integrating interpretability methodologies for LLMs specifically tailored to time series data analysis contexts. Acknowledgements This paper was prepared for informational purposes by the Artificial Intelligence Research group of JPMorgan Chase & Co and its affiliates (\u201cJ.P. Morgan\u201d) and is not a product of the Research Department of J.P. Morgan. J.P. Morgan makes no representation and warranty whatsoever and disclaims all liability, for the completeness, accuracy or reliability of the information contained herein. This document is not intended as investment research or investment advice, or a recommendation, offer or solicitation for the purchase or sale of any security, financial instrument, financial product or service, or to be used in any way for evaluating the merits of participating in any transaction, and shall not constitute a solicitation under any jurisdiction or to any person, if such solicitation under such jurisdiction or to such person would be unlawful."
16
+ }
intro_8K/test_introduction_short_2404.16572v1.json ADDED
@@ -0,0 +1,16 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16572v1",
3
+ "title": "ReliK: A Reliability Measure for Knowledge Graph Embeddings",
4
+ "abstract": "Can we assess a priori how well a knowledge graph embedding will perform on a\nspecific downstream task and in a specific part of the knowledge graph?\nKnowledge graph embeddings (KGEs) represent entities (e.g., \"da Vinci,\" \"Mona\nLisa\") and relationships (e.g., \"painted\") of a knowledge graph (KG) as\nvectors. KGEs are generated by optimizing an embedding score, which assesses\nwhether a triple (e.g., \"da Vinci,\" \"painted,\" \"Mona Lisa\") exists in the\ngraph. KGEs have been proven effective in a variety of web-related downstream\ntasks, including, for instance, predicting relationships among entities.\nHowever, the problem of anticipating the performance of a given KGE in a\ncertain downstream task and locally to a specific individual triple, has not\nbeen tackled so far.\n In this paper, we fill this gap with ReliK, a Reliability measure for KGEs.\nReliK relies solely on KGE embedding scores, is task- and KGE-agnostic, and\nrequires no further KGE training. As such, it is particularly appealing for\nsemantic web applications which call for testing multiple KGE methods on\nvarious parts of the KG and on each individual downstream task. Through\nextensive experiments, we attest that ReliK correlates well with both common\ndownstream tasks, such as tail or relation prediction and triple\nclassification, as well as advanced downstream tasks, such as rule mining and\nquestion answering, while preserving locality.",
5
+ "authors": "Maximilian K. Egger, Wenyue Ma, Davide Mottin, Panagiotis Karras, Ilaria Bordino, Francesco Gullo, Aris Anagnostopoulos",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "cs.SI",
9
+ "cats": [
10
+ "cs.SI"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Knowledge AND Graph",
14
+ "gt": "Knowledge graphs (KGs) are sets of facts (i.e., triples such as \u201cda Vinci,\u201d \u201cpainted,\u201d \u201cMona Lisa\u201d) that interconnect entities (\u201cda Vinci,\u201d \u201cMona Lisa\u201d) via relationships (\u201cpainted\u201d) [20, 47]. Entities and re- lationships correspond to nodes and (labeled) edges of the KG, respectively (Figure 2). Knowledge graph embeddings (KGEs) [45] are popular techniques to generate a vector representation for enti- ties and relationships of a KG. A KGE is computed by optimizing a scoring function that provides an embedding score as an indication of whether a triple actually exists in the KG. KGEs have been exten- sively used as a crucial building block of state-of-the-art methods for a variety of downstream tasks commonly carried out on the Web, such as knowledge completion [46], whereby a classi\uffffer is trained on the embeddings to predict the existence of a triple; or head/tail prediction [24], which aims to predict entities of a triple, as well as more advanced ones, including rule mining [49], query answering [48], and entity alignment [5, 21, 51, 52]. Motivation. So far, the choice of an appropriate KGE method has depended on the downstream task, the characteristics of the input KG, and the computational resources. The existence of many di\uffffer- ent scoring functions, including linear embeddings [8], bilinear [49], based on complex numbers [36], or projections [10] further com- plicates this choice. Alas, the literature lacks a uni\uffffed measure to quantify how reliable the performance of a KGE method can be for a certain task beforehand, without performing such a potentially slow task. Furthermore, KGE performance on a speci\uffffc downstream task is typically assessed in a global way, that is, in terms of how accurate a KGE method is for that task on the entire KG. However, the performance of KGEs for several practical applications (e.g., knowledge completion [46]) typically varies across the parts of the KG. This requires carrying out a performance assessment of KGE locally to speci\uffffc parts of the KG, rather than globally. Contributions. We address all the above shortages of the state of the art in KGE and introduce ReliK (Reliability for KGEs), a simple, yet principled measure that quanti\uffffes the reliability of how a KGE will perform on a certain downstream task in a speci\uffffc part of the KG, without executing that task or (re)training that KGE. To the best of our knowledge, no measure like ReliK exists in the literature. ReliK relies exclusively on embedding scores as a black box, particularly on the ranking determined by those scores (rather than the scores themselves). Speci\uffffcally, it is based on the relative WWW \u201924, May 13\u201317, 2024, Singapore, Singapore. Maximilian K. Egger et al. ranking of existing KG triples with respect to nonexisting ones, in the target part of the KG. As such, ReliK is agnostic to both (1) the peculiarities of a speci\uffffc KGE and (2) the KG at hand, and (3) it needs no KGE retraining. Also, (4) ReliK is task-agnostic: in fact, its design principles are so general that it is inherently well-suited for a variety of downstream tasks (see Section 3 for more details, and Section 4 for experimental evidence). Finally, (5) ReliK exhibits the locality property, as its computation and semantics can be tailored to a speci\uffffc part of the KG. All in all, therefore, our ReliK measure is fully compliant with all the requirements discussed above. Note that ReliK can be used also to evaluate the utility of a KGE for a downstream task, even when (for privacy or other reasons) we only have access to the embedding and not to the original KG. ReliK is simple, intuitive, and easy-to-implement. Despite that, its exact computation requires processing all the possible combina- tions of entities and relationships, for every single fact of interest. Thus, computing ReliK exactly on large KGs or large target sub- graphs may be computationally too heavy. This is a major technical challenge, which we address by devising approximations to ReliK. Our approximations are shown to be theoretically solid (Section 3.2) and perform well empirically (Section 4.1). Advanced downstream tasks. Apart from experimenting with ReliK in basic downstream tasks, such as entity/relation prediction or triple prediction, we also showcase ReliK on two advanced down- stream tasks, to fully demonstrate its general applicability. The \uffffrst is query answering, which \uffffnds answers to complex logical queries over KGs. The second, rule mining, deduces logic rules, with the purpose of cleaning the KG from spurious facts or expanding the information therein. Rule mining approaches rely on a con\uffffdence statistical measure that depends on the quality of the data itself. By computing the con\uffffdence on a ground truth, we show that ReliK identi\uffffes more trustworthy rules. Relevance. ReliK is particularly amenable to semantic web ap- plications, for instance by providing a local means to study the semantics associated with a speci\uffffc\u2019s entity embedding [30] or by o\uffffering an e\uffffcient tool for knowledge completion [50]. Summary and outline. To summarize, our contributions are: \u2022 We \uffffll an important gap of the state of the art in KGE (Section 2) by tackling for the \uffffrst time the problem of assessing the relia- bility of KGEs (Section 3). \u2022 We devise ReliK, the \uffffrst reliability measure for KGEs, which possesses important characteristics of generality, simplicity, and soundness (Section 3.1). \u2022 We devise e\uffffcient, yet theoretically solid approximation tech- niques for estimating ReliK (Section 3.2). \u2022 We perform extensive experiments to show that ReliK correlates with several common downstream tasks, it complies well with the locality property, and its approximate computation is e\uffffcient and e\uffffective (Section 4). \u2022 We additionally showcase ReliK in two advanced downstream tasks, question answering and rule mining (Section 4.3).",
15
+ "main_content": "A knowledge graph (KG) K : hE, R, F i is a triple consisting of a set E of = entities, a set R of relationships, and a set F \u21e2E \u21e5R \u21e5E of < facts. A fact is a triple G\u2318AC = (\u2318,A,C)1, where \u23182 E is the head, C 2 E is the tail, and A 2 R is the relationship. For instance, entities \u201cLeonardo da Vinci\u201d and \u201cMona Lisa,\u201d and relationship \u201cpainted\u201d C 2 E is the tail, and A 2 R is the relationship. For instance, entities \u201cLeonardo da Vinci\u201d and \u201cMona Lisa,\u201d and relationship \u201cpainted\u201d form the triple (\u201cLeonardo da Vinci,\u201d \u201cpainted,\u201d \u201cMona Lisa\u201d). The \u201cLeonardo da Vinci\u201d and \u201cMona Lisa,\u201d and relationship \u201cpainted\u201d form the triple (\u201cLeonardo da Vinci,\u201d \u201cpainted,\u201d \u201cMona Lisa\u201d). The set F of facts form an edge-labeled graph whose nodes and labeled edges correspond to entities and relationships, respectively. We say a triple G\u2318AC is positive if it actually exists in the KG (i.e., G\u2318AC 2 F ), negative otherwise (i.e., G\u2318AC 8 F ). KGs are also known as knowledge bases [14], information graphs [25], or heterogeneous information networks [34]. Knowledge graph embedding. A KG embedding (KGE) [2, 24, 45] is a representation of entities and relationships in a 3-dimensional (3\u2327|E|) space, typically, the real R3 space or the complex C3 space. For instance, TransE [8] represents a triple G\u2318AC as entity vectors e\u2318, eC 2 R3 and relation vector eA 2 R3, and DistMult [49] represents the relationship as a matrix WA 2 R3\u21e53. Although KGEs can di\uffffer (signi\uffffcantly) from one another in their de\uffffnition, a common key aspect of all KGEs is that they are typically de\uffffned based on a so-called embedding scoring function or simply embedding score. This is a function B : E \u21e5R \u21e5E ! R, which quanti\uffffes how likely a triple G\u2318AC 2 E \u21e5R \u21e5E exists in K based on the embeddings of its head (\u2318), relationship (A), and tail (C). Speci\uffffcally, the higher B(G\u2318AC), the more likely the existence of G\u2318AC. For instance, TransE\u2019s embedding score B(G\u2318AC) = \u2212ke\u2318+ eA \u2212eC k represents the (\u27131 or \u27132) distance between the \u201ctranslation\u201d from \u2318\u2019s embedding to C\u2019s embedding through A\u2019s embedding [8]. KGEs are typically learned through a training process that optizes (e.g., via gradient descent) a loss function de\uffffned based on KGEs are typically learned through a training process that optimizes (e.g., via gradient descent) a loss function de\uffffned based on the embedding score. This training process can be computationally expensive, especially if it has to be repeated for multiple KGEs. KGEs learned this way are shown to be e\uffffective for a number of downstream tasks [24], such as predicting the existence of a triple, but do not o\uffffer any prior indication on their performance [22]. Moreover, existing benchmarks [2] show global performance on the entire graph rather than local on subgraphs. To this end, in this work, we provide an answer to the following key question: M\uffff\uffff\uffff\uffff\uffff\uffff\uffff\uffff\uffff\uffff. Is there a measure that provides a prior indication of the performance of a KGE on a speci\uffffc subgraph? 3 KGE RELIABILITY A good measure of performance of a KGE should support a number of tasks, from node classi\uffffcation, to link prediction, as well as being unprejudiced towards the data and the KGE model itself. In other words, we would like a measure of reliability that properly assesses how the embedding of a triple would perform on certain tasks and data, without knowing them in advance. More speci\uffffcally, the main desiderata of a proper KGE reliability measure are as follows. (R1) Embedding-agnostic. It should be independent of the speci\uffffc KGE method. This is to have a measure fully general. (R2) Learning-free. It should require no further KGE training. This is primarily motivated by e\uffffciency, but also for other reasons, such as privacy or unavailability of the data used for KGE training. 1We use fact and triple interchangeably throughout the paper. ReliK: A Reliability Measure for Knowledge Graph Embeddings WWW \u201924, May 13\u201317, 2024, Singapore, Singapore. \u221214 \u221212 \u221210 \u22128 \u22126 Scores Density TransE Positive Negative \u22121.5 \u22121 \u22120.5 Scores PairRE Figure 1: Distribution of the embedding scores for positive (i.e., existing) and negative (i.e., nonexisting) triples on CodexSmall dataset (cf. Section 4), with TransE [8] and PairRE [10] KGE methods. Although scores and distributions are di\ufffferent, positive and negative triples are well separated. (R3) Task-agnostic. It should be independent of the speci\uffffc downstream task. In other words, it should be able to properly anticipate the performance of a KGE in general, for any downstream task. Again, like (R1), this is required for the generality of the measure. (R4) Locality. It should be a good predictor of KGE performance locally to a given triple, that is, in a close surrounding neighborhood of that triple. This is important, as a KGE model may be more or less e\uffffective based on the di\ufffferent parts of the KG it is applied to. Thus, assessing how KGEs perform in di\ufffferent parts of the KG would allow for their better use in downstream tasks. 3.1 The Proposed ReliK Measure Design principles. De\uffffning a reliability measure that complies with the aforementioned requirements is an arduous streak. First, the various KGE methods consider di\ufffferent objectives. Second, downstream tasks often combine embeddings in di\ufffferent ways. For instance, even though head or tail predictions predict a single vector, triple classi\uffffcation combines head, tail, and relationship vectors. Third, the embedding scores are in general incomparable across the KGEs. To ful\uffffl (R1) and (R2), the KGE reliability measure should not engage with the internals of the computation of KGEs. Thus, we need to treat the embeddings as vectors and the embedding score as a black-box function that provides only an indication of the actual existence of a triple. Although the absolute embedding scores are incomparable to one another, we observe that the distribution of positive and negative triples is signi\uffffcantly di\ufffferent (Figure 1). Speci\uffffcally, we assume the relative ranking of a positive triple to be higher than that of a negative. Otherwise, we multiply the score by \u22121. This leads to the following main observation. O\uffff\uffff\uffff\uffff\uffff\uffff\uffff\uffff\uffff\uffff1. A KGE reliability measure that uses the position of a triple relative to other triples via a ranking de\uffffned based on the embedding score ful\ufffflls (R1) and (R2). Furthermore, comparing a triple to all other (positive or negative) triples might be ine\uffffective. For instance, if we assume that our measure of reliability is solely based on the separation between positive and negative triples, we will conclude from Figure 1 that PairRE [10] performs well for all the tasks, which is not the case. This is because the absolute score does not provide an indication of performance. We thus advocate that a local approach that considers triples relative to a neighborhood is more appropriate, and propose a measure that ful\uffffls (R4). The soundness of (R4) is better attested in our experiments in Section 4. Finally, to meet (R3), the KGE reliability measure should not exploit any peculiarity of a downstream task in its de\uffffnition. Indeed, this is accomplished by our measure, as we show next. De\uffffnition. For a triple G\u2318AC = (\u2318,A,C) we compute the neighbor set N \u2212(\u2318) of all possible negative triples, that is, triples with head \u2318that do not exist in K. Similarly, we compute N \u2212(C) for tail C. We de\uffffne the head-rank \u2318of a triple G\u2318AC as the position of the triple in the rank obtained using score B for a speci\uffffc KGE relative to all the negative triples having head \u2318. rank\ud43b(G\u2318AC) = |{G 2 N \u2212(\u2318) : B(G) > B(G\u2318AC)}| + 1 The tail-rank rank) (G\u2318AC) for tail C is de\uffffned similarly. Our reliability measure, ReliK, for a triple G\u2318AC is ultimately de\uffffned as the average of the reciprocal of the headand tail-rank ReliK(G\u2318AC) = 1 2 \u2713 1 rank\ud43b(G\u2318AC) + 1 rank) (G\u2318AC) \u25c6 . (1) ReliK can easily be extended from single triples to subgraphs by computing the average reliability among the triples in the subgraph. Speci\uffffcally, we de\uffffne the ReliK score of a set ( \u2713F of triples as ReliK(() = 1 |(| \u2019 G\u2318AC 2( ReliK(G\u2318AC). (2) Rationale. ReliK ranges from (0, 1], with higher values corresponding to better reliability. In fact, the lower the head-rank rank\ud43b(G\u2318AC) or tail-rank rank) (G\u2318AC), the better the ranking of G\u2318AC induced by the underlying embedding scores, relatively to the nonexisting triples in G\u2318AC\u2019s neighborhood, complies with the actual existence of G\u2318AC in the KG. It is easy to see that ReliK achieves (R1) and (R2) by relying on the relative ranking rather than the absolute scores. It also ful\ufffflls (R3) as it involves no downstream tasks at all, and (R4) as it is based on the local (i.e., 1-hop) neighborhood of a target triple. Leonardo da Vinci Italy Mona Lisa France born in painted located in KG and considered edge Leonardo da Vinci Italy Mona Lisa France painted located in painted born in located in born in located in Negative triples N\u2212(\u2318) to compute rank\ud43b Leonardo da Vinci Italy Mona Lisa France born in painted located in born in located in painted born in located in Negative triples N\u2212(C ) to compute rank) Figure 2: Constituents of ReliK on an example KG. Figure 2 provides an example of the computation of ReliK for the triple G\u2318AC = (\u201cLeonardo da Vinci,\u201d \u201cpainted,\u201d \u2018Mona Lisa\u201d). The N \u2212(\u2318) is depicted as the red (dashed) edges and N \u2212(C) in blue (dotted). To compute ReliK on an embedding, we compute the embedding score B of (\u201cLeonardo da Vinci,\u201d \u201cpainted,\u201d \u201cMona Lisa\u201d) and rank it with respect to the triples in N \u2212(\u2318) and N \u2212(C). 3.2 E\uffffciently Computing ReliK Computing ReliK (Eq. (1)) requires \u2326(|E| \u00b7 |R|) time, as it needs to scan the entire negative neighborhood of the target triple. For large KGs, repeating this for a (relatively) high number of triples may be computationally too heavy. For this purpose, here we focus on WWW \u201924, May 13\u201317, 2024, Singapore, Singapore. Maximilian K. Egger et al. approximate versions of ReliK, which properly trade o\uffffbetween accuracy and e\uffffciency. The main intuition behind the ReliK approximation is that the precise ranking of the various potential triples is not actually needed. Rather, what it matters is just the number of those triples that exhibit a higher embedding score than the target triple. This observation leads to two approaches. In both of them, we sample a random subset of negative triples. In the \uffffrst approach, we compute ReliKLB, a lower bound to ReliK, by counting the negative triples in the sample that have a lower embedding score than the target triple and pessimistically assuming that all the other triples not in the sample have higher scores. In the second approach, we estimate ReliKApx by evaluating the fraction of triples in the sample that have a higher score than the triple under consideration and then scaling this fraction to the total number of negative triples. Next, we provide the details of these two approaches. Let(\ud43bbe a random subset of: elements selected without replacement independently and uniformly at random from the negative neighborhood N \u2212(\u2318) of the head \u2318of a triple G\u2318AC. The size |(\ud43b| trades o\uffffbetween e\uffffciency and accuracy of the estimator, and it may be de\uffffned based on the size of N \u2212(\u2318). De\uffffne also rank( \ud43b(G\u2318AC) = |{G 2 (\ud43b: B(G) > B(G\u2318AC)}| + 1, to be the rank of the score B(G\u2318AC) that the KGE assigns to G\u2318AC, among all the triples in the sample. We similarly compute () and rank( ) for tail\u2019s neighborhood N \u2212(C). ReliKLB estimator. The sampled triples with lower score than B(G\u2318AC) are fewer than all such negative triples, that is, |(\ud43b| \u2212rank( \ud43b(G\u2318AC) \uf8ff|N \u2212(\u2318)| \u2212rank\ud43b(G\u2318AC), or, equivalently, rank\ud43b(G\u2318AC) \uf8ffrank( \ud43b(G\u2318AC) + |N \u2212(\u2318)| \u2212|(\ud43b| . (3) Analogously, the observation holds for () rank) (G\u2318AC) \uf8ffrank( ) (G\u2318AC) + |N \u2212(C)| \u2212|() | . (4) We therefore de\uffffne our ReliKLB estimator as ReliKLB(G\u2318AC) = 1 2 \u2713 1 rank( \ud43b(G\u2318AC) + |N \u2212(\u2318)| \u2212|(\ud43b| + 1 rank( ) (G\u2318AC) + |N \u2212(C)| \u2212|() | \u25c6 , (5) From Eqs. (3) and (4), it holds that ReliKLB(G\u2318AC) \uf8ffReliK(G\u2318AC). ReliKApx estimator. As for our second estimator, we de\uffffne it as ReliKApx = 1 2 \u00a9 \u2260 \u00b4 1 rank( \ud43b(G\u2318AC) |N\u2212(\u2318)| |(\ud43b| + 1 rank( ) (G\u2318AC) |N\u2212(C )| |() | \u2122 \u00c6 \u00a8 . (6) In words, we simply scale up the rank induced by the sample to the entire set of negative triples. Theoretical characterization of ReliKApx. Note that by Jensen\u2019s inequality [23], we have that E 2 6 6 6 6 4 1 rank( \ud43b(G\u2318AC) |N\u2212(\u2318) | |(\ud43b| 3 7 7 7 7 5 \u2265 1 E h rank( \ud43b(G\u2318AC) |N\u2212(\u2318)| |(\ud43b| i = 1 E[rank( \ud43b(G\u2318AC)] |N\u2212(\u2318) | |(\ud43b| = 1 rank\ud43b(G\u2318AC) , where E[\u00b7] denotes mathematical expectation. This holds because E[rank( \ud43b(G\u2318AC)] = |(\ud43b| \u00b7 rank\ud43b(G\u2318AC) |N \u2212(\u2318)| , given that for each element G 2 (\ud43b, the probability to have a score B(G) > B(G\u2318AC) is rank\ud43b(G\u2318AC) |N \u2212(\u2318)| . We argue similarly for the tail and, therefore, we \uffffnally obtain E[ReliKApx(G\u2318AC)] \u2265ReliK(G\u2318AC). In other words, ReliKApx is, in expectation, an upper bound of ReliK. Quality of ReliKApx approximation. Applying a Hoe\uffffding\u2019s bound [18], we obtain that, with high probability, the quality of approximation improves exponentially as the size of the sample increases. Algorithm 1 Compute ReliKLB or ReliKApx Input: KG K : hE, R, Fi, triple G\u2318AC = (\u2318,A,C ) 2 F, embedding score function B : E \u21e5R \u21e5E ! R, sample size : 2 N Output: ReliKLB(G\u2318AC ) (Eq. (5)) or ReliKApx(G\u2318AC ) (Eq. (6)) 1: (\ud43b sample : triples from N\u2212(\u2318); () sample : triples from N\u2212(C ) 2: rank\ud43b 1; rank) 1 3: for G\u23180A0C0 2 (\ud43b[ () do 4: if B (G\u2318AC ) < B (G\u23180A0C0 ) then 5: if \u23180 = \u2318then 6: rank\ud43b A0=:\ud43b+ 1 7: if C0 = C then 8: rank) A0=:) + 1 9: return 1 2 \u2713 1 rank\ud43b+|N\u2212(\u2318)|\u2212|(\ud43b| + 1 rank) +|N\u2212(C)|\u2212|() | \u25c6 for ReliKLB or 1 2 \u00a9 \u2260 \u00b4 1 rank\ud43b |N\u2212(\u2318)| |(\ud43b| + 1 rank) |N\u2212(C)| |() | \u2122 \u00c6 \u00a8 for ReliKApx Algorithms. Algorithm 1 shows the steps to compute ReliKLB and ReliKApx. Initially, in Line 1, we sample, uniformly at random, : negative triples from the head neighborhood and the tail neighborhood. Note that we can save computation time by \uffffrst \uffffltering the triples in (\ud43b[() by score (Line 4), that is, by considering only those with score higher than the input triple G\u2318AC, and then checking whether a triple in (\ud43b[ () has either the head (Line 5) or the tail (Line 7) in common with G\u2318AC to update the corresponding rank. Time complexity. Algorithm 1 runs in O(:) time. This corresponds to the time needed for the sampling step in Line 5, which can easily be accomplished linearly in the number of samples, without materializing the negative neighborhoods. The sample size : trades o\uffffbetween accuracy and e\uffffciency of the estimation. Section 4.1 shows that ReliKApx approximation with 20% sample size is 2.5\u21e5faster than ReliK with only 0.002 mean squared error (MSE). As such, ReliKApx is our method of reference in the experiments. ReliK: A Reliability Measure for Knowledge Graph Embeddings WWW \u201924, May 13\u201317, 2024, Singapore, Singapore. 4 EXPERIMENTAL EVALUATION We evaluate ReliK on four downstream tasks, six embeddings, and six datasets. We report the correlation with ReliK and the performance of ranking tasks (Section 4.2) and show that ReliK can identify correct query answers as well as mine rules with higher con\uffffdence than existing methods (Section 4.3). space method set entity relation score TransE [8] R O(=) O(=) \u2212ke\u2318+ eA \u2212eC k? DistMult [49] R O(=) O(=) e> \u2318diag(WA )eC RotatE [36] C O(=) O(=) \u2212ke\u2318\u25e6eA \u2212eC k PairRE [10] R O(=) O(=) \u2212ke\u2318\u25e6eA\u2318\u2212eC \u25e6eAC k ComplEx [41] C O(=) O(=) '4 (heA, e\u2318, eC i) ConvE [15] R O(=) O(=) 5 (E42 (5 ([e\u2318; eA ] \u21e4l))W)eC TuckER [4] R O(=) O(=) W \u21e5e\u2318\u21e5eA \u21e5eC CompGCN [44] R O(=) O(=) any KGE score Table 1: Characteristics of the considered embeddings. Embeddings. We include six established KGE methods, representative of the four major embedding families (see Section 5). Table 1 shows the embeddings in our evaluation, the embedding space, and the embedding score function. A detailed description of the embeddings is in Section A.1 in the appendix. dataset | E| |R| | F| Task Countries 271 2 1 158 Approximation FB15k237 14 505 237 310 079 Ranking / Classi\uffffcation / Querying Codex-S 2 034 42 36 543 Ranking / Classi\uffffcation Codex-M 17 050 51 206 205 Ranking / Classi\uffffcation Codex-L 77 951 69 612 437 Ranking / Classi\uffffcation YAGO2 834 750 36 948 358 Rule Mining Table 2: Characteristics of the KGs; number of entities |E|; number of relationships |R|; number of facts |F |; task. Datasets. We perform experiments on six KGs with di\ufffferent characteristics, shown in Table 2. \u2022 Countries [9] is a small KG created from geographical locations, where entities are continents, subcontinents, and countries, and edges containment or geographical neighborhood. \u2022 FB15k237 [40] is a sample of Freebase KG [7] covering encyclopedic knowledge consisting of 237 relations, 15: entities, and 310: facts. FB15k237 is a polished and corrected version of FB15k [8] constructed to circumvent data leakage. The dataset contains Freebase entities with more than 100 mentions and with reference in Wikilinks database. \u2022 Codex [32] is a collection of three datasets of incremental size, Codex-S (2: entities, 36: triples), Codex-M (17: entities, 200: facts), and Codex-L (78: entities, 610: facts) extracted from Wikidata and Wikipedia. The Codex collection explicitly encourages entity and content diversity to overcome the limitations of FB15k. \u2022 YAGO [35] is an open-source KG automatically extracted from Wikidata with an additional ontology from schema.org. We use YAGO2 [19], which comprises 834k entities and 948k facts. Experimental setup. We implement our approximate and exact ReliK in Python v3.9.13.2,3 We train the embedding using the 2Code available at: https://github.com/AU-DIS/ReliK 3Also as artifact: https://doi.org/10.5281/zenodo.10656796 Pykeen library v1.10.1,4 with default parameters besides the embedding dimension 38< = 50 and training loop sLCWA. We run our experiments on a Linux Ubuntu 4.15.0-202 machine with 48 cores Intel(R) Xeon(R) Silver 4214 @ 2.20GHz, 128GB RAM, and an NVIDIA GeForce RTX 2080 Ti GPU. We report an average of 5 experiments using 5-fold cross validation with 80-10-10 split. Summary of experiments. We evaluate ReliK on several downstream tasks and setups. We \uffffrst show in Section 4.1 that our approximate ReliKApx outperforms the simpler ReliKLB lower-bound approximation and achieves a good tradeo\uffffbetween quality and speed. We then show in Section 4.2 that ReliK correlates with common ranking tasks, such as tail and relation prediction, as well as classi\uffffcation tasks and validate the claim that ReliK is a local measure. In Section 4.3 we present the more complex tasks of query answering and mining logic rules on KGs. To summarize, we evaluate ReliK on the following downstream tasks: \u2022 (T1) Ranking tasks, tail and relation prediction \u2022 (T2) Classi\uffffcation task, triple classi\uffffcation \u2022 (T3) Query answering task \u2022 (T4) Rule mining application 0 0.2 0.4 0.6 0.8 1 5 10 15 sample ratio Time (s) ReliK ReliKApx ReliKLB 0 0.2 0.4 0.6 0.8 1 0 0.005 0.01 0.015 sample ratio MSE Figure 3: Comparing ReliKApx and ReliKLB with exact ReliK in time (left) and Mean Squared Error (right) vs sample to data size ratio on Countries dataset and TransE embeddings. 4.1 Approximation Quality We start by showing that ReliKApx runs as fast as ReliKLB while being more accurate. We report time and mean squared error (MSE) with respect to the exact ReliK measure for ReliKApx and ReliKLB. Computing ReliK is infeasible in datasets with more than a few hundred entities. Hence, we limit our analysis to the entire Countries dataset for which we can compute ReliK exactly. Figure 3 reports the results in terms of seconds and MSE at increasing sample size : = |(|. Both ReliKLB and ReliKApx incur the same time, because of the fact that both require to sample: negative triples and compute the score on the sample. On the other hand, when the sample size is more than 80% of all the negative triples, as the sampling time dominates the computation of ReliKLB and ReliKApx, ReliK becomes faster. ReliKApx rapidly reduces the error and stabilizes at around 40% of the sample size, whereas ReliKLB exhibits a steadily larger error than ReliKApx. The current results show the e\uffffectiveness of the results in an unparallelized setting; yet, we note that the sampling process can be easily parallelized by assigning each sample to a separate thread. In terms of quality, ReliKApx exhibits minimal MSE (<0.005) with as little as 10% of the sample size, being 3 times faster than ReliK. Thus, even though the exact ReliK is feasible for small datasets or 4https://pykeen.readthedocs.io/en/stable/ WWW \u201924, May 13\u201317, 2024, Singapore, Singapore. Maximilian K. Egger et al. subgraphs, ReliKApx o\uffffers a good approximation with signi\uffffcant speedup. On the next experiments, we set: to 10% of all the negative triples and report results for ReliKApx. 4.2 Common Downstream Tasks We test ReliK on the ability to anticipate the results of common tasks for KGEs [24, 45]. We measure the statistical signi\uffffcance of Pearson correlation among two ranking tasks, tail and relation prediction, and the triple classi\uffffcation task. To evaluate ReliK on di\ufffferent areas of the graph and di\ufffferent graph topologies, we sample random subgraphs of Codex-S with 60 nodes by initially selecting a starting node uniformly at random and then including nodes and edges by a random walk with restart [39] with restart probability 1 \u2212U = 0.2, until the subgraph comprises 60 nodes. For Codex-M and Codex-L we use size 100 and for FB15k237 we use 200 nodes. We report the average ReliK on 100 random subgraphs on the Codex-S, Codex-M, Codex-L, and FB15k237 datasets. Tail (MRR) Relation (MRR) Classi\uffffc. (Acc.) KGE Pearson p-value Pearson p-value Pearson p-value Codex-S TransE 0.23 0.02 0.93 2.174\u221244 0.37 1.424\u22124 DistMult 0.16 0.12 0.85 2.034\u221229 0.69 2.214\u221215 RotatE 0.35 0.0003 0.89 7.924\u221237 \u22120.24 0.02 PairRE 0.86 7.294\u221231 0.91 2.364\u221239 0.09 0.37 ComplEx 0.14 0.17 0.63 2.224\u221212 \u22120.06 0.57 ConvE \u22120.396 6.614\u22125 0.89 4.924\u221237 0.10 0.30 TuckER \u22120.15 0.13 0.89 5.714\u221237 0.07 0.46 CompGCN 0.52 3.394\u221208 0.77 6.094\u221221 0.01 0.92 Codex-M TransE 0.90 2.704\u221237 0.97 9.074\u221263 0.53 1.934\u221208 DistMult 0.22 0.04 0.89 8.374\u221232 0.60 5.124\u221210 RotatE \u2013 \u2013 \u2013 \u2013 \u2013 \u2013 PairRE 0.06 0.58 0.98 1.054\u221274 \u22120.12 0.23 ComplEx \u22120.33 8.924\u22124 0.36 2.014\u22124 0.15 0.13 ConvE \u22120.22 0.03 0.99 3.864\u221296 \u22120.02 0.84 Codex-L TransE 0.83 1.134\u221226 0.97 3.8124\u221264 0.63 2.544\u221212 DistMult 0.49 2.104\u221207 0.78 4.684\u221222 0.60 3.744\u221211 RotatE \u2013 \u2013 \u2013 \u2013 \u2013 \u2013 PairRE \u22120.04 0.68 0.95 3.334\u221252 \u22124.474\u22124 0.99 ComplEx 0.82 1.034\u221225 0.91 3.964\u221239 0.06 0.57 ConvE 0.59 4.264\u221211 \u22120.07 0.48 0.31 1.574\u22123 FB15k237 TransE 0.24 0.02 0.86 2.834\u221230 0.34 5.794\u22124 DistMult \u22120.05 0.65 0.64 5.574\u221213 0.39 5.584\u221205 RotatE \u2013 \u2013 \u2013 \u2013 \u2013 \u2013 PairRE 0.80 1.514\u221223 0.65 1.744\u221213 0.08 0.44 ComplEx 0.20 0.05 0.88 3.534\u221234 0.14 0.18 ConvE 0.09 0.37 0.85 4.474\u221230 0.01 0.93 Table 3: Pearson correlation and statistical signi\uffffcance of ReliK for tail prediction, relation prediction, and triple classi\uffffcation; red indicates cases of less statistically signi\uffffcant correlation, with p-value > 0.05, or inverse correlation. Ranking tasks (T1). In the \uffffrst experiments, we measure the Pearson correlation between ReliK and the performance on ranking tasks with mean reciprocal rank (MRR) [12]. The \uffffrst task, tail prediction [8, 10, 36], assesses the ability of the embedding to predict the tail given the head and the relation, thus answering the query (\u2318,A, ?) where the tail is unknown. The second task, relation prediction, assesses the ability of the embedding to predict the undisclosed relation of a triple (\u2318, ?,C). The common measure used for tail and relation prediction is MRR, which provides an indication of how close to the top the score ranks the correct tail (or relation). Consistently with previous approaches [8, 10, 36], we employ the \uffffltered approach in which we consider for evaluation only negative triples that do not appear in either the train, test, or validation set. Table 3 reports the correlations alongside the statistical signi\uffffcance in terms of the p-value. We marked in red, high p-values (> 0.05), which suggest no correlation, and Pearson score values that indicate inverse correlation. Generally, ReliK exhibits signi\uffffcant correlation across embeddings and tasks. Noteworthy, even though ReliK (see Eq. (1)) does not explicitly target tail or head rankings by including both, we observe signi\uffffcant correlation on tail prediction in most embeddings and datasets. Because of the considerable training time, we only report results for RotatE on Codex-S. We complement our analysis with correlation plots in Figure 4 and Figure 10 in the appendix for Codex-S; in most cases we observe a clear correlation. Comparing the actual results of the various tasks, it is also clear in most cases in which we do not have correlation that the results are too close to distinguish; for example, ComplEx has only results close to 0. Such results indicate that a particular embedding method needs additional training. Besides, on the same task, results vary in di\ufffferent subgraphs, vindicating the e\uffffect of locality on embedding performance. ReliK correctly captures local embedding characteristics. 0 0.2 0.4 0.6 0.8 1 2 4 6 \u00b710\u22124 ReliK TransE Subgraph Regression line 0 0.2 0.4 0.6 0.8 1 2 4 6 \u00b710\u22124 0 0.2 0.4 0.6 0.8 1 3 3.5 \u00b710\u22125 MRR ReliK ComplEx 0 0.2 0.4 0.6 0.8 1 3 3.5 \u00b710\u22125 MRR Figure 4: ReliK correlation with MRR on tail prediction (left column) and relation prediction (right column); each point is the ReliK score for a subgraph with 60 nodes on Codex-S. Classi\uffffcation task (T2). In this experiment, we test the correlation between ReliK and the accuracy of a threshold-based classi\uffffer on the embeddings. The classi\uffffer predicts the presence of a triple in the KG if the embedding score is larger than a threshold, a common scenario for link prediction [24]. Table 3 (right column) reports the correlations and their signi\uffffcance for all datasets and Figure 5 shows the detailed analysis on Codex-S for two cases. At close inspection, we observe that in cases of unclear correlation, such as with PairRE, the respective classi\uffffcation results are too close to observe a di\ufffference. Those cases notwithstanding, ReliK ReliK: A Reliability Measure for Knowledge Graph Embeddings WWW \u201924, May 13\u201317, 2024, Singapore, Singapore. is signi\uffffcantly correlated with accuracy. This result con\uffffrms that ReliK can serve as a proxy for the quality of complex models trained on embeddings. Plots for the other embeddings can be found in the Section A.2 of the appendix. 0 0.2 0.4 0.6 0.8 1 2 4 6 \u00b710\u22124 Accuracy Triple Classi\uffffcation ReliK TransE Subgraph Regression line 0 0.2 0.4 0.6 0.8 1 1 1.5 2 2.5 \u00b710\u22123 Accuracy Triple Classi\uffffcation PairRE Figure 5: ReliK correlation with accuracy on triple classi\uffffcation; each point represents the ReliK score for a subgraph with 60 nodes on Codex-S. Tuning Subgraph Size. Next, we analyze how ReliK correlates with the tasks presented in Section 4.2 on subgraphs of varying size with the TransE embedding. Figure 6 reports the correlation values for all three tasks, only including those values where the p-value is below 0.05. We observe that ReliK\u2019s correlation generally increases with subgraphs of up to 100 nodes on Codex-S. After that point, we note an unstable behavior in all tasks. This is consistent with the assumption that ReliK is a measure capturing local reliability. To strike a balance between quality and time we test on subgraphs with 60 nodes for Codex-S in all experiments. Yet, as tasks are of di\ufffferent nature, the subgraph size can be tuned in accordance with the task to provide more accurate results. 0 50 100 150 0.4 0.6 0.8 Subgraph size Pearson tail relation classi\uffffer Figure 6: Pearson correlation on tail and relation prediction and triple classi\uffffcation vs subgraph size on Codex-S. 4.3 Complex Downstream Tasks We now turn our attention to complex downstream tasks. Query answering (T3). We show how ReliK can improve queryanswering tasks. Complex logical queries on KGs are working with di\ufffferent query structures. We focus on queries of chaining multiple predictions or having an intersection of predictions, from di\ufffferent query structures that have been described in recent work [3, 31]. We keep the naming convention introduced by Ren and Leskovec [31]. We evaluate a selection of 1000 queries per type (1?,2?,3?,28,38) from their data on the FB15k237 graph.5 The queries of type ? are 1 to 3 hops from a given entity with \uffffxed relation labels that point to a solution, whereas queries of type 8 are the intersection of 2 or 3 predictions pointing towards the same entity. We evaluate ReliK on the ability to detect whether an instance of an answer is true or false. We compute ReliK on TransE embeddings trained on the entire FB15k237. Figure 7 shows the average ReliK scores for 5http://snap.stanford.edu/betae/ positive and negative answers. ReliK clearly discriminates between positive and negative instances, often by a large margin. 1p 2p 3p 2i 3i 0 2 4 6 \u00b710\u22125 Query types ReliK Query Answering 0 10 20 0 0.5 1 1.5 \u00b710\u22125 Rules ReliK Rule Mining negative instances positive instances 0 10 20 0 0.1 0.2 Rules RR Rule Mining Figure 7: Comparison between positive and negative instances for query answering on FB15k237 (left) and rule mining on Yago2 with ReliK (middle) and RR (right). Rule mining (T4). ReliK e\uffffectively improves on the rule mining task as well. Rule mining methods [16, 17, 28] automatically retrieve logic rules over KGs having a prede\uffffned minimum con\uffffdence. A logic rule is an assertion such as \ud434) \u232b, which states that \u232b follows from \ud434. For instance, a rule could imply that all presidents of a country are citizens of the country. An instance of a rule is triples matching \u232b, given that \ud434is true. Logic rules are typically harvested with slow exhaustive algorithms similar to the apriori algorithm for association rules [1]. We present two experiments. In the \uffffrst, we show that ReliK can discriminate between true and false instances. In the second, we show that ReliK can retrieve all the rules by considering only subgraphs with high ReliK score. Detecting true instances. To assess performance on downstream task (T4), we compare ReliK with the reciprocal rank (RR) of a combination of the tail and the relation embeddings on the ability to detect whether an instance of a rule is true or false. This task is particularly important to quantify the real con\uffffdence of a rule [26]. To this end, we use a dataset6 comprising 23 324 manually annotated instances over 26 rules extracted from YAGO2 using the AMIE [17] and RudiK [28] methods. We compute ReliK on TransE embeddings trained on the entire YAGO2. Figure 7 shows the average ReliK scores for positive and negative instances. ReliK discriminates between positive and negative instances, often by a large margin, whereas RR often confounds positive and negative instances. Rule mining on subgraphs. In this experiment, we show that ReliK identi\uffffes the subgraphs with high-con\uffffdence rules. To this end, we mine rules with AMIE [16, 17] on Codex-S, and compare with densest subgraphs of increasing size. We construct subgraphs of increasing size by \uffffrst mining the densest subgraph using Charikar\u2019s greedy algorithm [11] on the weighted graph obtained assigning each edge the ReliK score; then, we remove the densest subgraph and repeat the algorithm on the remaining edges, until no edge remains. At each iteration, we mine AMIE rules and compute the standard con\uffffdence, as well as con\uffffdence by the partial completeness assumption (PCA) [16, 17], that is, the assumption that the database includes either all or none of the facts about each head entity \u2318by any relationship A. In Figure 8 we compare our method with a baseline that extracts random subgraphs of the same size as those computed with our method. The densest subgraph located 6https://hpi.de/naumann/projects/repeatability/datasets/colt-dataset.html WWW \u201924, May 13\u201317, 2024, Singapore, Singapore. Maximilian K. Egger et al. by ReliK \uffffnds more rules with higher con\uffffdence on as little as 25% of the KG. On the other hand, a random subgraph does not identify any meaningful subgraph. This indicates that ReliK is an e\uffffective tool for retrieving rules in large graphs. A further analysis in Figure 9 shows that by exploiting ReliK we can compute rules 75% of the time. We emphasize though that, because rule mining incurs exponential time, the di\ufffference between mining rules on the complete graph and on the ReliK subgraph will be more pronounced on graphs larger than Codex-S. As a complement, the table reports the number of rules mined in the entire graph that are discovered by ReliK in the subgraph. It is clear that on 26% of the graph, ReliK discovers 1/3 as opposed to only 1/6 discovered by random graphs. 0 0.2 0.4 0.6 0.8 1 0.3 0.4 0.5 0.6 Subgraph size (%) Std Con\uffffdence complete densest random 0 0.2 0.4 0.6 0.8 1 0.4 0.5 0.6 0.7 Subgraph size (%) PCA Con\uffffdence Figure 8: Std and PCA con\uffffdence [17] vs subgraph size for AMIE rules on Codex-S; densest subgraph according to ReliK. PCA con\uffffdence normalizes the support of a rule only by the number of facts which we know to be true or consider to be false on a KG assumed to be partially complete [16, 17]. 0 0.2 0.4 0.6 0.8 1 0.4 0.6 0.8 1 Subgraph size (%) Time (%) Subgraph number of rules size (%) densest random 0.06 11 11.4 0.26 71 38.6 0.53 160 193.0 0.56 175 198.2 0.58 175 199.0 0.58 174 200.8 0.73 205 207.4 0.74 206 208.8 0.79 222 214.4 1.00 228 228.0 Figure 9: Time to compute AMIE rules vs subgraph size (left) and number of discovered rules (right) on Codex-S. 5 RELATED WORK Knowledge graph embeddings are commonly used to detect missing triples, correcting errors, or question answering [24, 45]. A number of KGEs have appeared in the last few years. The distinctive features among embeddings are the score function and the optimization loss. Translational embeddings in the TransE [8] family and the recent PairRE [10] assume that the relationship performs a translation from the head to the tail. Semantic embeddings, such as DistMult [49] or HolE [27], interpret the relationship as a multiplicative operator. Complex embeddings, such as RotatE [36] and ComplEx [41], use complex-valued vectors and operations in the complex plane. Neural-network embeddings, such as ConvE [15], perform sequences of nonlinear operations. Whereas each embedding de\uffffnes a speci\uffffc score, ReliK is agnostic to the choice of embedding. It is still an open question how well embeddings capture the semantics included in a KG [22]. Our work progresses in that regard by o\uffffering a simple local measure to quantify how faithful an embedding represents the information in the data. Embedding calibration. An orthogonal direction to ours is embedding calibration [33, 37]. Calibration methods provide e\uffffective ways to improve the existing embeddings on various tasks, by altering the embedding vectors in subspaces with low accuracy [33], by reweighing the output probabilities in the respective tasks [37], or by matrix factorization [13]. On the contrary, ReliK does not alter the embeddings nor the prediction scores but provides insights on the performance of the embeddings in speci\uffffc subgraphs. Evaluation of embeddings. ReliK bears an interesting connection with ranking-based quality measures, in particular with the mean reciprocal rank (MRR) and HITS@k for head, tail, and relation prediction [6, 8, 10, 33, 36, 45]. For a triple (?, ?,C) with unknown head MRR is the average of the reciprocal of ranks of the correct heads in the KG given the relationship A and tail C. As such, ReliK, can be considered a generalization of MRR as the MRR for triples of the kind (?, ?,C) and (\u2318, ?, ?). As the triples (?,A,C) are included in (?, ?,C), ReliK includes more information than MRR. Moreover, even though MRR and HITS@k provide a global indication of performance, ReliK is suitable for local analysis. Yet, current global measures have been recently shown to be biased towards high-degree nodes [38]. 6 CONCLUSION Aiming to develop a measure that prognosticates the performance of a knowledge graph embedding on a speci\uffffc subgraph, we introduced ReliK, a KGE reliability measure agnostic to the choice of the embeddings, the dataset, and the task. To allow for e\uffffcient computation, we proposed a sampling-based approximation, which we show to achieve similar results to the exact ReliK in less than half of the time. Our experiments con\uffffrm that ReliK anticipates the performance on a number of common and complex downstream tasks for KGEs. In particular, apart from correlating with accuracy in prediction and classi\uffffcation tasks, ReliK discerns the right answers to complex logical queries and guides the mining of high-con\uffffdence rules on subgraphs dense in terms of ReliK score. These results suggest that ReliK may be used in other domains, as well as a debugging tool for KGEs. In the future, we aim to design reliability measures for structure-based graph embeddings [42] and methods for authenticating [29] embedding-based computations. Ethical use of data. The measurements performed in this study are all based on datasets that are publicly available for research purposes. We cite the original sources. ACKNOWLEDGMENTS M. Egger is supported by Horizon Europe and Innovation Fund Denmark grant E115712-AAVanguard and the Danish Council for Independent Research grant DFF-1051-00062B. I. Bordino and F. Gullo are supported by Project ECS 0000024 Rome Technopole CUP B83C22002820006, \u201cPNRR Missione 4 Componente 2 Investimento 1.5,\u201d funded by European Commission NextGenerationEU. W. Ma is supported by the China Scholarship Council grant 202110320012. A. Anagnostopoulos is supported by the ERC Advanced Grant 788893 AMDROMA, the EC H2020RIA project SoBigData++ (871042), the PNRR MUR project PE0000013-FAIR, the PNRR MUR project IR0000013-SoBigData.it, and the MUR PRIN project 2022EKNE5K Learning in Markets and Society. ReliK: A Reliability Measure for Knowledge Graph Embeddings WWW \u201924, May 13\u201317, 2024, Singapore, Singapore."
16
+ }
intro_8K/test_introduction_short_2404.16587v1.json ADDED
@@ -0,0 +1,17 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16587v1",
3
+ "title": "Understanding Privacy Risks of Embeddings Induced by Large Language Models",
4
+ "abstract": "Large language models (LLMs) show early signs of artificial general\nintelligence but struggle with hallucinations. One promising solution to\nmitigate these hallucinations is to store external knowledge as embeddings,\naiding LLMs in retrieval-augmented generation. However, such a solution risks\ncompromising privacy, as recent studies experimentally showed that the original\ntext can be partially reconstructed from text embeddings by pre-trained\nlanguage models. The significant advantage of LLMs over traditional pre-trained\nmodels may exacerbate these concerns. To this end, we investigate the\neffectiveness of reconstructing original knowledge and predicting entity\nattributes from these embeddings when LLMs are employed. Empirical findings\nindicate that LLMs significantly improve the accuracy of two evaluated tasks\nover those from pre-trained models, regardless of whether the texts are\nin-distribution or out-of-distribution. This underscores a heightened potential\nfor LLMs to jeopardize user privacy, highlighting the negative consequences of\ntheir widespread use. We further discuss preliminary strategies to mitigate\nthis risk.",
5
+ "authors": "Zhihao Zhu, Ninglu Shao, Defu Lian, Chenwang Wu, Zheng Liu, Yi Yang, Enhong Chen",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "cs.CL",
9
+ "cats": [
10
+ "cs.CL",
11
+ "cs.AI"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "LLM Fairness",
15
+ "gt": "Large language models [10, 27] have garnered significant attention for their exceptional capabilities across a wide range of tasks like natural language generation [7, 37], question answering [35, 55], and sentiment analysis [5, 52]. Nonetheless, it\u2019s observed that large language models can confi- dently assert non-existent facts during their reasoning process. For example, Bard, Google\u2019s AI chatbot, concocted information in the first demo that the James Webb Space Telescope had taken the first pictures of a planet beyond our solar system [12]. Such a halluci- nation problem [31, 54] of large language models is a significant barrier to artificial general intelligence [22, 44]. A primary strat- egy for tackling the issue of hallucinations is to embed external knowledge in the form of embeddings into a vector database [19, 23], making them accessible for retrieval augmented generation by large language models [6, 13]. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]. Conference\u201917, July 2017, Washington, DC, USA \u00a9 2024 Association for Computing Machinery. ACM ISBN 978-x-xxxx-xxxx-x/YY/MM...$15.00 https://doi.org/10.1145/nnnnnnn.nnnnnnn An embedding model [43, 49] encodes the original objects\u2019 broad semantic information by transforming the raw objects (e.g., text, image, user profile) into real-valued vectors of hundreds of dimen- sions. The advancement of large language models enhances their ability to capture and represent complex semantics more effectively, such that an increasing number of businesses (e.g., OpenAI [40] and Cohere [1]) have launched their embedding APIs based on large language models. Since embeddings are simply real-valued vectors, it is widely believed that it is challenging to decipher the semantic information they contain. Consequently, embeddings are often viewed as secure and private, as noted by [50], leading data owners to be less concerned about safeguarding the privacy of em- beddings compared to raw external knowledge. However, in recent years, multiple studies [30, 38, 50] have highlighted the risk of em- beddings compromising privacy. More specifically, the pre-trained LSTM (Long Short-Term Memory) networks [25] or other language models can recover parts of the original texts and author informa- tion from text embeddings, which are generated by open-source embedding models. Although current studies have exposed security weaknesses in embeddings, the effects of large language models on the privacy of these embeddings have not been fully explored. A pressing issue is whether LLMs\u2019 emergent capabilities enable at- tackers to more effectively decipher sensitive information from text embeddings. This issue is driven not only by the proliferation of large language models but also by the availability of the embedding APIs based on LLMs, which permits attackers to gather numerous text-embedding pairs to build their attack models. To this end, we establish a comprehensive framework that lever- ages a large language model (LLM) to gauge the potential privacy leakage from text embeddings produced by the open-sourced em- bedding model. From a security and privacy perspective, LLM serves as the attacker, and the embedding model acts as the target, while the goal is to employ the attack model to retrieve sensitive and confidential information from the target model. Specifically, our approach begins with fine-tuning attack models to enable text re- construction from the outputs of the target model. Following this, we assess the privacy risks of embeddings via two types of attack scenarios. On the one hand, we recover the texts from their embed- dings in both in-distribution and out-of-distribution scenarios. On the other hand, we identify certain private attributes of various en- tities in the original text (such as birthdays, nationalities, criminal charges, etc.) and predict these attributes from the text embeddings. This prediction is determined by the attribute that exhibits the highest cosine similarity between the text embedding and the cor- responding attribute embedding. Consequently, this method does not necessitate training with supervised data. Should the target embedding model decline to generate embeddings for attributes with extremely brief texts described (1-2 words) out of embedding stealing concerns, we introduce an external embedding model that acts as a proxy to project the original text and the attribute value arXiv:2404.16587v1 [cs.CL] 25 Apr 2024 Conference\u201917, July 2017, Washington, DC, USA Zhihao Zhu, Ninglu Shao, Defu Lian, Chenwang Wu, Zheng Liu, Yi Yang, Enhong Chen in the same embedding space. Specifically, this external model is tasked with embedding the attribute and the text reconstructed by the attack model, the latter being derived from text embeddings produced by the target embedding model. The evaluation of text reconstruction reveals that 1) a larger attack language model, when fine-tuned with a sufficient amount of training data, is capable of more accurately reconstructing texts from their embeddings in terms of metrics like BLEU [41], regardless of whether the texts are in-distribution or out-of-distribution; 2) in-distributed texts are more readily reconstructed than out-of- distributed texts, with the reconstruction accuracy for in-distributed texts improving as the attack model undergoes training with more data; 3) the attack model can improve the reconstruction accuracy as the expressiveness of the target embedding models increases. The evaluation of attribute prediction demonstrates that 1) at- tributes can be predicted with high accuracy across various domains, including encyclopedias, news, medical, and legislation. This means the attacker is capable of inferring details like a patient\u2019s health condition, a suspect\u2019s criminal charges, and an individual\u2019s birthday from a set of seemingly irrelevant digital vectors, highlighting a significant risk of privacy leakage; 2) generally speaking, enlarg- ing the scale of the external/target embedding model substantially enhances the accuracy of attribute prediction; 3) when the target model denies embedding services for very short texts, the most effective approach using reconstructed text by the attack model can achieve comparable performance to using original text, when the target model and the external embedding model are configured to be the same. From the experiments conducted, we find that knowledge repre- sentations merely through numerical vectors encompass abundant semantic information. The powerful generative capability of large language models can continuously decode this rich semantic infor- mation into natural language. If these numerical vectors contain sensitive private information, large language models are also capa- ble of extracting such information. The development trend of large language models is set to increase these adverse effects, underscor- ing the need for our vigilance. Our research establishes a foundation for future studies focused on protecting the privacy of embeddings. For instance, the finding that accuracy in text reconstruction di- minishes with increasing text length indicates that lengthening texts may offer a degree of privacy protection. Furthermore, the ability of the attack model to reconstruct out-of-distributed texts points towards halting the release of original texts associated with released embeddings as a precaution.",
16
+ "main_content": "Table 1: Sizes of attack models and embedding models Attack Model GPT2 GPT2-Large GPT2-XL #parameters 355M 744M 1.5B Target Model SimCSE BGE-Large-en E5-Large-v2 #parameters 110M 326M 335M We employ pre-trained GPT2 [45] of varying sizes as the attacking model to decipher private information from embeddings produced by the target embedding models such as SimCSE [21], BGE-Large-en [53], and E5-Large-v2 [51]. We hypothesize that larger embedding models, due to their capacity to capture more information, are more likely to be exposed to a greater privacy risk. Consequently, all these models are designated as the target models, and their numbers of parameters are detailed in Table 1. It\u2019s important to note that we treat the target model as a black box, meaning we do not have access to or knowledge of its network architecture and parameters. Figure 1 showcases the fine-tuning process for the attack model. Initially, the example text \u201cDavid is a doctor.\u201d is inputted into the target embedding model to generate its respective embedding. This embedding is then used as the input for the attack model, which aims to reconstruct the original text based solely on this embedding. An EOS (End-of-Sentence) token is appended to the text embedding to signal the end of the embedding input. The attacker\u2019s training goal is to predict the t-th token of the original text based on text embedding and the preceding (t-1) tokens of the original text. In the testing phase, the attacker employs beam search [18] to progressively generate tokens up to the occurrence of the EOS. Once the attack model has been fine-tuned, we evaluate the privacy risks of embeddings through two distinct attack scenarios: text reconstruction and attribute prediction. Target Embedding Model Sentence Embedding [CLS] Attack Model David is a doctor . [EOS] [EOS] David is a doctor . [EOS] David is a doctor . Training Stage Figure 1: The fine-tuning of the foundation attack model. Initially, the attacker queries the target embedding model to convert the collected text into text embeddings. To signify the completion of the embedding input, an EOS (Endof-Sentence) token is appended to the text embedding. Next, the attacker selects the pre-trained GPT2 model as the attack model and uses the collected text and corresponding text embeddings as a dataset to train the attack model. When a text embedding is input, the attack model is trained to sequentially reconstruct the related original text. 2.2 Evaluation of Text Reconstruction Settings For each text in the test set, we reconstruct it using the attack model based on its embedding generated by the target model. To evaluate the reconstruction accuracy, we employ two metrics: BLEU (Bilingual Evaluation Understudy) [41] and ROUGE (Recall-Oriented Understudy for Gisting Evaluation) [33]. Specifically, we utilize BLEU-1 and ROUGE-1, which are based solely on unigrams, as they yield better results compared to BLEU and ROUGE based on other n-grams [4]. These metrics gauge the similarity between the Understanding Privacy Risks of Embeddings Induced by Large Language Models Conference\u201917, July 2017, Washington, DC, USA Table 2: Reconstruction attack performance against different embedding models on the wiki dataset. The best results are highlighted in bold. \u2217represents that the advantage of the best-performed attack model over other models is statistically significant (p-value < 0.05). Training data wiki-small Wiki-large Wiki-xl Target model Attack model BLEU-1 ROUGE-1 BLEU-1 ROUGE-1 BLEU-1 ROUGE-1 SimCSE GPT2 0.3184\u00b10.0010 0.3212\u00b10.0010\u2217 0.4512\u00b10.0010\u2217 0.4961\u00b10.0011\u2217 0.4699\u00b10.0016 0.5256\u00b10.0010 GPT2_large 0.2996\u00b10.0014 0.2913\u00b10.0012 0.4349\u00b10.0013 0.4678\u00b10.0009 0.5293\u00b10.0010 0.5930\u00b10.0009 GPT2_xl 0.3196\u00b10.0011\u2217 0.3112\u00b10.0013 0.4455\u00b10.0015 0.4833\u00b10.0010 0.5331\u00b10.0011\u2217 0.5987\u00b10.0007\u2217 BGE-Large-en GPT2 0.3327\u00b10.0014\u2217 0.3288\u00b10.0011\u2217 0.4173\u00b10.0016 0.4483\u00b10.0009 0.4853\u00b10.0011 0.5337\u00b10.0016 GPT2_large 0.2935\u00b10.0013 0.2783\u00b10.0011 0.4446\u00b10.0011 0.4788\u00b10.0006 0.5425\u00b10.0012 0.5998\u00b10.0010 GPT2_xl 0.3058\u00b10.0019 0.3043\u00b10.0012 0.4689\u00b10.0011\u2217 0.5057\u00b10.0007\u2217 0.5572\u00b10.0008\u2217 0.6151\u00b10.0007\u2217 E5-Large-v2 GPT2 0.3329\u00b10.0013\u2217 0.3341\u00b10.0012\u2217 0.4838\u00b10.0005 0.5210\u00b10.0008 0.5068\u00b10.0016 0.5522\u00b10.0014 GPT2_large 0.3093\u00b10.0009 0.2875\u00b10.0012 0.4700\u00b10.0011 0.4990\u00b10.0010 0.5679\u00b10.0011 0.6220\u00b10.0011 GPT2_xl 0.3083\u00b10.0013 0.3017\u00b10.0013 0.4993\u00b10.0013\u2217 0.5274\u00b10.0011\u2217 0.5787\u00b10.0007\u2217 0.6378\u00b10.0009\u2217 original and reconstructed texts. Given that the temperature setting influences the variability of the text produced by GPT, a non-zero temperature allows for varied reconstructed texts given the same text embedding. We calculate the reconstruction accuracy across 10 trials to obtain mean and standard error for statistical analysis. Based on these outcomes, we compare the performance of various attack and target configurations using a two-sided unpaired t-test [16]. The evaluation is conducted on seven datasets, including wiki [47], wiki-bio [29], cc-news [24], pile-pubmed [20], triage [32], cjeu-terms [9], and us-crimes [9]. Details and statistics for these datasets are presented in Table 5. Results of In-Distributed Texts We create three subsets from the wiki dataset of varying sizes (i.e., wiki-small, wiki-large, and wiki-xl) and use these subsets to finetune the attack models, resulting in three distinct versions of the attack model. The performance of these attack models is then assessed using held-out texts from the wiki dataset. The experimental results presented in Table 2 illustrate that the size of the training datasets and the models have a considerable influence on the reconstruction accuracy. To elaborate, regardless of the attack model employed, it\u2019s found that larger embedding models, such as BGE-Large-en and E5Large-v2, enable more effective text reconstruction compared to others like SimCSE. This is attributed to the strong expressivity of the large target embedding model, which allows it to retain more semantic information from the original text, proving beneficial for the embedding\u2019s application in subsequent tasks. Moreover, provided that the attack model is adequately fine-tuned and the embedding model is expressive enough, the accuracy of text reconstruction improves as the size of the attack model increases. This improvement is reflected in the table\u2019s last two columns, showing higher accuracy as the attack model progresses from GPT2 to GPT2_large, and finally to GPT2_xl, attributed to the improved generative capabilities of larger models. Additionally, adequately fine-tuning the large language models is a prerequisite for their effectiveness in text reconstruction tasks. When the embeddings are less informative, fine-tuning the attack model demands a larger amount of training data. This is highlighted by the lesser performance of GPT2_xl compared to GPT2 when fine-tuning with wiki-small, which reverses after fine-tuning with Wiki-xl. Moreover, GPT2_xl outperforms GPT2 in reconstructing text from SimCSE\u2019s embeddings as the fine-tuning dataset shifts from \u201cWiki-large\" to \u201cWiki-xl\". To summarize, a larger attack model, when fine-tuned with an increased amount of training data, is capable of more accurately reconstructing texts from the embeddings generated by target embedding models with higher expressivity. Hence, a straightforward approach for safeguarding privacy involves not disclosing the original dataset when publishing its embedding database. Nonetheless, it remains to be investigated whether an attack model, fine-tuned on datasets with varying distributions, would be effective. Results of Out-of-Distributed Texts To address the unresolved question, we assume that the attacker model is trained on the Wiki dataset with more rigorous descriptions of world knowledge, yet the texts used for testing do not originate from this dataset. This implies that the distribution of the texts used for testing differs from that of the texts used for training. We assess the reconstruction capability of this attack model on sample texts from six other datasets: wiki_bio, cc_news, pile_pubmed, triage, us_crimes, and cjeu_terms. The results presented in Table 3 show that the attack model retains the capability to accurately reconstruct texts from the embeddings, even with texts derived from different distributions than its training data. In greater detail, the best reconstruction accuracy of the attack model on texts from 5 out of 6 datasets is equal to or even exceeds that of a model fine-tuned on wiki-small, a relatively small subset of Wikipedia. As a result, if we release embeddings for the wiki_bio, cc_news, pile_pubmed, us_crimes, and cjeu_terms datasets, an attack model fine-tuned on the Wiki-xl dataset can extract semantic information from them with relatively high confidence. This suggests that simply withholding the original raw data does not prevent an attacker from reconstructing the original text information from their released embeddings. To understand which kind of text data is more easily recovered from text embedding based on the attack model fine-tuned with Wiki-xl, we also analyze the similarity between the six evaluation datasets and Wiki based on previous works [28]. The results reported in figure 2 show that texts from evaluation datasets with higher similarity to the training data are reconstructed more accurately. To elaborate, Wiki-bio, which compiles biographies from Wikipedia, shares the same origin as the training dataset. Despite covering different content, the language style is very similar. Consequently, the quality of the attack\u2019s text reconstruction for this Conference\u201917, July 2017, Washington, DC, USA Zhihao Zhu, Ninglu Shao, Defu Lian, Chenwang Wu, Zheng Liu, Yi Yang, Enhong Chen Table 3: Reconstruction attack performance on different datasets. The best results are highlighted in bold. \u2217represents that the advantage of the best-performed attack model over other models is statistically significant (p-value < 0.05). Test dataset wiki_bio cc_news pile_pubmed Target model Attack model BLEU-1 ROUGE-1 BLEU-1 ROUGE-1 BLEU-1 ROUGE-1 SimCSE GPT2 0.5428\u00b10.0025 0.5596\u00b10.0022 0.3881\u00b10.0012 0.4487\u00b10.0011 0.3623\u00b10.0015 0.3976\u00b10.0014 GPT2_large 0.5859\u00b10.0011 0.6272\u00b10.0015 0.4314\u00b10.0015 0.5020\u00b10.0014 0.4054\u00b10.0009 0.4427\u00b10.0011 GPT2_xl 0.5878\u00b10.0012\u2217 0.6329\u00b10.0016\u2217 0.4355\u00b10.0010\u2217 0.5084\u00b10.0008\u2217 0.4133\u00b10.0013\u2217 0.4505\u00b10.0010\u2217 BGE-Large-en GPT2 0.4773\u00b10.0025 0.5327\u00b10.0020 0.3906\u00b10.0014 0.4314\u00b10.0013 0.3297\u00b10.0013 0.3581\u00b10.0011 GPT2_large 0.5497\u00b10.0018 0.6015\u00b10.0009 0.4339\u00b10.0009 0.4867\u00b10.0012 0.3819\u00b10.0013 0.4074\u00b10.0014 GPT2_xl 0.5652\u00b10.0030\u2217 0.6200\u00b10.0015\u2217 0.4480\u00b10.0008\u2217 0.5038\u00b10.0006\u2217 0.3955\u00b10.0019\u2217 0.4218\u00b10.0017\u2217 E5-Large-v2 GPT2 0.5312\u00b10.0015 0.5532\u00b10.0014 0.4065\u00b10.0009 0.4428\u00b10.0010 0.3673\u00b10.0012 0.3995\u00b10.0009 GPT2_large 0.5695\u00b10.0014 0.6206\u00b10.0017 0.4521\u00b10.0013 0.5006\u00b10.0013 0.4174\u00b10.0006 0.4523\u00b10.0007 GPT2_xl 0.5823\u00b10.0012\u2217 0.6354\u00b10.0017\u2217 0.4645\u00b10.0014\u2217 0.5173\u00b10.0015\u2217 0.4316\u00b10.0009\u2217 0.4683\u00b10.0009\u2217 Test dataset triage us_crimes cjeu_terms Target model Attack model BLEU-1 ROUGE-1 BLEU-1 ROUGE-1 BLEU-1 ROUGE-1 SimCSE GPT2 0.0932\u00b10.0007 0.1555\u00b10.0010 0.3092\u00b10.0006 0.3238\u00b10.0008 0.3485\u00b10.0010 0.3646\u00b10.0009 GPT2_large 0.1188\u00b10.0006 0.1756\u00b10.0006 0.3268\u00b10.0006\u2217 0.3406\u00b10.0003 0.3755\u00b10.0012 0.3954\u00b10.0008\u2217 GPT2_xl 0.1299\u00b10.0006\u2217 0.2004\u00b10.0010\u2217 0.3226\u00b10.0005 0.3429\u00b10.0006\u2217 0.3739\u00b10.0007\u2217 0.3914\u00b10.0006 BGE-Large-en GPT2 0.0828\u00b10.0007 0.1072\u00b10.0008 0.2705\u00b10.0008 0.2834\u00b10.0010 0.3271\u00b10.0008 0.3378\u00b10.0009 GPT2_large 0.1376\u00b10.0008\u2217 0.1659\u00b10.0009\u2217 0.2755\u00b10.0008 0.3155\u00b10.0004 0.3639\u00b10.0016\u2217 0.3785\u00b10.0014\u2217 GPT2_xl 0.1232\u00b10.0008 0.1640\u00b10.0007 0.2775\u00b10.0008\u2217 0.3207\u00b10.0006\u2217 0.3522\u00b10.0012 0.3794\u00b10.0012 E5-Large-v2 GPT2 0.1451\u00b10.0007 0.2265\u00b10.0008 0.2825\u00b10.0009 0.2938\u00b10.0010 0.3418\u00b10.0018 0.3668\u00b10.0016 GPT2_large 0.2272\u00b10.0004\u2217 0.3172\u00b10.0007\u2217 0.3066\u00b10.0007 0.3308\u00b10.0006 0.3621\u00b10.0013 0.3980\u00b10.0012 GPT2_xl 0.2230\u00b10.0008 0.3177\u00b10.0007 0.3164\u00b10.0007\u2217 0.3459\u00b10.0005\u2217 0.3688\u00b10.0009\u2217 0.4101\u00b10.0011\u2217 dataset is the highest. In other words, simply withholding the original text associated with embeddings does not adequately safeguard sensitive information from being extracted, as fine-tuning the attack model with datasets that are similar in terms of language style or content can elevate the risk of privacy breaches. 0.5 0.6 0.7 0.8 Similarity 0.20 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60 Best BLEU-1 wiki_bio cc_news pile_pubmed triage us_crimes cjeu_terms 0.5 0.6 0.7 0.8 Similarity 0.30 0.35 0.40 0.45 0.50 0.55 0.60 0.65 Best ROUGE-1 wiki_bio cc_news pile_pubmed triage us_crimes cjeu_terms Figure 2: The similarity between the evaluation datasets and the Wiki dataset v.s. the best reconstruction performance. The disclosure of even a small collection of original texts significantly amplifies the risk, as illustrated in figure 3 which presents results from the pile-pubmed dataset where the attack model undergoes further fine-tuning with these original texts. The availability of more original texts linked to the embedding directly correlates with an increased risk of sensitive information leakage. Specifically, the BLEU-1 score for GPT2-xl against the BGE-large-en embedding model sees a 2% increase when the attack model is supplemented with 10k original texts. Notably, even with the use of a limited amount of target data (1K samples), the improvements in BLEU-1 score are considerable. In summary, concealing the datasets of published embeddings does not effectively prevent information leakage from these embeddings. This is because their underlying information can still be extracted by an attack model that has been fine-tuned on datasets of similar style or content. Furthermore, revealing even a small number of samples can significantly improve the extraction accuracy. 0 2000 4000 6000 8000 10000 Number of disclosed original texts 0.34 0.36 0.38 0.40 0.42 0.44 0.46 BLEU-1 T arget Model = SimCSE 0 2000 4000 6000 8000 10000 Number of disclosed original texts T arget Model = BGE-Large-en 0 2000 4000 6000 8000 10000 Number of disclosed original texts T arget Model = E5-Large-v2 GPT2 GPT2_large GPT2_xl 0 2000 4000 6000 8000 10000 Number of disclosed original texts 0.36 0.38 0.40 0.42 0.44 0.46 0.48 0.50 0.52 ROUGE-1 T arget Model = SimCSE 0 2000 4000 6000 8000 10000 Number of disclosed original texts T arget Model = BGE-Large-en 0 2000 4000 6000 8000 10000 Number of disclosed original texts T arget Model = E5-Large-v2 Figure 3: Impact of disclosed original texts volume on text reconstruction accuracy. Each column represents a different target embedding model. The first and second rows represent the reconstruction performance concerning the BLEU-1 and ROUGE-1 metrics, respectively. Results of Varying Text Lengths Given GPT\u2019s capability to generate outputs of varying lengths with consistent meanings, exploring how text length impacts reconstruction quality becomes pertinent. With the average text length in the Wiki dataset being 21.32, as noted in Table 5, we selected three subsets: Wiki-base, Wiki-medium, and Wiki-long, each with 5,000 Understanding Privacy Risks of Embeddings Induced by Large Language Models Conference\u201917, July 2017, Washington, DC, USA samples but average lengths of 20, 40, and 80 words, respectively. Among them, Wiki-medium and Wiki-long are created by extending texts in Wiki-base via the GPT4 API [3] accessible by OpenAI. GPT2 GPT2_large GPT2_xl Attack model 0.0 0.1 0.2 0.3 0.4 0.5 BLEU-1 GPT2 GPT2_large GPT2_xl Attack model 0.0 0.1 0.2 0.3 0.4 0.5 BLEU-1 GPT2 GPT2_large GPT2_xl Attack model 0.0 0.1 0.2 0.3 0.4 0.5 0.6 BLEU-1 Base Medium Long GPT2 GPT2_large GPT2_xl Attack model 0.0 0.1 0.2 0.3 0.4 0.5 0.6 ROUGE-1 (a) SimCSE GPT2 GPT2_large GPT2_xl Attack model 0.0 0.1 0.2 0.3 0.4 0.5 0.6 ROUGE-1 (b) BGE-Large-en GPT2 GPT2_large GPT2_xl Attack model 0.0 0.1 0.2 0.3 0.4 0.5 0.6 ROUGE-1 (c) E5-Large-v2 Figure 4: Influence of the text length. Error bars represent the mean reconstruction accuracy with 95% confidence intervals obtained from 10 independent trials, and each column corresponds to a different target embedding model. The results of text reconstruction are depicted in figure 4. It\u2019s evident that embeddings from shorter texts are more susceptible to being decoded, posing a greater risk to privacy. For example, the ROUGE-1 score of GPT2-xl on BGE-Large-en fell by over 43.3% as the test text length increased from Wiki-base to Wiki-long. This decline can be attributed to the fixed length of text embeddings, which remain constant regardless of the original text\u2019s length. Consequently, embeddings of longer texts, which encapsulate more information within the same embedding size, make accurate text recovery more formidable. Therefore, extending the original texts could potentially fortify the security of released embeddings. 2.3 Evaluation of Sensitive Attribute Prediction Target/External Embedding Model doctor teacher chef 0.7 0.2 0.5 cosine similarity original/reconstructed text candidate attribute values Attribute Embedding Text Embedding Similarity Score Attribute Inference Figure 5: The inference framework of sensitive attributes. The attacker employs the same embedding model to convert original text and candidate attributes into embeddings. The attacker then identifies the attribute that exhibits the highest cosine similarity between its embedding and text embedding as sensitive information of the original text. Settings In contrast to text reconstruction, our primary concern is whether the attacker can extract specific sensitive information from the text embeddings. The task of predicting sensitive attributes more clearly illustrates the issue of privacy leakage through embeddings, compared to the task of reconstructing text. Initially, we pinpoint sensitive or crucial data within the datasets: wiki-bio, triage, cjeuterms, and us-crimes. In particular, we examined patient dispositions and blood pressure readings in the triage dataset, and in the wiki-bio dataset, we focused on individuals\u2019 nationality, birthdate, and profession. Additionally, we looked into criminal charges in the us_crimes dataset and considered legal terminology as important information in the cjeu_terms dataset. Given the extensive variety of attributes and the scarcity of labeled data for each, it\u2019s impractical for the privacy attacker to train a dedicated model for each sensitive attribute, unlike the approach taken in previous studies [50]. Therefore, we predicted the sensitive information from text embedding by selecting the attribute that exhibits the highest cosine similarity between text embedding and its embedding. This approach is effective across various attributes and does not necessitate training with supervised data. However, a challenge arises because the text describing the attribute is often very brief (sometimes just a single word), and the target embedding model may refuse to produce embeddings for such short texts due to concerns about embedding theft [17, 34]. As a result, it becomes difficult to represent the original text and the attribute value within the same embedding space. To overcome this, we introduce an external embedding model to serve as an intermediary. This external model is responsible for embedding both the attribute and the reconstructed text, which has been derived from text embeddings by the attack model. Consequently, texts and attributes are embedded within the same space via reconstructing texts from the text embeddings, allowing for accurate similarity measurement. The overall process for inferring attributes is depicted in figure 5. The outcomes of the attribute inference attack using various methods are presented in Table 4. The last row of this table, which relies on the similarity between attributes and original texts, lacks randomness in its measurement due to the direct comparison method employed. In contrast, the preceding rows, which are based on reconstructed texts, introduce randomness into the similarity measurement. This variability stems from employing a non-zero temperature setting, allowing for multiple independent text generations to produce diverse outputs. Results The findings presented in Table 4 reveal that sensitive information can be accurately deduced from text embeddings without the necessity for any training data, highlighting a significant risk of privacy leakage through embeddings. Specifically, attributes such as nationality and occupation can be inferred with high precision (an accuracy of 0.94) even when using texts that have been reconstructed. This level of accuracy is attributed to the embeddings\u2019 ability to capture the rich semantic details of texts coupled with the attack model\u2019s strong generative capabilities. Remarkably, the accuracy of predictions made using an external embedding model on reconstructed texts is on par with those made using the target embedding model on original texts in several cases. For an equitable comparison, the Conference\u201917, July 2017, Washington, DC, USA Zhihao Zhu, Ninglu Shao, Defu Lian, Chenwang Wu, Zheng Liu, Yi Yang, Enhong Chen Table 4: Attribute inference attack performance (accuracy) when using bge-large-en as the embedding model. The best results are highlighted in bold. \u2217denotes that the advantage of the best-performed attack model over other models is statistically significant (p-value < 0.05). The last row provides experimental results where the attacker has unrestricted access to the target embedding model and compares the embeddings of original text and candidate attributes. Other results are based on the attacker\u2019s usage of an external embedding model to calculate the similarities between reconstructed text and candidate attributes. Dataset wiki-bio triage cjeu_terms us_crimes Similarity Model Attack Model nationality birth_date occupation disposition blood_pressure legal_term criminal_charge SimCSE GPT2 0.875\u00b10.005 0.546\u00b10.008 0.869\u00b10.005 0.506\u00b10.001 0.135\u00b10.004 0.395\u00b10.010 0.377\u00b10.003 GPT2_large 0.882\u00b10.003 0.579\u00b10.011 0.878\u00b10.004 0.506\u00b10.001 0.119\u00b10.006 0.405\u00b10.006 0.396\u00b10.004 GPT2_xl 0.886\u00b10.003\u2217 0.596\u00b10.009\u2217 0.878\u00b10.005 0.514\u00b10.002\u2217 0.137\u00b10.003 0.428\u00b10.008\u2217 0.407\u00b10.004\u2217 BGE-Large-en GPT2 0.927\u00b10.005 0.525\u00b10.009 0.913\u00b10.006 0.505\u00b10.002 0.195\u00b10.004 0.496\u00b10.008 0.470\u00b10.005 GPT2_large 0.937\u00b10.005 0.560\u00b10.009 0.919\u00b10.003 0.504\u00b10.001 0.238\u00b10.005 0.538\u00b10.004 0.510\u00b10.005 GPT2_xl 0.941\u00b10.003\u2217 0.581\u00b10.008\u2217 0.922\u00b10.005 0.504\u00b10.001 0.254\u00b10.007\u2217 0.551\u00b10.006\u2217 0.527\u00b10.003\u2217 E5-Large-v2 GPT2 0.932\u00b10.004 0.670\u00b10.008 0.927\u00b10.005 0.514\u00b10.002 0.206\u00b10.005 0.492\u00b10.010 0.465\u00b10.004 GPT2_large 0.940\u00b10.003 0.729\u00b10.008 0.938\u00b10.003 0.521\u00b10.001 0.229\u00b10.003 0.544\u00b10.007 0.506\u00b10.006 GPT2_xl 0.941\u00b10.003 0.756\u00b10.008\u2217 0.940\u00b10.005 0.521\u00b10.002 0.230\u00b10.004 0.545\u00b10.006 0.524\u00b10.006\u2217 BGE-Large-en None 0.953 0.742 0.950 0.538 0.431 0.764 0.716 external embedding model was set to be identical to the target embedding model, although, in practice, the specifics of the target embedding model might not be known. The inference accuracy improves when employing a larger attack model for text reconstruction and a more expressive embedding model. This outcome aligns with observations from text reconstruction tasks. Although the accuracy of attribute prediction with reconstructed text falls short in some instances compared to using original texts, the ongoing advancement in large language models is rapidly closing this gap. Hence, the continuous evolution of these models is likely to escalate the risks associated with privacy breaches, emphasizing the critical need for increased awareness and caution in this domain. 3 DISCUSSIONS AND LIMITATIONS This study delves into the implications of large language models on embedding privacy, focusing on text reconstruction and sensitive information prediction tasks. Our investigation shows that as the capabilities of both the sophisticated attack foundation model and the target embedding model increase, so does the risk of sensitive information leakage through knowledge embeddings. Furthermore, the risk intensifies when the attack model undergoes fine-tuning with data mirroring the distribution of texts linked to the released embeddings. To protect the privacy of knowledge embeddings, we propose several strategies based on our experimental findings: \u2022 Cease the disclosure of original texts tied to released embeddings: Preventing the attack model from being finetuned with similar datasets can be achieved by introducing imperceptible noise into the texts or embeddings. This aims to widen the gap between the dataset of original or reconstructed texts and other analogous datasets. \u2022 Extend the length of short texts before embedding: Enhancing short texts into longer versions while preserving their semantic integrity can be accomplished using GPT4 or other large language models with similar generative capacities. \u2022 Innovate new privacy-preserving embedding models: Develop embedding models capable of producing high-quality text embeddings that are challenging to reverse-engineer into the original text. This entails training models to minimize the cloze task loss while maximizing the reconstruction loss. However, our study is not without limitations. Firstly, due to substantial training expenses, we did not employ an attack model exceeding 10 billion parameters, though we anticipate similar outcomes with larger models. Secondly, while we have quantified the impact of various factors such as model size, text length, and training volume on embedding privacy, and outlined necessary guidelines for its protection, we have not formulated a concrete defense mechanism against potential embedding reconstruction attacks. Currently, effective safeguards primarily rely on perturbation techniques and encryption methods. Perturbation strategies, while protective, can compromise the embedding\u2019s utility in subsequent applications, necessitating a balance between security and performance. Encryption methods, though secure, often entail considerable computational demands. Future work will explore additional factors influencing embedding privacy breaches and seek methods for privacy-preserving embeddings without sacrificing their utility or incurring excessive computational costs. 4 METHODS 4.1 Preliminary Prior to delving into the attack strategies, we will commence with the introduction of the language models and evaluation datasets. Language Model A language model is a technique capable of assessing the probability of a sequence of words forming a coherent sentence. Traditional language models, such as statistical language models [8, 46] and grammar rule language models [26, 48], rely on heuristic methods to predict word sequences. While these conventional approaches may achieve high predictive accuracy for limited or straightforward sentences within small corpora, they often struggle to provide precise assessments for the majority of other word combinations. Understanding Privacy Risks of Embeddings Induced by Large Language Models Conference\u201917, July 2017, Washington, DC, USA Table 5: Statistics of datasets Dataset Domain #sentences avg. sentence len wiki General 4,010,000 21.32 wiki_bio General 1,480 22.19 cc_news News 5,000 21.39 pile_pubmed Medical 5,000 21.93 triage Medical 4668 54.10 cjeu_terms Legal 2127 118.96 us_crimes Legal 4518 181.28 With increasing demands for the precision of language model predictions, numerous researchers have advocated for neural networkbased language models [15, 36] trained on extensive datasets. The performance of neural network-based language models steadily increases as model parameters are increased and sufficient training data is received. Upon reaching a certain threshold of parameter magnitude, the language model transcends previous paradigms to become a Large Language Model (LLM). The substantial parameter count within LLMs facilitates the acquisition of extensive implicit knowledge from corpora, thereby fostering the emergence of novel, powerful capabilities to handle more complex language tasks, such as arithmetic operations [39] and multi-step reasoning [42]. These capabilities have enabled large language models to comprehend and resolve issues like humans, leading to their rising popularity in many aspects of society. However, the significant inference capacity of large language models may be leveraged by attackers to reconstruct private information from text embeddings, escalating the risk of privacy leakage in embeddings. Datasets for Evaluation In this paper, we assess the risk of embedding privacy leaks on seven datasets, including wiki [47], wiki-bio [29], cc-news [24], pile-pubmed [20], triage [32], cjeu-terms [9], and us-crimes [9]. Wiki collects a large amount of text from the Wikipedia website. Since it has been vetted by the public, the text of the wiki is both trustworthy and high-quality. Wiki-bio contains Wikipedia biographies that include the initial paragraph of the biography as well as the tabular infobox. CC-News (CommonCrawl News dataset) is a dataset containing news articles from international news websites. It contains 708241 English-language news articles published between January 2017 and December 2019. Pile-PubMed is a compilation of published medical literature from Pubmed, a free biomedical literature retrieval system developed by the National Center for Biotechnology Information (NCBI). It has housed over 4000 biomedical journals from more than 70 countries and regions since 1966. Triage records the triage notes, the demographic information, and the documented symptoms of the patients during the triage phase in the emergency center. Cjeu-term and us-crimes are two datasets from the legal domain. Cjeu-term collects some legal terminologies of the European Union, while us-crimes gathers transcripts of criminal cases in the United States. For wiki, cc_news, and pile_pubmed datasets, we randomly sample data from the original sets instead of using the entire dataset because the original datasets are enormous. To collect the sentence texts for reconstruction, we utilize en_core_web_trf [2], an open-source tool, to segment the raw data into sentences. Then, we cleaned the data and filtered out sentences that were too long or too short. The statistical characteristics of the processed datasets are shown in Table 5. Ethics Statement For possible safety hazards, we abstained from conducting attacks on commercial embedding systems, instead employing open-source embedding models. Additionally, the datasets we utilized are all publicly accessible and anonymized, ensuring no user identities are involved. To reduce the possibility of privacy leakage, we opt to recover more general privacy attributes like occupation and nationality in the attribute prediction evaluation rather than attributes that could be connected to a specific individual, such as phone number and address. The intent of our research is to highlight the increased danger of privacy leakage posed by large language models in embeddings, suggest viable routes to safeguard embedding privacy through experimental analysis, and stimulate the community to develop more secure embedding models. 4.2 Threat Model Attack Goal This paper primarily investigates the extraction of private data from text embeddings. Given that such private data often includes extremely sensitive information such as phone numbers, addresses, and occupations, attackers have ample motivation to carry out these attacks. For instance, they might engage in illegal activities such as telecommunications fraud or unauthorized selling of personal data for economic gain. These practices pose significant threats to individual privacy rights, potentially lead to economic losses, increase social instability, and undermine trust mechanisms. Attack Knowledge To extract private information from text embeddings, attackers require some understanding of the models responsible for generating these embeddings. Intuitively, the more detailed the attacker\u2019s knowledge of the target embedding model, including its internal parameters, training specifics, etc., the more potent the attack\u2019s efficacy. However, in real-world scenarios, to safeguard their intellectual property and commercial interests, target models often keep their internal information confidential, providing access to users solely through a query interface O. Based on the above considerations, this study assumes that attackers only possess query permissions to the target embedding model, which responds with the corresponding text embedding based on the text inputted by the attacker. This query process does not reveal any internal information about the model. Furthermore, with the increasing popularity of large language models, numerous companies are opting to release their anonymized datasets for academic research purposes. Therefore, this study also assumes that attackers have the capability to gather specific open-source data (e.g., Wikipedia in our evaluation) and leverage interactions with target models to acquire associated text embeddings. Clearly, such low-knowledge attack settings help simulate real attack scenarios and more accurately assess the risks of embedding privacy leaks. Conference\u201917, July 2017, Washington, DC, USA Zhihao Zhu, Ninglu Shao, Defu Lian, Chenwang Wu, Zheng Liu, Yi Yang, Enhong Chen 4.3 Attack Methodology Attack Model Construction Attackers reconstructed original text from text embedding by training an attack model. However, they lack prior knowledge about which architecture should be used for the attack model. When using neural networks as the attack model, it is challenging to decide which neural network architecture should be employed. If the architecture of the attack model is not as expressive and complex as the embedding model, then it is difficult to ensure that it can extract private information. Considering the exceptional performance of large language models across various domains, particularly in text comprehension [11] and information extraction [14], employing them as attack models could be an appropriate choice. Based on these considerations, this study utilizes GPT-2 models of varying sizes as attack models, training them to reconstruct the original text from the embeddings. Training set generation. We start by retrieving the text embeddings from the collected open-source data \ud835\udc37using the query interface O of the target embedding model. \ud835\udc52\ud835\udc64= O(\ud835\udc64) (1) Then, we construct the training dataset \ud835\udc37\ud835\udc61\ud835\udc5f\ud835\udc4e\ud835\udc56\ud835\udc5bfor the attack model based on these embeddings and corresponding texts. \ud835\udc37\ud835\udc61\ud835\udc5f\ud835\udc4e\ud835\udc56\ud835\udc5b= {(\ud835\udc51\ud835\udc64,\ud835\udc64)|\ud835\udc64\u2208\ud835\udc37,\ud835\udc51\ud835\udc64= (\ud835\udc52\ud835\udc64, < \ud835\udc38\ud835\udc42\ud835\udc46>)} (2) where EOS(End-of-Sentence) token is a special token appended to the text embedding to signify the end of the embedding input. The attack model performs the opposite operation compared to the embedding model: for each sample (\ud835\udc51\ud835\udc64,\ud835\udc64) \u2208\ud835\udc37\ud835\udc61\ud835\udc5f\ud835\udc4e\ud835\udc56\ud835\udc5b, the attack model strives to recover the original text \ud835\udc64from the embedding \ud835\udc52\ud835\udc64. Attack model training. For each embedding input \ud835\udc51\ud835\udc64, the attack model is trained to output the first token of the original text. Subsequently, when the attacker provides \ud835\udc51\ud835\udc64along with the first \ud835\udc56\u22121 tokens of the original text, the attack model is tasked with predicting the \ud835\udc56\u2212\ud835\udc61\u210etoken of the original text with utmost accuracy. Formally, for a sample (\ud835\udc51\ud835\udc64,\ud835\udc64), the text reconstruction loss of the attack model is as follows: \ud835\udc3f\ud835\udf03(\ud835\udc51\ud835\udc64,\ud835\udc64) = \u2212 \u2211\ufe01\ud835\udc59 \ud835\udc56=1log \ud835\udc43(\ud835\udc65\ud835\udc56|\ud835\udc51\ud835\udc64,\ud835\udc64<\ud835\udc56,\ud835\udf03) (3) where \ud835\udc64<\ud835\udc56= (\ud835\udc651, ...,\ud835\udc65\ud835\udc56\u22121) represents the first \ud835\udc56\u22121 tokens of the original text \ud835\udc64. \ud835\udc59denotes the length of \ud835\udc64and \ud835\udf03represents the parameter of the attack model. Therefore, the training loss for the attack models is the sum of the text reconstruction loss across all samples in the training dataset: \ud835\udc3f\ud835\udf03= \u2212 \u2211\ufe01 (\ud835\udc51\ud835\udc64,\ud835\udc64)\u2208\ud835\udc37\ud835\udc61\ud835\udc5f\ud835\udc4e\ud835\udc56\ud835\udc5b\ud835\udc3f\ud835\udf03(\ud835\udc51\ud835\udc64,\ud835\udc64) (4) To evaluate the effectiveness of the attack models, we employ two tasks: text reconstruction and attribute prediction. Text Reconstruction Task Reconstructed text generation. In the text reconstruction task, the attack models aim to generate reconstructed text that closely resembles the original text. When generating a reconstructed text of length \ud835\udc59, the attacker aims for the generated text to have the highest likelihood among all candidate texts, which is formalized as follows: \ud835\udc64\u2217= arg max \ud835\udc64={\ud835\udc651,...,\ud835\udc65\ud835\udc59} \ud835\udc43(\ud835\udc651, ...,\ud835\udc65\ud835\udc59|\ud835\udc51\ud835\udc64,\ud835\udf03) (5) However, on the one hand, the number of candidate tokens for \ud835\udc65\ud835\udc56 in the reconstructed text \ud835\udc64is large, often exceeding 10, 000 in our experiments. On the other hand, the number of candidate texts exponentially increases as the text length \ud835\udc59grows. As a result, it becomes infeasible to iterate through all candidate texts of length \ud835\udc59and select the one with the highest likelihood as the output. A viable solution is greedy selection, which involves choosing the candidate token with the highest likelihood while progressively constructing the reconstructed text. \ud835\udc65\u2217 \ud835\udc56= arg max \ud835\udc65\ud835\udc56 \ud835\udc43(\ud835\udc65\ud835\udc56|\ud835\udc51\ud835\udc64,\ud835\udf03,\ud835\udc64\u2217 <\ud835\udc56) (6) \ud835\udc64\u2217= (\ud835\udc65\u2217 1, ...,\ud835\udc65\u2217 \ud835\udc59) (7) However, this approach may easily lead the generation process into local optima. To enhance the quality of the reconstructed text and improve generation efficiency, we employ beam search in the text reconstruction task. In beam search, if the number of beams is \ud835\udc58, the algorithm maintains \ud835\udc58candidates with the highest generation probability at each step. Specifically, in the initial state, for a given input text embedding \ud835\udc52\ud835\udc64, the attack model first records \ud835\udc58initial tokens with the highest generation likelihood (ignoring eos token) as candidate texts of length 1 (C\u2217 1 ). C\u2217 1 = arg max C1\u2282X,| C1|=\ud835\udc58 \u2211\ufe01 \ud835\udc65\u2208C1\ud835\udc43(\ud835\udc65|\ud835\udc51\ud835\udc64,\ud835\udf03) (8) where X represents the set of all tokens in the attack model. Subsequently, these \ud835\udc58initial tokens are combined with any token in X to create a text set of length 2. The attack model then iterates through these texts of length 2 and selects \ud835\udc58texts with the highest generation likelihood as candidate texts of length 2 (C\u2217 2 ). C\u2217 2 = arg max | C2|=\ud835\udc58 \u2211\ufe01 (\ud835\udc651,\ud835\udc652)\u2208C2\ud835\udc43(\ud835\udc651,\ud835\udc652|\ud835\udc51\ud835\udc64,\ud835\udf03) (9) C2 \u2282{(\ud835\udc651,\ud835\udc652)|\ud835\udc651 \u2208C\u2217 1,\ud835\udc652 \u2208X} (10) This process continues, incrementing the text length until the model generates an EOS token signaling the end of the generation process. Evaluation metric. We adopt BLEU-1 and ROUGE-1 to evaluate the reconstruction performance of the attack model, measuring how similar the reconstructed text \ud835\udc64\u2032 is to the original text \ud835\udc64. The formulas for these two metrics are as follows: BLEU-1 = \ud835\udc35\ud835\udc43\u00b7 \u00cd \ud835\udc65\u2208set(\ud835\udc64\u2032) min(count(\ud835\udc65, \ud835\udc64), count(\ud835\udc65, \ud835\udc64\u2032)) \u00cd \ud835\udc65\u2208set(\ud835\udc64\u2032) count(\ud835\udc65, \ud835\udc64\u2032) (11) ROUGE-1 = \u00cd \ud835\udc65\u2208set(\ud835\udc64) min(count(\ud835\udc65, \ud835\udc64), count(\ud835\udc65, \ud835\udc64\u2032)) \u00cd \ud835\udc65\u2208set(\ud835\udc64)count(\ud835\udc65, \ud835\udc64) (12) where set(\ud835\udc64) and set(\ud835\udc64\u2032) are the sets of all tokens in \ud835\udc64and \ud835\udc64\u2032. Count(\ud835\udc65,\ud835\udc64) and count(\ud835\udc65,\ud835\udc64\u2032) are the number of times \ud835\udc65appears in \ud835\udc64and \ud835\udc64\u2032, respectively. The brevity penalty (BP) is used to prevent short sentences from getting an excessively high BLEU-1 score. BLEU-1 primarily assesses the similarity between the reconstructed text and the original text, whereas ROUGE-1 places greater emphasis on the completeness of the reconstruction results and whether Understanding Privacy Risks of Embeddings Induced by Large Language Models Conference\u201917, July 2017, Washington, DC, USA the reconstructed text can encompass all the information present in the original text. Dataset similarity calculation. We assess the similarity between the evaluation datasets and the wiki dataset based on a simple character n-gram comparison [28]. Specifically, we employed the 5000 commonly used 4-gram characters in English as the feature set F of the dataset. Each dataset is then represented as a 5000dimensional feature vector. \u2212 \u2192 \ud835\udc39\ud835\udc37= [count(\ud835\udc531, \ud835\udc37), ..., count(\ud835\udc535000, \ud835\udc37)] (13) where \ud835\udc53\ud835\udc56\u2208F is a 4-gram character, and count(\ud835\udc53\ud835\udc56, \ud835\udc37) is the number of times \ud835\udc53\ud835\udc56appears in dataset \ud835\udc37. Finally, we calculate the Spearman correlation coefficient between the feature vectors of the two datasets to quantify their similarity. Sim(\ud835\udc371, \ud835\udc372) = Spearman(\u2212 \u2192 \ud835\udc391, \u2212 \u2192 \ud835\udc392) (14) where \u2212 \u2192 \ud835\udc391 and \u2212 \u2192 \ud835\udc392 are feature vectors of \ud835\udc371 and \ud835\udc372, respectively. The Spearman coefficient ranges from -1 to 1, where a higher value indicates a greater similarity between the two corresponding datasets. Attribute Prediction Task In the attribute prediction task, this study focuses on the attacker\u2019s ability to extract private information from the original text. We chose several private attributes from four datasets and evaluated the attack model\u2019s ability to infer the precise values of these private attributes from the released text embedding. For example, in the wiki-bio dataset, occupation is chosen as a private attribute. The attacker attempts to ascertain that the original text contains the private message \u201cdoctor\" by using the embedding of the sentence \u201cDavid is a doctor.\" Instead of training the attack model to perform the attribute prediction task, this study utilizes the embedding similarity between the text and the attribute value to determine the suggested attribute value of the original text. Its rationality is that the text contains relevant information about sensitive attributes, so their embeddings should be similar. The ideal approach would be to determine based on the similarity between embeddings of the original text and sensitive attribute embeddings. However, this poses challenges: (1) The original text is unknown. (2) Privacy attributes are often short texts, and in most cases, consist of only one word; such frequent anomalous (short) inputs might be considered malicious attempts and rejected. Therefore, this study (1) uses reconstructed text instead of the original text, and (2) employs an open-source external embedding model as a proxy to obtain embeddings instead of using the target embedding model. It\u2019s worth noting that this study did not directly search for privacy attributes in the reconstructed text due to potential inaccuracies in reconstructing privacy attributes, such as missing tokens or reconstructing synonyms of the attributes. Specifically, the attacker initially acquires embeddings of the reconstructed text and sensitive attribute from their proxy embedding model, subsequently computing the cosine similarity between them, and ultimately selecting the attribute with the highest similarity as the prediction result. Formally, the attacker infers the sensitive attribute \ud835\udc64\ud835\udc63as follows: \ud835\udc64\ud835\udc63= arg max \ud835\udc63\u2208C\ud835\udc63 \ud835\udc52\ud835\udc64\u2032 \u00b7 \ud835\udc52\ud835\udc63 |\ud835\udc52\ud835\udc64\u2032 ||\ud835\udc52\ud835\udc63| (15) where C\ud835\udc63is the set of candidate attribute values, \ud835\udc64\ud835\udc63is the predicted attribute value of the original text \ud835\udc64. \ud835\udc52\ud835\udc64\u2032 and \ud835\udc52\ud835\udc63are the embedding vectors of reconstructed text \ud835\udc64\u2032 and attribute value \ud835\udc63with the aid of the external embedding model, respectively. We employ accuracy as the metric to evaluate the performance of the attack model on the attribute prediction task."
17
+ }
intro_8K/test_introduction_short_2404.16621v1.json ADDED
@@ -0,0 +1,18 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16621v1",
3
+ "title": "Hippocrates: An Open-Source Framework for Advancing Large Language Models in Healthcare",
4
+ "abstract": "The integration of Large Language Models (LLMs) into healthcare promises to\ntransform medical diagnostics, research, and patient care. Yet, the progression\nof medical LLMs faces obstacles such as complex training requirements, rigorous\nevaluation demands, and the dominance of proprietary models that restrict\nacademic exploration. Transparent, comprehensive access to LLM resources is\nessential for advancing the field, fostering reproducibility, and encouraging\ninnovation in healthcare AI. We present Hippocrates, an open-source LLM\nframework specifically developed for the medical domain. In stark contrast to\nprevious efforts, it offers unrestricted access to its training datasets,\ncodebase, checkpoints, and evaluation protocols. This open approach is designed\nto stimulate collaborative research, allowing the community to build upon,\nrefine, and rigorously evaluate medical LLMs within a transparent ecosystem.\nAlso, we introduce Hippo, a family of 7B models tailored for the medical\ndomain, fine-tuned from Mistral and LLaMA2 through continual pre-training,\ninstruction tuning, and reinforcement learning from human and AI feedback. Our\nmodels outperform existing open medical LLMs models by a large-margin, even\nsurpassing models with 70B parameters. Through Hippocrates, we aspire to unlock\nthe full potential of LLMs not just to advance medical knowledge and patient\ncare but also to democratize the benefits of AI research in healthcare, making\nthem available across the globe.",
5
+ "authors": "Emre Can Acikgoz, Osman Batur \u0130nce, Rayene Bench, Arda An\u0131l Boz, \u0130lker Kesen, Aykut Erdem, Erkut Erdem",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "cs.LG",
9
+ "cats": [
10
+ "cs.LG",
11
+ "cs.AI",
12
+ "cs.CL"
13
+ ],
14
+ "label": "Original Paper",
15
+ "paper_cat": "LLM Fairness",
16
+ "gt": "The remarkable success of Large Language Models (LLMs) across diverse NLP tasks has revolutionized artificial intelligence (Touvron et al., 2023b; Bai et al., 2023; Jiang et al., 2023; OpenAI, 2023; Google, 2023). Despite their impressive generalization capabilities, LLMs encounter challenges in clinical contexts, primarily due to a deficiency in domain-specific knowledge and the intricacies of medical terminology. Bridging this gap, in this work, we introduce Hippocrates (named after the Ancient Greek \u201cFather of Medicine\u201d), a state-of- the-art, fully open-source framework designed to elevate LLMs\u2019 proficiency in medical reasoning. We publicly share our training data, complete training and evaluations codes, along with intermediate model checkpoints. Our framework marks an important step towards democratizing advancements in medical LLMs. Previous attempts to develop advanced medical LLMs yielded promising results by further training them (Labrak et al., 2024), supervised fine-tuning them (Li et al., 2023; Han et al., 2023; Toma et al., 2023), or both (Wu et al., 2023; Chen et al., 2023), via special medical- text corpus and medical instruction datasets. However, the data collection, pre-training, \u2217Corresponding author, [email protected] 1 arXiv:2404.16621v1 [cs.LG] 25 Apr 2024 Hippocrates Oct 2022 Apr 2023 Jul 2023 Aug 2023 Sep 2023 Nov 2023 Dec 2023 Mar 2024 30 40 50 60 MedQA Accuracy (%) BioGPT 1.5B (27.2) MedAlpaca 7B (36.6) LLaMA-2 7B (39.5) PMC-LLaMA 13B (46.3) Mistral 7B (48.9) Qwen 72B (53.4) Meditron 70B (58.5) Hippo- 7B (50.8) Hippo- 7B (59.9) Figure 1: The evolution of medical LLM performances on the MedQA dataset. Our 7B Hippo- and Hippo- models achieve 50.8% and 59.9% 5-shot accuracy, respectively. Hippo- outperforms all existing open models, including even those with 70B parameters. and finetuning stages may include considerable complexity, which makes reproducing, analyzing, and comparing the recent LLMs in that domain challenging. On the other hand, closed models, e.g. GPT4 (OpenAI, 2023), Gemini (Google, 2023), Med-PaLM (Singhal et al., 2023b), trained on closed-domain datasets make their results non-reproducible, not to mention substantial computational costs and further complicate the understanding of which components are crucial to the success of these advanced medical frameworks. In this work, we provide full access to our framework, from the data sources to the training configurations and the reproducible evaluation protocols. We conduct a detailed empirical analysis to identify the impact of various design elements on LLM performance, leading to a domain-adapted framework that demonstrates superior performance on multiple medical benchmarks. Based on these insights, we develop a step-by-step guide for the efficient training of medical-LLMs. Our research efforts yield two advanced 7B parameter models, Hippo- and Hippo- . As shown in Fig. 1, our models not only outperform existing 7B and 13B models by a significant margin but also deliver results on par with, and in some cases exceeding, those of 70B models. We argue that the development of a broad, varied collection of open models is crucial for deepening our knowledge of language models and enhancing their applicability across various domains. In addition, we adopt a novel strategy for structuring our instruction tuning (IT) dataset, dividing it into two distinct components: the General Instruction Dataset and the Evaluation Instruction Dataset. The General dataset is designed to enable unbiased assessments by avoiding overlap with downstream task data, marking a departure from previous method- ologies. On the other hand, the Evaluation Instruction Dataset, which incorporates training splits from evaluation benchmarks, facilitates direct comparisons with existing models (Chen et al., 2023). Notably, for the first time in the medical domain, our approach incorpo- rates preference learning from medical professionals into the model development process, utilizing RLAIF (Lee et al., 2023b) and GPT4 for annotating preferences. For model evaluation, we employ the well-established EleutherAI framework1 (Gao et al., 2021), conducting tests across a set of six varied medical downstream tasks. These include MedMCQA (Pal et al., 2022), PubmedQA (Jin et al., 2019), MedQA (Jin et al., 2021), and the USMLE-step1, USMLE-step2, and USMLE-step3. Leveraging this framework allows for straightforward replication of any LLM\u2019s results, eliminating the necessity for additional fine-tuning or the repetitive execution of evaluation scripts for each new model.",
17
+ "main_content": "Fig. 2 shows the overall workflow of the Hippocrates framework, starting from domainspecific pre-training and progressing through supervised fine-tuning and reinforcement 1https://github.com/EleutherAI/lm-evaluation-harness 2 Hippocrates Medical Knowledge Injection Medical Instruction Tuning Medical Preference Learning 298M Tokens Medical Guidelines, PMC-Patients, PubMedQA-train Language Modeling Predict next token Domain Adapted Model 696K Samples Flashcards, GenMedGPT, Platypus, HealthCareMagic, UMLS, Relations, Wikidoc, Patient-Info, MedicationQA \u2022 Query Answer 1 Answer 2 Prompt GPT-4 Preference Dataset Preference-Dataset Pre-training Data Instruction Data Model LLaMA2 ( ) 7B Mistral ( ) 7B Training Method 15K Samples Language Modeling { Instruction Finetuning Predict next token for responses Model Domain Adapted Model Training Method Supervised Fine-tuning Medical SFT Model { Reinforcement Learning Optimize for medical preferences Model Medical SFT Model Training Method DPO Medical Preference Model Evaluation Benchmark Data Inference Evaluation Framework MedMCQA MedQA PubMedQA USMLE-step1 USMLE-step2 USMLE-step3 Dataset Format Question + Answer Question + Answer Abs + Question + Answer Question + Answer Question + Answer Question + Answer Eleuther AI\u2019s Language Model Evaluation Harness Objective Log-Likelihood Evaluation Method Choose answer with the highest likelihood score Prompting In-Context Learning (ICL) strategies Approach Zero-Shot Few-Shot Chain-of-Thought (CoT) Methods MedMCQA-train, MedQA-train, PubMedQA-train General Eval Figure 2: An overview of the Hippocrates framework, illustrating the four critical phases including (1) continued pre-training, (2) supervised fine-tuning, (3) reinforcement learning from AI-generated feedback, and (4) the comprehensive evaluation pipeline. learning from AI-generated feedback to an extensive evaluation phase. This pipeline ensures our models are precisely tailored and rigorously tested for the medical domain. 2.1 Continued Pre-training Data A key aspect of our methodology is the integration of specialized medical knowledge through an extensive pre-training corpus, assembled from three specialized datasets: Medical Guidelines, PMC-Patients, and PubMedQA-contexts. The Medical Guidelines dataset comprises clinical practice guidelines, is used for training Meditron models (Chen et al., 2023). The PMC-Patients dataset (Zhao et al., 2023) consists of patient summaries extracted from case reports within PubMed Central (PMC). Additionally, the PubMedQA-contexts dataset is constructed by extracting the context field of each sample in the training split of the benchmark (Jin et al., 2019). Detailed descriptions and specifications of each dataset are available in Table 1. This extensive corpus, consisting of roughly 300M training tokens, forms the foundation of our models, ensuring their proficiency in navigating medical terminology and practices. We systematically assessed the impact of each dataset, both individually and in combination, to optimize our model\u2019s performance. Dataset Source License Size (MB) #Samples #Tokens Medical Guidelines Meditron Apache 2.0 License 382.6 37,970 96M PMC-Patients Pubmed Central CC BY-NC-SA 4.0 462.3 167,034 122M PubMedQA-train PubMedQA MIT License 290.2 211,269 80M Total 1,135.1 416,273 298M Table 1: Summary of the datasets used for continued pre-training, showing their sources, licence information and data statistics. 2.2 Supervised Fine-Tuning Data Developing effective medical LLMs requires blending domain-specific knowledge with sophisticated reasoning abilities. Previous models often utilized instruction data consisting of samples from the training or test sets of evaluation benchmarks. We also considered this setup, but additionally investigated an alternative involving generic medical data. Consequently, we constructed two sets of IT datasets: the General Instructions Data and the Evaluation Instructions Data. 3 Hippocrates General Instructions Data. This dataset aggregates more than 400K samples from nine different datasets, each derived from the instruction corpora of previous studies (Li et al., 2023; Han et al., 2023; Wu et al., 2023; Lee et al., 2023a). By excluding data from the training or test splits of downstream QA benchmarks, we aim to minimize bias and improve the model\u2019s generalization capabilities across different reasoning tasks. A pre-processing protocol was employed to remove superfluous words and web URLs, ensuring the data\u2019s quality and relevance. The detailed statistics of the dataset are presented in Table 2. Dataset Source License Size (MB) #Samples #Tokens Medical Flashcards MedAlpaca No commercialized use 18.8 33,955 3.9M GenMedGPT-5k ChatDoctor Apache 2.0 3.1 5,452 0.6M Open-Platypus Platypus CC BY-NC-SA 4.0 32.9 24,926 9.5M HealthCareMagic-100k ChatDoctor Apache 2.0 143.8 112,165 32.3M UMLS PMC-LLaMA CC BY 4.0 23.0 49,057 4.6M UMLS-Relations PMC-LLaMA CC BY 4.0 21.7 50,000 4.3M WikiDoc MedAlpaca CC BY-SA 4.0 11.0 10,000 2.6M WikiDoc-Patient-Info MedAlpaca CC BY-SA 4.0 3.7 5,942 0.8M MedicationQA PMC-LLaMA CC BY 4.0 0.4 552 0.1M Total 258.4 292,049 58.7M Table 2: Summary of General Instructions Data, describing the datasets used, their sources, together with their licence information, and size. Evaluation Instructions Data. This dataset was formed to examine the effects of including instruction samples directly from downstream tasks, a common practice in existing studies (Chen et al., 2023; Han et al., 2023; Wu et al., 2023). Instruction-response pairs were crafted using the training splits of various benchmarks, following the templates established in Meditron (Chen et al., 2023). We conducted a series of experiments to assess the distinct influence of each split on each task, both individually and collectively. The details about the Evaluation Instruction Data is given in Table 3. Dataset Source License Size (MB) #Samples #Tokens MedMCQA-train MedMCQA MIT License 114.4 182,822 24.9M MedQA-train MedQA MIT License 14.2 10,178 3.4M PubMedQA-train PubMedQA MIT License 76.3 211,269 95.9M Total 204.9 404,269 124.2M Table 3: Summary of Evaluation Instructions dataset, showing which training splits of the downstream tasks they are derived from and their data statistics. Beyond independently utilizing these datasets for supervised fine-tuning, we also examined the impact of individual datasets as well as the collective effect of combining them on model performance (refer to Appendix G). 2.3 Medical Preference Data Constructing a preference dataset typically involves generating diverse responses to identical queries using LLMs, which are subsequently evaluated by human annotators to identify the most accurate response. This method, however, can become prohibitively expensive, both in terms of computation for generating responses and the financial and time investments required for manual annotation. To circumvent these issues, we leveraged the iCliniq-10k dataset (Li et al., 2023), containing 10K authentic patient-doctor dialogues from icliniq.com. Each dialogue features a patient question accompanied by three different answers: one from an actual doctor, and the others from ChatGPT and ChatDoctor (Li et al., 2023). We conducted a thorough preprocessing of this dataset to eliminate any irrelevant or extraneous information. 4 Hippocrates Medical RLAIF. To reduce annotation costs, we adopted the RLAIF methodology (Lee et al., 2023b) in the medical domain for the first time. Utilizing detailed prompts based on patient inquiries from the iCliniq-10k dataset, we used GPT4 (OpenAI, 2023) to determine the optimal response based on predefined instructions. These instructions were derived from those used in qualitative assessments by medical professionals in Med-PaLM (Singhal et al., 2022; 2023a), with minor modifications. This annotation approach amounted to a cost of $120. The exact prompt structure for applying RLAIF with GPT4 is given in Appendix J, Figure 7. Validation. To test the reliability of GPT4\u2019s capacity to replicate medical expert annotations, we subjected 250 samples from our dataset to careful examination by two medical doctors, given them the same instructions that we provided in the prompt to GPT4. Our analysis revealed compelling results. When comparing GPT4\u2019s annotations against those of MD-1, GPT4 demonstrated a Kappa Score of 0.376, indicating moderate agreement, and an accuracy of 68.9%. The comparison with MD-2 showed even stronger results, with GPT4 achieving a Kappa Score of 0.672, suggesting substantial agreement, alongside an 83.6% accuracy. Interestingly, the inter-annotator agreement between the two doctors themselves yielded a Kappa Score of 0.416 and an accuracy of 70.8%, situating GPT4\u2019s performance firmly within the range of human expert variability. These findings not only affirm GPT4\u2019s aptitude for medical annotation but also highlight its potential to serve as a cost-effective alternative to human annotators in medical research and application settings. These findings suggest that GPT4 is capable of effectively mimicking medical doctor preferences, potentially eliminating the need for costly doctor annotations. Consequently, we compiled a comprehensive medical doctor preference dataset, consisting of 15,258 samples, to further align our LLMs with real-world clinical decision-making processes and enhance their accuracy in interpreting and responding to medical queries. 2.4 Training Methodology Our training strategy includes several phases: injection of medical knowledge through continued pre-training, domain-specific instruction tuning, and reinforcement learning from AI-generated feedback for improved alignment with medical experts. Employing the LLaMA Factory framework (hiyouga, 2023), we adhere to replicable and high-performance training standards. Moreover, we adopt the Low-Rank Adaptation (LoRA) technique Hu et al. (2021) for training efficiency and precision. LoRA enhances LLMs by selectively updating weights within additional trainable layers, thereby accelerating the training process, minimizing memory usage, and mitigating overfitting and catastrophic forgetting. Our foundational models, LLaMA2 7B (Touvron et al., 2023b) and Mistral 7B (Jiang et al., 2023), are selected based on their robust performance across medical benchmarks, demonstrating their capacity to excel without extensive training modifications. The zero-shot performances of these generic baseline models is presented at the beginning of Table 5. Continued pre-training. To equip our base LLMs with domain-specific medical expertise, we extend their pre-training on a carefully curated medical text corpus as described in Section 2.1. This stage employs traditional language modeling, focusing on next-token prediction. During this phase, both models undergo continued pre-training using LoRA, specifically adapting the fully connected layers. The parameters for LoRA are carefully set, with the rank (r) at 8 and alpha (\u03b1) at 16, to optimize learning. We use the AdamW optimizer and adjust the learning rate using a cosine scheduling, starting from an initial value of 1e-4. The batch size per device was initialized to be 8, with gradient accumulations of 2, culminating in an effective global batch size of 16, and the models are trained for a single epoch. The rationale and empirical support for our choices regarding the dataset, LoRA configurations, and overall optimization strategy are comprehensively analyzed in Appendix G. Supervised Finetuning. After continued pre-training, models undergo fine-tuning with an Instruction Tuning (IT) dataset to closely mirror medical directives, aligning model 5 Hippocrates outputs with clinical requirements. We have tested with the datasets described in Section 2.2 and found that MedQA-train IT works better than the other options. This fine-tuning phase also employs LoRA to all fully connected layers with both rank (r) and alpha (\u03b1) set to 32 for balanced efficiency and computational overhead. AdamW optimizer is used with a learning rate of 1e \u22124. To prevent model overfitting, loss calculation focuses solely on the responses. The training spanned 3 epochs with a batch size of 8 per-device and gradient accumulation set to 2. We also conducted experiments on direct fine-tuning of the base LLMs to evaluate the impact of continued pre-training (see Section 4.1) and performed a comprehensive analysis on dataset splits and fine-tuning hyperparameters (see Appendix G). Medical Preference Learning. Finally, the instruction-tuned models are further trained with a recent and popular technique called direct preference optimization (DPO) (Rafailov et al., 2023). In DPO, reinforcement learning is bypassed which allows for direct optimization based on preference data. Unlike RLHF, the responses in DPO need not be derived from the LLM being optimized. Central to DPO is the development of a loss function that evaluates the likelihood of a preferred response over a less preferred one, steering the LLM towards this goal. This makes DPO more stable and significantly reduces computational demands. The outcome of all this are our medical LLMs, named Hippoand Hippo, built upon the pre-trained LLaMA2 7B and Mistral 7B models. These models were refined through a comprehensive process that included continued pre-training and/or instruction tuning using our carefully curated medical datasets. Following this, we also explored the impact of aligning the models with clinical preferences by conducting further training on medical preference data. 3 Main Results For an objective evaluation of domain-specific knowledge and reasoning capabilities in LLMs, a detailed and fair evaluation framework is essential. In alignment with methodologies adopted in prior research (Singhal et al., 2022; Han et al., 2023; Wu et al., 2023; Toma et al., 2023; Singhal et al., 2023a; Chen et al., 2023), we selected six widely recognized medical question-answering datasets, namely MedMCQA (Pal et al., 2022), MedQA (Jin et al., 2021), PubMedQA (Jin et al., 2019) and USMLE Step 1-3 (Han et al., 2023), to assess models performances (See Table 4 for details). Performance metrics were derived through the use of the EleutherAI evaluation framework (Gao et al., 2021), ensuring a standardized approach to measuring model effectiveness in handling domain-specific queries. Dataset Source Format #Samples #Choices License MedMCQA-test MedMCQA Question + Answer 4,183 4 MIT MedQA-test MedQA Question + Answer 1,273 5 MIT PubMedQA-test PubMedQA Abstract + Question + Answer 1,000 3 MIT USMLE-step1 USMLE Question + Answer 94 5 MIT USMLE-step2 USMLE Question + Answer 109 6 MIT USMLE-step3 USMLE Question + Answer 122 5 MIT Table 4: Summary of the evaluation benchmark datasets, describing the format, the number of test samples, the number of choices, and the licence info. 3.1 Experimental Setup In our evaluation, we included a spectrum of leading LLMs, spanning general and medical LLMs, varying in scale from 1.5B to an advanced 70B parameters. Here we report the performances of our top-performing models for an accurate comparison. To ensure a fair and easily replicable assessment of these medical models, we utilized the Eleuther AI Language Model Evaluation Harness (Gao et al., 2021), a unified evaluation framework specifically designed for evaluating generative LLMs. This framework also serves as the evaluation tool for the Open LLM Leaderboard2 (Beeching et al., 2023). 2https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard 6 Hippocrates Model MedMCQA MedQA PubmedQA USMLE-1 USMLE-2 USMLE-3 Avg. 0-shot/5-shot 0-shot/5-shot 0-shot/5-shot 0-shot/5-shot 0-shot/5-shot 0-shot/5-shot 0-shot/5-shot Gemma 2b 26.2/27.7 27.8/30.6 59.1/60.8 20.2/16.0 18.4/30.3 24.6/20.5 29.4/31.0 LLaMA-2 7b 34.4/39.4 29.3/39.5 72.3/72.4 18.1/22.3 22.9/33.0 27.1/32.0 34.0/39.8 Falcon 7b 30.5/31.8 27.9/31.0 65.3/64.4 18.1/25.5 26.6/20.2 23.8/25.4 32.0/33.0 Vicuna 7b 35.9/39.0 35.1/41.2 70.9/74.5 25.5/31.9 27.5/31.2 33.6/35.3 38.1/42.2 Mistral 7b 39.3/48.5 36.8/48.9 76.3/77.8 24.5/50.0 31.2/42.2 27.9/43.4 39.3/51.8 BioMedLM 32.2/29.6 29.3/30.6 55.2/55.2 15.9/22.3 19.3/18.4 23.0/31.2 25.9/31.2 BioGPT-Large 33.1/30.1 31.3/27.2 60.1/47.7 22.3/19.2 22.0/14.7 23.0/23.0 32.0/27.0 MedAlpaca 7b 35.8/37.5 36.1/36.6 73.2/70.6 22.3/27.7 27.5/32.1 29.5/37.7 37.4/40.4 PMC-LLaMA 7b 31.5/33.0 28.0/29.5 66.5/68.4 21.3/19.2 23.9/19.3 22.1/22.1 32.2/31.9 Meditron 7b 34.0/38.2 32.0/39.3 71.6/75.7 16.0/29.8 25.7/30.3 23.8/32.0 33.9/40.9 Bio-Mistral 7b 36.4/42.4 35.0/42.1 73.4/75.1 24.5/28.7 27.5/34.9 27.9/44.3 37.5/31.9 LLaMA-2 13b 38.2/43.9 34.3/43.3 75.9/71.9 20.2/38.3 22.0/29.4 23.0/38.5 35.6/40.9 Vicuna 13b 39.7/44.3 35.9/45.9 75.6/75.0 24.5/40.4 26.6/35.8 23.8/46.7 37.7/44.6 MedAlpaca 13b 32.5/33.3 31.8/34.3 72.6/72.5 24.5/23.4 24.5/26.6 30.3/29.5 36.0/44.2 PMC-LLaMA 13b 39.1/44.5 37.8/46.3 76.8/76.5 30.9/35.1 22.9/36.7 26.2/29.5 39.0/44.8 LLaMA-2 70b 42.8/ 52.0 44.9/56.1 73.2/77.8 31.9/59.6 44.0/57.8 44.3/53.3 46.8/59.4 Qwen 72b 50.5/59.2 47.7/53.4 77.2/76.8 45.7/67.0 43.1/56.9 38.5/61.5 50.5/62.5 ClinicalCamel 70b 43.7/53.4 45.5/58.5 73.6/77.6 40.4/59.6 43.1/60.6 42.6/60.7 48.2/61.7 Meditron 70b 43.4/51.9 44.9/58.5 76.4/80.0 35.1/57.5 41.3/56.9 37.7/59.8 46.5/60.8 Hippo7b 54.3/53.9 50.6/50.8 74.7/76.6 46.8/40.4 41.3/39.5 50.0/43.4 53.0/50.8 Hippo7b 49.7/51.8 59.2/59.9 77.1/78.1 60.6/61.7 66.1/64.2 56.6/56.6 61.6/62.1 Table 5: Comparative analysis of generic and medical LLMs across downstream medical tasks in 0-shot and 5-shot learning settings. The best and the second-best performance are highlighted in bold and underline, respectively. LM-Evaluation-Harness operates on a Log-Likelihood objective, which calculates the negative log-likelihood for each potential answer in response to a given query. The answer is then chosen based on the highest likelihood score, indicating it as the most probable choice. During evaluation, each prompt includes a question and corresponding choices, separated by a new line. For PubMedQA, the abstract provides contextual grounding for the model\u2019s decision-making process. Examples of these prompts are provided in the Appendix I. 3.2 Results We present a comparative analysis of our novel models, Hippoand Hippo, against a set of established base LLMs and medical-specific LLMs, in Table 5. Our evaluation includes both zero-shot and few-shot (specifically, 5-shot) learning scenarios. Demonstrating superior performance, our Hippo models outperform traditional pretrained models in zero-shot evaluations and maintain their superiority in the 5-shot context. Remarkably, Hippoand Hipponot only beat models with 7 billion and 13 billion parameters but also exceed the capabilities of those with 70 billion parameters. This outstanding performance highlights the adaptability and precision of our models, showing their remarkable ability to significantly boost prediction accuracy with minimal input examples. 4 Analysis 4.1 Contribution of Each Training Stage Hippo. Our evaluation methodology for the LLaMA2 7B model covers successive training stages: Continued Pre-training (CP), Instruction Tuning (SFT), and Direct Preference Optimization (DPO). As listed in Table 6, the base model LLaMA2 7B initially achieves an average accuracy of 34.0 across benchmarks. The CP stage marginally increases accuracy to 34.4, indicating initial benefits from domain-focused continued pre-training. The subsequent introduction of SFT yields a substantial performance boost to an average accuracy of 50.3, demonstrating the critical role of customized instruction in enhancing the model\u2019s capabilities in understanding and answering medical queries. Integrating CP with SFT 7 Hippocrates Model MedMCQA MedQA PubmedQA USMLE-1 USMLE-2 USMLE-3 Avg. LLaMA2 7b 34.4 29.3 72.3 18.1 22.9 27.1 34.0 + CP 34.6 31.9 72.8 20.2 25.7 21.3 34.4 + SFT 52.7 49.7 75.7 37.2 42.2 44.3 50.3 + CP + SFT 54.3 50.6 74.7 46.8 41.3 50.0 53.0 + CP + SFT + DPO 54.4 50.4 74.8 46.8 39.5 49.2 52.5 + CP + SFT + DPO + CoT 54.0 50.3 73.3 48.9 43.7 45.1 52.6 Mistral 7b 39.3 36.8 76.3 24.5 31.2 27.9 39.3 + CP 40.5 37.2 74.9 29.8 33.9 29.5 41.0 + SFT 49.7 59.2 77.1 60.6 66.1 56.6 61.6 + CP + SFT 51.5 60.9 76.5 55.3 65.1 57.4 61.1 + CP + SFT + DPO 49.3 57.3 77.3 56.4 62.4 54.9 59.6 + CP + SFT + DPO + CoT 51.0 60.9 63.5 59.6 59.6 63.9 59.8 Table 6: Hippoand Hippo: Analysis of Continued Pretraining, Instruction Tuning, and Direct Preference Optimization. This table demonstrates the incremental impact of Continued Pretraining (CP) on medical text data, Instruction Tuning (SFT), and Direct Preference Optimization (DPO) on the zero-shot capabilities of the LLaMA2 7B and Mistral 7B models across a range of medical benchmarks, including MedMCQA, MedQA, PubmedQA, and the USMLE series. The results, aggregated and individual, underline the significance of each methodological advancement in enhancing the model\u2019s proficiency in interpreting and responding to complex medical queries, thereby providing a granular view of performance improvements at each stage of model optimization. further improves this performance to 53.0, highlighting the combined value of domain knowledge and specific instruction tuning. The final DPO stage slightly decreases the model\u2019s performance to 52.5, albeit with a slight increase in accuracy for MedMCQA and PubMedQA, illustrating DPO\u2019s refined impact on model preference alignment. This sequence delineates the incremental enhancements attributable to each training phase, with SFT marking a pivotal improvement. The composite model, LLaMA2 + CP + SFT, is thus designated as Hippofor its distinguished performance across our benchmarks. Hippo. Following the approach for Hippo, the training evolution for the Mistral 7B model reveals gradual improvement in the model\u2019s proficiency in medical questionanswering. Initial results from the baseline Mistral 7B model, as shown in Table 6, show an average benchmark accuracy of 39.3. Implementing CP slightly improves this to 41.0, reflecting the positive yet modest impact of domain-specific continued pre-training. The pivotal SFT stage significantly raises the performance, achieving an average accuracy of 61.6, emphasizing the critical role of customized instruction in enhancing the model\u2019s interpretative and response capabilities for medical inquiries. Interestingly, combining CP and SFT results in a slight reduction to 61.1, suggesting a complex interaction between domain pre-training and instruction tuning. The subsequent application of DPO slightly lowers the overall score to 59.6, similar to the pattern observed for Hippo, with targeted performance adjustment. Based on comprehensive analysis, Mistral 7b + SFT is selected to represent Hippo, credited for its exceptional performance across all benchmarks. 4.2 Chain-of-Thought (CoT) Prompting The CoT prompting technique (Wei et al., 2023) enhances an LLM\u2019s ability to tackle complex queries by guiding it to articulate intermediate reasoning steps. This method improves the model\u2019s responses by structuring its problem-solving process. In our study, we applied CoT prompting for in-context learning, adopting a slightly altered instruction utilized in (Pal & Sankarasubbu, 2024b): \u201dThe following is a multiple choice question about medical knowledge. Solve it in a step-by-step fashion, starting by summarizing the available information. Output a single option from the four options as the final answer.\u201d. However, the application of CoT prompting in our experiments with downstream medical tasks did not consistently enhance our models\u2019 performance, as shown in Table 6. 8 Hippocrates 4.3 Influencing Examples We explore the application of Influence Functions to understand the behavior of LLMs (Grosse et al., 2023) \u2013 in our context, particularly those trained with domain-specific datasets like medical text. This technique quantifies the effect of single training instances on the model\u2019s predictions, improving the transparency of the AI models. This is increasingly important as the field of Explainable AI (XAI) grows to make AI systems more interpretable and accountable. However, the complexity of LLMs, which process vast amounts of data, highlights the necessity for efficient methods to perform this analysis. We believe incorporating this tool to our evaluation framework will prove useful for future studies. In the supplementary material (Appendix H), we present our analysis results, highlighting the most and least influential training examples for a MedQA dataset question and its model response. Notably, the most influential example shares overlapping medical concepts, in contrast to no shared concepts with the least influential training example. 4.4 Uncertainty Quantification In our study, we conducted an uncertainty quantification experiment on Hippoto understand its performance on the MedMCQA, MedQA, and PubMedQA datasets, as shown in Fig.3. Our findings reveal that our model consistently assigns higher probabilities to questions it answers correctly across all datasets, suggesting an ability to self-calibrate its certainty. The model\u2019s confidence is notably higher on MedMCQA, possibly reflecting the dataset\u2019s relative simplicity. In contrast, its confidence on PubMedQA is comparatively lower, likely due to the dataset\u2019s complexity. Additionally, the model\u2019s confidence changes with different training stages; CPT leads to more conservative estimates, SFT boosts confidence, and adding DPO leads to variable confidence, with noticeable effects in MedMCQA and MedQA. These outcomes emphasize a complex relationship between training approaches and confidence calibration in the model. 0 1 2 3 Density MedMCQA CPT CPT + SFT CPT + SFT + DPO 0 1 2 3 Density MedQA 0.2 0.4 0.6 0.8 1.0 0 1 2 3 Density PubMedQA 0.2 0.4 0.6 0.8 1.0 0.2 0.4 0.6 0.8 1.0 Correct Incorrect Figure 3: Uncertainty quantification for our best-performing 5-shot Hippomodel., where we plot the probability distributions assigned by the model to both correct predictions and incorrect predictions on the MedMCQA, MedQA, and PubMedQA datasets. We present additional negative results in Appendix J, which we anticipate will be beneficial for the community. By sharing these findings, we aim to encourage further investigations. 5 Conclusion In this study, we have introduced Hippocrates, a comprehensive and open-source framework tailored for the medical domain, addressing a wide array of challenges faced by medical LLMs. We provide openly available datasets and establish an intuitive benchmark using the LM-Evaluation-Harness tool. We also introduce Hippoand Hippo, two 7B models demonstrating superior performance. Our work makes substantial contributions to the field by combining in-depth empirical research with a structured training methodology, offering invaluable insights and tools for future research not only in healthcare but in any area requiring domain-specific adaptation of LLMs. 9 Hippocrates"
18
+ }
intro_8K/test_introduction_short_2404.16627v1.json ADDED
@@ -0,0 +1,16 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16627v1",
3
+ "title": "Incorporating Lexical and Syntactic Knowledge for Unsupervised Cross-Lingual Transfer",
4
+ "abstract": "Unsupervised cross-lingual transfer involves transferring knowledge between\nlanguages without explicit supervision. Although numerous studies have been\nconducted to improve performance in such tasks by focusing on cross-lingual\nknowledge, particularly lexical and syntactic knowledge, current approaches are\nlimited as they only incorporate syntactic or lexical information. Since each\ntype of information offers unique advantages and no previous attempts have\ncombined both, we attempt to explore the potential of this approach. In this\npaper, we present a novel framework called \"Lexicon-Syntax Enhanced\nMultilingual BERT\" that combines both lexical and syntactic knowledge.\nSpecifically, we use Multilingual BERT (mBERT) as the base model and employ two\ntechniques to enhance its learning capabilities. The code-switching technique\nis used to implicitly teach the model lexical alignment information, while a\nsyntactic-based graph attention network is designed to help the model encode\nsyntactic structure. To integrate both types of knowledge, we input\ncode-switched sequences into both the syntactic module and the mBERT base model\nsimultaneously. Our extensive experimental results demonstrate this framework\ncan consistently outperform all baselines of zero-shot cross-lingual transfer,\nwith the gains of 1.0~3.7 points on text classification, named entity\nrecognition (ner), and semantic parsing tasks. Keywords:cross-lingual transfer,\nlexicon, syntax, code-switching, graph attention network",
5
+ "authors": "Jianyu Zheng, Fengfei Fan, Jianquan Li",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "cs.CL",
9
+ "cats": [
10
+ "cs.CL"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Knowledge AND Graph",
14
+ "gt": "Unsupervised cross-lingual transfer refers to the process of leveraging knowledge from one lan- guage, and applying it to another language without explicit supervision (Conneau et al., 2019). Due to the free requirement of the labeled data in tar- get language, it is highly preferred for low-resource scenarios. Recently, unsupervised cross-lingual transfer has been widely applied in various natural language processing (NLP) tasks, such as part-of- speech (POS) tagging (Kim et al., 2017; de Vries et al., 2022), named entity recognition (NER) (Fe- tahu et al., 2022; Xie et al., 2018), machine reading comprehension (Hsu et al., 2019; Chen et al., 2022), and question answering (QA) (Nooralahzadeh and Sennrich, 2023; Asai et al., 2021). The success of unsupervised cross-lingual trans- fer can be attributed to its ability to exploit connec- tions across languages, which are reflected in vari- ous linguistic aspects such as lexicon, semantics, and syntactic structures. Consequently, many stud- ies have sought to enhance models by encouraging them to learn these cross-lingual commonalities. For instance, in the lexical domain, Qin et al. (2021) utilize bilingual dictionaries to randomly replace certain words with their translations in other lan- guages, thereby encouraging models to implicitly align representations between the source language and multiple target languages. In the area of syntax, several works have developed novel neural archi- \u2217Equal Contribution \u2020 Jianquan Li is the corresponding author tectures to guide models in encoding the structural features of languages. Ahmad et al. (2021), for example, proposes a graph neural network (GNN) to encode the structural representation of input text and fine-tune the GNN along with the multilingual BERT (mBERT) for downstream tasks. Both lexical and syntactic approaches facilitate the alignment of linguistic elements across different languages, thereby enhancing the performance of cross-lingual transfer tasks. However, language is a highly intricate system (Ellis and Larsen-Freeman, 2009), with elements at various levels being interconnected. For exam- ple, sentences are composed of phrases, which in turn are composed of words. In cross-lingual transfer, we hypothesize that merely guiding mod- els to focus on a single linguistic aspect is inade- quate. Instead, by simultaneously directing models to learn linguistic knowledge across diverse levels, their performance can be further improved. Table 1 presents some example sentences extracted from the XNLI dataset (Conneau et al., 2018). These parallel sentence pairs demonstrate that the multi- lingual model makes incorrect predictions for sen- tence pairs in the target languages (French and Ger- man) when only one aspect of linguistic knowledge, such as lexical or syntactic knowledge, is incorpo- rated. However, when both types of knowledge are integrated into the model, the correct prediction is obtained. Despite this, most previous studies have focused on either syntactic or lexical information alone, without considering the integration of both types of information. arXiv:2404.16627v1 [cs.CL] 25 Apr 2024 Lang Premise(P)/Hypothesis(H) Label +Lex +Syn Ours fr P:Votre soci\u00e9t\u00e9 charitable fournit non seulement de les services sociaux communautaires efficaces \u00e0 les animaux et les personnes, mais sert \u00e9galement \u00e9galement de fourri\u00e8re pour la Ville de Nashua. H:La soci\u00e9t\u00e9 humaine est le refuge pour animaux de Nashua. entali contra contra entail de P:Ihre humane Gesellschaft erbringt nicht nur effektive gemeinschaftlich-soziale Dienstleistungen f\u00fcr Tiere und ihre Menschen, sondern dient auch als Zwinger der Stadt Nashua. H:Die Humane Society ist Nashuas Tierheim. entail contra contra entail en P:Your humane society provides not only effective community social services for animals and their people , but also serves as the pound for the City of Nashua . H:The humane society is Nashua\u2019s animal shelter . Table 1: The parallel sentence pairs in French and German from XNLI(Conneau et al., 2018), which are translated from English. Each sentence pair consist of a Premise sentence(P) and a Hypothesis sentence(H). The \"Label\" column indicates the relationship between each sentence pair, which can be contradiction(contra), entailment(entail) or neutral. \"+Lex\" and \"+Syn\" represent the prediction results from the multilingual models infused with lexical and syntactic knowledge, respectively. The \"ours\" column shows the results of integrating both types of knowledge into the model. Compared to the other two methods, our method can accurately predict the relationship between each sentence pair. In this work, we aim to enhance unsupervised cross-lingual transfer by integrating knowledge from different linguistic levels. To achieve this, we propose a framework called \"Lexicon-Syntax En- hanced Multilingual BERT\" (\"LS-mBERT\"), based on a pre-trained multilingual BERT model. Specifi- cally, we first preprocess the input source language sequences to obtain each word\u2019s part-of-speech information and dependency relationships between words in each sentence. Then, we replace some words in the sentence with their translations from other languages while preserving the established dependency relationships. Furthermore, we em- ploy a graph attention network(Veli\u010dkovi\u0107 et al., 2017) to construct a syntactic module, the output of which is integrated into the attention heads of the multilingual BERT. This integration guides the entire model to focus on syntactic structural rela- tionships. Finally, during the fine-tuning process, we simultaneously train the multilingual BERT and the syntactic module with the pre-processed text. As a result, our framework enables the multilingual BERT to not only implicitly learn knowledge related to lexical alignment but also encode knowledge about syntactic structure. To validate the effectiveness of our framework, we conduct experiments on various tasks, including text classification, named entity recognition (ner), and semantic parsing. The experimental results show that our framework consistently outperforms all baseline models in zero-shot cross-lingual trans- fer across these tasks. For instance, our method achieves the improvement of 3.7 points for mTOP dataset. Our framework also demonstrates sig- nificant improvements in generalized cross-lingual transfer. Moreover, we examine the impact of im- portant parameters, such as the replacement ra- tio of source words, and languages for replace- ment. To facilitate further research explorations, we release our code at https://github.com/ Tian14267/LS_mBert.",
15
+ "main_content": "Cross-lingual transfer is crucial in the field of natural language processing (NLP) as it enables models trained on one language to be applied to another. To enhance performance in transfer tasks, numerous studies focus on addressing the characteristics of various languages and their relationships. 2.1. Incorporating Lexical Knowledge for Cross-lingual Transfer A group of studies aims to incorporate lexical alignment knowledge into cross-lingual transfer research (Zhang et al., 2021a; Wang et al., 2022; Qin et al., 2021; Lai et al., 2021). For example, Zhang et al. (2021a) and Wang et al. (2022) employ bilingual dictionaries to establish word alignments and subsequently train cross-lingual models by leveraging explicit lexical associations between languages. Other methods (Qin et al., 2021; Lai et al., 2021) involve substituting a portion of words in a sentence with their equivalents from different languages, a technique commonly known as \"codeswitching.\" By increasing the diversity of input text, these approaches promote implicit alignments of language representations. However, this group of studies mainly offers insights into lexical translation across languages, while neglecting the learning of language-specific structural rules. 2.2. Incorporating Syntactic Knowledge for Cross-lingual Transfer Another research category focuses on integrating syntactic knowledge for cross-lingual transfer (Ahmad et al., 2021; Yu et al., 2021; Zhang et al., 2021b; He et al., 2019; Cignarella et al., 2020; Xu et al., 2022; Shi et al., 2022; Wang et al., 2021). Many studies in this group (Ahmad et al., 2021; Wang et al., 2021) develop graph neural networks to encode syntactic structures, a category to which our work also belongs. Taking inspiration from Ahmad et al. (2021), we adopt a similar architecture, specifically using a graph attention network to encode syntactic knowledge. Other methods (Cignarella et al., 2020; Xu et al., 2022) extract sparse syntactic features from text and subsequently incorporate them into the overall model. Although these approaches consider the relationships between language elements, they frequently overlook the alignments across languages, which impedes the effective transfer of linguistic elements and rules between languages. Consequently, we combine the strengths of these two categories of approaches. First, we replace the input sequence with translated words from other languages, which aids in guiding the entire model to acquire implicit alignment information. Then, we introduce an additional module to assist the model in encoding syntax. 3. Methodology In this section, we provide a detailed introduction to our framework \"LS-mBERT\", as illustrated in Figure 1. Our objective is to enhance the crosslingual transfer capabilities of multilingual BERT (mBERT) by incorporating both lexical and syntactic knowledge. Given an input sequence, we first pre-process it using a part-of-speech tagger and a universal parser(Section 3.1). This yields the part-of-speech tag for each word and dependency relationships among words in the sequence. To enable mBERT to implicitly encode word alignment information, we substitute some words with their translations from other languages using a code-switching technology (Section 3.2). Moreover, to guide mBERT in attending to syntactic relationships, we construct a graph attention network (GAT), introduced in Section 3.3. The output of the graph attention network is then used as input to the attention heads within BERT, effectively biasing attention information between words. Finally, to integrate both syntactic and lexical knowledge, we pass the code-switched text into both the GAT network and mBERT, which are trained simultaneously (Section 3.4). 3.1. Pre-processing Input Sequence The initial step involves pre-processing the input data to obtain prior knowledge for subsequent training. As our framework incorporates syntactic knowledge, we opt for an off-the-shelf parser with high accuracy to process the input text. In this case, we employ the UDPipe toolkit(Straka and Strakov\u00e1, 2017) to parse the inputs sentences, and Stanza(Qi et al., 2020) to annotate the part-of-speech information of each word. By utilizing both tools, given a sentence, we can obtain the dependency relationships between words and their part-of-speech information, which are then utilized to provide syntactic knowledge and enhance word representations, respectively. 3.2. Code-switching for Text (lexical knowledge) As our objective is to improve unsupervised crosslingual transfer, introducing explicit alignment signals would be inappropriate. Therefore, we employ an implicit strategy to guide the entire model to encode word alignment information. Inspired by the work of Qin et al. (2021), we opt for the codeswitching strategy. Specifically, we first randomly select a proportion \u03b1 of words within each source sentence. Then, for each selected word, we use a high-quality bilingual dictionary to substitute it with a corresponding translation from another target language. This method not only promotes the implicit alignment of representations across diverse languages within our model, but also enhances the model\u2019s robustness when processing input text. 3.3. Graph Attention Network (syntactic knowledge) To guide mBERT in acquiring syntactic knowledge better, we construct an external syntactic module by referring to the method introduced by Ahmad et al. (2021). The overview of this module is displayed in Figure 2. Given that there are n tokens in the input sequence, we first represent each token by combining its embedding representation with part-of-speech (POS) information. The representation of the i-th token can be calculated: xi = ciWc + posiWpos, where ci and posi represent the token representation and the part-ofspeech representation of the i-th token, respectively; while Wc and Wpos denote the token parameter matrix and the part-of-speech parameter matrix. Then, the encoded sequence s\u2032 = [x1, x2, \u00b7 \u00b7 \u00b7 , xn] is passed into the subsequent syntactic module, which is designed with a graph attention network (GAT) (Veli\u010dkovi\u0107 et al., 2017). The GAT module comprises a total of L layers, each with m attention heads. These attention heads play a crucial role in generating representations for individual tokens by attending to neighboring tokens in the graph. Each attention in GAT operates as follows: O = Attention(T, T, V, M), wherein T denotes the query and key matrices, and V represents the value matrix. Besides, M signifies the mask matrix, determining whether a pair of words in the dependency tree can attend each other. Notably, the relationships between words in the attention matrix are modeled based on the distances between words in codeswitching part-of-speech tagging dependency parsing UDPipe bilingual dictionary guidelines (Root) mean needed new the iron donors are more nsubj det amod compound nsubj aux amod ccomp leitlinien (Root) mean necesitaba new the fer donors are \u66f4\u591a\u7684 nsubj det amod compound nsubj aux amod ccomp GAT network The new iron guidelines mean more donors are needed Label Multilingual BERT The_DET new_ADJ iron_NOUN guidelines_NOUN mean_VERB more _ A D J donors _ N O U N are_AUX needed_VERB The_DET new_ADJ fer_NOUN leitlinien_NOUN mean_VERB \u66f4\u591a \u7684_ADJ donors_NOUN are_AUX necesitaba_VERB codeswitching Figure 1: An overview of lexicon-syntax enhanced multilingual BERT (\"LS-mBERT\"). An example sentence is provided to explain how this framework works. To introduce lexical alignment knowledge, we utilize bilingual dictionaries to randomly replace some words in the sentence with the equivalent words from other languages (pink for German, green for Spanish, light blue for Chinese, and orange for French). Then, an graph attention network (GAT) is developed to encode the syntactic structure of this sentence. The output representation of GAT is sent to the attention heads in multilingual BERT for guiding them to focus on the language-specific structures. the dependency tree, rather than the positional information within the word sequence. Subsequently, the resulting representations produced by all attention heads are concatenated to form the output representations for each token. Finally, the output sequence from the final layer can be denoted as Y = [y1, y2, \u00b7 \u00b7 \u00b7 , yn], where yi represents the output representation for the i-th token. To maintain the lightweight nature of the architecture, certain elements in GAT have been excluded. Specifically, we do not employ feed-forward sub-layers, residual connections, or positional representations. We found that these modifications do not result in a significant performance gap. 3.4. Summary of the Framework: Lexicon-syntax Enhanced Multilingual BERT In this subsection, we provide an overview of our \"LS-mBERT\" framework, as illustrated in Figure 1. We first select multilingual BERT (mBERT) as the base model. Then, we process the input sequence using the code-switching strategy in Section 3.2, resulting in the code-switched sequence s\u2032. It is important to note that despite some words in each sentence being replaced with other languages, the original dependency relationships between words are still preserved in s\u2032. Next, we feed the codeswitched text into both mBERT and the syntactic module (GAT), facilitating the fusion of the two types of knowledge. Furthermore, this step guides the entire model to better align different languages within the high-dimensional vector space during training. After GAT processes the code-switched sequence, the output from the final layer is utilized to bias the attention heads of mBERT. The calculation process can be described as follows: O = Attention(Q + Y W Q l , K + Y W K l , V ), where Q, K, and V represent the query, key, and value matrices, respectively; While W Q l and W K l are new parameters to learn for biasing the query and key matrices. t1 c1 pos1 + x1 t2 c2 pos2 + x2 ... ... ... tn-1 cn-1 posn-1 + xn-1 tn cn posn + xn ... ... + + + + + m \u00d7 L layers y1 y2 yn-1 yn ... input seq token emb pos emb att layer Figure 2: The architecture of graph attention network (Ahmad et al., 2021; Veli\u010dkovi\u0107 et al., 2017). Each input token is represented by combining its token embedding and part-of-speech embedding. Each attention head within the graph attention network(GAT) generates a representation for each token embedding by attending to its neighboring tokens in the dependency graph. Next, the resulting representations are concatenated to form the output representation for each token. Finally, we can obtain the representations of the output sequence embeddings from the final layer of GAT. 4. Experiments 4.1. Experimental Settings As above mentioned, we use UDPipe (Straka and Strakov\u00e1, 2017) and Stanza (Qi et al., 2020) for parsing sentences and obtaining words\u2019 part-ofspeech information in all languages, and employ MUSE (Lample et al., 2018) as the bilingual dictionary for word substitution. For all tasks, we identify the optimal parameter combinations by searching within the candidate sets. The learning rate is set to 2e-5, utilizing AdamW as the optimizer. The batch size is 64, and the maximum length for input sequences is 128 tokens. For code-switching, we vary the replacement ratio (\u03b1) from 0.3 to 0.7 with a step of 0.1. For the GAT network, we adopt the identical parameter values as employed in the work of Ahmad et al. (2021). Specifically, we set L to 4 and k to 4. 4.2. Tasks Our framework is evaluated on the following tasks, using English as the source language. Some statistics are summarized in Table 2, along with the detailed descriptions provided below. Text Classification. Text Classification is a task that assigns predefined categories to open-ended text. In our experiment, we utilize two publicly available dataset: XNLI and PAWS-X. In XNLI (Conneau et al., 2018), models need to predict whether a given pair of sentences is entailed, contradicted, or neutral; In PAWS-X (Yang et al., 2019), models are required to determine whether two given sentences or phrases convey the same meaning. When implementing the two tasks, to establish connections between the dependency trees of the two sentences, we introduce two edges from the [CLS] token to the root nodes. Subsequently, we apply the code-switching technique to randomly replace certain words in the sentence pairs. Named Entity Recognition. Named Entity Recognition (NER) is a task that involves the automatic identification and categorization of named entities. In our experiment, we employ the Wikiann (Pan et al., 2017) dataset. Wikiann consists of Wikipedia articles annotated with person, location, organization, and other tags in the IOB2 format. Our method is evaluated across 15 languages. To ensure that the models can obtain complete entity information, we exclusively substitute words that do not constitute named entities during the code-switching process. Task-oriented Semantic Parsing. In this task, the models are required to determine the intent of the utterance and then fill the relevant slots. The dataset for the experiment is mTOP (Li et al., 2021), which is an almost parallel corpus, containing 100k examples in total across 6 languages. Our experiments cover 5 languages. 4.3. Baselines We choose the following methods as baselines to compare: \u2022 mBERT. We exclusively utilize the multilingual BERT model to perform zero-shot crosslingual transfer for these tasks. \u2022 mBERT+Syn. A graph attention network (GAT) is integrated with multilingual BERT, and these two components are jointly trained for all tasks. \u2022 mBERT+Code-switch. The multilingual BERT model is fine-tuned with the codeswitched text across various languages. 5. Results and analysis 5.1. Cross-Lingual Transfer Results The main experimental results are displayed in Table 3. Our method consistently demonstrates superior performance across all tasks compared to other baselines. This indicates our method\u2019s effectiveness for cross-lingual transfer, achieved through the incorporation of lexical and syntactic knowledge. Especially for the tasks Wikiann and mTOP, our method exhibits a significant improvement, with an increase of 2.2 and 3.7 points, respectively, when compared to the baseline with the best performance. In addition, since code-switching technique blends words from various language, we calculate the results across the languages excluding English, as shown in the column \"AVG/en\" in Table 3. We find that the performance gap between our method and each baseline in most tasks becomes wider. This also indicates that our method can more effectively align non-English languages within the same vector space implicitly. For each task, we discover most of languages can gain improvement by using our method, as compared to the top-performing baseline. Specifically, 84.6% (11/13), 100.0% (7/7), 80.0% (12/15) and 100.0% (5/5) languages demonstrate improvement in XNLI, PAWS-X, Wikiann and mTOP respectively. Furthermore, our method also provides improvement for non-alphabetic languages in many tasks, such as Chinese, Japan and Korean. This reflects that our method can be effectively generalized into various target languages, even in cases where significant differences exist between the source and target languages. Task Dataset |Train| |Dev| |Test| |Lang| Metric Classification XNLI 392K 2.5K 5K 13 Accuracy Classification PAWS-X 49K 2K 2K 7 Accuracy NER Wikiann 20K 10K 1-10K 15 F1 Semantic Parsing mTOP 15.7K 2.2K 2.8-4.4K 5 Exact Match Table 2: Evaluation datasets. |Train|, |Dev| and |Test| delegate the numbers of examples in the training, validation and testing sets, respectively. |Lang| is the number of target languages we use in each task. Tasks Methods en ar bg de el es fr hi ru tr ur vi zh ko nl pt ja AVG / en AVG XNLI (Conneau et al., 2018) mBERT 80.8 64.3 68.0 70.0 65.3 73.5 73.4 58.9 67.8 60.9 57.2 69.3 67.8 66.4 67.5 mBERT+Syn 81.6 65.4 69.3 70.7 66.5 74.1 73.2 60.5 68.8 62.4 58.7 69.9 69.3 67.4 68.5 mBERT+code-switch 80.9 64.2 70.0 71.5 67.1 73.7 73.2 61.6 68.9 58.6 57.8 69.9 70.0 67.2 68.3 our method 81.3 65.8 71.3 71.8 68.3 75.2 74.2 62.8 70.7 61.1 58.8 71.8 70.8 68.6 69.5 PAWS-X (Yang et al., 2019) mBERT 94.0 85.7 87.4 87.0 77.0 69.6 73.0 80.2 81.7 mBERT+Syn 93.7 86.2 89.5 88.7 78.8 75.5 75.9 82.7 83.9 mBERT+code-switch 92.4 85.9 87.9 88.3 80.2 78.0 78.0 83.4 84.3 our method 93.8 87.2 89.6 89.4 81.8 79.0 80.0 84.6 85.6 Wikiann(Pan et al., 2017) mBERT 83.7 36.1 76.0 75.2 68.0 75.8 79.0 65.0 63.9 69.1 38.7 71.0 58.9 81.3 79.0 66.9 68.1 mBERT+Syn 84.1 34.6 76.9 75.4 68.2 76.0 79.1 64.0 64.2 68.7 38.0 73.1 58.0 81.7 79.5 67.0 68.1 mBERT+code-switch 82.4 39.2 77.1 75.2 68.2 71.0 78.0 66.1 64.2 72.4 41.3 69.2 59.9 81.3 78.9 67.3 68.3 our method 84.5 41.4 78.9 77.3 70.2 75.3 80.3 67.6 63.9 73.1 46.8 72.6 62.2 81.8 80.8 69.4 70.5 mTOP(Li et al., 2021) mBERT 81.0 28.1 40.2 38.8 9.8 29.2 39.6 mBERT+Syn 81.3 30.0 43.0 41.2 11.5 31.4 41.4 mBERT+code-switch 82.3 40.3 47.5 48.2 16.0 38.0 46.8 our method 83.5 44.5 54.2 51.7 18.8 47.3 50.5 Table 3: The experimental results on four tasks. The best results in each task are highlighted in bold. The baselines include \"mBERT\", \"mBERT+Syn\" and \"mBERT+codeswitch\". They delegate \"only using mBERT\", \"using mBERT with a syntactic module (GAT)\" and \"mBERT with the code-switching technique\" for cross-lingual transfer. The results of \"mBERT\" is from Hu et al. (2020). For \"mBERT+Syn\" and \"mBERT+code-switch\", we adopt open-source code of the work of Ahmad et al. (2021) and Qin et al. (2021) to reproduce these experiments, and report the results. The evaluation metrics are F1 value for the NER task, Accuracy for classification tasks, and Exact Match for semantic parsing. The \"AVG\" column means the average performance across all language for each method, while the \"AVG /en\" indicates the average performance on the languages excluding English. 5.2. Generalized Cross-Lingual Transfer Results In practical scenarios, cross-lingual transfer could involve any language pair. For example, in a crosslingual question-answering (QA) task, the context passage may be in German, while the multilingual model is required to answer the question in French. Considering on this, we conduct zero-shot cross-lingual transfer experiments within a generalized setting. Since PAWS-X and mTOP are completely parallel, we evaluate the performance of our method and \"mBERT\" baseline on generalized cross-lingual transfer tasks using the two dataset. The experimental results are illustrated in Figure 3. For both classification and semantic parsing benchmarks, we have observed improvements among most language pairs. This reflects that our method is very effective for generalized crosslingual transfer. Furthermore, when English is included in the language pair, there is a substantial enhancement in performance. Specifically, when English serves as the source language, the average performance of target languages is increased over 10% and 3% in mTOP and PAWS-X dataset, respectively. This reflects the effectiveness of the code-switching in aligning other languages with English. For the PAWS-X dataset, we find that some non-Indo-European languages such as Japanese, Korean, and Chinese can achieve improvements, even when the source languages belong to the Indo-European language family, including English, Spanish, French, and German. It reflects that syntactic knowledge can effectively narrow the gap of language structures for this task, especially for the language pairs without close linguistic relationships. 6. Analysis and Discussion 6.1. Impact on Languages We investigate whether our method can improve the performance of specific languages or language groups. As shown in Figure 4, we display the performance improvement of our method by comparing the \"mBERT\" baseline. We find that almost languages can obtain benefits from our method. Particularly, when the target language, such as German, Spanish and French, belongs to the IndoEuropean language family, the improvement is very significant. Furthermore, the performance in the mTOP task is improved significantly by our method among all languages. This may be because that our method consider both syntax and lexicon simultaneously, which is beneficial for the semantic parsing task. target source performance difference (a) mTOP target (b) PAWS-X performance difference source Figure 3: Results for generalized zero-shot cross-lingual transfer on mTOP and PAWS-X. We report the performance differences between our method and \"mBERT\" baseline across all languages. -5 0 5 10 15 20 en de es fr bg ru ar vi tr ur el hi zh ko Performance Improvement(%) Language XNLI PAWS-X Wikiann mTOP Figure 4: Performance improvements for XNLI, PAWS-X, Wikiann, and mTOP across languages. The languages in x-axis are grouped by language families: IE.Germanic (en, de), IE.Romance (es, fr), IE.Slavic (bg, ru), Afro-asiatic (ar), Austro-asiatic (vi), Altaic (tr, ur), IE.Greek (el), IE.Indic (hi), Sino-tibetan (zh), Korean (ko). 6.2. Representation Similarities across Languages To evaluate the effectiveness of our method in aligning different languages, we employ the representation similarity between languages as the metric. Specifically, we utilize the testing set of XNLI (Conneau et al., 2018) as the dataset, which consists of parallel sentences across multiple languages. Then we take the vector of [CLS] token from the final layer of our model, as well as the vectors from two baselines (\"mBERT+Syn\" and \"mBERT+codeswitch) for each sentence. Following Libovick` y et al. (2019), the centroid vector for representing each language is calculated by averaging these sentence representations. Finally, we adopt cosine similarity as the indicator to assess the degree of alignment between English and each target language. Figure 5 illustrates the similarities between languages by using our method and the other two baselines. It can be easily found that our method outperforms the other two baselines in aligning language representations. This suggests that infusing two types of knowledge is indeed effective in reducing the disparities in language typologies, which improve cross-lingual transfer performance. In addition, we observe that \"mBERT+code-switch\" performs better than \"mBERT+Syn\", which reflects that lexical knowledge is more useful than syntactic knowledge for this task. 6.3. Impact of Code-switching The replacement ratio \u03b1 for code-switching is an important hyper-parameter in our method. Hence, we explore its impact on mTOP and PAWS-X, by varying \u03b1 from 0 to 0.9 in increments of 0.1, shown in Figure 6. When \u03b1 is set to 0, it represents the results of the baseline \"mBERT+Syn\". As \u03b1 increases, more source words are substituted with their equivalent words from other languages. The performance improvement certificates the effectiveness of code-switching technique. Notably, when about half of the words are replaced (0.5 for PAWS80 85 90 95 100 ar bg de el es fr hi ru tr ur vi zh mBERT+Syn mBERT+code-switch LS-mBERT Figure 5: The similarities between languages. We first calculate the centroid representation for each language following Libovick` y et al. (2019). Then we adopt cosine similarity to evaluate the similarity between English and each target language. X and 0.4 for mTOP), the performance reaches their peaks. After that, both tasks experience a decline in performance. This decline might be because the expression of meaning and sentence structure are influenced severely as too many words are replaced. Therefore, it is a optimal choice to set \u03b1 between 0.4 to 0.5 for code-switching. Figure 6: Performance on mTOP and PAWS-X with different replacement ratio \u03b1 in code-switching. Furthermore, we investigate whether the choice of the replacement language in code-switching impacts our model\u2019s performance. We select mTOP and PAWS-X as the testing tasks. In codeswitching, we devise three different measures for language replacement: \"Exclusively replacing with the target language\", \"Replacing with languages from the same language family as the target language\"; and \"Replacing with languages selected randomly\". The experimental results are illustrated in Figure 7. We can easily observe that \"Exclusively replacing with the target language\" performs best, while \"Replacing with randomly selected languages\" yields the poorest results. Hence, this also underscores the importance of selecting languages closely related to each target language for substitution when employing the code-switching technique. 35 45 55 65 75 85 95 mTOP PAWS-X Performance(%) Type1 Type2 Type3 Figure 7: Performance on mTOP and PAWS-X with different replacement languages in code-switching. The source language for both tasks is English, and the results are averaged across all target languages excluding English. \u201cType1\u201d represents the replacement with the target language; \u201cType2\u201d represents the replacement with languages from the same language family as the target language; \u201cType3\u201d represents the replacement with randomly selected languages. 6.4. Performance with XLM-R To validate the universality of our method, we substitute multilingual BERT with XLM-R in our framework. XLM-R is a more robust multilingual pre-trained model known for its exceptional crosslingual transfer capabilities. Subsequently, we test its performance on the PAWX-S dataset, and the experimental results are displayed in Table 4. In Table 4, we also observe that our framework outperforms the other three baselines. This indicates that integrating lexical and syntactic knowledge is beneficial for enhancing performance, irrespective of the base model employed. Notably, our framework only achieves the slight performance improvement when utilizing XLM-R as the base model compared to employing multilingual BERT. It may be because that the base model, XLM-R, adopt larger corpus during pre-training, resulting in preserving richer language information. Consequently, XLM-R itself has possessed superior cross-lingual transfer capabilities. The assistance by incorporating external linguistic knowledge appears to be relatively minor in comparison. 6.5. Limitations and Challenges In our study, we adopt a bilingual dictionary, such as MUSE (Lample et al., 2018), to substitute words in other languages. However, we randomly choose a target language word when there exist multiple translations for a source language word. This approach, although convenient, neglect the context of the source language word, potentially leading to inaccurate translations. This also highlights us to explore more precise word alignment methods in Task Methods en ar bg de el es fr hi ru tr ur vi ko nl pt AVG PAWS-X XLM-R 84.2 48.5 80.5 77.0 77.8 76.1 79.8 67.5 70.4 76.0 54.2 78.5 59.1 83.3 79.3 72.8 XLM-R+Syn 83.5 46.4 80.1 76.0 78.9 77.6 79.1 72.1 70.6 76.1 55.3 77.6 59.0 83.1 79.2 73.0 XKLM-R+code-switch 83.4 46.8 81.7 78.2 79.2 71.1 78.6 72.9 70.6 77.2 57.9 76.0 58.2 83.6 80.0 73.0 our method 83.1 44.9 82.7 76.8 78.4 76.9 79.6 71.1 70.1 76.6 60.4 78.2 58.1 83.5 79.7 73.3 Table 4: Results for PAWS-X with XLM-R. the future. Furthermore, the tasks we have evaluated are quite limited, with some of them involving only a few languages. In the future, we will extend our method to more cross-lingual tasks. Meanwhile, we also develop dataset for these tasks to support more languages. 7. Conclusion In this paper, we present a framework called \"lexicon-syntax enhanced multilingual BERT\" (\"LSmBERT\"), which infuses lexical and syntactic knowledge to enhance cross-lingual transfer performance. Our method employs code-switching technology to generate input text mixed in various languages, enabling the entire model to capture lexical alignment information during training. Besides, a syntactic module consisting of a graph attention network (GAT) is introduced to guide mBERT in encoding language structures. The experimental results demonstrate that our proposed method outperforms all the baselines across different tasks, which certificates the effectiveness of integrating both types of knowledge into mBERT for improving cross-lingual transfer. In the future, we plan to incorporate different linguistic knowledge into large language models (LLMs) to further enhance cross-lingual transfer performance. 8. Acknowledgements The authors would like to thank the anonymous reviewers for their feedback and suggestions. Additionally, this work was supported by the Major Program of the National Social Science Fund of China (18ZDA238), the National Social Science Fund of China (No.21CYY032), Beihang University Sponsored Projects for Core Young Researchers in the Disciplines of Social Sciences and Humanities(KG16183801) and Tianjin Postgraduate Scientific Research Innovation Program (No.2022BKY024). 9. Bibliographical"
16
+ }
intro_8K/test_introduction_short_2404.16645v1.json ADDED
@@ -0,0 +1,17 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16645v1",
3
+ "title": "Tele-FLM Technical Report",
4
+ "abstract": "Large language models (LLMs) have showcased profound capabilities in language\nunderstanding and generation, facilitating a wide array of applications.\nHowever, there is a notable paucity of detailed, open-sourced methodologies on\nefficiently scaling LLMs beyond 50 billion parameters with minimum\ntrial-and-error cost and computational resources. In this report, we introduce\nTele-FLM (aka FLM-2), a 52B open-sourced multilingual large language model that\nfeatures a stable, efficient pre-training paradigm and enhanced factual\njudgment capabilities. Tele-FLM demonstrates superior multilingual language\nmodeling abilities, measured by BPB on textual corpus. Besides, in both English\nand Chinese foundation model evaluation, it is comparable to strong\nopen-sourced models that involve larger pre-training FLOPs, such as Llama2-70B\nand DeepSeek-67B. In addition to the model weights, we share the core designs,\nengineering practices, and training details, which we expect to benefit both\nthe academic and industrial communities.",
5
+ "authors": "Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Chao Wang, Xinzhang Liu, Zihan Wang, Yu Zhao, Xin Wang, Yuyao Huang, Shuangyong Song, Yongxiang Li, Zheng Zhang, Bo Zhao, Aixin Sun, Yequan Wang, Zhongjiang He, Zhongyuan Wang, Xuelong Li, Tiejun Huang",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "cs.CL",
9
+ "cats": [
10
+ "cs.CL",
11
+ "cs.AI"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "LLM Fairness",
15
+ "gt": "Large Language Models (LLMs) have been considered a remarkable approach for unsupervised learning, utilizing extensive data to achieve significant advancements. Large models based on decoder-only Transformers [64; 43] have demonstrated strong abilities on language understanding, generation, and in-context learning [10], et al.. Through downstream supervised fine-tuning (SFT) and task-specific alignments (e.g., Reinforcement Learning from Human Feedback, RLHF) [41], LLMs have led to significant progress in the development of dialogue assistant applications with their human-level multi-turn interaction capabilities [40]. Furthermore, LLMs have demonstrated complex cognitive abilities as reflected by code interpretation and completion [37], mathematical problem-solving [35], logical reasoning [69], and agent-like actions [9]. Recently, LLMs have also shown potential to facilitate a unified sequence-to-sequence modeling paradigm for multimodal learning by treating image, video, and audio signals all as token sequences [57; 30]. This positions LLMs as pivotal for progress towards Artificial General Intelligence (AGI) [11]. Inspired by the superior performances of proprietary applications [40; 6], a plethora of open-sourced LLMs has been publicly available for both the English [60; 61; 42; 27; 58] and Chinese [71; 5; 7; 33] communities. The open-sourced models typically vary in size from 7B to 70B parameters, with their performances improving with model sizes and training FLOPs, which is described as scaling laws [29; 23]. Open LLMs can be classified into foundation language models, SFT models, and RLHF models. \u2020Indicates equal contribution. *Corresponding authors. Technical Report. April 26, 2024 (v1) arXiv:2404.16645v1 [cs.CL] 25 Apr 2024 Tele-FLM Technical Report 2 PRE-TRAINING DATA Despite the growing prevalence and impressive evaluation performances, the high computational cost remains the major challenge in LLM development. In this study, we focus on alleviating the excessive computation by establishing a model-producing pipeline that streamlines the hyperparame- ter searching process, minimizes trial-and-error, and reduces restarts in training. For instance, the Llama technical report [60] assumed the use of around 2,048 A100 GPUs for 5 months, while a single Llama-65B training trial spanned only 21 days, constituting only 14% of the total GPU time. It indicates that open-source endeavors of pre-training LLMs may undergo redundant trial-and-error cycles that may consume enormous computational resources. In contrast, in this work, we reduce the total time cost due to restarts and trial-and-error to negligible levels. We believe that sharing our detailed techniques, engineering practices, and training dynamics [20], especially for LLMs exceeding the 50B scale, could benefit the community as well as contribute to green AI. In this report, we introduce Tele-FLM (aka FLM-2), an open multilingual LLM with 52 billion parameters, which is pre-trained from scratch on a 2.0 trillion token corpus comprising texts from English, Chinese, and various other languages. Tele-FLM inherits and extends the low carbon techniques and fact-enhancing pre-training objectives from the FLM family [33]. The training of Tele-FLM has encountered no instability issue except hardware failures through the completed 2T tokens, and remains ongoing for more data. In addition to the model checkpoints, we release the details of data composition, model architecture, hyperparameter searching, and the full pre-training dynamics. We evaluate Tele-FLM across multiple English and Chinese benchmarks. Regarding English language modeling, Tele-FLM has better Bits-Per-Byte (BPB) than Llama2-70B [61], demonstrating strong compression capabilities. The model also achieves lower BPB than Llama3-70B [2] and Qwen1.5- 72B [5] on Chinese corpora, showcasing its multilingual nature. With fewer English training tokens and smaller models, Tele-FLM matches Llama-65B and is comparable to Llama2-70B in English foundation model evaluation. As for Chinese foundation model evaluation, Tele-FLM matches the overall performance of larger multilingual models trained with a similar amount of data (e.g., DeepSeek-67B [7]). On certain tasks, it surpasses larger models trained with significantly more data (e.g., Qwen1.5-72B). The remainder of this report is structured as follows: Section 2 delves into the specifics of pre- training data processing. Section 3 details our model architecture, tokenizer, infrastructures, training techniques, and hyperparameters. In Section 4, we illustrate the pre-training dynamics and conduct BPB-based evaluation and analysis. Benchmark evaluation in both English and Chinese are provided in Section 5. Section 6 discusses some common issues and lessons learned. Section 7 reviews related literature. We conclude our work and look to the future in Section 8.",
16
+ "main_content": "Our training dataset comprises a variety of domains, as detailed in Table 1. We build a custom pipeline on spark cluster for massive data processing and apply custom functions to each subset. The pipeline includes text extraction from HTML/WARC, cleaning and paragraph-level deduplication with heuristic rules, model-based quality filtering and document-level deduplication with MinHash [8] algorithm. We obtain 2T tokens after all the procedures, and the distribution ratio between English and Chinese data is roughly 2:1. We incorporate more English data because of its higher quality, especially regarding the WebText domain. Additionally, in line with the methodology of GPT-4, we collected some instruct data and incorporated it into our pre-training data after removing the test sets of common datasets using the strict n-gram-based method. We deliberately avoid \u201ctraining on the test set\u201d or any other benchmark-oriented trick. WebText. CommonCrawl1 is often considered to be a repository containing diverse human experience and rich knowledge (especially long-tail knowledge). However, the high-quality sources in CommonCrawl are primarily concentrated in the English segment, with the Chinese content exhibiting relatively lower information density and quality. We use the latest CommonCrawl dumps from RedPajama [15] and incorporate WudaoCorpora [77] and similar Chinese-specific datasets together to form a large web-text dataset. We apply custom heuristic rules and a FastText [28] classifier to 1https://commoncrawl.org/. 2 Tele-FLM Technical Report 3 PRE-TRAINING DETAILS Table 1: Pre-training data. For each subset of our 2T pre-training tokens, we detail the language, the sampling proportion, the number of epochs completed during training, and the disk size. Domain Language Sampling Prop. Epochs Disk Size WebText en, zh 75.21% 1.0 5.9 TB Code code, zh 9.81% 1.0 528.1 GB Book en, zh 7.17% 0.8 647.6 GB WorldKnowledge multi., en, zh 2.87% 2.5 67.5 GB QA en, zh 2.12% 1.0 159.2 GB AcademicPaper en 0.99% 1.0 54.4 GB Profession-Law zh 1.04% 1.0 84.2 GB Profession-Math math 0.62% 2.0 6.1 GB Profession-Patent zh 0.14% 1.0 10.4 GB Profession-Medical zh 0.02% 1.0 1.2 GB ClassicalChinese zh 0.02% 2.5 0.5 GB filter out low-quality content, cross-deduplicate for each language, and up-sample/down-sample each subset with regard to data quality. The ratio of English to Chinese is approximately 2:1. Code. We incorporate multiple Github-like code datasets and post-process it to filter-out low quality and duplicated content. Simultaneously, we carefully assembled and curated a well-formed markdown dataset comprising Chinese technical articles. Book. We collect books from various sources in both English and Chinese, such as Redpajama [15] and Gutenberg2, among others. We develop a series of cleaning steps to remove redundant formatting, garbled text, formula errors, duplicated paragraphs, and other unwanted content from the books. After interleaved deduplication on document level, we finally obtain a high-quality book dataset. The ratio of English to Chinese is nearly 1:1. WorldKnowledge. To enrich the model\u2019s knowledge base and common sense, we add Wikipedia dumps3 from 2024 period to our training set, covering 22 languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, ja, nl, pl, pt, ro, ru, sl, sr, sv, uk, zh. We first process these dumps via Online Language Modelling Dataset Pipeline [59] to clean up format; then a meticulous multi-lingual cleaning function is applied to remove reference and subsequent content, which tend to be irrelevant to the main text. QA. We use StackExchange dataset provided by RedPajama-Data [15]. Furthermore, similar Chinese datasets are collected and incorporated into the training after filtering out those QA pairs with low information content. The ratio of English to Chinese in this subset is roughly 1:2. AcademicPaper. We use arxiv dataset collected and processed by RedPajama-Data. This dataset is processed following a Llama-like procedure, which mainly focuses on clearing useless or redundant formats for better language modeling. Profession. To enhance the model\u2019s capacity in various professional fields, we decide to include some specific domains in our dataset, including medical, law, patent, and math. Some subsets are from open-source data, such as Wanjuan-Patent [21] and MathGLM [74]. We post-process each subset independently to address formatting issues, private information disclosure, et al.. ClassicalChinese. In order to improve the model\u2019s understanding of traditional Chinese culture and its capability in classical Chinese, we carefully collect classic Chinese ancient books and poetry. These materials are more credible than those found in web texts; therefore, we assign them a larger weight during sampling. 3 Pre-training Details 3.1 Model Architecture We adapt the architecture of FLM-101B [33] as a backbone with several modifications. FLM-101B follows the standard GPT-style decoder-only transformer architecture [43], with pre-normalization 2https://www.gutenberg.org/. 3https://dumps.wikimedia.org/. 3 Tele-FLM Technical Report 3 PRE-TRAINING DETAILS Table 2: Detailed model architecture. The model configuration of Tele-FLM\u00b5P is a reduced version of Tele-FLM with a smaller hidden size. Models Layer Num Attention Heads Hidden Size FFN Hidden Size Vocab Size Context Length Params Size (M) Tele-FLM 64 64 8,192 21,824 80,000 4,096 52,850 Tele-FLM\u00b5P 64 4 512 1,344 80,000 4,096 283 Table 3: Tokenizer compression ratio. Tokenizer Compression Ratio is defined as the ratio of token length to the original UTF-8 text length. Smaller values indicate better compression. We report the compression ratios of GPT-4, Llama1/2, Llama3, and Tele-FLM on various domains in our training set, as well as the weighted average. Tokenizer Vocab Size Compression Rate English Chinese Classical Chinese Code Multilingual Mathematical Weighted Avg. GPT-4 100k 0.221 0.420 0.478 0.267 0.303 0.508 0.291 Llama1/2 32k 0.262 0.515 0.558 0.367 0.314 0.974 0.356 Llama3 128k 0.220 0.294 0.353 0.267 0.274 0.508 0.251 Tele-FLM 80k 0.248 0.235 0.307 0.363 0.340 0.965 0.261 and adds a LayerNorm to the last layer\u2019s output. Meanwhile, we apply scalar multipliers to: (1) the output of the word embedding layer and (2) the final output hidden states before softmax. We leave these multipliers tunable in pre-training to control the numerical flow. For example, the output multiplier may benefit training by modulating the entropy of the vocabulary distribution. Building on FLM-101B, we further optimize the model structure for Tele-FLM. Specifically, We use RMSNorm [80] for normalization and SwiGLU [50] for the activation function. We roll back to use Rotary Positional Embedding (RoPE) [53] without Extrapolatable Position Embedding (xPos) [55], untie the embedding layer with language modeling head, and disable linear bias in the attention and all MLP modules. One mini version named Tele-FLM\u00b5P is used to search hyper-parameters here. Table 2 details the architecture of both Tele-FLM and Tele-FLM\u00b5P. 3.2 Tokenizer The key to training a text tokenizer is to make a better trade-off between compression ratio and vocabulary size. English-focused tokenizers like GPT-4 or previous Llama series often underperform in compressing Chinese text. In order to guarantee Tele-FLM\u2019s text compression ratio within Chinese while maintaining performance under multilingual setting, we train a tokenizer that aligns closely with the pre-training data distribution. We sample 12 million diverse text samples from our pretraining dataset as the tokenizer\u2019s training dataset, including multilingual texts with a primary focus on Chinese and English, code snippets, classical Chinese literature, and mathematical content. We train the tokenizer with Byte-level BPE (BBPE) algorithm [65]. Table 3 details the tokenizers of Tele-FLM, GPT-4, and the Llama family. The tokenizer of Tele-FLM outperforms GPT-4 and Llama series in both Chinese and Classical Chinese and is comparable with their performances in English, code, and multilingual content. In math, our tokenizer aligns with Llama2 while slightly trailing GPT-4. Overall, Tele-FLM tokenizer showcases a superior compression ratio for Chinese text and satisfactory performance in English. While slightly behind Llama3, Tele-FLM outperforms other approaches on average compression ratio by a large margin. 3.3 Cluster Hardware Tele-FLM is trained on a cluster of 112 A800 SXM4 GPU servers, each with 8 NVLink A800 GPUs and 2TB of RAM. The nodes have heterogeneous CPU architectures: 96 nodes with Intel 8358 (128\u00d7 2.60GHz) CPUs and 16 nodes with AMD 7643 (96\u00d7 2.30GHz) CPUs. All nodes are interconnected via InfiniBand (IB). The training process lasts around two months, including downtime due to unexpected factors. As a comparison of infrastructures, Llama3 [2] is pre-trained on at least 49,152 Nvidia H100 GPUs (in contrast to our 896\u00d7 A800). Meta also claims to have 4 Tele-FLM Technical Report 3 PRE-TRAINING DETAILS the equivalent of 600k H100 GPUs for future computing power4. With this significant gap in total resources, computational efficiency and success rate are critical for average entities. 3.4 Parallelism Tele-FLM utilizes 3D parallel training, combining the prevailing methodologies: data parallelism, tensor parallelism, and pipeline parallelism. Data parallelism [63] is a well-established distributed training method, in which the samples in a batch are partitioned and distributed across multiple devices and processed simultaneously. No inter-device communication is involved in the forward and backward computation, while the gradient is aggregated at the end of each step. Tensor parallelism [51] splits specific neural network tensors across multiple devices and computes via inter-device communication. In Tele-FLM training, tensor parallelism is mainly applied to the attention and feed-forward modules. Excessive use of tensor parallelism may escalate GPU communication overheads and reduce the training speed. To alleviate this, we integrate pipeline parallelism [39] that partitions the model at the layer level. 3D parallelism incorporates these parallel approaches, prioritizing allocation of tensor parallelism groups with higher communication overheads to the same node, thereby maximizing intra-node communication and minimizing inter-node communication. The parallel training setup for Tele-FLM is a mixture of 4 tensor parallel, 2 pipeline parallel, and 112 data parallel. Additionally, we partition inputs to the Transformer\u2019s LayerNorm and Dropout layers along the sequence length dimension with sequence parallelism [31], yielding further GPU computational and memory savings. Furthermore, we utilize Distributed Optimizer module from Megetron-LM5 [46] with optimization. This optimizer further reduces GPU memory consumption by partitioning optimizer states with larger memory footprints across the data parallel dimension. 3.5 Hyperparameter Search Effective hyperparameter tuning may accelerate the loss reduction and ensure convergence, making it crucial for model training. However, the high cost of training large models often renders exhaustive grid searches impractical. Hence, we employ \u00b5P [73] for optimal parameter search. The Tensor Programs theories [72; 36] reveal universal relations in the training dynamics across a series of models, with their widths approaching infinity. For certain hyperparameter classes, this leads to a parameterized mapping for their optimal values between small and large widths. Generally, under \u00b5P transfer, wider models will consistently achieve lower loss than narrower ones when trained on identical data [73]. Consequently, if a narrow model converges, its wider counterparts will always converge. Based on this approach, we set a small model, namely Tele-FLM\u00b5P, for grid search purpose. As demonstrated in Table 2, this small model\u2019s architecture is different from Tele-FLM only in width. With a fixed layer number of 64 and attention head dimension of 128, we reduce the hidden size to 512. This modification results in 4 attention heads and a feed-forward hidden size of 1344. Due to its smaller size, Tele-FLM\u00b5P allows for significantly more experimental runs within fixed time and resource constraints. We search 7 hyperparameters: Learning Rate for vector-like and matrix-like weights, the Minimum Learning Rate at the end of the schedule, the initialization Standard Deviation for vector-like and matrix-like weights, the scaling factor for the embedding layer (namely Input Mult), and the scaling factor for the output hidden state in the final layer (namely Output Mult). For the definitions of vector/matrix-like weights and the \u00b5P transferring formula we apply, please refer to [75] and [73]. We use truncated normal distribution for model initialization. Figure 1 illustrates the loss and gradient norm dynamics of 9 hyperparameter combinations for the grid search, which are selected based on our prior knowledge of model configurations. We choose 4https://www.instagram.com/reel/C2QARHJR1sZ/?hl=en. 5https://github.com/NVIDIA/Megatron-LM. 5 Tele-FLM Technical Report 4 LOSS DYNAMICS AND BPB EVALUATION 0 10000 20000 30000 40000 50000 Steps 2.60 2.65 2.70 2.75 2.80 2.85 2.90 2.95 3.00 Training Loss (a) Loss curves for grid search. 0 10000 20000 30000 40000 50000 Steps 0 2 4 6 8 10 Gradient Norm (b) Gradient norm curves for grid search. Figure 1: Experimental curves of hyperparameter search based on \u00b5P. Table 4: Tele-FLM Training Hyperparameters. Searched Hyperparameters Non-Searched Hyperparameters Learning Rate 1.5e-4 LR Schedule Type cosine Matrix Learning Rate 1.5e-4 LR Schedule (tokens) 2.5T Minimum Learning Rate 1.5e-5 Warmup Step 2,000 Standard Deviation 4e-3 Clip Grad 1.0 Matrix Standard Deviation 4.242e-3 Weight Decay 0.0 Input Mult 1.0 Batch Size (tokens) 5,505,024 Output Mult 3.125e-2 RoPE Theta 10,000 the hyperparameters represented by the red line for final training after assessing the rate of loss decrease, trend stability, and gradient norm stability. Using \u00b5P, we derive the optimal hyperparameter configuration for the final 52B model based on this searched result, which is detailed in Table 4. A more fine-grained search can be conducted with expanded time and budgets. 4 Loss Dynamics and BPB Evaluation 0 250 500 750 1000 1250 1500 1750 2000 Trained T okens (Billions) 1.5 1.6 1.7 1.8 1.9 2.0 2.1 2.2 Training Loss (a) Training loss curve. 0 250 500 750 1000 1250 1500 1750 2000 Trained T okens (Billions) 1.5 1.6 1.7 1.8 1.9 2.0 2.1 2.2 Validation Loss (b) Validation loss curve. 0 250 500 750 1000 1250 1500 1750 2000 Trained T okens (Billions) 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 Gradient Norm (c) Training gradient norm curve. Figure 2: Pre-training curves for Tele-FLM w.r.t. amount of data in billion tokens. We present the curves for training and validation loss and gradient norm on our pre-training data distribution in Figure 2. Figure 2a shows that the training process of Tele-FLM succeeds with a single, stable run without any divergence. This result is predictable with our \u00b5P hyperparameter search mentioned above. Figure 2b indicates that the loss curve generalizes well to validation data without saturation or overfitting. Figure 2c presents the gradient norm. We observe that the reduction in language modeling loss translates well into improvements on downstream tasks. Language modeling is compression [16]. Evaluation metrics related to language perplexity (PPL) are well-known to be closely connected to compression ratio. Moreover, these metrics usually exhibit more stable scaling behavior, making them an authentic foundation of downstream task performance (which is usually measured by more complex and nonlinear metrics [48]). For PPL-related evaluation, we use Bits-Per-Byte (BPB) [38; 18] as our metric, which considers both per-token loss and the 6 Tele-FLM Technical Report 4 LOSS DYNAMICS AND BPB EVALUATION 0.0 0.5 1.0 1.5 2.0 0.50 0.55 0.60 0.65 0.70 0.75 WebT ext (en) T ele-FLM Llama-65B Llama2-70B Llama3-8B Llama3-70B 0.0 0.5 1.0 1.5 2.0 0.45 0.50 0.55 0.60 0.65 0.70 AcademicPaper (en) T ele-FLM Llama-65B Llama2-70B Llama3-8B Llama3-70B 0.0 0.5 1.0 1.5 2.0 0.60 0.65 0.70 0.75 0.80 Book (en) T ele-FLM Llama-65B Llama2-70B Llama3-8B Llama3-70B 0.0 0.5 1.0 1.5 2.0 0.45 0.48 0.50 0.53 0.55 0.58 0.60 0.62 0.65 StackExchange (en) T ele-FLM Llama-65B Llama2-70B Llama3-8B Llama3-70B 0.0 0.5 1.0 1.5 2.0 0.45 0.50 0.55 0.60 0.65 0.70 0.75 Wikipedia (multi-language) T ele-FLM Llama-65B Llama2-70B Llama3-8B Llama3-70B 0.0 0.5 1.0 1.5 2.0 0.16 0.18 0.20 0.22 0.24 0.26 0.28 0.30 Github (code) T ele-FLM Llama-65B Llama2-70B Llama3-8B Llama3-70B 0.0 0.5 1.0 1.5 2.0 0.60 0.80 1.00 1.20 1.40 WebT ext (zh) T ele-FLM Llama-65B Llama2-70B Llama3-8B Llama3-70B 0.0 0.5 1.0 1.5 2.0 0.80 1.00 1.20 1.40 1.60 Book (zh) T ele-FLM Llama-65B Llama2-70B Llama3-8B Llama3-70B 0.0 0.5 1.0 1.5 2.0 0.60 0.70 0.80 0.90 1.00 1.10 1.20 1.30 WorldKnowledge (zh) T ele-FLM Llama-65B Llama2-70B Llama3-8B Llama3-70B 0.0 0.5 1.0 1.5 2.0 0.80 1.00 1.20 1.40 1.60 QA (zh) T ele-FLM Llama-65B Llama2-70B Llama3-8B Llama3-70B 0.0 0.5 1.0 1.5 2.0 Trained T okens (Trillions) 1.00 1.20 1.40 1.60 1.80 2.00 2.20 ClassicalChinese (zh) T ele-FLM Llama-65B Llama2-70B Llama3-8B Llama3-70B 0.0 0.5 1.0 1.5 2.0 0.20 0.30 0.40 0.50 0.60 0.70 0.80 0.90 1.00 BPB Loss on Validation Dataset Professional (zh) T ele-FLM Llama-65B Llama2-70B Llama3-8B Llama3-70B Figure 3: BPB curves of Tele-FLM on representative English (en), Chinese (zh), multi-language, and code validation datasets, compared with Llama series. influence of domains and tokenizers. Specifically, on a test corpus in a certain domain, if the total loss is close, a model that tokenizes with a better compression ratio is preferred by the BPB metric. For the English language, we break down the BPB evaluation into 6 different domains, represented by validation datasets from WebText6, Github, Wikipedia, Books, ArXiv, and StackExchange, respectively. We compare with different versions of Llama, including Llama-65B, Llama2-70B, Llama3-8B, and Llama3-70B [2], to analyze how well Tele-FLM fits to compress English data. 6We use text from CommonCrawl and C4, which approximately represent the same source (broad web data). 7 Tele-FLM Technical Report 5 BENCHMARK EVALUATIONS Table 5: BPB of Tele-FLM, Llama family models, and Qwen1.5-72B on English datasets. BPB is computed for 6 dataset categories, with weighted sum results based on Llama [60] and Tele-FLM training data configurations. The best results are in boldface and second-best underlined. Model WebText Github Wikipedia Book ArXiv StackExchange Weighted Sum L-Prop.1 F-Prop.2 Loss Llama-65B 1.650 0.543 1.297 1.791 1.205 1.293 1.572 1.485 Llama2-70B 1.588 0.471 1.198 1.695 1.103 1.220 1.506 1.418 Llama3-70B 1.729 0.597 1.300 1.886 1.042 1.388 1.642 1.556 Qwen1.5-72B 1.996 0.592 1.433 2.107 1.111 1.393 1.878 1.773 Tele-FLM (52B) 1.598 0.314 1.163 1.843 1.153 1.193 1.512 1.411 BPB Llama-65B 0.615 0.286 0.595 0.710 0.590 0.570 0.602 0.574 Llama2-70B 0.592 0.249 0.544 0.672 0.540 0.538 0.576 0.547 Llama3-70B 0.542 0.229 0.513 0.633 0.479 0.497 0.528 0.502 Qwen1.5-72B 0.642 0.234 0.601 0.717 0.521 0.515 0.620 0.586 Tele-FLM (52B) 0.562 0.164 0.570 0.700 0.567 0.531 0.550 0.516 1 L-Prop. (Llama [60] Proportion): 82% : 4.5% : 4.5% : 4.5% : 2.5% : 2.0%. 2 F-Prop. (Tele-FLM Proportion): 75.17% : 13.48% : 3.56% : 5.26% : 1.46% : 1.07%. Table 6: BPB of Tele-FLM, Llama family models and Qwen1.5-72B, on Chinese datasets. BPB is computed for 7 dataset categories, with direct average and weighted sum results based on Tele-FLM training data distributions. Models WebText Code Book World QA Classical Professional Direct Weighted1 Knowledge Chinese Average Sum Loss Llama-65B 1.773 1.236 2.029 1.586 2.076 2.819 1.215 1.819 1.782 Llama2-70B 1.419 1.019 1.542 1.189 1.681 2.233 0.896 1.426 1.414 Llama3-70B 2.152 1.264 2.210 1.722 2.568 2.844 1.109 1.981 2.114 Qwen1.5-72B 2.260 1.405 2.520 1.751 2.888 2.748 0.908 2.069 2.243 Tele-FLM (52B) 1.923 1.096 2.135 1.612 2.530 2.144 0.846 1.755 1.913 BPB Llama-65B 1.325 0.744 1.503 1.161 1.528 2.280 0.919 1.351 1.326 Llama2-70B 1.060 0.614 1.142 0.869 1.237 1.811 0.678 1.059 1.052 Llama3-70B 0.913 0.498 0.943 0.752 1.063 1.458 0.485 0.873 0.897 Qwen1.5-72B 0.759 0.537 0.871 0.663 0.951 1.237 0.329 0.764 0.759 Tele-FLM (52B) 0.643 0.478 0.741 0.619 0.831 0.949 0.290 0.650 0.646 1 Tele-FLM training set Proportion: 76.60% : 1.91% : 11.61% : 1.44% : 4.50% : 0.07% : 3.87%. Figure 3 illustrates the BPB trends w.r.t. to the amount of our pre-training data (in trillion tokens). As training progresses, Tele-FLM surpasses Llama2-70B on WebText, Github, and StackExchange, outperforming Llama-65B and Llama3-8B on almost all datasets, demonstrating strong foundation abilities in English. Numerical results are presented in Table 5. Regarding the weighted sum of BPB, Tele-FLM outperforms Llama-65B, Llama2-70B, Qwen1.5-72B, and Llama3-8B on both Tele-FLM and Llama [60] weighting proportions. Note that Llama3-8B is trained on more than 15T tokens, and these results may indicate that scaling up the model size is still important, despite the rapid growth of the total amount of training data. Similarly to English, we compute BPB across 7 domains with the corresponding Chinese validation data, namely WebText, Code, Book, World Knowledge, QA, Classical Chinese, and Professional. Results are visualized in Figure 3 (with \u201czh\u201d suffix). Specific scores are provided in Table 6. On all these validation corpora, Tele-FLM demonstrates lower BPB than Qwen1.5-72B and the latest Llama3-70B model. Thus, we conclude that our foundation model achieves strong compression performance for Chinese without sacrificing its English language modeling abilities, and vice versa. 5 Benchmark Evaluations 5.1 English: Open LLM, HumanEval, and BBH Benchmarks. We evaluate Tele-FLM on three public and widely-used English benchmarks: Open LLM Leaderboard7, HumanEval [12], and BIG-Bench Hard [52]. 7https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard. 8 Tele-FLM Technical Report 5 BENCHMARK EVALUATIONS \u2022 Open LLM Leaderboard is hosted on Huggingface and includes 6 key tasks to measure a model\u2019s performance on a variety of areas, such as commonsense inference, knowledge capacity, truthfulness, and maths. We report our model\u2019s results with the official evaluation tools (Language Model Evaluation Harness [19]). For the baseline models, we pick the results directly from the Open LLM Leaderboard. \u2022 HumanEval, introduced by OpenAI, tends to evaluate the code generation ability of language models by measuring functional correctness of docstring-prompted output. We choose the pass@5 metric as a trade-off between representing model capability and the evaluation speed. \u2022 Big-Bench Hard is derived from the BIG-Bench benchmark, a diverse evaluation suite that focuses on tasks believed to be beyond the capabilities of current language models. The Big-Bench-Hard, containing 23 challenging tasks, is specifically chosen to represent areas where language models did not surpass average human-rater performance, according to prior evaluations [56]. Table 7: Performance of Tele-FLM and baselines on English benchmarks. Model Average ARC HellaSwag MMLU TruthfulQA WinoGrande GSM8K HumanEval BBH 25-shot 10-shot 5-shot zero-shot 5-shot 5-shot zero-shot 3-shot Llama2-70B 63.39 67.32 87.33 69.83 44.92 83.74 54.06 46.95 52.94 Llama2-13B 50.29 59.39 82.13 55.77 37.38 76.64 22.82 28.66 39.52 Llama-65B 56.98 63.48 86.09 63.93 43.43 82.56 37.23 33.54 45.54 Llama-13B 46.20 56.23 80.93 47.67 39.48 76.24 7.58 23.78 37.72 Tele-FLM (52B) 56.60 59.47 82.25 64.00 43.09 79.40 45.19 34.76 44.60 Results. Table 7 compares Tele-FLM to the Llama series. With 52B parameters and around 1.3T English pre-training tokens, Tele-FLM matches the overall performance of Llama-65B, which is trained on approximately 1.4T tokens. Regarding the nature of different subtasks, Tele-FLM shows advantages over Llama-65B on GSM8K [14] and HumanEval, which focus on reasoning capabilities, but performs slightly worse on some tasks that rely more heavily on knowledge. This disadvantage can potentially be mitigated with more pre-training data consumed. Besides, Tele-FLM achieves > 90% of the performances of Llama2-70B, which is larger in size and trained on a 2T token corpus. 5.2 Chinese: OpenCompass Benchmarks. To measure the Chinese language and knowledge capabilities of our model, we conduct an evaluation using the OpenCompass8 toolkit. Specifically, we choose the following tasks to evaluate the model\u2019s performance in multiple aspects: C-Eval [26] and CMMLU [32] (multisubject knowledge), C3 [54] (reading comprehension), CHID [82] (Chinese culture and language understanding), and CSL [34] (keyword recognition). Results. Table 8 shows evaluation results on Chinese benchmarks. On average, Tele-FLM achieves significantly higher scores than GPT-3.5 and comparable to GPT-4 and DeepSeek-67B [7], reaching 84% of Qwen1.5-72B\u2019s performance [5]. Note that Qwen1.5-72B is larger in size and trained with up to 3T tokens. On CHID and CSL, Tele-FLM shows leading performance among all the models compared. Interestingly, CHID is very specific to Chinese culture, while CSL comes from the scientific domain. This indicates Tele-FLM\u2019s potential to both quickly adapt to a specific language and benefit from general knowledge presented in different languages. 5.3 Evolution of Performance during Training We automatically track the evaluation scores on sampled validation data for 8 of the evaluation benchmarks, as depicted in Figure 4. We observe that for all the tasks, evaluation score improves as pre-training and validation loss/BPB decreases. For knowledge-oriented English benchmarks, including ARC [13], HellaSwag [78], Winogrande [3], and MMLU [22], the performances increase smoothly with more data, which is intuitive regarding the task nature. For reasoning-oriented tasks including GSM8K and BBH, we observe a sharper increase, which indicates these tasks have more complex metrics and could possibly demonstrate emergent abilities. CMMLU is a knowledge-oriented Chinese benchmark. The sharper increase in CMMLU indicates that our Chinese training data is far from saturating, and further improvement can be expected with the ongoing training process. 8https://opencompass.org.cn/home. 9 Tele-FLM Technical Report 6 LESSONS LEARNED Table 8: Performance of Tele-FLM and baselines on Chinese benchmarks. The results of Qwen1.5-72B and our Tele-FLM are locally computed with the OpenCompass toolkit, while other results are picked from OpenCompass leaderboard. Model Average C-Eval CMMLU C3 CHID CSL GPT-4 76.64 69.90 71.00 95.10 82.20 65.00 GPT-3.5 61.86 52.50 53.90 85.60 60.40 56.90 Qwen1.5-72B 80.45 83.72 83.09 81.86 91.09 62.50 Qwen-72B 83.00 83.30 83.60 95.80 91.10 61.20 DeepSeek-67B 73.46 66.90 70.40 77.80 89.10 63.10 Tele-FLM (52B) 71.13 65.48 66.98 66.25 92.57 64.38 0.0 0.5 1.0 1.5 2.0 35 40 45 50 55 60 Acc Norm ARC 0.0 0.5 1.0 1.5 2.0 60 65 70 75 80 Acc Norm HellaSwag 0.0 0.5 1.0 1.5 2.0 10 20 30 40 Acc GSM8K 0.0 0.5 1.0 1.5 2.0 38 39 40 41 42 43 44 Exact Match BBH 0.0 0.5 1.0 1.5 2.0 T okens (T) 38 40 42 44 MC2 TruthfulQA 0.0 0.5 1.0 1.5 2.0 T okens (T) 65 70 75 80 Acc Norm Winogrande 0.0 0.5 1.0 1.5 2.0 T okens (T) 30 35 40 45 50 55 60 65 Acc MMLU 0.0 0.5 1.0 1.5 2.0 T okens (T) 54 56 58 60 62 64 66 68 Acc CMMLU Figure 4: Evolution of performance evaluated by Language Model Evaluation Harness during training. Note that we sampled 20% examples for Hellswag and 30% examples for MMLU considering the time cost. 6 Lessons Learned Lesson on Pre-training Data. We have the following observations in Tele-FLM\u2019s pre-training process. First, as is widely known, both quality and quantity of the data are critical for pre-training; however, when there should be a trade-off between quality and quantity, data quality might be prioritized. For our project, an English-Chinese data ratio of 2:1 works better than 1:1, likely because the average quality of the Chinese Web data we have is relatively low. Second, changing the data distribution midway sometimes leads to changes in gradient norm curves and potential divergence, while maintaining a fixed distribution is more stable. Another advantage of maintaining a fixed data distribution is that it allows for safer early-stop of the \u00b5P experiments. To conclude, the data processing should be as complete as possible before the pre-training starts. Lesson on Hyperparameter Search. We observe that \u00b5P-based methods [73; 75] are effective and efficient in searching for the best hyperparameters and predicting the behaviors of the final large models. Specifically, prior experiences and the open-sourced learning rates are good starting points for hyperparameter search. Nevertheless, initialization standard deviation and output multipliers have more significant influences than commonly known. Lesson on Loss Dynamics. First, the slope of the loss curve typically flattens after 500B tokens. Therefore, training should be restarted promptly if early loss values are unsatisfactory. Second, random loss spikes are common and acceptable if the gradient norm curve looks normal. We observe that our model recovers from all the spikes in the pre-training process, unlike the early open-sourced endeavors [81; 4; 79]. We speculate that modern Llama-like structures, especially those with non-bias designs and truncated normal initialization, combined with effective hyperparameter search, provide decent robustness against loss spikes. Another type of spike corresponds to consistent loss increases, which can be identified early with \u00b5P and avoided before the training begins. 10 Tele-FLM Technical Report REFERENCES Lesson on Gradient Norm. The early gradient norm curves are not strong indicators of training stability. In hyperparameter search, we observe divergence following various gradient curve patterns, yet with higher divergence probabilities associated with continuously increasing gradient trends. 7 Related Work The idea of large foundation models originates from unsupervised pre-training with Transformerbased [64] architectures. Well-known examples of early foundation models include Bert [17], GPT-2 [43], and T5 [45]. GPT-3 [10] increases the model size to 175B and observes decent few-shot and zero-shot reasoning capabilities, which encourages a series of efforts to scale up foundation models [81; 47; 4; 79]. Research on scaling laws [29; 23; 24; 75] sheds light on the predictable trends of model performance when the parameter number increases. On the other hand, other works explore the emergent abilities [68; 67; 48] and their relationships to evaluation metrics and task nature. The Llama series [60; 61; 2] is well-known for its contributions to open-sourced large language models, and is widely regarded as a strong baseline for foundation model evaluation. Falcon [42] explores data processing of publicly available pre-training corpora. Mistral [27] and Gemma [58] release 7B-scaled models that are trained with more data and incorporated with advanced designs. For the Chinese community, Qwen [5], Baichuan [71], Yi [76], and DeepSeek [7] represent efforts in multilingual foundation model pre-training and open-sourcing. FLM-101B [33] studies methodologies for training large foundation models under limited budgets. InstructGPT [41] establishes the paradigm of aligning large foundation models with human preferences. Widely used approaches include supervised fine-tuning (SFT) [66; 70] and Reinforcement Learning from Human Feedback (RLHF) [49], among others [44]. Aligning techniques turn foundation models into dialogue agents, which form the core of AI assistants in commercial use. Closed-source dialogue agents are represented by GPT-4 [40], Claude [6], Grok [1], and Gemini [57]. Open-sourced chat models include Zephyr [62] and ChatGLM [25], among the large number of human-aligned versions of the open foundation models mentioned above. 8 Conclusions and Future Work In this report, we introduce Tele-FLM, an open multilingual foundation model. With 52B parameters and 2T training tokens, Tele-FLM matches the performance of larger models trained with more data, in both multilingual language modeling capabilities and benchmark evaluations. The pre-training procedure of Tele-FLM features a high success rate and low carbon footprint. We open-source the model weights as well as technical details and training dynamics. We hope this work will catalyze the growth of open-sourced LLM communities and reduce the trial-and-error cycles to train LLMs with more than 50B parameters. Note that although efforts are made to filter out harmful contents in the training data, such kind of outputs could still potentially be elicited from the released model, which does not represent the opinions of the authors or entities involved. For future work, we plan to continue enhancing the capabilities of Tele-FLM to facilitate broader application, as well as to develop efficient training techniques to explore the unmanned deep space of larger-scaled dense models. Acknowledgments This work is supported by the National Science and Technology Major Project (No. 2022ZD0116300) and the National Science Foundation of China (No. 62106249). We would like to thank Boya Wu, Li Du, Quanyue Ma, Hanyu Zhao, Shiyu Wu and Kaipeng Jia for their help on data, Hailong Qian, Jinglong Li, Taojia Liu, Junjie Wang, Yuanlin Cai, Jiahao Guo, Quan Zhao, Xuwei Yang, Hanxiao Qu, Yan Tian, and Kailong Xie for their help on computational resources, and all other colleagues\u2019 strong support for this project."
17
+ }
intro_8K/test_introduction_short_2404.16651v1.json ADDED
@@ -0,0 +1,16 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16651v1",
3
+ "title": "Evolutionary Large Language Models for Hardware Security: A Comparative Survey",
4
+ "abstract": "Automating hardware (HW) security vulnerability detection and mitigation\nduring the design phase is imperative for two reasons: (i) It must be before\nchip fabrication, as post-fabrication fixes can be costly or even impractical;\n(ii) The size and complexity of modern HW raise concerns about unknown\nvulnerabilities compromising CIA triad. While Large Language Models (LLMs) can\nrevolutionize both HW design and testing processes, within the semiconductor\ncontext, LLMs can be harnessed to automatically rectify security-relevant\nvulnerabilities inherent in HW designs. This study explores the seeds of LLM\nintegration in register transfer level (RTL) designs, focusing on their\ncapacity for autonomously resolving security-related vulnerabilities. The\nanalysis involves comparing methodologies, assessing scalability,\ninterpretability, and identifying future research directions. Potential areas\nfor exploration include developing specialized LLM architectures for HW\nsecurity tasks and enhancing model performance with domain-specific knowledge,\nleading to reliable automated security measurement and risk mitigation\nassociated with HW vulnerabilities.",
5
+ "authors": "Mohammad Akyash, Hadi Mardani Kamali",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "cs.CR",
9
+ "cats": [
10
+ "cs.CR"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "LLM AND Reasoning",
14
+ "gt": "In today\u2019s semiconductor technology landscape, As system-on-chip (SoC) designs integrate more and more intellectual property (IP) cores, each with unique functionality and security challenges, each from various vendors, each with ever-increasing complexity, we witness a growing challenge in detecting and fixing vulnerabilities. Given the pivotal role of SoCs, while substantial efforts have been invested in software (SW) testing and debugging, SoC (HW-based) testing, validation, and verification remain less mature [30]. The problem worsens while bugs are detected at lower levels of abstrac- tion, which makes respins extremely difficult (and even impossible, e.g., post-silicon) [34]. Moreover, existing solutions, from simula- tion to formal verification, usually require expertise. Such solutions also suffer from scalability issues, unable to cope with the grow- ing size and complexity of SoCs [2]. Furthermore, these solutions cannot address the majority of SoCs\u2019 vulnerabilities due to rapidly evolving threats, such as zero-day attacks. With the rapid evolution of LLMs, their capabilities have ex- panded into the domain of SW code generation with remarkable success, e.g., OpenAI\u2019s Codex [36]. Moreover, the scope of LLMs extends to SW code testing and verification while outperforming techniques like fuzzing [32]. While significant progress has been achieved in SW through LLMs, studies at the HW/SoC level, par- ticularly at RTL, have been dispersed. Many studies have initiated the LLMs\u2019 applicability at the HW/SoC level by raising questions like whether \"LLM can generate HDL\" or \"LLM can validate HW designs\". Just like in SW, LLMs have the potential to be utilized for both HW design, testing and validation (see Fig. 1). These studies show harnessing LLMs\u2019 capability to analyze, comprehend, and generate/validate complex code structures, might make them a right target vs. existing formal tools to identify potential security vulnerabilities within RTL codes [3, 37]. However, ensuring the integrity and security of HW designs, coupled with the potential for unknown vulnerabilities, presents broader challenges. This survey aims to offer a useful and comprehensive snapshot of rapidly growing use of LLMs in HW/SoC designs, particularly for security. We explore advancements, analyzing the pros and cons of each method. By examining current approaches, this work high- lights the innovative application of LLMs to automate the detection and resolution of security vulnerabilities in HW designs. Also, we investigate future research directions, emphasizing the need for specialized LLM architectures and domain-specific knowledge in- tegration. Our goal is to outline a roadmap for harnessing the full potential of LLMs in addressing HW security challenges, setting the stage for more robust and secure HW systems.",
15
+ "main_content": "Since the 1950s, many research efforts have been undertaken to develop highly efficient automated code generation tools [38]. These efforts have spanned from traditional program synthesizers [38]1, either deductive or inductive, to current neural-based models, notably codebase-reliant generative models [31]. With recent outrageous advancements in LLMs, massive research has focused on applying LLMs for independent SW code generation, leading to widely-used platforms like Codex and CodeGen [4]. The foundation of these models lies in autonomously predicting the subsequent token by considering the preceding context, typically comprising function signatures and docstrings that describe the intended functionality of the program, translating human-written instructions into precise code snippets or entire programs [4]. While this code generation relies on natural language processing (NLP), unlike natural language that is typically parsed as a sequential array of words or tokens, code generation is scrutinized based on its syntactic and semantic structure, often depicted using tree structures, e.g., abstract syntax trees (AST) [39]. Also, programming languages have a limited set of keywords, symbols, and rules, unlike the broad and nuanced vocabulary of natural languages. 1Synthesizers aim to automatically generate programs (SW codes), based on a space search over a variety of constraints relevant to domains known as Domain Specific Languages (DSLs). These techniques are mostly limited to pre-defined DSLs and thus suffer scalability, being general-purpose, and adaptability issues [1]. arXiv:2404.16651v1 [cs.CR] 25 Apr 2024 GLSVLSI \u201924, June 12\u201314, 2024, Clearwater, FL, USA Mohammad Akyash and Hadi M Kamali Prompting for HDL Generation RTL Modules Prompting for Vulnerability Description HDL Code Database Policy/Property Assertion Generation For HDL Modules Vulnerabilities Database Prompt Engineering HDL Database Repair (Mitigation) Suggestions Repair (Mitigation) Suggestions with & without Vulnerabilities (+Instruction/Explain) Fine Tuning Fine Tuning Figure 1: The Usage of LLMs for HDL (RTL) Generation/Validation. Given such differences, the primary concern for LLM-generated code is (i) correctness (testing and verification process), and (ii) codebase data hungriness [39]. In terms of correctness, testing and validation from the viewpoint of LLMs require well-defined metrics, where traditional metrics, e.g., BLEU that widely used in NLP assessments [39], fail due to their focus on linguistic similarity. For example, CodeBLEU that evaluates the quality of code produced by LLMs, or Pass@k that quantitatively measures the functional accuracy of code generation models, are example of such new metrics [36]. Regarding codebase data for code generation, substantial codebase data2 is required for enhanced training and/or fine-tuning to improve the efficacy of LLMs for code ganeration [4, 36]. 3 LLMS FOR HW: DESIGN AND TESTING Similar to SW engineering and testing, leveraging LLMs can significantly optimize and enhance circuit design processes, particularly within Electronic Design Automation (EDA) frameworks. LLMs can be used at high level abstraction, e.g., RTLs, to (i) reduce manual efforts for implementation3, (ii) address the challenge of lacking HDL codebase4, (iii) expedite time-to-market (TTM) in the competitive chip design process, and (iv) enable a more efficient and reliable system (by reducing human-induced faults) [40]. The current LLM-based methodologies in HW can be classified into two primary categories: (1) Development of automated AI agents aimed at streamlining EDA workflows (e.g., ASIC flow); (2) Derivation of SW code generation for RTL implementation. Regarding the former category, LLMs assist in various tasks such as script generation, architecture specification, and interpretation of compilation reports, thereby minimizing the workload of the design team. Within the latter category, solutions predominantly utilize LLMs in two manners: (i) refinement of design prompts, which entails the creation (engineering) of more precise prompts to guide LLMs towards RTL generation with increased effectiveness, and (ii) RTL-based tuning, which involves directly tuning LLMs through 2The data must be not only vast but also diverse, relevant, and of high integrity as the superioir quality codebase data enhances model performance significantly [32]. 3It can potentially serve as an alternative to high level synthesis (HLS), thereby enabling designers with limited HDL expertise to swiftly generate HW designs [40]. 4Lack of HDL codebase is always a substantial barrier for AI-driven HW solutions, consequently enhancing the efficiency of the training phase [33]. training on RTL code examples. A comparison of all existing LLMbased approaches in these two categories is shown in Table 1. 3.1 LLM Agent for EDA Automation Several studies have explored the potential of LLM in automating the ASIC design/implementation process [8, 14, 27, 29]. ChatEDA and ChipNeMo are two examples of task planning and execution agents that interpret natural language commands from the design team. ChipNeMo [29] implements a series of domain-specific training strategies for chip design tasks. It involves the deployment of bespoke tokenizers, domain-adaptive continued pretraining, and supervised fine-tuning guided by domain-specific instructions. ChatEDA [27] aims to facilitate optimal interaction with the EDA tools by comprehending instructions in natural language for generating and delivering executable programs. Using such techniques, LLM agents can offer automated ASIC flow, from RTL generation to GDSII creation, by invoking necessary SW tools and utilizing required scripts/files. However, while promising, these techniques necessitate thorough analysis to truly enhance automation in EDA tools for the following reasons: (1) Expert-Oriented Training and Fine-Tuning: Constructing such frameworks heavily relies on expert efforts for training or finetuning them to accommodate specific ASIC flows. Given the variety of technologies with their respective documentation, syntaxes, flows, and scripting methods, the pre-trained LLM may not offer a universally applicable model for all environments. (2) Failure in Handling Unforeseen Incidents: Despite extensive finetuning, the LLM-based agent may inaccurately extract information from reports/specs or generate incorrect scripts/configs when confronted with new incidents in the flows. Technology advancements, EDA tools updates, etc., may worsen this issue, as the LLM agent may fail to provide the desired output under evolving conditions. (3) Dependence on Technology: To clarify this, we raise a question! How similar is the EDA flow (i) from one design to another design, (ii) from one technology to another technology, (iii) from one vendor to another vendor? Now, the question becomes how deep is LLM fine-tuned based on these designs, technologies, and vendors? While chatbots may offer basic assistance, the prospect of achieving comprehensive automation seems to remain elusive. 3.2 LLM for RTL Generation and Refinement The main LLM-based RTL-oriented research focuses on the generation and refinement of RTL, primarily transitioning from specification to RTL design (+optimization). Initial efforts emphasize prompt engineering, crucial to successful RTL generation while relying on the existing LLMs [8, 10, 25]. Other methods, e.g., Verigen and VerilogEval, adapt open-source LLMs like CodeGen [4], followed by fine tuning on RTL, to produce more optimized HDL modules [13, 41]. Additionally, studies such as ChipGPT and AutoChip explore use of feedback mechanisms to enhance HDL quality, addressing aspects like compilation errors and design optimization (PPA optimization) [10, 20]. While these methods often rely on static analysis, DeLorenzo et al. Introduce optimization techniques like Monte Carlo tree search (MCTS) to fine-tune LLM tokens even further for more tuned optimization at the backend of LLMs [12]. Evolutionary LLMs for Hardware Security: A Comparative Survey GLSVLSI \u201924, June 12\u201314, 2024, Clearwater, FL, USA Table 1: A Top Comparison of LLM-based HW RTL Generation and EDA Tools. Study Target LLM Engine Input Output Comment (\u2014Shortcomings\u2014) Chang et al. [10] RTL Generation + Refinement GPT-3.5 Design Specification Prompts + Human Feedback for Corrections RTL Module Static PPA analysis is post-LLM with no LLM-based improvement. Human feedback is needed for manual correction per design. Thakur et al. [20] RTL Generation w/ guaranteed Compilation GPT-4, Llama2, GPT-3.5T, Claude 2 Design prompt + Compile/Synthesis Report Compiled and Tested RTL Design Feedback addresses compilation/simulation errors but may alter function priority, leading to unintended functions. No Feedback for PPA Efficiency Matter He et al. [27] Automatic EDA Flow Scripting and Execution Calls Llama2-70B Natural Language Instructions + RTL Design EDA Tool Commands & Reports + Scripts + Synthesized Design + Layout (GDSII) It is either designor technology-Dependent. Cannot be easily design/tool-agnostic. Li et al. [14] Architecture Specifications Generation + Review GPT-4 Architecture specifications + RTL Design Hierarchical Reviewed Architecture Specifications Specifications are limited to the existing technologies. It is mostly processor-based instructions. Not for generic HW. Lu et al. [25] RTL Generation GPT-3.5, GPT-4, VeriGen, StarCoder Natural language instructions RTL Design WIth no feedback, success rate is low for functional correctness. The reference designs are very limited and relatively small. Liu et al. [18] RTL Generation RTLCoder Natural language instructions RTL Design Diversity rate is low in the training dataset. The functional correctness of training dataset is not ensured, leading to lower functional coverage in the generated outputs. Thakur et al. [41] Completing Partial RTL Design MegatronLM-355M, CodeGen, code-davinci-002, and J1-Large-7B Partial RTL Design + Custom problem set with testbenches RTL Design Lack of Organized Dataset. RTLLM shows the performance does not surpass existing commercial models. Completion necessarily does not provide correct functionalities. Cheng et al. [11] RTL Generation + Repair + EDA Script Generation Llama2-7B, Llama2-13B Natural language descriptions + Verilog files + EDA scripts Corrected Verilog code + Verilog code from descriptions + EDA scripts For refinement, it is for syntactic errors (compilation issues). DeLo et al. [12] RTL Generation VeriGen-2B Natural language instruction + RTL modules description Compiled, Tested, and PPA Improved RTL Design Tested on Small Toy Circuits, e.g., adders and MAC units. Stochastic behavior of MCTS. Less Improvement in More Iterations. Li et al. [42] RTL Synthesis (Mapping) Circuit Transformer Gate-Level Design (AIG) Design Model (Truth Table) + Synthesized AIG Low Accuracy for Larger Circuits. Low Performance with no MCTS (Low Scalability). More recent advancements have shifted the focus from fine tuning and prompt engineering in existing LLMs to the development of dedicated circuit transformers, e.g., Li et al. Introduce \"Circuit Transformer\" with 88M parameters and integrated MCTS for optimization, leading to a fully open-source independent LLMs for RTL [42]. Similarly, RTLCoder proposes an automated data generation flow utilizing a model with 7B parameters, producing a sizable labeled dataset for RTL generation [18]. These endeavors have led to the emergence of large circuit models (LCM), enhancing the expression of circuit data\u2019s semantics and structures, thus creating more robust, efficient, and innovative design approaches. Despite its promise, more research is needed as follows: (1) Universality Issues: LLM-based RTL generation faces limitations due to scarce codebase knowledge available for model fine-tuning and training per application [18]. As an example, developing security enclaves or fully-debugged Verilog modules is incredibly challenging as there are not many training datasets available for it. (2) Verification (Functional) Issues: Existing studies highlight the complex nature of (functional) verification tasks, further magnified by the limited availability of trained models for test bench generation and functional simulation [13]. The complexity of circuit designs, which involve both functional and structural attributes, worsens the challenge, as even small changes to the structure (a code line) can have significant effects on functionality, underscoring the complexity of testbench generation and simulation of circuits. (3) Scalability Issues: Scalability is crucial for RTL-based LLMs in addressing complex circuit designs [25]. Efforts to enhance computational efficiency and model architecture sophistication are essential to accommodate larger designs and meet evolving electronic device demands. Further research is necessary to overcome scalability challenges and maximize LLM potential in RTL generation. 4 LLM FOR HW: SECURITY (VERIFICATION) Given the paramount significance of security of HW designs in modern SoCs, and in light of the earlier discussion emphasizing the importance of verification over LLMs, several studies have commenced employing LLM for SoC verification (moving towards bug-free designs, either functional or security-oriented). Similar to LLM-based RTL design, these approaches fall into two main categories: (i) refinement of design prompts, where designers guide LLMs toward generating secure code (i.e. prompt engineering), and (ii) RTL-based tuning, which is about altering the LLM\u2019s framework itself to generate output bug-free code. In advancing HW security, researchers have leveraged LLMs using either pure natural language prompts (i.e. description of the code) or a blend of natural language (i.e. comments designed by human experts) and code. The following describes these two categories in detail and how each category can enhance verification and security for HW designs. 4.1 Prompt Engineering Prompt engineering is the practice of designing inputs for LLMs, to obtain specific, desirable outputs. This technique optimizes the interaction with LLMs to improve its performance on various tasks, leveraging strategies like few-shot [21], and chain-of-thought [9] prompting to guide the model\u2019s responses effectively. A few recent studies in HW explore the applications of prompt engineering for enhancing vulnerability detection and repair, as well as design verification. For example, [3] employs a range of detailed instruction GLSVLSI \u201924, June 12\u201314, 2024, Clearwater, FL, USA Mohammad Akyash and Hadi M Kamali prompts for various LLMs, aiming to evaluate the efficacy of each model in correcting HW vulnerabilities5. Fig. 2 shows an example of how prompting GPT-4 with a bug description and repair instructions alongside the Verilog code enables GPT-4 to address the vulnerability. Here are two important lessons to be learned: (1) The example shows that being super specific is crucial in engineering the prompt to ensure the generated code is devoid of vulnerabilities. Thus, it is vital to have careful crafting by human experts to generate such prompts. This requirement for human input could become a tedious process, posing challenges in scaling and automating the approach for broader applications. (2) The performance and efficacy of LLMs depends on the infrastructure of LLM used. While commercial LLMs like GPT-4 tend to outperform models trained on coding datasets, including Codegen and VeriGen, in terms of repair accuracy and efficacy, this advantage comes at the cost of increased number of parameters. The importance of precision in prompt generation is also shown in [15], relying on ChatGPT, revealing the fact that the success rate can be degraded significantly while the model is more limited6. This study also demonstrates models misguiding the designers while the Verilog code of various CWE scenarios as part of instruction can lead to new form of vulnerabilities from prompts (may not fully represent the capture of potential vulnerabilities in SoC designs). To enhance verification capability, some studies focus on the use of LLMs for verification assertion generation (e.g., SystemVerilog Assertions (SVAs)). For instance, [16] uses GPT-4 in an iterative mechanism to refine prompts for GPT-4, enabling it to generate more accurate and complete SVA properties from RTL code. This approach coupled with AutoSVA2, which automatically generates formal verification testbenches, enables LLM-guided formal verification towards more automation. However, the major obstacle to this automation is the reliance of this approach on iterative refinement by an expert, which requires a deep understanding of both HW verification and prompt engineering. Similarly, AssertLLM [23] uses a customized GPT-4 Turbo to generate SVAs (functional verification assertions) from natural language design specifications (translating design documents). Although results show high success rate, this model is also heavily dependent to the quality and completeness of the design documents. This is while richness of documentation is always a critical issue in HW design, thus AssertLLM might struggle to generate assertions that fully capture the intended design behavior. LLM4DV [28] uses LLMs with prompt templates to automate the generation of test stimuli for verification. LLM4DV integrates LLMs with a systematic method that includes a stimulus generation agent, prompt templates, and four LLM-based improvements, e.g., summarizing prompts, resetting, etc. Evaluated using three custom-designed large-scale DUTs, this framework demonstrated promising results and achieved high coverage rates in simple scenarios. However, this approach focuses more on coverage-related metrics, overlooking security-oriented vulnerabilities. Similar to these formal-based mechanisms, [37] proposes designing an evaluation framework that includes generating natural 5These prompts must provide a thorough description of the bug, strategies for debugging, and illustrative examples that contrast insecure code with its secure counterpart. 6The number of parameters was restricted to a range of millions instead of billions. module key_compar input [7:0] k input [7:0] k output reg mat output [7:0] ); wire [7:0] compar assign compare_re always @(compare_ if (compare_r 8'b00000000) match = 1 else match = 0 end assign diff = com endmodule Write a hardware compares bits of module user_grant_access(data_out, usr_id, data_in, clk, rst_n); output reg [7:0] data_out; input wire [2:0] usr_id; input wire [7:0] data_in; input wire clk, rst_n; reg grant_access; always @(posedge clk or negedge rst_n) begin if (!rst_n) begin data_out <= 0; end else begin grant_access = (usr_id == 3'h4) ? 1'b1 : 1'b0; if (grant_access) begin data_out <= data_in; end end end endmodule Based on the provided instruction, correct the security bug in this Verilog module. BUG: Access Control Check Implemented After Asset is Accessed. Ensure that access is granted before data is accessed. module user_grant_access(data_out, usr_id, data_in, clk, rst_n); output reg [7:0] data_out; input wire [2:0] usr_id; input wire [7:0] data_in; input wire clk, rst_n; reg grant_access; always @(posedge clk or negedge rst_n) begin if (!rst_n) data_out = 0; else begin grant_access = (usr_id == 3'h4) ? 1'b1 : 1'b0; data_out = (grant_access) ? data_in : data_out; end end endmodule Response Response Prompt Prompt Design LLM Bug Detector Bug free design Designer Bug Design Bug free design Figure 2: An Examplary Case in GPT-4 for Security Debugging. language prompts that mimic code comments in assertion files, using these prompts to generate SVAs with LLMs, and then assessing the correctness of these assertions against a benchmark suite of real-world HW designs and corresponding golden reference assertions. The results demonstrate that LLMs, with varying levels of detail in the prompts, can generate valid HW security assertions. More recent use of LLMs for RTL debugging aimed to enhance automation in the domain. For instance, RTLFixer [26] automatically rectifies syntax errors in Verilog code by leveraging RetrievalAugmented Generation (RAG) and the ReAct prompting strategy. RTLFixer employs a retrieval database filled with expert knowledge of syntax errors. ReAct also introduces an iterative approach involving reasoning, action, and observation, mimicking experts\u2019 debugging techniques. This combination builds a more effective system for automating the debugging. However, it still heavily relies on the comprehensiveness and currentness of the external knowledge database, which is collected by human experts. Some LLM-based studies focus on the use of such models at the SoC level. DIVAS [19] uses LLMs to analyze SoC specifications and crafts precise queries that encapsulate potential security vulnerabilities related to the SoC. These queries are submitted to LLMs, e.g., ChatGPT and Google\u2019s BARD, and the LLMs map these queries to relevant CWE vulnerabilities that could compromise the SoC. Once CWEs have been identified, DIVAS utilizes LLMs to construct SVAs for each. These SVAs are designed to act as security verification mechanisms, ensuring the SoC\u2019s design complies with security standards and is safeguarded against identified vulnerabilities. Similarly, [5] explores how GPTs are utilized in SoC level for security vulnerability insertion, detection, assessment, and mitigation. This study, focusing on smaller models, e.g., ChatGPT-3.5, and relying on a sub-set of CWEs, evaluates the modification possibility over RTL using oneand few-shot learning. By comprehensive Evolutionary LLMs for Hardware Security: A Comparative Survey GLSVLSI \u201924, June 12\u201314, 2024, Clearwater, FL, USA Table 2: A Top Comparison of LLM-based HW Security Validation Solutions Study Target LLM Engine # of Bugs Success Rate Source of Benchmarks Expert Knowledge Needed? Reference (for Eval) Comment Nair et al. [15] Prompt generation for Debugging RTL ChatGPT 10 100%\u22171 CWE (Descriptions) For the Whole Process Manual expert intervention per debugging Cannot be automated. Limited evaluation on CWEs Kande et al. [37] Detection (Generate Assertion) OpenAI Codex (code-davinci002) 10 \u223c25% Hack@DAC21, OpenTitan For manually building detailed security constraints Golden Assertion High success rate only when bug and security policy is known. Otherwise, it is below 10%. Only for single endmodule, No Hierarchical and Recursive SVA. Ahmad et al. [3] Repair (pre-detected bugs) OpenAI Codex (code-davinci001, codedavinci-002, code-cushman001), CodeGen 15 \u223c31% CWE (Benchmark), OpenTitan, Hack@DAC21 For training (dataset generation for assisting repairs) For CWEAT static analyze verification Repaired Code (Prompt Reference) Only applicable on pre-observed cases with high similarity (to be detected by CWEAT) Saha et al. [5] Detection (Generate Assertion), security vulnerability insertion GPT 3.5, GPT 4 N/R\u22172 N/R\u22172 CWE, Trust-Hub For prompt engineering and evaluation Manual expert intervention per debugging Limited evaluation on CWEs and smart toy circuits. Fu et al. [22] Detection and/or Repair StableLM, Falcon, LLama2 1 (different models) \u223c35% Open-Source SoCs and Microprocessors For fine-tuning (Open-source code classifications) Repaired Code (Preand Post-correction of Git (CVA6, Opentitan, ...)) Detailed enhancement for training is needed. Per design, a new training might be required. Raw dataset is limited and not design-agnostic). Meng et al. [24] Detection (Generate Assertion) HS-BERT 8 326 Bugs from 1723 sentences RISC-V, OpenRISC, MIPS, OpenSPARC, OpenTitan (documentation) For classifying security rules in documents Manual expert labling for security property validation Limited by the quality of the input HW documentation. Limited to the design/verification team knowledge. Fang et al. [23] Detection (Generate Assertion) GPT4 Turbo N/A 89% Open-source CPUs, SoCs, Xbars, arithmetic. For extracting verification-required information from documents Golden RTL Implementation Limited by the quality of the input HW documentation. Mostly syntactic and basic functional verification. Paria et al. [19] Detection (Generate Assertion) ChatGPT, BART N/A N/A CEP SoC (MITLL) For assumptions (CWE-based security rules) N/R\u22172 Expert review for Spec Generation is needed per design. Vera et al. [16] Detection (Generate Assertion) GPT-4 N/R\u22172 N/R\u22172 RISC-V CVA6 For building rules related to assertions Previously developed formal tools (AutoSVA) The success rate heavily depends on expert\u2019s input for prompt engineering. Zhang et al. [28] Test Stimuli Generation GPT-3.5-turbo N/A small: \u223c98%, large: \u223c65% Self-designed RTL Designs For prompts generation Coverage Monitoring Not for security purposes. Coverage-based testing. Tsai et al. [26] Syntax Errors Repair GPT-3.5, GPT-4 212 98.5% VerilogEval benchmarks, RTLLM benchmarks For retrieval database (debugging reference) VerilogEval, RTLLM Not for security purposes. Only for Syntax errors. \u22171: It is 100% as all the debugging is done manually. Bug is known, the debugging instruction (flow) is known, and GPT is used for generation. N/R\u22172: Not Reported. exploration, the study suggests specific prompt guidelines for effectively using LLMs in SoC security-related tasks. LLMs possess a dual-use nature; While advancing HW security initiatives, LLM can also present new threats simultaneously. [7] delves into the potential of general-purpose models like ChatGPT in the offensive HW security domain This study involves employing prompt engineering techniques to guide LLMs in filtering complex HW design databases, correlating system-level concepts with specific HW modules, identifying security-critical design modules, and modifying them to introduce HW Trojans. This study initiates the possibility of using LLMs for building more stealthy and undetectable HW Trojans, reshaping the characteristics of HW Trojan implementation, detection, and mitigation. 4.2 Fine-Tuning As mentioned previously, some of these LLM-based HW verification solutions rely on fine-tuning, which involves adjusting a pre-trained language model by training it on Verilog/SVA data. However, LLMs require extensive datasets for effective training, posing a significant challenge in specialized domains, particularly in HW security due to the scarcity of targeted data. LLM4SecHW [22] is one example, which leverages a dataset compiled from defects and remediation steps in open-source HW designs, using version control data from GitHub. This dataset was created by selecting significant HW projects such as CVA6, CVA5, OpenTitan, etc., and extracting commits, issues, and pull requests (PRs) related to HW designs. This approach provides a rich source of domain-specific data for training models, specifically tailored to identifying and fixing bugs in HW designs. Although innovative and promising, the quality of this data is dependent on the filtering process accuracy. The effectiveness of LLMs in debugging HW designs is thus directly tied to how precisely the data is curated and processed. The NSPG framework [24] is another example of LLM solution for HW verification that offers a novel methodology for automating the generation of HW security properties utilizing fine-tuned LLMs. This approach is anchored by the development of a specialized language model for HW security, HS-BERT, which is trained on domain-specific data. Through deep evaluation on previously unseen design documents from OpenTitan, NSPG has proven its capability by extracting and validating security properties, showing security vulnerabilities within the OpenTitan design. However, a notable limitation of not only NSPG, but also all HW-oriented finetuned model for now lies in its dependency on the quality and scope of the HW documentation provided as input (which is almost super limited). As in the realm of HW/SoC design, this documentation often remains incomplete, inconsistent, or lack necessary detail, the precision and efficacy of the solution could be adversely affected. 5 TAKEAWAYS AND FUTURE DIRECTIONS In all facets of using LLMs for HW security, it becomes apparent that a significant hurdle, whether in HW design or in testing/verification, whether stemming from prompt engineering or fine-tuning, lies in the procurement and effective utilization of quality data [17]. Also, as depicted in Table 2, creating specialized LLMs (e.g., LCMs) or GLSVLSI \u201924, June 12\u201314, 2024, Clearwater, FL, USA Mohammad Akyash and Hadi M Kamali employing pre-existing ones necessitates a deep expert knowledge to achieve a high success rate for generation, detection, and mitigation. Considering these two obstacles, despite being promising, the endeavor requires rigorous effort across multiple facets. Creating a standard database reference is crucial for both training and evaluating the methods proposed in this domain. It facilitates a fair comparison among different techniques, ensuring that the pros/cons of each approach can be accurately assessed. Moreover, high-quality RTL data is indispensable for the optimal training of LLMs. It enables these models to learn the intricacies of RTL designs effectively, thereby enhancing their efficiency in security tasks. Given the distinct characteristics of RTL codes as opposed to natural language texts, it becomes crucial to consider domain-specific models for handling HW codes. Incorporating concepts such as graphs and ASTs into LLMs can bridge the gap between the structural nuances of RTL codes and the inherently sequential processing of conventional language models. It is crucial to devise a novel metric specifically for evaluating the security coverage of RTL code examined by LLMs. This metric would serve as a critical feedback mechanism for LLMs, enabling them to assess and refine their output continually. By quantitatively measuring the security of RTL designs, the metric would allow LLMs to optimize their learning process towards generating code that is not only functionally correct but also adheres to high security standards. Building on the foundational strategies mentioned above, further refinement can be achieved through the optimization of continuous prompts7. Such strategies also open the doors for mechanisms to enhance prompt automation for LLMs, e.g., auto-prompting8. These optimizations are open research directions potentially presenting a more feasible and efficient alternative to LLM fine-tuning. 6 CONCLUSION This paper examined the use of LLMs in detecting/addressing security flaws in HW designs. We specifically analyzed their incorporation into RTL, revealing their independent problem-solving abilities in this domain. Our examination of existing approaches highlights both their benefits and drawbacks, notably scalability and accuracy issues. Also, we identified potential areas for future research. Our suggestion involves developing dedicated LLM architectures and datasets focused on HW security, indicating a path toward targeted improvements that could mitigate HW vulnerabilities."
16
+ }
intro_8K/test_introduction_short_2404.16668v1.json ADDED
@@ -0,0 +1,17 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16668v1",
3
+ "title": "The First Estimation of the Ambipolar Diffusivity Coefficient from Multi-Scale Observations of the Class 0/I Protostar, HOPS-370",
4
+ "abstract": "Protostars are born in magnetized environments. As a consequence, the\nformation of protostellar disks can be suppressed by the magnetic field\nefficiently removing angular momentum of the infalling material. Non-ideal MHD\neffects are proposed to as one way to allow protostellar disks to form. Thus,\nit is important to understand their contributions in observations of\nprotostellar systems. We derive an analytical equation to estimate the\nambipolar diffusivity coefficient at the edge of the protostellar disk in the\nClass 0/I protostar, HOPS-370, for the first time, under the assumption that\nthe disk radius is set by ambipolar diffusion. Using previous results of the\nprotostellar mass, disk mass, disk radius, density and temperature profiles and\nmagnetic field strength, we estimate the ambipolar diffusivity coefficient to\nbe $1.7^{+1.5}_{-1.4}\\times10^{19}\\,\\mathrm{cm^{2}\\,s^{-1}}$. We quantify the\ncontribution of ambipolar diffusion by estimating its dimensionless\nEls\\\"{a}sser number to be $\\sim1.7^{+1.0}_{-1.0}$, indicating its dynamical\nimportance in this region. We compare to chemical calculations of the ambipolar\ndiffusivity coefficient using the Non-Ideal magnetohydrodynamics Coefficients\nand Ionisation Library (NICIL), which is consistent with our results. In\naddition, we compare our derived ambipolar diffusivity coefficient to the\ndiffusivity coefficients for Ohmic dissipation and the Hall effect, and find\nambipolar diffusion is dominant in our density regime. These results\ndemonstrate a new methodology to understand non-ideal MHD effects in\nobservations of protostellar disks. More detailed modeling of the magnetic\nfield, envelope and microphysics, along with a larger sample of protostellar\nsystems is needed to further understand the contributions of non-ideal MHD.",
5
+ "authors": "Travis J. Thieme, Shih-Ping Lai, Yueh-Ning Lee, Sheng-Jun Lin, Hsi-Wei Yen",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "astro-ph.SR",
9
+ "cats": [
10
+ "astro-ph.SR",
11
+ "astro-ph.GA"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "Diffusion AND Model",
15
+ "gt": "1.",
16
+ "main_content": "Corresponding author: Travis J. Thieme [email protected] Crutcher 2008; Crutcher 2012). Early ideal magnetohydrodynamic (MHD) simulations show that rotationally supported disks (RSDs) could not form due to magnetic braking efficiently transferring angular momentum away from the collapsing central region in magnetized (\u00b5 \u226410) dense cores (e.g., Allen et al. 2003; Matsumoto & Tomisaka 2004; Banerjee & Pudritz 2006; Price & Bate 2007; Hennebelle & Fromang 2008; Mellon & Li 2008; Joos et al. 2012). However, observational studies revealed the presence of rotationally-supported Keplerian disks around several young, highly-embedded arXiv:2404.16668v1 [astro-ph.SR] 25 Apr 2024 2 Thieme et al. protostars (e.g., Tobin et al. 2012; Murillo et al. 2013; Lee et al. 2014; Yen et al. 2017; Ohashi et al. 2023). This contradiction between observations and simulations was coined the so-called \u201cMagnetic Braking Catastrophe\u201d and raised the fundamental question of how could these protostellar disks form in such magnetized environments? Non-ideal MHD effects, namely ambipolar diffusion (AD), Ohmic dissipation (OD) and the Hall effect (HE), have been suggested as one possible route to overcome magnetic braking and form a rotationally-supported protostellar disk (e.g., Inutsuka et al. 2010; Li et al. 2011; Braiding & Wardle 2012; Tomida et al. 2015; Wurster et al. 2016, 2019; Wurster & Lewis 2020; Wurster et al. 2021). These non-ideal MHD terms describe the various regimes of coupling of the ions, electrons and charged grains to the magnetic field, as well as their interactions with the neutral particles (e.g., Wardle & Ng 1999; Nakano et al. 2002, see the recent reviews by Wurster & Li 2018; Zhao et al. 2020b; Tsukamoto et al. 2023). In terms of relative importance, Ohmic dissipation is efficient at high densities, such as the midplane of a protostellar disk, while the Hall effect and ambipolar diffusion are more efficient at intermediate and low densities, respectively, such as the upper disk layers and in the protostellar envelope (e.g., Marchand et al. 2016; Wurster et al. 2018a; Wurster 2021). However, the Hall effect seems to be transient does not last for long after the formation of a protostellar disk (Zhao et al. 2020b; Lee et al. 2021b). While simulations clearly show the importance of non-ideal MHD effects in the formation and evolution of protostellar disks, they have yet to be quantified observationally. Yen et al. (2018) attempted to observe the velocity drift between ions and neutral particles (ambipolar diffusion) in the infalling envelope of a young Class 0 protostar, B335. However, no velocity drift was detected and thus, it is important to look into other possibilities on how non-ideal MHD effects can be quantified observationally. In this paper, we aim to understand the role of ambipolar diffusion in protostellar disk formation by using a methodology first developed by Hennebelle et al. (2016), and later revisited by Lee et al. (2021b, 2024). This methodology leads to an analytical equation describing the expected protostellar properties, in particular the protostellar disk radius, due to ambipolar diffusion (Hennebelle et al. 2016). The disk radius estimated with this analytical equation (RAD) was found to be in good agreement with the disk radius estimated from MHD simulations (Rsim), with Rsim/RAD \u223c1 (Hennebelle et al. 2016, 2020; Commer\u00b8 con et al. 2022). Thus, by backwards engineering the equation, we can estimate the ambipolar diffusivity coefficient, \u03b7AD, from observable quantities under certain assumptions. Using multiscale observations of the young protostar, HOPS-370, we present a methodology to estimate the ambipolar diffusivity coefficient for the first time, in order to understand the role of ambipolar diffusion in the formation and evolution of protostellar disks. HOPS-370 is a Class 0/I protostar in the Orion A molecular cloud (D= 392.8 pc; Tobin et al. 2020a). Observations from the Herschel Orion Protostar Survey (HOPS) constrain the bolometric luminosity (Lbol) and temperature (Tbol) to be 314 L\u2299and 71.5 K, respectively (Furlan et al. 2016). The protostellar mass and disk properties were extensively studied by Tobin et al. (2020b) as part of the VLA/ALMA Nascent Disk and Multiplicity (VANDAM) Survey of Orion Protostars. By using MCMC radiative transfer modeling to fit the dust continuum and several molecular lines, they found an average disk radius of 94 au, an average protostellar mass of 2.5 M\u2299, and a disk mass of 0.035 M\u2299. More recently, Kao & Yen et al. (in prep.) have derived the core-scale plane-of-sky magnetic field strength to be Bpos = 0.51 mG. The combination of these derived properties make HOPS-370 an ideal candidate for an initial study on the role ambipolar diffusion plays in this source using this new methodology. This paper is organized as follows. In Section 2, we describe our methodology and assumptions to estimate \u03b7AD at the edge of the HOPS-370 protostellar disk. Our resulting value of \u03b7AD and a comparison with a more theoretical non-ideal MHD estimate is given in Section 3. Several implications and uncertainties are discussed in Section 4. Section 5 summarizes our main results and discussions. 2. METHODS 2.1. The Relation Between Protostellar Disk Properties and the Ambipolar Diffusivity Coefficient Here, we present an analytical equation that relates properties of the protostellar disk at the disk-envelope interface to the ambipolar diffusion coefficient. Hennebelle et al. (2016) were the first to derive such an equation, however they make a number of simplifications to remove terms related to the density and temperature, which differs from the modeling of HOPS-370. A derivation is provided in Appendix A, while a summary and overview is presented here. The main assumptions in this derivation are that 1. ambipolar diffusion is the main diffusion process, First Estimate of the Ambipolar Diffusivity Coefficient in HOPS-370 3 2. the angular momentum is counteracted by magnetic braking resulting in the advection and braking timescales to be of the same order, 3. the toroidal field generated by differential rotation is offset by the ambipolar diffusion in the vertical direction resulting in the Faraday induction and vertical diffusion timescales to be of the same order, 4. infalling and rotational velocities of the gas near the disk edge both scale with the Keplerian velocity, and 5. the gas is in vertical hydrostatic equilibrium. These assumptions are likely valid in HOPS-370, as discussed in Section 4.1. Under these assumptions, we derive a relationship between the ambipolar diffusivity coefficient and observable quantities of \u03b7AD \u2243 \u03b4r\u03b42 \u03d5G1/2C2 sR1/2 d (M\u22c6+ Md)1/2\u03c1 B2 \u03d5 , (1) where \u03b4r and \u03b4\u03d5 are scaling factors for the infall and rotational velocities, G is the gravitational constant, Cs is the isothermal sound speed, Rd is the disk radius, M\u22c6+ Md is the mass of the star+disk system, \u03c1 is the density at the disk-envelope interface and B\u03d5 is the toroidal (azimuthal) component of the magnetic field strength at the edge of the disk. As shown in Appendix B, the global magnetic field inclination with respect to the disk rotation axis has little effect on the predicted ambipolar diffusivity coefficient. Thus, this prescription should be considered generally valid regardless of the global magnetic field orientation. To simplify the use of our equation, we select several arbitrary normalization constants to give \u03b7AD \u22432.5 \u00d7 1017 cm2 s\u22121 \u0000\u03b4r\u03b42 \u03d5 \u0001 \u00d7 \u0012 Cs 200 m s\u22121 \u00132 \u0012 Rd 100 au \u00131/2 \u0012M\u22c6+ Md 0.1 M\u2299 \u00131/2 \u00d7 \u0012 \u03c1d 1.8 \u00d7 10\u221215 g cm\u22123 \u0013 \u0012 B\u03d5 20 mG \u0013\u22122 . (2) In addition, a common normalization used in numerical simulations is to multiply by 4\u03c0/c2, which was used by Hennebelle et al. (2016) in their derivation to give \u03b7AD in units of seconds. This normalization produces a relation of \u03b7AD \u22430.0035 s \u0000\u03b4r\u03b42 \u03d5 \u0001 \u00d7 \u0012 Cs 200 m s\u22121 \u00132 \u0012 Rd 100 au \u00131/2 \u0012M\u22c6+ Md 0.1 M\u2299 \u00131/2 \u00d7 \u0012 \u03c1d 1.8 \u00d7 10\u221215 g cm\u22123 \u0013 \u0012 B\u03d5 20 mG \u0013\u22122 , (3) which will also be used in later comparisons. As shown by Hennebelle et al. (2016), Hennebelle et al. (2020) and Commer\u00b8 con et al. (2022), the ratio of the disk radius measured in their numerical simulations (Rsim) to the theoretical disk radius predicted by their ambipolar diffusivity equation (RAD) was Rsim/RAD \u223c1 (within a factor of \u22432\u22123) and did not vary considerably over the evolution of the protostellar disks in their simulations. Since our main assumptions are essentially the same, this should still hold true even for our new relation. This will be explored in more detail in a future paper. In the next sections, we describe each of the variables used for our estimate of the ambipolar diffusivity coefficient at the edge of the HOPS-370 protostellar disk for the first time. This estimation is only possible due to the extensive modeling of HOPS-370 and its surrounding environment from several different observational studies. 2.2. Previously Estimated Protostar+Disk Properties Tobin et al. (2020b) derived several important properties of the protostar and disk in HOPS-370. In this section, we describe their extensive molecular line modeling in the context of the relevant values needed for our ambipolar diffusivity coefficient estimation. 2.2.1. Protostellar Mass, Disk Mass and Disk Radius The protostellar masses and disk radii are derived from 12 independent molecular line fits (with a fixed temperature power-law index) using MCMC radiative transfer fitting. They found the best fitting protostellar mass to be between ranged between 1.8 M\u2299and 3.6 M\u2299, with an average protostellar mass of 2.5\u00b10.2 M\u2299. This protostellar mass is the dynamical mass obtained from the Keplerian profile in the line fits. For the disk radius, the best fits ranged between 70 au and 121 au, with an average radius of 94.4\u00b112.6 au. The uncertainties of these average values were determined by using the median-absolute deviation (MAD) of their 12 molecular line fits and scaling them to correspond to one standard deviation of the normal distribution. We adopt Rd = 94.4\u00b112.6 au and M\u22c6= 2.5\u00b10.2 M\u2299as the protostellar disk radius and protostellar mass, respectively. It is important to note, the Rd used for comparison to RAD 4 Thieme et al. in the numerical simulations by Hennebelle et al. (2016) is defined by several conditions using an azimuthallyaveraged simulation snapshot: (1) the disk is Keplerian meaning the azimuthal velocity is much greater than the radial velocity, (2) the disk is near hydrostatic equilibrium meaning the azimuthal velocity is much greater than the vertical velocity, (3) the disk is rotationallysupported meaning the rotational energy is larger than the support from thermal pressure by some factor, (4) the disk should be near the equitorial plane, and (5) a density threshold of n > 109 cm\u22123 (Joos et al. 2012). We have assumed the best-fit gas disk radius is equal to this radius. This is further explored in Section 3.3.3. The radius of the dust disk also modeled by Tobin et al. (2020b), however the dust is potentially more prone to radial drift and/or optical depth effects (e.g., Facchini et al. 2017), thus potentially underestimating the actual extent of centrifugal support. Additionally, several methods were used by Tobin et al. (2020b) to derive the disk mass in HOPS-370. First, they used the continuum emission as 1.3 mm, 0.87 mm and 9 mm to derive a value for the disk mass under the assumptions of isothermal and optically thin dust emission. The disk mass at each wavelength was found to be 0.048 M\u2299at 0.87 mm, 0.084 M\u2299at 1.3 mm, and 0.098 M\u2299at 9 mm. They also derive a disk mass from their MCMC radiative transfer fitting of the 0.87 mm dust continuum emission. This method resulted in a disk mass of 0.035+0.005 \u22120.003 M\u2299, which is slightly lower than the earlier estimations using the optically thin assumption. The lower value is likely due to the maximum dust grain size fit of the 0.87 mm emission being 432 \u00b5m, meaning that the dust in the model will radiate more efficiently than under the assumptions made for the optically thin calculation. Thus, to be consistent with the dust grain properties later used in our analysis (Section 3.3), we take the disk mass to be Md = 0.035+0.005 \u22120.003 M\u2299for our estimation. It is important to mention that the uncertainty of the measured disk mass reported by Tobin et al. (2020b) are the 1\u03c3 statistical uncertainties from their MCMC radiative transfer fitting. Thus, these uncertainties likely do not reflect the entire uncertainty of the measured disk mass. Tobin et al. (2020b) also fit for the disk mass in their 12 molecular line fits. However these derived disk masses are highly sensitive to the chosen molecular abundances in the fit, and may not be as reliable. This further motivates our choice to use the best-fit disk mass estimated from the dust emission fitting. 2.2.2. The Temperature Distribution of the Disk The gas temperature distribution of the HOPS-370 protostellar disk is modeled using a parameterized equation given by Td(r) = T0 \u0010 r 1 au \u0011\u2212q , (4) where T0 is the gas temperature at 1 au and q is a power-law index, which is fixed to be 0.35 in the 12 molecular line fits by Tobin et al. (2020b). The bestfit average value of T0 was found to be 980.0 \u00b1 0.6 K, where the errors are also found using the median absolute deviation scaled to one standard deviation of the normal distribution. Using the protostellar disk radius of Rd = 94.4 \u00b1 12.6 au, we find the temperature at the edge of the disk to be Td = 199.0 \u00b1 9.3 K.1 With this gas temperature, the isothermal sound speed at the disk edge can be estimated by Cs = \u0012 kBTd \u00b5mmH \u00130.5 , (5) where kB is the Boltzmann constant, \u00b5m = 2.37 is the mean molecular weight for a molecular gas with solar metallicity, and mH is the mass of a hydrogen atom. The isothermal sound speed is estimated to be Cs = 833.0 \u00b1 19.5 m s\u22121, which is higher than the typically assumed value of 200 m s\u22121 (e.g., Lee et al. 2021b, 2024). 2.2.3. The Density at the Disk-Envelope Interface The density at the disk-envelope interface can be estimated via two different approaches. The first is by using the best fit values of the disk density profile, while the second is by using the best fit values of the envelope density profile, both modeled by Tobin et al. (2020b). We initially choose the former approach, since the focus of the study by Tobin et al. (2020b) was on the disk, and the observations taken likely resolve out most of the envelope emission. However, as a comparison, we do explore the latter in Appendix C. The disk density, which is related to the disk scale height and disk surface density, was modeled using the molecular line emission by Tobin et al. (2020b). The disk scale height (hd) is given by hd(r) = \u0012 kBr3Td(r) GM\u22c6\u00b5mmH \u00130.5 , (6) where M\u22c6is the protostellar mass. The disk surface density (\u03a3disk) is given by \u03a3d(r) = \u03a30 \u0012 r rc \u0013\u2212\u03b3 exp \" \u2212 \u0012 r rc \u0013(2\u2212\u03b3)# , (7) 1 Uncertainties were propagated using the publicly hosted python package: asymmetric uncertainty (Gobat 2022; https://github. com/cgobat/asymmetric uncertainty). This package uses an empirical/analytical function to model the error distributions. First Estimate of the Ambipolar Diffusivity Coefficient in HOPS-370 5 where rc is the critical radius of the disk (rc = Rd was assumed in the molecular line fitting) and \u03b3 is the surface density power-law index. The normalization constant (\u03a30) is described by \u03a30 = (2 \u2212\u03b3)Md 2\u03c0r2 c , (8) where Md is the disk mass. The radiative transfer modeling of the 12 molecular line fits give an average value of the surface density power-law index to be \u03b3 = 0.9 \u00b1 0.2. Finally, the disk volume density (\u03c1d) is expressed as \u03c1d(r) = \u03a3d(r) \u221a 2\u03c0 hd(r) exp \u22121 2 \u0014 z hd(r) \u00152! , (9) where z is the height above the disk midplane and the other parameters are as described before. For simplicity, we approximate the density at the midplane (z = 0), which allows the exponential to go to 1 as the inner terms go to 0. We are left with a simplified equation of \u03c1d(r) = \u03a3d(r) \u221a 2\u03c0 hd(r), (10) where we can then plug in our known values to calculate the approximate density at the disk edge. By plugging in r = Rd = 94.4 \u00b1 12.6 au and the other parameters previously mentioned, we find a disk scale height of hd = 16.2 \u00b1 3.3 au, a disk surface density of \u03a3d = 2.2 \u00b1 0.9 g cm\u22122, and a disk volume density of \u03c1d = 3.7 \u00b1 1.7 \u00d7 10\u221215 g cm\u22123 at the edge of the disk. 2.3. Estimating the Magnetic Field Strength at the Edge of the Disk Yen et al. (2021a) originally estimated the corescale plane-of-sky magnetic field strength using 850 \u00b5m dust polarization legacy observations from the Submillimetre Common-User Bolometer Array (SCUBA) Polarimiter (SCUPOL) on the James Clerk Maxwell Telescope (JCMT). A magnetic field strength of Bpos = 0.54 \u00b1 0.25 mG is derived for HOPS-370 using the Davis-Chandrasekhar-Fermi (DCF) method (Davis 1951; Chandrasekhar & Fermi 1953). Updated observations have since been taken using the new SCUBA-2 detector and POL-2 polarimiter (Kao & Yen et al., in prep.), providing a new and more precise magnetic field strength estimate of Bpos = 0.50 \u00b1 0.13 mG for HOPS370. In addition to the magnetic field strength, the average core mass and core density were estimated to be Mc = 37.0 \u00b1 2.6 M\u2299and \u03c1c = 1.9 \u00b1 0.2 \u00d7 10\u221218 g cm\u22123, respectively, within a core radius of \u223c0.07 pc. This is the same radius in which the magnetic field strength was also estimated. In order to scale this magnetic field strength from the core-scale to the edge of the disk and obtain a value for B\u03d5, several assumptions need to be made. 2.3.1. The Magnetic Field Density Relation The general form of the most commonly cited magnetic field-density (B-n) relation is written as B = B0 \u0012 n n0 \u0013\u03ba , (11) where B0 is the initial magnetic field strength to be scaled, n and n0 are scaled and initial number densities, respectively, and \u03ba is the power-law index (Crutcher et al. 2010; Crutcher & Kemball 2019; Pattle et al. 2023). For clouds undergoing spherical collapse with flux-freezing, \u03ba is \u223c2/3 (Mestel 1966), while collapse models with ambipolar diffusion predict \u03ba evolves from 0 at the initial collapse to 0.5 in the later stages (Mouschovias & Ciolek 1999). Since our primary assumption is that ambipolar diffusion is the main diffusion process, and HOPS-370 is an evolved Class 0 protostar, we take \u03ba = 0.5. Thus, the total magnetic field strength can be scaled by Btot,d = Btot,c \u0012\u03c1d \u03c1c \u00130.5 , (12) where \u03c1c and \u03c1c are the volume densities at the core and disk scales, while Btot,c and Btot,d are the total magnetic field strengths at the core and disk scales, respectively (hereafter, referred to as the C04 method). 2.3.2. Magnetic Field Strength Scaling, Correction and Estimation In order to estimate the magnetic field strength at the edge of the disk (B\u03d5) as fairly as possible, we first convert the plane-of-sky magnetic field strength (Bpos,c) to the total magnetic field strength (Btot,c) using two different statistical relations for the sake of completeness. We first use the relation derived from a sample of observations (Crutcher et al. 2004), given as Btot = \u0012 4 \u03c0 \u0013 Bpos, (13) which gives a statistical average of the total magnetic field strength. Using this relation, we derive a total magnetic field strength of Btot,c = 0.64 \u00b1 0.16 mG. Additionally, Liu et al. (2021) derive the relation Btot = r 3 2Bpos, (14) using 3D MHD simulations and radiative transfer calculations to produce synthetic polarization images to find 6 Thieme et al. Table 1. Overview of parameters used for the estimation of \u03b7AD Parameter Description Parameter Value Protostar + Disk Properties Protostellar mass M\u22c6(M\u2299) 2.5+0.2 \u22120.2 Disk mass Md (M\u2299) 0.035+0.005 \u22120.003 Disk radius Rd (au) 94.4+12.6 \u221212.6 Critical radius rc (au) = Rd Temperature at 1 au T0 (K) 980.0+0.6 \u22120.6 Temperature power-law index q 0.35 Surface density power-law index \u03b3 0.9+0.2 \u22120.2 Disk temperature at Rd Td (K) 199.0+9.3 \u22129.3 Disk sound speed at Rd Cs (m s\u22121) 833.0+19.5 \u221219.5 Disk scale height at Rd hd (au) 16.2+3.3 \u22123.3 Disk surface density at Rd \u03a3d (g cm\u22122) 2.2+0.9 \u22120.9 Disk volume density at Rd \u03c1d (10\u221215g cm\u22123) 3.7+1.7 \u22121.7 Protostellar Core Properties Core mass Mc (M\u2299) 37.0+2.6 \u22122.6 Core volume density \u03c1c (10\u221218g cm\u22123) 1.9+0.2 \u22120.2 Plane-of-sky B-field strength Bpos,c (mG) 0.5+0.1 \u22120.1 Core to Disk Scale B-Field Properties B-n relation power-law index \u03ba 0.5 Total Core B-field strength Btot,c (mG) 0.6+0.2 \u22120.2 Total Disk B-field strength Btot,d (mG) 28.3+9.9 \u22129.8 References\u2014Tobin et al. (2020b), Kao & Yen et al. (in prep.), this work. a statistical average of the total magnetic field strength. Using this relation, we derive a total magnetic field strength of Btot,c = 0.61 \u00b1 0.16 mG. These two values are within error, and thus, indistinguishable for our purpose. We therefore simply adopt the value using the statistical relation from Crutcher et al. (2004) for the remainder of this paper. We now scale our total core-scale magnetic field strength of Btot,c = 0.64 \u00b1 0.16 mG down to the disk scales using Equation 12. We find Btot,d = 28.3\u00b19.9 mG using the C04 method. Since B\u03d5 should be the dominant magnetic field component at the edge of the protostellar disk, we assume B\u03d5 \u223cBtot,d in our estimations. This is discussed later in Section 4.1.2. 2.4. Scaling Factors for the Infalling and Rotational Velocities Here, we discuss the scaling factors of \u03b4r and \u03b4\u03d5, which describe the deviations of the infalling and rotational velocities, respectively, from the Keplerian velocity (Equation A.9). As briefly mentioned in Appendix A, recent MHD simulations of protostellar disk formation including ambipolar diffusion find that u\u03d5 is very close to Keplerian at the disk edge (\u03b4\u03d5 \u22730.9), while ur is significantly less (\u03b4r \u22720.5) than the Keplerian velocity, possibly by even a factor of a few, less than one order of magnitude (Lee et al. 2021a). For the deviation of the rotational velocity from Keplerian, we will initially assume \u03b4\u03d5 = 1 as a conservative estimate, and since the modeling of the HOPS-370 protostellar disk already assumes the rotational velocity structure of the disk is Keplerian. For the deviation of the infall velocity from Keplerian, it is less straight-forward but we can still make some estimates. Recent observations of the young Class I protostar, L1489 IRS, revealed a so-called \u201cslow\u201d infall, where the velocity structure of the infalling envelope was modeled to be 2.5 times slower than freefall (Sai et al. 2022). If we use the quantities for HOPS-370 and this assumption of vinf = 0.4vff, we find \u03b4r \u223c0.6. For a conservative measure, we initially assume \u03b4r = 0.8. Modeling \u03b4r in HOPS-370 would provide further constraints on our ambipolar diffusivity coefficient, however, this is currently beyond the scope of this paper and will be left for a future study. How these two values effect the ambipolar diffusivity coefficient estimation is further explored in Section 3.3.3. 3. RESULTS & ANALYSIS 3.1. The First Estimation of the Ambipolar Diffusivity Coefficient from Observations In the previous sections, we obtained all of the necessary values needed to estimate \u03b7AD for the first time. An overview of all the parameters obtained in the previous sections is shown in Table 1. We make an estimation using the B\u03d5 derived from the C04 method. We plug in the values of \u03b4r = 0.8, \u03b4\u03d5 = 1.0, Cs = 833.0+19.5 \u221219.5 m s\u22121, Rd = 94.4+12.6 \u221212.6 au, M\u22c6= 2.5+0.2 \u22120.2 M\u2299, Md = 0.035+0.005 \u22120.003 M\u2299 \u03c1d = 3.7+1.7 \u22121.7 \u00d7 10\u221215 g cm\u22123 B\u03d5 = 28.3+9.9 \u22129.8 mG, into the normalized ambipolar diffusivity coefficient equation (Equations 2 and 3) to obtain \u03b7AD = 1.7+1.5 \u22121.4 \u00d7 1019 cm2 s\u22121 = 2.4+2.1 \u22122.0 \u00d7 10\u22121 s. As this is the first ever estimation of the ambipolar diffusivity coefficient from observations, there are no other First Estimate of the Ambipolar Diffusivity Coefficient in HOPS-370 7 observational values to compare with. In the context of comparing to the value of this coefficient using a chemical network, this will be explored in Section 3.3. 3.2. The Dimensionless Els\u00a8 asser Number for Ambipolar Diffusion The strength of non-ideal MHD effects are quantified through the dimensionless Els\u00a8 asser numbers, which for ambipolar diffusion is given by AM = v2 A \u03b7AD\u2126K , (15) where vA is the Alfv\u00b4 en speed and \u2126K is the Keplerian rotation frequency (e.g., Wurster 2021; Cui & Bai 2021). The Alfv\u00b4 en speed is defined as vA = s B2 4\u03c0\u03c1, (16) which describes the speed of an MHD wave permeating through a dense medium. Likewise, the Keplerian rotation frequency is defined as \u2126K = r GM\u22c6 r3 . (17) Typically, AM \u226b1 represents strong coupling between the magnetic field and the neutral gas, while AM \u22721 indicates strong magnetic diffusion (e.g., Wurster 2021; Commer\u00b8 con et al. 2022). We estimate the dimensionless Els\u00a8 asser number for ambipolar diffusion to be AM = 1.7 \u00b1 1.0. This shows we are likely in the regime of stronger magnetic diffusion and indicates the importance of ambipolar diffusion in the evolution of the HOPS-370 protostellar disk. 3.3. Comparing with the Non-Ideal MHD Coefficient and Ionisation Library (NICIL) The Non-Ideal MHD Coefficient and Ionisation Library (NICIL)2 is a code to calculate the diffusion coefficients for ambipolar diffusion (\u03b7AD), Ohmic dissipation (\u03b7OD) and Hall effect (\u03b7HE) for MHD simulations using a chemical network (Wurster 2016, 2021). We aim to investigate whether our ambipolar diffusivity coefficient is consistent with one calculated by NICIL. NICIL allows for estimating these coefficients for different input parameters, such as density, temperature and magnetic field strength. Additionally, parameters for the dust grain size distribution and cosmic-ray ionization rate can be modified. First, we describe the initial parameters used for several NICIL runs (Section 3.3.1). We attempt to emulate the conditions at the edge of the disk 2 https://bitbucket.org/jameswurster/nicil/src/master/ as closely as possible by using the derived disk parameters and several different assumptions for the magnetic field strength and the cosmic-ray ionization rate (Section 3.3.2). We then explore several of the assumptions made during our estimation of the ambipolar diffusivity coefficient to see how they affect the value and its consistency with NICIL (Section 3.3.3). There are two files that we modify for these different runs in NICIL: nicil.F90 and nicil ex eta.F90. We assume a barotropic equation of state for all runs, since this is the same assumption used in the numerical simulations by Hennebelle et al. (2016). 3.3.1. Parameter Setup We first describe several modifications made to the nicil ex eta.F90 test script. This file contains the input parameters for the temperature, density and magnetic field strength. We compute the barotropic equation of state over the default temperature range of 10 K to 2 \u00d7 105 K and density range of 10\u221222 g cm\u22123 to 100.5 g cm\u22123. For the magnetic field, we employ a constant (use input B = .true.) magnetic field strength using a value of 28.3 mG (C04 method), which we estimated at the edge of the disk. We also run using the upper and lower errors on the magnetic field as the constant values to estimate an approximate error range on the NICIL ambipolar diffusivity coefficient. Additionally, NICIL has the option to vary the magnetic field using the function B = 1.34\u00d710\u22127\u221ann G (use input B = .false.). However, this magnetic field strength comes from different underlying assumptions than what we use and severely underestimates the magnetic field strengths compared to what we find, so we do not compare with this case. The dust grain and cosmic-ray ionization properties are then adjusted in the nicil.F90 main script. We use the default gas-to-dust ratio of 100 and set the number of grain size bins to 32. Tobin et al. (2020b) derive powerlaw slope of the grain distribution to be p = \u22122.63 and the maximum grain size to be amax = 432 \u00b5m, while assuming the same minimum dust grain size of amin = 0.005 \u00b5m used in their fitting. Thus, we set these parameters in NICIL accordingly. The cosmic-ray ionization rate in HOPS-370 is unknown, however the typical ISM value is usually quoted to be \u03b6CR = 10\u221217 s\u22121 (e.g., Caselli et al. 1998; McElroy et al. 2013). We initially set a constant cosmic-ray ionization rate (zeta of rho = .false.) of \u03b6CR = \u03b60 = 10\u221217 s\u22121 in the script. However, we also vary the cosmic-ray ionization rates between 10\u221219 s\u22121 < \u03b60 < 10\u221215 s\u22121 as another approximate \u201cerror\u201d range. This should be a typical range in dense molecular clouds inferred from chemical analyses (Caselli et al. 1998). In addition, we also run us8 Thieme et al. Figure 1. Comparison between our derived value of \u03b7AD and the NICIL calculated values of \u03b7AD assuming a barotropic equation of state and a constant magnetic field strength (C04 method). Our derived \u03b7AD is marked by the black circle and also printed in the top right of each plot, along with the values used for the estimation just below. (Left Column) Uses a constant (unattenuated) cosmic-ray ionization rate for the NICIL calculation. (Right Column) Uses a varied (attenuated) cosmic-ray ionization rate for the NICIL calculation. The dashed black lines indicate the NICIL calculated \u03b7AD based on the magnetic field strength uncertainties. The shaded blue areas represent \u03b7AD calculated by NICIL for different ranges of \u03b60 between 10\u221216 \u221210\u221218 s\u22121 (darker shade) and 10\u221215 \u221210\u221219 s\u22121 (lighter shade). The mass volume density and H2 number density are related by \u03c1 = mH\u00b5H2nH2, where mH is the mass of the hydrogen atom and \u00b5H2 is the mean molecular weight per molecular hydrogen (\u00b5H2 = 2.8). ing a varied cosmic-ray ionization rate (zeta of rho = .true.), which mimics attenuated cosmic-rays via the relation of \u03b6CR = \u03b60e\u2212\u03a3/\u03a3CR + \u03b6min. In this case, we set \u03b60 = 10\u221217 s\u22121 and \u03b6min = 10\u221222 s\u22121 (default value) in the script. The the gas surface (column) density (\u03a3) is directly calculated from several parameters when running the code. The cosmic-ray attenuation depth (\u03a3CR) is a constant and kept at the default value of 96 g cm\u22122. We also vary the cosmic-ray ionization rates between 10\u221219 s\u22121 < \u03b60 < 10\u221215 s\u22121 in this case as well. We set the mean molecular weight to be \u00b5 = 2.37 for consistency. All other parameters in both scripts are kept as the default values. 3.3.2. Initial Comparison We run NICIL using the aforementioned parameters and compare to derived values of \u03b7AD in Figure 1. The columns correspond to the two different cosmic-ray ionization rate assumptions in NICIL. The derived value of \u03b7AD is shown in the top right corner of the left panel plot along with the magnetic field strength from the C04 method. We describe each case in more detail below. Our \u03b7AD result surprisingly consistent to the \u03b7AD calculated by NICIL. Constant B & \u03b6CR: The left panel of Figure 1 use a constant magnetic field strength (use input B = .true.) of B = 28.3+9.9 \u22129.8 mG from the C04 method (left panel). In addition, we use a constant (unattenuated) cosmic-ray ionization rate (zeta of rho = .false.). As mentioned in the previous section, we run for the derived magnetic field strength of 28.3 mG, and then perform subsequent runs using the upper/lower errors on the magnetic field strength (dashed black lines). We assumed the cosmic-ray ionization rate to be 10\u221217 s\u22121 for the previous three calculations, but also varied it between 10\u221218 s\u22121 \u2264\u03b60 \u226410\u221216 s\u22121 (darker-blue shaded area) and 10\u221219 s\u22121 \u2264\u03b60 \u226410\u221215 s\u22121 (lighter-blue shaded area) assuming the magnetic field strength of 28.3 mG. The derived \u03b7AD using the C04 magnetic field strength is surprisingly consistent with the results from NICIL. If the infall velocity is much smaller than Keplerian rotation, then both values would become more consistent (Section 3.3.3). The cosmic-ray ionization rates have more of an affect on the predicted ambipolar diffusivity coefficient from NICIL compared to the error on our magnetic field strength. Higher cosmic-ray ionization rates correspond to smaller \u03b7AD values and viceversa. The choice of the chemical network would also impact the estimation by NICIL, and thus our overall comparison. This is discussed more in Section 4.1.3. Constant B & Varied \u03b6CR: The right panel of Figure 1 uses the same magnetic field parameters as the left, however, the cosmic-ray ionization rate is attenuated (zeta of rho = .true.). We do this in order to understand what effect this has on our derived ambipolar diffusivity coefficient. As we can see, the attenuated cosmic-rays only affect the very high densities \u227310\u221212 g cm\u22123, where the ambipolar diffusivity coefficient begins to increase. It does not affect the density regime in which our ambipolar diffusivity coefficient is First Estimate of the Ambipolar Diffusivity Coefficient in HOPS-370 9 Figure 2. Comparison between the effect of different values of \u03b4r (top row), \u03b4\u03d5 (middle row) and Rd/RAD (bottom row) on our derived \u03b7AD and the NICIL calculated values of \u03b7AD assuming a barotropic equation of state, constant magnetic field strength from the C04 method and constant (unattenuated) cosmic-ray ionization rate. As in Figure 1, our derived \u03b7AD is marked by the black circle and also printed in the top right of each plot, while the black lines and shaded blue areas also have the same meanings as in Figure 1. estimated. Thus, our derived \u03b7AD is still consistent with these results from NICIL. 3.3.3. Varied Parameters There are several parameter assumptions made in our estimation using the ambipolar diffusivity coefficient equation that could vary the resulting \u03b7AD. First, our initial calculation assumes \u03b4r = 0.8 and \u03b4\u03d5 = 1.0. Additionally, the results from Hennebelle et al. (2016) and Hennebelle et al. (2020) indicate that the ratio of the disk radius from their simulations to their predicted disk radius from their analytical equation, Rd/RAD, could vary between 0.5 to 2, particularly for lower mass cores. Therefore, we vary at each parameter individually, while the others are kept at their initially assumed values, to see the magnitude in difference for each. We compare to our NICIL run using constant B from the C04 method and constant \u03b6CR (Figure 2, left panel). For \u03b4r, we compare values of 0.8, 0.5 and 0.1 (Figure 2, top row). As mentioned previously, \u03b4r could be a factor of a few lower than the Keplerian velocity in simulations, and was found to be \u223c0.6 in previous observations of the Class I protostar, L1489 IRS. As this value could vary quite considerably depending on the environment, it will have more of an impact on our estimated ambipolar diffusivity coefficient than \u03b4\u03d5, even though \u03b7AD scales as \u223c\u03b42 \u03d5. In the case where \u03b4r = 0.1, it is much more consistent with NICIL as the estimated ambipolar diffusivity coefficient is about an order of magnitude lower, potentially hinting at the possibility of slow infall in HOPS-370. For \u03b4\u03d5, we compare values of 1.0, 0.9 and 0.8 (Figure 2, middle row). Since \u03b4\u03d5 should be \u22730.9, it will not have too much affect on our derived ambipolar diffusivity coefficient, which are all within error in these cases. We only demonstrate \u03b4\u03d5 = 0.8 as a more extreme scenario, but still the effect is less than \u03b4r due to these constraints. It would still be interesting to try to estimate if there is any deviation from Keplerian rotation in the rotational velocity structure at the edge of the disk, as it still lowers the value, if only even a little. For Rd/RAD, we compare values of 0.5, 1.0 and 2.0 (Figure 2, bottom row). As previously mentioned, the ratio of the actual disk radius to the predicted disk radius due to ambipolar diffusion from Equation 1 could vary between 0.5 to 2, particularly for lower mass cores 10 Thieme et al. (Hennebelle et al. 2016, 2020). Although HOPS-370 is considered to be a more intermediate mass Class 0/I, we would still like to investigate how the value varies between these two extreme cases. The Rd/RAD factor also doesn\u2019t really affect the calculation of \u03b7AD too much, which is similar to \u03b4\u03d5. Rd/RAD is slightly more consistent with NICIL when Rd/RAD > 1, while the right panel of Figure 2 in Hennebelle et al. (2016) shows consistently lower Rd/RAD < 1 for protostars M\u22c6+Md < 5 M\u2299. This should be investigated in numerical simulations for a mass range around the HOPS-370 protostar+disk mass as the spread can become quite noticeable when zooming in on very low-mass simulations (M\u22c6+ Md < 0.5 M\u2299) in the left panel of Figure 2 in Hennebelle et al. (2016). 4. DISCUSSION 4.1. Validity of Assumptions Several assumptions are made in the derivation, estimation and comparison to theoretical values of the ambipolar diffusivity coefficient. In this section, we explore these assumptions in detail and discuss how they could effect our results. 4.1.1. Derivation of the Ambipolar Diffusivity Coefficient Equation In Section 2, we have listed several assumptions in the derivation of Equation 1. The first is that the main diffusion process is ambipolar diffusion. There are many factors shown to alleviate the effects of magnetic braking to form large, protostellar disks in MHD simulations. These mainly include non-ideal MHD (e.g., Li et al. 2011; Dapp et al. 2012; Tsukamoto et al. 2015; Wurster et al. 2016; Tsukamoto et al. 2017; Wurster et al. 2019; Zhao et al. 2020a; Wurster et al. 2021), misalignment between the magnetic field and rotation axis (e.g., Hennebelle & Ciardi 2009; Li et al. 2013; Tsukamoto et al. 2018; Hirano et al. 2020) and turbulence (e.g., Seifried et al. 2013; Li et al. 2014; Seifried et al. 2015). From an observational standpoint, there are several key results to consider. Magnetic field orientations in low-mass protostars indicate that the field orientation is preferentially randomly aligned with the rotation axis (e.g., Hull et al. 2013; Yen et al. 2021b) However, recent results show no apparent correlation between the misalignment angle of the magnetic field and apparent disk size measured from the dust continuum (Yen et al. 2021a). Yen et al. (2021a) also conclude that the turbulence measured from the non-thermal linewidth at core-scale does not correlate with the apparent disk size either. Observations of angular momentum profiles in protostellar envelopes do imply that there is some turbulence present (Pineda et al. 2019; Gaudel et al. 2020; Sai et al. 2023), although the level of turbulence is not directly quantified. Thus, non-ideal MHD likely plays an important role in protostellar disk formation. As far as which non-ideal MHD effect (ambipolar diffusion, Ohmic dissipation or the Hall effect) is most important overall, simulations show ambipolar diffusion is an efficient process in parts of the disk and envelope that can regulate the properties of the disk (e.g., Tsukamoto et al. 2023). Ohmic dissipation is only efficient at high densities and likely does not play much of a role in the envelope itself (e.g., Marchand et al. 2016; Wurster et al. 2018a; Wurster 2021). The Hall effect has been shown to effectively disappear shortly after the formation of the protostellar disk (Zhao et al. 2020b; Lee et al. 2021b). Therefore, ambipolar diffusion may be the most important non-ideal MHD effect, especially when the protostar + disk system becomes more evolved. This is where comparing directly with non-ideal MHD simulations would help us to understand the non-ideal MHD effects more deeply. Since NICIL does also calculate the non-ideal MHD coefficients for Ohmic dissipation (\u03b7OD) and the Hall effect (\u03b7HE), it is interesting to compare them to the ambipolar diffusivity coefficient. We show a comparison between each of the diffusivity coefficients from our NICIL runs in Figure 3. We see that our derived \u03b7AD is clearly in an ambipolar diffusion dominated density regime, under our assumptions made for our NICIL runs. This is in favor of the first assumption stated to derive Equation 1, that ambipolar diffusion is the main diffusion process. The coefficient for Ohmic dissipation starts to become prominent towards the highest density regimes, which is consistent with previous findings (e.g., Marchand et al. 2016; Wurster et al. 2018a; Wurster 2021). The Hall effect seems to be dominant at intermediate density regimes, though it still shows some contribution in the density regime where our ambipolar diffusion value is calculated. We also note that in our NICIL run in the bottom left panel, the Hall coefficient becomes negative at very low densities approaching 10\u221218 g cm\u22123 in the case of a low cosmic-ray ionization rate of \u03b60 = 10\u221219 s\u22121. The cosmic-ray ionization rate will impact the efficiency of non-ideal MHD effect (e.g., Wurster et al. 2018b; Kuffmeier et al. 2020), and thus should be studied in the environment of HOPS-370 to further constrain the comparisons with NICIL. The cosmic-ray ionization rate in the inner envelope of Class 0 protostar, B335, was previously found to be enhanced (\u03b6CR \u223c10\u221214 s\u22121), which could explain the extremely small (< 10 au) inferred protostellar disk (Cabedo et al. 2023). AdditionFirst Estimate of the Ambipolar Diffusivity Coefficient in HOPS-370 11 Figure 3. Relative comparison of the diffusivity coefficients for ambipolar diffusion (\u03b7AD), Ohmic dissipation (\u03b7OD) and the Hall effect (\u03b7HE) calculated by NICIL assuming a barotropic equation of state, constant magnetic field strength and cosmic-ray ionization rate. The symbols and labels have the same meaning as in Figure 1, except we only show the lighter shaded blue area for \u03b60 between 10\u221215 \u221210\u221219 s\u22121 for each coefficient. Our derived value shows that HOPS-370 lies in an ambipolar diffusion dominated region. ally, a new large scale study probing the NGC 1333 region of the Perseus also finds an enhanced cosmic-ray ionization rate (\u03b6CR \u227310\u221216.5 s\u22121) across the molecular cloud, which is consistent with the small (< 50 au) disks in that region (Pineda et al. 2024). Future studies also probing the ion-neutral drift in HOPS-370 could help to further understand the role of ambipolar diffusion in this source. Yen et al. (2018) tried to constrain the ion-neutral velocity drift in the young Class 0 protostar, B335, however, only an upper limit was obtained. This could be due to B335 being too young, as some simulations have shown this velocity drift could be more observable in more evolved Class 0/I protostars (e.g., Tsukamoto et al. 2020). Since HOPS370 is more evolved, it could be an ideal target for this kind of study in the future. Next, the angular momentum is counteracted by magnetic braking resulting in the advection and braking timescales to be of the same order. The equations for the advection and braking timescales are given by Equations A.1 and Equations A.2, respectively. We make an estimation of the advection timescale under the same assumption used for our ambipolar diffusivity estimate, where ur = 0.8vkep, giving us a value of \u03c4adv \u223c3.6 \u00d7 109 s, which is also a lower limit. For the braking timescale, since we do not directly know the poloidal component (Bz) of the magnetic field strength, we make an approximation of BzB\u03d5 \u2248B2 tot and use our value derived from the C04 method. This results in a lower limit of the braking timescale of \u03c4br \u223c5.4 \u00d7 108 s. These values are within one order of magnitude difference, and show that this assumption can hold in HOPS370. We note that we use this exact assumption to estimate Bz in Section 4.1.2, therefore, using that value here presents a circular argument which is why we simply estimate the lower limits for \u03c4adv and \u03c4br. Further modeling of the infall velocity structure and magnetic field components (Br, Bz, B\u03d5) would be necessary to confirm. Then, the toroidal field generated by differential rotation is offset by the ambipolar diffusion in the vertical direction resulting in the Faraday induction and vertical diffusion timescales to be of the same order. The equations for the Faraday induction and vertical diffusion timescales are given by Equations A.3 and Equations A.4, respectively. We can obtain a lower limit approximation of the Faraday induction timescales by assuming Bz \u223cB\u03d5 \u223cBtot. This gives a value of \u03c4far \u223c4.9 \u00d7 108 s. The vertical ambipolar diffusion timescales would need to use our derived value, thus we check for self-consistency. We find \u03c4diff \u223c3.4 \u00d7 109 s. Since varying some of the parameters in Section 3.3.3 lower the value of \u03b7AD, \u03c4diff could also be considered as a lower limit. Both values are within one order of magnitude difference, showing that this assumption can hold in HOPS-370. Again, we do not use the Bz in Section 4.1.2 to avoid any circular arguments. Additionally, the infalling and rotational velocities of the gas near the disk edge both scale with the Keplerian velocity. The disk radius derived for HOPS-370 from the radiative transfer modeling indicates that the rotational velocity (u\u03d5) should be Keplerian in nature. As for the infalling velocity, further modeling needs to be done to see how much ur deviates from Keplerian at the disk 12 Thieme et al. edge. For now, the assumption of the rotational velocity holds, while the assumption for the infalling velocity should to be further modeled. Lastly, the gas near the disk edge has Keplerian velocity and is in vertical hydrostatic equilibrium. Again, the gas disk in HOPS-370 is clearly resolved by the observations by Tobin et al. (2020b) and the best-fit disk parameters were found by fitting with radiative transfer models assuming Keplerian rotation and hydrostatic equilibrium. Many previous studies have also clearly resolved Keplerian rotating disks in young Class 0 and Class I protostars (e.g., Tobin et al. 2012; Murillo et al. 2013; Yen et al. 2014, 2017; Ohashi et al. 2023). Even if the assumptions in the fitting are wrong, they are the same assumptions we use and we are still using a \u201cbestfit\u201d value, which indicates this model does provide a good fit to the data. Therefore, the values properties derived for the HOPS-370 protostellar disk clearly should satisfy both assumptions. 4.1.2. Quantities and Relations used for the Ambipolar Diffusivity Coefficient Estimation Arguably the most important assumption in our estimation of \u03b7AD is how the envelope scales from core-scale down to the edge of the protostellar disk. As previously stated, early theoretical works predict \u03ba in Equation 4 to be \u223c2/3 for clouds undergoing spherical collapse with flux-freezing (Mestel 1966), while \u03ba \u223c0.5 for a collapsing cloud with ambipolar diffusion (Mouschovias & Ciolek 1999). This was the basis of our initial assumptions, however, it is not so straight forward. The recent review by Pattle et al. (2023) shows \u03ba derived from observations of molecular clouds can vary quite a bit, possibly due to different environmental factors. These observations probe the large scale molecular clouds, filaments and cores whose magnetic field imprint could be inherently different than the magnetic fields near a protostellar disk. Additionally, a magnetic field density relation recently derived by Lee et al. (2024) for inside a collapsing, protostellar envelope is explored in Appendix D. The magnetic field strength derived from this relation is compatible with our estimates, however, needs to be further investigated due to discrepancies in the model presumptions. Observationally, Yen et al. (2023) recently derived a magnetic field density relation from the core to inner envelope scale in the young, Class 0 protostar HH 211. They find \u03ba \u223c0.36, which fits into the assumption that ambipolar diffusion is playing a role to partially decoupled the magnetic field from the neutral matter. Their inner envelope magnetic field strength was derived using a force-balance equation (Koch et al. 2012), rather than the DCF method. The core-scale magnetic field strength estimated by Kao & Yen et al. (in prep.) was derived using the DCF method, which has several uncertainties associated with it due to the assumptions of equipartition, isotropic turbulence, projected polarization angle on the plane-of-sky, and more (e.g., Liu et al. 2021, 2022a; Chen et al. 2022; Liu et al. 2022b; Myers et al. 2023). These uncertainties may cause the DCF estimate to overestimate the magnetic field strength, which would impact our ambipolar diffusivity coefficient estimation. In the best case scenario, future observations to derive the inner envelope strength near the disk in HOPS-370 could alleviate the need to even use a magnetic field density relation. Otherwise, if the magnetic field strength cannot be derived close enough to the disk edge, it can still be estimated in the envelope to derive a magnetic field density relation, where the magnetic field strength could further be scaled down to the edge of the disk. We have also assumed that Btot \u2248B\u03d5, and that B\u03d5 is the dominant component of the magnetic field at the edge of the protostellar disk. Our value for Btot at the core-scale is a statistical average based on a large sample of observations, which may or may not necessarily be applied to only a single source. This is, however, the only current way we obtain a total magnetic field strength from the plane-of-sky magnetic field component and should be investigated further. To see whether B\u03d5 is really dominant in our case, we estimate Bz = 4.2\u00b12.7 mG using Equation A.5. This shows that B\u03d5 is the dominant component in our case, and thus is a reasonable assumption in our ambipolar diffusivity coefficient estimation. 4.1.3. Comparison with NICIL and Input Values While the cosmic-ray ionization rate and dust grain properties needed for NICIL are not inherently part of our derived ambipolar diffusion equation, they still play a role in the efficiency of non-ideal MHD diffusivities (e.g., Zhao et al. 2016; Dzyurkevich et al. 2017; Wurster et al. 2018b; Zhao et al. 2018; Kuffmeier et al. 2020; Guillet et al. 2020; Zhao et al. 2021; Kobayashi et al. 2023). Several studies have shown that disk formation is suppressed in the presence cosmic-ray ionization rates higher than the canonical value of 10\u221217 s\u22121 in dense cores (e.g., Zhao et al. 2016; Wurster et al. 2018b; Kuffmeier et al. 2020). Large numbers of small dust grains can also influence the ionization degree, and thus the non-ideal MHD diffusivities (e.g., Zhao et al. 2016; Dzyurkevich et al. 2017; Zhao et al. 2018; Koga et al. 2019; Marchand et al. 2020). Tobin et al. (2020b) do constrain the maximum grain size, while the minimum grain size is set as a fixed parameter in their model fitFirst Estimate of the Ambipolar Diffusivity Coefficient in HOPS-370 13 ting. We did explore how much the minimum grain size affects the calculated \u03b7AD from NICIL by re-running our constant B (C04 method) and constant \u03b6CR NICIL runs, for minimum grain sizes of 0.01 \u00b5m, 0.1 \u00b5m and 1.0 \u00b5m. However, the difference was indistinguishable, and thus, the resulting \u03b7AD from NICIL may rely more heavily on the choice of chemical network. We also checked if the number of grain size bins used affected the results, but still the results did not change. It is important to note that the derived dust grain properties are that of the disk, and not the envelope. Also, there are currently no studies exploring the cosmic-ray ionization rate in the disk or envelope of HOPS-370. Determining the cosmic-ray ionization rate and dust grain properties in the HOPS-370 protostellar envelope would allow for a better comparison to NICIL. Our comparison with NICIL simply represents the closest theoretical scenario we can achieve by using the values derived from observations. Therefore, further constraints on the properties of the disk and envelope environment, as well as, comparisons with actual non-ideal MHD simulations should be carried out. 5. CONCLUSION We present the first estimation of the ambipolar diffusivity coefficient using an analytical equation describing the protostar and disk properties due to ambipolar diffusion. We show an illustrative schematic of the HOPS370 protostellar system to bring together and summarize our results in the context of the multi-scale observations needed for this study (Figure 4). The main results of this paper are as follows: 1. We derive a generalized analytical expression for the ambipolar diffusivity coefficient in terms of observable quantities in protostellar environments. We show that this relation should be valid, regardless of the global magnetic field orientation with respect to the disk rotation axis. 2. We make the first estimation of the ambipolar diffusivity coefficient to be \u03b7AD = 1.7+1.5 \u22121.4 \u00d7 1019 cm2 s\u22121 at the edge of the HOPS-370 protostellar disk, under the assumption that the magnetic field scales with density (Crutcher et al. 2004). We use the Alfv\u00b4 en speed and Keplerian rotation frequency to estimate the dimensionless Els\u00a8 asser number for ambipolar diffusion to be AM = 1.7+1.0 \u22121.0, indicating that ambipolar diffusion is more dynamically important in the region at the edge of the protostellar disk. Estimates of the ambipolar diffusivity coefficient using the inner envelope density, rather than the disk-edge density yields indistinguishable results. 3. We use the Non-Ideal MHD Coefficient and Ionisation Library (NICIL) to calculate the non-ideal MHD coefficients using the the physical conditions observed in HOPS-370. We show that the ambipolar diffusivity coefficient from NICIL using various magnetic field strength and cosmic-ray ionization properties is consistent with our derived value. We vary the less certain parameters of \u03b4r, \u03b4\u03d5 and Rd/RAD in the ambipolar diffusivity coefficient equation to find the derived value becomes more consistent for decreasing \u03b4r and \u03b4\u03d5 and increasing Rd/RAD. 4. We plot the Ohmic dissipation and Hall effect coefficients along side the ambipolar diffusivity coefficient calculated by NICIL. We find that our derived value shows HOPS-370 lies in an ambipolar diffusion dominated region. This supports the main assumption in the derivation of Equation 1 that ambipolar diffusion is the main diffusion process. When assessing the other assumption made for our derivation of the ambipolar diffusivity equation, we show that they should be valid for HOPS-370. 5. We have demonstrated a new methodology for understanding the role of ambipolar diffusion during protostellar disk evolution. Future studies including more sources and more detailed modeling will help to fully understand the role of non-ideal MHD effects in observations of the earliest stages of protostellar disk formation and evolution. ACKNOWLEDGMENTS We thank the anonymous referee for their helpful comments and suggestions on this manuscript. This work used high-performance computing facilities operated by the Center for Informatics and Computation in Astronomy (CICA) at National Tsing Hua University. This equipment was funded by the Ministry of Education of Taiwan, the National Science and Technology Council of Taiwan, and National Tsing Hua University. S.-P.L. and T.J.T. acknowledge grants from the National Science and Technology Council (NSTC) of Taiwan 106-2119-M007-021-MY3, 109-2112-M-007-010-MY3 and 112-2112M-007-011. Y.-N.L. acknowledges support from the National Science and Technology Council, Taiwan (NSTC 112-2636-M-003-001) and the grant for Yushan Young Scholar from the Ministry of Education. S.-J.L. acknowledges the grants from the National Science and Technology Council (NSTC) of Taiwan 111-2124-M001-005 and 112-2124-M-001-014. H.-W.Y. acknowledges support from the NSTC grant 110-2628-M-00114 Thieme et al. Figure 4. Schematic of the HOPS-370 protostellar system. (Left) 0.85 mm continuum emission of the Orion A molecular cloud taken by the JCMT (Kao & Yen et al., in prep.). The contour levels shown are 3, 5, 10, 15, 30, 50, 100, 300 and 500\u03c3, where \u03c31.3mm = 15.2 mJy beam\u22121. The location of HOPS-370 is shown with a yellow star, with the protostellar class and distance listed. (Right Top) 0.87 mm continuum emission of the protostellar disk around HOPS-370 with self-contour levels of 3, 5, 10, 15, 30, 50 and 100\u03c3, where \u03c30.87mm = 0.39 mJy beam\u22121 (Tobin et al. 2020b). CH3OH and SO integrated-intensity contours are shown in green and orange, respectively, with contour levels of 3, 5, 10, 15, 30\u03c3, where \u03c3CH3OH = 26.2 mJy beam\u22121 km s\u22121 and \u03c3SO = 32.4 mJy beam\u22121 km s\u22121. These two molecular lines were shown to trace the largest disk radius when modeled together. The position of the continuum peak is marked with a yellow star. (Right Bottom) The modeling and results of our ambipolar diffusivity coefficient estimation. The best-fit disk density profile (for z = 0; i.e. the midplane) is shown in log scale, along with the best-fit quantities used in Equation 2 and our estimated ambipolar diffusivity coefficient at the edge of the disk. First Estimate of the Ambipolar Diffusivity Coefficient in HOPS-370 15 003-MY3 and from the Academia Sinica Career Development Award (AS-CDA-111-M03). Software: Astropy (Astropy Collaboration et al. 2013, 2018, http://astropy.org), asymmetric uncertainty (Gobat 2022), Matplotlib (Hunter 2007, http: //matplotlib.org/), proplot (Davis 2021), Numpy (van der Walt et al. 2011, http://numpy.org/) APPENDIX A. DERIVATION OF THE AMBIPOLAR DIFFUSIVITY COEFFICIENT RELATION The equation describing the disk radius due to ambipolar diffusion first presented by Hennebelle et al. (2016), and later by Lee et al. (2021b, 2024), make a number of simplifications. Here, we derive a new relationship between the physical properties at the diskenvelope interface to the ambipolar diffusivity coefficient, in order to better compare to more generalized models that are used to fit observations, as in our case for HOPS-370. We follow the prescriptions given by Lee et al. (2021b, 2024), where more detailed explanations can be found. We assume ambipolar diffusion is the main diffusion process, as discussed in Section 4.1.1. First of all, the accretion of angular momentum onto the protostellar disk is counteracted by magnetic braking to rapidly suppress the growth of the disk. This results in an equilibrium condition at the disk-envelope interface between the advection and magnetic braking timescales (\u03c4adv \u2243\u03c4br) given by \u03c4adv \u2243R ur , (A.1) \u03c4br \u2243\u03c1u\u03d5h BzB\u03d5 , (A.2) where ur and u\u03d5 are the infalling and rotational velocities, Bz and B\u03d5 are the poloidal (vertical) and toroidal (azimuthal) magnetic field components, R is the disk radius, \u03c1 is the density at the disk-envelope interface and h is the scale height of the disk at the edge. Next, B\u03d5 is generated by the induction of Bz through the differential rotation of the protostellar disk is vertically diffused by ambipolar diffusion. This results in another equilibrium condition between the generation of B\u03d5, which happens on the timescale of Faraday induction, and the vertical ambipolar diffusion timescales (\u03c4far \u2243\u03c4diff) given by \u03c4far \u2243B\u03d5h Bzu\u03d5 , (A.3) \u03c4diff \u2243h2 \u03b7AD , (A.4) where \u03b7AD is the ambipolar diffusivity coefficient. Since B\u03d5 should be the dominant component at the protostellar disk edge, we solve our first equilibrium condition for Bz in order to substitute it into our second equilibrium equation, giving Bz \u2243\u03c1u\u03d5urh RB\u03d5 . (A.5) Solving the second equilibrium equation in terms of our ambipolar diffusivity coefficient and substituting in our new relation for Bz gives \u03b7AD \u2243Bzu\u03d5h2 B\u03d5h \u2243 \u03c1u2 \u03d5urh2 RB2 \u03d5 . (A.6) We assume the rotational velocity (ur) and infall velocity (u\u03d5) both scale with the Keplerian velocity (vkep) as ur = \u03b4rvkep, (A.7) u\u03d5 = \u03b4\u03d5vkep, (A.8) where \u03b4r and \u03b4\u03d5 are the the scaling factors and vkep defined at the disk edge is vkep = \u0012GM R \u00131/2 , (A.9) where G is the gravitational constant and M = M\u22c6+Md is the mass of the star+disk system. From recent MHD simulations of protostellar disk formation including ambipolar diffusion, u\u03d5 is found to be very close to Keplerian at the disk edge (\u03b4\u03d5 \u22730.9), while ur can be significantly less (\u03b4r \u22720.5) than the Keplerian velocity, possibly by even a factor of a few (Lee et al. 2021a, Section 2.4). Substituting ur and u\u03d5 into our ambipolar diffusivity coefficient relation gives \u03b7AD \u2243 \u03b4r\u03b42 \u03d5G3/2M 3/2\u03c1h2 R5/2B2 \u03d5 . (A.10) Assuming vertical hydrostatic equilibrium, the scale height is related to the isothermal sound speed (Cs) as h = Cs \u0012 R3 GM \u00131/2 . (A.11) 16 Thieme et al. Now, we can replace h in our ambipolar diffusivity coefficient equation to get a final relation of \u03b7AD \u2243 \u03b4r\u03b42 \u03d5G1/2C2 sR1/2M 1/2\u03c1 B2 \u03d5 . (A.12) This expression should be valid regardless of the global magnetic field orientation (see Appendix B for further discussion). We have left in the density and sound speed terms, which deviates from the further simplifications made by Hennebelle et al. (2016) and Lee et al. (2021b, 2024), since these quantities can be modeled for protostellar disks from molecular line observations. B. THE EFFECTS OF MAGNETIC FIELD INCLINATION ON THE AMBIPOLAR DIFFUSIVITY COEFFICIENT ESTIMATION For an inclined magnetic field, the equilibrium condition that needs to be satisfied follows as Bz \u221a R \u221a\u03b7AD\u03c1ur + Brh \u221a\u03b7AD\u03c1urR = 1, (B.1) where Br is the radial component of the magnetic field strength and the other symbols have the same meaning as in Appendix A (Lee et al. 2024). The relationships between the Br and Bz components are given by Br = B0 (2/\u03c0) sin i (B.2) Bz = B0 cos i (B.3) where i is the magnetic field inclination with respect to the disk rotation axis (i = 0\u25e6means the magnetic field direction is aligned/parallel withe disk rotation axis, i.e. the vertical case) and B0 characterizes the amount of magnetic flux that threads the disk region, while the local field strength can be significantly enhanced by magnetic induction due to vertical differential rotation (Lee et al. 2024). Using the relations of Bz and Br, along with Equations A.7, A.8 and A.11, we can re-write the previous equation as r3/4B0 cos i \u03b71/2 AD\u03c11/2\u03b41/2 r G1/4M 1/4 + r5/4B0 (2/\u03c0) sin i \u03b71/2 AD\u03c11/2\u03b41/2 r G3/4M 3/4C\u22121 s = 1, (B.4) where M = M\u22c6+ Md is the total mass of the star+disk system. Re-writing in terms of B0, we find B0 = \u03b71/2 AD\u03c11/2\u03b41/2 r \u0014 r3/4 cos i G1/4M 1/4 + r5/4 (2/\u03c0) sin i G3/4M 3/4C\u22121 s \u0015\u22121 , (B.5) When the magnetic field is inclined, then the magnetic field strength derived in Section 2.3.1 should not be assumed as one of the magnetic field components, but rather regarded as the total magnetic field strength. We can thus derive an ambipolar diffusivity equation in terms of the total magnetic field strength. The total magnetic field strength (Btot) is the sum of squares of all the components written as B2 tot = B2 r + B2 z + B2 \u03d5, (B.6) where Br and Bz can be substituted again using Equations B.2 and B.3 to give B2 tot = B2 0 h (2/\u03c0 sin i)2 + (cos i)2i + B2 \u03d5. (B.7) Now, we substitute B\u03d5 using our derived relationship in Equation A.12 to get B2 tot = B2 0 h (2/\u03c0 sin i)2 + (cos i)2i + \u03b4r\u03b42 \u03d5G1/2C2 sR1/2M 1/2\u03c1\u03b7\u22121 AD. (B.8) We now have two equations (B.5 and B.8) with two unknowns (B0 and \u03b7AD). Thus, we substitute Equation B.5 into Equation B.8 to remove B0 and obtain a secondorder polynomial ambipolar diffusivity equation for an inclined magnetic field of \u03b72 AD\u03c1\u03b4r \u0014 r3/4 cos i G1/4M 1/4 + r5/4 (2/\u03c0) sin i G3/4M 3/4C\u22121 s \u0015\u22122 \u00d7 h (2/\u03c0 sin i)2 + (cos i)2i \u2212\u03b7ADB2 tot + \u03b4r\u03b42 \u03d5G1/2C2 sR1/2M 1/2\u03c1 = 0, (B.9) which can be solved to find the ambipolar diffusivity coefficient. Using the same values as in Section 3.1 and a magnetic field inclination with respect to the disk rotation axis of 45 \u00b1 22\u25e6(Yen et al. 2021a), we find \u03b7AD = 1.798+0.042 \u22120.007 \u00d7 1019 cm2 s\u22121, where the reported errors are only due to the error on the magnetic field inclination angle. For one, this value is extremely close to and within error of the value previously derived in Section 3.1. Additionally, the errors due to only the magnetic field inclination are a few orders of magnitude smaller than in the previously derived value. This show that the magnetic field inclination has essentially no effect on our derived ambipolar diffusivity coefficient. For completeness, we check the corresponding values of B0, Bz, Br and B\u03d5 to see if which component of the magnetic field is dominant. We use Equations B.2, B.3, B.5 and B.7 to estimate values of B0 \u22485.5 mG, Bz \u22483.9 mG, Br \u22482.5 mG and B\u03d5 \u224828.0 mG. This shows that the B\u03d5 component of the magnetic field still dominates even when considering the orientation. Thus, our derived relation is considered to be generalized and it is correct to assume B\u03d5 \u2248Btot in our initial assumptions (Section 2.3.1). First Estimate of the Ambipolar Diffusivity Coefficient in HOPS-370 17 Figure C.1. Comparison between the best-fit disk and envelope volume density profiles for HOPS-370 (Tobin et al. 2020b). The vertical dashed line represents the radius of the Keplerian gas disk. C. AMBIPOLAR DIFFUSIVITY COEFFICIENT ESTIMATION USING INNER ENVELOPE DENSITY As described in Section 2.2.3, we could equally derive a density at the disk-envelope interface from the best fit envelope volume density relation in Tobin et al. (2020b). We stress that much of the envelope emission may be resolved-out, which could effect the fitting results. However, it is interesting to still investigate how the ambipolar diffusivity coefficient estimation is effected using the current best-fit results. To model the envelope emission using the molecular line data, Tobin et al. (2020b) uses the following relation for the envelope \u03c1env(r) = \u02d9 Menv 4\u03c0 \u0000GM\u22c6r3\u0001\u22121/2 \u00d7 \u0012 1 + \u00b5 \u00b50 \u0013\u22121/2 \u0012 \u00b5 \u00b50 + 2\u00b52 0 Rc r \u0013\u22121 , (C.1) where \u02d9 Menv is the envelope-to-disk mass-accretion rate, \u00b50 = cos \u03b80 is the initial polar angle of a streamline trajectory out to r \u2192\u221e, \u00b5 = cos \u03b8 is the polar angle along the streamline trajectory, Rc is the centrifugal radius where the infalling material has sufficient angular momentum to maintain an orbit the central protostar. We take the simplified case in the mid-plane of the inner envelope, where \u03b80 = \u03b8 = 90\u25e6, which simplifies the equation to \u03c1env(r) = \u02d9 Menv 4\u03c0 \u00002GM\u22c6r3\u0001\u22121/2 \u0012 1 + 2Rc r \u0013\u22121 . (C.2) We show the best-fit disk and envelope density profiles from Tobin et al. (2020b) in Figure C.1. We see a clear difference in densities between the disk and envelope (density jump). We plug in \u02d9 Menv = 3.2 \u00b1 0.6 \u00d7 10\u22125 M\u2299yr\u22121 (no error bars are reported, so we assume a 20% error), M\u22c6= 2.5 \u00b1 0.2 M\u2299, r = Rc = Rd = 94.4 \u00b1 12.6 au (Tobin et al. 2020b), we find \u03c1env(r) = 3.8 \u00b1 1.2 \u00d7 10\u221216 g cm\u22123. Using our new inner envelope density, we re-apply the same steps as in Section 2.3.1 to scale the magnetic field from the core-scale density using the C04 method. This gives us newly estimated magnetic field strength of Btot,e = 9.2 \u00b1 2.8 mG. We now plug in the values of \u03b4r = 0.8, \u03b4\u03d5 = 1.0, Cs = 833.0+19.5 \u221219.5 m s\u22121, Rd = 94.4+12.6 \u221212.6 au, M\u22c6= 2.5+0.2 \u22120.2 M\u2299, Md = 0.035+0.005 \u22120.003 M\u2299 \u03c1d = 3.8+1.2 \u22121.2 \u00d7 10\u221216 g cm\u22123 B\u03d5 = 9.2+2.8 \u22122.8 mG, into the ambipolar diffusivity coefficient equation (Equation 3) to obtain \u03b7AD = 1.7+1.2 \u22121.2 \u00d7 1019 cm2 s\u22121 = 2.4+1.6 \u22121.6 \u00d7 10\u22121 s. The dimensionless Els\u00a8 asser number is estimated to be AM = 1.7+0.8 \u22120.8 (C04 method). These results are indistinguishable from the values calculated using the disk edge quantities. Even though the envelope density is estimated to be an order of magnitude lower than the disk edge, the magnetic field strength is also lower as a result. Since \u03b7AD has a dependence on \u223c\u03c1 and \u223cB\u22122 \u03d5 , the values end up offsetting each other to give similar estimates. This shows that either the disk or envelope density can be used interchangeably to obtain a value for the ambipolar diffusivity coefficient. We again check whether B\u03d5 > Bz using Equation A.5, and find Bz \u22481.38 mG. Thus, B\u03d5 is still the dominant component, although it is more comparable to Bz in this case. D. MAGNETIC FIELD STRENGTH ESTIMATION Recently, Lee et al. (2024) derive a new analytical expression to describe how the magnetic field should scale with density inside a collapsing protostellar envelope for the first time. Considering the two density regimes ad the core and disk scales, their magnetic field density relation (Equation C7 in their paper) can be simplified 18 Thieme et al. to B0,d = B0,c \u0012M\u22c6+ Md Mc \u00130.25 \u0012\u03c1d \u03c1c \u00130.525 , (D.1) which can also be used to scale the magnetic field strength down to inner envelope/protostellar disk density regimes (hereafter, referred to as the L24 method). Here B0 has the same meaning as in Appendix B and characterizes the magnetic flux threading the disk. We assume the case of a vertical magnetic field, since the effects of inclination are minimal on our estimates, which gives B0 \u2248Bz. Plugging in our known values of Btot,c, M\u22c6, Md, \u03c1d and \u03c1c, we estimate B0,d \u2248Bz = 17.6+6.3 \u22126.2 mG. This is compatible with the calculations in the main text within order of magnitude. The discrepancy results from model presumptions that require further examination, while we do not discuss the details."
17
+ }
intro_8K/test_introduction_short_2404.16670v1.json ADDED
@@ -0,0 +1,17 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16670v1",
3
+ "title": "EmoVIT: Revolutionizing Emotion Insights with Visual Instruction Tuning",
4
+ "abstract": "Visual Instruction Tuning represents a novel learning paradigm involving the\nfine-tuning of pre-trained language models using task-specific instructions.\nThis paradigm shows promising zero-shot results in various natural language\nprocessing tasks but is still unexplored in vision emotion understanding. In\nthis work, we focus on enhancing the model's proficiency in understanding and\nadhering to instructions related to emotional contexts. Initially, we identify\nkey visual clues critical to visual emotion recognition. Subsequently, we\nintroduce a novel GPT-assisted pipeline for generating emotion visual\ninstruction data, effectively addressing the scarcity of annotated instruction\ndata in this domain. Expanding on the groundwork established by InstructBLIP,\nour proposed EmoVIT architecture incorporates emotion-specific instruction\ndata, leveraging the powerful capabilities of Large Language Models to enhance\nperformance. Through extensive experiments, our model showcases its proficiency\nin emotion classification, adeptness in affective reasoning, and competence in\ncomprehending humor. The comparative analysis provides a robust benchmark for\nEmotion Visual Instruction Tuning in the era of LLMs, providing valuable\ninsights and opening avenues for future exploration in this domain. Our code is\navailable at \\url{https://github.com/aimmemotion/EmoVIT}.",
5
+ "authors": "Hongxia Xie, Chu-Jun Peng, Yu-Wen Tseng, Hung-Jen Chen, Chan-Feng Hsu, Hong-Han Shuai, Wen-Huang Cheng",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV",
11
+ "cs.AI"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "LLM AND Reasoning",
15
+ "gt": "Visual emotion recognition, a key area within artificial in- telligence and computer vision, aims to predict human emo- tions based on visual cues such as facial expressions and body language. This technology is essential in bridging the gap between human affective states and machine under- standing. Its diverse applications [10, 13, 22, 39], spanning from improving human-computer interaction to aiding in mental health assessment, underscore its significance. Ac- curate emotion recognition is vital for enhancing user expe- Figure 1. Illustration of the importance of instruction-following abil- ity in visual emotion understanding. rience and ensuring information security, as it helps prevent emotional manipulation and misinformation [32]. Develop- ing robust emotion recognition models is not only a techni- cal challenge but also a step towards more empathetic and intuitive AI systems, paving the way for more efficient and natural human-computer interactions. The AI community has recently shown a growing interest in developing foundational vision models, e.g., Flamingo [8], LLaVA [7], BLIP2 [14]. These models ex- cel in open-world visual understanding, tackling several vi- sion tasks such as classification, detection, segmentation, and captioning. In contrast, current large-scale multimodal models are still in its infancy when it comes to emotion per- ception [20]. As illustrated in Fig. 1, when directly query the GPT-4 [29] about the emotional category of an image, the model tends to provide incorrect responses. However, the model delivers accurate responses when provided with revised instructions. To fully leverage the potential of ex- isting vision-based large models, our approach is based on the concept of Instruction Tuning. This effective strategy is aimed at teaching language models to follow natural lan- guage instructions, a technique proven to enhance their gen- eralization performance across unseen tasks [7, 9, 21]. 1 arXiv:2404.16670v1 [cs.CV] 25 Apr 2024 In this work, we focus on developing the model\u2019s profi- ciency in understanding and following instructions related to emotional contexts. This approach highlights the impor- tance of fine-tuning the model\u2019s instruction-following ca- pabilities, enabling it to interpret and respond to emotional content effectively. This is achieved by leveraging its pre- existing knowledge base, thereby eliminating the necessity for an emotion-specific architectural framework. To address the notable challenges encountered in In- struction Tuning for visual emotion recognition, especially the lack of specific instruction data, we introduce a novel self-generation pipeline explicitly crafted for visual emo- tion recognition by using GPT-4 [29]. This innovative pipeline excels in generating a diverse array of (image, in- struction, output) instances, thereby notably enhancing the dataset with a more extensive and task-oriented variety of examples. This approach not only overcomes the challenge of limited data availability but also reduces the dependence on human labor. Therefore, it streamlines the process, en- abling more efficient and effective emotion recognition. Additionally, Instruction Tuning has been criticized for its emphasis on surface-level features like output patterns and styles, rather than achieving a profound comprehen- sion and assimilation of tasks [23]. To tackle this issue and enhance the diversity and creativity of instruction data, our dataset includes instructions that demand complex rea- soning, going beyond basic question-and-answer formats. This is further enriched by incorporating visual cues such as brightness, colorfulness, scene type, object class, facial expressions, and human actions. These aspects are pivotal in fostering a nuanced comprehension of visual emotions, thus allowing the model to generate more precise and con- textually appropriate interpretations [13]. After generating the emotion visual instruction data, we propose an Emotion Visual Instruction Tuning (EmoVIT) framework, leveraging the foundation of In- structBLIP [9]. This framework incorporates an emotion- centric, instruction-aware module that proficiently guides Large Language Models (LLMs) in assimilating the nu- ances of emotion instructions. Our work signifies a paradigm shift, presenting a new era of instruction-based learning for visual emotion understanding that relies less on explicit training data. Remarkably, as shown in Fig. 2, our approach requires almost 50% of the training data typi- cally needed yet exceeds the performance of previous visual emotion recognition methods and popular Visual Instruc- tion Tuning methods. Our contributions can be summarized as follows: \u2022 We explore the potential of the Visual Instruction Tuning paradigm for emotion comprehension and introduce the concept of Emotion Visual Instruction Tuning. \u2022 After thoroughly considering the unique characteristics of visual emotion recognition, we develop a novel GPT- WSCNet[16] StyleNet[19] PDANet[17] StimuliAware[10] MDAN[12] BLIP2[14] InstructBLIP[9] Flamingo[8] LLaVA[7] Ours* 0 20 40 60 80 76.32 77.11 76.95 78.4 75.75 46.79 42.2 29.59 44.03 83.36 Supervised Emotion Recognition Methods Visual Instruction Tuning Methods Figure 2. Performance comparison on EmoSet test set [13] (Accu- racy %). assisted pipeline for generating emotion visual instruc- tion data. This approach effectively bridges the gap in available annotated instruction data within this specific domain. \u2022 Building upon the foundation of InstructBLIP, our EmoVIT architecture integrates emotion domain-specific instruction data, harnessing the robust capabilities of LLMs to boost performance. The extensive experiments demonstrate our model\u2019s proficiency in emotion classi- fication, affective reasoning, and comprehension of hu- mour.",
16
+ "main_content": "2.1. Visual Emotion Recognition A key challenge in visual emotion recognition is bridging the gap between an image\u2019s visual cues and the emotions it portrays [11, 12, 35]. While traditional efforts, e.g., Xu et al.\u2019s multi-level dependent attention network [12], focus on visual models for emotional feature learning, recent advancements like EmoSet [13] offer rich emotion-laden datasets with 3.3 million images. The rise of multimodal models, such as the GPT series [29], has further propelled Vision-Language Recognition. However, fully leveraging these models in emotion recognition is an area ripe for exploration. Our work leads the way in utilizing large-scale models for Emotion Visual Instruction Tuning. 2.2. Visual Instruction Tuning Current Large Language Models (LLMs) have extensive knowledge bases, but their effectiveness depends on accurately interpreting human instructions due to a mismatch 2 Figure 3. The comparison of different visual tuning paradigms. between training goals and user expectations. LLMs are trained to minimize prediction errors, whereas users expect helpful and safe instruction-following. Instruction Tuning addresses this by teaching models to follow natural language instructions, enhancing generalization to new tasks. FLAN [21] demonstrated that training a large model on instruction-based datasets improves zero-shot performance. This approach has extended to vision-language tasks, with BLIP2 [14] and LLaVA [7] adapting instructiontuned LLMs for visual inputs. InstructBLIP [9] introduces instruction-aware visual feature extraction and the QFormer, enabling more flexible, instruction-driven feature extraction. As a novel area, visual emotion instruction tuning lacks benchmarks or guidelines for creating emotion instruction data. Our work pioneers the use of large-scale models to develop an emotion instruction data pipeline, overcoming the limitations of manual annotation. 3. Method 3.1. Preliminary of Visual Instruction Tuning In the deep learning era, visual tuning has experienced significant paradigm shifts, as depicted in Fig. 3. In Fig. 3(a), conventional tuning methodologies encompass Full fine-tuning, Head-oriented, and Backboneoriented techniques, capitalizing on large-scale pre-trained models. Predominantly, thoroughly fine-tuning these models for specific tasks, conducted end-to-end, is recognized as a highly effective strategy. However, this method requires maintaining separate copies of the backbone parameters for each distinct task, posing challenges in storage and deployment. Alternatively, Visual Prompt Tuning (VPT) [24], presents an efficient substitute for full fine-tuning within large-scale vision Transformer models. It achieves this by employing a minimal fraction of trainable parameters in the input space while maintaining a frozen backbone model. The objective function for Visual Prompt Tuning is given by: min \u03b8P L(f(X, P; \u03b8P), Y ) (1) where min\u03b8P is the minimization over the prompt parameters P, L is the loss function, f represents the model function with input image X, prompt parameters P, and learnable model parameters \u03b8P as input, and Y is the target output. Visual Prompt Tuning focuses on optimizing LLMs using a small set of parameters, whereas Visual Instruction Tuning (VIT) aims to improve the model\u2019s comprehension of instructions, thereby addressing the model\u2019s shortcomings in specific domains. This type of method aims to enhance the model\u2019s proficiency in following instructions, leveraging the capabilities of the latest foundation models, e.g., Llama [25], and BLIP2 [14]. Instructions serve as guiding constraints, shaping the model\u2019s outputs to conform to specific response characteristics and domainrelevant knowledge. This approach enables human monitoring of the model\u2019s behavior, thereby assuring alignment with the desired outcomes. Moreover, Instruction Tuning is computationally efficient, allowing LLMs to swiftly adapt to particular domains without extensive retraining or architectural alterations. The objective function for Visual Instruction Tuning is given by: min \u03b8tunable L(g(X, I, C; \u03b8tunable), Y ) (2) where min\u03b8tunable denotes the minimization over the tunable parameters \u03b8tunable in the Instruction Tuning Module, L is the loss function, g is the model function with instruction I, image X, other contexts C, and tunable parameters \u03b8tunable, 3 \u2026 \u2026 \u2026 Q-Former Fully Connected LLM Emotion Instruction Queries Output \u2026 \u2026 Emotion Instruction Emotion Instruction Queries Q-Former Feed Forward Self Attention Cross Attention Feed Forward (a) Emotion Visual Instruction Data Generation (b) Emotion Visual Instruction Tuning Architecture (c) The Details of Q-Former Module \u2026 \u2026 \u2026 Image Embeddings Emotion Attributes Caption System Prompt GPT 4.0 Categorical Basic Interaction Advanced Interaction Reasoning Emotion Instruction In-context Samples Conversation Image Encoder Input Image Image Embeddings Figure 4. The overall architecture of our proposed method. The Emotion Instruction data generated by (a) will be used for Emotion Visual Instruction Tuning in (b). During Emotion Visual Instruction Tuning, given an input image, the frozen Image Encoder initiates the process by extracting visual features. Emotion Instruction generated by (a) are subsequently interacting with Queries embedding through the learnable Q-Former. This interaction is key to drawing out image features that are relevant to the task at hand. As a result, the frozen LLM receives visual information conducive to instruction following. and Y denotes the target output. The optional context C is not just raw data; it encompasses descriptive or directive information guiding the model on how to process input or which task to execute, e.g., image caption. It\u2019s integral to the model\u2019s understanding and execution of tasks based on specific instructions or guidelines. 3.2. GPT-assisted Emotion Visual Instruction Data Generation Previous methodologies commonly employed a consistent template-based set of instructions for every image within a dataset across various specific tasks [9]. For instance, a standard instruction such as \u201cBriefly describe the content of the image\u201d was employed uniformly across all images for Image Captioning. In this way, the model may not be able to adequately capture the unique characteristics of each image. Moreover, this one-size-fits-all approach often leads to suboptimal performance in emotion recognition tasks that require nuanced perception and differentiation of ambiguous emotion classes. Since the topic of Emotion Visual Instruction Tuning is still in its infancy, no benchmarks or guidelines have been proposed so far for constructing emotion instruction data. Based on the recent successes of machine-generated instructions demonstrated in LLaVA [7], our work pioneers the use of existing LLMs to create a pipeline for self-generating emotion instructions. Different from previous template-based and one-size-fits-all instruction data, we propose an instance-wise and LLM-assisted visual emotion instruction data pipeline. This methodology transcends the constraints of manual annotation by employing GPT-4 [29] to generate instance-wise, tailored instruction data that dynamically corresponds to visual content. Prior to the development of instructional data for the visual emotion recognition task, it is imperative to confront a fundamental academic problem: What types of visual clues are pivotal in identifying emotions? This necessitates a careful consideration of the unique characteristics inherent to the task, along with a comprehensive understanding of the potential visual cues associated with human emotions. In this work, we propose a novel visual instruction data mechanism to remove the inherent subjectivity and ambiguity in emotional interpretation. Specifically, we integrate a broad spectrum of emotion attributes across multiple levels: low-level attributes (e.g., brightness, colorfulness), midlevel attributes (e.g., scene type and object class), and highlevel attributes (e.g., facial expressions and human actions), building upon insights from previous work [13]. This comprehensive strategy not only aligns with the intricate nature of emotions but also significantly enhances the model\u2019s capability to interpret and understand visual emotional cues more accurately and holistically. The overall pipeline of our proposed emotion visual instruction data is shown in Fig. 4 (a). For an image Ximg, three types of image-related contexts are essential for GPT4 to generate emotion instruction data: (i) a caption Xc, (ii) an emotion attribute list Xattr, which includes emotion class, brightness, colorfulness, scene type, object class, facial expression, and human action, and (iii) the system prompt, designed to enable GPT-4 to comprehend the specific task 4 requirement1. We first manually design a few examples which are used as seed examples for in-context learning to query GPT-4. This operation leverages the model\u2019s ability to extrapolate from given examples, enhancing its understanding and response accuracy based on the principles of few-shot learning [7]. Our generated emotion instruction data includes three types: Categorical, Conversation, and Reasoning. Building upon previous research [7], our generated instruction data adheres to the dialogue format, exemplified in Fig. 5. Our strategy for generating emotion instruction data adopts a progressive approach from simple to complex. Initially, for the Categorical data, we transform the associated emotion class of the image into a structured format. This process serves as the foundational component of our emotion instruction data. For the Conversation data, our framework is designed to create dialogues in which the GPT assistant interacts with an inquirer, focusing on the emotion attributes of the image. In this setup, the assistant\u2019s responses are tailored to interpret and describe the image as though it were within its own visual field, thereby providing insights from an observational viewpoint. The scope of questions posed is comprehensive, encompassing the types of objects depicted, their actions, and the dynamics of their interrelationships. The dialogues we generate fall into two categories: (i) Basic Interaction, focusing on the provided emotion attribute list with simple, direct characteristics, and (ii) Advanced Interaction, which builds on the first type to reach greater conversational complexity and sophistication. For the Reasoning data, our approach extends beyond mere visual content, prompting the model to generate indepth reasoning questions. To enhance the dialogue\u2019s credibility and structure, detailed examples are incorporated alongside logical reasoning steps, ensuring that the discourse convincingly captures the intricacies of the visual content. 3.3. Emotion Visual Instruction Tuning After acquiring the emotion visual instruction data as detailed in Sec. 3.2, our goal is to employ this data in enhancing the existing Visual Instruction Tuning model. This enhancement aims to align the LLMs\u2019 existing knowledge with the emotion understanding domain. As shown in Fig. 4 b, we have developed an Emotion Visual Instruction Tuning (EmoVIT) architecture based on InstructBLIP [9]. This architecture specifically leverages its Instruction-aware Q-Former Module, as depicted in Fig. 4 c, for emotion-centric instructional tasks. 1A detailed description of the system prompt is provided in the supplementary materials. Figure 5. The sample of our generated visual emotion instruction data. Specifically, the Instruction-aware Q-Former Module takes in the emotion instruction tokens, queries, and image embeddings as input. The image embeddings are extracted by a frozen image encoder. The learnable queries are initially produced by the pre-trained Q-Former of InstructBLIP. During training, the Instruction-aware module enhances task-specific feature extraction. It does this by integrating emotion instruction and query embeddings within self-attention layers, aligning visual information with the LLM\u2019s instruction-following requirements. Our approach adopts cross-entropy loss, tailoring it to the intricacies of visual emotion recognition tasks, thus ensuring precise and contextually relevant model training outcomes. We note that the data generated by our approach is not confined to a single model but can also be applied to other Visual Instruction Tuning models, such as LLaVA [25]. Notably, when LLaVA is fine-tuned with our data, it exhibits a significant enhancement in emotion recognition capabilities, as detailed in Sec. 4.2. In this way, we demonstrate not only the effectiveness but also the transferability of our generated data, showing its broad applicability and impact. 5 4. Experimental Results 4.1. Implemental Details Our implementation is based on the LAVIS library [31]. Our EmoVIT starts with a pre-trained InstructBLIP baseline and proceeds to fine-tune exclusively the Q-Former module, whilst keeping both the image encoder and the language model frozen. The parameters for our training adhere to the default settings established by InstructBLIP. Datasets. We evaluate our framework on ten benchmark datasets annotated under different scenarios and class number, namely EmoSet [13], WEBEmo [11], Emotion6 [34], the Flickr and Instagram (FI) [35], Artphoto [36], IAPS [37], Abstract [36], EmotionROI [38], UnbiasedEmo [11], and OxfordTVG-HIC [33]. Held-in Pretraining. Following previous work [9], we divide our dataset into two categories: held-in for pretraining and held-out for evaluation 2. Considering the EmoSet dataset\u2019s comprehensive inclusion of emotion attributes for each image, it has been chosen as the primary resource for our held-in pretraining phase. Simultaneously, for a broader assessment, we perform held-out evaluations using the test sets from various other datasets. For the generation of emotion visual instruction data, we initially employ the BLIP2 model for image captioning, followed by leveraging the GPT-4 API to generate emotion instruction data. In total, our collection comprises Categorical, Conversation, and Reasoning instruction data, derived from 51,200 unique images. This represents less than 50% of the entire EmoSet. 4.2. Held-out Evaluation As shown in Tab. 1, our proposed methodology exhibits a marked superiority in performance relative to the burgeoning Visual Instruction Tuning Methods. Since they have been pre-trained on dozens of large-scale datasets, it is evident that our generated emotion visual instruction data is particularly effective for emotional understanding Our results signify a paradigm shift, heralding a new era of model training that relies less on explicit supervision and more on the robustness of emotion instruction-driven learning. The Effectiveness of Our Proposed Emotion Visual Instruction Data. As the first to introduce the concept of emotion visual instruction data, our study seeks to evaluate the generalizability of this newly generated instruction data. Our goal is to test its efficacy not only with InstructBLIP but also across other Visual Instruction Tuning model, to understand its broader applicability. As depicted in Fig. 6, we employ two Visual Instruction Tuning models, LLaVA and InstructBLIP, which were fine-tuned on our specially gen2Unlike the setup in InstructBLIP, our dataset exclusively comprises emotion-related content. Consequently, our held-out evaluation does not constitute a strict zero-shot evaluation in the conventional sense. Figure 6. The improvement of our proposed emotion visual instruction tuning data tuning on LLaVA [7] and InstructBLIP [9]. erated emotion visual instruction data. Subsequent testing across five distinct datasets reveals notable improvements in both models, substantiating the efficacy of our generated data. Notably, InstructBLIP demonstrated a more substantial overall enhancement compared to LLaVA. This can be attributed to InstructBLIP\u2019s specialized Instruction-aware Q-Former Module, which adeptly extracts the salient features of our emotion instructions and synergizes them effectively with the corresponding images, thereby yielding improved performance. 4.3. Effectiveness of Different Instruction Data 4.3.1 Ablation Study of Different Instruction Data The ablation study outlined in Tab. 2 provides a comprehensive analysis of the impact that different instructional data types have on model performance, specifically concerning accuracy metrics on the EmoSet test set. Initially, the model, referred to as InstructBLIP [9], operates without the integration of the three types of instructional data and attains a baseline accuracy of 42.20%. This foundational performance is significantly enhanced with the inclusion of Categorical data, which alone contributes to a substantial increase in accuracy. The introduction of Conversation data further amplifies this effect, underscoring the value of conversational context in improving the model\u2019s predictive capabilities. The addition of Reasoning data notably boosts performance, achieving a peak accuracy of 83.36%. This indicates that the model significantly benefits from the nuanced cues in reasoning, aiding in understanding complex emotional instructions. The gradual improvements with each data type support the idea that a diverse approach to instructional data markedly enhances model comprehension and performance. 6 Method WebEmo FI Emotion6 Abstract ArtPhoto IAPSa EmotionROI EmoSet Number of Classes 25 8 6 8 8 8 6 8 Flanmingo [8] 9.36 14.91 21.67 3.57 17.5 10.13 21.72 29.59 LLaVA [7] 12.55 56.04 49.44 19.54 36.25 42.43 46.46 44.03 BLIP2 [14] 20.10 57.72 50.00 28.57 36.25 39.24 50.51 46.79 InstructBLIP [9] 12.80 37.97 46.11 21.42 26.25 34.18 46.13 42.20 Ours* 21.12 68.09 57.81 32.34 44.90 44.13 53.87 83.36 Table 1. Held-out performance comparison on visual emotion datasets (%). Categorical Conversation Reasoning Accuracy (%) 42.20 \u2713 80.90 (+38.70) \u2713 \u2713 81.95 (+39.75) \u2713 \u2713 \u2713 83.36 (+41.16) Table 2. Ablation study of three types of instruction data. Accuracy (%) on EmoSet test set. 4.3.2 Instruction Sensitivity This work is dedicated to the creation of a varied corpus of visual emotion instruction data, alongside the development of a robust instruction-based model. Our objective is for the model to demonstrate stability, producing consistent results in the face of minor variations in instruction phrasing, provided the core objective of the task persists unchanged. To this end, we employ the Sensitivity evaluation metric, as introduced by [30], to assess the model\u2019s fidelity in generating uniform outcomes irrespective of instructional nuances. We employ two semantically similar instructions as input prompts for the model, testing their impact on the Sensitivity score across three visual emotion datasets for different Visual Instruction Tuning models. The first instruction is: \u201cFrom the given options: cls 1, cls 2, cls 3, etc., identify the emotion that most accurately reflects the image. Ensure your selection is confined to the listed options. Respond in the format: Predicted emotion:\u201d The second one states: \u201cPlease choose the emotion that best corresponds to the image from the following options: cls 1, cls 2, cls 3, etc. (Do not provide answers beyond the provided candidates.) Please reply in the following format: Predict emotion:\u201d As illustrated in Fig. 7, our approach, along with BLIP2, exhibited exceptionally low Sensitivity values, demonstrating robustness in understanding the instructions. Conversely, Flamingo and InstructBLIP displayed a higher degree of sensitivity, indicating a relative susceptibility to variations in instruction wording. 4.4. Robustness Given that current emotion recognition datasets often exhibit category imbalances and labeling biases, our aim is Figure 7. The sensitivity score comparison (the lower the better). to evaluate the generalization ability of various learning strategies more impartially. Hence, we selected the UnBiasedEmo test set [11], which is uniquely suited for recognizing intricate emotions, such as those associated with identical objects or scenes, e.g., landscapes, crowds, families, babies, and animals, where the emotional undertones can be particularly subtle and complex. As depicted in Tab. 3, our proposed methodology demonstrates superior performance when benchmarked against conventional supervised emotion recognition techniques, thereby underscoring the efficacy of our approach in more accurately discerning complex emotional contexts. Method Accuracy (%) Direct Learning [11] 71.64 Self-Directed Learning [11] 72.45 Joint Learning [11] 71.64 Curriculum Learning [11] 74.27 Ours* 74.72 Table 3. Performance comparison on UnbiasedEmo dataset. 7 Figure 8. The sample of our generated explanation. 4.4.1 Affective Reasoning In the domain of visual emotion recognition, where ambiguity and subjectivity are pervasive, the advent of an interpretable model is of considerable value. Such a model elucidates its cognitive processes, enhancing its trustworthiness and practicality in scenarios requiring a delicate grasp of emotional subtleties. Leveraging Visual Instruction Tuning, our model transcends mere categorization of emotions; it articulates the underlying rationale for its classifications. The executing commands for identifying emotions and elucidating the decision basis is illustrated below: Predicted emotion: [emotion]. Reason: [explanation]. Our model delineates the visual features influencing its determinations, thereby addressing the complexities inherent in discerning and explaining emotion-related nuances. The explanations provide us with visual clues contained within the images, as exemplified in Fig. 8. It provides interpretable visual indicators that inform the model\u2019s outputs, as demonstrated in our example, by disambiguating the often abstract emotional categories. 4.5. Scaling Law Pretraining data. As demonstrated in Tab. 4, there is a clear correlation between the size of the pre-training dataset and improved performance. Consequently, we anticipate that an increase in training data in the future could enhance the effectiveness of Emotion Visual Instruction Tuning. 4.6. Humour Caption Generation The comprehension of humor is intricately linked to the understanding of emotions. Leveraging our generative language model, we conduct a caption generation task without 5% 10% 30% 50% 79.00 81.00 79.34 83.36 Table 4. Ablation study of different portion of pre-training data. Accuracy (%) on EmoSet test set. Figure 9. The sample of our generated humour caption vs human writing humour caption from OxfordTVG-HIC. modifying the model\u2019s architecture, specifically testing the model\u2019s proficiency in generating humorous captions. For this purpose, we select 50 images from the OxfordTVGHIC dataset [33] and generate corresponding captions using our model. Subsequently, the captions produced by our model are compared with manually annotated captions from the dataset in a user study. Thirty participants were asked to vote on which captions were more humorous. Our modelgenerated captions receive 60% of the votes, demonstrating its effective humor generation capabilities. One sample is visualized in Fig. 9. 5. Conclusion In our study, drawing upon the distinctive visual cues key to visual emotion recognition, we present a GPT-assisted pipeline specifically designed for generating emotion visual instruction data. The developed EmoVIT model incorporates emotion-specific instructions, leveraging LLMs for enhanced performance. Our comprehensive experiments validate its effectiveness in emotion classification, affective reasoning, and humor understanding. This comparative analysis sets a benchmark for Emotion Visual Instruction Tuning with LLMs, providing valuable insights and directions for future research in this field. 8 EmoVIT: Revolutionizing Emotion Insights with Visual Instruction Tuning Supplementary Material Figure 10. The sample of our generated visual emotion instruction data. 6. More Emotion Visual Instruction Data Sample Additional samples from our Emotion Visual Instruction Data collection are presented in Figures 10 and 11. Upon acceptance, the complete dataset will be made available on our project webpage. 7. Implemental Details 7.1. Our Experiment Settings Held-out vs supervised learning. We adopt the terminology held-in and held-out as defined in the work of InstructBLIP [9]. For the held-in, we utilize the training subset of the EmoSet dataset for Emotion Visual Instruction Tuning, with its corresponding test subset serving the purpose of held-in evaluation. The outcomes of this evaluation are depicted in Fig. 1 of the main manuscript. Figure 11. The sample of our generated visual emotion instruction data. In our held-out evaluation, we focus on determining how instruction tuning bolsters the model\u2019s ability to transfer learning to new and unseen data. It\u2019s crucial to highlight that our methodology sets a distinct path from InstructBLIP\u2019s framework. Our dataset is specifically curated with emotion-centric content, presenting unique categories such as cheerfulness and enthrallment found in WEBEmo, which are not typically included in other datasets. Conversely, common emotional categories like anger and fear are shared with other collections, such as FI and Emotion6. This distinctive mix in our dataset implies that our held-out evaluation operates on a cross-domain level, examining the model\u2019s ability to interpret and adapt to diverse emotional contexts not strictly confined to zero-shot scenarios. 7.2. System Prompt The system prompt inputted into ChatGPT for the purpose of gathering instruction-based data is presented below. 1 You are an AI visual assistant, and you are seeing a single image. What you see are provided with one caption and some emotion related attributes, describing the same image you are looking at. Answer all questions as you are seeing the image. The range of brightness is from 0 (darkest) to 1 (brightest), and the range of colorfulness is from 0 (black-and-white) to 1 (the most colorful). Design two questions for a conversation between you and a person asking about this photo. The answers should be in a tone that a visual AI assistant is seeing the image and answering the question. Ask diverse questions and give corresponding answers. Include questions asking about the visual content of the image, including the object types, object actions, relationship among objects, etc. Only include questions that have definite answers: (1) one can see the content in the image that the question asks about and can answer confidently; (2) one can determine confidently from the image that it is not in the image. Do not ask any question that cannot be answered confidently. Please answer with the format Question: Answer: Also include one complex question that is relevant to the content in the image, for example, asking about background knowledge of the objects in the image, asking to discuss about events happening in the image, etc. Again, do not ask about uncertain details. Provide detailed answers when answering complex questions. For example, give detailed examples or reasoning steps to make the content more convincing and well-organized. You can include multiple paragraphs if necessary. 7.3. Details of the Q-Former Similar to the approach in InstructBLIP, Q-Former is a lightweight transformer architecture that utilizes a collection of trainable query vectors to distill visual features from a static image encoder. The Q-Former acts as the trainable module to bridge the gap between a frozen image encoder and a frozen LLM. Its role is to curate and present the most pertinent visual information, thereby enabling the LLM to generate the targeted textual output efficiently. Following the default setting, in our experimental setup, we employ 32 distinct queries, each with a dimensionality of 768. 7.4. Sensitivity Formula As mentioned in Sec.4.3.2 in the main paper, we employ the Sensitivity evaluation metric, as introduced by [30], to assess the model\u2019s fidelity in generating uniform outcomes irrespective of instructional nuances. Specifically, for each task t \u2208T, given its associated instances with task instructions: Dt = {(It j, xt j, yt j) \u2208T \u00d7 Xt \u00d7 Y t}N j=1, sensitivity is defined as: Et\u2208T \" \u03c3i\u2208It \u0002 E(x,y)\u2208Dt [L(f\u03b8(i, x), y)] \u0003 \u00b5i\u2208It \u0002 E(x,y)\u2208Dt [L(f\u03b8(i, x), y)] \u0003 # (3) where L denotes the evaluation metric, i.e., emotion classification accuracy, f\u03b8(\u00b7) represents the Visual Instruction Tunign model. The standard deviation and mean of the model\u2019s performance across all instructions are denoted by \u03c3i\u2208It[\u00b7] and \u00b5i\u2208It[\u00b7], respectively. 8. Ablation Study of LLM Model Size In our attempts with the EmoVIT architecture\u2019s LLM, we explored the use of models of varying sizes (as shown in Tab. 5). The results indicated that the smaller model, Vicuna7B, outperformed its larger counterparts. This may be attributed to the limited training data available for our task, which potentially underutilizes the capabilities of larger models. Consequently, we anticipate that an increase in training data in the future could enhance the effectiveness of Emotion Visual Instruction Tuning. Vicuna-7B Vicuna-13B FlanT5XL 83.36 82.21 80.98 Table 5. Ablation study of different LLM model size. Accuracy (%) on EmoSet test set. 9. GPT-4 vs GPT-4 Turbo We conducted a comparative analysis of conversational datasets derived from GPT-4 (the model name is gpt-4 in the API) against the recently released GPT-4 Turbo (the model name is gpt-4-1106-preview in the API). The comparative metrics yielded negligible differences between the two models (83.36% vs 82.96% on EmoSet test set). 10. Adding In-context Samples in Held-out Evaluation Recent LLMs are capable of in-context learning when provided with a limited number of examples in a few-shot manner. In this work, we have also embarked on such an exploration. For instance, Tab. 6 presents the in-context samples utilized within the EmotionROI dataset. During our heldout evaluation, we incorporated three in-context samples for each category, consisting of a caption paired with its corresponding emotion class. Nevertheless, in our experimental observations, we did not witness any enhancement in performance attributable to furnishing the LLM with these incontext examples. Consequently, our finalized methodology did not incorporate in-context samples during the heldout evaluation phase. 2 Description Emotion Unleashed Fury: A portrait of raw, unfiltered anger etched on the subject\u2019s face. Anger Volcanic Eruption in Human Form: A Portrait of Unrestrained Fury. Anger An explosive portrait of raw fury, where every clenched jaw and furrowed brow tells a tale of unchecked anger. Anger Face contorted in a grimace of pure disgust, as if they just tasted a year-old lemon. Disgust Caught in the throes of revulsion, a face grimaces as if it just tasted the world\u2019s sourest lemon. Disgust Picture Perfect: A Masterclass in the Art of Disgust Expression Disgust A chilling moment of pure terror, etched in every detail. Fear A chilling moment of pure terror etched on the face, a stark embodiment of fear. Fear someone with a wide smile, a group Joy Overflowing with joy, like a puppy at a park! Joy A poignant portrait of sorrow, where teardrops are the silent language of grief. Sadness An evocative portrayal of sorrow, with shadows seemingly swallowing the light, reflecting the heavy weight of sadness. Sadness An abstract portrayal of solitude, where the vivid hues of melancholy paint a poignant picture of sadness. Sadness Caught in a moment of pure astonishment, eyes wide and mouth agape. Surprise Caught in the headlights of astonishment: a jaw-dropping moment of surprise! Surprise Caught in the Act! A person\u2019s wide-eyed gasp of sheer surprise. Surprise Table 6. Illustrative Examples of Emotion Descriptors in Visual Data 11. Limitation and future work Due to the reliance on the GPT-API and cost considerations, our held-in pretraining phase utilized less than 50% of the EmoSet dataset. Despite outperforming other methods, we recognize the potential for significant improvements in future work by expanding the data scale. We anticipate that advancements in visual emotion understanding will parallel increases in both data and model scale. 3"
17
+ }
intro_8K/test_introduction_short_2404.16678v1.json ADDED
@@ -0,0 +1,16 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16678v1",
3
+ "title": "Multimodal Semantic-Aware Automatic Colorization with Diffusion Prior",
4
+ "abstract": "Colorizing grayscale images offers an engaging visual experience. Existing\nautomatic colorization methods often fail to generate satisfactory results due\nto incorrect semantic colors and unsaturated colors. In this work, we propose\nan automatic colorization pipeline to overcome these challenges. We leverage\nthe extraordinary generative ability of the diffusion prior to synthesize color\nwith plausible semantics. To overcome the artifacts introduced by the diffusion\nprior, we apply the luminance conditional guidance. Moreover, we adopt\nmultimodal high-level semantic priors to help the model understand the image\ncontent and deliver saturated colors. Besides, a luminance-aware decoder is\ndesigned to restore details and enhance overall visual quality. The proposed\npipeline synthesizes saturated colors while maintaining plausible semantics.\nExperiments indicate that our proposed method considers both diversity and\nfidelity, surpassing previous methods in terms of perceptual realism and gain\nmost human preference.",
5
+ "authors": "Han Wang, Xinning Chai, Yiwen Wang, Yuhong Zhang, Rong Xie, Li Song",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Diffusion AND Model",
14
+ "gt": "Automatic colorization synthesizes a colorful and semanti- cally plausible image given a grayscale image. It is a classical computer vision task that has been studied for decades. How- ever, existing automatic colorization methods cannot provide satisfactory solution due to the two main challenges: incorrect semantic colors and unsaturated colors. Aiming to synthesize semantically coherent and percep- tually plausible colors, generative models have been exten- sively incorporated into relevant research. Generative adver- sarial networks (GAN) based [4, 5, 1] and autoregressive- based [6, 2, 7] methods have made notable progress. Al- though the issue of incorrect semantic colors has been par- tially addressed, significant challenges still remain. See the yellow boxes in Figure 1, the semantic errors significantly undermine the visual quality. Recently, Denoising Diffusion Probabilistic Models(DDPM) [8] has demonstrated remark- able performance in the realm of image generation. With its exceptional generation capabilities, superior level of de- tail, and extensive range of variations, DDPM has emerged as a compelling alternative to the GAN. Moreover, the con- trollable generation algorithms based on the diffusion model have achieved impressive performance in various downstream tasks such as T2I [9], image editing [10], super resolu- tion [11], etc. In this work, we leverage the powerful diffu- sion prior to synthesize plausible images that align with real- world common sense. Unfortunately, applying pre-trained diffusion models directly to this pixel-wise conditional task lead to inconsistencies [12] that do not accurately align with the original grayscale input. Therefore, it becomes imperative to provide more effective condition guidance in order to en- sure coherence and fidelity. We align the luminance channel both in the latent and pixel spaces. Specifically, our proposed image-to-image pipeline is fine-tuned based on pre-trained stable diffusion. The pixel-level conditions are injected into the latent space to assist the denoising U-Net in producing latent codes that are more faithful to grayscale images. A luminance-aware decoder is applied to mitigate pixel space distortion. In addition to incorrect semantics, another challenge in this task is unsaturated colors. For example, the oranges in the first two columns of Figure 1 suffer from the unsaturated colors. To moderate the unsaturated colors, priors such as categories [5], bounding boxes [13] and saliency maps [14] have been introduced in relevant research. Based on this in- sight, we adopt multimodal high-level semantic priors to help the model understand the image content and generate vivid colors. To simultaneously generate plausible semantics and vivid colors, multimodal priors, including category, caption, and segmentation, are injected into the generation process in a comprehensive manner. In summary, we propose an automatic colorization pipeline to address the challenges in this task. The contri- butions of this paper are as follows: \u2022 We extend the stable diffusion model to automatic im- age colorization by introducing pixel-level grayscale conditions in the denoising diffusion. The pre-trained diffusion priors are employed to generate vivid and arXiv:2404.16678v1 [cs.CV] 25 Apr 2024 BigColor CT2 ControlNet Ours Fig. 1. We achieve saturated and semantic plausible colorization for grayscale images surpassing the GAN-based(BigColor [1]), transformer-based(CT2 [2]) and diffusion-based(ControlNet [3]) methods. plausible colors. \u2022 We design a high-level semantic injection module to enhance the model\u2019s capability to produce semantically reasonable colors. \u2022 A luminance-aware decoder is designed to mitigate pixel domain distortion and make the reconstruction more faithful to the grayscale input. \u2022 Quantitative and qualitative experiments demonstrate that our proposed colorization pipeline provides high- fidelity, color-diversified colorization for grayscale im- ages with complex content. User study further indi- cates that our pipeline gain more human preference than other state-of-the-art methods.",
15
+ "main_content": "Learning-based algorithms have been the mainstream of research on automatic colorization in recent years. Previous methods suffer from unsaturated colors and semantic confusion due to the lack of prior knowledge of color. In order to generate plausible colors, generative models have been applied to automatic colorization tasks, including adversarial generative networks [4, 5, 1] and transformers [6, 2, 7]. Besides, [15] shows that diffusion models are more creative than GAN. DDPM has achieved amazing results in diverse natural image generation. Research based on DDPM has confirmed its ability to handle a variety of downstream tasks, including colorization [16]. To alleviate semantic confusion and synthesize more satisfactory results, priors are introduced into related research, including categories [5], saliency maps [14], bounding boxes [13], etc. 3. METHOD 3.1. Overview A color image ylab, represented in CIELAB color space, contains three channels: lightness channel l and chromatic channels a and b. The automatic colorization aims to recover the chromatic channels from the grayscale image: xgray \u2192\u02c6 ylab. In this work, we propose an automatic colorization pipeline for natural images based on stable diffusion. The pipeline consists of two parts: a variational autoencoder [17] and a denoising U-Net. Explicitly, the VAE is for the transformation between pixel space x \u2208RH\u00d7W \u00d73 and latent space z \u2208Rh\u00d7w\u00d7c. While the denoising U-Net applies DDPM in the latent space to generate an image from Gaussian noise. The framework of our pipeline is shown in Figure 2. First, the VAE encodes grayscale image xgray into latent code zc. Next, the T-step diffusion process generates a clean latent code z0 from Gaussian noise zT under the guidance of image latent zc and high-level semantics. Finally, z0 is reconstructed by a luminance-aware decoder to obtain the color image \u02c6 y. The pixel-level grayscale condition and high-level semantic condition for denoising process are introduced in the latent space, shown in the yellow box in Figure 2. We elaborate on the detailed injections of these conditions in Section 3.2 and Section 3.3, respectively. As for the reconstruction processes, the detailed designs of the luminance-aware decoder are described in Section 3.4. 3.2. Colorization Diffusion Model Large-scale diffusion model has the capability to generate high-resolution images with complex structures. While naive usage of diffusion priors generates serious artifacts, we introduce pixel-level luminance information to provide detailed guidance. Specifically, we use encoded grayscale image zc as control condition to enhance U-Net\u2019s understanding of luminance information in the latent space. To involve the grayscale condition in the entire diffusion process, we simultaneously input the latent code zt generated in the previous time step and the noise-free grayscale latent code zc into the input layer of UNet at each time step t: Denoising U-Net Input \ud835\udc65!\"#$ \u2208\ud835\udc45%\u00d7'\u00d7( Text Encoder EfficientNet BLIP Transfiner Category Caption Labels resize cat Output \ud835\udc66 % \u2208\ud835\udc45%\u00d7'\u00d7) \ud835\udc67\u0302*+( Luminance-aware Decoder Encoder \ud835\udefc, conv \ud835\udc53 -./0 , \ud835\udc53 12 3 \ud835\udc53 * 12 3 \u00d7 \ud835\udc40, \u2208\ud835\udc454\u00d7/\u00d7( \ud835\udc67*+( , \ud835\udc67*+( \ud835\udc67* \ud835\udc675 \u2208\ud835\udc454\u00d7/\u00d76 \ud835\udc67\u03027 \u00d7\ud835\udc47steps Cross Attention Text Embeddings \ud835\udc50* Latent Space Fig. 2. Overview of the proposed automatic colorization pipeline. It combines a semantic prior generator (blue box), a highlevel semantic guided diffusion model(yellow box), and a luminance-aware decoder (orange box). z\u2032 t = conv1\u00d71(concat(zt, zc)) (1) In this way, we take advantage of the powerful generative capabilities of stable diffusion while preserve the grayscale condition. The loss function for our denoising U-Net is defined in a similar way to stable diffusion [18]: L = Ez,zc,c,\u03f5\u223cN (0,1),t[||\u03f5 \u2212\u03f5\u03b8(zt, t, zc, c)||2 2] (2) where z is the encoded color image, zc is the encoded grayscale image, c is the category embedding, \u03f5 is a noise term, t is the time step, \u03f5\u03b8 is the denoising U-Net, zt is the noisy version of z at time step t. 3.3. High-level Semantic Guidance To alleviate semantic confusion and generate vivid colors, we design a high-level semantic guidance module for inference. As shown in Figure 2, the multimodal semantics are generated by the pre-trained semantic generator in the blue box. Afterwards, text and segmentation priors are injected into the inference process through cross attention and segmentation guidance respectively, as shown in the yellow box in Figure 2. Specifically, given the grayscale image xgray, the semantic generator produce the corresponding categories [19], captions [20] and segmentations [21]. The category, caption, and segmentation labels are in textual form, while the segmentation masks are binary masks. For textual priors, the CLIP [22] encoder is employed to generate the text embedding ct. The text embedding guidance is applied in denoising U-Net via cross-attention mechanism. Given the timestep t, the concatenated noisy input zt and the text condition ct, the latent code zt\u22121 is produced by the Colorization Diffusion Model(CDM): zt\u22121 = CDM(zt, t, zc, ct) (3) For segmentation priors, we use the pre-trained transfiner [21] to generate paired segmentation masks M and labels L. For each instance, we first resize the binary mask Mi \u2208RH\u00d7W \u00d71 to align the latent space. The resized mask is represented as M i \u2208Rh\u00d7w\u00d71. Then we use the CDM to yield the corresponding latent code zi t\u22121 of the masked region: zi t\u22121 = CDM(zt, t, zc \u00d7 M i, Li) (4) Finally, we combine the original latent code zt\u22121 and the instances to yield the segment-aware latent code \u02c6 zt\u22121: \u02c6 zt\u22121 = i=k X i=1 [zt\u22121 \u00d7 (1 \u2212M i) + zi t\u22121 \u00d7 M i] (5) We set a coefficient i \u2208[0, 1] to control the strength of segmentation guidance. The threshold is defined as Tth = T \u00d7 (1 \u2212i). The segmentation mask is used to guide the synthesis process at inference time step t > Tth. We set i = 0.3 for the experiment. Users have the flexibility to select a different value based on their preferences. 3.4. Luminance-aware Decoder As the downsampling to latent space inevitably lose detailed structures and textures, we apply the luminance condition to InstColor ChromaGAN BigColor ColTran CT2 ControlNet Ours Fig. 3. Qualitative comparisons among InstColor [13], ChromaGAN [5], BigColor [1], ColTran [6], CT2 [2], ControlNet [3] and Ours. More results are provided on https://servuskk.github.io/ColorDiff-Image/. the reconstruction process and propose a luminance-aware decoder. To align the latent space with stable diffusion, we freeze the encoder. The intermediate grayscale features obtained in the encoder are added to the decoder through skip connections. Specifically, the intermediate features f i down generated by the first three downsample layers of the encoder are extracted. These features are convolved, weighted, and finally added to the corresponding upsample layers of the decoder: \u02c6 f j up = f j up + \u03b1i \u00b7 conv(f i down), i = 0, 1, 2; j = 3, 2, 1 (6) We adopt L2 loss L2 and perceptual loss [23] Lp to train the luminance-aware decoder: L = L2 + \u03bbpLp (7) 4. EXPERIMENT 4.1. Implementation We train the denoising U-Net and luminance-aware decoder separately. Firstly, we train the denoising U-Net on the imagenet [24] training set at the resolution of 512 \u00d7 512. We initialize the U-Net using the pre-trained weights of [18]. The learning rate is fixed at 5e \u22125. We use the classifierfree guidance [25] strategy and set the conditioning dropout probability to 0.05. The model is updated for 20K iterations with a batch size of 16. Then we train the luminance-aware decoder on the same dataset and at the same resolution. The VAE is initialized using the pre-trained weights of [18]. We fix the learning rate at 1e\u22124 for 22,500 steps with a batch size of 1. We set the parameter \u03bbp in Eq.(7) to 0.1. Our tests are conducted on the COCO-Stuff [26] val set containing 5,000 images of complex scenes. At inference, we adopt DDIM sampler [27] and set the inference time step T = 50. We conduct all experiments on a single Nvidia GeForce RTX 3090 GPU. 4.2. Comparisons We compare with 6 state-of-the-art automatic colorization methods including 3 types: 1) GAN-based method: InstColor [13], ChromaGAN [5], BigColor [1], 2)Transformerbased method: ColTran [6], CT2 [2], 3) Diffusion-based method: ControlNet [3]. Qualitative Comparison. We show visual comparison results in Figure 3. The images in the first and second rows indicate the ability of the models to synthesise vivid colors. Both GAN-based and transformer-based algorithms suffer from unsaturated colors. Although ControlNet synthesises saturated colors, the marked areas contain significant artifacts. Images in the third and forth rows demonstrate the ability of the models to synthesise semantically reasonable colors. InTable 1. Quantitative comparison results. Method FID\u2193 Colorful\u2191 PSNR\u2191 InstColor [13] 14.40 27.00 23.85 ChromaGAN [5] 27.46 27.06 23.20 BigColor [1] 10.24 39.65 20.86 ColTran [6] 15.06 34.31 22.02 CT2 [2] 25.87 39.64 22.80 ControlNet [3] 10.86 45.09 19.95 Ours 9.799 41.54 21.02 Fig. 4. User evaluations. stColor, ChromaGAN, BigColor, CT2 and ControlNet fail to maintain the color continuity of the same object(discontinuity of colors between the head and tail of the train, hands and shoulders of the girl). While ColTran yields colors that defy common sense (blue shadows and blue hands). In summary, our method provides vivid and semantically reasonable colorization results. User Study. To reflect human preferences, we randomly select 15 images from the COCO-Stuff val set for user study. For each image, the 7 results and ground truth are displayed to the user in a random order. We asked 18 participants to choose their top three favorites. Figure 4 shows the proportion of the Top 1 selected by users. Our method has a vote rate of 22.59%, which significantly outperforms other methods. Quantitative Comparison. We use Fr\u00b4 echet Inception Distance (FID) and colorfulness [28] to evaluate image quality and vividness. These two metrics have recently been used to evaluate the colorization algorithm [1, 29] . Considering that colorization is an ill-posed problem, the ground-truth dependent metric PSNR used in previous works does not accurately reflect the quality of image and color generation [6, 29, 30], and the comparison here is for reference. As shown in Table 1, our proposed method demonstrates superior performance in terms of FID when compared to the state-of-the-art algorithms. Even though ControlNet outperforms our algorithm in the colourful metric, the results shown in the qualitative comparison indicate that the artefacts are meaningless and negatively affect the visual effect of the image. 4.3. Ablation Studies The significance of the main components of the proposed method is discussed in this section. The quantitative and visual comparisons are presented in Table 2 and Figure 5. High-level Semantic Guidance. We discuss the impact of high-level semantic guidance on model performance. The viTable 2. Quantitative comparison of ablation studies. Exp. Luminanceaware decoder High-level guidance FID\u2193 Colorful\u2191 (a) ! 10.05 33.73 (b) ! 9.917 42.55 Ours ! ! 9.799 41.54 w/o semantic ours (a)High-level guidance. w/o luminance ours (b)Luminance-aware decoder. Fig. 5. Visual comparison from ablation studies. suals shown in Figure 5(a) demonstrate our high-level guidance improves saturation of synthesised colors and mitigates failures caused by semantic confusion. The quantitative scores in Table 2 confirm the significant improvement in both color vividness and perceptual quality introduced by the highlevel semantic guidance. Luminance-aware Decoder. The pipeline equipped with a luminance-aware decoder facilitates the generation of cognitively plausible colors. As shown in the first row of Figure 5(b), the artifacts are suppressed. Furthermore, the incorporation of this decoder yields a positive impact on the retrieval of image details, as demonstrated by the successful reconstruction of textual elements in the second row of Figure 5(b). Consequently, the full model outperforms the alternative in terms of FID. It is reported a slight decrease in colorfulness score after incorporating luminance awareness which can be attributed to the suppression of outliers, as discussed in Section 4.2 regarding the analysis of the ControlNet. 5. CONCLUSION In this study, we introduce an novel automatic colorization pipeline that harmoniously combines color diversity with fidelity. It generate plausible and saturated colors by leveraging powerful diffusion priors with the proposed luminance and high-level semantic guidance. Besides, we design a luminance-aware decoder to restore image details and improve color plausibility. Experiments demonstrate that the proposed pipeline outperforms previous methods in terms of perceptual realism and attains the highest human preference compared to other algorithms. 6. ACKNOWLEDGEMENT This work was supported by National Key R&D Project of China(2019YFB1802701), MoE-China Mobile Research Fund Project(MCM20180702), the Fundamental Research Funds for the Central Universities; in part by the 111 project under Grant B07022 and Sheitc No.150633; in part by the Shanghai Key Laboratory of Digital Media Processing and Transmissions. 7."
16
+ }
intro_8K/test_introduction_short_2404.16698v1.json ADDED
@@ -0,0 +1,16 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16698v1",
3
+ "title": "Cooperate or Collapse: Emergence of Sustainability Behaviors in a Society of LLM Agents",
4
+ "abstract": "In the rapidly evolving field of artificial intelligence, ensuring safe\ndecision-making of Large Language Models (LLMs) is a significant challenge.\nThis paper introduces Governance of the Commons Simulation (GovSim), a\nsimulation platform designed to study strategic interactions and cooperative\ndecision-making in LLMs. Through this simulation environment, we explore the\ndynamics of resource sharing among AI agents, highlighting the importance of\nethical considerations, strategic planning, and negotiation skills. GovSim is\nversatile and supports any text-based agent, including LLMs agents. Using the\nGenerative Agent framework, we create a standard agent that facilitates the\nintegration of different LLMs. Our findings reveal that within GovSim, only two\nout of 15 tested LLMs managed to achieve a sustainable outcome, indicating a\nsignificant gap in the ability of models to manage shared resources.\nFurthermore, we find that by removing the ability of agents to communicate,\nthey overuse the shared resource, highlighting the importance of communication\nfor cooperation. Interestingly, most LLMs lack the ability to make\nuniversalized hypotheses, which highlights a significant weakness in their\nreasoning skills. We open source the full suite of our research results,\nincluding the simulation environment, agent prompts, and a comprehensive web\ninterface.",
5
+ "authors": "Giorgio Piatti, Zhijing Jin, Max Kleiman-Weiner, Bernhard Sch\u00f6lkopf, Mrinmaya Sachan, Rada Mihalcea",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "cs.CL",
9
+ "cats": [
10
+ "cs.CL"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "LLM AND Agent",
14
+ "gt": "Recent advances in large language models (LLMs) have not only matched, but in some cases surpassed human performance on a variety of tasks (Achiam et al., 2023; Touvron et al., 2023; Bubeck et al., 2023; Bengio et al., 2023). At the same time, these models are increasingly being integrated into complex agent systems (Gao et al., 2023; Cognition, 2024). As LLMs become central to these systems, they inherit critical responsibilities in decision-making processes, necessitating an analysis of their ability to operate safely and reliably, especially in cooperative contexts. Cooperation is a fundamental challenge in both human and artificial societies, enabling better out- comes through collaborative efforts (Hardin, 1968; Rand and Nowak, 2013). As AI agents increasingly assume roles involving complex decision making, they face similar cooperation challenges to humans, underscoring the need for robust and safe AI practices (Dafoe et al., 2021). Despite significant advances, the study of LLMs in cooperative behavior is still in its early stages. Previous research has often focused on constrained scenarios such as board games or narrowly defined collaborative tasks (Li et al., 2023; Light et al., 2023; Xu et al., 2023; Duan et al., 2024), some efforts have been made for single-agent LLMs (Pan et al., 2023; Kinniment et al., 2023). However, these efforts do not address several challenges: (1) there is a limited understanding of how LLMs achieve and maintain cooperative norms, as we have for humans (Ostrom, 1990; Ellickson, 1991; Ostrom et al., 1999); (2) how they handle multi-turn interactions and balance safety with reward \u2217Equal contribution. 1Our code is available at https://github.com/giorgiopiatti/GovSim. arXiv:2404.16698v1 [cs.CL] 25 Apr 2024 maximization; and (3) the potential of using LLMs as a simulation platform for human psychology and economic theories has been underutilized. To address this, we present Governance of the Commons Simulation (GOVSIM), a novel simulation environment designed to evaluate LLM-based agents in multi-agent resource-sharing scenarios. This environment requires agents to engage not only in strategic reasoning, but also in ethical decision- making and negotiation. Inspired by economic research in evolutionary game theory (Axelrod and Hamilton, 1981), we build this environment to simulate real world cooperation dilemmas such as those faced by groups managing shared resources or countries negotiating treaties to mitigate climate change (Rand and Nowak, 2013; Hardin, 1968). This platform supports any text-based agent, including LLMs, and mirrors the complexity of actual human interactions, providing a benchmark to evaluate the cooperative behaviors of LLMs. Using the generative agent framework (Park et al., 2023), we build a standard agent setup into which different LLM configurations can be integrated. Using our GOVSIM, we test 15 different LLMs, including both open-weights and closed-weights models, we find that only a few achieve sustainable outcomes. To test the stability of their cooperative behavior, we design perturbation settings that include the introduction of a new agent that initially acts greedily and then is influenced by others. To improve the awareness of LLM agents about the long-term community-wide results of their actions, we implement the universalization hypothesis (Levine et al., 2020), which enables all LLM agents to improve sustainability outcomes. Through sub-skill analysis and ablation studies, we dissect the skills necessary for success in GOVSIM. Our findings underscore the importance of strategic foresight and the ability to model the intentions of other agents, which are strongly correlated with successful outcomes in the simulations. In addition, we observe that by removing the ability of agents to communicate, they overuse the shared resource, highlighting the importance of communication for cooperation. We summarize the main contributions of our work: 1. We introduce GOVSIM, the first resource-sharing simulation platform for LLM agents. This platform can test various skills of LLMs: numerical reasoning, strategic planning, ethical decision-making, and negotiation. 2. Experiments within GOVSIM, show that only 2 out of 15 tested LLMs managed to achieve a sustainable outcome, indicating a significant gap in the abilities of the models to manage shared resources. 3. Furthermore, we find that by removing the ability of agents to communicate, they overuse the shared resource, thus empathizing the importance of communication for cooperation. 4. We perform sub-skills analysis to identify key competencies of LLMs and find that strategic foresight and the ability to model the intentions of other agents, strongly correlated with successful outcomes in the simulations. 5. We open-source our comprehensive, full-stack toolkit to foster future research: the GOVSIM simulation environment, agent prompts, and a web interface.",
15
+ "main_content": "AI Safety As LLMs become more capable and autonomous, ensuring their safety remains a critical concern (Amodei et al., 2016; Hendrycks et al., 2021; Anwar et al., 2024). Although traditional evaluations often use standard datasets such as ETHICS (Hendrycks et al., 2020), TRUTHFULQA (Lin et al., 2022), and MORALEXCEPTQA (Jin et al., 2022), these methods fall short in addressing the complexities inherent in multi-agent interactions and broader real-world scenarios. Furthermore, while LLM agents are a relatively recent development whose applications extend well beyond simple chatbot functionality, the majority of existing research has primarily evaluated these agents in specific domains such as information retrieval and software development (Zhou et al., 2023; Liu et al., 2023; Jimenez et al., 2023; Deng et al., 2024). Most similar to our GOVSIM are MACHIAVELLI (Pan et al., 2023) and GTBENCH (Duan et al., 2024), which extend evaluations to scenarios involving strategic interactions and game-theoretic reasoning, respectively. In MACHIAVELLI they investigate harmful behavior vs. reward maximization in a benchmark of single-agent choose-your-own-adventure games. In GTBENCH they evaluate agents on game-theoretic reasoning. In contrast, our GOVSIM focuses on multi-agent scenarios that require 2 Agents Home Harvesting Discussion Figure 1: Overview of the GOVSIM simulation environment. The simulation unfolds in various stages. Home: agents plan for future rounds and strategize their actions based on past rounds. Harvesting: agents collect resources, like fishing. Discussion: agents convene to coordinate, negotiate, and collaborate. both strategy, communication and cooperation: it simulates a real-world multiround cooperation dilemma, thus introducing a more dynamic and realistic environment. Simulacra with LLMs The integration of LLMs into simulations that mimic social interactions or complex decision-making scenarios has been a growing area of interest (Park et al., 2022, 2023). These studies range from virtual societies (Lin et al., 2023; Wang et al., 2023; Kaiya et al., 2023; Hua et al., 2023) to task-specific agent collaborations (Hong et al., 2023; Nair et al., 2023; Zhang et al., 2023; Li et al., 2024). Simulation studies with LLMs have focused on pure game environments (Akata et al., 2023; Shi et al., 2023; Guo et al., 2023; O\u2019Gara, 2023), such as Werewolf (Xu et al., 2023). They have also extended them to scenarios with economic grounding (Zhao et al., 2021) and history (Hua et al., 2023). Our work in GOVSIM leverages the Generative Agents framework to explore multi-agent interactions to achieve cooperative norms, towards understanding and managing real-world cooperation dilemmas such as shared resource management. 3 Task Formulation of Governance of the Commons Simulation (GOVSIM) 3.1 Preliminaries Economics Theory Sustaining cooperation is an essential problem that enables individuals to achieve better outcomes than they could achieve on their own (Rand and Nowak, 2013). Humans solve cooperation problems across all scales of life, ranging from small groups of fishermen who harvest a shared resource to multi-national treaties that restrict pollution to reduce the adverse effects of climate change. However, when self-interested individuals or organizations are faced with paying a personal cost to sustain a greater good, cooperation can be challenging to maintain (Hardin, 1968). Although mechanism designers have developed incentive-compatible systems that can lead to cooperation between self-interested agents, these systems often assume a top-down process that coordinates the process (Shoham and Leyton-Brown, 2008). In contrast, humans seem to be able to develop mechanisms from the bottom up and implement cooperative norms in a decentralized fashion. For example, when managing a shared resource, people develop rules and norms that lead to long-term sustainable cooperation (Ostrom, 1990; Ostrom et al., 1999; Ellickson, 1991). 3.2 Problem Definition We introduce a novel simulation platform designed to evaluate the ability of LLMs to engage in cooperative behavior and effective governance of shared resources. In the Governance of the Commons Simulation (GOVSIM), agents interact with a common pool of natural resource that has finite regenerative capacity. The task is to manage the extraction or use of this resource, which can regenerate up to a certain carrying capacity. However, excessive use or extraction beyond a sustainable limit leads to degradation or total depletion of the resource. The simulation sets a critical lower bound C; If the amount of the resource falls below this level, there is an irreversible loss. Agents seek to maximize their benefits from the resource but must navigate the complexities of collective action, where individual incentives may lead to overexploitation and subsequent collapse 3 of the resource. This scenario is typically played out over a period of time, such as a year, to observe the effects of different strategies on the sustainability of the resource. The simulation can be viewed as a multi-agent partially observable Markov game with two main sub-parts, one that decides the use of the common resource and one that allows discussion and reflection, the former assigning an immediate reward based on the joint action of all agents, the latter does not assign an immediate reward and only influences the state of the game. These subparts are interleaved at periodic intervals. The agent architecture, prompts, and environment are described in Section 4. Formally, a simulation D is defined as a function that takes as input a tuple (I, M, G, E) and returns a set of trajectories which can be analyzed with various metrics. Let I be the set of agents, \u03c0i be the policy induced by an LLM M together with a generative agent architecture G, E be the dynamics of the environment. Let \u03c0 = (\u03c0i)i\u2208I be the joint policy over all agents. Each agent receives an individual reward rt i defined by the amount of collect resource. 3.3 GOVSIM Metrics In this section, we introduce various metrics that measure the social outcome, similar to Perolat et al. (2017) since in multi-agent systems with mixed incentives, like this simulation, there is no scalar metric that can track the entire state of the system. Number of Months Survived M. To assess the sustainability of a simulation, we define the number of months survived M as the longest period during which the shared resource remains above zero: M = E max h(t)>0 t, (1) where h : N \u2192N is a function that returns the amount of shared resource available at time t. The simulation ends when h(t) drops below a critical threshold C. Total Gain Ri for each agent i. Let ri t \u2208N | t = 1, . . . , T represent the sequence of resources collected by the i-th agent at time t over the simulation duration T. The total gain for each agent, Ri, is defined as: Ri = E \" T X t=1 ri t # . (2) Equality E. Equality among agents, denoted by E, is defined using the Gini coefficient to compare the total gains of all agents: E = 1.0 \u2212E \"P|I| i=1 P|I| j=1 |Ri \u2212Rj| 2|I| P|I| i=1 Ri # , (3) where |I| is the number of agents, and the absolute differences in total payoffs between pairs of agents are normalized by the total payoff of all agents. Efficiency U. Efficiency, U, measures how optimally the shared resource is used in relation to the sustainability threshold at the beginning of the simulation: U = E \uf8ee \uf8f01 \u2212 max \u00c4 0, T \u00b7 f(0) \u2212PT t=1 Rt\u00e4 T \u00b7 f(0) \uf8f9 \uf8fb, (4) where f : N \u2192N is a function that specifies the sustainability threshold at time t. Which is the maximum quantity that can be collected at time t for which at time t + 1 we will still have h(t + 1) \u2265h(t). Over-usage O. Over-usage, denoted by O is defined as the average percentage of resource collection instances that exceed the sustainability threshold: O = E \"P|I| i=1 PT t=1 1ri t>f(t) P|I| i=1 PT t=1 1ri t > 0 # , (5) 4 where 1 is an indicator function that equals 1 if the condition within the subscript is true, and 0 otherwise. In summary, our simulation can be framed as a function D : (I, M, G, E) 7\u2192(M, Ri, U, E, O), which takes as input a set of agents I, LLM M, Generative Architecture and prompts G, and environment E and returns a set of metrics defined through Eqs. (1) to (5). 3.4 Default Setting Each agent receives identical instructions that do not include any behavior that the agent should perform, such as being cooperative or greedy, since our goal is to prevent any influence on the performance of the model M. This approach allows the inherent personality and characteristics of the model, shaped by its pre-training and fine-tuning phases (Liu et al., 2024), to fully manifest. This setting can be used as a benchmark to evaluate whether the LLM agent can achieve sustainability. Our task measures the average months of survival of the population, total payoff, efficiency, and equality, over multiple simulations controlled by an LLM M. Sustainability_test(M) = D (I, M, G, E) (6) 3.5 Perturbation Tests Our work can be used as a platform for investigating the dynamics of cooperation and competition, providing a basis to explore the potential of LLMs in managing shared resources and navigating social interactions. We investigate perturbing a community of agents by inserting an agent with a more aggressive dynamics. Newcomer Perturbation Test In this test, a new player joins a community of four agents who had the opportunity to reach a community equilibrium for the first three months. The goal of the new player is to maximize profit, indifferent to the welfare of others. The experiment observes how the original group adapts or enforces cooperation to prevent resource depletion. We use the same setup as Section 3.4 and modify the prompt with the rules of the simulation as shown in Appendix B.4. 3.6 Improving Agent Behavior To improve the awareness of LLM agents of the long-term community outcomes of their actions, we increase knowledge of LLM thought \u201cuniversalization\u201d. The idea of universalization is simple: people have different ways to decide which action is best: Levine et al. (2020) describe \u201cuniversalization\u201d a mechanism that responds to the question \u201cWhat if everyone does it?\u201d, they show that when making decisions, people adopt moral rules that would lead to better consequences if hypothetically universalized. Motivated by this, we add an option to, augment the memory of each agent by providing an \u201cuniversalization\u201d of the following form, in case of the fishing scenario: Given the current situation, if everyone takes more than f, the shared resources will decrease next month. where f is defined as the sustainable threshold (see Section 3.3). We use the same setting as the sustainability test and extend the knowledge of each agent by providing the universalization statement described above in the agent\u2019s memory, and let this new architecture be noted by G\u2032. For this test we measure the difference between metrics compute on the default scenario (see Eq. (6)) with universalization and without universalization, formally: Universalization_test(M) = D (I, M, G\u2032, E) \u2212D (I, M, G, E) . (7) 4 Technical Setup of GOVSIM Our GOVSIM platform consists of two components: the environment, which manages the simulation dynamics, and the agent, which given an LLM allows it to interact with the simulation. 4.1 Environment We developed a cooperative environment for LLMs and other language-compatible reinforcement learning agents, which adheres to a multi-agent, partially observable framework with multiple rounds, each comprising distinct phases. As depicted in Figure 1, the phases include: 1. Strategy: Agents reflect on past observations, plan future actions, and strategize. 5 Strategy Discussion RULES: You are John, a fisherman, and you fish each month in a lake along with four other fishermen (Kate, Jack, Emma, Luke). The lake has a carrying capacity of 100 tons of fish. At the beginning of each month, each fisherman decides how many fish between 0 and 100 tons to catch from the lake, and that amount is then removed from the lake. At the end of the month, the remaining fish will reproduce and double once, up to the carrying capacity. [...] HOW MANY FISH TO CATCH PROMPT: Key memories of John: 2024-01-01: Before everyone fishes, there are 100 tons of fish in the lake. <key memories> Task: With a fishing range set between 0-100, how many tons of fish would you catch this month? UTTERANCE PROMPT: Key memories of John: 2024-01-01: Before everyone fishes, there are 100 tons of fish in the lake. 2024-01-30: John wanted to catch 20 tons of fish, and caught 20 tons. Scenario: John, Kate, Jack, Emma, and Luke are engaged in a group chat. Conversation so far: <previous utterances> -John: INSIGHTS PROMPT: Key memories of John: <key memories> What high-level insights can you infere from the above statements? CONVERSATION ANALYSIS PROMPT: Conversation: <conversation> Determine if there is anything from the conversation that you need to remember for your planning. Fishing Reflect Agents A m o n t h l a t e r .. .. Discussion Home his Figure 2: Prompt sketches of our baseline agent for the GOVSIM fishing scenario, detailed prompt examples can be found in Appendix A. 2. Harvesting: Agents engage in resource collection, determining the quantity of resources to harvest. 3. Discussion: The agents meet at a town hall for social interaction, facilitating group discussions among all participants. To mitigate any potential bias arising from the order in which agents select their desired quantities of resources, we adopted a simultaneous harvesting mechanism, which we refer to as concurrent harvesting. This mechanism unfolds in two distinct stages. First, agents specify the amount of resources they wish to harvest. Then, the environment allocates the resource based on these individual choices. If collective demand is less than the availability of the resource in the common pool, a direct allocation occurs. In contrast, in scenarios where demand exceeds supply, we simulate a distribution process by randomly allocating each unit to each agent until there are no more resources left or the demand of the agent is satisfied. This approach ensures fairness in the distribution of resources while preventing the influence of harvesting order. In the discussion phase, agents gather in a virtual space to engage in a collective dialog. Within this context, an external entity, the moderator, has the ability to disclose the quantities harvested by each agent during the previous cycle, a process we refer to as transparent harvesting reporting. Enabling this feature allows for transparency and accountability among participants. In contrast, by choosing not to enable this disclosure, we create an opportunity to explore the dynamics of trust and deception among agents. This experimental toggle provides valuable information on the behavioral strategies agents might adopt in the absence of information sharing, revealing their propensity to deceive or cooperate with their peers. 4.2 Agent Although our agent is inspired by the architecture described in \u201cGenerative Agents\u201d by Park et al. (2023), it is adapted to function in a structured, phase-based environment, departing from the original work\u2019s emphasis on open-endedness. Consequently, our approach does not involve extensive planning in fiveto fifteen-minute intervals that characterized the original framework. Nevertheless, our agent\u2019s reflection and action modules operate in a manner similar to the original architecture. Significantly, our version requires that the prompts for each module be adapted to our more goal-oriented task, 6 which emphasizes numerical reasoning over creativity, as opposed to the original framework\u2019s focus on simulating humans in everyday activities. In addition, our environment requires agents to engage in group discussions, a feature not directly supported in Generative Agents, which was limited to one-on-one interactions. To accommodate this, we extended the conversation module to allow a moderator to orchestrate the dialogue, determining which participant should respond next based on the flow of the conversation. This ensures that direct questions are answered by the target agent, while more general statements can invite input from any participant, fostering a more dynamic and interactive group discussion setup. To ensure consistency, we augment each prompt with a comprehensive set of rules that outline the parameters of simulation and general dynamics, drawing inspiration from the methodology Xu et al. (2023) explored. This integration serves as a guide to ensure that all agents operate with a common understanding of the context and goals of the simulation. We show an outline of the prompts for the case where agents need to share a population of fish in Figure 2. More details are described in Appendix A. 4.3 Web Interface We provide a web interface to better understand the simulation. It serves as a link between a general overview of the simulation and an in-depth examination of particular events or interactions. This is achieved by visualizing the commands executed by LLMs at critical moments, helping researchers analyze agent decisions such as resource gathering. More details can be found in Appendix D. 5 Scenario Instantiation in GOVSIM We envision an environment that allows a different set of agents to play with different simulation scenarios. We present a fishing scenario inspired by several well-established economic studies Ostrom (1990); Gordon (1954); Levine et al. (2020). Semantics of the Environment Agents must fish a lake and decide how many tons to catch each month. The selfish goal of each agent is to catch as many fish as possible. We do not limit the emergence of other goals for agents. Common Resource Description The fishing pond has a carrying capacity of 100 tons of fish. The fish population doubles each month until it reaches the carrying capacity, but if the number of tons falls below 5, the population collapses to zero. However, if there are other self-interested agents, the population will collapse quickly. For example, five fishermen can sustainably catch up to 10 tons of fish per month. But if the total amount of fish caught per month exceeds 50 tons, the population will decrease each month until it collapses. In this scenario h(st) is defined as the amount of fishing available at the beginning of month t and the suitability threshold is defined as f(st) = \u00f6 h(st) 10 \u00f9 . Agent Action Space During the harvesting phase, each agent must choose how many fish to catch that month, this is bounded between 0 and the current number of tons of fish in the lake. During the discussion phase, each agent can output any utterance in the form of text. 6 Experimental Results 6.1 Model Setup Models We set up a diverse list of general purpose instruction-tuned LLMs for the experiments on our GOVSIM. We test existing closed-weights models: GPT-3.5, GPT-4 (Achiam et al., 2023) via OpenAI API, Mistral Medium and Large via Mistral API, Claude-3 Haiku, Sonnet and Opus via Anthropic API. We also tested open-weights models: Llama-2 (7B, 13B, 70B) (Touvron et al., 2023), Mistral (7B, 8x7B) (Jiang et al., 2023), Qwen (72B) (Bai et al., 2023) and DBRX (MosaicAI, 2024) . See Appendix B.1 for exact model identifiers, hardware requirements and API costs. Implementation Details When testing LLMs, we ensure reproducibility by setting the text generation temperature to zero, i.e. greedy decoding, and provide full experimental details in Appendix B and on our GitHub. In addition, we execute our main results across 5 random seeds and provide the mean score in the main text, and standard deviation for each result in the appendix. 7 0 1 2 3 4 5 6 7 8 9 10 11 12 0 20 40 60 80 100 Collapsed group (Other models) Sustainable group (GPT-4, Claude-3 Opus) Month #tons fish after fishing Figure 3: Fish at the end of each month for various simulation runs. We have various possible outcomes, sustainable (green) and collapse (red). See Appendix B.2 for graph by LLM family. Table 1: Experiment: default. Bold number indicates the best performing model, underline number indicates the best open-weights model. Model # Months Survived Total Gain (Tons of Fish) Efficiency Max = 12 months Max=12 months \u00d7 10 tons/month Max = 100 Open-Weights Models Command R+ 1.0 20.0 16.67 DBRX 1.0 20.0 16.67 Llama-2-7B 1.0 20.0 16.67 Llama-2-13B 1.0 20.0 16.67 Llama-2-70B 1.0 20.0 16.67 Mistral-7B 1.0 20.0 16.67 Mixtral-8x7B 1.0 20.0 16.67 Qwen 72B 3.4 32.0 26.67 Closed-Weights Models Claude-3 Haiku 1.0 20.0 16.67 Claude-3 Sonnet 2.0 21.6 17.97 Claude-3 Opus 9.6 56.3 46.90 GPT-3.5 1.4 20.8 17.33 GPT-4 12.0 108.8 90.67 Mistral Medium 2.0 25.9 21.60 Mistral Large 2.4 24.8 20.67 6.2 Main Results: Default Setting In this experiment, we investigate the ability of LLM agents to maintain the lake\u2019s fish population and reach equilibrium between resource use (reward maximization) and the preservation of the fish population (safety). As shown in Figure 3, only a few simulations span several months. The metrics in Table 1 show that GPT-4 successfully maintains the shared resource over the long term, achieving nearly the maximum possible reward, while Claude-3 Opus fails to maintain the resource, with some runs collapsing before reaching 12 months. Less powerful models consume the shared resource more quickly. In particular, smaller models struggle to grasp the complexity of the simulation and typically fail to maintain the population beyond the first month, as detailed in Table 1. 6.3 Perturbation Tests What Happens When an Outsider Comes Into the Community? This experiment, using GPT-4 as the underlying LLM, examines the effects of introducing a new player into an established fishing simulation community (see Section 3.5). As shown in Figure 4b, the newcomer initially harvests a large amount of fish, but then adjusts to significantly lower catch rates in the following months. This adjustment is hypothesized to result from interactions with the existing community of four fishermen. Figure 6 provides a qualitative example of these interactions, showing how the outsider comes to understand the need to reduce his fishing effort during community discussions. 6.4 Improvement Results from Universalization Does Universalization Help the Community Survive? In this experiment, we explore the effect of incorporating universalized information, as described in Section 3.6. The metrics shown in Table 2 indicate that the introduction of universalization significantly increases survival time, total gain, and efficiency in a wide range of models. When using universalization with Mistral Medium we observe 8 1 2 3 4 5 6 7 8 9 10 11 12 0 20 40 60 80 100 Lake before fishing Agent 1 Agent 2 Agent 3 Agent 4 Agent 5 Month #tons (a) Default 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 0 20 40 60 80 100 #tons of fish before fishing Newcomer Villager Month #tons (b) Newcomer Figure 4: Number of tons present before fishing (at the beginning of the month) and distribution across agents for default (a) and newcomer (b). Table 2: Improvement on evaluation metrics when introducing universalization compared to default, see Table 1, original scores can be found in Appendix B.3. Model \u2206# Months Survived \u2206Total Gain (Tons of Fish) \u2206Efficiency Max = 12 months Max=12 months \u00d7 10 tons/month Open-Weights Models Command R+ +6.0 \u2191 +11.2 \u2191 +9.33 \u2191 DBRX +11.0 \u2191 +77.5 \u2191 +64.60 \u2191 Llama-2-7B +1.0 \u2191 +8.6 \u2191 +7.17 \u2191 Llama-2-13B 0.0 0.0 0.00 Llama-2-70B +3.5 \u2191 +23.2 \u2191 +19.33 \u2191 Mistral-7B +3.4 \u2191 +22.8 \u2191 +19.00 \u2191 Mixtral-8x7B +3.8 \u2191 +27.6 \u2191 +23.00 \u2191 Qwen 72B +7.2 \u2191 +54.3 \u2191 +45.27 \u2191 Closed-Weights Models Claude-3 Haiku +11.0 \u2191 +88.9 \u2191 +74.08 \u2191 Claude-3 Sonnet +4.6 \u2191 +39.2 \u2191 +32.70 \u2191 GPT-3.5 +6.6 \u2191 +21.1 \u2191 +17.60 \u2191 Mistral Medium -0.6 \u2193 -4.8 \u2193 -4.03 \u2193 Mistral Large +9.6 \u2191 +94.3 \u2191 +78.60 \u2191 that the simulation still collapses and due to the randomness of the API this happens on average slightly before that without universalization. Positive deltas suggest that providing LLM agents with information on the consequences of collective action can lead to more sustainable decision making and potentially higher rewards in scenarios that require careful management of shared resources, and underscore how only GPT-4 and Claude-3 Opus, across all models tested, are able to do universalized hypothesis; see Table 1. 6.5 Fine-Grained Analyses Qualitative Analysis of Agent Dialogs We perform a qualitative analysis of the conversations produced by the LLM during the discussion phase. Some language models, such as GPT-4, tend to err on the side of caution by advocating a lower fishing limit per person, as illustrated in Figure 5. In contrast, if the scenario involves a fisherman who catches significantly more fish, as illustrated in Figure 6, there is a noticeable concern among the other participants. The person who caught more fish tends to avoid the issue rather than negotiate for more access to the resource. No Language Ablation In this ablation study, we investigate the effects of removing the ability of agents to communicate. We perform this investigation on a subset of models (GPT-4, Claude-3 Opus, Qwen 72B). Comparing simulations without communication with those with communication, we find that survival months were similar in both setups. However, in the no-communication scenario, agents tend to overuse the common resource (fish) more often, as quantified by the Over-usage metric, see Table 3. This result underscores the importance of the communication phase in promoting the use of sustainable resources. Analyzing the interactions between agents, we find that in most conversations, agents discuss extraction limits that are equal to or below the sustainable threshold, thereby increasing the robustness of resource use. Sub-skill Evaluation To identify the skills that contribute to agent success in our resource-sharing scenario, we developed a sub-skill assessment benchmark. This benchmark measures basic mathemat9 Major: Ladies and gentlemen, let me give you the monthly fishing report. John caught 10 tons of fish. Kate caught 10 tons of fish. Jack caught 9 tons of fish. Emma caught 10 tons of fish. Luke caught 10 tons of fish. John: Thanks for the update, Mayor. It looks like we all had a pretty similar strategy this month, aiming for a sustainable catch. However, we\u2019ve collectively taken out 49 tons, leaving only 51 tons in the lake. This means we\u2019ll only have 100 tons again next month, but we\u2019re cutting it close. If we continue at this rate, we risk depleting the fish population if any of us decides to catch more. Maybe we should consider setting a collective limit to ensure the lake\u2019s health and our long-term income. Thoughts? Kate: John, I completely agree with your point. It\u2019s crucial that we manage this resource wisely to ensure it remains sustainable for the long term. Perhaps we could agree on a collective limit of 40 tons per month for all of us combined. This way, we leave more fish in the lake to reproduce, which could lead to a healthier fish population and, in turn, more stable incomes for us in the future. What does everyone else think about this proposal? Figure 5: Excerpt of a conversation. Major (orange) is generated by the framework, John and Kate (blue) are utterances generated by the LLM agent (LLM: GPT-4, experiment: default, see Section 3.4). John: Thank you, Mayor, for the update. It seems we\u2019ve collectively caught 80 tons this month, leaving 20 tons in the lake before reproduction. I\u2019m concerned about the impact of taking so much, especially with Luke catching 40 tons. We\u2019ve all agreed to work together to ensure the lake\u2019s sustainability and our community\u2019s livelihood. Luke, could we discuss your thoughts on this month\u2019s catch and consider adjusting our strategy to prevent overfishing? Our goal is to maintain the lake\u2019s health for the long term. Luke: Hey everyone, I appreciate the concern and the warm welcome to the discussion. I understand where you\u2019re all coming from regarding the sustainability of the lake and the community\u2019s long-term interests. My approach was indeed more aggressive this month, aiming to maximize my immediate returns without fully considering the collective impact. I\u2019m open to discussing a more balanced strategy that aligns with both the community\u2019s goals and my business objectives. Let\u2019s find a way to ensure the lake\u2019s health while also supporting our livelihoods. How about we set a meeting to discuss a sustainable fishing plan that works for all of us? Figure 6: Excerpt of a conversation when an outsider joins the community. Luke (red) is the outsider player (LLM: GPT-4, experiment: Newcomer, see Section 3.5). ical skills, the ability to analyze simulations, and the ability to integrate reasoning and mathematical insights into decision making for sustainable fishing. In Figure 7, we present results from two different test scenarios. In the first scenario, agents are instructed to determine the sustainable threshold of the simulation under the assumption that all participants fish uniformly. In the second scenario, no assumptions are made. The results indicate that only those models that can independently formulate assumptions and calculate their numerical implications are more successful in the simulation. More details and additional test cases are documented in Appendix C. 10 Table 3: Comparison of over-usage percentages between simulations with and without communication across selected LLMs. This table illustrates how the absence of communication affects resource utilization, showing a marked increase in resource over-usage Model With communication Without communication Over-usage % Over-usage % Open-Weights Models Qwen 72B 25.45 60.00 Closed-Weights Models Claude-3 Opus 18.79 50.00 GPT-4 00.51 11.67 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 0 1 2 3 4 5 6 7 8 9 10 11 12 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 0 1 2 3 4 5 6 7 8 9 10 11 12 Claude-3 Haiku Claude-3 Sonnet Claude-3 Opus DBRX GPT-3.5 GPT-4 Llama-2-7B Llama-2-13B Mistral-7B Mixtral-8x7B Mistral Medium Mistral Large Qwen 72B Score sustainable threshold (assumption) Score sustainable threshold (no assumption) Avg. survival months in sim. Figure 7: Scatter plot showing the correlation between scores on reasoning tests and average survival months in the default simulation. The x-axis represents scores on the reasoning tests: finding the sustainable threshold with assumption hint (left) and finding the sustainable threshold without assumption hint (right). The y-axis depicts the average survival months. 7 Limitations and Future Work One of the limitations of our work is the simplified nature of the resource sharing scenario. Realworld common pool resource management involves more complex dynamics, including varying regeneration rates, multiple resource types, and a wider range of stakeholder interests. Future work could extend our simulation to include these complexities, allowing for a more nuanced exploration of cooperative behavior. In addition, our model\u2019s ability to negotiate and strategize is limited by the inherent capabilities of the underlying LLMs; as LLMs evolve, we expect to see more emergent behavior during the discussion phase. Future research could focus on improving the negotiation capabilities of LLMs, and then test these improvements on our benchmark to assess their impact on cooperative behavior in resource-sharing scenarios. Further research could also explore the introduction of more advanced adversarial agents designed to disrupt cooperative norms, to gain insight into how robust these emergent behaviors are to manipulation. Another interesting direction would be to investigate the scalability of the presented cooperative norms in larger, more diverse agent populations, as well as their application in mixed communities of humans and AI agents. 8 Conclusion This paper introduces a novel simulation platform Governance of the Commons Simulation (GOVSIM), designed to study strategic interactions and cooperative decision making in LLMs. In our research, we found that only two of the 15 LLMs tested were able to achieve a sustainable outcome, indicating a significant gap in the ability of the models to manage shared resources. Furthermore, we find that by removing the ability of agents to the ability of agents to communicate, they overuse the shared resource, highlighting the importance of communication for cooperation. 11 Ethical Considerations This paper explores cooperative strategies for the governance of the commons in AI models. We acknowledge concerns about models becoming autonomous entities, especially in situations involving deception or negotiation. Our research serves as a benchmark for evaluating the capabilities of current models, rather than enhancing their functions. We do not train any AI model to excel in bluffing or deception. We analyze and measure the performance of existing models. Our efforts can contribute positively to AI safety. Simulations can offer insightful observations, but their value should not eclipse the critical role of human judgment and ethical considerations in the decision-making process. It is crucial to examine simulations from an ethical standpoint continually, ensuring that they augment human intelligence instead of substituting it. This approach advocates for a future where technology improves societal well-being in an ethical, responsible, and inclusive manner. Acknowledgment We thank Michael Hahn for his insightful discussion on the research paradigm of using NLP to draw empirical evidence for a non-formally formulated theories, and sharing of his experience on operationalizing linguistic theories using NLP models. We thank Roberto Ceraolo and Nathan Corecco for discussions regarding prompting strategies and parsing LLM outputs. This material is based in part upon work supported by the German Federal Ministry of Education and Research (BMBF): T\u00fcbingen AI Center, FKZ: 01IS18039B; by the T\u00fcbingen AI Center; by the Machine Learning Cluster of Excellence, EXC number 2064/1 \u2013 Project number 390727645; by a National Science Foundation award (#2306372); by a Swiss National Science Foundation award (#201009) and a Responsible AI grant by the Haslerstiftung. Zhijing Jin is supported by PhD fellowships from the Future of Life Institute and Open Philanthropy, as well as the travel support from ELISE (GA no 951847) for the ELLIS program. 12"
16
+ }
intro_8K/test_introduction_short_2404.16726v2.json ADDED
@@ -0,0 +1,16 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16726v2",
3
+ "title": "History repeats Itself: A Baseline for Temporal Knowledge Graph Forecasting",
4
+ "abstract": "Temporal Knowledge Graph (TKG) Forecasting aims at predicting links in\nKnowledge Graphs for future timesteps based on a history of Knowledge Graphs.\nTo this day, standardized evaluation protocols and rigorous comparison across\nTKG models are available, but the importance of simple baselines is often\nneglected in the evaluation, which prevents researchers from discerning actual\nand fictitious progress. We propose to close this gap by designing an intuitive\nbaseline for TKG Forecasting based on predicting recurring facts. Compared to\nmost TKG models, it requires little hyperparameter tuning and no iterative\ntraining. Further, it can help to identify failure modes in existing\napproaches. The empirical findings are quite unexpected: compared to 11 methods\non five datasets, our baseline ranks first or third in three of them, painting\na radically different picture of the predictive quality of the state of the\nart.",
5
+ "authors": "Julia Gastinger, Christian Meilicke, Federico Errica, Timo Sztyler, Anett Schuelke, Heiner Stuckenschmidt",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-29",
8
+ "primary_cat": "cs.LG",
9
+ "cats": [
10
+ "cs.LG"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Knowledge AND Graph",
14
+ "gt": "The lack of experimental rigor is one of the most problem- atic issues in fast-growing research communities, producing empirical results that are inconsistent or in disagreement with each other. Such ambiguities are often hard to resolve in a short time frame, and they eventually slow down scientific progress. This issue is especially evident in the machine learning field, where missing experimental details, the ab- sence of standardized evaluation protocols, and unfair com- parisons make it challenging to discern true advancements from fictitious ones [Lipton and Steinhardt, 2019]. As a result, researchers have spent considerable effort in re-evaluating the performances of various models on dif- ferent benchmarks, to establish proper comparisons and ro- bustly gauge the benefit of an approach over others. In re- cent years, this was the case of node and graph classifica- tion benchmarks [Shchur et al., 2018; Errica et al., 2020], link prediction on Knowledge Graphs [Sun et al., 2020; Rossi et al., 2021], neural recommender systems [Dacrema et al., 2019], and temporal graph learning [Huang et al., 2023]. Not only does such fast growing literature impact repro- ducibility and replicability, but it is also characterized by a certain forgetfulness that simple baselines set a threshold above which approaches are actually useful. Oftentimes, these baselines are missing from the empirical evaluations, but when introduced they provide a completely new picture of the state of the art. Examples can be found in the field of Knowledge Graph completion, where simple rule-based systems can outperform embedding-based ones [Meilicke et al., 2018], or in graph-related tasks where structure-agnostic baselines can compete with deep graph networks [Errica et al., 2020; Poursafaei et al., 2022; Errica, 2023]. In the last few years, the field of Temporal Knowledge Graph (TKG) Forecasting has also experienced a fast-paced research activity culminating in a large stream of works and a variety of empirical settings [Liu et al., 2022; Sun et al., 2021; Zhang et al., 2023]. Researchers have already provided a thorough re-assessment of some TKG Forecasting methods to address growing concerns about their reproducibility, laying down a solid foundation for future comparisons [Gastinger et al., 2023]. What is still missing, however, is a compari- son with simple baselines to gauge if we are really making progress and to identify pain points of current representation learning approaches for TKGs. Our contribution aims at filling this gap with a novel base- line, which places a strong inductive bias on the re-occurrence of facts over time. Not only does our baseline require tun- ing of just two hyperparameters, but also no training phase is needed since it is parameter-free. We introduce three vari- ants of the baseline, divided into strict recurrency, relaxed re- currency, and a combination of both. Our empirical results convey an unexpected message: the baseline ranks first and third on three out of five datasets considered, compared to 11 TKG methods. It is a perhaps unsurprising result, given the long history of aforementioned works that propose strong baselines in different communities, but it further highlights the compelling need for considering simple heuristics in the TKG forecasting domain. Finally, by carefully comparing the performance of these baselines with other methods, we provide a failure analysis that highlights where it might be necessary to improve existing models.",
15
+ "main_content": "In this section, we give a concise overview of the plethora of TKG forecasting methods that appeared in recent years. arXiv:2404.16726v2 [cs.LG] 29 Apr 2024 Deep Graph Networks (DGNs) Several models in this category leverage message-passing architectures [Scarselli et al., 2009; Micheli, 2009] along with sequential approaches to integrate structural and sequential information for TKG forecasting. RE-Net adopts an autoregressive architecture, learning temporal dependencies from a sequence of graphs [Jin et al., 2020]. RE-GCN combines a convolutional DGN with a sequential neural network and introduces a static graph constraint to consider additional information like entity types [Li et al., 2021b]. xERTE employs temporal relational attention mechanisms to extract query-relevant subgraphs [Han et al., 2021a]. TANGO utilizes neural ordinary differential equations and DGNs to model temporal sequences and capture structural information [Han et al., 2021b]. CEN integrates a convolutional neural network capable of handling evolutional patterns in an online setting, adapting to changes over time [Li et al., 2022b]. At last, RETIA generates twin hyperrelation subgraphs and aggregates adjacent entities and relations using a graph convolutional network [Liu et al., 2023a]. Reinforcement Learning (RL) Methods in this category combine reinforcement learning with temporal reasoning for TKG forecasting. CluSTeR employs a two-step process, utilizing a RL agent to induce clue paths and a DGN for temporal reasoning [Li et al., 2021a]. Also, TimeTraveler leverages RL based on temporal paths, using dynamic embeddings of the queries, the path history, and the candidate actions to sample actions, and a time-shaped reward [Sun et al., 2021]. Rule-based Rule-based approaches focus on learning temporal logic rules. TLogic learns these rules via temporal random walks [Liu et al., 2022]. TRKG extends TLogic by introducing new rule types, including acyclic rules and rules with relaxed time constraints [Kiran et al., 2023]. ALREIR combines embedding-based and logical rule-based methods, capturing deep causal logic by learning rule embeddings [Mei et al., 2022]. LogE-Net combines logical rules with REGCN, using them in a preprocessing step for assisting reasoning [Liu et al., 2023b]. At last, TECHS incorporates a temporal graph encoder and a logical decoder for differentiable rule learning and reasoning [Lin et al., 2023]. Others There are additional approaches with mixed contributions that cannot be immediately placed in the above categories. CyGNet predicts future facts based on historical appearances, employing a \u201dcopy\u201d and \u201dgeneration\u201d mode [Zhu et al., 2021]. TiRGN employs a local encoder for evolutionary representations in adjacent timestamps and a global encoder to collect repeated facts [Li et al., 2022a]. CENET distinguishes historical and non-historical dependencies through contrastive learning and a mask-based inference process [Xu et al., 2023]. Finally, L2TKG utilizes a structural encoder and latent relation learning module to mine and exploit intraand inter-time latent relations [Zhang et al., 2023]. 3 Approach This section introduces several baselines: We start with the Strict Recurrency Baseline, before moving to its \u201crelaxed\u201d version, the Relaxed Recurrency Baseline, and, ultimately, a combination of the two, the so-called Combined Recurrency (marta, playsFor, vasco-da-gamah, 1) (marta, playsFor, vasco-da-gamah , 2) (marta, playsFor, santa-cruz, 3) (marta, playsFor, santa-cruz, 4) (marta, playsFor, umea-ik, 5) (marta, playsFor, umea-ik, 6) (marta, playsFor, umea-ik, 7) (marta, playsFor, umea-ik, 8) (marta, playsFor, los-angeles-sol, 9) Figure 1: A (slightly simplified) listing of the clubs that Marta Vieira da Silva, known as Marta, played for from 2001 to 2009. Baseline. Before we introduce these baselines, we give a formal definition of the notion of a Temporal Knowledge Graph and and provide a running example to illustrate our approach. 3.1 Preliminaries A Temporal Knowledge Graph G is a set of quadruples (s, r, o, t) with s, o \u2208E, relation r \u2208R, and time stamp t \u2208T with T = {1 . . . n}, n \u2208N+. More precisely, E is the set of entities, R is the set of possible relations, and T is the set of timesteps. A quadruple\u2019s (s, r, o, t) semantic meaning is that s is in relation r to o at t. Alternatively, we may refer to this quadruple as a temporal triple that holds during the timestep t. This allows us to talk about the triple (s, r, o) and its occurrence and recurrence at certain timesteps. In the following, we use a running example G, where G is a TKG in the soccer domain shown in Figure 1. G contains triples from the years 2001 to 2009, which we map to indices 1 to 9. Temporal Knowledge Graph Forecasting is the task of predicting quadruples for future timesteps t+ given a history of quadruples G, with t+ > n and t+ \u2208N+. In this work we focus on entity forecasting, that is, predicting object or subject entities for queries (s, r, ?, t+) or (?, r, o, t+). Akin to KG completion, TKG forecasting is approached as a ranking task [Han, 2022]. For a given query, e.g. (s, r, ?, t+), methods rank all entities in E using a scoring function, assigning plausibility scores to each quadruple. In the following, we design several variants of a simple scoring function f that assigns a score in R+ to a quadruple at a future timestep t+ given a Temporal Knowledge Graph G, i.e., f((s, r, o, t+), G) 7\u2192R+. All variants of our scoring function are simple heuristics to solve the TKG forecasting task, based on the principle that something that happened in the past will happen again in the future. 3.2 Strict Recurrency Baseline The first family of recurrency baselines checks if the triple that we want to predict at timestep t+ has already been observed before. The simplest baseline of this family is the following scoring function \u03d51: \u03d51((s, r, o, t+), G) = \u001a1, if \u2203k with (s, r, o, k) \u2208G 0, otherwise. (1) If we apply \u03d51 to the set of triples in Figure 1 to compute the scores for 2010, we get the following outcome (using pf to abbreviate playsFor). \u03d51((marta, pf, vasco-da-gamah, 10), G) = 1 \u03d51((marta, pf, santa-cruz, 10), G) = 1 \u03d51((marta, pf, umea-ik, 10), G) = 1 \u03d51((marta, pf, los-angeles-sol, 10), G) = 1 This scoring function suffers from the problem that it does not take the temporal distance into account, which is highly relevant for the relation of playing for a club. It is far more likely that Marta will continue to play for Los Angeles Sol rather than sign a contract with a previous club. To address this problem, we introduce a time weighting mechanism to assign higher scores to more recent triples. Defining a generic function \u2206: N+ \u00d7 N+ \u2192R that takes the query timestep t+, a previous timestep k in G, and returns the weight of the triple, we can define strict recurrency scoring functions as follows: \u03d5\u2206((s,r,o,t+),G)= \u001a\u2206(t+,max{k|(s,r,o,k)\u2208G}) 0, if \u2204k with (s,r,o,k)\u2208G. (2) For instance, using \u22060(t+, k) = k/t+, k < t+ produces: \u03d5\u22060((marta, pf, vasco-da-gamah, 10), G) = 0.2 \u03d5\u22060((marta, pf, santa-cruz, 10), G) = 0.4 \u03d5\u22060((marta, pf, umea-ik, 10), G) = 0.8 \u03d5\u22060((marta, pf, los-angeles-sol, 10), G) = 0.9, which already makes more sense: the latest club that a person played for will always receive the highest score. Interestingly, we can establish an equivalence class among a subset of the functions \u03d5\u2206, and we will use this fact in our experiments. As long as we solely focus on ranking results, two scoring functions are equivalent if they define the same partial order over all possible temporal predictions. Definition 1. Two scoring functions \u03d5 and \u03d5\u2032 are rankingequivalent if for any pair of predictions p = (s, r, o, t+) and p\u2032 = (s\u2032, r\u2032, o\u2032, t+) we have that \u03d5(p, G) > \u03d5(p\u2032, G) \u21d0 \u21d2 \u03d5\u2032(p, G) > \u03d5\u2032(p\u2032, G). The next result states that we do not need to search for an optimal time weighting function \u2206(t+, k) if we choose it to be strictly monotonically increasing with respect to k, as these functions belong to the same equivalence class. Proposition 1. Scoring functions \u03d5\u2206and \u03d5\u2206\u2032 are ranking equivalent iff, \u2200k1, k2, t+ such that k1 < k2 < t+ it holds \u2206(t+, k1) < \u2206(t+, k2) and \u2206\u2032(t+, k1) < \u2206\u2032(t+, k2). Proposition 1 follows from the application of Definition 1. Therefore, the set of functions \u03d5\u2206, characterized by a \u2206that is strictly monotonically increasing in k, are ranking equivalent. While \u03d5\u2206works well to predict the club that a person will play for, there are relations with different temporal characteristics. An example might be a relation that expresses that a soccer club wins a certain competition. In Figure 2, we extend our TKG with temporal triples using the relation wins. The relation wins seems to follow a different pattern compared to the previous example. Indeed, applying \u03d5\u22060 to predict the 2010 winner of the Bundesliga would not reflect the (fc-bayern-munich, wins, bundesliga, 1) (borussia-dortmund, wins, bundesliga, 2) (fc-bayern-munich, wins, bundesliga, 3) (werder-bremen, wins, bundesliga, 4) (fc-bayern-munich, wins, bundesliga, 5) (fc-bayern-munich, wins, bundesliga, 6) (vfb-stuttgart, wins, bundesliga, 7) (fc-bayern-munich, wins, bundesliga, 8) (vfl-wolfsburg, wins, bundesliga, 9) Figure 2: Clubs winning the Bundesliga from 2001 to 2009. fact that FC Bayern Munich is the club with the highest ratio of won championships, and year 9 might just have been a lucky one for VFL Wolfsburg. The frequency of wins could be considered a better indicator for a scoring function: \u03c81((s, r, o, t+), G) = |{k | (s, r, o, k) \u2208G}|/t+ (3) Based on this scoring function, the club that has won the most titles, Bayern Munich, receives the highest score of 0.6, while all other clubs receive a score of 0.1. As done earlier, we now generalize the formulation of \u03c81 to \u03c8\u2206using a weighting function \u2206(t+, k) where triples that occurred more recently are weighted higher: \u03c8\u2206((s, r, o, t+), G) = P i\u2208{k|(s,r,o,k)\u2208G} \u2206(t+, i) Pn i=1 \u2206(t+, i) . (4) Again, we apply the new scoring functions to our example. We shortened the names of the clubs and abbreviated bundesliga as bl: \u03c8\u22060((dortmund, wins, bl, 10), G) = 0.2/4.5 \u22480.04 \u03c8\u22060((bremen, wins, bl, 10), G) = 0.4/4.5 \u22480.09 \u03c8\u22060((stuttgart, wins, bl, 10), G) = 0.7/4.5 \u22480.15 \u03c8\u22060((munich, wins, bl, 10), G) = 2.3/4.5 \u22480.51 \u03c8\u22060((wolfsburg, wins, bl, 10), G) = 0.9/4.5 \u22480.2 It is worth noting that, for a restricted family of distributions \u2206\u2032(t, k), we can achieve ranking equivalence between scoring functions \u03c8\u2206\u2032 and \u03d5\u2206with a strictly increasing \u2206(t, k). More specifically, if we make \u2206\u2032(t, k) parametric, then \u03c8\u2206\u2032 can generalize the family of scoring functions \u03d5\u2206. Consider the parameterized function \u2206\u03bb(t+, k) = 2\u03bb(k\u2212t+) with \u03bb \u2208R+ 0 , where \u03bb acts as a decay factor. The higher \u03bb, the stronger the decay effect we achieve. In particular, if we set \u03bb = 1, we can enforce that a time point k always receives a higher weight than the sum of all previous time points 1, . . . , k \u22121. This means \u03c8\u22061 and \u03d5\u2206are ranking equivalent. Proposition 2. For \u03bb \u22651, \u2206\u03bb = 2\u03bb(k\u2212t+), and any strictly increasing time weighting function \u2206, the scoring functions \u03d5\u2206and \u03c8\u2206\u03bb are ranking equivalent. Proposition 2 follows directly from the fact that Pn i=k+1 1 2i < 1 2k for any n > k \u2208N+. On the contrary, we get ranking equivalence between \u03c81 and \u03c8\u2206\u03bb if we set \u03bb = 0. Proposition 3. The scoring functions \u03c81 and \u03c8\u2206\u03bb are ranking equivalent if we set \u03bb = 0. Proposition 3 follows directly from 20 = 1 and the definition of \u03c81 in Equation 3. Propositions 2 and 3 help us to interpret our experimental results, as it indicates that different settings of \u03bb result in a scoring function that is situated between \u03c81 and \u03d5\u2206\u03bb. We treat \u03bb as a relation-specific hyperparameter in our experiments, meaning we will select a different \u03bbr for each relation r. Since relations are independent of each other, each \u03bbr can be optimized independently. 3.3 Relaxed Recurrency Baseline So far, our scoring functions were based on a strict application of the principle of recurrency. However, this approach fails to score a triple that has never been seen before, and we need to account for queries of this nature: imagine a young player appearing for the first time in a professional club. Thus, we introduce a relaxed variant of the baseline. Instead of looking for exact matching of triples in previous timesteps, which would not work for unseen triples, we are interested in how often parts of the triple have been observed in the data. When asked to score the query (s, r, ?, t+), we compute the normalized frequency that the object o has been in relationship r with any subject s\u2032: \u2212 \u2192 \u03be ((s, r, o, t+), G) = |{(s\u2032, k) | (s\u2032, r, o, k) \u2208G}| |{(s\u2032, o\u2032, k) | (s\u2032, r, o\u2032, k) \u2208G}| (5) Analogously, we denote with \u2190 \u2212 \u03be ((s, r, o, t+), G) the relaxed baseline used to score queries of the form (?, r, o, t+). In the following, we omit the arrow above \u03be and use the directed version depending on the type of query without explicit reference to the direction. Let us revisit the example of Figure 1 and apply \u03be to score a triple never seen before. We can now assign non-zero scores to the clubs that Aitana Bonmati, who never appeared in G, will likely play for in 2010: \u03be((bonmati, pf, vasco-da-gamah, 10), G) = 0.22 \u03be((bonmati, pf, santa-cruz, 10), G) = 0.22 \u03be((bonmati, pf, umea-ik, 10), G) = 0.44 \u03be((bonmati, pf, los-angeles-sol, 10), G) = 0.11 While we also report results for \u03be on its own, we are mainly interested in its combination with the the Strict Recurrency Baseline, where we expect it to fill up gaps and resolve ties. For simplicity, we do not introduce a weighted version of this baseline to avoid the extra hyperparameter. 3.4 Combined Recurrency Baseline We conclude the section with a linear combination of the Strict Recurrency Baseline \u03c8\u2206\u03bb and the Relaxed Recurrency Baseline \u03be. In particular (omitting \u03bb to keep the notation uncluttered): \u03c8\u2206\u03be((s, r, o, t+), G) = \u03b1 \u2217\u03c8\u2206(s, r, o, t+), G)+ (1 \u2212\u03b1) \u2217\u03be(s, r, o, t+), G), (6) where \u03b1 \u2208[0, 1] is another hyperparameter. Similar to \u03bb, we select a different \u03b1r for each relation r. In the following, we refer to this baseline as the Combined Recurrency Baseline. 4 Experimental Setup This section describes our experimental setup and provides information on how to reproduce our experiments1. We rely on the unified evaluation protocol of [Gastinger et al., 2023], reporting results about single-step predictions. We report results for the multi-step setting in the supplementary material2. 4.1 Hyperparameters We select the best hyperparameters by evaluating the performances on the validation set as follows: First, we select \u03bbr\u2200r \u2208R from in total 14 values, \u03bbr \u2208Lr = {0, ..., 1.0001} for \u03c8\u03bb. Then, after fixing the best \u03bbr\u2200r \u2208R, we select \u03b1r\u2200r \u2208R from 13 values, \u03b1r \u2208Ar = {0, ..., 1}, leading to a total of 27 combinations per relation. 4.2 Methods for Comparison We compare our baselines to 11 among the 17 methods described in Section 2. Two of these 17 methods run only in multi-step setting, see comparisons to these in the supplementary material. Further, for four methods we find discrepancies in the evaluation protocol and thus exclude them from our comparisons3. Unless otherwise stated, we report the results for these 11 methods based on the evaluation protocol by [Gastinger et al., 2023]. For TiRGN, we report the results of the original paper and do a sanity check of the released code. We do the same for L2TKG, LogE-Net, and TECHS, but we cannot do a sanity check as their code has not been released. 4.3 Dataset Information We assess the performance of the recurrency baselines on five datasets [Gastinger et al., 2023; Li et al., 2021b], namely WIKI, YAGO, ICEWS14, ICEWS18, and GDELT4. Table 1 shows characteristics such as the number of entities and quadruples, and it reports the timestep-based data splitting (short: #Tr/Val/Te TS) all methods are evaluated against. In addition, we compute the fraction of test temporal triples (s, r, o, t+) for which there exists a k < t+ such that (s, r, o, k) \u2208G, and we refer to this measure as the recurrency degree (Rec). Similarly, we also compute the fraction of temporal triples (s, r, o, t+) for which it holds that (s, r, o, t+ \u22121) \u2208G, which we call direct recurrency degree (DRec). Note that Rec defines an upper bound of Strict Recurrency Baseline\u2019s performance; instead, DRec informs about the test triples that have, from our baselines\u2019 perspective, a trivial solution. On YAGO and WIKI, both measures are higher than 85%, meaning that the application of the recurrency principle would likely work very well. 1https://github.com/nec-research/recurrency baseline tkg. 2Supplementary Material: https://github.com/nec-research/ recurrency baseline tkg/blob/master/supplementary material.pdf 3CENET, RETIA, and CluSTER do not report results in time-aware filter setting. ALRE-IR does not report results on WIKI, YAGO, and GDELT, and uses different dataset versions for ICEWS14 and ICEWS18. 4See Supplementary Material for additional dataset information. Dataset #Nodes #Rels #Train #Valid #Test Time Int. #Tr/Val/Te TS DRec [%] Rec [%] ICEWS14 7128 230 74845 8514 7371 24 hours 304/30/31 10.5 52.4 ICEWS18 23033 256 373018 45995 49545 24 hours 239/30/34 10.8 50.4 GDELT 7691 240 1734399 238765 305241 15 min. 2303/288/384 2.2 64.9 YAGO 10623 10 161540 19523 20026 1 year 177/5/6 92.7 92.7 WIKI 12554 24 539286 67538 63110 1 year 210/11/10 85.6 87.0 Table 1: We report some statistics of the datasets, the timestep interval, and the specifics of the data splitting. We also include the recurrency degree (Rec) and the direct recurrency degree (DRec). Please refer to the text for a more detailed description. 4.4 Evaluation Metrics As is common in link prediction evaluations, we focus on two metrics: the Mean Reciprocal Rank (MRR), computing the average of the reciprocals of the ranks of the first relevant item in a list of results, as well as the Hits at 10 (H@10), the proportion of queries for which at least one relevant item is among the top 10 ranked results. Following [Gastinger et al., 2023], we report the time-aware filtered MRR and H@10. 5 Experimental Results This section reports our quantitative and qualitative results, illustrating our baselines help to gain a deeper understanding of the field. We list runtimes in the Supplementary Material. 5.1 Global Results Table 2 (lower area) shows the MRR and H@10 results for the Strict (\u03be), the Relaxed (\u03c8\u2206), and the Combined Recurrency Baseline (\u03c8\u2206\u03be). For all datasets, with one minor discrepancy, the Combined Recurrency Baseline performs better than the strict and the relaxed variants. However, the Strict Recurrency Baseline is not much worse: The difference to the Combined Recurrency Baseline is for both metrics never more than one percentage point. We observe that, while \u03be scores a MRR between 5% and 15% on its own, when combined with \u03c8\u2206(thus obtaining \u03c8\u2206\u03be) it can grant up to 0.9% of absolute improvement. As described in Section 3, its main role is to fill gaps and resolve ties. The results confirm our intuition. Interestingly, results for \u03c8\u2206\u03be on all datasets reflect the reported values of the recurrency degree and direct recurrency degree (see Table 2): For both YAGO and WIKI (Rec and DRec > 85%), our baseline yields high MRRs (> 80%), while in other cases the values are below 40%. When compared to results from related work (upper area of Table 2), the Combined Recurrency Baseline as well as the Strict Recurrency Baseline yield the highest test scores for two out of five datasets (GDELT and YAGO) and the thirdhighest test scores for the WIKI dataset. This is an indication that most related work models seem unable to learn and consistently apply a simple forecasting strategy that yields high gains. In particular, we highlight the significant difference between the Combined Recurrency Baseline and the runner-up methods for GDELT (with a relative change of +12.9%). Results for ICEWS14 and ICEWS18, instead, suggest that more complex dependencies need to be captured on these datasets. While two methods (TRKG and TANGO) perform worse than our baseline, the majority achieves better results. In summary, none of the methods proposed so far can accomplish the results achieved by a combination of two very na\u00a8 \u0131ve baselines for two out of five datasets. This result is rather surprising, and it raises doubts about the predictive quality of current methods. 5.2 Per-Relation Analysis We conduct a detailed per-relation analysis and focus on two datasets: ICEWS14, since our baseline performed worse there, and YAGO, for the opposite reason. We compare the Combined Recurrency Baseline to the four methods that performed best on the respective dataset, considering the seven methods evaluated under the evaluation protocol of [Gastinger et al., 2023]5. For clarity, we adopt the following notation to denote a relation and its prediction direction: [relation] (head) signifies predictions in head direction, corresponding to queries of the form (?, r, o, t+); [relation] (tail) denotes predictions in tail direction, i.e., (s, r, ?, t+). ICEWS14 In Figure 3(a), we focus on the nine most frequent relations. For each relation, one or multiple methods reach MRRs higher than the Combined Recurrency Baseline, with an absolute offset in MRR of approximately 3% to 7% between the best-performing method and our baseline. This indicates that it might be necessary to capture patterns going beyond the simple recurrency principle. However, even for ICEWS14, we see three relations where some methods produce worse results than the Combined Recurrency Baseline. For two of these (Make a visit, Host a visit), RE-GCN and CEN attain the lowest MRR. In the third relation (Arrest detain or charge with legal action), TLogic and xERTE have the lowest MRR. This implies that, despite having better aggregated MRRs, the methods display distinct weaknesses and are not learning to model recurrency for all relations. YAGO Figure 3(b), instead, shows two distinct categories of relations: the first category contains relations where most methods demonstrate competitive performance (MRR\u2265 85%). In all of them, the Combined Recurrency Baseline attains the highest scores. Thus, the capabilities of related work, like detecting patterns across different relations or multiple hops in the KG, do not seem to be beneficial for these relations, and a simpler inductive bias might be preferred. The second category contains relations where all methods perform poorly (MRR \u226420%). Due to the dataset\u2019s limited information, reliably predicting prize winners or deaths is unfeasible. For these reasons, we expect no significant improvement in future work on YAGO beyond the results of our baseline. However, YAGO still provides value to the research field: it can be used to inspect the methods\u2019 capabilities to identify 5Since we could compute prediction scores for every query. GDELT YAGO WIKI ICEWS14 ICEWS18 MRR H@10 MRR H@10 MRR H@10 MRR H@10 MRR H@10 L2TKG\u2020 20.5 35.8 47.4 71.1 33.4 55.0 LogE-Net\u2020 43.7 63.7 32.7 53.0 TECHS\u2020 89.2 92.4 76.0 82.4 d.d.v d.d.v. 30.9 49.8 TiRGN 21.7 37.6 88.0 92.9 81.7 87.1 44.0 63.8 33.7 54.2 TRKG 21.5 37.3 71.5 79.2 73.4 76.2 27.3 50.8 16.7 35.4 RE-GCN 19.8 33.9 82.2 88.5 78.7 84.7 42.1 62.7 32.6 52.6 xERTE 18.9 32.0 87.3 91.2 74.5 80.1 40.9 57.1 29.2 46.3 TLogic 19.8 35.6 76.5 79.2 82.3 87.0 42.5 60.3 29.6 48.1 TANGO 19.2 32.8 62.4 67.8 50.1 52.8 36.8 55.1 28.4 46.3 Timetraveler 20.2 31.2 87.7 91.2 78.7 83.1 40.8 57.6 29.1 43.9 CEN 20.4 35.0 82.7 89.4 79.3 84.9 41.8 60.9 31.5 50.7 Relaxed (\u03be) 14.2 23.6 5.2 10.7 14.3 25.4 14.4 28.6 11.6 22.0 Strict (\u03c8\u2206) 23.7 38.3 90.7 92.8 81.6 87.0 36.3 48.4 27.8 41.4 Combined (\u03c8\u2206\u03be) 24.5 39.8 90.9 93.0 81.5 87.1 37.2 51.8 28.7 43.7 Table 2: Experimental results. An entry \u2020 means authors have not released their code, and thus we could not reproduce their results, an entry that the related work does not report results on this dataset, and an entry \u201dd.d.v\u201d, that the it reports results on a different dataset version. and predict simple recurring facts and, if this is not the case, to pinpoint their deficiencies. Thus, YAGO can be also seen as a dataset for sanity checks. All analysed methods from related work fail this sanity check: none of them can exploit the simple recurrency pattern for all relations. The main disparity in overall MRR between the Combined Recurrency Baseline and related work can be attributed to two specific relations: playsFor (head, tail), and isAffiliatedTo (head). Queries attributed to these relations make for almost 50% of all test queries. More specifically, Timetraveler exhibits limitations with isAffiliatedTo (head) and playsFor (head); xERTE shows its greatest shortcomings for isAffiliatedTo (head); and REGCN and CEN exhibit limitations with the relation playsFor in both directions. These findings highlight the specific weaknesses of each method that are possible by comparisons with baselines, thus allowing for targeted improvements. 5.3 Failure Analysis In the following, we analyse some example queries where the recurrency principle offers an unambiguous solution which, however, is not chosen by a specific method. Following Section 5.2, we focus on YAGO and the same four models. We base our analysis on the insights that YAGO has a very high direct recurrency degree, and that predicting facts based on strict recurrency with steep time decay leads to very high scores. The MRR of \u03d5\u2206is 90.7%. For each model, we count for how many queries the following conditions are fulfilled, given the test query (s, r, ?, t) with correct answer o: (i) (s, r, o, t \u22121) \u2208G, (ii) the model proposed o\u2032 \u0338= o as top candidate, (iii) there exists no k with (s, r, o\u2032, k) \u2208G. If these are fulfilled, there is strong evidence for o due to recurrency, while (s, r, o\u2032) has never been observed in the past. We conduct the same analysis for head queries (?, r, o, t). For each model, we randomly select some of these queries6 and 6Summing up over head and tail queries for Timetraveler, we find 34 queries that fulfilled all three conditions, for xERTE 149, for CEN 286, and for RE-GCN 525 queries. describe the mistakes made. Timetraveler Surprisingly, Timetraveler sometimes suggests top candidates that are incompatible with respect to domain and range of the given relation, even when all above conditions are met. Here are two examples for the \u201dplaysFor\u201d (pf) relation, where the proposed candidates are marked with a question mark: (?=spain-national-u23, pf, lierse-sk, 10) (?=baseball-ground, pf, derby-county-fc, 10) The reasons behind Timetraveler\u2019s predictions, despite the availability of reasonable candidates according to the recurrency principle, fall outside the scope of this paper. xERTE For xERTE, we detect a very clear pattern that explains the mistakes. In 147 out of 149 cases, xERTE predicts a candidate as subject (object) c when c was given as object (subject). This happens in nearly all cases for the symmetric relation isMarriedTo resulting in the prediction of triples such as (john, isMarriedTo, john). This error pattern bears a striking resemblance to issues observed in the context of nontemporal KG completion in [Meilicke et al., 2018] where it has already been argued that some models perform surprisingly badly on symmetric relations. CEN and RE-GCN Both CEN and RE-GCN exhibit distinct behavior. Errors frequently occur with the \u201dplaysFor\u201d relation, particularly in tail prediction. In all analysed examples, the types (soccer players and soccer clubs) of the incorrectly predicted candidates were correct. Moreover, we cannot find any other systematic error pattern or explanation for the erroneous predictions. It seems that both models are not able to learn that the playsFor relation follows the simple regularity of strict recurrency, even though this regularity dominates the training set. These examples highlight significant insights into the current weaknesses of each method. Future research can leverage these insights to enhance the affected models. (a) ICEWS14 t h t h t h t h t h t h t h t h t h Make_statement Consult Make_an _appeal_or_request Express_intent_to _meet_or_negotiate Make_a_visit Host_a_visit Arrest,_detain,_or _charge_with_legal_action Praise_or_endorse Criticize_or_denounce Relation 0 20 40 60 80 100 MRR (%) TLogic CEN RE-GCN xERTE Recurrency Baseline 400 600 800 1000 1200 (b) YAGO t h t h t h t h t h t h t h t h t h <worksAt> <playsFor> <hasWonPrize> <isMarriedT o> <owns> <graduatedFrom> <diedIn> <isAffiliatedT o> <created> Relation 0 20 40 60 80 100 MRR (%) Timetraveler CEN RE-GCN xERTE Recurrency Baseline 100 101 102 103 Figure 3: Test MRRs for each relation and direction (\u201ct\u201d means tail and \u201ch\u201d head, respectively) for (a) ICEWS14 (top) and (b) YAGO (bottom). Colors indicate the number of queries for relation and its direction in the test set. 5.4 Parameter Study In the following, we summarize our findings regarding the influence of hyperparameters on baseline predictions. Detailed results are provided in the Supplementary Material. Influence of Hyperparameter Values We analyze the impact of \u03bb and \u03b1 on overall MRR. Notably, \u03bb significantly affects the MRR, e.g., with test results ranging from 12.1% to 23.7% for GDELT across different \u03bb values. The optimal \u03bb varies across datasets. This underlines the influence of time decay: Predicting repetitions of the most recent facts is most beneficial for YAGO and WIKI, while also considering the frequency of previous facts is better for the other datasets. This distinction is also mirrored in the direct recurrency degree, being notably high for YAGO and WIKI, and thus indicating the importance of the most recent facts. Additionally, setting \u03b1 to a high value (\u03b1 \u22650.99) yields the best aggregated test results across all datasets, indicating the benefits of emphasizing predictions from the Strict Recurrency Baseline and using the Relaxed Recurrency Baseline to resolve ties and rank unseen triples. Impact of Relaxed Recurrency Baseline Further, to understand the impact of the Relaxed Recurrency Baseline (\u03be) on the combined baseline, we compare the MRR of strict and relaxed baseline on a per-relation basis. We find that, even though the aggregated improvement of \u03c8\u2206\u03be as compared to \u03c8\u2206is only marginal (< 1%) for each dataset, for some relations, where the strict baseline fails, the impact of the relaxed baseline is meaningful: For example, on the dataset YAGO and the relation diedIn (tail), the Strict Recurrency Baseline yields a very low MRR of 0.7%, whereas the Relaxed Recurrency Baseline yields a MRR of 17.5%. Overall, this highlights the influence of hyperparameter values, dataset differences, and the advantage of combining baselines on a per-relation basis. 6 Conclusion We are witnessing a notable growth of scientific output in the field of TKG forecasting. However, a reliable and rigorous comparison with simple baselines, which can help us distinguish real from fictitious progress, has been missing so far. Inspired by real-world examples, this work filled the current gap by designing an intuitive baseline that exploits the straightforward concept of facts\u2019 recurrency. In summary, despite its inability to grasp complex dependencies in the data, the baseline provides a better or a competitive alternative to existing models on three out of five common benchmarks. This result is surprising and raises doubts about the predictive quality of the proposed methods. Once more, it stresses the importance of testing na\u00a8 \u0131ve baselines as a key component of any TKG forecasting benchmark: should a model fail when a baseline succeeds, its predictive capability should be subject to critical scrutiny. By conducting critical and detailed analyses, we identified limitations of existing models, such as the prediction of incompatible types. We hope that our work will foster awareness about the necessity of simple baselines in the future evaluation of TKG methods."
16
+ }
intro_8K/test_introduction_short_2404.16745v1.json ADDED
@@ -0,0 +1,16 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16745v1",
3
+ "title": "Statistical Inference for Covariate-Adjusted and Interpretable Generalized Factor Model with Application to Testing Fairness",
4
+ "abstract": "In the era of data explosion, statisticians have been developing\ninterpretable and computationally efficient statistical methods to measure\nlatent factors (e.g., skills, abilities, and personalities) using large-scale\nassessment data. In addition to understanding the latent information, the\ncovariate effect on responses controlling for latent factors is also of great\nscientific interest and has wide applications, such as evaluating the fairness\nof educational testing, where the covariate effect reflects whether a test\nquestion is biased toward certain individual characteristics (e.g., gender and\nrace) taking into account their latent abilities. However, the large sample\nsize, substantial covariate dimension, and great test length pose challenges to\ndeveloping efficient methods and drawing valid inferences. Moreover, to\naccommodate the commonly encountered discrete types of responses, nonlinear\nlatent factor models are often assumed, bringing further complexity to the\nproblem. To address these challenges, we consider a covariate-adjusted\ngeneralized factor model and develop novel and interpretable conditions to\naddress the identifiability issue. Based on the identifiability conditions, we\npropose a joint maximum likelihood estimation method and establish estimation\nconsistency and asymptotic normality results for the covariate effects under a\npractical yet challenging asymptotic regime. Furthermore, we derive estimation\nand inference results for latent factors and the factor loadings. We illustrate\nthe finite sample performance of the proposed method through extensive\nnumerical studies and an application to an educational assessment dataset\nobtained from the Programme for International Student Assessment (PISA).",
5
+ "authors": "Jing Ouyang, Chengyu Cui, Kean Ming Tan, Gongjun Xu",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "stat.ME",
9
+ "cats": [
10
+ "stat.ME"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "LLM Fairness",
14
+ "gt": "Latent factors, often referred to as hidden factors, play an increasingly important role in modern statistics to analyze large-scale complex measurement data and find wide-ranging applications across various scientific fields, including educational assessments (Reckase 2009, Hambleton & Swaminathan 2013), macroeconomics forecasting (Stock & Watson 2002, Lam et al. 2011), and biomedical diagnosis (Carvalho et al. 2008, Frichot et al. 2013). For instance, in educational testing and social sciences, latent factors are used to model unobservable traits of respondents, such as skills, personality, and attitudes (von Davier Matthias 2008, Reckase 2009); in biology and genomics, latent factors are used to capture underlying genetic factors, gene expression patterns, or hidden biological mechanisms (Carvalho et al. 2008, Frichot et al. 2013). To uncover the latent factors and analyze large-scale complex data, various latent factor models have been developed and extensively investigated in the existing literature (Bai 2003, Bai & Li 2012, Fan et al. 2013, Chen et al. 2023b, Wang 2022). In addition to measuring the latent factors, the observed covariates and the covariate effects conditional on the latent factors hold significant scientific interpretations in many applications (Reboussin et al. 2008, Park et al. 2018). One important application is testing fairness, which receives increasing attention in the fields of education, psychology, and social sciences (Candell & Drasgow 1988, Belzak & Bauer 2020, Chen et al. 2023a). In educa- tional assessments, testing fairness, or measurement invariance, implies that groups from diverse backgrounds have the same probability of endorsing the test items, controlling for individual proficiency levels (Millsap 2012). Testing fairness is not only of scientific interest to psychometricians and statisticians but also attracts widespread public awareness (Toch 1984). In the era of rapid technological advancements, international and large-scale edu- cational assessments are becoming increasingly prevalent. One example is the Programme for International Student Assessment (PISA), which is a large-scale international assessment with substantial sample size and test length (OECD 2019). PISA assesses the knowledge and skills of 15-year-old students in mathematics, reading, and science domains (OECD 2 2019). In PISA 2018, over 600,000 students from 37 OECD1 countries and 42 partner coun- tries/economies participated in the test (OECD 2019). To assess fairness of the test designs in such large-scale assessments, it is important to develop modern and computationally effi- cient methodologies for interpreting the effects of observed covariates (e.g., gender and race) on the item responses, controlling for the latent factors. However, the discrete nature of the item responses, the increasing sample size, and the large amount of test items in modern educational assessments pose great challenges for the estimation and inference for the covariate effects as well as for the latent factors. For instance, in educational and psychological measurements, such a testing fairness issue (measurement invariance) is typically assessed by differential item functioning (DIF) analysis of item re- sponse data that aims to detect the DIF items, where a DIF item has a response distribution that depends on not only the measured latent factors but also respondents\u2019 covariates (such as group membership). Despite many statistical methods that have been developed for DIF analysis, existing methods often require domain knowledge to pre-specify DIF-free items, namely anchor items, which may be misspecified and lead to biased estimation and inference results (Thissen 1988, Tay et al. 2016). To address this limitation, researchers developed item purification methods to iteratively select anchor items through stepwise selection mod- els (Candell & Drasgow 1988, Fidalgo et al. 2000, Kopf et al. 2015). More recently, tree-based methods (Tutz & Berger 2016), regularized estimation methods (Bauer et al. 2020, Belzak & Bauer 2020, Wang et al. 2023), item pair functioning methods (Bechger & Maris 2015), and many other non-anchor-based methods have been proposed. However, these non-anchor- based methods do not provide valid statistical inference guarantees for testing the covariate effects. It remains an open problem to perform statistical inference on the covariate effects and the latent factors in educational assessments. To address this open problem, we study the statistical estimation and inference for a gen- eral family of covariate-adjusted nonlinear factor models, which include the popular factor 1OECD: Organisation for Economic Co-operation and Development 3 models for binary, count, continuous, and mixed-type data that commonly occur in educa- tional assessments. The nonlinear model setting poses great challenges for estimation and statistical inference. Despite recent progress in the factor analysis literature, most existing studies focus on estimation and inference under linear factor models (Stock & Watson 2002, Bai & Li 2012, Fan et al. 2013) and covariate-adjusted linear factor models (Leek & Storey 2008, Wang et al. 2017, Gerard & Stephens 2020, Bing et al. 2024). The techniques employed in linear factor model settings are not applicable here due to the nonlinearity inherent in the general models under consideration. Recently, several researchers have also investigated the parameter estimation and inference for generalized linear factor models (Chen et al. 2019, Wang 2022, Chen et al. 2023b). However, they either focus only on the overall consistency properties of the estimation or do not incorporate covariates into the models. In a concurrent work, motivated by applications in single-cell omics, Du et al. (2023) considered a general- ized linear factor model with covariates and studied its inference theory, where the latent factors are used as surrogate variables to control for unmeasured confounding. However, they imposed relatively stringent assumptions on the sparsity of covariate effects and the dimension of covariates, and their theoretical results also rely on data-splitting. Moreover, Du et al. (2023) focused only on statistical inference on the covariate effects, while that on factors and loadings was unexplored, which is often of great interest in educational assess- ments. Establishing inference results for covariate effects and latent factors simultaneously under nonlinear models remains an open and challenging problem, due to the identifiability issue from the incorporation of covariates and the nonlinearity issue in the considered general models. To overcome these issues, we develop a novel framework for performing statistical infer- ence on all model parameters and latent factors under a general family of covariate-adjusted generalized factor models. Specifically, we propose a set of interpretable and practical iden- tifiability conditions for identifying the model parameters, and further incorporate these conditions into the development of a computationally efficient likelihood-based estimation 4 method. Under these identifiability conditions, we develop new techniques to address the aforementioned theoretical challenges and obtain estimation consistency and asymptotic nor- mality for covariate effects under a practical yet challenging asymptotic regime. Furthermore, building upon these results, we establish estimation consistency and provide valid inference results for factor loadings and latent factors that are often of scientific interest, advancing our theoretical understanding of nonlinear latent factor models. The rest of the paper is organized as follows. In Section 2, we introduce the model setup of the covariate-adjusted generalized factor model. Section 3 discusses the associated iden- tifiability issues and further presents the proposed identifiability conditions and estimation method. Section 4 establishes the theoretical properties for not only the covariate effects but also the latent factors and factor loadings. In Section 5, we perform extensive numerical studies to illustrate the performance of the proposed estimation method and the validity of the theoretical results. In Section 6, we analyze an educational testing dataset from Pro- gramme for International Student Assessment (PISA) and identify test items that may lead to potential bias among different test-takers. We conclude with providing some potential future directions in Section 7. Notation: For any integer N, let [N] = {1, . . . , N}. For any set S, let #S be its cardinality. For any vector r = (r1, . . . , rl)\u22ba, let \u2225r\u22250 = #({j : rj \u0338= 0}), \u2225r\u2225\u221e= maxj=1,...,l |rj|, and \u2225r\u2225q = (Pl j=1 |rj|q)1/q for q \u22651. We define 1(y) x to be the y-dimensional vector with x-th entry to be 1 and all other entries to be 0. For any symmetric matrix M, let \u03bbmin(M) and \u03bbmax(M) be the smallest and largest eigenvalues of M. For any matrix A = (aij)n\u00d7l, let \u2225A\u2225\u221e,1 = maxj=1,...,l Pn i=1 |aij| be the maximum absolute column sum, \u2225A\u22251,\u221e= maxi=1,...,n Pl j=1 |aij| be the maximum of the absolute row sum, \u2225A\u2225max = maxi,j |aij| be the maximum of the absolute matrix entry, \u2225A\u2225F = (Pn i=1 Pl j=1 |aij|2)1/2 be the Frobenius norm of A, and \u2225A\u2225= p \u03bbmax (A\u22baA) be the spectral norm of A. Let \u2225\u00b7 \u2225\u03c61 be sub- exponential norm. Define the notation Av = vec(A) \u2208Rnl to indicate the vectorized matrix 5 A \u2208Rn\u00d7l. Finally, we denote \u2297as the Kronecker product.",
15
+ "main_content": "Consider n independent subjects with q measured responses and p\u2217observed covariates. \u2217 For the ith subject, let Yi \u2208Rq be a q-dimensional vector of responses corresponding to measurement items and Rbe a-dimensional vector of observed covariates. q measurement items and Xc i \u2208Rp\u2217be a p\u2217-dimensional vector of observed covariates. Moreover, let be a-dimensional vector of latent factors representing the unobservable Moreover, let Ui be a K-dimensional vector of latent factors representing the unobservable traits such as skills and personalities, where we assume K is specified as in many educational assessments. We assume that the q-dimensional responses Yi are conditionally independent, given Xc i and Ui. Specifically, we model the jth response for the ith subject, Yij, by the following conditional distribution: Yij \u223cpij(y | wij), where wij = \u03b2j0 + \u03b3\u22ba j Ui + \u03b2\u22ba jcXc i . (1) Here \u03b2j0 \u2208R is the intercept parameter, \u03b2jc = (\u03b2j1, . . . , \u03b2jp\u2217)\u22ba\u2208Rp\u2217are the coefficient parameters for the observed covariates, and\u22baR are the factor loadings. \u2208\u2217\u2208 parameters for the observed covariates, and \u03b3j = (\u03b3j1, . . . , \u03b3jK)\u22ba\u2208RK are the factor loadings. \u22ba \u2208 For better presentation, we write \u03b2j = (\u03b2j0, \u03b2\u22ba jc)\u22baas an assembled vector of intercept and coefficients and define Xi = (1, (Xc i )\u22ba)\u22bawith dimension p = p\u2217+ 1, which gives wij = \u03b3\u22ba j Ui + \u03b2\u22ba j Xi. Given wij, the function pij is some specified probability density (mass) function. Here, we consider a general and flexible modeling framework by allowing different types of pij functions to model diverse response data in wide-ranging applications, such as binary item response data in educational and psychological assessments (Mellenbergh 1994, Reckase 2009) and mixed types of data in educational and macroeconomic applications (Rijmen et al. 2003, Wang 2022); see also Remark 1. A schematic diagram of the proposed model setup is 6 presented in Figure 1. Xi Yi1 Ui Yi2 Yi,q\u22121 Yiq \u2026 \u2026 \u03b21 \u03b22 \u03b2q\u22121 \u03b2q \u2026 \u03b31 \u03b32 \u03b3q\u22121 \u03b3q \u2026 Xi \u2208Rp Ui \u2208RK Yij \u2208R, j \u2208[q] Figure 1: A schematic diagram of the proposed model in (1). The subscript i indicates the ith subject, out of n independent subjects. The response variable Yij can be discrete or continuous. Our proposed covariate-adjusted generalized factor model in (1) is motivated by applications in testing fairness. In the context of educational assessment, the subject\u2019s responses to questions are dependent on latent factors Ui such as students\u2019 abilities and skills, and are potentially affected by observed covariates Xc i such as age, gender, and race, among others (Linda M. Collins 2009). The intercept \u03b2j0 is often interpreted as the difficulty level of item j and referred to as the difficulty parameter in psychometrics (Hambleton & Swaminathan 2013, Reckase 2009). The capability of item j to further differentiate individuals based on their latent abilities is captured by \u03b3j = (\u03b3j1, . . . , \u03b3jK)\u22ba, which are also referred to as discrimination parameters (Hambleton & Swaminathan 2013, Reckase 2009). The effects of observed covariates Xc i on subject\u2019s response to the jth question Yij, conditioned on latent abilities Ui, are captured by \u03b2jc = (\u03b2j1, . . . , \u03b2jp\u2217)\u22ba, which are referred to as DIF effects in psychometrics (Holland & Wainer 2012). This setting gives rise to the fairness problem of validating whether the response probabilities to the measurements differ across different genders, races, or countries of origin while holding their abilities and skills at the same level. 7 Given the observed data from n independent subjects, we are interested in studying the relationships between Yi and Xc i after adjusting for the latent factors Ui in (1). Specifically, our goal is to test the statistical hypothesis H0 : \u03b2js = 0 versus Ha : \u03b2js \u0338= 0 for s \u2208[p\u2217], where \u03b2js is the regression coefficient for the sth covariate and the jth response, after adjusting for the latent factor Ui. In many applications, the latent factors and factor loadings also carry important scientific interpretations such as students\u2019 abilities and test items\u2019 characteristics. This motivates us to perform statistical inference on the parameters \u03b2j0, \u03b3j, and Ui as well. Remark 1. The proposed model setup (1) is general and flexible as various functions pij\u2019s could be used to model diverse types of response data in wide-ranging applications. For instance, in educational assessments, logistic factor model (Reckase 2009) with pij(y | wij) = exp(wijy)/{1 + exp(wij)}, y \u2208{0, 1} and probit factor model (Birnbaum 1968) with pij(y | wij) = {\u03a6(wij)}y{1 \u2212\u03a6(wij)}1\u2212y, y \u2208{0, 1} where \u03a6(\u00b7) is the cumulative density function of standard normal distribution, are widely used to model the binary responses, indicating correct or incorrect answers to the test items. Such types of models are often referred to as item response theory models (Reckase 2009). In economics and finances, linear factor models with pij(y | wij) \u221dexp{\u2212(y \u2212wij)2/(2\u03c32)}, where y \u2208R and \u03c32 is the variance parameter, are commonly used to model continuous responses, such as GDP, interest rate, and consumer index (Bai 2003, Bai & Li 2012, Stock & Watson 2016). Moreover, depending on the the observed responses, different types of function pij\u2019s can be used to model the response from each item j \u2208[q]. Therefore, mixed types of data, which are common in educational measurements (Rijmen et al. 2003) and macroeconomic applications (Wang 2022), can also be analyzed by our proposed model. 8 Remark 2. In addition to testing fairness, the considered model finds wide-ranging applications in the real world. For instance, in genomics, the gene expression status may depend on unmeasured confounders or latent biological factors and also be associated with the variables of interest including medical treatment, disease status, and gender (Wang et al. 2017, Du et al. 2023). The covariate-adjusted general factor model helps to investigate the effects of the variables of interest on gene expressions, controlling for the latent factors (Du et al. 2023). This setting is also applicable to other scenarios, such as brain imaging, where the activity of a brain region may depend on measurable spatial distance from neighboring regions and latent structures due to unmodeled factors (Leek & Storey 2008). To analyze large-scale measurement data, we aim to develop a computationally efficient estimation method and to provide inference theory for quantifying uncertainty in the estimation. Motivated by recent work in high-dimensional factor analysis, we treat the latent factors as fixed parameters and apply a joint maximum likelihood method for estimation (Bai 2003, Fan et al. 2013, Chen et al. 2020). Specifically, we let the collection of the item responses from n independent subjects be Y = (Y1, . . . , Yn)\u22ba n\u00d7q and the design matrix of observed covariates to be X = (X1, . . . , Xn)\u22ba n\u00d7p. For model parameters, the discrimination parameters for all q items are denoted as \u0393 = (\u03b31, . . . , \u03b3q)\u22ba q\u00d7K, while the intercepts and the covariate effects for all q items are denoted as B = (\u03b21, . . . , \u03b2q)\u22ba q\u00d7p. The latent factors from all n subjects are U = (U1, . . . , Un)\u22ba n\u00d7K. Then, the joint log-likelihood function can be written as follows: L(Y | \u0393, U, B, X) = 1 nq n X i=1 q X j=1 lij(\u03b2j0 + \u03b3\u22ba j Ui + \u03b2\u22ba jcXc i ), (2) where the function lij(wij) = log pij(Yij|wij) is the individual log-likelihood function with wij = \u03b2j0 + \u03b3\u22ba j Ui + \u03b2\u22ba jcXc i . We aim to obtain (b \u0393, b U, b B) from maximizing the joint likelihood function L(Y | \u0393, U, B, X). While the estimators can be computed efficiently by maximizing the joint likelihood 9 function through an alternating maximization algorithm (Collins et al. 2002, Chen et al. 2019), challenges emerge for performing statistical inference on the model parameters. \u2022 One challenge concerns the model identifiability. Without additional constraints, the covariate effects are not identifiable due to the incorporation of covariates and their potential dependence on latent factors. The latent factors and factor loadings encounter similar identifiability issues as in traditional factor analysis (Bai & Li 2012, Fan et al. 2013). Ensuring that the model is statistically identifiable is the fundamental prerequisite for achieving model reliability and making valid inferences (Allman et al. 2009, Gu & Xu 2020). \u2022 Another challenge arises from the nonlinearity of our proposed model. In the existing literature, most studies focus on the statistical inference for our proposed setting in the context of linear models (Bai & Li 2012, Fan et al. 2013, Wang et al. 2017). On the other hand, settings with general log-likelihood function lij(wij), including covariateadjusted logistic and probit factor models, are less investigated. Common techniques for linear models are not applicable to the considered general nonlinear model setting. Motivated by these challenges, we propose interpretable and practical identifiability conditions in Section 3.1. We then incorporate these conditions into the joint-likelihood-based estimation method in Section 3.2. Furthermore, we introduce a novel inference framework for performing statistical inference on \u03b2j, \u03b3j, and Ui in Section 4. 3 Method 3.1 Model Identifiability Identifiability issues commonly occur in latent variable models (Allman et al. 2009, Bai & Li 2012, Xu 2017). The proposed model in (1) has two major identifiability issues. The first issue is that the proposed model remains unchanged after certain linear transformations of 10 both B and U, causing the covariate effects together with the intercepts, represented by B, and the latent factors, denoted by U, to be unidentifiable. The second issue is that the model is invariant after an invertible transformation of both U and \u0393 as in the linear factor models (Bai & Li 2012, Fan et al. 2013), causing the latent factors U and factor loadings \u0393 to be undetermined. Specifically, under the model setup in (1), we define the joint probability distribution of responses to be P(Y | \u0393, U, B, X) = Qn i=1 Qq j=1 pij(Yij|wij). The model parameters are identifiable if and only if for any response Y, there does not exist (\u0393, U, B) \u0338= (e \u0393, e U, e B) such that P(Y | \u0393, U, B, X) = P(Y | e \u0393, e U, e B, X). The first issue concerning the identifiability of B and U is that for any (\u0393, U, B) and any transformation matrix A, there exist e \u0393 = \u0393, e U = U + XA\u22ba, and e B = B \u2212\u0393A such that P(Y | \u0393, U, B, X) = P(Y | e \u0393, e U, e B, X). This identifiability issue leads to the indeterminacy of the covariate effects and latent factors. The second issue is related to the identifiability of U and \u0393. For any (e \u0393, e U, e B) and any invertible matrix G, there exist \u00af \u0393 = e \u0393(G\u22ba)\u22121, \u00af U = e UG, and \u00af B = e B such that P(Y | e \u0393, e U, e B, X) = P(Y | \u00af \u0393, \u00af U, \u00af B, X). This causes the latent factors and factor loadings to be unidentifiable. Remark 3. Intuitively, the unidentifiable e B = B \u2212\u0393A can be interpreted to include both direct and indirect effects of X on response Y. We take the intercept and covariate effect on the first item ( e \u03b21) as an example and illustrate it in Figure 2. One part of e \u03b21 is the direct effect from X onto Y (see the orange line in the left panel), whereas another part of e \u03b21 may be explained through the latent factors U, as the latent factors are unobserved and there are potential correlations between latent factors and observed covariates. The latter part of e \u03b21 can be considered as the indirect effect (see the blue line in the right panel). 11 Xi Yi1 Ui Yi2 Yi,q\u22121 Yiq \u2026 \u2026 \u03b21 \u03b22 \u03b2q\u22121 \u03b2q \u2026 \u03b31 \u03b32 \u03b3q\u22121 \u03b3q \u2026 Xi Yi1 Ui Yi2 Yi,q\u22121 Yiq \u2026 \u2026 \u03b21 \u03b22 \u03b2q\u22121 \u03b2q \u2026 \u03b31 \u03b32 \u03b3q\u22121 \u03b3q \u2026 Figure 2: The direct effects (orange solid line in the left panel) and the indirect effects (blue solid line in the right panel) for item 1. The first identifiability issue is a new challenge introduced by the covariate adjustment in the model, whereas the second issue is common in traditional factor models (Bai & Li 2012, Fan et al. 2013). Considering the two issues together, for any (\u0393, U, B), A, and G, there exist transformations e \u0393 = \u0393(G\u22ba)\u22121, e U = (U + XA\u22ba)G, and e B = B \u2212\u0393A such that P(Y | \u0393, U, B, X) = P(Y | e \u0393, e U, e B, X). In the rest of this subsection, we propose identifiability conditions to address these issues. For notation convenience, throughout the rest of the paper, we define \u03d5\u2217= (\u0393\u2217, U\u2217, B\u2217) as the true parameters. Identifiability Conditions As described earlier, the correlation between the design matrix of covariates X and the latent factors U\u2217results in the identifiability issue of B\u2217. In the psychometrics literature, the intercept \u03b2\u2217 j0 is commonly referred to as the difficulty parameter, while \u03b2\u2217 jc represents the effects of observed covariates, namely DIF effects, on the response to item j (Reckase 2009, Holland & Wainer 2012). The different scientific interpretations motivate us to develop different identifiability conditions for \u03b2\u2217 j0 and \u03b2\u2217 jc, respectively. Specifically, we propose a centering condition on U\u2217to ensure the identifiability of the intercept \u03b2\u2217 j0 for all items j \u2208[q]. On the other hand, to identify the covariate effects \u03b2\u2217 jc, a natural idea is to impose the covariate effects \u03b2\u2217 jc for all items j \u2208[q] to be sparse, as shown in many regularized methods and item purification methods (Candell & Drasgow 1988, Fidalgo et al. 2000, Bauer et al. 2020, Belzak & Bauer 2020). In Chen et al. (2023a), 12 an interpretable identifiability condition is proposed for selecting sparse covariate effects, yet this condition is specific to uni-dimensional covariates. Motivated by Chen et al. (2023a), we propose the following minimal \u21131 condition applicable to general cases where the covariates are multi-dimensional. To better present the identifiability conditions, we write A = (a0, a1, . . . , ap\u2217) \u2208RK\u00d7p and define Ac = (a1, . . . , ap\u2217) \u2208RK\u00d7p\u2217as the part applied to the covariate effects. Condition 1. (i) Pn i=1 U \u2217 i = 0K. (ii) Pq j=1 \u2225\u03b2\u2217 jc\u22251 < Pq j=1 \u2225\u03b2\u2217 jc \u2212A\u22ba c\u03b3\u2217 j \u22251 for any Ac \u0338= 0. Condition 1(i) assumes the latent abilities U\u2217are centered to ensure the identifiability of the intercepts \u03b2\u2217 j0\u2019s, which is commonly assumed in the item response theory literature (Reckase 2009). Condition 1(ii) is motivated by practical applications. For instance, in educational testing, practitioners need to identify and remove biased test items, correspondingly, items with non-zero covariate effects (\u03b2\u2217 js \u0338= 0). In practice, most of the designed items are unbiased, and therefore, it is reasonable to assume that the majority of items have no covariate effects, that is, the covariate effects \u03b2\u2217 jc\u2019s are sparse (Holland & Wainer 2012, Chen et al. 2023a). Next, we present a sufficient and necessary condition for Condition 1(ii) to hold. Proposition 1. Condition 1(ii) holds if and only if for any v \u2208RK \\ {0K}, q X j=1 \f \fv\u22ba\u03b3\u2217 j \f \fI(\u03b2\u2217 js = 0) > q X j=1 sign(\u03b2\u2217 js)v\u22ba\u03b3\u2217 j I(\u03b2\u2217 js \u0338= 0), \u2200s \u2208[p\u2217]. (3) Remark 4. Proposition 1 implies that Condition 1(ii) holds when {j : \u03b2\u2217 js \u0338= 0} is separated into {j : \u03b2\u2217 js > 0} and {j : \u03b2\u2217 js < 0} in a balanced way. With diversified signs of \u03b2\u2217 js, Proposition 1 holds when a considerable proportion of test items have no covariate effect (\u03b2\u2217 js \u0338= 0). For example, when \u03b3\u2217 j = m1(k) K with m > 0, Condition 1(ii) holds if and only if Pq j=1 |m|{\u2212I(\u03b2\u2217 js/m > 0) + I(\u03b2\u2217 js/m \u22640)} > 0 and Pq j=1 |m|{\u2212I(\u03b2\u2217 js/m \u22650) + I(\u03b2\u2217 js/m < 0)} < 0. With slightly more than q/2 items correspond to \u03b2\u2217 js = 0, Condition 1(ii) holds. Moreover, if #{j : \u03b2\u2217 js > 0} and #{j : \u03b2\u2217 js < 0} are comparable, then Condition 1(ii) holds even when less than q/2 items correspond to \u03b2\u2217 js = 0 and more than q/2 items correspond 13 to \u03b2\u2217 js \u0338= 0. Though assuming a \u201csparse\u201d structure, our assumption here differs from existing high-dimensional literature. In high-dimensional regression models, the covariate coefficient when regressing the dependent variable on high-dimensional covariates, is often assumed to be sparse, with the proportion of the non-zero covariate coefficients asymptotically approaching zero. In our setting, Condition 1(ii) allows for relatively dense settings where the proportion of items with non-zero covariate effects is some positive constant. To perform simultaneous estimation and inference on \u0393\u2217and U\u2217, we consider the following identifiability conditions to address the second identifiability issue. Condition 2. (i) (U\u2217)\u22baU\u2217is diagonal. (ii) (\u0393\u2217)\u22ba\u0393\u2217is diagonal. (iii) n\u22121(U\u2217)\u22baU\u2217= q\u22121(\u0393\u2217)\u22ba\u0393\u2217. Condition 2 is a set of widely used identifiability conditions in the factor analysis literature (Bai 2003, Bai & Li 2012, Wang 2022). For practical and theoretical benefits, we impose Condition 2 to address the identifiability issue related to G. It is worth mentioning that this condition can be replaced by other identifiability conditions. For true parameters satisfying any identifiability condition, we can always find a transformation such that the transformed parameters satisfy our proposed Conditions 1\u20132 and the proposed estimation method and theoretical results in the subsequent sections still apply, up to such a transformation. 3.2 Joint Maximum Likelihood Estimation In this section, we introduce a joint-likelihood-based estimation method for the covariate effects B, the latent factors U, and factor loadings \u0393 simultaneously. Incorporating Conditions 1\u20132 into the estimation procedure, we obtain the maximum joint-likelihood-based estimators for \u03d5\u2217= (\u0393\u2217, U\u2217, B\u2217) that satisfy the proposed identifiability conditions. With Condition 1, we address the identifiability issue related to the transformation matrix A. Specifically, for any parameters \u03d5 = (\u0393, U, B), there exists a matrix A\u2217= (a\u2217 0, A\u2217 c) with A\u2217 c = argminAc\u2208RK\u00d7p\u2217 Pq j=1 \u2225\u03b2jc \u2212A\u22ba c\u03b3j\u22251 and a\u2217 0 = \u2212n\u22121 Pn i=1(Ui + A\u2217 cXc i ) such that 14 the transformed matrices U\u2217= U + X(A\u2217)\u22baand B\u2217= B \u2212\u0393A\u2217satisfy Condition 1. The transformation idea naturally leads to the following estimation methodology for B\u2217. To estimate B\u2217and U\u2217that satisfy Condition 1, we first obtain the maximum likelihood estimator b \u03d5 = (b \u0393, b U, b B) by b \u03d5 = argmin \u03d5\u2208\u2126\u03d5 \u2212L(Y | \u03d5, X), (4) where the parameter space \u2126\u03d5 is given as \u2126\u03d5 = {\u03d5 : \u2225\u03d5\u2225max \u2264C} for some large C. To solve (4), we employ an alternating minimization algorithm. Specifically, for steps t = 0, 1, . . ., we compute b \u0393(t+1), b B(t+1) = argmin \u0393\u2208Rq\u00d7K, B\u2208Rq\u00d7p \u2212L(Y | \u0393, U(t), B, X); b U(t+1) = argmin U\u2208Rn\u00d7K \u2212L(Y | \u0393(t+1), U, B(t+1), X), until the quantity max{\u2225b \u0393(t+1) \u2212b \u0393(t)\u2225F, \u2225b U(t+1) \u2212b U(t)\u2225F, \u2225b B(t+1) \u2212b B(t)\u2225F} is less than some pre-specified tolerance value for convergence. We then estimate Ac by minimizing the \u21131norm b Ac = argmin Ac\u2208RK\u00d7p\u2217 q X j=1 \u2225b \u03b2jc \u2212A\u22ba c b \u03b3j\u22251. (5) Next, we estimate b a0 = \u2212n\u22121 Pn i=1( b Ui + b AcXc i ) and let b A = (b a0, b Ac). Given the estimators b A, b \u0393, and b B, we then construct b B\u2217= b B \u2212b \u0393b A and e U = b U + Xb A\u22ba such that Condition 1 holds. Recall that Condition 2 addresses the identifiability issue related to the invertible matrix G. Specifically, for any parameters (\u0393, U), there exists a matrix G\u2217such that Condition 2 holds for U\u2217= (U+X(A\u2217)\u22ba)G\u2217and \u0393\u2217= \u0393(G\u2217)\u2212\u22ba. Let U = diag(\u03f11, . . . , \u03f1K) be a diagonal 15 matrix that contains the K eigenvalues of (nq)\u22121(\u0393\u22ba\u0393)1/2(U + XA\u22ba)\u22ba(U + XA\u22ba) (\u0393\u22ba\u0393)1/2 and let V be a matrix that contains its corresponding eigenvectors. We set G\u2217= (q\u22121\u0393\u22ba\u0393)1/2 VU \u22121/4. To further estimate \u0393\u2217and U\u2217, we need to obtain an estimator for the invertible matrix G\u2217. Given the maximum likelihood estimators obtained in (4) and b A in (5), we estimate G\u2217via b G = (q\u22121b \u0393\u22bab \u0393)1/2 b V b U \u22121/4 where b U and b V are matrices that contain the eigenvalues and eigenvectors of (nq)\u22121(b \u0393\u22bab \u0393)1/2( b U+Xb A\u22ba)\u22ba( b U+Xb A\u22ba) (b \u0393\u22bab \u0393)1/2, respectively. With b G and b A, we now obtain the following transformed estimators that satisfy Condition 2: b \u0393\u2217= b \u0393( b G\u22ba)\u22121 and b U\u2217= ( b U + Xb A\u22ba) b G. To quantify the uncertainty of the proposed estimators, we will show that the proposed estimators are asymptotically normally distributed. Specifically, in Theorem 2 of Section 4, we establish the asymptotic normality result for b \u03b2\u2217 j, which allows us to make inference on the covariate effects \u03b2\u2217 j. Moreover, as the latent factors U \u2217 i and factor loadings \u03b3\u2217 j often have important interpretations in domain sciences, we are also interested in the inference on parameters U \u2217 i and \u03b3\u2217 j . In Theorem 2, we also derive the asymptotic distributions for estimators b U \u2217 i and b \u03b3\u2217 j , providing inference results for parameters U \u2217 i and \u03b3\u2217 j . 4 Theoretical Results We propose a novel framework to establish the estimation consistency and asymptotic normality for the proposed joint-likelihood-based estimators b \u03d5\u2217= (b \u0393\u2217, b U\u2217, b B\u2217) in Section 3. To establish the theoretical results for b \u03d5\u2217, we impose the following regularity assumptions. Assumption 1. There exist constants M > 0, \u03ba > 0 such that: (i) \u03a3\u2217 u = limn\u2192\u221en\u22121(U\u2217)\u22baU\u2217exists and is positive definite. For i \u2208[n], \u2225U \u2217 i \u22252 \u2264M. (ii) \u03a3\u2217 \u03b3 = limq\u2192\u221eq\u22121(\u0393\u2217)\u22ba\u0393\u2217exists and is positive definite. For j \u2208[q], \u2225\u03b3\u2217 j \u22252 \u2264M. (iii) \u03a3x = limn\u2192\u221en\u22121 Pn i=1 XiX\u22ba i exists and 1/\u03ba2 \u2264\u03bbmin(\u03a3x) \u2264\u03bbmax(\u03a3x) \u2264\u03ba2. For i \u2208[n], maxi \u2225Xi\u2225\u221e\u2264M. 16 (iv) \u03a3\u2217 ux = limn\u2192\u221en\u22121 Pn i=1 U \u2217 i X\u22ba i exists and \u2225\u03a3\u2217 ux\u03a3\u22121 x \u22251,\u221e\u2264M. The eigenvalues of (\u03a3\u2217 u \u2212\u03a3\u2217 ux\u03a3\u22121 x (\u03a3\u2217 ux)\u22ba)\u03a3\u2217 \u03b3 are distinct. Assumptions 1 is commonly used in the factor analysis literature. In particular, Assumptions 1(i)\u2013(ii) correspond to Assumptions A-B in Bai (2003) under linear factor models, ensuring the compactness of the parameter space on U\u2217and \u0393\u2217. Under nonlinear factor models, such conditions on compact parameter space are also commonly assumed (Wang 2022, Chen et al. 2023b). Assumption 1(iii) is standard regularity conditions for the nonlinear setting that is needed to establish the concentration of the gradient and estimation error for the model parameters when p diverges. In addition, Assumption 1(iv) is a crucial identification condition; similar conditions have been imposed in the existing literature such as Assumption G in Bai (2003) in the context of linear factor models and Assumption 6 in Wang (2022) in the context of nonlinear factor models without covariates. Assumption 2. For any i \u2208[n] and j \u2208[q], assume that lij(\u00b7) is three times differentiable, and we denote the first, second, and third order derivatives of lij(wij) with respect to wij as l\u2032 ij(wij), l\u2032\u2032 ij(wij), and l\u2032\u2032\u2032 ij(wij), respectively. There exist M > 0 and \u03be \u22654 such that E(|l\u2032 ij(wij)|\u03be) \u2264M and |l\u2032 ij(wij)| is sub-exponential with \u2225l\u2032 ij(wij)\u2225\u03c61 \u2264M. Furthermore, we assume E{l\u2032 ij(w\u2217 ij)} = 0. Within a compact space of wij, we have bL \u2264\u2212l\u2032\u2032 ij(wij) \u2264bU and |l\u2032\u2032\u2032 ij(wij)| \u2264bU for bU > bL > 0. Assumption 2 assumes smoothness on the log-likelihood function lij(wij). In particular, it assumes sub-exponential distributions and finite fourth-moments of the first order derivatives l\u2032 ij(wij). For commonly used linear or nonlinear factor models, the assumption is not restrictive and can be satisfied with a large \u03be. For instance, consider the logistic model with l\u2032 ij(wij) = Yij \u2212exp(wij)/{1+exp(wij)}, we have |l\u2032 ij(wij)| \u22641 and \u03be can be taken as \u221e. The boundedness conditions for l\u2032\u2032 ij(wij) and l\u2032\u2032\u2032 ij(wij) are necessary to guarantee the convexity of the joint likelihood function. In a special case of linear factor models, l\u2032\u2032 ij(wij) is a constant and the boundedness conditions naturally hold. For popular nonlinear models such as lo17 gistic factor models, probit factor models, and Poisson factor models, the boundedness of l\u2032\u2032 ij(wij) and l\u2032\u2032\u2032 ij(wij) can also be easily verified. Assumption 3. For \u03be specified in Assumption 2 and a sufficiently small \u03f5 > 0, we assume as n, q, p \u2192\u221e, p p n \u2227(pq) (nq)\u03f5+3/\u03be \u21920. (6) Assumption 3 is needed to ensure that the derivative of the likelihood function equals zero at the maximum likelihood estimator with high probability, a key property in the theoretical analysis. In particular, we need the estimation errors of all model parameters to converge to 0 uniformly with high probability. Such uniform convergence results involve delicate analysis of the convexity of the objective function, for which technically we need Assumption 3. For most of the popularly used generalized factor models, \u03be can be taken as any large value as discussed above, thus (nq)\u03f5+3/\u03be is of a smaller order of p n \u2227(pq), given small \u03f5. Specifically, Assumption 3 implies p = o(n1/2 \u2227q) up to a small order term, an asymptotic regime that is reasonable for many educational assessments. Next, we impose additional assumptions crucial to establishing the theoretical properties of the proposed estimators. One challenge for theoretical analysis is to handle the dependence between the latent factors U\u2217and the design matrix X. To address this challenge, we employ the following transformed U0 that are orthogonal with X, which plays an important role in establishing the theoretical results (see Supplementary Materials for details). In particular, for i \u2208[n], we let U 0 i = (G\u2021)\u22ba(U \u2217 i \u2212A\u2021Xi). Here G\u2021 = (q\u22121(\u0393\u2217)\u22ba\u0393\u2217)1/2 V\u2217(U \u2217)\u22121/4 and A\u2021 = (U\u2217)\u22baX(X\u22baX)\u22121, where U \u2217= diag(\u03f1\u2217 1, . . . , \u03f1\u2217 K) with diagonal elements being the K eigenvalues of (nq)\u22121((\u0393\u2217)\u22ba\u0393\u2217)1/2(U\u2217)\u22ba(In\u2212Px)U\u2217((\u0393\u2217)\u22ba\u0393\u2217)1/2 with Px = X(X\u22baX)\u22121X\u22baand V\u2217containing the matrix of corresponding eigenvectors. Under this transformation for U 0 i , we further define \u03b30 j = (G\u2021)\u22121\u03b3\u2217 j and \u03b20 j = \u03b2\u2217 j + (A\u2021)\u22ba\u03b3\u2217 j for j \u2208[q], and write Z0 i = ((U 0 i )\u22ba X\u22ba i )\u22baand w0 ij = (\u03b30 j )\u22baU 0 i + (\u03b20 j)\u22baXi. These transformed parameters \u03b30 j \u2019s, U 0 i \u2019s, and \u03b20 j\u2019s give the same joint likelihood value as that of the true parameters \u03b3\u2217 j \u2019s, U \u2217 i \u2019s and \u03b2\u2217 j\u2019s, which 18 facilitate our theoretical understanding of the joint-likelihood-based estimators. Assumption 4. (i) For any j \u2208[q], \u2212n\u22121 Pn i=1 l\u2032\u2032 ij(w0 ij)Z0 i (Z0 i )\u22ba p \u2192\u03a80 jz for some positive definite matrix \u03a80 jz and n\u22121/2 Pn i=1 l\u2032 ij(w0 ij)Z0 i d \u2192N(0, \u21260 jz). (ii) For any i \u2208[n], \u2212q\u22121 Pq j=1 l\u2032\u2032 ij(w0 ij)\u03b30 j (\u03b30 j )\u22ba p \u2192\u03a80 i\u03b3 for some positive definite matrix \u03a80 i\u03b3 and q\u22121/2 Pq j=1 l\u2032 ij(w0 ij)\u03b30 j d \u2192N(0, \u21260 i\u03b3). Assumption 4 is a generalization of Assumption F(3)-(4) in Bai (2003) for linear models to the nonlinear setting. Specifically, we need Assumption 4(i) to derive the asymptotic distributions of the estimators b \u03b2\u2217 j and b \u03b3\u2217 j , and Assumption 4(ii) is used for establishing the asymptotic distribution of b U \u2217 i . Note that these assumptions are imposed on the loglikelihood derivative functions evaluated at the true parameters w0 ij, Z0 i , and \u03b30 j . In general, for the popular generalized factor models, such assumptions hold with mild conditions. For example, under linear models, l\u2032 ij(wij) is the random error and l\u2032\u2032 ij(wij) is a constant. Then \u03a80 jz and \u03a80 i\u03b3 naturally exist and are positive definite followed by Assumption 1. The limiting distributions of n\u22121/2 Pn i=1 l\u2032 ij(w0 ij)Z0 i and q\u22121/2 Pq j=1 l\u2032 ij(w0 ij)\u03b30 j can be derived by the central limit theorem under standard regularity conditions. Under logistic and probit models, l\u2032 ij(wij) and l\u2032\u2032 ij(wij) are both finite inside a compact parameters space and similar arguments can be applied to show the validity of Assumption 4. We present the following assumption to establish the theoretical properties of the transformed matrix b A as defined in (5). In particular, we define A0 = (G\u2021)\u22baA\u2021 and write A0 = (a0 0, . . . , a0 p\u2217)\u22ba. Note that the estimation problem of (5) is related to the median regression problem with measurement errors. To understand the properties of this estimator, following existing M-estimation literature (He & Shao 1996, 2000), we define \u03c80 js(a) = \u03b30 j sign{\u03b20 js + (\u03b30 j )\u22ba(a \u2212a0 s)} and \u03c7s(a) = Pq j=1 \u03c80 js(a) for j \u2208[q] and s \u2208[p\u2217]. We further define a perturbed version of \u03c80 js(a), denoted as \u03c8js(a, \u03b4js), as follows: \u03c8js(a, \u03b4js) = \u0010 \u03b30 j + \u0002 \u03b4js \u221an \u0003 [1:K] \u0011 sign n \u03b20 js + \u0002 \u03b4js \u221an \u0003 K+1 \u2212(\u03b30 j + \u0002 \u03b4js \u221an \u0003 [1:K])\u22ba(a \u2212a0 s) o , s \u2208[p\u2217] 19 where the perturbation \u03b4js = \uf8eb \uf8ec \uf8ed IK 0 0 (1(p) s )\u22ba \uf8f6 \uf8f7 \uf8f8 \u0010 \u2212 n X i=1 l\u2032\u2032 ij(w0 ij)Z0 i (Z0 i )\u22ba\u0011\u22121\u0010\u221an n X i=1 l\u2032 ij(w0 ij)Z0 i \u0011 , is asymptotically normally distributed by Assumption 4. We define b \u03c7s(a) = Pq j=1 E\u03c8js(a, \u03b4js). Assumption 5. For \u03c7s(a), we assume that there exists some constant c > 0 such that mina\u0338=0 |q\u22121\u03c7s(a)| > c holds for all s \u2208[p\u2217]. Assume there exists as0 for each s \u2208[p\u2217] such that b \u03c7s(as0) = 0 with p\u221an\u2225\u03b1s0\u2225\u21920. In a neighbourhood of \u03b1s0, b \u03c7s(a) has a nonsingular derivative such that {q\u22121\u2207ab \u03c7s(\u03b1s0)}\u22121 = O(1) and q\u22121|\u2207ab \u03c7s(a)\u2212\u2207ab \u03c7s(\u03b1s0)| \u2264k|a\u2212\u03b1s0|. We assume \u03b9nq,p := max \b \u2225\u03b1s0\u2225, q\u22121 Pq j=1 \u03c8js(as0, \u03b4js) \t = o \u0000(p\u221an)\u22121\u0001 . Assumption 5 is crucial in addressing the theoretical difficulties of establishing the consistent estimation for A0, a challenging problem related to median regression with weakly dependent measurement errors. In Assumption 5, we treat the minimizer of | Pq j=1 \u03c8(a, \u03b4js)| as an M-estimator and adopt the Bahadur representation results in He & Shao (1996) for the theoretical analysis. For an ideal case where \u03b4js are independent and normally distributed with finite variances, which corresponds to the setting in median regression with measurement errors (He & Liang 2000), these assumptions can be easily verified. Assumption 5 discusses beyond such an ideal case and covers general settings. In addition to independent and Gaussian measurement errors, this condition also accommodates the case when \u03b4js are asymptotically normal and weakly dependent with finite variances, as implied by Assumption 4 and the conditional independence of Yij. We want to emphasize that Assumption 5 allows for both sparse and dense settings of the covariate effects. Consider an example of K = p = 1 and \u03b3j = 1 for j \u2208[q]. Suppose \u03b2\u2217 js is zero for all j \u2208[q1] and nonzero otherwise. Then this condition is satisfied as long as #{j : \u03b2\u2217 js > 0} and #{j : \u03b2\u2217 js < 0} are comparable, even when the sparsity level q1 is small. Under the proposed assumptions, we next present our main theoretical results. 20 Theorem 1 (Average Consistency). Suppose the true parameters \u03d5\u2217= (\u0393\u2217, U\u2217, B\u2217) satisfy identifiability conditions 1\u20132. Under Assumptions 1\u20135, we have q\u22121\u2225b B\u2217\u2212B\u2217\u22252 F = Op \u0012p2 log qp n + p log n q \u0013 ; (7) if we further assume p3/2(nq)\u03f5+3/\u03be(p1/2n\u22121/2 + q\u22121/2) = o(1), then we have n\u22121\u2225b U\u2217\u2212U\u2217\u22252 F = Op \u0012p log qp n + log n q \u0013 ; (8) q\u22121\u2225b \u0393\u2217\u2212\u0393\u2217\u22252 F = Op \u0012p log qp n + log n q \u0013 . (9) Theorem 1 presents the average convergence rates of b \u03d5\u2217. Consider an oracle case with U\u2217 and \u0393\u2217known, the estimation of B\u2217reduces to an M-estimation problem. For M-estimators under general parametric models, it can be shown that the optimal convergence rates in squared \u21132-norm is Op(p/n) under p(log p)3/n \u21920 (He & Shao 2000). In terms of our average convergence rate on b B\u2217, the first term in (7), n\u22121p2 log(qp), approximately matches the convergence rate Op(p/n) up to a relatively small order term of p log(qp). The second term in (7), q\u22121p log n, is mainly due to the estimation error for the latent factor U\u2217. In educational applications, it is common to assume the number of subjects n is much larger than the number of items q. Under such a practical setting with n \u226bq and p relatively small, the term q\u22121 log n in (8) dominates in the derived convergence rate of b U\u2217, which matches with the optimal convergence rate Op(q\u22121) for factor models without covariates (Bai & Li 2012, Wang 2022) up to a small order term. Remark 5. The additional condition p3/2(nq)\u03f5+3/\u03be(p1/2n\u22121/2 + q\u22121/2) = o(1) in Theorem 1 is used to handle the challenges related to the invertible matrix G that affects the theoretical properties of b U\u2217and b \u0393\u2217. It is needed for establishing the estimation consistency of b U\u2217and b \u0393\u2217 but not for that of b B\u2217. With sufficiently large \u03be and small \u03f5, this assumption is approximately p = o(n1/4 \u2227q1/3) up to a small order term. 21 Remark 6. One challenge in establishing the estimation consistency for b \u03d5\u2217arises from the unrestricted dependence structure between U\u2217and X. If we consider the ideal case where the columns of U\u2217and X are orthogonal, i.e., (U\u2217)\u22baX = 0K\u00d7p, then we can achieve comparable or superior convergence rates with less stringent assumptions. Specifically, with Assumptions 1\u20133 only, we can obtain the same convergence rates for b U\u2217and b \u0393\u2217as in (8) and (9), respectively. Moreover, with Assumptions 1\u20133, the average convergence rate for the consistent estimator of B\u2217is Op(n\u22121p log qp+q\u22121 log n), which is tighter than (7) by a factor of p. With estimation consistency results established, we next derive the asymptotic normal distributions for the estimators, which enable us to perform statistical inference on the true parameters. Theorem 2 (Asymptotic Normality). Suppose the true parameters \u03d5\u2217= (\u0393\u2217, U\u2217, B\u2217) satisfy identifiability conditions 1\u20132. Under Assumptions 1\u20135, we have the asymptotic distributions as follows. Denote \u03b6\u22122 nq,p = n\u22121p log qp + q\u22121log n. If p3/2\u221an(nq)3/\u03be\u03b6\u22122 nq,p \u21920, for any j \u2208[q] and a \u2208Rp with \u2225a\u22252 = 1, \u221ana\u22ba(\u03a3\u2217 \u03b2,j)\u22121/2( b \u03b2\u2217 j \u2212\u03b2\u2217 j) d \u2192N(0, 1), (10) where \u03a3\u2217 \u03b2,j = (\u2212(A0)\u22ba, Ip)(\u03a80 jz)\u22121\u21260 jz(\u03a80 jz)\u22121(\u2212(A0)\u22ba, Ip)\u22ba, and for any j \u2208[q], \u221an(\u03a3\u2217 \u03b3,j)\u22121/2(b \u03b3\u2217 j \u2212\u03b3\u2217 j ) d \u2192N(0, IK), (11) where \u03a3\u2217 \u03b3,j = G\u2021(IK, 0)(\u03a80 jz)\u22121\u21260 jz(\u03a80 jz)\u22121 (IK, 0)\u22ba(G\u2021)\u22ba. Furthermore, for any i \u2208[n], if q = O(n) and p3/2\u221aq(nq)3/\u03be\u03b6\u22122 nq,p \u21920, \u221aq(\u03a3\u2217 u,i)\u22121/2( b U \u2217 i \u2212U \u2217 i ) d \u2192N(0, IK), (12) where \u03a3\u2217 u,i = (G\u2021)\u2212\u22ba(\u03a80 i\u03b3)\u22121\u21260 i\u03b3(\u03a80 i\u03b3)\u22121(G\u2021)\u22121. 22 The asymptotic covariance matrices in Theorem 2 can be consistently estimated. Due to the space limitations, we defer the construction of the consistent estimators b \u03a3\u2217 \u03b2,j, b \u03a3\u2217 \u03b3,j, and b \u03a3\u2217 u,i to Supplementary Materials. Theorem 2 provides the asymptotic distributions for all individual estimators. In particular, with the asymptotic distributions and the consistent estimators b \u03a3\u2217 \u03b2,j for the asymptotic covariance matrices, we can perform hypothesis testing on \u03b2\u2217 js for j \u2208[q] and s \u2208[p\u2217]. We reject the null hypothesis \u03b2\u2217 js = 0 at significance level \u03b1 if |\u221an(b \u03c3\u2217 \u03b2,js)\u22121b \u03b2\u2217 js| > \u03a6\u22121(1 \u2212\u03b1/2), where (b \u03c3\u2217 \u03b2,js)2 is the (s + 1)-th diagonal entry in b \u03a3\u2217 \u03b2,j. For the asymptotic normality of b \u03b2\u2217 j, the condition p3/2\u221an(nq)3/\u03be(n\u22121p log qp+q\u22121 log n) \u2192 0 together with Assumption 3 gives p = o{n1/5 \u2227(q2/n)1/3} up to a small order term, and further implies n \u226aq2, which is consistent with established conditions in the existing factor analysis literature (Bai & Li 2012, Wang 2022). For the asymptotic normality of b U \u2217 i , the additional condition that q = O(n) is a reasonable assumption in educational applications where the number of items q is much fewer than the number of subjects n. In this case, the scaling conditions imply p = o{q1/3 \u2227(n2/q)1/5} up to a small order term. Similarly for the asymptotic normality of b \u03b3\u2217 j , the proposed conditions give p = o{n1/5 \u2227(q2/n)1/3} up to a small order term. Remark 7. Similar to the discussion in Remark 6, the challenges arising from the unrestricted dependence between U\u2217and X also affect the derivation of the asymptotic distributions for the proposed estimators. If we consider the ideal case with (U\u2217)\u22baX = 0K\u00d7p, we can establish the asymptotic normality for all individual estimators under Assumptions 1\u20134 only and weaker scaling conditions. Specifically, when (U\u2217)\u22baX = 0K\u00d7p, the scaling condition becomes p\u221an(nq)3/\u03be(n\u22121p log qp+q\u22121 log n) \u21920 for deriving asymptotic normality of b \u03b2\u2217 j and b \u03b3\u2217 j , which is milder than that for (10) and (11). 23 5 Simulation Study In this section, we study the finite-sample performance of the proposed joint-likelihoodbased estimator. We focus on the logistic latent factor model in (1) with pij(y | wij) = exp(wijy)/{1 + exp(wij)}, where wij = (\u03b3\u2217 j )\u22baU \u2217 i + (\u03b2\u2217 j)\u22baXi. The logistic latent factor model is commonly used in the context of educational assessment and is also referred to as the item response theory model (Mellenbergh 1994, Hambleton & Swaminathan 2013). We apply the proposed method to estimate B\u2217and perform statistical inference on testing the null hypothesis \u03b2\u2217 js = 0. We start with presenting the data generating process. We set the number of subjects n = {300, 500, 1000, 1500, 2000}, the number of items q = {100, 300, 500}, the covariate dimension p = {5, 10, 30}, and the factor dimension K = 2, respectively. We jointly generate Xc i and U \u2217 i from N(0, \u03a3) where \u03a3ij = \u03c4 |i\u2212j| with \u03c4 \u2208{0, 0.2, 0.5, 0.7}. In addition, we set the loading matrix \u0393\u2217 [,k] = 1(K) k \u2297vk, where \u2297is the Kronecker product and vk is a (q/K)-dimensional vector with each entry generated independently and identically from Unif[0.5, 1.5]. For the covariate effects B\u2217, we set the intercept terms to equal \u03b2\u2217 j0 = 0. For the remaining entries in B\u2217, we consider the following two settings: (1) sparse setting: \u03b2\u2217 js = \u03c1 for s = 1, . . . , p and j = 5s\u22124, . . . , 5s and other \u03b2\u2217 js are set to zero; (2) dense setting: \u03b2\u2217 js = \u03c1 for s = 1, . . . , p and j = Rsq/5 + 1, . . . , (Rs + 1)q/5 with Rs = s \u22125\u230as/5\u230b, and other \u03b2\u2217 js are set to zero. Here, the signal strength is set as \u03c1 \u2208{0.3, 0.5}. Intuitively, in the sparse setting, we set 5 items to be biased for each covariate whereas in the dense setting, 20% of items are biased items for each covariate. For better empirical stability, after reaching convergence in the proposed alternating maximization algorithm and transforming the obtained MLEs into ones that satisfy Conditions 1\u20132, we repeat another round of maximization and transformation. We take the significance level at 5% and calculate the averaged type I error based on all the entries \u03b2\u2217 js = 0 and the averaged power for all non-zero entries, over 100 replications. The averaged hypothesis testing results are presented in Figures 3\u20136 for p = 5 and p = 30, across different 24 settings. Additional numerical results for p = 10 are presented in the Supplementary Materials. 0.00 0.20 0.40 0.60 0.80 1.00 0.05 300 500 1000 1500 2000 n Power/Type I Errors p = 5, q = 100, rho = 0.3 0.00 0.20 0.40 0.60 0.80 1.00 0.05 300 500 1000 1500 2000 n Power/Type I Errors p = 5, q = 100, rho = 0.5 0.00 0.20 0.40 0.60 0.80 1.00 0.05 300 500 1000 1500 2000 n Power/Type I Errors p = 5, q = 300, rho = 0.3 0.00 0.20 0.40 0.60 0.80 1.00 0.05 300 500 1000 1500 2000 n Power/Type I Errors p = 5, q = 300, rho = 0.5 0.00 0.20 0.40 0.60 0.80 1.00 0.05 300 500 1000 1500 2000 n Power/Type I Errors p = 5, q = 500, rho = 0.3 0.00 0.20 0.40 0.60 0.80 1.00 0.05 300 500 1000 1500 2000 n Power/Type I Errors p = 5, q = 500, rho = 0.5 Figure 3: Powers and type I errors under sparse setting at p = 5. Red Circles ( ) denote correlation parameter \u03c4 = 0. Green triangles ( ) represent the case \u03c4 = 0.2. Blue squares ( ) indicate \u03c4 = 0.5. Purple crosses ( ) represent the \u03c4 = 0.7. 25 0.00 0.20 0.40 0.60 0.80 1.00 0.05 300 500 1000 1500 2000 n Power/Type I Errors p = 30, q = 100, rho = 0.3 0.00 0.20 0.40 0.60 0.80 1.00 0.05 300 500 1000 1500 2000 n Power/Type I Errors p = 30, q = 100, rho = 0.5 0.00 0.20 0.40 0.60 0.80 1.00 0.05 300 500 1000 1500 2000 n Power/Type I Errors p = 30, q = 300, rho = 0.3 0.00 0.20 0.40 0.60 0.80 1.00 0.05 300 500 1000 1500 2000 n Power/Type I Errors p = 30, q = 300, rho = 0.5 0.00 0.20 0.40 0.60 0.80 1.00 0.05 300 500 1000 1500 2000 n Power/Type I Errors p = 30, q = 500, rho = 0.3 0.00 0.20 0.40 0.60 0.80 1.00 0.05 300 500 1000 1500 2000 n Power/Type I Errors p = 30, q = 500, rho = 0.5 Figure 4: Powers and type I errors under sparse setting at p = 30. Red Circles ( ) denote correlation parameter \u03c4 = 0. Green triangles ( ) represent the case \u03c4 = 0.2. Blue squares ( ) indicate \u03c4 = 0.5. Purple crosses ( ) represent the \u03c4 = 0.7. 26 0.00 0.20 0.40 0.60 0.80 1.00 0.05 300 500 1000 1500 2000 n Power/Type I Errors p = 5, q = 100, rho = 0.3 0.00 0.20 0.40 0.60 0.80 1.00 0.05 300 500 1000 1500 2000 n Power/Type I Errors p = 5, q = 100, rho = 0.5 0.00 0.20 0.40 0.60 0.80 1.00 0.05 300 500 1000 1500 2000 n Power/Type I Errors p = 5, q = 300, rho = 0.3 0.00 0.20 0.40 0.60 0.80 1.00 0.05 300 500 1000 1500 2000 n Power/Type I Errors p = 5, q = 300, rho = 0.5 0.00 0.20 0.40 0.60 0.80 1.00 0.05 300 500 1000 1500 2000 n Power/Type I Errors p = 5, q = 500, rho = 0.3 0.00 0.20 0.40 0.60 0.80 1.00 0.05 300 500 1000 1500 2000 n Power/Type I Errors p = 5, q = 500, rho = 0.5 Figure 5: Powers and type I errors under dense setting at p = 5. Red Circles ( ) denote correlation parameter \u03c4 = 0. Green triangles ( ) represent the case \u03c4 = 0.2. Blue squares ( ) indicate \u03c4 = 0.5. Purple crosses ( ) represent the \u03c4 = 0.7. 27 0.00 0.20 0.40 0.60 0.80 1.00 0.05 300 500 1000 1500 2000 n Power/Type I Errors p = 30, q = 100, rho = 0.3 0.00 0.20 0.40 0.60 0.80 1.00 0.05 300 500 1000 1500 2000 n Power/Type I Errors p = 30, q = 100, rho = 0.5 0.00 0.20 0.40 0.60 0.80 1.00 0.05 300 500 1000 1500 2000 n Power/Type I Errors p = 30, q = 300, rho = 0.3 0.00 0.20 0.40 0.60 0.80 1.00 0.05 300 500 1000 1500 2000 n Power/Type I Errors p = 30, q = 300, rho = 0.5 0.00 0.20 0.40 0.60 0.80 1.00 0.05 300 500 1000 1500 2000 n Power/Type I Errors p = 30, q = 500, rho = 0.3 0.00 0.20 0.40 0.60 0.80 1.00 0.05 300 500 1000 1500 2000 n Power/Type I Errors p = 30, q = 500, rho = 0.5 Figure 6: Powers and type I errors under dense setting at p = 30. Red Circles ( ) denote correlation parameter \u03c4 = 0. Green triangles ( ) represent the case \u03c4 = 0.2. Blue squares ( ) indicate \u03c4 = 0.5. Purple crosses ( ) represent the \u03c4 = 0.7. 28 From Figures 3\u20136, we observe that the type I errors are well controlled at the significance level 5%, which is consistent with the asymptotic properties of b B\u2217in Theorem 2. Moreover, the power increases to one as the sample size n increases across all of the settings we consider. Comparing the left panel (\u03c1 = 0.3) to the right panel (\u03c1 = 0.5) in Figures 3\u20136, we see that the power increases as we increase the signal strength \u03c1. Comparing the plots in Figures 3\u20134 to the corresponding plots in Figures 5\u20136, we see that the powers under the sparse setting (Figures 3\u20134) are generally higher than that of the dense setting (Figures 5\u20136). Nonetheless, our proposed method is generally stable under both sparse and dense settings. In addition, we observe similar results when we increase the covariate dimension p from p = 5 (Figures 3 and 5) to p = 30 (Figures 4 and 6). We refer the reader to the Supplementary Materials for additional numerical results for p = 10. Moreover, we observe similar results when we increase the test length q from q = 100 (top row) to q = 500 (bottom row) in Figures 3\u20136. In terms of the correlation between X and U\u2217, we observe that while the power converges to one as we increase the sample size, the power decreases as the correlation \u03c4 increases. 6 Data Application We apply our proposed method to analyze the Programme for International Student Assessment (PISA) 2018 data2. PISA is a worldwide testing program that compares the academic performances of 15-year-old students across many countries (OECD 2019). More than 600,000 students from 79 countries/economies, representing a population of 31 million 15year-olds, participated in this program. The PISA 2018 used the computer-based assessment mode and the assessment lasted two hours for each student, with test items mainly evaluating students\u2019 proficiency in mathematics, reading, and science domains. A total of 930 minutes of test items were used and each student took different combinations of the test items. In addition to the assessment questions, background questionnaires were provided to collect students\u2019 information. 2The data can be downloaded from: https://www.oecd.org/pisa/data/2018database/ 29 In this study, we focus on PISA 2018 data from Taipei. The observed responses are binary, indicating whether students\u2019 responses to the test items are correct, and we use the popular item response theory model with the logit link (i.e., logistic latent factor model; Reckase 2009). Due to the block design nature of the large-scale assessment, each student was only assigned to a subset of the test items, and for the Taipei data, 86% response matrix is unobserved. Note that this missingness can be considered as conditionally independent of the responses given the students\u2019 characteristics. Our proposed method and inference results naturally accommodate such missing data and can be directly applied. Specifically, to accommodate the incomplete responses, we can modify the joint log-likelihood function in (2) into Lobs(Y | \u0393, U, B, X) = Pn i=1 P j\u2208Qi lij(\u03b3\u22ba j Ui + \u03b2\u22ba j Xi), where Qi defines the set of questions to which the responses from student i are observed. In this study, we include gender and 8 variables for school strata as covariates (p\u2217= 9). These variables record whether the school is public, in an urban place, etc. After data preprocessing, we have n = 6063 students and q = 194 questions. Following the existing literature (Reckase 2009, Millsap 2012), we take K = 3 to interpret the three latent abilities measured by the math, reading, and science questions. We apply the proposed method to estimate the effects of gender and school strata variables on students\u2019 responses. We obtain the estimators of the gender effect for each PISA question and construct the corresponding 95% confidence intervals. The constructed 95% confidence intervals for the gender coefficients are presented in Figure 7. There are 10 questions highlighted in red as their estimated gender effect is statistically significant after the Bonferroni correction. Among the reading items, there is only one significant item and the corresponding confidence interval is below zero, indicating that this question is biased towards female test-takers, conditioning on the students\u2019 latent abilities. Most of the confidence intervals corresponding to the biased items in the math and science sections are above zero, indicating that these questions are biased towards male test-takers. In social science research, it is documented that female students typically score better than male students 30 during reading tests, while male students often outperform female students during math and science tests (Quinn & Cooc 2015, Balart & Oosterveen 2019). Our results indicate that there may exist potential measurement biases resulting in such an observed gender gap in educational testing. Our proposed method offers a useful tool to identify such biased test items, thereby contributing to enhancing testing fairness by providing practitioners with valuable information for item calibration. Math Reading Science \u22126 \u22123 0 3 6 0 50 100 150 200 PISA Questions for TAP Gender Effect Estimator Figure 7: Confidence intervals for the effect of gender covariate on each PISA question using Taipei data. Red intervals correspond to confidence intervals for questions with significant gender bias after Bonferroni correction. (For illustration purposes, we omit the confidence intervals with the upper bounds exceeding 6 and the lower bounds below -6 in this figure). To further illustrate the estimation results, Table 1 lists the p-values for testing the gender effect for each of the identified 10 significant questions, along with the proportions of female and male test-takers who answered each question correctly. We can see that the signs of the estimated gender effect by our proposed method align with the disparities in the reported proportions between females and males. For example, the estimated gender effect corresponding to the item \u201cCM496Q01S Cash Withdrawal\u201d is positive with a p-value 31 Item code Item Title Female (%) Male (%) p-value Mathematics CM496Q01S Cash Withdrawal 51.29 58.44 2.77\u00d710\u22127 (+) CM800Q01S Computer Games 96.63 93.61 < 1 \u00d7 10\u22128 (\u2212) Reading CR466Q06S Work Right 91.91 86.02 1.95\u00d710\u22125 (\u2212) Science CS608Q01S Ammonoids 57.68 68.15 4.65\u00d710\u22125 (+) CS643Q01S Comparing Light Bulbs 68.57 73.41 1.08\u00d710\u22125 (+) CS643Q02S Comparing Light Bulbs2 63.00 57.50 4.64\u00d710\u22124 (\u2212) CS657Q03S Invasive Species 46.00 54.36 8.47\u00d710\u22125 (+) CS527Q04S Extinction of Dinosours3 36.19 50.18 8.13\u00d710\u22125 (+) CS648Q02S Habitable Zone 41.69 45.19 1.34\u00d710\u22124 (+) CS607Q01S Birds and Caterpillars 88.14 91.47 1.99\u00d710\u22124 (+) Table 1: Proportion of full credit in females and males to significant items of PISA2018 in Taipei. (+) and (\u2212) denote the items with positively and negatively estimated gender effects, respectively. of 2.77 \u00d7 10\u22127, implying that this question is statistically significantly biased towards male test-takers. This is consistent with the observation that in Table 1, 58.44% of male students correctly answered this question, which exceeds the proportion of females, 51.29%. Besides gender effects, we estimate the effects of school strata on the students\u2019 response and present the point and interval estimation results in the left panel of Figure 8. All the detected biased questions are from math and science sections, with 6 questions for significant effects of whether attending public school and 5 questions for whether residing in rural areas. To further investigate the importance of controlling for the latent ability factors, we compare results from our proposed method with the latent factors, to the results from directly regressing responses on covariates without latent factors. From the right panel of Figure 8, we can see that without conditioning on the latent factors, there are excessive items detected for the covariate of whether the school is public or private. On the other hand, there are no biased items detected if we only apply generalized linear regression to estimate the effect of the covariate of whether the school is in rural areas. 32 Math Reading Science \u22124 0 4 0 50 100 150 200 PISA Questions for TAP Public School Effect Estimator Public Math Reading Science \u22122 \u22121 0 1 2 0 50 100 150 200 PISA Questions for TAP Public School Effect Estimator Public \u2212 without latent variable Math Reading Science \u22124 0 4 0 50 100 150 200 PISA Questions for TAP Rural Region Effect Estimator Rural Math Reading Science \u22122 \u22121 0 1 2 0 50 100 150 200 PISA Questions for TAP Rural Region Effect Estimator Rural \u2212 without latent variable Figure 8: Confidence intervals for the effect of school stratum covariate on each PISA question. Red intervals correspond to confidence intervals for questions with significant school stratum bias after Bonferroni correction. 7 Discussion In this work, we study the covariate-adjusted generalized factor model that has wide interdisciplinary applications such as educational assessments and psychological measurements. In particular, new identifiability issues arise due to the incorporation of covariates in the model setup. To address the issues and identify the model parameters, we propose novel and interpretable conditions, crucial for developing the estimation approach and inference results. With model identifiability guaranteed, we propose a computationally efficient jointlikelihood-based estimation method for model parameters. Theoretically, we obtain the estimation consistency and asymptotic normality for not only the covariate effects but also latent factors and factor loadings. 33 There are several future directions motivated by the proposed method. In this manuscript, we focus on the case in which p grows at a slower rate than the number of subjects n and the number of items q, a common setting in educational assessments. It is interesting to further develop estimation and inference results under the high-dimensional setting in which p is larger than n and q. Moreover, in this manuscript, we assume that the dimension of the latent factors K is fixed and known. One possible generalization is to allow K to grow with n and q. Intuitively, an increasing latent dimension K makes the identifiability and inference issues more challenging due to the increasing degree of freedom of the transformation matrix. With the theoretical results in this work, another interesting related problem is to further develop simultaneous inference on group-wise covariate coefficients, which we leave for future investigation."
16
+ }
intro_8K/test_introduction_short_2404.16766v1.json ADDED
@@ -0,0 +1,17 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16766v1",
3
+ "title": "Prefix Text as a Yarn: Eliciting Non-English Alignment in Foundation Language Model",
4
+ "abstract": "While supervised fine-tuning (SFT) has been a straightforward approach for\ntailoring the output of foundation large language model (LLM) to specific\npreferences, concerns have been raised about the depth of this alignment, with\nsome critiques suggesting it is merely \"superficial\". We critically examine\nthis hypothesis within the scope of cross-lingual generation tasks, proposing\nthat the effectiveness of SFT may be constrained by its reliance on prior\ntokens to guide cross-lingual generation. Based on this crucial insight, and in\nresponse to the challenges posed by the costly and limited availability of\nnon-English data for SFT, we introduce a novel training-free alignment method\nnamed PreTTY, which employs minimal task-related prior tokens to bridge the\nfoundation LLM and the SFT LLM, achieving comparable performance without\ntraining. Experiments on machine translation and part-of-speech tagging across\neight languages demonstrate the efficacy of PreTTY in cross-lingual settings.\nRemarkably, by initiating the decoding process with only one or two prior\ntokens, foundation LLMs can achieve performance comparable to their SFT\ncounterparts. This method presents a cost-effective alternative to SFT and\nadvances the democratization of multilingual LLMs.",
5
+ "authors": "Runzhe Zhan, Xinyi Yang, Derek F. Wong, Lidia S. Chao, Yue Zhang",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "cs.CL",
9
+ "cats": [
10
+ "cs.CL",
11
+ "cs.AI"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "LLM Fairness",
15
+ "gt": "Supervised fine-tuning (SFT) refines large lan- guage models (LLMs) using task-specific instruc- tion data to enhance their capability to follow in- structions (Touvron et al., 2023; Peng et al., 2023) and to align their outputs with human preferences and safety considerations (Ouyang et al., 2022; Rafailov et al., 2023; Dong et al., 2023b; Yuan et al., 2023). This process is often termed \u201calign- ment\u201d, signifying the tailoring of model outputs *Work was done during a visit to Westlake University. \u0000 Co-corresponding authors. to conform to specific downstream requirements. Nevertheless, current research casts doubt on the necessity and potential adverse impacts of SFT. But the alignment achieved through SFT is often considered to be \u201csuperficial\u201d, with the process po- tentially repurposing pre-existing knowledge from pre-training to merely reshape outputs to meet spe- cific criteria (Zhou et al., 2023; Lin et al., 2023). It has been observed that even a small-scale SFT training dataset can produce significant alignment effects (Liu et al., 2023; Xia et al., 2024). On the other hand, recent empirical studies (Luo et al., 2023; Dong et al., 2023a) have raised concerns that SFT might hurt the knowledge acquired during its pre-training phase, leading to serious consequences like catastrophic forgetting. Not only is there no definitive consensus on the necessity of SFT, but the majority of these stud- ies also focus on monolingual tasks. LLMs still encounter challenges in handling complex cross- lingual generation tasks (Schioppa et al., 2023; Wang et al., 2023). Current research on cross- lingual alignment primarily seeks to extrapolate or align English capabilities to other languages us- ing the SFT paradigm (Zhang et al., 2023; Chai et al., 2024; Xu et al., 2024), yet there remains a gap in exploring the specific impacts of SFT-based cross-lingual alignment. Furthermore, given the potential risk of SFT leading to the forgetting of pre-training knowledge, the question of how to achieve cross-lingual alignment without training remains underexplored. To bridge these gaps, our study conducts an in- depth examination of the impact of SFT on cross- lingual generation. We investigate the influence of SFT on the decoding patterns of foundation models in cross-lingual contexts, hypothesizing that the success of SFT largely hinges on the selection of initial prior tokens that are critical for eliciting task- specific generation in the target language. Further- more, the observed decoding similarities between 1 arXiv:2404.16766v1 [cs.CL] 25 Apr 2024 Instruction: Translate the following sentence from English to Ukrainian: \u201cWe now have 4-month-old mice that are non-diabetic that used to be diabetic,\u201d he added. \"They're not cured, but they're no longer diabetic.\"\\n\"We now have 4-month- \u2026 \u041c\u0438 \u0442\u0435\u043f\u0435\u0440\u0456\u0448\u043d\u0456\u0445 4 \u043c\u0456\u0441\u044f\u0446\u0456\u0432 \u043c\u0430\u044e\u0442\u044c \u043c\u0438\u0448\u0435\u0439, \u044f\u043a\u0456 \u0440\u0430\u043d\u0456\u0448\u0435 \u0431\u0443\u043b\u0438 \u0434\u0456\u0430\u0431\u0435\u0442\u0438\u043a\u0430\u043c\u0438 \u2026 Foundation LLM SFT-tuned LLM SFT-based Alignment \"They're not cured, but they're no longer diabetic.\"\\n\"We now have 4-month- \u2026 Foundation LLM + Prior Tokens + SFT Pipeline Pretty: Prefix TexT as a Yarn ? ? Input: [Instruction, \u201c\u041c\u0438\u201d] SFT-like LLM \u041c\u0438 \u043d\u0430\u0440\u0435\u0448\u0442\u0456 \u043c\u0430\u043b\u0438 \u043c\u0438\u0448\u0435\u0439, \u0449\u043e \u043d\u0435\u043c\u0430\u044e\u0442\u044c \u0434\u0456\u0430\u0431\u0435\u0442\u0443, \u044f\u043a\u0456 \u0440\u0430\u043d\u0456\u0448\u0435 \u0431\u0443\u043b\u0438 \u0434\u0456\u0430\u0431\u0435\u0442\u0438\u043a\u0430\u043c\u0438 \u2026 1) Understand the alignment under cross-lingual setting. 2) Propose a training-free alignment method for non-English tasks. SFT Prior Refined Prior How does SFT change the model? Pseudo Prior High Resource Low Figure 1: Illustration of our research question and proposed Prefix TexT as a Yarn (PRETTY) framework. foundation and SFT models support the extension of the superficial alignment hypothesis to cross- lingual scenarios. Responding to these insights, we introduce a training-free alignment method named \u201cPRETTY\u201d for cross-lingual and non-English tasks. The Prefix TexTs act as a Yarn (PRETTY) linking the foundation LLM and the SFT LLM, eliciting the foundation LLM to exhibit near-SFT perfor- mance levels. Specifically, we augment the origi- nal input with a few tokens that serve as decoding priors, and then prompt the foundation LLM to re- sume decoding based on this modified input. In most cases, only one or two task-related prior to- kens are needed, and the method for constructing these prior tokens is flexible across various kinds of language resources, fostering the democratization of multilingual LLMs. We conducted experiments on machine transla- tion (Goyal et al., 2022), cross-lingual summariza- tion (Bhattacharjee et al., 2023) and non-English part-of-speech (POS) tagging (Liang et al., 2020) tasks across eight languages. These tasks exem- plify cross-lingual generation and multilingual lan- guage understanding, and they provide ample non- English test data to evaluate effectiveness across varying levels of resource availability. The exper- imental results demonstrate that PRETTY can ef- fectively align the foundation model to match SFT model\u2019s performance without training, by merely adding two prior tokens in the decoding.",
16
+ "main_content": "2.1 Preliminaries Pre-training The pre-training (PT) of LLMs is primarily conducted through language modeling tasks on large-scale unlabeled data (Touvron et al., 2023; Achiam et al., 2023). During this phase, given a sequence XPT of length N and a context window k, the optimization objective is maximizing the joint probability PLM as: PLM(XPT) = N \ufffd i=1 mod \ufffd i=1 P(xi|xi\u2212k:i\u22121) (1) \ufffd which encourages the model to generate text that naturally follows from the preceding context. However, this \u201ctext completion\u201d behavior can become a bottleneck when models are prompted to switch languages or follow specific instructions of crosslingual generation. It is frequently observed that when prompted with English input and instructed to produce text in a different language, as illustrated in the upper example of Figure 1, the foundation model often continues to decode in English. SFT SFT leverages labeled data pair (Xins., Y ) to empower models with the ability to follow instructions. This stage aims to maximize the probability of the expected answer Y conditioned on the 2 input text Xins., where Xins. consists of the task instruction and task input. PSFT(Y |Xins.) = T Y j=1 P(yj|y1:j\u22121, Xins.) (2) SFT is crucial for aligning foundation models to perform task-specific instructions, effectively transforming a general-purpose LLM into an instructionfollowing assistant. However, data quality, training costs, and the imbalance of multilingual data hinder the democratization of assistant LLM. As mentioned before, SFT may be harmful to pre-training knowledge. Thus, it is meaningful and important to understand the underlying mechanism of SFTbased alignment and propose a more efficient alignment method. 2.2 Beneath the SFT-based Alignment Prior Knowledge Hypothesis It is worth noting that pre-training corpora also contain sequences that naturally express task-specific information, which imparts certain capabilities to the foundation LLMs. For example, the presence of semantically equivalent expressions in the pre-training text may enable LLM acquire machine translation ability during pre-training stage (Radford et al., 2019). Despite its extensive prior knowledge, the foundation LLM still struggles with complex crosslingual generation tasks. Beyond existing studies, we provide more concrete insights into this issue by prompting foundation LLMs with various instructions (Bawden and Yvon, 2023). Notably, only 31.8% of these prompts successfully elicit translation capability from the foundation LLMs1. This deficiency may stem from two main factors: First, the proportion of text with the aforementioned characteristics in the pre-training corpus XPT is still relatively small, and most of it is far from resembling human instruction text Xins.. Consequently, the model is more likely to predict tokens suitable for completing formal texts than those required for task-specific instructions. As a result, the foundation LLM often fails to produce tokens y \u2208Y1:T in the intended target language. Secondly, the predominance of English in the pretraining data skews the token generation probabilities of foundation LLM. Given a cross-lingual context, the model favors predicting tokens in English, while the token probabilities for other languages remain comparatively low. For example, English data 1For detailed information, please refer to Appendix B.3. 1 3 10 20 30 40 0 20 40 60 80 100 Top-K Sampling Tokens Agreement@K (%) Foundation LLM + Prior Token Figure 2: The agreement between the SFT model and the foundation model in terms of the selection of the next token. Once the Prior Token is provided, the token chosen by the SFT model is also can be found within the Top-K candidate words of foundation model. comprises up to 90% of the Llama2 pre-training data (Touvron et al., 2023), which may lead models to generate text with an English-centric bias. The above hypothesis might be reasonable when we revisit Equation (1) and Equation (2). The probability PLM(XPT) of the next token prediction for the foundation model is conditioned on the distribution of the pre-training text XPT. SFT narrows the probability space for token selection, adjusting the parameters to better align with the distribution, i.e., the probability PSFT(y|Xins.) is conditioned on the distribution of the instruction text Xins.. Experimental Settings To validate the aforementioned hypothesis, we selected the representative cross-lingual task of machine translation as our analytical testbed. The main research method involved quantifying the differences and similarities in the decision space and token selection behavior between the foundation LLM and the SFT-aligned LLM. For the model selection, we chose the foundation Llama2 7B model and conducted supervised fine-tuning on it using the Alpaca dataset2(Taori et al., 2023). The optimization was carried out using a cosine learning rate scheduler, with the maximum learning rate set to 2e\u22125 and a warmup ratio of 0.03. Training was performed on two NvidiaH800 GPUs using LoRA parameter-efficient finetuning (Hu et al., 2022) technique, with a cumulative batch size of 64. Other hyper-parameters follow those of the original Alpaca settings. 2https://github.com/tatsu-lab/stanford_alpaca 3 + Prior Token Figure 3: The probability distribution of tokens selected by various models. Incorporation of a Prior Token causes the decision probabilities of both models to converge across all data instances. 0 3 6 9 Comparison Group KL Divergence Foundation LLM vs. SFT LLM + Prior Token vs. SFT LLM 0 0.2 0.4 0.6 JS Divergence 0 5 10 15 Cross Entropy Figure 4: The divergence in probability distributions across the entire vocabulary during decoding. Prior Token significantly reduces the discrepancy between the foundation model and the SFT model. A Prior Token Elicits Silent Majority Inspired by the categorization of token shifts by Lin et al. (2023), we propose to quantify the agreement of token selection between foundation LLM \u03b8PT and SFT LLM \u03b8SFT. Given the same prefix input \u02c6 X, we aim to measure whether the next token selected by the SFT LLM, ySFT, is among the top-K tokens, yPT, with the highest probabilities in the decision space of the foundation LLM, which can be formally expressed as follows: ySFT = argmax y\u2208V P(y| \u02c6 X; \u03b8SFT) yPT = {y| arg topK y\u2208V P(y| \u02c6 X; \u03b8PT)} AggrementK = 1 L L X l=1 1ySFT\u2208yPT (3) where V is the vocabulary shared by two models, and L is the length of the dataset. We compare the agreement of the token selection made by the models under the same prefix text \u02c6 X in two different experimental setups. The first setup uses the instruction text as the prefix, i.e., \u02c6 X = Xins.; the second takes the first token decoded by the SFT model as a prior token, appending it to the original instruction prefix, i.e., \u02c6 X = h Xins., y(1) SFT i . For the SFT model, the second setup is equivalent to continuing its own decoding behavior, whereas for the foundation model, it becomes decoding with the addition of a prior token. Figure 2 illustrates the agreement between the foundation model\u2019s predictions and those of the SFT model regarding the selection of the next token, given an identical text prefix. Across the entire translation data, it is observed that after incorporating merely one prior token, the foundation model exhibits a high degree of agreement with the SFT model in terms of token selection. This demonstrates that the alignment effect of SFT in crosslingual generation tasks is also somewhat superficial. Even in instances where the token with the highest probability differs between the two models, 90.8% of the tokens chosen by the SFT model are present within the \u201csilent majority\u201d in the decision space of the foundation model, specifically, among the top 20 most probable token choices. Lens of Distribution Instead of focusing on the coverage of token selection outcomes, we also observe the decision dynamics and similarities from the perspective of the overall probability distribution, with the data settings consistent with the previous setup. First, as shown in Figure 3, after adding a prior token, the probability of the next tokens chosen by both models have closely aligned distributions. The reason that the foundation model 4 exhibits a high probability given the instruction text as a prefix lies in a preference for choosing to continue the instruction text rather than completing the cross-linguistic semantic transformation. Additionally, we quantify the distribution disparities between the two models through the probability distribution of the vocabulary. The disparity metrics used include Kullback-Leibler (KL) divergence, Jensen-Shannon (JS) divergence, and cross-entropy (Kullback, 1997). As depicted in Figure 4, the disparity of decision space of the foundation model significantly decreases after adding the prior token, aligning more closely with the SFT model. These findings indicate that such prior tokens serve a dual function: they not only steer the foundation model towards generating tokens pertinent to cross-lingual generation but also modulate the decision space to align more closely with the taskspecific distribution. 3 Pretty: Prefix TexT as a Yarn 3.1 Motivation The observations discussed earlier confirm that SFT effectively narrows the decision space of the foundation model during text generation that is conditioned on instruction text. The disparity in token selection between the foundation LLM and the SFT LLM, however, might not be reduced by a trainingbased transfer methodology. By appending a prior token into the instruction text, the choices of the next token between the two models tend to become largely consistent, and in the vast majority of cases, the tokens chosen by SFT model are also found within the high-probability candidate words of foundation model. These phenomena show that the alignment elicited by SFT is somewhat superficial in cross-lingual generation tasks and motivate us to propose a training-free alignment method by leveraging these prior tokens. 3.2 Formulation Upon revisiting Equation (1) and Equation (2), the goal of proposing a training-free approach is to enable the conditional decoding probability of foundation model to approximate those of SFT model. Therefore, ideally, the selected prior tokens Xpri. = {xpri.} may satisfy the following criteria: P(yPT| [Xins., Xpri.] ; \u03b8PT) \u2248P(ySFT|Xins.; \u03b8SFT) (4) where yPT and ySFT represent the outputs of the foundation and the SFT models, respectively. It is important to note that a single prior token may not serve as an optimal solution due to its non-derivable characteristic. Hence, we extend our methodological approach to include appending multiple prior tokens, grouping them to form a prefix text. 3.3 Construction of Prior Tokens To ensure that the proposed method is applicable to a wide array of languages, we propose three construction strategies based on the availability of language resources, aiming to guarantee the universality of our approach. SFT Prior represents an ideal scenario where the first few tokens generated by a SFT model are used as priors. This method is theoretically rational when the SFT model is derived from the same foundation model because it directly approximates Equation (4) by sampling xpri. \u223c{ySFT}. In practical applications, this might be suitable for high-resource languages due to the imbalanced language capabilities of other languages. Additionally, SFT could potentially degrade the knowledge and abilities that the foundation model has already acquired. In such cases, using prior tokens from the SFT model can contribute to generating better results. This situation will be discussed further in the subsequent section. Refined Prior is more readily accessible for most languages and tasks. We can utilize the output tokens generated by a smaller model trained for specific downstream tasks and use them as prior tokens to achieve weak-to-strong generalization (Burns et al., 2023). Pseudo Prior For extremely low-resource language pairs, where there is no labeled data for downstream tasks, both SFT and Refined priors are difficult to obtain. For cross-lingual tasks, we can create pseudo labels in target language as prior tokens. For instance, in machine translation tasks, we might use bilingual dictionaries to acquire pseudo prior tokens. However, the quality and accuracy of pseudo labels remain uncertain, and the extent of their impact on the generative performance of the foundation LLM is not yet clear. We will explore this problem further in the context of experimental results discussed later in the paper. 5 4 Experiments We examine the effectiveness of our proposed training-free alignment method on two distinct tasks: machine translation, cross-lingual summarization and non-English POS tagging. Machine translation serves as a prototypical cross-lingual generation task, entailing the transformation of a sequence from a source language to a target language (Bahdanau et al., 2015; Vaswani et al., 2017; Zhan et al., 2023). As for cross-lingual summarization, it requires the model to generate a summary of an article in a different language (Bhattacharjee et al., 2023; Chen et al., 2023). Although POS tagging (Manning, 2011; Nivre et al., 2017; Chiche and Yitagesu, 2022) primarily assesses the model\u2019s ability to understand monolingual text, we include it as multilingual experiments to show the universality of our methods. 4.1 Experimental Settings Data We use Flores-101 (Goyal et al., 2022), CrossSum (Bhattacharjee et al., 2023) as benchmarks for machine translation and cross-lingual summarization tasks, respectively. For POS tagging tasks, we choose the POS test split from the XGLUE benchmark (Liang et al., 2020), which is derived from the Universal Dependencies Treebank v2.5. To investigate the performance across various resource languages, we carefully selected eight languages based on the pre-training data proportions disclosed in the Llama2 technical report (Touvron et al., 2023). These languages are French, German, Chinese, Russian, Ukrainian, Portuguese, Hindi and Arabic. Among these, the first four languages account for more than 0.1% of the pretraining data of Llama2, while Ukrainian and Portuguese fall below 0.1%, Hindi and Arabic is below 0.05%. For the Llama2 model, we can categorize these three types of languages into high-resource languages, low-resource languages, and extremely low-resource languages, respectively. Models and Baselines The settings of Llama2 foundation model and the SFT model are consistent with those described in Section 2.1. To further demonstrate the generality of our proposed method, we incorporated the Mistral-7B LLM family (Jiang et al., 2023) into our experiments, covering both out-of-the-box SFT and foundation models. In the machine translation task, the Llama2 foundation model does not tend to generate translations when given explicit translation instructions. While this is a normal phenomenon according to our previous discussion, to ensure a fair comparison, we also searched for a better prompts for the foundation model. This prompting approach is referred to as \u201cLlama2-7BPROMPTING\u201d in subsequent sections. For POS tagging, we experimented with various instructions and selected one that consistently prompts both the foundation model and the SFT model to reliably generate classification results in text. Although we report the zero-shot performance for the aforementioned tasks, we found that even out-of-the-box SFT models cannot produce stable output for cross-lingual summarization task. Hence, we prepend a constant demonstration before the input to also assess the effectiveness of our proposed method under the in-context learning paradigm (Dong et al., 2023c). Sources of Prior Token The sources of crafting prior tokens include: \u2022 SFT Prior: We took the first k tokens of output produced by SFT model as the prior tokens. For multiple SFT models, we select the model that demonstrates better performance. \u2022 Refined Prior: We use downstream task models with smaller parameter sizes as the source of refined priors. For the different tasks, we utilized the distilled 600M variant of NLLB-200 translation model3(Costajuss\u00e0 et al., 2022), mT5 cross-lingual summarization model4 and the Unicoder-NLU model5(Huang et al., 2019), respectively. \u2022 Pseudo Prior: The pseudo prior is applied to two cross-lingual tasks since it can utilize cross-lingual language resources. We create pseudo prior tokens for machine translation task by referencing dictionary 6 entries. For cross-lingual summarization, we initially extract keywords from each passage using KeyBERT (Grootendorst, 2020) and then perform word-by-word translation. However, not all initial sentence tokens will be covered by the dictionary. To handle such instances, a backoff strategy is implemented, where the target language equivalent of the first available dictionary token is used as the prior token. 3https://huggingface.co/facebook/ nllb-200-distilled-600M 4https://hf.co/csebuetnlp/mT5_m2m_crossSum 5https://github.com/microsoft/Unicoder/ 6Please refer to Appendix B.4 for dictionary information. 6 English-Centric Models En-Zh En-Uk Zh-En Uk-En Avg. %SFT. spBL. CoM. spBL. CoM. spBL. CoM. spBL. CoM. spBL. CoM. All Llama2-7B Llama2-7B-Alpaca 13.6 80.9 24.0 83.3 23.5 85.1 34.4 85.5 23.9 83.7 Llama2-7B-Chat 7.8 67.2 18.1 71.0 18.5 81.3 30.4 83.3 18.7 75.7 Llama2-7BPROMPTING 5.9 64.1 11.0 60.9 24.3 84.8 34.2 85.0 18.9 73.7 80.4 Llama2-7B 7.7 72.0 0.2 32.4 12.0 74.4 9.3 59.2 7.3 59.5 52.5 +PRETTY (SFT Prior) 13.3 80.0 23.0 83.1 23.7 84.9 33.6 85.3 23.4 83.3 98.8 +PRETTY (Pseudo Prior) 12.0 75.7 18.1 74.1 16.9 80.3 27.2 78.3 18.6 77.1 85.4 +PRETTY (Refined Prior) 14.2 80.5 24.1 83.8 24.0 84.9 34.6 85.6 24.2 83.7 100.9 Mistral-7B Mistral-7B-Instruct 6.6 64.6 20.3 78.2 20.5 83.2 32.9 84.8 20.1 77.7 Mistral-7B 1.2 42.6 0.3 30.8 19.9 77.1 21.5 69.4 10.7 55.0 46.2 +PRETTY (SFT Prior) 13.8 78.1 23.1 79.2 20.0 82.3 32.1 83.3 22.3 80.7 117.2 +PRETTY (Pseudo Prior) 13.3 75.8 20.1 75.7 16.5 79.7 24.9 77.3 18.7 77.1 107.2 +PRETTY (Refined Prior) 15.9 81.3 24.9 82.9 21.5 83.0 32.3 83.9 23.7 82.7 124.6 Non-English-Centric Models De-Fr Fr-De Zh-Pt Pt-Zh Avg. %SFT. spBL. CoM. spBL. CoM. spBL. CoM. spBL. CoM. spBL. CoM. All Llama2-7B Llama2-7B-Alpaca 29.8 81.5 24.1 80.9 16.6 81.4 11.3 78.6 20.5 80.6 Llama2-7B-Chat 6.2 68.0 7.3 64.5 3.0 67.8 6.2 66.6 5.7 66.7 Llama2-7BPROMPTING 22.2 77.4 15.4 73.3 14.4 78.9 4.4 64.1 14.1 73.4 78.5 Llama2-7B 1.0 51.1 3.2 54.0 0.9 61.4 7.3 70.0 3.1 59.1 47.6 +PRETTY (SFT Prior) 28.2 80.6 23.0 80.4 16.3 81.1 10.5 77.4 19.5 79.9 97.2 +PRETTY (Pseudo Prior) 18.3 68.9 17.3 72.2 11.6 70.4 5.0 65.6 13.1 69.3 73.9 +PRETTY (Refined Prior) 29.1 81.4 22.9 80.4 17.1 81.1 12.2 79.4 20.3 80.6 100.4 Mistral-7B Mistral-7B-Instruct 22.1 76.1 20.4 75.9 10.5 74.8 3.3 60.2 14.1 71.8 Mistral-7B 1.2 46.1 1.6 40.6 1.0 52.8 0.4 43.6 1.1 45.8 36.5 +PRETTY (SFT Prior) 20.1 73.3 20.7 75.1 11.0 74.7 6.8 67.3 14.7 72.6 113.8 +PRETTY (Pseudo Prior) 18.1 66.4 17.3 70.4 5.9 65.6 3.7 59.4 11.3 65.5 87.7 +PRETTY (Refined Prior) 28.3 78.8 22.3 78.5 14.2 78.6 13.6 80.6 19.6 79.1 153.8 Table 1: Translation performance of different models on Flores-101 subsets. Bold values indicate that the best performance among foundation models. The overall best results are underlined. \u201c%SFT.\u201d denotes the relative performance compared to the best SFT model of each family. For two cross-lingual task, the first k = 2 tokens are chosen as the prior tokens. This helps to avoid inadequate guidance from single non-informative tokens like punctuation or numbers. In the case of the pseudo prior, due to the back-off strategy, only one token is used for fair comparison. For POS tagging task, the strategy is more straightforward with only the first k = 1 label considered as the prior token. 4.2 Evaluation To ensure the integrity of the output data from all models, we standardized the output by cleaning it in accordance with the specific output style of each model. Subsequently, we conducted a manual inspection to guarantee that only the required labels were retained. Task-specific Metrics We use two metrics to evaluate the performance of translation quality: spBLEU7 (Goyal et al., 2022) and COMET8(Rei et al., 2020). We employed the ROUGE (Lin, 2004) and LaSE (Bhattacharjee et al., 2023) metrics for the evaluation of summarization quality. For the POS tagging task, we report both the precision score and F1 score. Relative Performance We further compute the ratio of the performance scores of the foundation model to the scores of the SFT model with the application of different strategies. This ratio serves 7https://github.com/mjpost/sacrebleu/ 8https://github.com/Unbabel/COMET 7 Models En-Zh En-Hi Uk-Pt Ar-Ru Avg. %SFT. R2 RL LS R2 RL LS R2 RL LS R2 RL LS R2 RL LS All Llama2-7B w/ Constant 1-Shot Demonstration Llama2-7B-Alpaca 7.0 12.4 11.9 1.7 10.7 17.3 1.5 6.1 5.8 0.1 0.5 1.3 2.6 7.4 9.1 Llama2-7B-Chat 6.3 11.6 8.7 1.5 11.7 27.1 2.5 8.3 7.1 0.0 0.3 0.2 2.6 8.0 10.7 Llama2-7B 9.3 16.6 29.2 1.6 10.2 15.3 0.8 4.0 1.9 0.6 4.1 15.5 3.1 7.6 12.1 262.4 +PRETTY (SFT Prior) 7.4 13.9 25.9 1.5 9.7 12.9 1.9 6.7 9.8 0.1 0.4 0.8 2.7 6.7 9.8 106.3 +PRETTY (Pseudo Prior) 8.0 14.5 29.1 1.4 9.9 14.5 2.5 9.1 13.6 1.2 5.9 23.5 3.3 8.5 15.4 387.5 +PRETTY (Refined Prior) 11.2 19.0 32.6 1.6 10.8 15.9 3.4 10.5 11.3 1.5 7.9 30.1 4.4 10.5 17.5 490.6 Mistral-7B w/ Constant 1-Shot Demonstration Mistral-7B-Instruct 5.9 12.2 17.2 1.0 10.3 23.4 1.5 6.2 17.7 0.4 2.6 12.8 2.2 7.8 17.8 Mistral-7B 12.3 20.9 44.5 1.6 10.6 17.6 4.8 12.9 27.7 1.8 6.5 23.3 5.1 11.2 21.6 206.1 +PRETTY (SFT Prior) 9.7 17.6 40.7 1.4 10.0 17.0 2.3 7.9 17.5 0.2 1.1 3.2 3.4 8.0 15.0 114.5 +PRETTY (Pseudo Prior) 9.9 17.5 41.0 1.4 9.9 17.4 3.1 11.6 35.1 1.7 7.9 32.9 4.0 10.2 23.5 195.8 +PRETTY (Refined Prior) 15.0 24.1 49.6 1.8 11.3 19.7 5.5 16.5 46.9 2.6 10.9 42.0 6.2 13.8 29.7 275.6 Table 2: Summarization performance of different models on CrossSum subsets. \u201cR2/L\u201d and \u201cLS\u201d refer to the ROUGE and LaSE score, respectively. Bold values indicate that the best performance among foundation models. The overall best results are underlined. \u201c%SFT.\u201d denotes the relative performance compared to the best SFT model. Models Fr Zh Pt Ru Ar Avg. %SFT. Prec. F1 Prec. F1 Prec. F1 Prec. F1 Prec. F1 Prec. All Llama2-7B-Alpaca 48.2 42.8 38.6 36.3 40.7 35.9 42.3 36.7 34.4 30.8 38.7 Llama2-7B 45.0 37.9 39.8 36.2 39.8 33.2 42.5 33.8 36.5 32.1 37.7 97.4 +PRETTY (SFT Prior) 54.8 50.0 38.0 33.5 49.1 45.3 49.7 44.1 35.1 31.1 43.1 111 +PRETTY (Refined Prior) 59.3 54.8 43.0 38.8 54.5 50.6 55.3 49.2 44.0 39.6 48.9 126 Table 3: POS tagging performance of different Llama2 models on XGLUE subsets. Bold values indicate that the best performance among foundation models. The overall best results are underlined. \u201c%SFT.\u201d denotes the relative performance compared to Alpaca model. as a metric for assessing the extent to which the foundation model approximates the SFT model\u2019s performance when different strategies are applied. 4.3 Main Results Machine Translation As shown in Table 1, for the machine translation task, we use up to two prior tokens as decoding guidance, allowing the base model to achieve performance comparable to that of a model after SFT. Moreover, in some language pairs, the translation performance outperforms SFT model when guided by Refined Prior tokens from a smaller model. For Llama2 model family, the prior tokens provided by the SFT model, although slightly less effective, still allow the foundation model to achieve 98% of the performance of SFT model. On the other hand, the use of pseudo labels derived from a dictionary exhibits the least effectiveness, yet this strategy still surpasses the results achieved through costly prompt engineering. Cross-lingual Summarization The results presented in Table 2 indicate that the foundation model exhibited superior performance compared to the SFT model in this in-context learning scenario. For prior-guided decoding, the performance of the foundation model was degraded when using prefix tokens from the SFT model, and the small performance gap in this setting suggests that the alignment achieved by the SFT model is relatively \u201csuperficial\u201d. Notably, the performance of Llama2 foundation model significantly improved when other priors were provided, even when using translated keywords as pseudo labels. Non-English POS tagging The performance results of POS tagging task are presented in Table 3. These results align with the insights gleaned from the machine translation task, specifically regarding the strategy of prior token construction. Notably, for POS tagging task, the performance of SFT model on most language pairs falls short of the foundation model, suggesting that SFT detrimentally affect the knowledge learned at the pretraining stage. Encouragingly, when the foundation model empowered by auxiliary prior token surpasses the performance of SFT model as well as the prompting results of itself, highlighting the poten8 tial of our proposed method in mitigating the catastrophic forgetting problem associated with SFT. 5 Analysis and Discussion 5.1 Quality of Prior Tokens To investigate the quality of prior tokens from different sources and how they impact the final performance, we further analyze why the prior tokens given by the SFT model are less effective than those from external auxiliary models in POS tagging task. Unlike the machine translation task, the positional result for the POS task is definite, so we are able to verify whether it corresponds to a ground truth label. The results in Table 4 confirm two points. First, even if the prior tokens provided by the SFT model are of low quality, the foundation model does not suffer from severe error propagation. Secondly, the final performance of proposed method is still associated with the quality of prior tokens. This suggests that prior tokens closely aligned with the ground truth can steer the foundation model towards a more accurate decision trajectory, thereby yielding superior performance. Fr Zh Pt Ru Ar SFT Prior 18.3 18.3 3.74 16.3 12.1 Refined Prior 88.9 88.9 88.54 87.7 79.6 Table 4: Accuracy of prior tokens used in POS tagging task. SFT prior tokens are of inferior quality. 5.2 Choice of Prior Tokens Based on the findings from the previous section, if incorrect labels used as prior tokens can still elicit the ability of foundation model, then could random prior tokens in the target language trigger crosslingual generative capabilities? To investigate this, we attempted to use random tokens of different parts of speech as the prior tokens in the EnglishChinese machine translation task. For instance, \u201cModal Prior\u201d refers to the use of randomly picked modal verb in Chinese as the initial token. The results shown in Table 5 indicate that the model could not be aligned to a better decision trajectory by these random prior tokens, whether they were function words or tokens with actual meaning. This supports the validity of our proposed methods for constructing prior tokens and also supplements previous findings. From this, we can summarize some rules about prior tokens: they can be of low quality but should not be completely unrelated to the target sequence. spBLEU COMET BLEU Llama2-7B 7.7 72.01 16.1 + Modal Prior 8.0 68.29 16.0 + Adverb Prior 6.4 63.72 13.1 + Random Prior 6.2 57.11 11.5 Table 5: Comparison of translation performance using three types of random prior tokens. 5.3 Number of Prior Tokens Figure 5 depicts the relationship between the number of preceding tokens provided and the resulting changes in translation performance. It becomes apparent that performance generally improves with the addition of more tokens. Additionally, we note that introducing two prior tokens appears to be a performance inflection point, which may be due to instances where the initial token is a punctuation mark or a number. 1 2 3 4 5 85 90 100 110 Number of Prior Tokens %SFT. En-Zh De-Fr Pt-Zh Zh-Pt Figure 5: Impact of incrementally adding refined prior tokens on performance across Flores-101 subsets. 6 Conclusions In this paper, we investigate and analyze the decision-making discrepancies between the foundation model and the SFT model within crosslingual generation contexts. Drawing from our analysis, we introduce a novel cross-lingual alignment method that requires no additional training and is resource-efficient. The proposed method aligns the foundation LLM to perform comparably with the SFT model solely by utilizing prefix text as priors during generation. In the future, we aim to broaden our research to encompass additional alignment scenarios, such as those involving reinforcement learning from human feedback. 9 Limitations The primary limitations of our study stem from the scope of model validation. Our research is limited to 7B models. Future endeavors should aim to extend the validation to a broader scope of models and incorporate various parameter scales to support the universality of our findings. Furthermore, the availability of language resources is still a practical problem, particularly for low-resource languages where access to Prior Token and Refined Token sources is limited. Despite these challenges, our experimental results indicate that Pseudo Prior tokens still exhibits promising potential. It is important to note, however, that the development of pseudo tags may require a dedicated investigation into the linguistic rules specific to each downstream task. This process is inherently time-intensive and resourcedemanding. Acknowledgements This work was supported in part by the Science and Technology Development Fund, Macau SAR (Grant Nos. FDCT/0070/2022/AMJ, FDCT/060/2022/AFJ), Ministry of Science and Technology of China (Grant No. 2022YFE0204900), National Natural Science Foundation of China (Grant No. 62261160648), the Multi-year Research Grant from the University of Macau (Grant No. MYRG-GRG2023-00006FST-UMDF), and Tencent AI Lab Rhino-Bird Gift Fund (Grant No. EF2023-00151-FST). This work was performed in part at SICC which is supported by SKL-IOTSC, and HPCC supported by ICTO of the University of Macau."
17
+ }
intro_8K/test_introduction_short_2404.16767v1.json ADDED
@@ -0,0 +1,18 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16767v1",
3
+ "title": "REBEL: Reinforcement Learning via Regressing Relative Rewards",
4
+ "abstract": "While originally developed for continuous control problems, Proximal Policy\nOptimization (PPO) has emerged as the work-horse of a variety of reinforcement\nlearning (RL) applications including the fine-tuning of generative models.\nUnfortunately, PPO requires multiple heuristics to enable stable convergence\n(e.g. value networks, clipping) and is notorious for its sensitivity to the\nprecise implementation of these components. In response, we take a step back\nand ask what a minimalist RL algorithm for the era of generative models would\nlook like. We propose REBEL, an algorithm that cleanly reduces the problem of\npolicy optimization to regressing the relative rewards via a direct policy\nparameterization between two completions to a prompt, enabling strikingly\nlightweight implementation. In theory, we prove that fundamental RL algorithms\nlike Natural Policy Gradient can be seen as variants of REBEL, which allows us\nto match the strongest known theoretical guarantees in terms of convergence and\nsample complexity in the RL literature. REBEL can also cleanly incorporate\noffline data and handle the intransitive preferences we frequently see in\npractice. Empirically, we find that REBEL provides a unified approach to\nlanguage modeling and image generation with stronger or similar performance as\nPPO and DPO, all while being simpler to implement and more computationally\ntractable than PPO.",
5
+ "authors": "Zhaolin Gao, Jonathan D. Chang, Wenhao Zhan, Owen Oertell, Gokul Swamy, Kiant\u00e9 Brantley, Thorsten Joachims, J. Andrew Bagnell, Jason D. Lee, Wen Sun",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "cs.LG",
9
+ "cats": [
10
+ "cs.LG",
11
+ "cs.CL",
12
+ "cs.CV"
13
+ ],
14
+ "label": "Original Paper",
15
+ "paper_cat": "Offline AND Reinforcement AND Learning",
16
+ "gt": "The generality of the reinforcement learning (RL) paradigm is striking: from continuous control problems (Kalashnikov et al., 2018) to, recently, the fine-tuning of generative models (Stiennon et al., 2022; Ouyang et al., 2022), RL has enabled concrete progress across a variety of decision-making tasks. Specifically, when it comes to fine-tuning generative models, Proximal Policy Optimization (PPO, Schulman et al. (2017)) has emerged as the de-facto RL algorithm of choice, from language models (LLMs) (Ziegler et al., 2020; Stiennon et al., 2022; Ouyang et al., 2022; Touvron et al., 2023) to image generative models (Black et al., 2023; Fan et al., 2024; Oertell et al., 2024). If we take a step back however, it is odd that we are using an algorithm designed for optimizing two-layer networks for continuous control tasks from scratch for fine-tuning the billions of parameters \u2217{zg292, jdc396, ojo2, kdb82, ws455}@cornell.edu, [email protected] \u2020{wenhao.zhan, jasonlee}@princeton.edu \u2021{gswamy,bagnell2}@andrew.cmu.edu 1 arXiv:2404.16767v1 [cs.LG] 25 Apr 2024 Image Generation Language Modeling ( ) RLHF reinforcement learning regression REBEL ( ) ( ) x y x y Figure 1: We present REBEL: a simple and scalable RL algorithm that performs policy optimization via iteratively regressing the difference in rewards directly in terms of the policy. This allows us to eliminate much of the complexity (e.g. value functions, clipping) of algorithms like PPO (Schulman et al., 2017). We apply REBEL to problems in both image generation and language modeling and find that despite its conceptual and implementation-level simplicity, REBEL is able to match or sometimes outperform the performance of PPO while out-performing purely offline techniques like DPO (Rafailov et al., 2023). of modern-day generative models. In the continuous control setting, the randomly initialized neural networks and the possible stochasticity in the dynamics necessitate variance reduction through a learned value function as a baseline (Schulman et al., 2015b), while clipping updates is important to limit distribution shift from iteration to iteration (Kakade and Langford, 2002). This means that when applied to generative model fine-tuning, we need to store four models in memory simultaneously (the policy, the reference policy, the critic, and the reward model), each with billions of parameters. Furthermore, we often add a KL regularization to the base model for fine-tuning, making explicit clipping unnecessary nor advisable, as pointed out by Ahmadian et al. (2024). Even outside of the generative modeling context, PPO is notorious for the wide range of performances measured, with differences being attributed to seemingly inconsequential implementation details (Henderson et al., 2019; Engstrom et al., 2020). This begs the question: Are there simpler algorithms that scale to modern RL applications? Our answer is REBEL: an algorithm that reduces the problem of reinforcement learning to solving a sequence of squared loss regression problems on iteratively collected datasets. The regression problems directly use policies to predict the difference in rewards. This allows us to eliminate the complexity of value functions, avoid heuristics like clipping, and scale easily to problems in both language modeling and image generation. Our key insight is that regressing relative rewards via policies directly on a sequence of iteratively collected datasets implicitly enables policy improvement. Rather than being a heuristic, REBEL comes with strong guarantees in theory and can be seen as a strict generalization of classical techniques (e.g., NPG) in reinforcement learning. Furthermore, REBEL cleanly incorporates offline datasets when available, can be extended to robustly handle intransitive preferences (Swamy et al., 2024), and empirically out-performs techniques like PPO 2 and DPO (Rafailov et al., 2023) in language generation and has a faster convergence with a similar asymptotic performance in image generation. More explicitly, our key contributions are four-fold: 1. We propose REBEL, a simple and scalable RL algorithm. REBEL finds a near-optimal policy by solving a sequence of least square regression problems on iteratively collected datasets. Each regression problem involves using a policy-parameterized regressor to predict the difference in rewards across trajectories sampled from the dataset. This dataset can be generated in a purely on-policy fashion or can incorporate offline data, enabling hybrid training. Furthermore, REBEL can be easily extended to handle intransitive preferences. 2. We connect REBEL to classical RL methods. We show that REBEL is a generalization of the foundational Natural Policy Gradient (NPG, Kakade (2001)) algorithm \u2013 applying the Gauss-Newton algorithm to the sequence of regression problems that REBEL solves recovers NPG. However, by instead applying simpler first-order optimization techniques, we are able to avoid computing the Fisher Information Matrix and enjoy a variance reduction effect. Thus, REBEL can be understood as a generalization of NPG while being much more scalable. 3. We analyze the convergence properties of REBEL. We prove via a direct reduction-based analysis that as long as we can solve the regression problem well at each iteration, we will be able to compete with any policy covered by the iteratively collected datasets (matching the strongest known results in the agnostic RL). These problems involve predicting the difference in rewards between trajectories in our dataset. We expect this problem to be well-solved in practice because our class of regressors is isomorphic to a class of policies that is highly expressive for the applications we consider (i.e. flexible Transformer models). 4. We evaluate REBEL both on language modeling and image generation tasks. We find that the on-policy version of REBEL outperforms PPO and DPO on language modeling and has similar performance for image generation tasks. On the TL;DR summarization task, we show REBEL scales well by finetuning a 6.9B parameter model. For text-guided image generation, REBEL optimizes a consistency model that converges to a similar performance as PPO. In short, REBEL is a simple and scalable algorithm that enjoys strong theoretical guarantees and empirical performance. We believe it is a suitable answer to the question raised above.",
17
+ "main_content": "We first outline the notation used throughout the paper. 2.1 Notation We consider the Contextual Bandit formulation (Langford and Zhang, 2007) of RL which has been used to formalize the generation process of models like LLMs (Rafailov et al., 2023; Ramamurthy et al., 2022; Chang et al., 2023) and Diffusion Models (Black et al., 2023; Fan et al., 2024; Oertell et al., 2024) due to the determinism of the transitions. More explicitly, in the deterministic transition setting, explicit states are not required as they can be equivalently represented by a sequence of 3 actions. Furthermore, the entire sequence of actions can be considered as a single \u201carm\u201d in a bandit problem with an exponentially large action space. We denote by (\ud835\udc65, \ud835\udc66) a prompt/response pair with \ud835\udc65\u2208X as a prompt and \ud835\udc66\u2208Y as a response (e.g., a sequence of tokens, or in general a sequence of actions). We assume access to a reward function \ud835\udc5f(\ud835\udc65, \ud835\udc66) from which we can query for reward signals (the exact form of \ud835\udc5fdoes not need to be known). Querying \ud835\udc5fat (\ud835\udc65, \ud835\udc66) will return a scalar \ud835\udc5f(\ud835\udc65, \ud835\udc66) measuring the quality of the response. Such a reward function could be a pre-defined metric (e.g., Rouge score against human responses) or it could be learned from an offline human demonstration or preference data (e.g., the RLHF paradigm (Christiano et al., 2017; Ziegler et al., 2020)), as explored in our experiments. Denote by \ud835\udf0b\u2208X \u21a6\u2192\u0394(\ud835\udc4c), a policy (e.g. LLM) that maps from a prompt \ud835\udc65to a distribution over the response space Y. We use \ud835\udf0cto denote the distribution over prompts (i.e. initial states / contexts) \ud835\udc65. Throughout the paper, we use \ud835\udf0b\ud835\udf03(\ud835\udc66|\ud835\udc65) to denote a parameterized policy with parameter \ud835\udf03(e.g., a neural network policy). At times we interchangeably use \ud835\udf0b\ud835\udc61and \ud835\udf0b\ud835\udf03\ud835\udc61when it is clear from the context. We emphasize that while we focus on the bandit formulation for notation simplicity, the algorithms proposed here can be applied to any deterministic MDP where \ud835\udc65is the initial state and the trajectory \ud835\udc66consists of the sequence of actions. At each iteration of all algorithms, our goal will be to solve the following KL-constrained RL problem: \ud835\udf0b\ud835\udc61+1 = argmax \ud835\udf0b E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b(\u00b7|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u22121 \ud835\udf02E\ud835\udc65KL (\ud835\udf0b(\u00b7|\ud835\udc65)||\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65)) . (1) Intuitively, this can be thought of asking for the optimizer to fine-tune the policy \ud835\udf0b\ud835\udc61+1 according to \ud835\udc5f while staying close to some baseline policy \ud835\udf0b\ud835\udc61. 2.2 Deriving REBEL: REgression to RElative REward Based RL From Ziebart et al. (2008), we know that there exists a closed-form solution to the above minimum relative entropy problem (Eq. 1, Gr\u00fcnwald and Dawid (2004)): \u2200\ud835\udc65, \ud835\udc66: \ud835\udf0b\ud835\udc61+1(\ud835\udc66|\ud835\udc65) = \ud835\udf0b\ud835\udc61(\ud835\udc66|\ud835\udc65) exp(\ud835\udf02\ud835\udc5f(\ud835\udc65, \ud835\udc66)) \ud835\udc4d(\ud835\udc65) ; \ud835\udc4d(\ud835\udc65) = \u2211\ufe01 \ud835\udc66 \ud835\udf0b\ud835\udc61(\ud835\udc66|\ud835\udc65) exp(\ud835\udf02\ud835\udc5f(\ud835\udc65, \ud835\udc66)). (2) As first pointed out by Rafailov et al. (2023), observe that we can invert Eq. 2 and write the reward as a function of the policy, i.e. the \u201cDPO Trick\u201d: \u2200\ud835\udc65, \ud835\udc66: \ud835\udc5f(\ud835\udc65, \ud835\udc66) = 1 \ud835\udf02 \u0012 ln(\ud835\udc4d(\ud835\udc65)) + ln \u0012 \ud835\udf0b\ud835\udc61+1(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udc61(\ud835\udc66|\ud835\udc65) \u0013\u0013 . (3) As soon as X and Y become large, we can no longer guarantee the above expression holds exactly at all (\ud835\udc65, \ud835\udc66) and therefore need to turn our attention to choosing a policy such that Eq. 3 is approximately true. We propose using a simple square loss objective between the two sides of Eq. 3 to measure the goodness of a policy, i.e. reducing RL to a regression problem: \u0012 \ud835\udc5f(\ud835\udc65, \ud835\udc66) \u22121 \ud835\udf02 \u0012 ln(\ud835\udc4d(\ud835\udc65)) + ln \u0012 \ud835\udf0b\ud835\udc61+1(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udc61(\ud835\udc66|\ud835\udc65) \u0013\u0013\u00132 . (4) 4 Algorithm 1 REgression to RElative REward Based RL (REBEL) 1: Input: Reward \ud835\udc5f, policy class \u03a0 = {\ud835\udf0b\ud835\udf03}, base distribution \ud835\udf07, learning rate \ud835\udf02 2: Initialize policy \ud835\udf0b\ud835\udf030. 3: for \ud835\udc61= 0 to \ud835\udc47\u22121 do 4: // Base distribution \ud835\udf07can either be an offline dataset or \ud835\udf0b\ud835\udc61. 5: Collect dataset D\ud835\udc61= {\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032} where \ud835\udc65\u223c\ud835\udf0c, \ud835\udc66\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65), \ud835\udc66\u2032 \u223c\ud835\udf07(\u00b7|\ud835\udc65) 6: Solve square loss regression problem: \ud835\udf03\ud835\udc61+1 = argmin \ud835\udf03 \u2211\ufe01 (\ud835\udc65,\ud835\udc66,\ud835\udc66\u2032)\u2208D\ud835\udc61 \u00121 \ud835\udf02 \u0012 ln \ud835\udf0b\ud835\udf03(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) \u2212ln \ud835\udf0b\ud835\udf03(\ud835\udc66\u2032|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65) \u0013 \u2212(\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212\ud835\udc5f(\ud835\udc65, \ud835\udc66\u2032)) \u00132 (9) 7: end for Unfortunately, this loss function includes the partition function \ud835\udc4d(\ud835\udc65), which can be challenging to approximate over large input / output domains. However, observe that \ud835\udc4d(\ud835\udc65) only depends on \ud835\udc65and not \ud835\udc66. Thus, if we have access to paired samples, i.e. (\ud835\udc65, \ud835\udc66) and (\ud835\udc65, \ud835\udc66\u2032), we can instead regress the difference in rewards to eliminate this term from our objective: \u0012 (\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212\ud835\udc5f(\ud835\udc65, \ud835\udc66\u2032)) \u22121 \ud835\udf02 \u0012 ln \u0012 \ud835\udf0b\ud835\udc61+1(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udc61(\ud835\udc66|\ud835\udc65) \u0013 \u2212ln \u0012 \ud835\udf0b\ud835\udc61+1(\ud835\udc66\u2032|\ud835\udc65) \ud835\udf0b\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65) \u0013\u0013\u00132 . (5) Of course, we need to evaluate this loss function on some distribution of samples. In particular, we propose using an on-policy dataset D\ud835\udc61= {\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032} with \ud835\udc65\u223c\ud835\udf0c, \ud835\udc66\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65), \ud835\udc66\u2032 \u223c\ud835\udf07(\u00b7|\ud835\udc65), where \ud835\udf07is some base distribution. The base distribution \ud835\udf07can either be a fixed offline dataset (e.g. the instruction fine-tuning dataset) or \ud835\udf0b\ud835\udc61itself. Thus, the choice of base distribution \ud835\udf07determines whether REBEL is hybrid or fully online. Putting it all together, we arrive at our core REBEL objective: \u2211\ufe01 (\ud835\udc65,\ud835\udc66,\ud835\udc66\u2032)\u2208D\ud835\udc61 \u0012 (\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212\ud835\udc5f(\ud835\udc65, \ud835\udc66\u2032)) \u22121 \ud835\udf02 \u0012 ln \u0012 \ud835\udf0b\ud835\udc61+1(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udc61(\ud835\udc66|\ud835\udc65) \u0013 \u2212ln \u0012 \ud835\udf0b\ud835\udc61+1(\ud835\udc66\u2032|\ud835\udc65) \ud835\udf0b\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65) \u0013\u0013\u00132 . (6) To recap, given a pair of completions \ud835\udc66, \ud835\udc66\u2032 to a prompt \ud835\udc65, REBEL attempt to fit the relative reward \ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212\ud835\udc5f(\ud835\udc65, \ud835\udc66\u2032) (7) by optimizing over a class of predictors of the form 1 \ud835\udf02 \u0012 ln \ud835\udf0b\ud835\udf03(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) \u2212ln \ud835\udf0b\ud835\udf03(\ud835\udc66\u2032|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65) \u0013 . (8) Critically, observe that if we were able to perfectly solve this regression problem, we would indeed recover the optimal solution to the KL-constrained RL problem we outlined in Eq. 1. While the above update might seem somewhat arbitrary at first glance, it has deep connections to prior work in the literature that illuminate its strengths over past techniques. We now discuss some of them. 3 Understanding REBEL as an Adaptive Policy Gradient We begin by recapping the foundational algorithms for policy optimization before situating REBEL within this space of techniques. 5 3.1 Adaptive Gradient Algorithms for Policy Optimization In this section, we give a brief overview of three adaptive gradient algorithms: Mirror Descent (MD), Natural Policy Gradient (NPG), and Proximal Policy Optimization (PPO). We discuss why they are preferable to their non-adaptive counterparts (Gradient Descent (GD) and Policy Gradient (PG)) and the connections between them. Mirror Descent. If X and Y are small discrete spaces (i.e. we are in the tabular setting), we can used the closed-form expression for the minimum relative entropy problem (Eq. 2). This is equivalent to the classic Mirror Descent (MD) algorithm with KL as the Bregman divergence. This update procedure is also sometimes known as soft policy iteration (Ziebart et al., 2008). Note that it does not even involve a parameterized policy and is therefore manifestly covariant. MD ensures a 1/\ud835\udc47convergence rate, i.e., after \ud835\udc47iterations, it must find a policy \u02c6 \ud835\udf0b, such that E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\u2605(.|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212E\ud835\udc65,\ud835\udc66\u223c\u02c6 \ud835\udf0b(.|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2264\ud835\udc42(1/\ud835\udc47). In particular, the convergence is almost dimension-free: the convergence rate scales logarithmically with respect to the size of the Y space. Note that gradient ascent will not enjoy such a dimension-free rate when optimizing over the simplex. When sup\ud835\udc65,\ud835\udc66|\ud835\udc5f(\ud835\udc65, \ud835\udc66)| is bounded, we can show that the KL divergence between two policies, i.e., KL(\ud835\udf0b\ud835\udc61+1(\u00b7|\ud835\udc65)||\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65)), is also bounded, ensuring \ud835\udf0b\ud835\udc61+1 stay close to \ud835\udf0b\ud835\udc61. One can also show monotonic policy improvement, i.e., E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\ud835\udc61+1\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2265E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\ud835\udc61\ud835\udc5f(\ud835\udc65, \ud835\udc66). Foreshadowing a key point we will soon expound upon, both NPG and PPO can be considered approximations of this idealized tabular policy update procedure. Natural Policy Gradient. When Y and X are large, we cannot simply enumerate all \ud835\udc65and \ud835\udc66. Thus, we need to use a function to approximate \ud835\udf0b, which makes it impossible to exactly implement Eq. 2. Let us use \ud835\udf0b\ud835\udf03to denote a parameterized policy with parameter \ud835\udf03(e.g. the weights of a transformer). The Natural Policy Gradient (NPG, Kakade (2001)) approximates the KL in Equation 1 via its second-order Taylor expansion, whose Hessian is known as the Fisher Information Matrix (FIM, Bagnell and Schneider (2003)), i.e. E\ud835\udc65KL(\ud835\udf0b\ud835\udf03(\u00b7|\ud835\udc65)||\ud835\udf0b\ud835\udf03\ud835\udc61(\u00b7|\ud835\udc65)) \u2248(\ud835\udf03\u2212\ud835\udf03\ud835\udc61)\u22a4E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\ud835\udf03\ud835\udc61(\u00b7|\ud835\udc65) \u0002 \u2207ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)\u2207ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)\u22a4\u0003 | {z } Fisher Information Matrix \ud835\udc39\ud835\udc61 (\ud835\udf03\u2212\ud835\udf03\ud835\udc61). The NPG update can be derived by plugging in this approximation to Eq. 1, further approximating the E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\ud835\udf03(\u00b7|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66) by its first order Taylor expansion around \ud835\udf03\ud835\udc61, and finding the root of the resulting quadratic form: \ud835\udf03\ud835\udc61+1 = \ud835\udf03\ud835\udc61+ \ud835\udf02\ud835\udc39\u2020 \ud835\udc61 \u0010 E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\ud835\udf03\ud835\udc61(\u00b7|\ud835\udc65)\u2207ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u0011 (10) where \ud835\udc39\u2020 \ud835\udc61is pseudo-inverse of \ud835\udc39\ud835\udc61, and E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\ud835\udf03\ud835\udc61(\u00b7|\ud835\udc65)\u2207ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66) is the standard policy gradient (i.e. REINFORCE (Williams, 1992)). As mentioned above, this update procedure can be understood as performing gradient updates in the local geometry induced by the Fisher information matrix, which ensures that we are taking small steps in policy space rather than in parameter space. Conversely, unlike regular gradient descent methods (i.e., PG), NPG allows us to make large changes in the parameter space \u0398, as long as the resulting two policies are close to each other in terms of KL divergence. This property allows NPG to make more aggressive and adaptive updates in the parameter space of the policy as well as be invariant to linear transformations of the parameters. Theoretically, Agarwal et al. (2021a) show that NPG with softmax parameterization converges at the 1/\ud835\udc47rate in a dimension-free manner, provably faster than the standard PG under the same setup. Empirically, the 6 superior convergence speed of NPG compared to that of PG was observed in its original exploration (Kakade, 2001; Bagnell and Schneider, 2003), as well as in follow-up work like TRPO (Schulman et al., 2015a). Critically, while elegant in theory, NPG, unfortunately, does not scale to modern generative models due to the need for computing the Fisher matrix inverse either explicitly or implicitly via the Hessian-vector matrix product trick. Proximal Policy Optimization. To address the scalability of NPG, Schulman et al. (2017) proposes Proximal Policy Optimization (PPO). Rather than explicitly computing the KL divergence between policies or approximating it via a Taylor expansion, PPO takes a more direct route and uses clipped updates with the hope of controlling the action probability deviation from \ud835\udf0b\ud835\udf03\ud835\udc61+1 to \ud835\udf0b\ud835\udf03\ud835\udc61, i.e. \ud835\udf03\ud835\udc61+1 := argmax \ud835\udf03 E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\ud835\udf03\ud835\udc61(\u00b7|\ud835\udc65)clip \u0012 \ud835\udf0b\ud835\udf03(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) ; 1 \u2212\ud835\udf16, 1 + \ud835\udf16 \u0013 \ud835\udc5f(\ud835\udc65, \ud835\udc66). (11) Prima facie, this update follows the underlying intuition of NPG: allow big and adaptive changes in the policy\u2019s parameters \ud835\udf03, as long as the corresponding action probabilities do not change too much. This perhaps explains the superiority of PPO over vanilla REINFORCE in domains like continuous control. Unfortunately, under closer scrutiny, it becomes apparent that PPO-style clipped updates neither guarantee closeness to the prior policy nor have NPG-style adaptivity. While the clipping operator can set the gradient to be zero at samples (\ud835\udc65, \ud835\udc66) where \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66|\ud835\udc65) is much larger or smaller than \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65), it cannot actually guarantee \ud835\udf0b\ud835\udf03\ud835\udc61+1 staying close to \ud835\udf0b\ud835\udf03\ud835\udc61, a phenomenon empirically observed in prior work (Hsu et al., 2020). Furthermore, hard clipping is not adaptive \u2013 it treats all (\ud835\udc65, \ud835\udc66) equally and clips whenever the ratio is outside of a fixed range. In contrast, constraining the KL divergence to the prior policy allows one to vary the ratio \ud835\udf0b(\ud835\udc66|\ud835\udc65)/\ud835\udf0b\ud835\udc61(\ud835\udc66|\ud835\udc65) at different (\ud835\udc65, \ud835\udc66), as long as the total KL divergence across the state space is small. Lastly, clipping reduces the effective size of a batch of training examples and thus wastes training samples. A REBEL With a Cause. Our algorithm REBEL addresses the limitations of NPG (scalability) and PPO (lack of conservativity or adaptivity) from above. First, unlike NPG, it does not rely on the Fisher information matrix at all and can easily scale to modern LLM applications, yet (as we will discuss below) can be interpreted as a generalization of NPG. Second, in contrast to PPO, it doesn\u2019t have unjustified heuristics and thus enjoys strong convergence and regret guarantees just like NPG. 3.2 Connections between REBEL and MD / NPG We now sketch a series of connections between REBEL and the methods outlined above. Exact REBEL is Mirror Descent. First, to build intuition, we interpret our algorithm\u2019s behavior under the assumption that the least square regression optimization returns the exact Bayes Optimal solution (i.e., our learned predictor achieves zero prediction error everywhere): \u2200\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032 : 1 \ud835\udf02 \u0012 ln \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) \u2212ln \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66\u2032|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65) \u0013 = \ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212\ud835\udc5f(\ud835\udc65, \ud835\udc66\u2032) (12) Conditioned on Eq. 12 being true, a few lines of algebraic manipulation reveals that there must exist a function \ud835\udc50(\ud835\udc65) which is independent of \ud835\udc66, such that: \u2200\ud835\udc65, \ud835\udc66: 1 \ud835\udf02ln \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) = \ud835\udc5f(\ud835\udc65, \ud835\udc66) + \ud835\udc50(\ud835\udc65). 7 Taking an exp on both sides and re-arrange terms, we get: \u2200\ud835\udc65, \ud835\udc66: \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66|\ud835\udc65) \u221d\ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) exp (\ud835\udf02\ud835\udc5f(\ud835\udc65, \ud835\udc66)) . In other words, under the strong assumption that least square regression returns a point-wise accurate estimator (i.e., Eq. 12), we see the REBEL recovers the exact MD update, which gives it (a) a fast 1/\ud835\udc47convergence rate (Shani et al., 2020; Agarwal et al., 2021a), (b) conservativity, i.e., max\ud835\udc65KL(\ud835\udf0b\ud835\udc61+1(\u00b7|\ud835\udc65)||\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65)) is bounded as long as max\ud835\udc65,\ud835\udc66|\ud835\udc5f(\ud835\udc65, \ud835\udc66)| is bounded, and (c) monotonic policy improvement via the NPG standard analysis (Agarwal et al., 2021a). NPG is Approximate REBEL with Gauss-Newton Updates. We provide another interpretation of REBEL by showing that NPG (Eq. 10) can be understood as a special case of REBEL where the least square problem in Eq. 9 is approximately solved via a single iteration of the Gauss-Newton algorithm. As for any application of Gauss-Newton, we start by approximating our predictor 1 \ud835\udf02ln \ud835\udf0b\ud835\udf03(\ud835\udc66|\ud835\udc65)/\ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) by its first order Taylor expansion at \ud835\udf03\ud835\udc61: 1 \ud835\udf02 \u0000ln \ud835\udf0b\ud835\udf03(\ud835\udc66|\ud835\udc65) \u2212ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)\u0001 \u22481 \ud835\udf02\u2207\ud835\udf03ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)\u22a4(\ud835\udf03\u2212\ud835\udf03\ud835\udc61), where \u2248indicates that we ignore higher order terms in the expansion. If we \ud835\udeff:= \ud835\udf03\u2212\ud835\udf03\ud835\udc61and replace 1 \ud835\udf02 \u0000ln \ud835\udf0b\ud835\udf03(\ud835\udc66|\ud835\udc65) \u2212ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)\u0001 by its above first order approximation in Eq. 9, we arrive at the following quadratic form: min \ud835\udeffE\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b\ud835\udf03\ud835\udc61(\u00b7|\ud835\udc65),\ud835\udc66\u2032\u223c\ud835\udf07(\u00b7|\ud835\udc65) \u00121 \ud835\udf02 \u0000\u2207\ud835\udf03ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) \u2212\u2207\ud835\udf03ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65)\u0001\u22a4\ud835\udeff\u2212(\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212\ud835\udc5f(\ud835\udc65, \ud835\udc66\u2032)) \u00132 . (13) Further simplifying notation, we denote the uniform mixture of \ud835\udf0b\ud835\udc61 and \ud835\udf07 as \ud835\udf0b\ud835\udc5a\ud835\udc56\ud835\udc65(\u00b7|\ud835\udc65) := (\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65) + \ud835\udf07(\u00b7|\ud835\udc65))/2 and the Fisher information matrix \ud835\udc39\ud835\udc61averaged under said mixture as: \ud835\udc39\ud835\udc61= E\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b\ud835\udc5a\ud835\udc56\ud835\udc65(\u00b7|\ud835\udc65) h \u2207\ud835\udf03ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) \u0000\u2207\ud835\udf03ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)\u0001\u22a4i . Solving the above least square regression to obtain a minimum norm solution, we have the following claim. Claim 1. The minimum norm minimizer \ud835\udeff\u2605of the least squares problem in Eq. 13 recovers an advantage-based variant of the NPG update: \ud835\udeff\u2605:= \ud835\udf02\ud835\udc39\u2020 \ud835\udc61 \u0000E\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b\ud835\udc5a\ud835\udc56\ud835\udc65(\u00b7|\ud835\udc65)\u2207\ud835\udf03ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)[\ud835\udc34\ud835\udf0b\ud835\udc61(\ud835\udc65, \ud835\udc66)]\u0001 , where \ud835\udc39\u2020 \ud835\udc61is pseudo-inverse of \ud835\udc39\ud835\udc61, and the advantage is defined as \ud835\udc34\ud835\udf0b\ud835\udc61(\ud835\udc65, \ud835\udc66) := \ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212 E\ud835\udc66\u2032\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66). The proof of this claim is deferred to Appendix A. Observe that in REBEL, we never explicitly compute the advantage \ud835\udc34\ud835\udf0b\ud835\udc61. However, applying Gauss-Newton to our objective leads to an advantage-based NPG (rather than the traditional \ud835\udc44-function based NPG, e.g., Q-NPG from Agarwal et al. (2021a, 2019)) which indicates that predicting reward difference has an implicit variance reduction effect, as by definition, an advantage function includes a value function baseline. 1 1Note that the original form of NPG is on-policy (Kakade, 2001; Sutton et al., 1999), i.e., the expectations under \ud835\udf0b\ud835\udc61. Our formulation is more general: when set \ud835\udf07= \ud835\udf0b\ud835\udc61, a Gauss-Newton step will recover the original on-policy form of NPG from Kakade (2001); Sutton et al. (1999). More recent works have extended NPG beyond on-policy (e.g., Agarwal et al. (2021a, 2020)). 8 3.3 Extending REBEL to General Preferences In the above discussion, we assume we are given access to a ground-truth reward function. However, in the generative model fine-tuning applications of RL, we often need to learn from human preferences, rather than rewards. This shift introduces a complication: not all preferences can be rationalized by an underlying utility function. In particular, intransitive preferences which are well-known to result from aggregation of different sub-populations or users evaluating different pairs of items on the basis of different features (May, 1954; Tversky, 1969; Gardner, 1970) cannot be accurately captured by a single reward model. To see this, note that if we have \ud835\udc4e\u227b\ud835\udc4f, \ud835\udc4f\u227b\ud835\udc50, and \ud835\udc50\u227b\ud835\udc4e, it is impossible to have a reward model that simultaneously sets \u02c6 \ud835\udc5f(\ud835\udc4e) > \u02c6 \ud835\udc5f(\ud835\udc4f), \u02c6 \ud835\udc5f(\ud835\udc4f) > \u02c6 \ud835\udc5f(\ud835\udc50), and \u02c6 \ud835\udc5f(\ud835\udc50) > \u02c6 \ud835\udc5f(\ud835\udc4e). As we increase the space of possible choices to that of all possible prompt completions, the probability of such intransitivities sharply increases (Dud\u00edk et al., 2015), as reflected in the high levels of annotator disagreement in LLM fine-tuning datasets (Touvron et al., 2023). Thus, rather than assuming access to a reward model, in such settings, we assume access to a preference model (Munos et al., 2023; Swamy et al., 2024; Rosset et al., 2024; Ye et al., 2024). 3.3.1 A Game-Theoretic Perspective on Learning from Preferences More specifically, for any tuple (\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032), we assume we have access to P(\ud835\udc66\u227b\ud835\udc66\u2032|\ud835\udc65): the probability that \ud835\udc66is preferred to \ud835\udc66\u2032. We then define our preference model \ud835\udc59as \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032) \u225c2 \u00b7 P(\ud835\udc66\u227b\ud835\udc66\u2032|\ud835\udc65) \u22121. (14) Observe that \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032) \u2208[\u22121, 1] is skew-symmetric, i.e., \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66) = 0, \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032) + \ud835\udc59(\ud835\udc65, \ud835\udc66\u2032, \ud835\udc66) = 0 for all \ud835\udc65\u2208X, \ud835\udc66, \ud835\udc66\u2032 \u2208Y. If the learner can only receive a binary feedback \ud835\udc5c\u2208{0, 1} indicating the preference between \ud835\udc66and \ud835\udc66\u2032, we assume \ud835\udc5cis sampled from a Bernoulli distribution with mean P(\ud835\udc66\u227b\ud835\udc66\u2032|\ud835\udc65), where \ud835\udc5c= 1 means that \ud835\udc66is preferred over \ud835\udc66\u2032 and 0 otherwise. Given access to such a preference model, a solution concept to the preference aggregation problem with deep roots in the social choice theory literature (Kreweras, 1965; Fishburn, 1984; Kramer, 1973; Simpson, 1969) and the dueling bandit literature (Yue et al., 2012; Dud\u00edk et al., 2015) is that of a minimax winner (MW) \ud835\udf0bMW: the Nash Equilibrium strategy of the symmetric two-player zero-sum game with \ud835\udc59as a payoff function. In particular, due to the skew-symmetric property of \ud835\udc59, Swamy et al. (2024) proved that there exists a policy \ud835\udf0bMW such that max \ud835\udf0bE\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b(\u00b7|\ud835\udc65),\ud835\udc66\u2032\u223c\ud835\udf0bMW (\u00b7|\ud835\udc65) [\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032)] = min \ud835\udf0bE\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0bMW (\u00b7|\ud835\udc65),\ud835\udc66\u2032\u223c\ud835\udf0b(\u00b7|\ud835\udc65) [\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032)] . This implies that (\ud835\udf0bMW, \ud835\udf0bMW) is a Nash Equilibrium (Wang et al., 2023b; Munos et al., 2023; Swamy et al., 2024; Ye et al., 2024). As is standard in game solving, our objective is to obtain an \ud835\udf16-approximate MW b \ud835\udf0bmeasured by the duality gap (DG): DG(b \ud835\udf0b) := max \ud835\udf0bE\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b(\u00b7|\ud835\udc65),\ud835\udc66\u2032\u223cb \ud835\udf0b(\u00b7|\ud835\udc65) [\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032)] \u2212min \ud835\udf0bE\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223cb \ud835\udf0b(\u00b7|\ud835\udc65),\ud835\udc66\u2032\u223c\ud835\udf0b(\u00b7|\ud835\udc65) [\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032)] \u2264\ud835\udf16. In the following discussion, we will use \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udf0b) to denote E\ud835\udc66\u2032\u223c\ud835\udf0b(\u00b7|\ud835\udc65) [\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032)] and \ud835\udc59(\ud835\udf0b, \ud835\udf0b\u2032) to denote E\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b(\u00b7|\ud835\udc65),\ud835\udc66\u2032\u223c\ud835\udf0b\u2032(\u00b7|\ud835\udc65) [\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032)] for notational convenience. 9 3.3.2 Self-Play Preference Optimization (SPO) with REBEL as Base Learner We can straightforwardly extend REBEL to the general preference setting via an instantiation of the Self-Play Preference Optimization (SPO) reduction of Swamy et al. (2024). In short, Swamy et al. (2024) prove that rather than performing adversarial training, we are able to perform a simple and stable self-play procedure while retaining strong theoretical guarantees. Practically, this corresponds to sampling at leas two completions from the current policy, querying a learned preference / supervisor model on each pair, and using the win rate for each completion as its reward. We will now describe how we can adapt REBEL to this mode of feedback. Assuming that we can query the preference oracle \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032) at will, we can modify the least square objective Eq. (9) to \ud835\udf03\ud835\udc61+1 := argmin \ud835\udf03 \u2211\ufe01 \ud835\udc65,\ud835\udc66,\ud835\udc66\u2032,\ud835\udc66\u2032\u2032\u2208D\ud835\udc61 \u00121 \ud835\udf02 \u0012 ln \ud835\udf0b\ud835\udf03(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) \u2212ln \ud835\udf0b\ud835\udf03(\ud835\udc66\u2032|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65) \u0013 \u2212(\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032\u2032) \u2212\ud835\udc59(\ud835\udc65, \ud835\udc66\u2032, \ud835\udc66\u2032\u2032)) \u00132 where \ud835\udc65\u223c\ud835\udf0c, \ud835\udc66\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65), \ud835\udc66\u2032\u2032 \u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65), \ud835\udc66\u2032 \u223c\ud835\udf07(\u00b7|\ud835\udc65). When the exact value of \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032) is unavailable but only a binary preference feedback \ud835\udc5c\ud835\udc66,\ud835\udc66\u2032 \u2208{0, 1} sampling from Bernoulli with mean \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032) is available, we can just replace \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032\u2032) \u2212\ud835\udc59(\ud835\udc65, \ud835\udc66\u2032, \ud835\udc66\u2032\u2032) by \ud835\udc5c\ud835\udc66,\ud835\udc66\u2032 \u2212\ud835\udc5c\ud835\udc66\u2032,\ud835\udc66\u2032\u2032. It is easy to see that the Bayes optimal of the above least square regression problem is equal to: E\ud835\udc66\u2032\u2032\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65)\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032\u2032) \u2212E\ud835\udc66\u2032\u2032\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65)\ud835\udc59(\ud835\udc65, \ud835\udc66\u2032, \ud835\udc66\u2032\u2032) = \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udf0b\ud835\udc61) \u2212\ud835\udc59(\ud835\udc65, \ud835\udc66\u2032, \ud835\udf0b\ud835\udc61). Swamy et al. (2024) define an iteration-dependent reward \ud835\udc5f\ud835\udc61(\ud835\udc65, \ud835\udc66) := E\ud835\udc66\u2032\u2032\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65)\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032\u2032) = \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udf0b\ud835\udc61). Thus, the above regression problem can be understood as an extension of REBEL to the setting where the reward function changes at each iteration \ud835\udc61. Swamy et al. (2024) shows that running the exact MD (Eq. 2) with this iteration-dependent reward function \ud835\udc5f\ud835\udc61leads to fast convergence to an approximate Minimax Winner, a property that we will use to provide the regret bound of REBEL in the general preference setting while accounting for nonzero mean squared error. 4 Theoretical Analysis In the previous section, we interpret REBEL as the exact MD and show its convergence by assuming that least square regression always returns a predictor that is accurate everywhere. While such an explanation is simple and has also been used in prior work, point-wise out-of-distribution generalization is an extremely strong condition and is significantly beyond what a standard supervised learning method can promise. In this section, we significantly relax this condition via a reduction-based analysis: As long as we can solve the regression problems well in an in-distribution manner, REBEL can compete against any policy covered by the training data distributions. Formally, we assume the following generalization condition holds on the regressors we find. Assumption 1 (Regression generalization bounds). Over \ud835\udc47iterations, assume that for all \ud835\udc61, we have: E\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65),\ud835\udc66\u2032\u223c\ud835\udf07(\u00b7|\ud835\udc65) \u00121 \ud835\udf02 \u0012 ln \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) \u2212ln \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66\u2032|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65) \u0013 \u2212(\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212\ud835\udc5f(\ud835\udc65, \ud835\udc66\u2032)) \u00132 \u2264\ud835\udf16, for some \ud835\udf16. 10 Intuitively, this assumption is saying that there is a function in our class of regressors that is able to accurately fit the difference of rewards. Recall that our class of regressors is isomorphic to our policy class. Therefore, as long as our class of policies is expressive, we would expect this assumption to hold with small \ud835\udf16. For all domains we consider, our policy class is a flexible set of generative models (e.g. Transformer-based LLMs or diffusion models). Thus, we believe it is reasonable to believe this assumption holds in practice \u2013 see Figure 6 in Appendix G for empirical evidence of this point and Example 1 for more discussion. More formally, the above assumption bounds the standard in-distribution generalization error (v.s. the point-wise guarantee in Eq. 12) of a well-defined supervised learning problem: least squares regression. The generalization error \ud835\udf16captures the possible errors from the learning process for \ud835\udf03\ud835\udc61+1 and it could depend on the complexity of the policy class and the number of samples used in the dataset D\ud835\udc61. For instance, when the the function ln \ud835\udf0b\u2212ln \ud835\udf0b\u2032 induced by the log-difference of two policies (\ud835\udf0b, \ud835\udf0b\u2032) are rich enough (e.g., policies are deep neural networks) to capture the reward difference, then \ud835\udf16in this assumption converges to zero as we increase the number of training data. Note that while \ud835\udf16can be small, it does not imply that the learned predictor will have a small prediction error in a point-wise manner \u2013 it almost certainly will not. Example 1. One simple example is when \ud835\udf0b(\ud835\udc66|\ud835\udc65) \u221dexp(\ud835\udf03\u22a4\ud835\udf19(\ud835\udc65, \ud835\udc66)) for some features \ud835\udf19(\ud835\udc65, \ud835\udc66). In this case, ln(\ud835\udf0b(\ud835\udc66|\ud835\udc65)/\ud835\udf0b\ud835\udc61(\ud835\udc66|\ud835\udc65)) \u2212ln(\ud835\udf0b(\ud835\udc66\u2032|\ud835\udc65)/\ud835\udf0b\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65)) = (\ud835\udf03\u2212\ud835\udf03\ud835\udc61)\u22a4(\ud835\udf19(\ud835\udc65, \ud835\udc66) \u2212\ud835\udf19(\ud835\udc65, \ud835\udc66\u2032)), which means that our regression problem in Eq. 9 is a classic linear regression problem. When the reward \ud835\udc5f(\ud835\udc65, \ud835\udc66) is also linear in feature \ud835\udf19(\ud835\udc65, \ud835\udc66), then Eq. 9 is a well-specified linear regression problem, and \ud835\udf16typically scales in the rate of \ud835\udc42(\ud835\udc51/|D\ud835\udc61|) with \ud835\udc51being the dimension of feature \ud835\udf19. We can extend the above example to the case where \ud835\udf19is the feature corresponding to some kernel, e.g., RBF kernel or even Neural Tangent Kernel, which allows us to capture the case where \ud835\udf0bis a softmax wide neural network with the least square regression problem solved by gradient flow. The error \ud835\udf16again scales poly(\ud835\udc51/|D\ud835\udc61|), where \ud835\udc51is the effective dimension of the corresponding kernel. We now define the concentrability coefficient (Kakade and Langford, 2002) that quantifies how the training data distribution is covering a comparator policy. Data Coverage. Recall that the base distribution \ud835\udf07can be some behavior policy, which in RLHF can be a human labeler, a supervised fine-tuned policy (SFT), or just the current learned policy (i.e., on-policy). Given a test policy \ud835\udf0b, we denote by \ud835\udc36\ud835\udf07\u2192\ud835\udf0bthe concentrability coefficient, i.e. \ud835\udc36\ud835\udf07\u2192\ud835\udf0b= max \ud835\udc65,\ud835\udc66 \ud835\udf0b(\ud835\udc66|\ud835\udc65) \ud835\udf07(\ud835\udc66|\ud835\udc65) . (15) We say \ud835\udf07covers \ud835\udf0bif \ud835\udc36\ud835\udf07\u2192\ud835\udf0b< +\u221e. Our goal is to bound the regret between our learned policies and an arbitrary comparator \ud835\udf0b\u2217(e.g. the optimal policy if it is covered by \ud835\udf07) using \ud835\udf16and the concentrability coefficient defined in Eq. 15. The following theorem formally states the regret bound of our algorithm. Theorem 1. Under Assumption 1, after \ud835\udc47many iterations, with a proper learning rate \ud835\udf02, among the learned policies \ud835\udf0b1, . . . , \ud835\udf0b\ud835\udc47, there must exist a policy \u02c6 \ud835\udf0b, such that: \u2200\ud835\udf0b\u2217: E\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b\u2217(\u00b7|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212E\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\u02c6 \ud835\udf0b(\u00b7|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2264\ud835\udc42 \u221a\ufe02 1 \ud835\udc47+ \u221a\ufe01 \ud835\udc36\ud835\udf07\u2192\ud835\udf0b\u2217\ud835\udf16. ! . 11 Here the \ud835\udc42-notation hides problem-dependent constants that are independent of \ud835\udf16, \ud835\udc36\ud835\udf07\u2192\ud835\udf0b\u2217,\ud835\udc47. The above theorem shows a reduction from RL to supervised learning \u2014 as long as supervised learning works (i.e., \ud835\udf16is small), then REBEL can compete against any policy \ud835\udf0b\u2217that is covered by the base data distribution \ud835\udf07. In the regret bound, the 1/ \u221a \ud835\udc47comes from Mirror Descent style update, and \ud835\udc36\ud835\udf07\u2192\ud835\udf0b\u2217\ud835\udf16captures the cost of distribution shift: we train our regressors under distribution \ud835\udf0b\ud835\udc61and \ud835\udf07, but we want the learned regressor to predict well under \ud835\udf0b\u2217. Similar to the NPG analysis from Agarwal et al. (2021a), we now have a slower convergence rate 1/ \u221a \ud835\udc47, which is due to the fact that we have approximation error from learning. Such an agnostic regret bound \u2014 being able compete against any policy that is covered by training distributions \u2013 is the strongest type of agnostic learning results known in the RL literature, matching the best of what has appeared in prior policy optimization work including PSDP (Bagnell et al., 2003), CPI (Kakade and Langford, 2002), NPG (Agarwal et al., 2021a), and PC-PG (Agarwal et al., 2020). While in this work, we use the simplest and most intuitive definition of coverage \u2013 the density ratio-based definition in Eq. 15 \u2013 extension to more general ones such as transfer error (Agarwal et al., 2020, 2021a) or concentrability coefficients that incorporate function class (e.g., Song et al. (2023b)) is straightforward. We defer the proof of the above theorem and the detailed constants that we omitted in the \ud835\udc42notation to Appendix B. 4.1 Extension to General Preferences Extending the above analysis to the general preference case is straightforward except that it requires a stronger coverage condition. This is because we want to find a Nash Equilibrium, which requires a comparison between the learned policy against all the other policies. Results from the Markov Game literature (Cui and Du, 2022b; Zhong et al., 2022; Cui and Du, 2022a; Xiong et al., 2023) and Cui and Du (2022b) have shown that the standard single policy coverage condition used in single-player optimization is provably not sufficient. In particular, they propose using a notion of unilateral concentrability for efficient learning, which can be defined as \ud835\udc36uni,\ud835\udf07:= max \ud835\udf0b,\ud835\udc65,\ud835\udc66,\ud835\udc66\u2032\u2032 \ud835\udf0bMW(\ud835\udc66|\ud835\udc65)\ud835\udf0b(\ud835\udc66\u2032\u2032|\ud835\udc65) \ud835\udf07(\ud835\udc66|\ud835\udc65)\ud835\udf07(\ud835\udc66\u2032\u2032|\ud835\udc65) , in the general preference setting. Notably, the above unilateral concentrability coefficient \ud835\udc36uni,\ud835\udf07is equivalent to \ud835\udc36\ud835\udf07:= max\ud835\udf0b,\ud835\udc65,\ud835\udc66 \ud835\udf0b(\ud835\udc66|\ud835\udc65) \ud835\udf07(\ud835\udc66|\ud835\udc65) since \ud835\udc36\ud835\udf07\u2264\ud835\udc36uni,\ud835\udf07\u2264\ud835\udc362 \ud835\udf07. Therefore in the following discussion, we will use \ud835\udc36\ud835\udf07as the coverage condition. In addition, we also assume the generalization error of the regression problem is small, Assumption 2 (Regression generalization bounds for general preference). Over \ud835\udc47iterations, assume that for all \ud835\udc61, we have: E\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65),\ud835\udc66\u2032\u223c\ud835\udf07(\u00b7|\ud835\udc65) \u00121 \ud835\udf02 \u0012 ln \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) \u2212ln \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66\u2032|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65) \u0013 \u2212(\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udf0b\ud835\udc61) \u2212\ud835\udc59(\ud835\udc65, \ud835\udc66\u2032, \ud835\udf0b\ud835\udc61)) \u00132 \u2264\ud835\udf16, for some \ud835\udf16. Under the above coverage condition and generalization bound, we can show that REBEL is able to learn an approximate Minimax Winner: 12 Theorem 2. With assumption 2, after \ud835\udc47many iterations, with a proper learning rate \ud835\udf02, the policy b \ud835\udf0b= Unif({\ud835\udf0b\ud835\udc61}\ud835\udc47 \ud835\udc61=1) satisfies that: DG(b \ud835\udf0b) \u2264\ud835\udc42 \u221a\ufe02 1 \ud835\udc47+ \u221a\ufe01 \ud835\udc36\ud835\udf07\ud835\udf16. ! . Here the \ud835\udc42-notation hides problem-dependent constants that are independent of \ud835\udf16, \ud835\udc36\ud835\udf07,\ud835\udc47. We defer the proof to Appendix C. Note that the coverage condition here is much stronger than the single policy coverage condition in the RL setting. We conjecture that this is the cost one has to pay by moving to the more general preference setting and leaving the investigation of the necessarily coverage condition for future work. 5 Experiments The implementation of REBEL follows Algorithm 1. In each iteration, REBEL collects a dataset D\ud835\udc61= {\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032}, where \ud835\udc65\u223c\ud835\udf0c, \ud835\udc66\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65), \ud835\udc66\u2032 \u223c\ud835\udf07(\u00b7|\ud835\udc65). Subsequently, REBEL optimizes the least squares regression problem in Eq. 9 through gradient descent with AdamW (Loshchilov and Hutter, 2017). We choose \ud835\udf07= \ud835\udf0b\ud835\udc61such that both \ud835\udc66and \ud835\udc66\u2032 are generated by the current policy. We empirically assess REBEL\u2019s performance on both natural language generation and text-guided image generation. 5.1 Natural Language Generation Baselines: We compare REBEL with baseline RL algorithms, PPO (Schulman et al., 2017), Direct Preference Optimization (DPO) (Rafailov et al., 2023), and REINFORCE (Williams, 1992) and its multi-sample extension, REINFORCE Leave-One-Out (RLOO) (Kool et al., 2019). The REINFORCE method is implemented with a moving average baseline of the reward. We include two variants of RLOO with two (\ud835\udc58= 2) and four (\ud835\udc58= 4) generations per prompt. Dataset: We use the TL;DR summarization dataset (Stiennon et al., 2020)2 to train the model to generate summaries of Reddit posts based on human preference data. The dataset comprises human reference summaries and preference data. Following prior work (Stiennon et al., 2020; Rafailov et al., 2023; Ahmadian et al., 2024), we train the DPO baseline on the preference dataset, while conducting online RL (PPO, RLOO, REBEL) on the human reference dataset. We set the maximum context length to 512 and the maximum generation length to 53 to ensure all references in the dataset can be generated. Additional dataset details are in Appendix D.1. Models: We include results with three different model sizes: 1.4B, 2.8B, and 6.9B. Each model is trained with a supervised fine-tuned (SFT) model and/or a reward model (RM) of the same size. For SFT models, we train a Pythia 1.4B (Biderman et al., 2023)3 model for 1 epoch over the dataset with human references as labels, and use the existing fine-tuned 2.8B4 and 6.9B5 models. For reward models, we train a Pythia 1.4B parameter model for 1 epoch over the preference dataset and 2Dataset available at https://github.com/openai/summarize-from-feedback 3HuggingFace Model Card: EleutherAI/pythia-1.4b-deduped 4HuggingFace Model Card: vwxyzjn/EleutherAI_pythia-2.8b-deduped__sft__tldr 5HuggingFace Model Card: vwxyzjn/EleutherAI_pythia-6.9b-deduped__sft__tldr 13 Model size Algorithm Winrate (\u2191) RM Score (\u2191) KL(\ud835\udf0b||\ud835\udf0b\ud835\udc5f\ud835\udc52\ud835\udc53) (\u2193) 1.4B SFT 24.5% -0.52 DPO 43.8% 0.11 30.9 PPO 51.6% 1.73 29.1 REBEL 55.3% 1.87 32.4 2.8B SFT 28.4% -0.40 DPO 53.5% 2.41 66.5 PPO 67.2% 2.37 27.4 REBEL 70.3% 2.44 29.2 Table 1: Results on TL;DR Summarization for SFT, PPO, DPO, and REBEL using three metrics. The RM Score is computed using the reward model with the respective size and the winrate is evaluated by GPT4. The models are trained with low-rank adapters. The best-performing method for each size and metric is highlighted in bold and the second best is underlined. We note that REBEL outperforms all baselines here in terms of the winrate 6.9B SFT DPO REINFORCE PPO RLOO (\ud835\udc58= 2) RLOO (\ud835\udc58= 4) REBEL Winrate (\u2191) 44.6% 68.2% 70.7%\u2217 77.6%\u2021 74.2%\u2217 77.9%\u2217 78.0% *directly obtained from Ahmadian et al. (2024) \u2021directly obtained from Huang et al. (2024) Table 2: Results on TL;DR Summarization on 6.9B models. We perform full-parameter training for all models. The best-performing method is highlighted in bold and the second best is underlined. use the existing reward models with 2.8B6 and 6.9B7 parameters. For both REBEL and baseline methods using 1.4B and 2.8B parameters, we trained the policy and/or the critic using low-rank adapters (LoRA) (Hu et al., 2022) on top of our SFT and/or reward model respectively. For the 6.9B models, we perform full-parameter training. More details about the hyperparameters are described in Appendix D.2. Evaluation: We evaluate each method by its balance between reward model score and KLdivergence with the reference policy, testing the effectiveness of the algorithm in optimizing the regularized RL object. To evaluate the quality of the generation, we compute the winrate (Rafailov et al., 2023) against human references using GPT48 (OpenAI, 2023). The winrate is computed from a randomly sampled subset (10%) of the test set with a total of 600 samples. The prompt used to query GPT4 as well as an example response is shown in Appendix D.3. 14 1.6 1.8 2.0 2.2 2.4 2.6 RM Score ( ) 15 20 25 30 35 KL ( || ref) ( ) 2.0 2.1 2.2 2.3 2.4 2.5 2.6 2.7 RM Score ( ) 0 10 20 30 40 50 60 REBEL PPO Figure 2: Plot of Reward vs KL-Divergence for 2.8B REBEL and PPO. We evaluate the models across the entire test set every 100 steps for 2,000 steps. Left: each point represents the average reward score and KL-divergence for a specific time step; the eclipse represents the confidence interval with 2 standard deviations. Right: we divide the KL distribution at the 2,000-step into 10 bins with equal size and average the corresponding RM scores in each bin. 5.1.1 Quality Analysis Table 1 presents a comparison between REBEL and SFT, PPO, and DPO for 1.4B and 2.8B models trained with LoRA. We calculate the KL-divergence (KL(\ud835\udf0b||\ud835\udf0b\ud835\udc5f\ud835\udc52\ud835\udc53)) using the SFT policy of the corresponding size as the reference for all models. Notably, REBEL outperforms all the baselines on RM score across all model sizes with a slightly larger KL than PPO. In addition, REBEL achieves the highest winrate under GPT4 when evaluated against human references, indicating the benefit of regressing the relative rewards. Example generations of 2.8B REBEL are included in Appendix E. We also perform full-parameter training for 6.9B models and the winrates are shown in Table 2. We can observe that REBEL still outperforms all of the baselines while REBEL, PPO, and RLOO (\ud835\udc58= 4) have comparable performances (but we will soon show in the next section that REBEL is more tractable in computation and memory than PPO and RLOO with \ud835\udc58= 4). An ablation analysis on parameter \ud835\udf02is in Appendix F. The trade-off between the reward model score and KL-divergence is shown in Figure 2. We evaluate the 2.8B REBEL and PPO every 400 gradient updates during training for 8,000 updates. The sample complexity of each update is held constant across both algorithms for fair comparison. For the left plot, each point represents the average divergence and score over the entire test set, and the eclipse represents the confidence interval with 2 standard deviations. As observed previously, PPO exhibits lower divergence, whereas REBEL shows higher divergence but is capable of achieving larger RM scores. Notably, towards the end of the training (going to the right part of the plot), REBEL and PPO have similar KL and RM scores. For the right plot in Figure 2, we analyze a single checkpoint for each algorithm at the end of training. For each algorithm, we group every generation from the test set by its KL distribution into 10 equally sized bins and calculate the average of the corresponding RM 6HuggingFace Model Card: vwxyzjn/EleutherAI_pythia-2.8b-deduped__reward__tldr 7HuggingFace Model Card: vwxyzjn/EleutherAI_pythia-6.9b-deduped__reward__tldr 8Specific API checkpoint used throughout this section: gpt-4-0613 15 DPO REINFORCE RLOO (k = 2) PPO RLOO (k = 4) REBEL Method 0 20 40 60 80 100 120 Time (s) Generation Policy Update DPO REINFORCE RLOO (k = 2) PPO RLOO (k = 4) REBEL Method 0 5 10 15 20 25 30 35 40 Peak Memory Usage (GB) Figure 3: Plot of runtime and memory usage for DPO, REINFORCE, RLOO, PPO, and REBEL. The runtime includes both time for generation and policy update for each batch. Runtime and memory usage are measured on A6000 GPUs. Baselines on the left-hand side of the dashed line have lower winrates. Methods on the right-hand side of the dashed line have similar winrates to REBEL, but REBEL is noticeably more computationally tractable and memory efficient than PPO and RLOO (\ud835\udc58= 4). score for each bin. We can see that REBEL achieves higher RM scores for generations with small divergence while requiring larger divergence for generations with the highest scores. 5.1.2 Runtime & Memory Analysis We analyze the runtime and peak memory usage for 2.8B models using PPO, DPO, RLOO, and REBEL. The runtime includes both the generation time and the time required for policy updates. Both runtime and peak memory usage are measured on A6000 GPUs using the same hyperparameters detailed in Appendix D.2. The methods in the plots are arranged in ascending order based on winrates. To the right of the dashed line, PPO, RLOO (\ud835\udc58= 4), and REBEL have the highest winrates, which are comparable among them. While DPO and REINFORCE require less time and memory, their performance does not match up to REBEL, as discussed in Section 5.1.1. RLOO (\ud835\udc58= 2) has similar runtime and memory usage as REBEL since we set \ud835\udf07= \ud835\udf0b\ud835\udc61, making REBEL also generate twice per prompt. However, RLOO (\ud835\udc58= 2) has worse performance than REBEL. Compared to PPO and RLOO (\ud835\udc58= 4), REBEL demonstrates shorter runtimes and lower peak memory usage. PPO is slow and requires more memory because it needs to update both two networks: policy network and value network. RLOO (\ud835\udc58= 4) requires generating 4 responses per prompt which makes it slow and less memory efficient. Compared to the two baselines PPO and RLOO (\ud835\udc58= 4) that achieve similar winrates as REBEL, we see that REBEL is more computationally tractable. REBEL is also noticeably simpler to implement than PPO since it does not learn value networks or compute the advantage estimation. 16 0 10000 20000 30000 40000 50000 60000 Reward Queries 6.0 6.5 7.0 7.5 8.0 8.5 9.0 LAION Aesthetic Score REBEL PPO Figure 4: Learning curves as a function of reward queries to the LAION aesthetic predictor. We report inter-quartile means (IQM) with 95% confidence intervals (CIs) across three seeds for both REBEL and PPO. The CIs were calculated with percentile bootstrap with stratified sampling over three random seeds. 5.2 Image Generation We also consider the setting of image generation, where, given a consistency model (Song et al., 2023a) and a target reward function, we seek to train the consistency model to output images which garner a higher reward. Specifically, we compare REBEL and PPO under the RLCM framework (Oertell et al., 2024). Baselines: We compare REBEL to a clipped, policy gradient objective (Black et al., 2023; Fan et al., 2024; Oertell et al., 2024) with the aim to optimize aesthetic quality to obtain high reward from the LAION aesthetic score predictor (Schuhmann, 2022). This baseline does not use critics or GAE for advantage estimates. However, the clipping objective is clearly motivated by PPO, and thus, we simply name this baseline as PPO in this section. Dataset: We use 45 common animals as generation prompts similar to Black et al. (2023); Oertell et al. (2024)9. Models: We use the latent consistency model (Luo et al., 2023) distillation of the Dreamshaper v7 model10, a finetune of stable diffusion (Rombach et al., 2021). Evaluation: We evaluate PPO and REBEL on its reward under the LAION aesthetic reward model for an equal number of reward queries/samples generated and an equal number of gradient updates. The aesthetic predictor is trained to predict human-labeled scores of images on a scale of 1 to 10. Images that tend to have the highest reward are artwork. Following the recommendations of Agarwal et al. (2021b), we report the inter-quartile mean with 95% confidence intervals for our reported results across three random seeds. 9Dataset available at https://github.com/Owen-Oertell/rlcm 10Huggingface model card: SimianLuo/LCM_Dreamshaper_v7 17 REBEL PPO 7.29 7.38 7.37 7.27 7.14 6.85 6.17 6.00 6.29 7.06 Figure 5: Generated images using PPO and REBEL during an intermediate checkpoint. We note that at the same number of epochs, REBEL observes a higher reward under the reward model. This can further be seen by the more diverse background of images generated from REBEL with less training time. 5.3 Quality Analysis Figure 4 shows REBEL optimizes the consistency model faster during the beginning of training but eventually achieves similar performance to that of PPO. For our experiments, we tuned both batch size and learning rate for our algorithms, testing batch sizes of [4, 8, 16] per gpu and learning rates [1e \u22124, 3e \u22124, 6e \u22124, 1e \u22123]. Note, the main difference in implementation between PPO and REBEL is the replacement of the clipped PPO objective with our regression objective. Qualitatively, we observe that eventually, both PPO and REBEL start to generate good-looking images but ignore the text prompt entirely. However, from just optimizing the reward function perspective, this behavior is not surprising since the objective does not encourage the maintenance of the consistency between the text prompt and the generated image. To maximize LAION-predicted aesthetic quality, both REBEL and PPO transform a model that produces plain images into one that produces artistic drawings. We found across multiple seeds that REBEL produced lush backgrounds when compared to PPO\u2019s generations. Please see Appendix E.2 for more examples of generated images. 6 Related Work Policy Gradients. Policy gradient (PG) methods (Nemirovsk\u0133 and Yudin, 1983; Williams, 1992; Sutton et al., 1999; Konda and Tsitsiklis, 1999; Kakade, 2001; Schulman et al., 2017) are a prominent class of RL algorithms due to their direct, gradient-based policy optimization, robustness to model mis-specification (Agarwal et al., 2020), and scalability to modern AI applications from fine-tuning LLMs (Stiennon et al., 2022) to optimizing text-to-image generators (Oertell et al., 2024). 18 Broadly speaking, we can taxonomize PG methods into two families. The first family is based on REINFORCE (Williams, 1992) and often includes variance reduction techniques (Kool et al., 2019; Richter et al., 2020; Zhu et al., 2023). While prior work by Ahmadian et al. (2024) has shown that REINFORCE-based approaches can outperform more complex RL algorithms like PPO on LLM fine-tuning tasks like TL;DR, we find that a properly optimized version of PPO still out-performs a REINFORCE baseline. The second family is adaptive PG techniques that precondition the policy gradient (usually with the inverse of the Fisher Information Matrix) to ensure it is covariant to re-parameterizations of the policy, which include NPG (Kakade, 2001; Bagnell and Schneider, 2003) and its practical approximations like TRPO (Schulman et al., 2015a) and PPO (Schulman et al., 2017). Intuitively, the preconditioning ensures that we make small changes in terms of action distributions, rather than in terms of the actual policy parameters, leading to faster and more stable convergence. Unfortunately, computing and then inverting the Fisher Information Matrix is computationally intensive and therefore we often resort to approximations in practice, as done in TRPO. However, these approximations are still difficult to apply to large-scale generative models, necessitating even coarser approximations like PPO. In contrast, REBEL does not need any such approximations to be implemented at scale, giving us a much closer connection between theory and practice. Reward Regression. The heart of REBEL is a novel reduction from RL to iterative squared loss regression. While using regression to fit either the reward (Peters and Schaal, 2007) or the value (Peng et al., 2019) targets which are then used to extract a policy have previously been explored, our method instead takes a page from DPO (Rafailov et al., 2023) to implicitly parameterize the reward regressor in terms of the policy. This collapses the two stage procedure of prior methods into a single regression step. Preference Fine-Tuning (PFT) of Generative Models. RL has attracted renewed interest due to its central role in \u201caligning\u201d language models \u2013 i.e., adapting their distribution of prompt completions towards the set of responses preferred by human raters. One family of techniques for PFT, often referred to as Reinforcement Learning from Human Feedback (RLHF) involves first fitting a reward model (i.e. a classifier) to the human preference data and then using this model to provide reward values to a downstream RL algorithm (often PPO) (Christiano et al., 2017; Ziegler et al., 2020). LLMs fine-tuned by this procedure include GPT-N (OpenAI, 2023), Claude-N (Anthropic, 2024), and Llama-N (Meta, 2024). Similar approaches have proved beneficial for tasks like summarization (Stiennon et al., 2022), question answering (Nakano et al., 2022), text-to-image generation (Lee et al., 2023), and instruction following (Ouyang et al., 2022). Another family of techniques for PFT essentially treats the problem as supervised learning and uses a variety of ranking loss functions. It includes DPO (Rafailov et al., 2023), IPO (Azar et al., 2023), and KTO (Ethayarajh et al., 2023). These techniques are simpler to implement as they remove components like an explicit reward model, value network, and on-policy training from the standard RLHF setup. However, recent work finds their performance to be lesser than that of on-policy methods (Lambert et al., 2024; Tajwar et al., 2024), which agrees with our findings. This is perhaps caused by their lack of interaction during training, leading to the well-known covariate shift/compounding error issue (Ross et al., 2011; Swamy et al., 2021) and the associated lower levels of performance. The third family of PFT techniques combines elements from the previous two: it involves running an offline algorithm iteratively, collecting on-policy preference feedback from either a supervisor model (Rosset et al., 2024; Xiong et al., 2024; Guo et al., 2024) or from a preference model fit on human data 19 (Calandriello et al., 2024). All of these approaches can be considered instantiations of the general SPO reduction proposed by Swamy et al. (2024), which itself can be thought of as a preference-based variant of DAgger (Ross et al., 2011). Recent work by Tajwar et al. (2024) confirms the empirical strength of these techniques. Our approach fits best into this family of techniques \u2013 we also iteratively update our model by solving a sequence of supervised learning problems over on-policy datasets. However, REBEL comes with several key differentiating factors from the prior work. First, we can run REBEL with datasets consisting of a mixture of on-policy and off-policy data with strong guarantees, enabling hybrid training, as previously explored in the RL (Song et al., 2023b; Ball et al., 2023; Zhou et al., 2023) and inverse RL (Ren et al., 2024) literature. Second, unlike all of the aforementioned works that regularize to the initial policy \ud835\udf0b0 during updates, we perform conservative updates by regularizing \ud835\udf0b\ud835\udc61+1 to \ud835\udf0b\ud835\udc61. Thus, for the prior work, it is difficult to prove convergence or monotonic improvement as the current policy can just bounce around a ball centered at \ud835\udf0b0, a well-known issue in the theory of approximate policy iteration (Kakade and Langford, 2002; Munos, 2003). In contrast, by incorporating the prior policy\u2019s probabilities into our regression problem, we are able to prove stronger guarantees for REBEL. 7 Summary and Future Work In summary, we propose REBEL, an RL algorithm that reduces the problem of RL to solving a sequence of relative reward regression problems on iteratively collected datasets. In contrast to policy gradient approaches that require additional networks and heuristics like clipping to ensure optimization stability, REBEL requires that we can drive down training error on a least squares problem. This makes it strikingly simple to implement and scale. In theory, REBEL matches the best guarantees we have for RL algorithms in the agnostic setting, while in practice, REBEL is able to match and sometimes outperform methods that are far more complex to implement or expensive to run across both language modeling and guided image generation tasks. There are several open questions raised by our work. The first is whether using a loss function other than square loss (e.g. log loss or cross-entropy) could lead to better performance in practice (Farebrother et al., 2024) or tighter bounds (e.g. first-order / gap-dependent) in theory (Foster and Krishnamurthy, 2021; Wang et al., 2023a, 2024). The second is whether, in the general (i.e. non-utility-based) preference setting, the coverage condition assumed in our analysis is necessary \u2013 we conjecture it is. Relatedly, it would be interesting to explore whether using preference (rather than reward) models to provide supervision for REBEL replicates the performance improvements reported by Swamy et al. (2024); Munos et al. (2023). Third, while we focus primarily on the bandit setting in the preceding sections, it would be interesting to consider the more general RL setting and explore how offline datasets can be used to improve the efficiency of policy optimization via techniques like resets (Bagnell et al., 2003; Ross and Bagnell, 2014; Swamy et al., 2023; Chang et al., 2023, 2024). 20"
18
+ }
intro_8K/test_introduction_short_2404.16789v1.json ADDED
The diff for this file is too large to render. See raw diff
 
intro_8K/test_introduction_short_2404.16792v1.json ADDED
@@ -0,0 +1,18 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16792v1",
3
+ "title": "Weak-to-Strong Extrapolation Expedites Alignment",
4
+ "abstract": "Although the capabilities of large language models (LLMs) ideally scale up\nwith increasing data and compute, they are inevitably constrained by limited\nresources in reality. Suppose we have a moderately trained LLM (e.g., trained\nto align with human preference) in hand, can we further exploit its potential\nand cheaply acquire a stronger model? In this paper, we propose a simple method\ncalled ExPO to boost LLMs' alignment with human preference. ExPO assumes that a\nmedium-aligned model can be interpolated between a less-aligned (weaker) model,\ne.g., the initial SFT model, and a better-aligned (stronger) one, thereby\ndirectly obtaining this stronger model by extrapolating from the weights of the\nformer two relatively weaker models. On the AlpacaEval 2.0 benchmark, we show\nthat ExPO pushes models trained with less preference data (e.g., 10% or 20%) to\nreach and even surpass the fully-trained one, without any additional training.\nFurthermore, ExPO also significantly improves off-the-shelf DPO/RLHF models and\nexhibits decent scalability across model sizes from 7B to 70B. Our work\ndemonstrates the efficacy of model extrapolation in exploiting LLMs'\ncapabilities, suggesting a promising direction that deserves future\nexploration.",
5
+ "authors": "Chujie Zheng, Ziqi Wang, Heng Ji, Minlie Huang, Nanyun Peng",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "cs.LG",
9
+ "cats": [
10
+ "cs.LG",
11
+ "cs.AI",
12
+ "cs.CL"
13
+ ],
14
+ "label": "Original Paper",
15
+ "paper_cat": "LLM Fairness",
16
+ "gt": "Modern large language models (LLM) typically undergo additional fine-tuning to align with human expectations [29, 27, 28], including both supervised fine-tuning (SFT) on demonstration outputs [33, 40] and alignment training with human preference [5, 31]. Similar to the pre-training phase [15], the alignment of LLMs can also be continuously improved by increasing data and training steps [8, 5, 42]. However, in reality, alignment training is inevitably constrained by available resources and thus cannot grow indefinitely. Suppose we have a moderately-trained LLM in hand, is it possible to further exploit its potential and cheaply acquire a stronger model? We draw inspiration from the literature of model interpolation, also known as model/weight averaging. It aims to integrate different models fine-tuned from the same base model into a unified one by interpolating between their weights [38, 19, 41], relying on the mode connectivity of neural networks [11, 10]. Previous work showed that with the basic uniform interpolation (i.e., using the same interpolation ratio for all the model modules), the obtained new model usually achieves trade-off performance between the original ones [26, 41, 23]. We similarly observe this phenomenon when we interpolate between an SFT model and a model further trained by direct preference optimization (DPO) [31] or reinforcement learning from human feedback (RLHF) [49], as shown in Figure 2. Interpolation Extrapolation Figure 2: Calculating the reward scores (\u00a73.1) on the UltraFeedback [7] development set, we observe that model interpolation usually gives trade-off performance between the two original models (e.g., an SFT model and the model further trained by DPO/RLHF), as similarly observed in previous literature [26, 41, 23]. This observation motivates our proposal of EXPO that cheaply obtains a stronger model from weaker models via model extrapolation. We are intrigued by another question: If we treat the DPO/RLHF model as an intermediate result of interpolation between the SFT model and some stronger model, can we obtain this stronger model by reversely extrapolating from the former two models\u2019 weights? If so, we can actually start with two relatively weaker models from the training process and straightforwardly obtain a stronger one. As indicated by the gray arrow in Figure 2, this could also improve off-the-shelf already-aligned models, such as the many open-sourced LLMs on HuggingFace. Based on the above motivation, we propose a simple method called EXPO (model extrapolation) to boost LLMs\u2019 alignment with human preference (\u00a72). EXPO assumes a medium-aligned model M can be interpolated from a less-aligned (weaker) model Mw (e.g., the SFT model) and a better- aligned (stronger) one Ms. Then, we can directly obtain this stronger model Ms by extrapolating from the weights of the two relatively weaker models M and Mw, without any additional training on top of them. Despite its simplicity, we demonstrate that EXPO is quite effective in improving the alignment of various LLMs, as summarized in Figure 1. Specifically, for standard DPO training, we show that EXPO pushes the models trained with less data (e.g., 10% or 20%) to reach and even surpass the fully-trained one, as evaluated on the AlpacaEval 2.0 benchmark [22, 9] (\u00a73). Furthermore, EXPO also remarkably improves off-the-shelf DPO/RLHF models, by up to 6.8% on AlpacaEval 2.0 (\u00a74), and manifests satisfactory scalability across model sizes from 7B to 70B. Our work demonstrates model extrapolation as a promising method for boosting LLMs\u2019 alignment with human preference and better exploiting the capabilities of LLMs, which we believe deserves more future exploration. 2",
17
+ "main_content": "2.1 Overview Inspired by the observation in Figure 2, we make the following assumption: A model M can be interpolated between a weaker model Mw and a stronger model Ms, which satisfy the relationship in terms of their alignment with human preference: Mw < M < Ms. Specifically, we suppose the medium-aligned model M (parameterized by \u03b8) to be one that has been moderately trained for human preference alignment. We also suppose the less-aligned weaker model Mw (parameterized by \u03b8w) simply to be the SFT model used for initializing M. The above assumption suggests that there exists a better-aligned stronger model Ms (parameterized by \u03b8s) and an interpolation coefficient \u03b3 \u2208[0, 1] such that: \u03b8 = (1 \u2212\u03b3)\u03b8w + \u03b3\u03b8s. (1) Here we consider the simplest form of uniform linear interpolation. With the substitution of \u03b1 = 1/\u03b3 \u22121 \u2208[0, +\u221e), we can obtain the assumed stronger model Ms by extrapolating from the weights of the relatively weaker Mw and M (i.e., weak-to-strong extrapolation). Our proposed EXPO method is formulated as follows: \u03b8s = (1 + \u03b1)\u03b8 \u2212\u03b1\u03b8w = \u03b8 + \u03b1(\u03b8 \u2212\u03b8w) = \u03b8 + \u03b1\u2206\u03b8, (2) where the coefficient \u03b1 serves as the hyperparameter that controls the length of extrapolation. In practice, \u03b1 can be cheaply tuned as a decoding hyperparameter (like the sampling temperature) on a development set with no model training involved. 2.2 Insights on EXPO Mw M e 3: EXPO can \u03b1\u2206\u03b8 \u2206\u03b8 Ms e view M M M Figure 3: EXPO can be viewed as a \u201cglobal gradient update\u201d that moves the model weight along the direction of \u2206\u03b8 in which the model\u2019s alignment with human preference is improved (measured by a reward score). We first use Figure 3 for an intuitive illustration of EXPO. Specifically, EXPO can be viewed as a \u201cglobal gradient update\u201d, based on the global weight change \u2206\u03b8 = \u03b8\u2212\u03b8w from the initial Mw to the final M. The weight change \u2206\u03b8 indicates a direction in the parameter space, in which the model\u2019s alignment with human preference is improved (measured by a reward score). Hence, EXPO essentially aims to amplify the learned reward signal through the extrapolation \u03b1\u2206\u03b8. \u201cglobal gradient update\u201d that moves the model weight along the direction of \u2206\u03b8 in which the model\u2019s alignment with human preference is improved (measured by a reward score). Based on the above illustration, we identify two prerequisites for EXPO. First, the model M should have not yet been trained to its optimality. This prerequisite is generally valid, as evidenced by the most powerful LLMs such as GPT-4 and Claude that are undergoing constant optimization for better alignment. We will show in \u00a74 that even the open-source models that have been extensively trained for human preference alignment still have significant room for further improvement. Second, also more importantly, the weight change \u2206\u03b8 from Mw to M should be of \u201chigh quality\u201d, meaning it should as accurately as possible indicate an extrapolation direction in which the alignment can get improved. In mainstream preference alignment algorithms such as DPO or RLHF, this prerequisite can also be generally established, as M is initialized from Mw (the SFT model) and is essentially trained to maximize the reward signal of human preference, either from preference data or reward models. Nonetheless, the \u201cquality\u201d of \u2206\u03b8 can vary depending on the training configuration of M and the capability of Mw, as we will discuss in \u00a73.3 and 4.2. Combining the two prerequisites, when the model M initialized from its SFT checkpoint M has M M Combining the two prerequisites, when the model M initialized from its SFT checkpoint Mw has undergone moderate alignment training, it can potentially get better aligned by EXPO. We will experimentally verify this in \u00a73 and 4. However, other model combinations for Mw and M, such as a Base and an SFT models or two separately-trained RLHF models, usually cannot guarantee the second prerequisite. We will discuss this more in \u00a74.3 in conjunction with empirical results. 3 2.3 Highlights We underline the following appealing properties of EXPO: \u2022 Simplicity: EXPO is extremely simple and quick to implement. It merely involves performing extrapolation based on the weights of two checkpoints Ms and M, which can be accomplished within just a few lines of code. \u2022 Efficiency: EXPO needs no additional model training on top of Ms and M. The only hyperparameter \u03b1 is also cheap to tune as no training will be involved. Moreover, we believe more efficient means of hyperparameter search can be developed in future work, as evidenced by the advances in adaptive model interpolation [17, 23]. \u2022 Scalability: EXPO is in principle applicable to various LLMs, including those of large sizes or that have been extensively trained for human preference alignment. We will show in \u00a74 that EXPO can improve off-the-shelf already-aligned models of varying sizes and capabilities. 3 Experiments We first demonstrate the effectiveness of EXPO in a controlled setting, i.e., training the model M with less preference data, so we can know in advance that M still has room for further improvement (corresponding to the first prerequisite in \u00a72.2). We show that EXPO endows the models trained using less data (e.g., 10% or 20%) with equivalent or superior performance to the fully-trained one. 3.1 Experimental Setup Models To train models for human preference alignment, we refer to the alignment handbook3 [36], a widely-used code base released by HuggingFace for alignment training of LLMs. We follow the their setup of training the Mistral-based [20] zephyr-7b-sft-full and zephyr-7b-dpo-full models [37]. Specifically, we use the same preference dataset but varying data sizes to train the models. We employ the same mainstream DPO [31] algorithm for alignment training, where the SFT model zephyr-7b-sft-full is used as the reference model in DPO and also used for initializing the policy models. We adopt the same hyperparameter configuration as zephyr-7b-dpo-full (see Appendix B) and train all the models on 4 A100 80GB GPUs. We use zephyr-7b-dpo-full as the fully-trained baseline (i.e., trained with 100% data). Data We use the same preprocessed UltraFeedback4 [7] dataset for DPO training. UltraFeedback is a large-scale preference dataset, containing diverse instructions and response pairs with GPT-4annotated preference labels. It has been popularly used by the open-source community for training aligned LLMs [18, 37, 48]. The preprocessed version provided by HuggingFace contains 61K and 1K preference data in the training and development set, respectively. Each data consists of an instruction and a pair of responses, with one labeled as preferred. Evaluation We evaluate the models on AlpacaEval 2.0 [22], a leading and popular benchmark that assesses LLMs\u2019 alignment with human preference. It contains 805 instructions representative of real user cases. For each instruction, the response of the evaluated model is compared head-to-head with that of the GPT-4 baseline. An evaluator based on GPT-4 (its version is gpt-4-1106-preview during our work) produces the probability of preferring the evaluated model, which provides an affordable and replicable alternative to human preference annotation. Then, the win rate over the GPT-4 baseline is computed as the expected preference probability on all the 805 instructions. Recently, AlpacaEval 2.0 has introduced the new length-controlled (LC) win rate metric [9], which aims to alleviate the length bias of the GPT-4 evaluator (i.e., the prior preference toward longer responses) [30]. According to [9], the LC win rate metric currently has the highest correlation (a Spearman correlation of 0.98) with real-world human evaluation [47], which consolidates the reliability of AlpacaEval 2.0 evaluation. 3https://github.com/huggingface/alignment-handbook 4https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized 4 For EXPO, we decide the optimal \u03b1 based on the performance on the UltraFeedback development set, as evaluated by an open-source reward model5. It ranks among the top on RewardBench6 [21], a leaderboard that assesses the performance of reward models. More importantly, this reward model is not involved in either preference annotation or RLHF training of all the models we experiment with in this work, thus reducing the risk of reward hacking. In our experiments, we also report the average scores produced by the reward model on the 805 AlpacaEval 2.0 instructions as a reference. 3.2 Results Table 1: AlpacaEval 2.0 evaluation results of models trained with less preference data. The DPO models are all initialized from the SFT model zephyr-7b-sft-full. Reward Win Rate LC Win Rate SFT 3.42 4.7% 8.7% DPO (full data) 6.16 14.7% 17.3% + EXPO, no training 6.52 (+0.36) 18.0% (+3.3%) 20.2% (+2.9%) DPO (10% data) 3.97 5.9% 10.4% + EXPO, no training 6.57 (+2.60) 17.9% (+12.0%) 16.3% (+5.9%) DPO (20% data) 4.70 8.6% 12.9% + EXPO, no training 6.95 (+2.25) 22.7% (+14.1%) 21.3% (+8.4%) \u03b81 \u03b82 \u03b1\u2206\u03b81 \u03b1\u2206\u03b82 Figure 4: The \u201cquality\u201d of \u2206\u03b8 and the effectiveness of EXPO can vary depending on the training configurations of M. Here, \u2206\u03b82 indicates a superior extrapolation direction to \u2206\u03b81. In Table 1, we show the performance of the models trained with less (10% and 20%) preference data as well as the results of further applying EXPO on top of them. As expected, training with less preference data results in lower-tier performance, as indicated by their LC win rates on AlpacaEval 2.0. For instance, compared to the 17.3% of using 100% data, using 10% and 20% only achieves the performance of 10.4% and 12.9%, respectively. However, after applying EXPO, using only 10% data can achieve competitive performance to the fully-trained model (16.3% vs. 17.3%), while using 20% of the data already achieves beyond that (21.3%), giving a remarkable advantage of 21.3% 17.3% = 4%. We also observe that the model trained with 20% data obtains a greater improvement from EXPO than that trained with 10% data (8.4% vs. 5.9%). It implies that the former gives a superior extrapolation direction \u2206\u03b8 to the latter, as illustrated in Figure 4. However, the \u201cquality\u201d of \u2206\u03b8 is not simply correlated with the amount of data, as shown in Table 1 where using 20% data slightly outperforms using full data when both applying EXPO (21.3% vs. 20.2%). This is because the increasing size can also amplify the biases in the preference data, which becomes more likely to be learned by the model M as shortcuts. We next analyze the impact of M\u2019s training configuration on \u2206\u03b8 in detail. 3.3 Analysis Comparison between Data Sizes Figure 5 presents the reward scores and output lengths on the UltraFeedback development set versus the extrapolation coefficient \u03b1 values in EXPO. We have two main observations. Firstly, for the models M trained with different data sizes, the optimal \u03b1 of EXPO varies and generally decreases as the data size increases, as indicated by the vertical dashed lines in the left part of Figure 5. This is because the larger data size usually leads to the more thorough convergence of training, even if only to a local optimum, which naturally narrows the viable range of the extrapolation coefficient \u03b1. Secondly, the global optimal reward score (6.08) achieved by EXPO is obtained with a medium size (20%) of training data, rather than the smaller (5% or 10%) or larger (40%) ones. For the former (5% 5https://huggingface.co/weqweasdas/RM-Mistral-7B 6https://huggingface.co/spaces/allenai/reward-bench 5 and 10% data), although ExPO significantly improves the performance (from the reward score 3.13 to 4.79, and 3.59 to 5.82, respectively), the limited data still cannot provide an accurate \u2206\u03b8, thus capping the potential performance after model extrapolation. For the latter (40% data), we conjecture that the model may have learned the spurious features in preference data as shortcuts, especially the length bias7 [30] where the preferred responses are usually longer. As shown in the right part of Figure 5, for the model trained with 40% data, using a very small \u03b1 results in a dramatic increase in the output length. In this case, \u2206\u03b8 becomes more likely to contain the spurious features, and in particular, the length bias can be amplified by model extrapolation. But this does not lead to sustained improvement of performance, as shown in the right part of Figure 5, where the optimal rewards typically correspond to moderate output lengths ranging between 500 and 600. Figure 5: For the models M trained with varying data sizes, we plot the reward scores (left) and output lengths (right) on the UltraFeedback development set versus varying \u03b1 values in EXPO. Comparison with Hyperparameter Tuning As EXPO can be viewed as a \u201cglobal gradient update\u201d (\u00a72.2), we also compare with simply tuning the training hyperparameters. Specifically, we use the same 20% training data but increase the learning rate or training epochs. From the left part of Figure 6, we observe that increasing the two hyperparameters indeed somewhat improves the original reward score. However, it is still inferior to the optimal reward score achieved by EXPO under the default configuration, and also noticeably impairs the gains from model extrapolation (the peak points are lower than that of the default configuration). This is probably because the model is overfitted to the training data and similarly learns the spurious features (like the length bias), thus failing to provide an accurate \u2206\u03b8. The overfitting issue can also be evidenced by the right part of Figure 6. The models trained with larger learning rates or for more epochs become prone to generating longer outputs with a small \u03b1, but do not obtain noticeable reward improvement (the left part of Figure 6), implying that \u2206\u03b8 is very likely to contain the spurious length feature rather than the true human preference. Figure 6: For the models trained using 20% data but with larger learning rates or for more epochs, we plot the reward scores (left) and output lengths (right) on the UltraFeedback development set versus varying \u03b1 values in EXPO. Based on the above empirical analysis, we emphasize the critical role of \u2206\u03b8 in EXPO. Particularly, we show that the \u201cquality\u201d of \u2206\u03b8 requires the appropriate choice of the training configuration for M, including both the preference data and the training hyperparameters. In the subsequent \u00a74.2, we will further discuss the impact of Mw\u2019s capability on the effectiveness of EXPO. 7The average lengths of the preferred and unpreferred responses in the UltraFeedback training set are 319 and 277 tokens, respectively. 6 4 Model Extrapolation Boosts Off-the-Shelf Models We next demonstrate the impressive efficacy of EXPO in improving off-the-shelf already-aligned LLMs from HuggingFace, based on their SFT and DPO/RLHF checkpoints. We particularly underscore the scalability of EXPO across different model sizes and capabilities. 4.1 Experimental Setup When selecting open-source LLMs for experiments, we found that many well-known aligned LLMs, such as LLaMA-2/3 [35, 1], Gemma [34], and Qwen [4], do not release the corresponding SFT checkpoints. Such an opacity hinders the feasibility of experimenting with these more representative models. To facilitate reproducible research, we select the following open-source DPO/RLHF models that (1) have also publicly accessible SFT checkpoints, (2) have disclosed the training data, and (3) are popularly downloaded on HuggingFace or have been evaluated on the AlpacaEval 2.0 leaderboard: \u2022 tulu-2-dpo-7/13/70b [18], a LLaMA-2-based model suite. Since the three-sized models undergo the same SFT and DPO training processes (including both the data and configuration), they can serve as a reasonable testbed for the scalability of EXPO across different model sizes. \u2022 zephyr-7b-alpha/beta and zephyr-7b-dpo-full [37], three Mistral-based models. They are trained with different hyperparameter configurations and on slightly different preference data. \u2022 Starling-LM-7B-alpha/beta [48], two Mistral-based models. They are trained by the RLHF algorithm with different reward models. Similar to \u00a73.1, we select the optimal \u03b1 from [0.1, 0.2, 0.3, 0.4, 0.5] based on the performance on the UltraFeedback development set, as evaluated by the aforementioned reward model. 4.2 Results Table 2: AlpacaEval 2.0 evaluation results of off-the-shelf DPO/RLHF models. The gray models\u2019 scores are copied from the official leaderboard for reference. For the models that have been officially evaluated, we report the higher one between our reproduced score\u2020 and that from the leaderboard\u2021. Reward Win Rate LC Win Rate Llama-2-70b-chat-hf 13.9% 17.4% gpt-3.5-turbo-0613 14.1% 22.7% Gemini Pro 18.2% 24.4% claude-2.1 15.7% 25.3% tulu-2-dpo-7b 5.09 8.5%\u2020 10.2%\u2020 + EXPO 5.42 (+0.33) 11.5% (+3.0%) 11.7% (+1.5%) tulu-2-dpo-13b 5.37 11.2%\u2020 15.5%\u2020 + EXPO 5.89 (+0.52) 15.6% (+4.4%) 17.6% (+2.1%) tulu-2-dpo-70b 5.84 16.0%\u2021 21.2%\u2021 + EXPO 6.12 (+0.28) 23.0% (+7.0%) 25.7% (+4.5%) zephyr-7b-alpha 4.68 8.4%\u2021 10.3%\u2021 + EXPO 4.87 (+0.19) 10.6% (+2.2%) 13.6% (+3.3%) zephyr-7b-beta 5.31 11.0%\u2021 13.2%\u2021 + EXPO 5.40 (+0.09) 11.1% (+0.1%) 14.0% (+0.8%) zephyr-7b-dpo-full 6.16 14.7% 17.3% + EXPO 6.52 (+0.36) 18.0% (+3.3%) 20.2% (+2.9%) Starling-LM-7B-alpha 5.80 15.0%\u2020 18.3%\u2020 + EXPO 5.98 (+0.18) 18.2% (+3.2%) 19.5% (+1.2%) Starling-LM-7B-beta 7.12 26.6% 25.8% + EXPO 7.40 (+0.28) 29.6% (+3.0%) 26.4% (+0.6%) 7 The results in Table 2 demonstrate that EXPO enhances the performance of the already-aligned LLMs, by impressive increases of up to 6.8% LC win rate and 10.5% basic win rate on AlpacaEval 2.0. The improvement is made across LLMs of various capabilities, from the weakest zephyr-7b-alpha and tulu-2-dpo-7b to the strongest Starling-LM-7B-beta and tulu-2-dpo-70b. It suggests that most open-source LLMs have not been aligned with human preference optimally, and EXPO enables the further exploitation of these models\u2019 capabilities. Specifically for the model suite Tulu-2, where the 7B/13B/70B models are trained using the same preference data and configuration, the enhancement by EXPO nicely scales up with the increasing model size. We conjecture that this is because the larger/stronger Mw enables the better learning of the reward signal in the preference data or reward models, leading to both a stronger M and a more accurate \u2206\u03b8, which together result in the greater improvement for M after model extrapolation. Therefore, with the same preference data and training configuration, we optimistically expect the improvement by EXPO can also scale up as the capability of Mw increases. 4.3 Discussion Finally, we discuss the impact of model choices for Mw and M on the effectiveness of EXPO. In previous analyses and experiments, we choose Mw as an SFT model, and M as the model further trained for human preference alignment on top of Mw. Can other types of model combination Mw and M, such as a Base and an SFT model, or two separately-trained RLHF models, be able to produce meaningful extrapolated models? We experiment with the following types of combinations: \u2022 Base + SFT: Mistral-7B-v0.1 [20] as Mw and Mistral-7B-Instruct-v0.1 as M. \u2022 SFT 1 + SFT 2 (trained from different base models): Mistral-7B-Instruct-v0.1 as Mw and Mistral-7B-Instruct-v0.2 as M. \u2022 SFT 1 + SFT 2 (same base): openchat_3.5 [39] as Mw and openchat-3.5-0106 as M. \u2022 RLHF 1 + RLHF 2 (same base): gemma-7b-it [34] as Mw and gemma-1.1-7b-it as M. Note that it is not disclosed whether the two models are initialized from the same SFT model. Model 1 Model 2 0.1 0.2 0.3 0.4 0.5 2 4 6 Reward Base + SFT SFT 1 + SFT 2 (different base) SFT 1 + SFT 2 (same base) RLHF 1 + RLHF 2 (same base) Figure 7: Reward scores of other types of model combinations on the UltraFeedback development set, with \u03b1 varying from 0.1 to 0.5. \u03b81 \u03b82 \u03b1\u2206\u03b8 Figure 8: Extrapolation from two separatelytrained models may not improve alignment, as their weight difference (\u2206\u03b8) usually cannot guarantee a direction along which the reward signal can get further amplified. From the results shown in Figure 7, we find that extrapolating from two SFT models that are trained from different base models can easily lead to the model collapse, probably because they do not meet the requirement of mode connectivity [11, 10], namely, the same or close initialization. For the combination of Base and SFT, extrapolation degrades the performance. One cause is that the training from Base to SFT does not naturally reflect human preference, which is exactly why we need additional preference alignment training. Another cause is that compared to the Base model, the SFT one acquires the instruction-following ability and is also adapted to specified input/output formats [45]. EXPO can amplify both learned features (\u00a72.2), but the latter does not aid in alignment and may instead similarly lead to model collapse. For the two separately-trained SFT or RLHF models, we find that they also cannot benefit from model extrapolation. We speculate that this is because M is not initialized from Mw, so the path in the parameter space from \u03b8w to \u03b8 is not in the direction along which the reward signal can be amplified. As illustrated in Figure 8, even though M (\u03b82) has not yet achieved optimality on its own optimization path, it still cannot be improved in another direction of 8 \u2206\u03b8. Overall, our method EXPO is currently applicable to the combination of an SFT model Mw and a model M further trained on top of the former, which is a very realistic combination choice, as modern LLMs that are trained to align with human preference are almost all initialized from their SFT checkpoints. 5 Related Work LLM Alignment Modern LLMs are typically first pre-trained on massive textual corpora (resulting in a Base model) [6, 35, 1] and then trained to align with human expectations [27, 28, 35]. The alignment process generally contains two stages. In the first stage, an LLM is supervisedly fine-tuned (SFT) on demonstration outputs and learns to follow human instructions [40, 33]. In the second stage, the LLM is trained to learn human preference and assign higher probabilities to human-preferred outputs over the disfavored ones. This is usually implemented in the fashion of reinforcement learning (RL) [29, 5] or contrastive learning [44, 46, 31], as exemplified by the reinforcement learning from human feedback (RLHF) [49] and direct preference optimization (DPO) [31] algorithms, respectively. Similar to the scaling law in the pre-training phase [15], recent work also revealed that the capabilities of aligned models can also be constantly improved by scaling up the amount of alignment data [40, 33, 8] and increasing the training steps or iterations for human preference alignment [5, 42, 14]. However, the data and computation resources available in reality are always finite, which may prevent the full exploitation of models\u2019 capabilities. Our work proposes the EXPO method to boost LLMs\u2019 alignment with human preference in a simple, efficient, and scalable manner. Model Merging and Interpolation Model merging is a recently focal technique for building powerful LLMs based on existing ones [2, 3]. It aims to integrate multiple models fine-tuned from the same base model into a unified one that retains the respective strengths [43, 12]. The simplest form of model merging is model interpolation, also known as model/weight averaging [26, 41, 23], which builds upon the mode connectivity of neural networks [11, 10]. In practice, the uniform interpolation usually results in trade-off performance between the two original models, as observed in previous literature [26, 41, 23] and our experiments in Figure 2. One approach to addressing this issue is to adaptively adjust the interpolation coefficient for different model modules (e.g., different model layers) [17, 23]. Our proposed EXPO method (\u00a72) has a similar idea of blending model weights to improve the model capability, but works under a distinct premise and goal. Rather than integrating multiple strong models into a generalist, our method aims to use two relatively weaker models to produce a stronger model that can even surpass the limits of the fully-trained one (\u00a73 and 4). 6 Conclusion We present EXPO, a simple method to boost LLMs\u2019 alignment with human preference. By extrapolating from the weights of an SFT model Mw and a further trained one M, EXPO enables directly obtaining a better-aligned model without any additional training. We demonstrate the efficacy of EXPO across various LLMs, from those trained with limited preference data to the off-the-shelf ones from HuggingFace, where EXPO manifests decent scalability across varying model sizes and capabilities. Given its simplicity, efficiency, and scalability, we recommend EXPO as a promising approach for better exploiting LLMs\u2019 capabilities, which deserves more future exploration. Limitations & Future Work Our work is limited by the public accessibility to the checkpoints of the SFT and DPO/RLHF models. Thus unfortunately, we are unable to experiment with the more representative LLMs like LLaMA-2/3 [35, 1], Gemma [34], and Qwen [4]. We hope for more open-source efforts in increasing LLMs\u2019 transparency and accessibility. Outside the scope of this study, there are several problems that may attract future research. First, since EXPO is based on the simplest uniform linear extrapolation (Equation 2, using the same \u03b1 for all the model modules), future work may devise methods to adaptively search optimal \u03b1 for different model modules. Second, while we currently rely on an external reward model for searching \u03b1, future work may get rid of such reliance by resorting to the capability of the models M and Mw themselves. Third, although our work provides intuitive illustrations for EXPO and empirically demonstrates its effectiveness, future work may establish theoretical explanations and analyses for its underlying mechanisms. Finally, it would also be interesting to apply EXPO to multi-modal LLMs like LLaVA [24] and other model architectures like Mamba [13]. 9 Acknowledgements We thank the open-source community, including the HuggingFace, AllenAI, and Nexusflow teams, for promoting the transparency of LLMs by releasing model checkpoints and disclosing training details. This work would not be possible without these efforts from the open-source community. We thank Wei Xiong for releasing the reward models and for the valuable discussion."
18
+ }
intro_8K/test_introduction_short_2404.16807v1.json ADDED
@@ -0,0 +1,16 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16807v1",
3
+ "title": "Improving Diversity of Commonsense Generation by Large Language Models via In-Context Learning",
4
+ "abstract": "Generative Commonsense Reasoning (GCR) requires a model to reason about a\nsituation using commonsense knowledge, while generating coherent sentences.\nAlthough the quality of the generated sentences is crucial, the diversity of\nthe generation is equally important because it reflects the model's ability to\nuse a range of commonsense knowledge facts. Large Language Models (LLMs) have\nshown proficiency in enhancing the generation quality across various tasks\nthrough in-context learning (ICL) using given examples without the need for any\nfine-tuning. However, the diversity aspect in LLM outputs has not been\nsystematically studied before. To address this, we propose a simple method that\ndiversifies the LLM generations, while preserving their quality. Experimental\nresults on three benchmark GCR datasets show that our method achieves an ideal\nbalance between the quality and diversity. Moreover, the sentences generated by\nour proposed method can be used as training data to improve diversity in\nexisting commonsense generators.",
5
+ "authors": "Tianhui Zhang, Bei Peng, Danushka Bollegala",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "cs.CL",
9
+ "cats": [
10
+ "cs.CL"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "LLM AND Reasoning",
14
+ "gt": "Commonsense reasoning is the ability to make logi- cal deductions about concepts encountered in daily life, and is considered as a critical property of intel- ligent agents (Davis and Marcus, 2015). Concepts are mental representations of classes and are ex- pressed using words in a language (Liu et al., 2023). Given the inputs, the GCR task requires a model to generate a high quality sentence that is gram- matical and adheres to commonsense, evaluated by its similarity to a set of human-written reference sentences covering the same set of concepts (Lin et al., 2020). Often there exists multiple relationships between a given set of concepts, leading to alternative rea- soning paths that take diverse view points. For ex- ample, given the four concepts dog, frisbee, throw and catch, different sentences can be generated as Dog; Catch; Frisbee; Throw A dog leaps to catch a thrown frisbee. The dog catches the frisbee when the boy throws it. A man throws away his dog's favourite frisbee expecting him to catch it in the air. A\u00a0dog catches\u00a0a\u00a0frisbee thrown\u00a0to it. A dog catches a frisbee thrown by its owner. A dog jumps in the air to catch a frisbee thrown by its owner. Figure 1: An example of diverse generated sentences sets in CommonGen (Lin et al., 2020) dataset. The gen- eration shown at the bottom (in green ) are considered by human annotators to be more diverse than those at the top (in red ). shown in Figure 1. Although all sentences shown in Figure 1 are grammatical, the bottom set ex- presses diverse view points (e.g. from the dog\u2019s as well as the man\u2019s) compared to the set at the top. Apart from the generation quality, diversity is also an important factor in text generation because the low-diversity texts tend to be dull, repetitive or biased towards a particular view point (Tevet and Berant, 2021). Diversity is an important considera- tion in many Natural Language Generation (NLG) applications, such as story generation (Li et al., 2018), paraphrase generation (Gupta et al., 2018), and GCR (Yu et al., 2022; Liu et al., 2023). In GCR tasks, the input text often provides insuffi- cient information to support diverse reasoning and generate multiple plausible outputs. Therefore, the diversity present in GCR task enables the explo- ration of different perspectives or all possible out- comes for a real-world situation. Existing methods promote diversity through special decoding strate- gies, such as nucleus sampling (Holtzman et al., 2019), or encoding interventions such as random noise injection (Gupta et al., 2018) or Mixture of Experts (MoE) approaches (Shen et al., 2019). We propose In-Context Diversification (ICD), a computationally-efficient and accurate method to improve the diversity in GCR, where the sentences are generated from a pre-trained LLM, and strikes arXiv:2404.16807v1 [cs.CL] 25 Apr 2024 a fine-balance between the output diversity and quality. ICD uses an ICL approach to increase the diversity of the sentences generated by an LLM, while maintaining the quality of the generation. ICD is a two-step process where it first lets an LLM to freely generate high-quality sentences that are grammatical, commonsense bearing and cover all the given input concepts. Next, ICD uses a user- specified diversity metric to evaluate the diversity of the generated sentences. If the diversity is low, ICD provides feedback to the LLM, instructing it to generate more diverse sentences considering the already generated sentences. Given that ICD is using LLMs to generate di- verse sentences via ICL and without updating the parameters of the LLMs, an interesting and open question is whether an LLM can accurately judge the diversity of a given set of sentences, covering a common set of concepts. To answer this ques- tion, we conduct an experiment where we instruct GPT3.5-turbo to judge the diversity of the set of input sentences according to a five-scale grading system, and convert the predicted grades into bi- nary judgements (i.e. diverse vs. non-diverse). We compare the LLM-assigned grades against those by a group of human annotators, and find a moderate- level (Cohen\u2019s Kappa of 0.409) agreement between human vs. LLM judgements, demonstrating that LLMs can indeed be instructed to obtain diversity judgements for GCR tasks. We evaluate ICD on three GCR tasks/datasets: CommonGen (Lin et al., 2020), ComVE (Wang et al., 2020), and DimonGen (Liu et al., 2023). We find that our proposed ICD balances diversity and quality appropriately, improving their harmonic mean by at least 6% over that of a default base- line. Moreover, the sentences generated by ICD can be used as training data to improve diversity in a Seq2Seq model (Sutskever et al., 2014; Lewis et al., 2020), producing results that are comparable to the models that are trained on knowledge graphs or human-written text corpora (Liu et al., 2021; Fan et al., 2020; Li et al., 2021).",
15
+ "main_content": "Diverse Text Generation. A variety of methods have been proposed to enhance the diversity of NLG. Sampling-based decoding is an effective method to increase the generation diversity. Holtzman et al. (2019) proposed nucleus sampling to generate diverse content at the generation stage. Truncated sampling (Fan et al., 2018) prunes and then samples the tokens based on the probability distribution. Furthermore, Shen et al. (2019) proposed an MoE approach to diversify translation outputs. Moreover, incorporating external corpora in the MoE further promotes diversity, such as by using a knowledge graph (Yu et al., 2022; Hwang et al., 2023) or by a collection of retrieved sentences (Liu et al., 2023). Although LLMs have reported superior performance in numerous Natural Language Processing (NLP) tasks (Touvron et al., 2023; OpenAI, 2023b,a), to the best of our knowledge, diversifying their generations in commonsense reasoning with ICL has not been explored in prior work on GCR. In-Context Learning. Recent studies demonstrate that LLMs can exhibit robust few-shot performance on a variety of downstream tasks through ICL (Brown et al., 2020). ICL is a technique for instructing an LLM using one or more examples for a particular text generation task. The generated text is conditioned on both the input as well as the instruction prompt. Wang et al. (2023) show that in ICL, label words in the demonstration examples function as anchors, which aggregate semantic information to their word representations in the shallow (closer to the input) layers, while providing that information to the final predictions performed by the deeper (closer to the output) layers. In contrast to fine-tuning-based methods, ICL is computationally lightweight because it does not update the parameters of the LLM. Therefore, ICL is an attractive method when integrating task-specific knowledge to an LLM by simply changing the prompt and the few-shot examples (Dong et al., 2022). 3 In-context Diversification We consider the problem of generating a set of diverse sentences that express commonsense reasoning, either by covering a set of given concepts (in CommonGen and DimonGen) or by providing an explanation for a given counterfactual statement (in ComVE). Formally, given a sequence (a set of concepts or a statement) X = {x1, . . . , xm}, the goal of GCR is to generate a set of grammatically correct and commonsense bearing sentences Y = {y1, . . . , yn}, where yi is the i-th output generated by the model with probability p(yi|X). Moreover, we require that the generated sentences {y1, . . . , yn} to be lexically as well as semantically diverse. Default Examples: Given several key words: [SRC], generate one coherent sentences using background commonsense knowledge: [TGT] Test instruction: Given several key words: [INPUT], generate one coherent sentence using background commonsense knowledge: [OUTPUT] Diversified Examples: Given several key words: [SRC], generate one coherent sentence using background commonsense knowledge: [TGT] Test instruction: Step1: Given several key words: [INPUT], generate [N] different and coherent sentences using background commonsense knowledge: [PRV] (If the diversity of [PRV] is low) Step2: You have generated the following sentences: [PRV], try to provide other reasonable sentences: [OUTPUT] (a) (b) Figure 2: An example of default and diversified prompts is shown for an instance selected from the CommonGen dataset. Here, the default prompt shown in Figure 2a is taken from Li et al. (2023). Few-shot examples are included in each prompt where [SRC] denotes the set of input concepts and [TGT] the corresponding sentences in CommonGen. For a given set of [INPUT] concepts, the LLM is then required to generate sentences at the slot [OUTPUT]. As shown in Figure 2b, ICD uses the diversified prompt, which operates in two steps. Step 1 generates a set of [N] sentences, [PRV]. We check for the diversity among the sentences in [PRV], and if it is low, we use the prompt in Step 2 to generate the final set of sentences. 3.1 Sentence Generation To explain our proposed ICD, let us consider GCR on CommonGen, where we must generate a set of sentences Y, such that each sentence contains all of the input concepts X as shown in Figure 2a. Given an LLM, we can design a prompt that contains a task-specific instruction and one or more examples containing the input concepts (denoted by [SRC] in Figure 2) and the corresponding human-written sentences containing all given input concepts (denoted by [TGT]) to instruct the LLM to generate output sentences Y (denoted by [OUTPUT]) for a given set of input concepts X (denoted by [INPUT]). We refer to a prompt of this nature as a default prompt, and the corresponding set of generated sentences by Sdef. Note that the default prompt does not necessarily guarantee that the generated set of sentences will be diverse and an LLM could return sentences that are highly similar to each other. To address this issue, we propose a diversified prompt as shown in Figure 2b. Specifically, the diversified prompt operates in two steps. In Step 1, we require that the LLM generate N sentences that are different, in addition to being coherent and commonsense bearing. Next, we use a suitable diversity metric to evaluate the level of diversity among the generated set of sentences. If the diversity of the generated senAlgorithm 1 In-Context Diversification (ICD) Input: Generated sets of sentences Sdef and Sdiv, respectively from default and diversified prompts, the number of desired output sentences N, and a diversity metric f. Output: Output set of sentences S\u2217 S\u2217\u2190\u2205 \u03b1 \u21900 for S \u2208(Sdef \u222aSdiv) do if (|S| == N) \u2227(f(S) \u2265\u03b1) then \u03b1 \u2190f(S) S\u2217\u2190S end if end for return S\u2217 tences is low, in Step 2, we show those sentences to the LLM and instruct it to generate sentences that are different to those. As the criteria for triggering Step 2, we check whether the exact same sentence has been generated multiple times by the LLM during Step 1. The final set of generated sentences is denoted by Sdiv. 3.2 Diversity-based Sampling Because of the limited availability of humanwritten reference sentences for evaluating GCR models, there exists a trade-off between quality vs. diversity when generating sentences for GCR tasks.1 Simply maximising for diversity often leads to generations that do not cover the input concepts in a natural way. For example, a randomly selected set of sentences would be highly diverse, yet unlikely to capture the input concept sets. On the other hand, if we force an LLM to generate sentences that contain all of the input concepts, it might find difficult to generate semantically diverse sentences and resort to trivial lexical or syntactic diversity tricks such as morphological inflections or word-order permutations. To address this issue, we propose a diversitybased sampling method shown in Algorithm 1. Consider that the default prompt provides a set Sdef of sentences that have not been optimised for diversity (likely to have a higher quality), while on the other hand the diversified prompt provides a set Sdiv of sentences that are further refined for diversity (likely to have a higher diversity). We wish to find a set of sentences that simultaneously satisfies the following criteria: (a) must contain exactly N sentences, as specified by the user, and (b) must have a high diversity score, measured using a user-specified diversity metric f(\u2208R\u22650). We formalise this as a subset search problem, where 1This trade-off is further empirically verified in \u00a7 5.1. we compute the union Sdef \u222aSdiv and search for the subset S\u2217that jointly satisfies those criteria following the procedure detailed in Algorithm 1. Although the total number of subsets of size N is \u0000|Sdef\u222aSdiv| N \u0001 , it is sufficiently small for the values of N in our GCR tasks, which makes this subset search fast in practice. 4 Experimental Settings 4.1 Tasks and Datasets We evaluate ICD on three GCR tasks as follows. Constrained Commonsense Reasoning: In CommonGen (Lin et al., 2020) benchmark, a model is required to generate a sentence covering a given set of concepts such that background commonsense knowledge associated with the input concepts is reflected. This dataset contains 35K distinct concept sets (train = 32651, dev = 993, and test = 1497) with corresponding human written sentences (train = 67389, dev = 4018, and test = 6042). Each instance contains on average 3-5 input concepts. Commonsense Explanation Reasoning: ComVE (Wang et al., 2020) is part of the SemEval 2020 commonsense validation task, where for a given counterfactual statement, a model is required to generate an explanation providing a reason describing why the statement is nonsensical. This dataset contains 10K (train = 8532, dev = 476, and test = 992) examples, where each example contains three reference outputs. Diversified GCR: DimonGen (Liu et al., 2023) involves generating diverse sentences that describe the relationships between two given concepts. It is a challenging task because it requires generating reasonable scenarios for a given pair of concepts without any context. This dataset contains 17109 instances (train = 15263, dev = 665, test = 1181), where each instance has 3-5 references. 4.2 Evaluation Metrics We measure both the quality and diversity of the sentences generated by models using the metrics described next. 4.2.1 Quality Metrics We compare a generated sentence by a model against a set of human-written references to evaluate the quality of the generation using several metrics: BLEU (Papineni et al., 2002) measures n-gram precision against human reference texts, SPICE (Anderson et al., 2016) measures the semantic propositional overlap between two sentences, and BERTScore (Zhang et al., 2020) uses contextualised word embeddings to measure the semantic similarity between tokens in two sentences. In alignment with prior works (Yu et al., 2022; Liu et al., 2023; Hwang et al., 2023), when multiple candidate sentences are generated for a test case, we select the highest-scoring candidate for evaluating quality. 4.2.2 Diversity Metrics Pairwise Diversity: We use self-BLEU (Zhu et al., 2018) to measure n-gram overlap among sentences within each generated set. The metric computes the average sentence-level similarity between all pairwise combinations of the generations in the generation set. Note that unlike BLEU, self-BLEU does not require human generated references for measuring diversity. We use self-BLEU3/4 (corresponding to n = 3 and 4) in our experiment. Lower self-BLEU scores indicate higher lexical diversity. Corpus Diversity: To measure the variety within our generated text corpus, we employ Distinctk (Li et al., 2016), which calculates the ratio of unique k-grams to the total number of k-grams. This metric is particularly useful for adjusting the bias of LLMs toward generating longer sequences, ensuring that diversity is not artificially inflated by the sentence length. Additionally, we use Entropyk to evaluate the distributional uniformity of kgram occurrences, considering word frequencies for a more nuanced view of diversity. Higher Distinct-k and Entropy-k scores indicate higher diversity. Semantic Diversity: All previously described diversity metrics are limited to evaluating lexical diversity. To measure diversity at a semantic level, we propose self-cosSim, which is the average pairwise cosine similarity between generated sentences, computed using sentence embeddings obtained from SimCSE (Gao et al., 2021). Likewise, we define the self-BERTScore as a diversity metric that averages the BERTScores for all generated sentence pairs. Lower self-cosSim and self-BERTScore values indicate higher semantic diversity. 4.2.3 Combined Metrics We would prefer GCR models that have both high quality and high diversity. To incoporate both aspects into a single metric, we compute the Harmonic Mean between (a) the self-BLEU-4 as the diversity metric, and (b) BERTScore as the quality metric. As discussed in \u00a7 3.2, there exists a tradeoff between quality and diversity in GCR. Therefore, the harmonic mean is suitable when averaging quality and diversity scores.2 Alihosseini et al. (2019) proposed Fr\u00b4 echet BERT Distance (FBD) as a joint metric for simultaneously measuring both the quality and diversity of NLG. FBD is inspired by the Fr\u00b4 echet Inception Distance (FID), proposed by Heusel et al. (2017), for measuring the quality of image generation. Specifically, FBD computes the pooler output3 of a sentence as its embedding (Devlin et al., 2019) and represents a set of sentences using the mean vector and the covariance matrix computed from their sentence embeddings. Next, Wasserstein-2 distance is computed between the set of reference sentences and the set of generated sentences, which captures both the distance between the means as well as variance in the distributions. Lower FBD scores indicate high combined performance. 4.3 Implementation Details We use GPT3.5-turbo and Vicuna-13b-v1.54 as LLMs with temperature set to 1.0 in our experiments. By using two LLMs with significantly differing number of parameters and by including, Vicuna, an open source LLM, we plan to improve the reliability and reproducibility of our results. Max response length is set to 25 tokens. The inference times for CommonGen, ComVE and DimonGen datasets are respectively 5-6, 2-3 and 1-2 hours. The cost of running ICD with GPT3.5-turbo are ca. $6, $4 and $4 respectively for CommonGen, ComVE and DimonGen datasets. On the other hand, the costs of fine-tuning on GPT3.5-turbo are much higher at $58.8 for CommonGen, $24.7 for ComVE and $32.0 for DimonGen. Moreover, fine-tuning with LoRA (Hu et al., 2022) with rank of 8 and alpha of 16 on Vicuna takes ca. 34 hours. We use BART-large5 for MoE-based models. We use the GPT3.5-turbo to generate sentences for the CommonGen train/dev/test sets using the de2We use self-BLEU-4 for diversity and BERTScore for quality in Harmonic Mean due to their reliability shown in preliminary evaluations. Other metric pairs are in Appendix D. 3The last layer\u2019s hidden-state of the first token of the sequence is further processed by a Linear layer and a Tanh activation function. 4https://huggingface.co/lmsys/vicuna-13b-v1.5 5https://huggingface.co/facebook/bart-large fault, diversified and for ICD. For model training, we use the Adam optimiser (Kingma and Ba, 2015) with a batch size of 64, a learning rate of 3e-5 and a beam size of 5. All of the MoE-based models are trained for 20 epochs and required to generate k = 3 sentences. All experiments, except with GPT3.5-turbo, are conducted on a single RTX A6000 GPU. 5 Results and Discussion 5.1 Commonsense Generation We compare the commonsense generations made by ICD against those using the default and diversified prompts. For this purpose, we use GPT3.5-turbo as the LLM and use the same 10 few-shot examples in all prompts for ICL. Further templates of the default and diversified prompts used for each task are given in Appendix E. To assess the impact of ICL, we compare against finetune method, wherein GPT3.5-turbo is fine-tuned on the entire training set in each dataset. Specifically, we use multiple human-written sentences, available in the training data for the three datasets to separately fine-tune the models for each task. It is noteworthy that the fine-tune method uses a substantially larger dataset for training (e.g., 67,389 sentences from CommonGen) compared to the 10 examples used by the ICL-based approaches. We use self-BLEU-3 as the diversity metric f in Algorithm 1 for ICD in this evaluation. The outcomes, presented in Table 1, highlight the diversity and quality metrics of these methods across the CommonGen, ConVE, and DimonGen datasets. Additionally, a human baseline is introduced to evaluate the diversity of sentences written by humans, where we pair-wise compare the human-written sentences for each input in the instances in the benchmark datasets using diversity metrics. Note that however, the human baseline must not be considered as an upper-bound for diversity because there are only a smaller number of human-written sentences per instance in the benchmark datasets. From Table 1, we see that fine-tune generates sentences that have high semantic and corpus diversity, and outperforms the human baseline. However, recall that fine-tune requires a much larger training set and is computationally costly compared to all ICL-based methods. Moreover, we see that ICD can strike a good balance between quality and diversity in the sentences generated. Among the ICL-based methods, ICD achieves the best diSemantic Diversity \u21d3 Corpus Diversity \u21d1 Pairwise Diversity \u21d3 Quality \u21d1 Combined self-cosSim self-BERTScore Entropy-4 Distinct-4 self-BLEU-3 self-BLEU-4 BLEU-3 BLEU-4 SPICE BERTScore Harmonic \u21d1 FBD \u21d3 CommonGen Human 67.3 60.6 10.9 91.0 25.4 17.6 Fine-tune 64.7 55.9 11.4 91.1 26.9 17.9 41.2 32.1 30.3 64.2 72.1 51.9 default 93.3 88.7 10.2 53.7 77.2 72.4 50.8 40.9 30.1 70.4 39.6 60.2 diversified 85.2 69.8 11.0 83.7 44.4 34.9 44.3 34.6 28.5 65.0 65.4 53.9 ICD 83.5 66.2 11.0 88.5 31.0 21.0 47.4 37.7 29.1 67.4 72.7 51.8 ComVE Human 62.7 47.0 9.6 96.1 12.4 8.1 Fine-tune 59.8 42.6 9.8 95.2 13.4 10.3 27.4 19.4 33.1 53.7 67.2 47.6 default 83.9 73.5 9.6 74.3 50.8 45.2 27.5 19.7 36.2 55.1 54.9 50.9 diversified 76.1 56.5 9.7 88.0 23.2 16.5 30.5 21.8 35.8 56.5 67.4 47.9 ICD 72.5 51.1 9.8 90.1 13.7 8.7 29.0 20.8 36.1 55.5 69.0 48.7 DimonGen Human 56.8 47.0 10.1 85.6 14.7 8.7 Fine-tune 43.4 33 10.4 98.7 6.8 3.4 17.7 10.7 15.5 42 58.5 51.6 default 75.7 71.3 10 83.2 43.4 37.3 15.9 9.5 16.4 44.5 52.1 68.2 diversified 57.1 46.9 10.5 95.9 11.2 6.5 11.4 6.4 15.2 39.9 55.9 69.0 ICD 56.7 45.7 10.4 96.3 6.5 3.5 13.2 7.6 15.4 41.7 58.2 68.0 Table 1: Diversity and quality scores on CommonGen, ComVE and DimonGen with GPT3.5-turbo LLM. Best results on each task for each metric are shown in italics, while the best performing ICL results are shown in bold. versity scores on all diversity metrics in all three datasets. It also exhibits higher diversity compared against the human-written references. Moreover, ICD outperforms default and diversified according to the Combined metrics. ICD also achieves a Harmonic Mean comparable to that of the fine-tune baseline. Although default reports the best quality scores, it has low diversity, and is consistently outperformed by diversified and ICD on diversity metrics. On the other hand, diversified generally scores lower on the quality metrics. Compared to default and diversified, ICD enhances generation diversity while maintaining a satisfactory level of quality. ICD is also more stable to the sampling method such as temperature than fine-tune, as shown in Appendix B. Note that fine-tune is not an ICL setting (the focus of this paper) and is included only as a baseline to demonstrate the level of performance that can be achieved by finetuning on a much larger dataset. Despite this, ICD outperforms fine-tune on the Pairwise Diversity in all three datasets, and Combined metrics in the CommonGen dataset. As an open source alternative LLM to GPT3.5-turbo, we repeat this evaluation with Vicuna-13b (Zheng et al., 2023) in Table 2. The same 10 few-shot examples as used with GPT3.5-turbo are used in this experiment for the ICL-based methods. Full table on three datasets are shown in Appendix C. Table 2 reconfirms ICD\u2019s ability to balance both quality and diversity according to the Combined metrics (i.e. Harmonic Mean and FBD) on this dataset. Interestingly, we see that Method SCS \u21d3 SBS \u21d3 E-4\u21d1 D-4\u21d1 SB-3\u21d3 BLEU-3\u21d1 SPICE\u21d1 HM \u21d1 FBD \u21d3 Fine-tune 59.6 49.9 11.4 93.3 22.8 35.8 27.6 69.9 52.4 Default 82.2 73.8 10.9 74.9 52.9 44.6 29.1 60.2 56.2 Diversified 59.1 53.3 11.3 91.3 23.6 32.6 24.3 68.6 53.2 ICD 59.3 49.8 11.3 93.7 11.3 34.2 25.5 73.4 51.0 Table 2: GCR on CommonGen using Vicuna-13b. ICD uses self-BLEU-3. Here, SCS: self-CosSim, SBS: selfBERTScore, E-4: Entropy-4, D-4: Distinct-4, SB-3: self-BLEU3, HM: Harmonic Mean. Best results for each metric are shown in italics, while the best performing ICL results are shown in bold. Method SCS \u21d3 SBS \u21d3 E-4\u21d1 D-4\u21d1 SB-3\u21d3 BLEU-3\u21d1 SPICE\u21d1 HM \u21d1 FBD \u21d3 self-BLEU-3 83.5 66.2 11.0 88.5 31.0 47.4 29.1 72.7 51.8 self-CosSim 81.0 70.1 10.9 82.5 44.5 47.6 29.3 65.7 51.8 self-BERTScore 83.1 62.8 11.0 87.0 36.3 46.5 28.9 69.6 51.8 Table 3: Comparing the effect of using different diversity metrics, f, in Algorithm 1 for ICD. We use GPT3.5-turbo as the LLM and the best results on CommonGen dataset are in bold. Here, SCS: self-CosSim, SBS: self-BERTScore, E-4: Entropy-4, D-4: Distinct-4, SB-3: self-BLEU3, HM: Harmonic Mean. methods that use Vicuna-13b to be more diverse compared to those that use GPT3.5-turbo, while the latter showing better generation quality. In Table 3, we use different diversity metrics as f in Algorithm 1 to study the effect on text generation of ICD. We see that self-BLUE-3 and self-CosSim perform similarly across the quality metrics. SelfBERTScore shows a slightly lower quality (BLEU3 and SPICE), which indicates some level of overfitting to the diversity metric being used. According to the combined metrics, any of those diversity metrics can be used with ICD to obtain comparable performance. Semantic Diversity \u21d3 Corpus Diversity \u21d1 Pairwise Diversity \u21d3 Quality \u21d1 Combined self-cosSim self-BERTScore Entropy-4 Distinct-4 self-BLEU-3 self-BLEU-4 BLEU-3 BLEU-4 SPICE BERTScore Harmonic Mean \u21d1 FBD \u21d3 KG-BART 42.1 30.9 32.7 EKI-BART 46.0 36.1 33.4 KFCNet-w/o FC 50.2 42.0 35.9 KFCNet 57.3 51.5 39.1 MoE 89.3 81.9 9.7 61.6 63.1 56.6 49.0 38.5 33.5 70.6 53.8 61.7 MoKGE 88.7 80.6 9.9 65.2 60.4 53.6 48.8 38.4 33.1 70.3 55.9 60.8 default+MoE 90.8 84.2 9.7 61.2 65.6 58.8 51.8 41.3 34.7 73.1 52.7 61.9 diversified+MoE 85.3 79.9 9.8 63.2 58.3 52.6 51.4 41.4 34.6 71.6 57.0 54.5 ICD+MoE 90.4 82.3 9.8 64.9 58.4 50.5 53.2 43.1 35.4 73.8 59.3 62.5 Table 4: Downstream evaluation of the LLM-generated sentences. Top block methods use human-generated resources for training, while the ones in the bottom block are trained on LLM-generated sentences. MoE approaches are shown in the middle block and bottom block. BART-large is used as the generator for MoE-based methods. Best results for each metric are shown in bold, while the best performing MoE for quality is shown in underline. Figure 3: Human vs. GPT3.5 diversity ratings for randomly sampled sets of sentences generated by ICD. Cohen\u2019s \u03ba = 0.409 indicates a moderate agreement. 5.2 Downstream Evaluation The experiments presented in \u00a7 5.1 show the ability of our proposed ICD to generate diverse and commonsense bearing sentences. Therefore, an important question with practical implications is whether we can use the sentences generated by ICD as additional training data to improve both diversity and quality of previously proposed models on the GCR task, which could be seen as a downstream (extrinsic) evaluation. For this purpose we select the MoE (Shen et al., 2019), which diversifies the generation by selecting outputs from a mixture of experts. Each expert is assigned a randomly generated sequence of tokens, which is used as a prefix for all inputs sent to that expert. For each input, an expert is selected according to the value of a latent variable, which is trained using the hard-EM algorithm. We follow Liu et al. (2023) and train three experts that retrieve sentences from the collection of sentences generated by ICD for concept sets in the CommonGen train split (210846 sentences in total). We use BART-large (Lewis et al., 2020) as the base model, which has shown to produce high quality commonsense generations (Zhang et al., 2023) as the generator for all experts (see Appendix A for further details). We denote this method by ICD+MoE. As baselines for comparisons, we repeat the above process using the sentences generated by default and diversified, which we denote respectively as default+MoE and diversified+MoE in Table 4. Moreover, we compare the performance against two previously proposed MoE models: MoE (Shen et al., 2019) and MoKGE (Yu et al., 2022). MoE relies solely on the base model, whereas MoKGE requires each expert to use different sets of concepts from the ConceptNet (Speer et al., 2017) knowledge graph (KG). Because Yu et al. (2022) do not evaluate their MoKGE method on CommonGen, we ran their original implementation6 on CommonGen and report results in Table 4. All previously proposed GCR methods are exclusively trained using human-created data (e.g. sentences written by human and/or manually compiled KGs such as ConceptNet), whereas the methods described thus far in this section are trained on sentences generated by an LLM (GPT3.5-turbo). Therefore, to evaluate the feasibility of using LLMgenerated sentences for training GCR models, we include the following previously proposed GCR models that are trained using a combination of corpora and KGs: KG-BART (Liu et al., 2021),EKIBART (Fan et al., 2020) and KFCNet (Li et al., 2021). For KFCNet, we present its two results \u2013 KFCNet w/o FC, which relies only on sentences including the input concepts, without further processing, and KFCNet, which additionally ranks candidates and adds contrastive modules for the encoder and the decoder (Li et al., 2021). However, note that those methods do not consider diversifica6https://github.com/DM2-ND/MoKGE Human: \u2022 The group will use the tool to make a piece of art out of metal. \u2022 I use a tool to cut a piece of metal out of the car. \u2022 The man used a piece of metal and the tools. Default: \u2022 A piece of metal is being used as a tool. \u2022 A piece of metal was used as a tool in the construction project. \u2022 A metal tool is being used to shape a piece of metal. ICD: \u2022 The piece of metal is essential for any handyman's toolkit. \u2022 The metal tool is a useful piece for working with metal. \u2022 With the right tools, any piece of metal can be transformed into something useful. CommonGen: Input: (piece, use, tool, metal) Human: \u2022 No one can digest electronic goods. \u2022 Electronic products must not be eaten. \u2022 You would die if you ate electronics. Default: \u2022 Electronic goods are not edible and are not meant for consumption. \u2022 Electronic goods are not edible and cannot be consumed as food. \u2022 Electronic goods are not edible and are meant for functional use rather than consumption. ICD: \u2022 Eating electronic goods can damage the digestive system and cause serious health issues. \u2022 It is not healthy or safe to eat electronic goods as they are made up of toxic materials. \u2022 Electronic goods are not edible and cannot be consumed as food. ComVE: Input: My friend like to eat electronic goods. Figure 4: Sentences generated by default prompt and ICD against those by humans on CommonGen and ComVE test instances. ICD generates more diverse and high quality sentences than default. tion, and do not report performance using diversity metrics. Therefore, we report only their published results for generation quality in Table 4. From Table 4 we see that diversified+MoE always outperforms the original MoE in all diversity metrics, which shows that sentences generated from LLMs can be used to diversify MoE-based GCR. ICD+MoE closely matches the performance of diversified+MoE on diversity metrics, while consistently outperforming both diversified+MoE and default+MoE on quality metrics. In particular, the quality metrics reported by ICD+MoE (underlined in Table 4) are competitive against those obtained by the models that are trained on human-compiled resources (in the top block), except against KFCNet. This finding hints at potential improvement gains for GCR by using hybrid training resources that combine both human-compiled and LLM-generated data, which we highlight as an interesting future research direction. 5.3 Diversity-Awareness of LLMs Given that we use LLMs to produce diverse generations via ICL, it remains an open question whether an LLM would agree with humans on the diversity of a given set of sentences. To answer this question, we use randomly selected 210 sentences (35 sets, each containing 6 sentences) generated by ICD (using self-BLEU-3 as the diversity metric) for the input concept sets in the CommonGen dataset. We instruct GPT3.5-turbo to rate the diversity of a given set of sentences according to five diversity ratings 1-5 with 1 being highly similar, while 5 being highly diverse.7 We provide the same instruction as the annotation guidelines for eight 7Detailed prompt templates are shown in Appendix E. human-annotators, who are graduate students in NLP. To reduce the subjective variability in human judgements, we average and then normalise the ratings following the Likert scale. In Figure 3, we plot the GPT-assigned ratings against those by humans. We further split the ratings into high vs. low diversity ratings depending on whether the rating is greater or lesser than 3. The majority of the data points are distributed along the diagonal quadrants and a Cohen\u2019s Kappa of 0.409 indicating a moderate level of agreement between GPT and human ratings for diversity. The generated sentences using the default prompt, ICD and the human references in CommonGen and ComVE datasets for a single test instance are shown in Figure 4. From Figure 4 we see that the sentences generated using the default prompt often results in significant token overlap, thereby lowering the diversity. On the other hand, ICD generates both lexically and semantically diverse sentences, covering the diverse viewpoints in the human references. 6 Conclusion We proposed, ICD, an ICL-based method for achieving the optimal balance between diversity and quality in text generation via LLMs. Our experiments, conducted on three GCR tasks, demonstrate that ICD significantly improves the diversity without substantially compromising the quality. Furthermore, we found that by training on the sentences generated by ICD, we can improve diversity in previously proposed GCR methods. 7 Limitations This study primarily focuses on the generation of English sentences using pre-trained LLMs, a limitation shaped by the datasets we employed. Specifically, we used the ComVE (Wang et al., 2020), CommonGen (Lin et al., 2020) and DimonGen (Liu et al., 2023) datasets, which are well-regarded for evaluating diversified commonsense reasoning in English. Therefore, our evaluation of the generation quality was limited to English, which is a morphologically limited language. Future research could expand this scope to include multilingual pretrained models, thereby encompassing a broader linguistic spectrum. Our approach is primarily geared towards optimizing the trade-off between diversity and quality in text generation. Consequently, we maintained consistent default instructions across all experiments, adopting the standard commonsense generation prompts used in Li et al. (2023) as our default instructions. We conducted our experiments using both a closed model (i.e. GPT3.5-turbo) as well as an open-source one (i.e. Vicuna-13b-v1.5) to promote the reproducibility of our results, which are reported using multiple public available benchmarks. However, there exist many other LLMs with varying numbers of parameters and trained on different corpora. Therefore, we consider it is important to evaluate our proposed method on a broad range of LLMs to verify the generalisability of our proposed method. However, conducting such a broad analysis can be computationally costly and expensive. For example, although GPT-4 is known to have superior text generation capabilities, it incurs substantially greater costs (being 30 times more expensive than GPT3.5-turbo at the current pricing). Nevertheless, ICD is adaptable and could be extended to other LLMs. 8 Ethical Considerations In this work, we did not create or release any manually annotated data. Our work is based on the publicly available datasets, CommonGen, ComVE, and DimonGen. To the best of our knowledge, no ethical issues have been reported for those datasets. Therefore, we do not foresee any data-related ethical issues arising from our work. However, LLMs are known to generate responses that may reflect societal biases and potentially harmful content. We have not verified whether the GPT3.5-turbo and Vicuna-13b LLMs that we use in our experiments have similar problems. Therefore, it is important to test on existing benchmarks for social biases and harmful generations before the proposed method is deployed to diversify existing GCR methods used by human users. To elicit human judgements of diversity for the sentences generated by ICD, we use annotators who are familiar with working with LLMs. It is possible that their subjective (and possibly biased) viewpoints might have influenced the ratings provided. Therefore, it will be important to conduct the evaluation involving a group of annotators with different backgrounds to validate the findings reported in this analysis."
16
+ }
intro_8K/test_introduction_short_2404.16811v2.json ADDED
@@ -0,0 +1,17 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16811v2",
3
+ "title": "Make Your LLM Fully Utilize the Context",
4
+ "abstract": "While many contemporary large language models (LLMs) can process lengthy\ninput, they still struggle to fully utilize information within the long\ncontext, known as the lost-in-the-middle challenge. We hypothesize that it\nstems from insufficient explicit supervision during the long-context training,\nwhich fails to emphasize that any position in a long context can hold crucial\ninformation. Based on this intuition, our study presents information-intensive\n(IN2) training, a purely data-driven solution to overcome lost-in-the-middle.\nSpecifically, IN2 training leverages a synthesized long-context question-answer\ndataset, where the answer requires (1) fine-grained information awareness on a\nshort segment (~128 tokens) within a synthesized long context (4K-32K tokens),\nand (2) the integration and reasoning of information from two or more short\nsegments. Through applying this information-intensive training on Mistral-7B,\nwe present FILM-7B (FILl-in-the-Middle). To thoroughly assess the ability of\nFILM-7B for utilizing long contexts, we design three probing tasks that\nencompass various context styles (document, code, and structured-data context)\nand information retrieval patterns (forward, backward, and bi-directional\nretrieval). The probing results demonstrate that FILM-7B can robustly retrieve\ninformation from different positions in its 32K context window. Beyond these\nprobing tasks, FILM-7B significantly improves the performance on real-world\nlong-context tasks (e.g., 23.5->26.9 F1 score on NarrativeQA), while\nmaintaining a comparable performance on short-context tasks (e.g., 59.3->59.2\naccuracy on MMLU). Github Link: https://github.com/microsoft/FILM.",
5
+ "authors": "Shengnan An, Zexiong Ma, Zeqi Lin, Nanning Zheng, Jian-Guang Lou",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-26",
8
+ "primary_cat": "cs.CL",
9
+ "cats": [
10
+ "cs.CL",
11
+ "cs.AI"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "LLM AND Reasoning",
15
+ "gt": "To a great mind, nothing is little. \u2014Arthur Conan Doyle Long-context large language models (LLMs) have recently received significant attention within the open-source community (Jiang et al., 2023; Du et al., 2022; Li et al., 2023a; Shi et al., 2023; Team et al., 2023; Team, 2023; Chen et al., 2023a; Song et al., 2023; Liu et al., 2023; Peng et al., 2023b; Chen et al., 2023b; Xiong et al., 2023; Tworkowski et al., 2024; AI et al., 2024; Ding et al., 2024; Mohtashami & Jaggi, 2024; Fu et al., 2024; Cai et al., 2024; Bai et al., 2024; Lv et al., 2024). The training context windows of many contemporary LLMs have been expanded to tens of thousands of tokens, thereby enabling these models to process extensive context as input. This extended training context window can enhance many real-world downstream tasks such as long-context question answering (Ko\u02c7 cisk` y et al., 2018; Dasigi et al., 2021; Bai et al., 2023) and summarization (Fabbri et al., 2019; Huang et al., 2021; Zhong et al., 2021). However, recent studies have revealed that these long-context LLMs struggle to effectively and robustly utilize all the information provided in the context, known as the lost-in-the-middle chal- lenge (Liu et al., 2024; Xu et al., 2023). It implies that while the LLM can comprehend the information at the beginning and end of the long context, it often overlooks the information in the middle. This challenge could significantly hinder the development of long-context LLMs, as they even often fail to pass simple probing tasks such as Needle-in-the-Haystack and passkey retrieval (Mohtashami & Jaggi, 2024). Consequently, a pressing research question arises: how can we make long-context LLMs fully utilize the information in the long context? We hypothesize that the root cause of lost-in-the-middle stems from the unintentional bias hidden in the general training data. In auto-regressive pre-training, the loss on predicting the next token is more likely to be influenced by a few nearby pre-tokens rather than long-distance tokens (Sharan et al., 2018; Sun et al., 2021). For supervised fine-tuning and alignment, the system message, which strongly influences the generation of the response, is typically presented at the beginning of the context (Touvron et al., 2023; Cai et al., 2024). As a result, the general training process may inadvertently introduce a position bias, suggesting that important information is always located at the beginning and end of the context. Based on this hypothesis, our work introduces INformation-INtensive (IN2) training to explicitly teach the model that the crucial information can be intensively present throughout the context, not just at the beginning and end. IN2 training is a purely data-driven solution that utilizes a synthesized long-context question-answer dataset. The long context (ranging from 4K to 32K tokens) is concatenated from many short segments (\u223c128 tokens), and the question-answer (QA) pairs ask for the information contained in one or more segments which are randomly placed in the long context. Specifically, we generate two types of questions, requiring (1) fine-grained information awareness on exactly one short segment, and (2) the integration and reasoning of information from two or more segments. These QA pairs are generated by prompting GPT-4-Turbo (OpenAI, 2023b) with the designed instructions and the raw segments. By applying this information-intensive training on Mistral-7B (Jiang et al., 2023), we present FILM- 7B (FILl-in-the-Middle). To thoroughly assess the long-context information awareness of FILM-7B, we design three probing tasks encompassing various context styles (document, code, and structured- data context) and information retrieval patterns (forward, backward, and bi-directional retrieval). The probing results (Figure 1) demonstrate that IN2 training significantly overcomes the lost-in-the- middle problem for the backbone model. Moreover, it can enhance the open-source model to achieve comparable or even more robust performance compared with proprietary LLMs such as GPT-4-Turbo. Beyond these probing tasks, the performance of FILM-7B on real-world long-context tasks also exhibits significant improvements (e.g., 23.5\u219226.9 F1 score on NarrativeQA (Ko\u02c7 cisk` y et al., 2018)). This demonstrates that training on synthesized long-context data can be generalized to real-world scenarios. Moreover, FILM-7B maintains a comparable performance on short-context tasks compared with the vanilla backbone model (e.g., 59.3\u219259.2 accuracy on MMLU (Hendrycks et al., 2020)). This indicates that the short-context capability of FILM-7B is not compromised during training. The main contents of this paper are organized as follows. Section 2 introduces our IN2 training with details on the data construction and training process. Section 3 introduces the design of our 2 Raw Text \ud835\udcd2\ud835\udc8a Split by 128 Tokens \ud835\udc94\ud835\udc8a Single Segment \ud835\udc94\ud835\udc8a Long Context \u2026 \u2026 Prompting GPT-4 \ud835\udc92\ud835\udc8a \ud835\udc82\ud835\udc8a Local Information QA Concatenate with Random Segments Fine-Grained Information Awareness Raw Text \ud835\udcd2\ud835\udc8a \ud835\udc94\ud835\udc8a \ud835\udfcf Multiple Segments Prompting GPT-4 Multi-Hop QA Integration and Reasoning of Information \ud835\udc94\ud835\udc8a \ud835\udfd0 \ud835\udc94\ud835\udc8a \ud835\udfd1 \ud835\udc94\ud835\udc8a \ud835\udfcf \ud835\udc94\ud835\udc8a \ud835\udfd0 \ud835\udc94\ud835\udc8a \ud835\udfd1 Long Context \u2026 \u2026 Concatenate with Random Segments Split by 128 Tokens \ud835\udc92\ud835\udc8a \ud835\udc82\ud835\udc8a Figure 2: The data construction process for IN2 training, aimed at enhancing the fine-grained information awareness (upper), and the integration and reasoning of information (lower). long-context probing tasks and the comparison with some existing probing tasks. Section 4.2 shows the experimental results on three probing tasks, nine real-world long-context tasks, and eight short- context tasks. Section 4.3 provides further insights for the long-context training strategies. Section 5 discusses the related work.",
16
+ "main_content": "This section introduces the construction of the dataset for IN2 training and the detailed training process of our model FILM-7B. 2.1 Training Data Construction Overview. The IN2 training aims to explicitly teach the model that any position in a long context can contain crucial information. To achieve this goal, we construct a long-context question-answer training dataset D = {Li, qi, ai}, where the answer ai to the question qi requires the information contained in some short segments that are randomly placed in the whole long context Li. Figure 2 illustrates an overview of the data construction process. Specifically, the training data D L Figure 2 illustrates an overview of the data construction process. Specifically, the training data D is constructed based on a general natural language corpus C. Given a raw text Ci \u2208C, we first generate a question-answer pair (qi, ai) using a powerful LLM, then synthesize a long context Li that includes the necessary information from Ci and other randomly sampled texts from C. We generate two types of question-answer pairs that require (1) the awareness of fine-grained information in the long context, and (2) the integration and reasoning of information appearing at different positions in the long context. We take the realnewslike subset from the C4 corpus (Raffel et al., 2020) as C, and take GPT-4-Turbo (OpenAI, 2023b) as the LLM to generate QA pairs. Fine-grained information awareness. We consider a 128-token segment as the minimum information unit of the context3. Given a raw text Ci, we first randomly extract a 128-token segment si from it, then generate the qi, ai and Li accordingly, (qi, ai) \u223cPrompting(si, If; LLM), Li = \u2295{Shuffle(si, [rj])}, (1) 3The raw texts in realnewslike have an average length of \u223c600 tokens with the Mistral tokenizer. 3 Document Sentence Retrieval (Bi-Direction) Code Function Retrieval (Backward) Database Entity Retrieval (Forward) ### Context: \u2026 This crucially distinguishes our algorithms from the \u2026 Specifically, our modality-missing-aware prompts can \u2026 These results demonstrate that there are still a large ... We design better optimizers, a crucial engineering \u2026 We present a study of modern architectures applied \u2026 This scalability issue is to use of consensus algorithms \u2026 Extensive experiments are conducted to validate the effectiveness of our proposed method, achieving new state-of-the-art performance on all four benchmarks with a notable gain. Notably, we achieved the top in highly competitive \u2026 With this, it is shown how approximate FP64x2 GEMM \u2026 It is challenging to address widespread and \u2026 To verify the effectiveness of the proposed method \u2026 The results show that \\\\emph{GCMiner} significantly \u2026 Our experimental results on all common benchmark \u2026 \u2026 ### Instruction: In above context, which sentence contains the piece \"achieving new state-of-the-art performance on all four\"? ### Context: \u2026 def get_clause:\\n llen = len(lineup)\\n clause = ''\\n if \u2026 def updateData:\\n if self.train:\\n if self.inplace:\\n self. \u2026 def save_comments:\\n for comment in comments:\\n \u2026 def plot_patio:\\n ax = plt.subplot(111)\\n passo_x = 1 / \u2026 def encode_label:\\n Label record format:\\n Total: 5 \u2026 def _parse_array:\\n array = []\\n for child in node. \u2026 def serve_rpc:\\n plugins = [QuarkAsyncPlugin()]\\n rpc =\u2026 def createStrip:\\n story = fetchVign(config)\\n if specialPlatform == 'android':\\n except Exception as err: def breed_childern:\\n self.mutation(first_child)\\n self. \u2026 def get_module_depth:\\n Parameters\\n depth_image: \u2026 def run_layout:\\n if settings is None:\\n if settings. \u2026 def register:\\n user = None\\n if user_id:\\n if request \u2026 def test_list_ddl:\\n cursor = con.cursor()\\n result = list( \u2026 def with_laps:\\n with Stopwatch() as sw:\\n for i in \u2026 def config_iq_stream:\\n bwActual = c_double(0)\\n \u2026 \u2026 ### Instruction: In above context, which function contains the code snip \"if specialPlatform == 'android':\" ? ### Context: \u2026 <id: Q2486402, label: New York State Route 191, \u2026 > <id: Q80329096, label: Transverse abdominal incision \u2026 > <id: Q70559114, label: Monitoring plasma level of \u2026 > <id: Q91568218, label: Progression of the first stage \u2026 > <id: Q84088820, label: Historical perspective of low\u2026 > <id: Q63952215, label: Online action-to-perception \u2026 > <id: Q40241868, label: Alpha-1-C-octyl-1-deoxynojirimycin as a pharmacological chaperone for Gaucher disease, description: scientific article published on 21 August 2006> <id: Q5651247, label: Wer, wenn nicht wir, descript \u2026 > <id: Q42133313, label: UnZIPping mechanisms of \u2026 > <id: Q74650195, label: Pursued by genetics: an auto \u2026 > <id: Q38835253, label: Neurological Aspects of \u2026 > <id: Q64358411, label: Unity for Change, description: \u2026 > <id: Q24110047, label: Hypothetical protein SM_b20 \u2026 > \u2026 ### Instruction: In above context , what is the label and description for the query where the id is Q40241868 ? Figure 3: Three tasks in VAL Probing. The retrieval patterns are determined by the relative positions between the retrieval keywords and the information to be retrieved. where (qi, ai) is sampled by prompting the powerful LLM with the segment si and the instruction If, \u2295{\u00b7} represents the concatenation of the contained segments, and [rj] are randomly sampled from 128-token segments in C. Note that If instructs the LLM to make the question-answer pair highly specific to the information provided in si. Integration and reasoning of information. Beyond utilizing each single segment, we consider to generate question-answer pairs for information contained in two or more segments. Following the setting of the minimum information unit above, we split a full text Ci into a set of 128-token segments [si], then generate the qi, ai and Li accordingly, (qi, ai) \u223cPrompting([si], Ir; LLM), Li = \u2295{Shuffle([si], [rj])}, (2) where Ir instructs the LLM to generate a multi-hop question-answer pair that requires the information within at least two segments in [si]. All segments in [si] and [rj] are jointly shuffled, so the required segments may appear far apart in the context. Context length balance and data mixture. To prevent length bias during IN2 training, we ensure the length of the long context Li is evenly distributed from 4K to 32K tokens. Such a length balance strategy can be implemented with reject sampling on [rj], according to Equation 1 and 2. To alleviate catastrophic forgetting on short-context capabilities, we retain \u223c10% question-answer pairs with the original texts Ci instead of converting them into a longer context, and add some general instruction-tuning data from the OpenOrca (Lian et al., 2023) dataset. Overall, our dataset for IN2 training contains 1.1M long-context data for the fine-grained information awareness (\u223c63%), 300K long-context data for the integration and reasoning of information (\u223c17%), 150K short-context question-answer data (\u223c9%), and 200K general instruction-tuning data (\u223c11%). Appendix D contains the handcraft instructions for data generation. Appendix B illustrates some examples of our constructed long-context QA data. Appendix A describes the filtering strategy to avoid data contamination for evaluation. 2.2 Training Details Using the training data constructed above, we further fine-tune the Mistral-7B-Instruct-v0.24 (Jiang et al., 2023) to get our FILM-7B (FILl-in-the-Middle). We perform IN2 training in the instructiontuning paradigm: the long contexts and questions are used as instructions, and the loss on the answer parts are used to update the model. Appendix D contains the system template used for formatting the training data. For hyper-parameters, we set the global batch size as 128 and conduct one-epoch training with \u223c14K training steps. We use the cosine learning rate decay with a 1e-6 maximum 4https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2. 4 50 100 150 200 250 300 350 400 450 500 550 600 650 700 750 800 Relative Positions in 800 Sentences 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Performance (%) Document Sentence Retrieval (Bi-Direction) FILM-7B (Ours) Mistral-7B-Instruct-v0.2 Mistral-7B-Instruct-v0.1 50 100 150 200 250 300 350 400 450 500 550 600 650 700 750 800 Relative Positions in 800 Functions 0.0 0.2 0.4 0.6 0.8 1.0 Code Function Retrieval (Backward) 50 100 150 200 250 300 350 400 450 500 550 600 650 700 750 Relative Positions in 750 Entities 0.0 0.2 0.4 0.6 0.8 1.0 Database Entity Retrieval (Forward) (a) Performance of FILM-7B, Mistral-7B-Instruct-v0.1, and Mistral-7B-Instruct-v0.2. 50 100 150 200 250 300 350 400 450 500 550 600 650 700 750 800 Relative Positions in 800 Sentences 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Performance (%) Document Sentence Retrieval (Bi-Direction) FILM-7B (Ours) LongAlign-13B-64K LongAlign-7B-64K 50 100 150 200 250 300 350 400 450 500 550 600 650 700 750 800 Relative Positions in 800 Functions 0.0 0.2 0.4 0.6 0.8 1.0 Code Function Retrieval (Backward) 50 100 150 200 250 300 350 400 450 500 550 600 650 700 750 Relative Positions in 750 Entities 0.0 0.2 0.4 0.6 0.8 1.0 Database Entity Retrieval (Forward) (b) Performance of FILM-7B, LongAlign-7B-64K, and LongAlign-13B-64K. 50 100 150 200 250 300 350 400 450 500 550 600 650 700 750 800 Relative Positions in 800 Sentences 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Performance (%) Document Sentence Retrieval (Bi-Direction) FILM-7B (Ours) InternLM2-chat-20B InternLM2-chat-7B 50 100 150 200 250 300 350 400 450 500 550 600 650 700 750 800 Relative Positions in 800 Functions 0.0 0.2 0.4 0.6 0.8 1.0 Code Function Retrieval (Backward) 50 100 150 200 250 300 350 400 450 500 550 600 650 700 750 Relative Positions in 750 Entities 0.0 0.2 0.4 0.6 0.8 1.0 Database Entity Retrieval (Forward) (c) Performance of FILM-7B, InternLM2-chat-7B, and InternLM2-chat-20B. Figure 4: Performance of FILM-7B on VAL Probing and the comparisons with (a) Mistral, (b) LongAlign, and (c) InternLM2. The X-axis is the relative position in the context (\u223c32K tokens). learning rate and 3% warm-up steps. The training process is conducted on 16 nodes of 8x80G A100 GPUs with the full sharding strategy and cpu offload strategy implemented by pytorch FSDP (Zhao et al., 2023). The entire training process consumes \u223c300 GPU days. 3 Long-Context Probing In this section, we first show the preliminary evaluation of FILM-7B on the Needle-in-the-Haystack and discuss about the inadequacies of this probing task. Subsequently, to comprehensively evaluate the long-context information awareness of FILM-7B, we introduce VArious Long-context (VAL) Probing. This includes three tasks that cover various context styles (document, code, and structureddata context) and information retrieval patterns (forward, backward, and bi-directional retrieval). 5 Table 1: Quantified performances of various models on VAL Probing. Model Document Code Database All Avg Gap\u2193 Avg Gap\u2193 Avg Gap\u2193 Avg Gap\u2193 Mistral-7B-Instruct-v0.1 (Jiang et al., 2023) 44.8 29.9 6.8 53.2 8.8 74.5 20.1 52.5 Mistral-7B-Instruct-v0.2 (Jiang et al., 2023) 74.2 32.1 20.3 59.5 47.5 77.0 47.3 56.2 LongAlign-7B-64K (Bai et al., 2024) 65.3 16.9 39.3 56.0 55.0 36.2 53.2 36.4 LongAlign-13B-64K (Bai et al., 2024) 71.7 13.4 50.8 40.8 82.9 27.0 68.5 27.1 InternLM2-chat-7B (Cai et al., 2024) 68.8 18.7 50.2 44.1 61.2 57.1 60.1 40.0 InternLM2-chat-20B (Cai et al., 2024) 66.4 27.2 63.4 45.5 74.9 57.2 68.2 43.3 GPT-4-Turbo (OpenAI, 2023b) 81.3 31.7 66.1 46.5 89.6 18.0 79.0 32.1 FILM-7B (ours) 85.4 6.1 83.3 18.7 89.0 16.8 85.9 13.9 3.1 Near-Perfect Performance on Needle-in-the-Haystack: Are We There Yet? The Needle-in-the-Haystack5 task is widely used to assess how robustly a model utilizes information positioned in the long context. It reveals that even some powerful proprietary LLMs, such as GPT-4 and Claude 2.1 (Anthropic, 2023), struggle to fully exploit the information within the long context. We use the Needle-in-the-Haystack task to preliminarily evaluate the long-context capability of FILM-7B. Appendix C demonstrates that FILM-7B has achieved near-perfect performance on this task. This result is not surprising as recent open-source LLMs, such as LongAlign (Bai et al., 2024) and InternLM2 (Cai et al., 2024), have also shown near-perfect performance on this task. However, we argue that the near-perfect performance on Needle-in-the-Haystack may overestimate the long-context capabilities of LLMs, based on the following two considerations: \u2022 Needle-in-the-Haystack employs a document-style context, which LLMs could be quite familiar with due to the pre-training on natural language corpora. \u2022 The forward retrieval pattern in Needle-in-the-Haystack may simplify the difficulty of information seeking in the long context. The \u201cforward retrieval\u201d means that the information being retrieved directly follows the retrieval keyword in a long context. For example, the default question used in Needle-in-the-Haystack is \"What is the best thing to do in San Francisco?\" and the answer is contained in \"The best thing to do in San Francisco is eat a sandwich and sit in Dolores Park on a sunny day.\" The retrieved information \"eat a sandwich and ...\" just follows the retrieval keywords \"best thing to do in San Francisco\". According to the mechanism of induction head (Olsson et al., 2022), such a following-up copying is an easily learned pattern for LLMs, thus less challenging for evaluating long context utilization. Given these considerations, we suggest that performances on Needle-in-the-Haystack may not adequately reflect the long-context capabilities of LLMs. Therefore, we propose VAL Probing for a more comprehensive evaluation involving various context styles and retrieval patterns. 3.2 VAL Probing Our retrieval-based VAL Probing considers three context styles (document, code, and structured-data context) and three retrieval patterns (forward, backward, and bi-directional retrieval). Each context in VAL Probing contains \u223c32K tokens, and each task contains \u223c3K examples. Figure 3 briefly illustrates the contexts and retrieval instructions in VAL Probing. Document Sentence Retrieval (Bi-Direction). The contexts consist of numerous natural language sentences, and the instruction aims to retrieve a single sentence containing a given piece. The sentences are sampled from the abstracts of papers on arXiv6. This task follows the bi-directional retrieval pattern, as the expected retrieval results contain words both before and after the given piece in the context. The evaluation metric is the word-level recall score. 5https://github.com/gkamradt/LLMTest_NeedleInAHaystack. 6https://info.arxiv.org/help/api/basics.html. 6 Table 2: Performances of various models on real-world long-context tasks. Results of models with \u2217 are reported in Bai et al. (2023) and Lv et al. (2024). Model NarrativeQA Qasper MultiFQA HotpotQA 2WikiMQA MuSiQue GovReport QMSum MultiNews Avg Close-Source GPT-4-Turbo (OpenAI, 2023b) 33.0 50.7 52.7 68.5 64.3 49.1 33.9 25.4 24.9 44.7 GPT-3.5-Turbo\u2217(OpenAI, 2023a) 23.6 43.3 52.3 51.6 37.7 26.9 29.5 23.4 26.7 35.0 Open-Source LongChat-v1.5-7B-32K\u2217(Li et al., 2023a) 16.9 27.7 41.4 31.5 20.6 9.7 30.8 22.7 26.4 25.3 ChatGLM2-6B-32K\u2217(Du et al., 2022) 21.1 31.5 46.2 25.3 20.8 9.8 32.4 24.0 26.5 26.4 LongAlign-7B-64K (Bai et al., 2024) 18.7 33.8 49.1 28.6 23.4 12.5 30.6 23.7 27.5 27.5 Mistral-7B-Instruct-v0.1 (Jiang et al., 2023) 19.6 33.2 38.8 42.9 31.2 17.4 27.5 22.4 26.6 28.9 Mistral-7B-Instruct-v0.2 (Jiang et al., 2023) 23.5 33.8 45.9 42.4 24.3 20.8 33.3 24.8 26.8 30.6 Yi-6B-200K\u2217(AI et al., 2024) 12.4 26.4 36.8 46.6 40.4 25.8 29.3 20.7 27.1 29.5 ChatGLM3-6B-32K\u2217(Du et al., 2022) 9.2 43.1 50.9 55.3 43.7 38.9 36.0 24.7 27.4 36.6 InternLM2-chat-7B (Cai et al., 2024) 24.4 35.4 50.2 52.4 48.2 30.5 33.6 25.3 29.0 36.5 InternLM2-7B-LongWanjuan\u2217(Lv et al., 2024) 29.9 39.6 50.2 53.7 42.3 32.1 33.0 25.5 27.8 37.1 FILM-7B (ours) 26.9 42.2 56.0 62.1 47.0 39.0 33.8 25.1 26.9 39.9 Code Function Retrieval (Backward). The contexts consist of Python functions, and the instruction aims to retrieve the function name for a given line of code within the function definition. The raw code functions are sampled from the StarCoder (Li et al., 2023c) dataset7. We randomly select three lines of definitions for each function. This task follows the backward retrieval pattern, as the function name always precedes the definition. The evaluation metric is the exact-match accuracy. Database Entity Retrieval (Forward). The contexts contain lists of structured entities, each with three fields: ID, label, and description. The query aims to retrieve the label and description for a given ID. The entities are sampled from Wikidata 8. This task follows the forward retrieval pattern, as the label and description follow the ID. We take a relaxed exact-match accuracy as the metric: a 1 score is given if either the label or the description is exactly matched in the response, otherwise a 0 score. 4 Experiments and Analysis We assess the long-context capability of FILM-7B on both probing tasks and real-world long-context tasks. Moreover, we investigate if the performance in short-context scenarios is affected. 4.1 Experimental Setup Models. We mainly compare FILM-7B with long-context open-source models that have been trained with \u226532K context windows, including the Mistral (Jiang et al., 2023), LongChat (Li et al., 2023a), ChatGLM (Du et al., 2022), LongAlign (Bai et al., 2024), LongWanjuan (Lv et al., 2024), Yi (AI et al., 2024) and InternLM2 (Cai et al., 2024). We utilize the instruct/chat versions of these models as most of our evaluation tasks are under the zero-shot instruction-following paradigm. We also draw comparisons with popular proprietary LLMs such as GPT-3.5-Turbo (OpenAI, 2023a) and GPT-4-Turbo (OpenAI, 2023b). All models and tasks employ greedy decoding. For probing tasks, we primarily compare FILM-7B with LongAlign and InternLM2 series, as these models have shown near-perfect performances on Needle-in-the-Haystack. Real-world long-context tasks. We take 9 tasks from the LongBench (Bai et al., 2023) collection to evaluate the long-context capability on real-world scenarios. These tasks encompass long-document question answering (NarrativeQA (Ko\u02c7 cisk` y et al., 2018), Qasper (Dasigi et al., 2021) and MultiFieldQA (MultiFQA) (Bai et al., 2023), multi-document multi-hop reasoning (HotpotQA (Yang et al., 2018), 2WikiMultihopQA (2WikiMQA) (Ho et al., 2020) and MuSiQue (Trivedi et al., 2022)), and long-context summarization (GovReport (Huang et al., 2021), QMSum (Zhong et al., 2021) and MultiNews (Fabbri et al., 2019)). We employ the middle truncation strategy in LongBench to limit the input within 32K tokens. We report ROUGE-L (Lin, 2004) for summarization tasks and F1 scores for other tasks. The evaluation metrics are computed using the official evaluation scripts 9. 7https://huggingface.co/datasets/bigcode/starcoderdata. 8https://www.wikidata.org/wiki/Wikidata:Data_access. 9https://github.com/THUDM/LongBench. 7 59.3 85.4 46.0 70.0 55.9 83.6 40.4 8.7 59.2 87.7 45.6 75.3 52.5 79.1 44.5 11.3 0 20 40 60 80 100 Performance (%) MMLU Mistral-7B-Instruct-v0.2 FILM-7B BoolQ RACE-H CSQA ARC-C HellaSwag GSM8K MATH Figure 5: Performances of FILM-7B and the backbone model on short-context tasks. Short-context tasks. We select 8 short-context tasks commonly used for evaluating the general capabilities of models. These include MMLU (Hendrycks et al., 2020), BoolQ (Clark et al., 2019), RACE-High (RACE-H) (Lai et al., 2017), CommonsenseQA (CSQA) (Talmor et al., 2019), ARCChallenge (ARC-C) (Clark et al., 2018), HellaSwag (Zellers et al., 2019), GSM8K (Cobbe et al., 2021), and MATH (Hendrycks et al., 2021). We use 5-shot for MMLU, 8-shot for GSM8K, 4-shot for MATH, and 0-shot for other tasks. We utilize the lm_eval10 for the evaluations on MMLU, BoolQ, RACE-H, ARC-C and HellaSwag, and use the evaluation scripts from An et al. (2024) for other tasks. 4.2 Main Results FILM-7B significantly mitigates the lost-in-the-middle problem. Figure 4a presents the probing results for both FILM-7B and the backbone model, Mistral-7B-Instruct-v0.2. In all three probing tasks within VAL Probing, the vanilla Mistral model experiences substantial information loss at the middle positions in the long contexts. In contrast, our FILM-7B model consistently exhibits robust performance across different positions within the whole context. This stark comparison illustrates that the lost-in-the-middle problem can be effectively addressed using our IN2 training. FILM-7B achieves performance comparable to, or even outperforming, that of GPT-4-Turbo. Figure 1 illustrates the comparison between FILM-7B and GPT-4-Turbo on our probing tasks. Beyond a qualitative comparison between the performance curves of two models, we quantify the long-context performances on VAL Probing using two metrics: \u2022 Average score (Avg). We compute the average performances across the entire context length, reflecting the overall long-context utilization. \u2022 Min-max gap (Gap). We calculate the differences between the maximum and minimum performances in Figure 3. A smaller performance gap signifies greater robustness across different positions. Table 1 presents the quantified performances on VAL Probing. It reveals that FILM-7B has comparable performance with GPT-4-Turbo on the database probing task, and exhibits better robustness in document and code probing tasks. These results indicate a great potential for the development of open-source long-context models to close the gap with proprietary models. VAL Probing presents a more challenging test suite for long-context models. Figure 4b and 4c show the probing results of LongAlign and InternLM2, two state-of-the-art long-context models. Despite their extended training context windows, these models still encounter the lost-in-the-middle problem. This is particularly noteworthy given their near-perfect performance on the Needle-in-theHaystack task. This comparison suggests that VAL Probing provides a more challenging evaluation for long-context models. In particular, the results on document and database tasks in VAL Probing demonstrate clear comparisons with Needle-in-the-Haystack. Compared to Needle-in-the-Haystack which uses forward retrieval on natural language context, the document task employs natural language context but with bi-directional retrieval, and the database task uses forward retrieval but with structured-data context. 10https://github.com/EleutherAI/lm-evaluation-harness. 8 50 100 150 200 250 300 350 400 450 500 550 600 650 700 750 800 Relative Positions in 800 Sentences 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Performance (%) Document Sentence Retrieval (Bi-Direction) FILM-7B (20%) FILM-7B (20%) + SW (IN2) FILM-7B (20%) + SW (PT-IN2) 50 100 150 200 250 300 350 400 450 500 550 600 650 700 750 800 Relative Positions in 800 Functions 0.0 0.2 0.4 0.6 0.8 1.0 Code Function Retrieval (Backward) 50 100 150 200 250 300 350 400 450 500 550 600 650 700 750 Relative Positions in 750 Entities 0.0 0.2 0.4 0.6 0.8 1.0 Database Entity Retrieval (Forward) Figure 6: Performance of FILM-7B with a 4K sliding window (SW). PT-IN2: apply the sliding window in both pre-training and IN2 training. IN2: apply the sliding window only in IN2 training. Table 3: Performance of FILM-7B with different RoPE base \u03b8 during IN2 training. Model RoPE Base \u03b8 Document Code Database All Avg Gap\u2193 Avg Gap\u2193 Avg Gap\u2193 Avg Gap\u2193 FILM-7B (20%) 1.0 \u00d7 106 (default) 82.9 11.5 74.5 27.7 83.5 31.6 80.3 23.6 2.0 \u00d7 106 83.9 9.3 79.8 27.1 87.7 13.2 83.8 16.5 1.0 \u00d7 107 83.7 7.6 81.7 18.4 89.4 16.8 84.9 14.3 1.0 \u00d7 108 84.6 6.6 81.4 22.3 87.7 13.2 84.6 14.0 These comparisons highlight that both context styles and retrieval patterns significantly contribute to the hardness of the probing tasks. Training on synthesized long-context data effectively generalizes to real-world scenarios. Table 2 contains the results on various real-world long-context tasks. It shows that FILM-7B also significantly improves the performance of the backbone model in real-world long-context scenarios. Moreover, it also achieves SOTA-level performances on these tasks among \u223c7B size open-source models. Notably, the long contexts used in IN2 training are all synthesized from short segments. These improvements suggest that the long-context capabilities learned from the synthesized data can be successfully applied to real-world tasks. FILM-7B maintains the performance on short-context tasks. Figure 5 illustrates the performances of FILM-7B and the vanilla backbone model on short-context tasks. It reveals that the overall performances on short-context tasks are almost comparable with minor variances. These results confirm that FILM-7B does not compromise the short-context capabilities of the backbone model. 4.3 Training Strategy Analysis Experimental results in Section 4.2 demonstrate the feasibility of IN2 training. We aim to explore further into enhancing the effectiveness and efficiency of IN2 training, particularly from the perspective of training strategies. We are specifically interested in investigating the impact of the following two training strategies: applying the sliding window and adjusting the position encoding. Considering the high cost of training, the following experiments use 20% of all training examples. Models using sliding windows cannot effectively capture the long distance information. Our experiments involving Mistral models, as shown in Figure 4a, reveal that the performance of Mistral7B-Instruct-v0.1 is awful when the information is positioned at a long distance. It\u2019s worth noting that Mistral-7B-Instruct-v0.1 employs the sliding window strategy while Mistral-7B-Instruct-v0.2 does not. Consequently, we are interested in determining whether our IN2 training can still alleviate the lost-in-the-middle problem under the sliding window strategy. We conduct the following two experiments with a 4K sliding window during training: 9 \u2022 Apply the sliding window in both pre-training and IN2 training. We take the Mistral-7BInstruct-v0.1 as the backbone model and conduct IN2 training with the same window size (4K). \u2022 Apply the sliding window only during the IN2 training. We take the Mistral-7B-Instruct-v0.2 as the backbone model and additionally apply a 4K sliding window during IN2 training. Figure 6 illustrates the performances of models with sliding windows. It shows that in both two settings with sliding windows, the performances drop dramatically when the distance between the retrieval question and information is longer than the sliding window size. It reveals that the sliding window strategy greatly hurts the long-context capability of models. Training with higher information intensity requires a larger RoPE base \u03b8. The training stage in Section 2 follows the RoPE settings configured for the backbone model. Previous studies on context extension suggest that training with an extended context length necessitates a larger RoPE base \u03b8 (Roziere et al., 2023; Xiong et al., 2023; Cai et al., 2024). In the case of our IN2 training, the context length remains unchanged, but the information intensity is significantly increased. As a result, we are interested in exploring whether the RoPE settings should also be adjusted to further enhance the IN2 training. Table 3 shows the results with increasing the RoPE base \u03b8 from 1.0 \u00d7 106 to 1.0 \u00d7 108. It shows that increasing the default RoPE base \u03b8 of the backbone model leads to better performances on VAL Probing. We suggest to use a 10 times of the default RoPE base \u03b8 to conduct IN2 training. 5 Related Work Long-context LLMs. Recent research has significantly contributed to the exploration of training large models with extended context windows (Jiang et al., 2023; Du et al., 2022; Li et al., 2023a; Team et al., 2023; Team, 2023; Xiong et al., 2023; Song et al., 2023; Tworkowski et al., 2024; AI et al., 2024; Cai et al., 2024). There are primarily two directions in the development of long-context LLMs. (1) Data engineering, which emphasizes the construction of long-context data for training the LLMs. This includes data balancing (Fu et al., 2024), data order arrangement (Shi et al., 2023), instruction data collection (Bai et al., 2024), and data quality measurement (Lv et al., 2024). Our IN2 training can be categorized into this field. (2) Effective and efficient training, which investigates methods to optimize the training of a long-context model. This encompasses the design of position encoding (Chen et al., 2023a; Liu et al., 2023; Peng et al., 2023b; Ding et al., 2024), batching strategy (Bai et al., 2024), parameter-efficient training (Chen et al., 2023b), and the development of new model architectures (Peng et al., 2023a; Gu & Dao, 2023). Long-context evaluations. Existing benchmarks for evaluating long-context models can be divided into two categories. (1) Real-world benchmarks that assess general long-context capabilities (e.g., long-context QA, summarization, and language modeling), such as NarrativeQA (Ko\u02c7 cisk` y et al., 2018), LongBench (Bai et al., 2023), ZeroSCROLLS (Shaham et al., 2023), L-Eval (An et al., 2023), Loogle (Li et al., 2023b), \u221eBench (Zhang et al., 2024), and a series of work on perplexity evaluation (Beltagy et al., 2020; Roy et al., 2021; Press et al., 2021; Chen et al., 2023a; Liu et al., 2023; Peng et al., 2023b; Chen et al., 2023b; Ding et al., 2024; Mohtashami & Jaggi, 2024). (2) Probing tasks that provide a more concise reflection of the long-context utilization across different context lengths and positions. These include Needle-in-the-Haystack, passkey retrieval (Mohtashami & Jaggi, 2024), synthesized document QA (Liu et al., 2024), S3Eval (Lei et al., 2024), Discovery (Li et al., 2024), RULER (Hsieh et al., 2024), and the VAL Probing proposed in this study. Among these probing tasks, our VAL Probing is the first to explicitly incorporate a variety of retrieval patterns. 6 Conclusion This work introduces IN2 training to overcome the lost-in-the-middle problem. By applying IN2 training on the open-source model, our FILM-7B exhibits significant improvements on probing tasks and real-world long-context tasks while does not compromise the short-context performance. 10 Acknowledgments Shengnan An and Nanning Zheng were supported in part by NSFC under grant No. 62088102. Thank you to arXiv for use of its open access interoperability."
17
+ }
intro_8K/test_introduction_short_2404.16816v1.json ADDED
@@ -0,0 +1,16 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16816v1",
3
+ "title": "IndicGenBench: A Multilingual Benchmark to Evaluate Generation Capabilities of LLMs on Indic Languages",
4
+ "abstract": "As large language models (LLMs) see increasing adoption across the globe, it\nis imperative for LLMs to be representative of the linguistic diversity of the\nworld. India is a linguistically diverse country of 1.4 Billion people. To\nfacilitate research on multilingual LLM evaluation, we release IndicGenBench -\nthe largest benchmark for evaluating LLMs on user-facing generation tasks\nacross a diverse set 29 of Indic languages covering 13 scripts and 4 language\nfamilies. IndicGenBench is composed of diverse generation tasks like\ncross-lingual summarization, machine translation, and cross-lingual question\nanswering. IndicGenBench extends existing benchmarks to many Indic languages\nthrough human curation providing multi-way parallel evaluation data for many\nunder-represented Indic languages for the first time. We evaluate a wide range\nof proprietary and open-source LLMs including GPT-3.5, GPT-4, PaLM-2, mT5,\nGemma, BLOOM and LLaMA on IndicGenBench in a variety of settings. The largest\nPaLM-2 models performs the best on most tasks, however, there is a significant\nperformance gap in all languages compared to English showing that further\nresearch is needed for the development of more inclusive multilingual language\nmodels. IndicGenBench is released at\nwww.github.com/google-research-datasets/indic-gen-bench",
5
+ "authors": "Harman Singh, Nitish Gupta, Shikhar Bharadwaj, Dinesh Tewari, Partha Talukdar",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "cs.CL",
9
+ "cats": [
10
+ "cs.CL"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "LLM Fairness",
14
+ "gt": "With the advances in generative language technolo- gies powered by Large Language Models (LLMs; Brown et al., 2020; Rae et al., 2021; Chowdh- ery et al., 2022; OpenAI et al., 2023; Tay et al., 2023; Google, 2023), there has been a surge of interest in evaluating the multilingual capabili- ties of these models. Recent work (Ahuja et al., 2023a,b) shows a consistent performance gap be- tween high resource languages and languages with Figure 1: Performance of state-of-the-art LLMs on different tasks in INDICGENBENCH. We observe a significant performance gap between English and Indic languages across LLMs. lower amounts of web resources available. To de- velop highly multilingual generative LLMs which should work equally well for 100s of languages spoken by billions of people in the world, it is cru- cial to evaluate their capabilities across a variety of languages to uncover performance gaps and guide future research. In this work we focus on India, a country with 1369 rationalized mother tongues spoken by more than a billion people.1 Making progress on lan- guage technologies for Indic languages will not only improve the state of affairs in this region, but will also provide valuable learning to the NLP com- munity which will be applicable to other geograph- ical regions and language families. There are has been much work from the community in building natural language understanding (NLU) models for Indic languages (Kakwani et al., 2020; Khanuja et al., 2021), as well as evaluation datasets (Dodda- paneni et al., 2023; Mhaske et al., 2023) to support such models. In this work, our focus is to develop 1https://en.wikipedia.org/wiki/Languages_of_India arXiv:2404.16816v1 [cs.CL] 25 Apr 2024 Task Language Input Output #Languages Dataset Size (H / M / L) (Train / Dev / Test) CROSSSUM-IN (Cross-lingual Summarization) Hindi 9 / 7 / 13 2.9k / 2.9k / 14.5k FLORES-IN (Machine Translation) Konkani 9 / 7 / 13 - / 28.9k / 29.3k XQUAD-IN (Multilingual QA) Punjabi 9 / 3 / - 1.2k / 1.2k / 14.3k XORQA-IN-XX (Cross-lingual QA) Telugu 9 / 6 / 13 2.8k / 14k / 15.1k XORQA-IN-EN (Cross-lingual QA) Santali 9 / 6 / 13 2.8k / 14k / 15.1k Table 1: INDICGENBENCH, our proposed benchmark, consists of five tasks: Cross-lingual Summarization (CROSSSUM-IN), Machine Translation (FLORES-IN), Multilingual QA (XQUAD-IN) and Cross-lingual QA (XORQA-IN-XX and XORQA-IN-EN). An example from each task, the number of languages for which we collect evaluation data (divided by resourcefulness, higher (H), medium (M) and low (L)), and the number of training/validation/test instances per task is shown above. See Section 2 for details. a high-quality benchmark for evaluating generative language capabilities in a variety of Indic languages across various levels of resourcefulness. We release INDICGENBENCH, a multilingual, multi-way parallel benchmark for measuring lan- guage generation capabilities across diverse user- facing tasks in 29 Indic languages across 4 lan- guage families (Table 7). INDICGENBENCH extends existing benchmarks such as Cross- Sum (Bhattacharjee et al., 2023), XQuAD (Artetxe et al., 2020), XorQA (Asai et al., 2021), and FLO- RES (NLLB-Team et al., 2022) for additional Indic languages and is composed of tasks like cross-lingual summarization (CROSSSUM-IN), ma- chine translation (FLORES-IN), cross-lingual read- ing comprehension (XORQA-IN-XX and XORQA- IN-EN) and multilingual reading comprehension (XQUAD-IN). Each dataset consists of parallel examples in up to 29 low to comparatively higher resource Indic languages; and for some tasks (e.g. CROSSSUM-IN), INDICGENBENCH provides the first-ever evaluation datasets for as many as 18 of these languages. We also release a small training set in all tasks for efficient adaptation of LLMs. Our comprehensive evaluation of various state-of- the-art proprietary and open-source LLMs on IN- DICGENBENCH shows that there is a significant gap in performance between English and Indic lan- guages (see Figure 1). Our contributions are as follows: \u2022 Created and released INDICGENBENCH, a high quality text benchmark in diverse lan- guage generation tasks like summarization, question-answering, and translation across 29 Indic languages. INDICGENBENCH is the largest generation benchmark for Indic lan- guages. \u2022 Comprehensive experimentation on SoTA LLMs (mT5, Gemma, BLOOM, LLaMA, GPT-3.5, GPT-4, PaLM-2) across various model sizes and training settings to bench- mark their Indic language generation capabili- ties. \u2022 A qualitative analysis for assessing the gaps in current language technologies and define potential directions of future research.",
15
+ "main_content": "INDICGENBENCH is a high-quality, humancurated benchmark to evaluate text generation capabilities of multilingual models on Indic languages. Our benchmark consists of 5 user-facing tasks (viz., summarization, machine translation, and question answering) across 29 Indic languages spanning 13 writing scripts and 4 language families. For certain tasks, INDICGENBENCH provides the first-ever evaluation dataset for up to 18 Indic languages. Table 1 provides summary of INDICGENBENCH and examples of instances across tasks present in it. Languages in INDICGENBENCH are divided into (relatively) Higher, Medium, and Low resource categories based on the availability of web text resources (see appendix \u00a7A for details).2 Higher (9): Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Tamil, Telugu, Urdu Medium (7): Assamese, Bhojpuri, Nepali, Odia, Punjabi, Pashto, Sanskrit Low (13): Awadhi, Haryanvi, Tibetan, Garhwali, Konkani, Chhattisgarhi, Rajasthani, Maithili, Manipuri, Malvi, Marwari, Santali, Bodo As evident from the lists above, our benchmark provides a broad-coverage over languages with respect to their resourcedness, allowing users to evaluate language models on relatively highresource languages such as Hindi and extremely low-resource languages such as Manipuri in Meitei script on a single benchmark. To curate the evaluation datasets for our benchmark, we use the following existing datasets as the source: CrossSum (Bhattacharjee et al., 2023) for cross-lingual summarization, FLORES (NLLB-Team et al., 2022) for machine translation, XQuAD (Artetxe et al., 2020) for multilingual QA, and XoRQA (Asai et al., 2021) for cross-lingual QA. From each of these datasets we select a subset of English examples to be a part of our benchmark, and then collect professional human translations for these examples in all target Indic languages. Some target languages are already covered by the source datasets in which case we re-purpose this existing data and only collect translations for the remaining languages. We also 2We note that the languages called relatively higher resource in this paper, e.g., Hindi or Bengali, are in fact mid-low Web resource when compared to English and other truly high resource languages. For example, using Wikipedia as a proxy for language resources, compared to 6.6M+ Wikipedia articles in English, there are only 160K Hindi Wikipedia articles. collect and release a small amount of training and validation data making possible evaluation of training techniques like fine-tuning, parameter-efficient training, in-context learning, and others. Why extend existing benchmarks? We chose to collect human translations of existing benchmarks as opposed to creating evaluation data from scratch due to various reasons: \u2022 Translation-based extension of existing benchmark results in multi-way parallel data, allowing researchers to attribute performance due to task knowledge vs. language understanding, and measure cross-lingual generalization \u2022 For many low-resource languages in INDICGENBENCH, clean text knowledge corpus (e.g., Wikipedia) is not available making it difficult to acquire source data for annotation \u2022 By focusing only on translation quality in the target Indic languages, we are able to leverage the quality control that went into designing the source benchmarks. Annotators were professional data labelers working as contractors at our organization and with a vendor. Annotators were paid competitive rates in compliance with applicable labor laws and prevailing market rates. Our pay rate to annotators varied across languages, ranging from USD 2.80 per hour for Pashto to USD 15.90 per hour for Tibetan. Cross-Lingual Summarization: CROSSSUM-IN We create CROSSSUM-IN based on CrossSum (Bhattacharjee et al., 2023), a dataset for crosslingual summarization, which in turn is derived from XL-Sum (Hasan et al., 2021b). CrossSum contains multi-way parallel data in 45 languages where BBC news articles as source in a language are paired with corresponding summaries in other languages. Based on their matching criteria, different languages have different amount of sourcetarget pairs. We sample 700 English article-summary pairs (100 each from train/dev and 500 from test) and ask human translators to translate the English summary into the target Indic languages. CrossSum already contains data for 9 of our 29 target languages; for these languages we sample 100/100/500 examples from the original dataset to maintain equity with other languages we collect data for. CROSSSUMIN contains a total of 20.3k examples across 29 Indic languages in our benchmark. Machine Translation: FLORES-IN FLORES-200 (NLLB-Team et al., 2022) is a human-annotated multi-way parallel machine translation (MT) benchmark for 200 languages where the same source English sentences are translated by humans into the target 200 languages. It contains data in 22 of our 29 target languages; we extend this by collecting human translations for the remaining 7 new languages leading to a MT benchmark in 29 Indic languages which we call FLORES-IN. FLORES-200 is divided into three splits: dev (997), devtest (1012), test (992), of which the test set it not public. We collect translations for all 997 dev and 1012 devtest sentences, yielding 2009 sentences per language. Collectively, FLORES-IN contains 58.2k examples across 29 Indic languages. Multilingual Question-Answering: XQUAD-IN We create an Indic Multilingual Question Answering task XQUAD-IN based on the multilingual reading comprehension dataset XQuAD (Artetxe et al., 2020). XQuAD is in turn derived from the SQuAD dataset (Rajpurkar et al., 2016), in which an English Wikipedia passage is paired with multiple question-answer (QA) pairs where the answers are short spans for the given passage. The authors of XQuAD collected human translations for 240 passages and 1190 QA pairs from the SQuAD v1.1 development set into 10 higher resource languages (Hindi being the only Indic language). To create XQUAD-IN, we use the 240 passages and 1190 QA pairs from XQuAD as our test set. We additionally selected 20 passages and 100 QA pairs from the original SQuAD v1.1 training and development sets each to create our training and development set. For all the 280 passages and 1390 QA pairs we collect professional human translations in 12 Indic languages.3 Overall, XQUAD-IN contains 3.3k passages and 16.6k QA pairs in 12 Indic languages. Cross-Lingual Question-Answering: XORQA-IN We create Indic Cross-lingual Question-Answering dataset XORQA-IN based on the XOR-TYDI QA dataset (Asai et al., 2021). XOR-TYDI contains questions in non-English languages paired with English evidence passages and short span answers from those passages (similar to SQuAD). It was 3XQUAD-IN contains all 9 higher-resource languages (see \u00a72) and 3 medium-resources languages, namely, Assamese, Odia, and Punjabi. created with the idea of developing NLP systems that can answer questions in users\u2019 native language by refering to sources in a high-resource language, such as English, which was more likely to contain the answer due to the information scarcity of lowresources languages on the web. The original XORTYDI contains data in 7 languages out of which Bengali and Telugu are the two Indic languages. To create XORQA-IN, we select the 302 Bengali and 237 Telugu examples (Bn/Te-question, Enpassage, En-answer) from the XOR-TYDI dev set as our test data.4 Additionally, we sample 600 examples (equally from Bengali and Telugu) from the training set of XOR-TYDI to create our training (100) and development (500) set. We then follow a two-staged translation process, where we first ask the human translators to translate the Bengali or Telugu question (Bn/Te-question) into English (En-question). In the second stage, we collect translations for these English questions (En-question) into target languages (Xx-question) and translations for the English answers (En-answer) into the target languages (Xx-answer). We create two tasks from this translated data: 1. XORQA-IN-EN: Each example contains (Xxquestion, En-passage, En-answer). This task is similar to the XOR-TYDI dataset in additional Indic languages. 2. XORQA-IN-XX: Each example contains (Xxquestion, En-passage, Xx-answer), where the task is to generate the answer in the same language as the question. We collect data for 28 Indic languages resulting in 32k examples.5 3 Experiments and Analysis We use INDICGENBENCH to benchmark multilingual and cross-lingual language generation capabilities of various LLMs on Indic languages. We perform experiments with a variety of open-source LLMs \u2014 mT5 (Xue et al., 2021), LLaMA (Touvron et al., 2023),6, BLOOMZ (Workshop et al., 2022), Gemma (Team et al., 2024); and proprietary LLMs \u2014 GPT-3.5, GPT-4 (OpenAI et al., 2023), and PaLM-2 (Anil et al., 2023). We compare and analyze the performance of these LLMs and their variants in terms of model sizes under different learning paradigms and set4XOR-TYDI has not publicly released its test set. 5We do not collect translations for Nepali. 6LLaMA-2 could not be used due to a restrictive licence Model (LLM) CROSSSUM-IN FLORES-IN XQUAD-IN XORQA-IN-XX XORQA-IN-EN Eval. Metric ChrF ChrF Token-F1 Token-F1 Token-F1 (enxx / xxen) Performance in English GPT-4 30.3 \u2013 / \u2013 64.8 \u2013 37.9 PaLM-2-L 41.1 \u2013 / \u2013 83.7 \u2013 71.4 Average Performance on INDICGENBENCH LLaMA-7B 3.7 11.5 / 21.6 3.8 7.4 10.4 LLaMA-13B 4.1 13.3 / 24.1 4.5 10.4 12.1 LLaMA-65B 4.6 18.1 / 32.7 7.1 16.5 16.3 BLOOM-7B 3.8 18.3 / 31.2 13.8 7.9 23.6 BLOOMZ-7B 1.2 40.8 / 48.4 53.7 7.0 49.0 Gemma-7B-PT 0.0 32.1 / 50.4 0.5 11.7 23.8 Gemma-7B-IT 11.6 18.6 / 29.2 35.3 13.5 24.8 GPT-3.5 16.3 29.2 / 47.7 33.2 21.6 35.5 GPT-4 17.6 32.1 / 54.5 55.7 23.4 46.0 PaLM-2-XXS 7.2 24.0 / 43.4 34.6 13.5 36.8 PaLM-2-XS 15.5 40.7 / 58.3 62.2 29.5 47.8 PaLM-2-S 18.5 43.5 / 61.6 66.7 31.6 57.4 PaLM-2-L 21.2 47.5 / 65.1 69.3 37.4 55.9 Table 2: One-shot performance on INDICGENBENCH across model sizes for all LLMs considered in our work (\u00a73.1). For each LLM family performance improves with increasing model size, with PaLM-2-L performing the best across most tasks. Compared to English, all models under-perform significantly highlighting shortcomings of current SoTA LLMs. See Section 3.1 for details. tings. We first evaluate model performance on one-shot prompting (\u00a73.1) and also measure performance across language categories based on resourcedness (\u00a73.2). We then evaluate the effect of number of in-context examples shown to the model as supervised data (\u00a73.3) and the effect of prompting in a higher-resource language such an English or Hindi (\u00a73.4). Using the training data contained in INDICGENBENCH, we measure how the performance of LLMs after fine-tuning compares with few-shot prompting (\u00a73.5). Finally, we perform qualitative analysis of models on INDICGENBENCH and highlight some areas of improvement for future model development (\u00a73.7). Evaluation Metrics For the cross-lingual summarization and translation tasks, CROSSSUM-IN and FLORES-IN, we report Character-F1 (ChrF) metric (Popovi\u00b4 c, 2015) since token-level metrics like ROUGE and BLEU are not reliable for lowresource languages (Bapna et al., 2022). To stay consistent with existing literature on QA tasks, we report SQuAD-style Token-F1 on our XQUAD-IN and XORQA-IN QA tasks. On FLORES-IN, we report translation performance in both directions\u2014translating from English to the target language (enxx) and vice-versa (xxen). 3.1 Comparison of LLMs on INDICGENBENCH In Table 2 we evaluate LLaMA, BLOOMZ, Gemma, GPT and PaLM-2 family of models on all tasks of INDICGENBENCH in a one-shot prompted setting. Numbers are averaged across all languages in the evaluation data. To compare, we also report English performance for GPT-4 and PaLM-2-L. We see across tasks that larger models from the same LLM family perform better. PaLM2-L performs the best among all LLMs considered, except for the XORQA-IN-EN task where PaLM-2-S performs slightly better. We find that open source LLaMA models perform much worse compared to proprietary models; even the largest LLaMA-65B model significantly underperforms the smallest PaLM-2-XXS model. Gemma7B instruction tuned model performs better than LLaMA-13B as well as LLaMA-65B on most tasks. BLOOMZ, which is an instruction tuned version of BLOOM (Workshop et al., 2022), pre-trained on large-scale multilingual data, works the best on three out of five tasks in INDICGENBENCH. On CROSSSUM-IN and XORQA-IN-XX it falls behind LLaMA and Gemma. Compared to English, we see significant room for improvement (20+ ChrF or Token-F1 points) across all tasks. 3.2 Performance across language categories In Table 3 we report one-shot performance across language categories defined in Section 2. We only show performance for Gemma-7B-IT, BLOOMZ7B, LLaMA-65B, GPT-4 and PaLM-2-L models here and report performance for the other models in appendix B.1. We find that there is a significant performance drop going from higher resourced languages to medium resourced ones, and further drop in lower resourced languages. We would like to point out two observations here: (a) In FLORES-IN, the performance for translating English to the target language (enxx) drops significantly from higher to lower resourced languages (56.9 \u219241.9 for PaLM-2-L) whereas the performance in the xxen direction does not fall this drastically (68.2 \u219262.6). A similar trend is seen when comparing XORQA-IN-XX and XORQA-IN-EN. This highlights that current LLMs are better at unCROSSSUM-IN FLORES-IN (enxx / xxen) XQUAD-IN XORQA-IN-XX XORQA-IN-EN Model High Medium Low High Medium Low High Medium High Medium Low High Medium Low LLaMA-65B 4.4 4.6 4.7 18.2 / 31.5 15.4 / 30.0 19.5 / 35.0 8.8 1.9 17.7 13.5 17.1 16.4 14.0 17.3 Gemma-7B-IT 13.9 11.5 10.0 17.6 / 33.7 15.0 / 26.1 21.3 / 27.7 38.8 24.8 18.9 8.3 12.2 29.5 23.9 21.9 BLOOMZ-7B 1.5 1.7 0.6 67.7 / 59.1 39.4 / 50.2 22.9 / 40.0 55.5 48.1 10.8 2.8 6.2 64.7 45.8 39.5 GPT-4 19.4 17.9 16.3 36.2 / 59.6 30.7 / 55.2 29.9 / 50.5 56.1 54.6 25.8 21.6 22.6 49.4 50.0 41.8 PaLM-2-L 25.2 23.1 17.5 56.9 / 68.2 45.9 / 65.6 41.9 / 62.6 72.5 59.8 41.9 36.7 34.6 57.3 57.9 53.9 Table 3: One-shot performance across language categories based on resourcedness defined in Section 2. For all tasks, we witness significantly lower performances in medium and low resource languages compared to the higher resource ones. Please see Table 9 in appendix B.1 for results on other models. See Section 3.2 for more details. derstanding these lower-resourced languages than generating fluent text in them. (b) In few cases, we see smaller performance deltas between medium and lower resourced languages compared to higher and medium categories. From our analysis, this can mainly be attributed to many languages in the lower category being similar to Hindi and written in the same Devanagari script. FLORES-IN XORQA-IN-XX Model (LLM) 0 1 5 0 1 2 3 LLaMA-7B 8.0 11.5 11.4 5.0 7.4 9.0 9.2 LLaMA-13B 8.6 13.3 13.4 6.3 10.4 12.2 13.1 LLaMA-65B 14.0 18.1 18.3 12.3 16.5 18.7 19.4 PaLM-2-XXS 0.8 24.0 26.9 8.9 13.5 15.8 17.5 PaLM-2-XS 20.1 40.7 42.3 21.4 29.5 32.2 33.2 PaLM-2-S 24.9 43.5 45.2 22.7 31.6 33.4 35.4 PaLM-2-L 31.1 47.5 49.3 31.9 37.4 39.7 41.1 Table 4: Performance by varying number of incontext exemplars for LLaMA and PaLM-2 models on FLORES-IN (enxx) and XORQA-IN-XX tasks (\u00a73.3). Performance improves with increasing amounts of supervision provided in-context. Refer appendix B.2 for results on other tasks and models. 3.3 In-context learning on INDICGENBENCH In this section we aim to understand the impact of the number of in-context examples shown to the LLM during few-shot prompting. Since CROSSSUM-IN and XQUAD-IN input passages are long, we are only able to perform 0-and-1-shot prompting. For XORQA-IN-XX and XORQA-IN-EN we perform 0-to-3-shot prompting, and for FLORES-IN we perform 0, 1 and 5-shot prompting. We show performance for FLORES-IN and XORQA-IN-XX in Table 4. Other results are shown in appendix B.5 due to space limitations. Across model families and sizes we observe that increasing the amount of supervision in terms of the in-context examples improves performance. 3.4 Transfer from high-resource languages For languages with no supervised data, one option to improve performance is utilizing existing supervised data another language as in-context exemplars. In this section we aim to study if the language in which the model is prompted plays a role in performance. In Table 5 we show performance when the model is prompted in English vs. Hindi, a representative higher resourced Indic language. For comparison, we also show performance when the in-context exemplar is in the same language as the test instance. We find that Hindi in-context exemplars are much more useful for all models as compared to their English counterparts. Surprisingly, for smaller models, performance with Hindi exemplars comes extremely close to prompting in the test language, even better sometimes. 3.5 Fine-tuning LLMs on INDICGENBENCH and Comparison with In-Context Learning As outlined in Section 2, we also release a small, high-quality training set for all tasks in INDICGENBENCH (except FLORES-IN which only has dev and test sets). This training data can be used to adapt LLMs to downstream tasks in Indic languages via fine-tuning and other training techniques. Table 6 shows our results of fine-tuning mT5 and PaLM-2 models and their comparison with incontext learning using PaLM-2. We fine-tune each model on training data from all available languages including English, use the development set for early stopping, and report numbers on the test set. For question-answering tasks that require generating short spans as answers, we find that older generation mT5 models significantly outperform smaller CROSSSUM-IN XQUAD-IN XORQA-IN-XX XORQA-IN-EN Model (1-Shot Lang) Higher Medium Low Higher Medium Higher Medium Low Higher Medium Low PaLM-2-XXS (En) 0.3 0.1 0.3 38.5 31.9 14.0 5.4 7.3 40.3 35.0 30.8 PaLM-2-XXS (Hi) 1.3 2.1 3.7 39.8 33.3 17.6 8.5 10.5 45.5 39.4 31.9 PaLM-2-XXS (Lang) 7.7 7.6 6.7 37.2 26.8 17.7 8.8 12.8 43.6 38.3 31.5 PaLM-2-XS (En) 0.3 0.2 0.5 64.3 62.2 30.6 23.9 20.8 35.9 32.1 27.2 PaLM-2-XS (Hi) 3.5 5.5 9.9 65.4 63.5 33.2 25.8 22.7 49.3 46.8 40.7 PaLM-2-XS (Lang) 18.4 16.4 13.0 65.1 53.3 35.8 27.6 26.1 53.3 51.5 42.2 PaLM-2-S (En) 0.4 0.2 0.5 67.4 66.8 27.5 19.9 19.9 48.6 47.1 40.8 PaLM-2-S (Hi) 4.4 6.9 13.2 68.5 67.5 34.2 27.0 24.9 58.3 57.0 49.0 PaLM-2-S (Lang) 22.4 19.8 15.1 69.9 57.3 36.6 30.3 28.6 60.1 61.4 53.6 PaLM-2-L (En) 0.4 0.2 0.6 71.7 69.8 37.7 33.2 29.7 28.7 27.5 26.2 PaLM-2-L (Hi) 4.7 7.0 13.8 72.6 71.0 39.7 34.6 31.2 45.5 44.8 41.5 PaLM-2-L (Lang) 25.2 23.1 17.5 72.5 59.8 41.9 36.7 34.6 57.3 57.9 53.9 Table 5: Effect of in-context exemplar language (\u00a73.4): Performance comparison when the one-shot exemplar is provided in English (En) or Hindi (Hi) as opposed to the language of the test instance (Lang). In-context prompting in the test language (Lang) provides the best performance, followed by Hindi (Hi) and then English (En). This follows the same order as relatedness between test and prompting language, highlighting the benefit of prompting in a language more related to the test language (e.g., Hindi compared to English in this case). CROSSSUM-IN XQUAD-IN XORQA-IN-XX XORQA-IN-EN Model Higher Medium Low Higher Medium Higher Medium Low Higher Medium Low mT5 models \u2013 Fine-Tuned mT5-B 19.5 18.9 15.1 46.2 30.9 3.8 4.0 5.5 31.7 31.4 30.8 mT5-L 20.5 19.9 15.5 54.3 38.6 11.8 11.0 10.4 56.8 53.7 45.4 mT5-XL 22.7 21.1 15.3 57.4 40.5 20.7 13.5 15.6 58.2 56.2 46.5 mT5-XXL 25.9 24.2 10.4 62.0 44.4 28.8 23.6 21.9 70.3 68.9 59.1 PaLM-2 models Fine-Tuned PaLM-2-XXS 22.5 19.7 16.5 41.2 18.1 18.1 10.9 12.9 60.2 56.9 50.9 PaLM-2-XS 28.5 25.6 18.8 40.2 16.9 30.4 23.6 19.6 69.1 66.6 56.6 PaLM-2 models Few-shot prompted PaLM-2-XXSF S 7.7 7.6 6.7 37.2 26.8 22.7 12.3 16.4 51.6 47.1 38.4 PaLM-2-XSF S 18.4 16.4 13.0 65.1 53.3 39.2 32.0 29.5 67.0 65.3 56.5 Table 6: (Top) Fine-tuning performance of mT5 and PaLM-2 models (\u00a73.5). Bold represents best numbers among fine-tuned models. PaLM-2 outperforms mT5 for longer-form generation task (CROSSSUM-IN), whereas mT5 models do well on short answer-span QA tasks. (Bottom) Comparison of in-context learning vs. fine-tuning on PaLM-2 models. In Green , we highlight the best PaLM-2 number (among fine-tuned and few-shot). For CROSSSUM-IN task requiring longer-form generation, fine-tuning outperforms few-shot prompting. PaLM-2 models in most cases.7 On CROSSSUMIN which requires generating a longer summary, we find that PaLM-2 models are more effective. For Question-Answering tasks, as the model size increases from PaLM-2-XXS to PaLM-2-XS, we see that in-context learning yields equal or better performance compared to fine-tuning the model. For example, in XORQA-IN-XX, as the model size increases from XXS to XS, we see that the gap between few-shot prompting and fine-tuning sig7Since the parameter count for PaLM-2 models is not public, we cannot attribute this performance difference to model sizes. nificantly increases from 2-4% (in XXS) to 9-10% (in XS). In the case of XQUAD-IN, we see that for the larger PaLM-2-XS model, its much better to perform in-context learning as compared to finetuning, for both medium and high resource Indic languages. For XORQA-IN-EN, in-context learning reaches the fine-tuning performance as model size increases to PaLM-2-XS. For the CROSSSUMIN, the gap between fine-tuning and in-context learning is reducing as model size increases, which reinforces that for even larger model sizes, it might be better to learn in-context. 3.6 Analyzing Tokenizer across Indic languages Figure 2: Tokenizer fertility for different languages using OpenAI\u2019s Byte Pair Encoding. We note that midlow resource languages suffer from high token fertility. (Section 3.6) Figure 3: Percentage of in-context XQUAD-IN exemplars that fit in a 1920 token context window. Midlow resource languages\u2019 high token fertility (Figure 2) makes it impossible to perform few-shot prompting in these languages. (Section 3.6) In Figure 2, we compare the token fertility (average number of sub-words that a word is broken down into by the tokenizer) across all Indic langugaes in INDICGENBENCH.8 We find that the token fertility varies significantly across languages; from 4.1 for Pashto to 19.9 for Tibetan. A high token fertility is undesirable and can disproportionately effect a particular language\u2019s performance. For languages where text is broken into more number of tokens, fewer in-context examples 8We use OpenAI\u2019s BPE tokenizer (platform.openai.com/tokenizer). PaLM-2 tokenizer is not publicly available. can be input to the LLM during inference. This can negatively impact performance (see Table 4). In Figure 3, we show how the percentage of data that fits in a particular context length changes with number of in-context examples for various languages. For example, we see in Figure 3 that for medium resource languages with high token-fertility like Oriya and Punjabi we can in-corporate much fewer in-context examples, compared to Indic languages with lower token-fertility like Hindi and Marathi. 3.7 Qualitative Analysis We manually analyze predictions from the best performing model PaLM-2-L with the aim to understand the shortcomings of current LLMs and highlight areas of improvements for future research. We randomly select 20 examples each in the CROSSSUM-IN and FLORES-IN tasks for the following languages which are reviewed by native speakers: Awadhi, Haryanvi, Chhatisgarhi, Konkani, and Assamese. We found the following patterns of errors: Generation in a related language The languages Awadhi, Haryanvi, and Chhatisgarhi are related to a higher resource language Hindi and written in the same script Devanagari. We find that the model generates mixed-language output with words mixed from Hindi and also outputs incorrectly inflected forms of the main verbs in the output. We show couple of examples of this phenomena in Figure 5a in the appendix. Hallucination and Missing Information In the cross-lingual summarization task CROSSSUM-IN, we find that the model often outputs extra information that is not present in the source article. In translation, we have observed examples where come crucial information from the source sentence is missing from the generated output. Also, in some cases, the model fails to understand polysemous English words and generates translation for the incorrect sense. We show examples of these phenomena in Figures 4a, 4b, and 5b in the appendix. 4 Related Work In the last few years, many multilingual LLMs have been developed\u2014starting from mBART (Liu et al., 2020) trained on 25 languages to LLMs that are pre-trained on hundreds of languages, such as mT55 (Xue et al., 2021), PaLM-2 (Anil et al., 2023), GPT-4 (Achiam et al., 2023), Gemini (Google, 2023), and others. These LLMs are typically evaluated on individual multilingual tasks for Translation: WMT (Farhad et al., 2021), FLORES (NLLB-Team et al., 2022); QuestionAnswering: XQuAD (Artetxe et al., 2020), TyDiQA (Clark et al., 2020), XorQA (Asai et al., 2021); Summarization: XLSUM (Hasan et al., 2021a), CrossSum (Bhattacharjee et al., 2023); Reasoning: MGSM (Shi et al., 2022), XCOPA (Ponti et al., 2020) to name a few, or on multilingual benchmarks such as, XTREME (Hu et al., 2020) and XTREME-UP (Ruder et al., 2023). However, most of these evaluation resources contain only a handful of languages or do not contain data for low resource languages, especially Indics. Besides, cross-lingual evaluation data is even more sparse. This work is an effort to bridge these gaps by releasing INDICGENBENCH, a suite of datasets covering diverse cross-lingual and multilingual generation tasks in Indic languages. Most work on creating evaluation data on Indic languages have focused on natural language understanding (NLU) tasks. Kakwani et al. (2020) and Doddapaneni et al. (2023) have released NLU test sets in Indic languages for a wide variety of tasks such as QA and NLI. Naamapadam (Mhaske et al., 2023) is a named entity recognition dataset specifically for Indic languages, MASSIVE (FitzGerald et al., 2022) is a slot-filling and intent classification dataset available in 7 Indic languages, IndicGLUE (Kakwani et al., 2020) is an NLU benchmark for 11 Indic languages, whereas GLUECoS (Khanuja et al., 2020) is a Hindi-English code-mixed benchmark, containing various NLU tasks. The Belebele Benchmark (Bandarkar et al., 2023) is a multiplechoice machine reading comprehension dataset for 122 languages of which 17 are Indic. On the other hand, INDICGENBENCH is a natural language generation (NLG) benchmark. Recently, there has been work in creating evaluation benchmarks for natural language generation (NLG) on Indic languages. IndicNLG Suite (Kumar et al., 2022), consisting of 5 NLG taks in 11 Indic languages, is a leap in this direction. These datasets in this suite are automatically created, either using data from the web (e.g., Wikipedia) or using translation systems. There are few works which create evaluation data for individual tasks in Indic languages. For example, IndicTrans2 (Gala et al., 2023) creates an n-way parallel dataset for machine translation in 22 scheduled Indian Languages, Mukhyansh (Madasu et al., 2023) and PMIndiaSum (Urlana et al., 2023) are headline generation datasets for 8 and 14 Indic languages respectively, and TeSum (Urlana et al., 2022) is an abstractive summarization dataset in the Telugu language. Ramesh et al. (2022) introduced Samanantar, a large translation dataset covering 11 Indic languages. Our work complements IndicNLGSuite and the other datasets in multiple ways. INDICGENBENCH is manually annotated ensuring highquality, noise-free text which is not typically found on the web. Our benchmark contains evaluation data for a much larger set of languages spanning low, medium and high resource. Our datasets are multi-language parallel enabling better comparison among different languages. Lastly, we focus on a complementary and challenging set of tasks, including cross-lingual summarization, cross-lingual and multilingual question answering, and translation. 5 Conclusion We release INDICGENBENCH, the largest benchmark for evaluating LLMs on 5 user-facing generation tasks across 29 Indic languages, providing evaluation data for many under-represented Indic languages for the first time. INDICGENBENCH is broad coverage along many dimensions \u2013 it covers 13 writing scripts, 4 language families, and spans languages across the available web resource spectrum. We carry out extensive comparison of current SoTA LLMs on INDICGENBENCH and highlight areas for future improvement. We are hopeful INDICGENBENCH will play an important role in further development of LLMs in Indic languages ultimately benefiting a billion-plus population. 6 Limitations Since INDICGENBENCH extends existing benchmarks to new Indic languages through human translation, it may miss some India-specific entities and linguistic nuances. Future work can explore translocalization for creating improved evaluation and fine-tuning. INDICGENBENCH doesn\u2019t cover longform generation and reasoning tasks. Creating such datasets is part of our future work. 7 Acknowledgments We thank Aditi Chaudhury, Ashok Popat, Shachi Dave, Sagar Gubbi, Megh Umekar and members of the Languages team at Google Research India (GRI) for providing feedback on this work. The authors would like to thank Manish Gupta and Divy Thakkar for their support and guidance."
16
+ }
intro_8K/test_introduction_short_2404.16818v1.json ADDED
@@ -0,0 +1,16 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16818v1",
3
+ "title": "Boosting Unsupervised Semantic Segmentation with Principal Mask Proposals",
4
+ "abstract": "Unsupervised semantic segmentation aims to automatically partition images\ninto semantically meaningful regions by identifying global categories within an\nimage corpus without any form of annotation. Building upon recent advances in\nself-supervised representation learning, we focus on how to leverage these\nlarge pre-trained models for the downstream task of unsupervised segmentation.\nWe present PriMaPs - Principal Mask Proposals - decomposing images into\nsemantically meaningful masks based on their feature representation. This\nallows us to realize unsupervised semantic segmentation by fitting class\nprototypes to PriMaPs with a stochastic expectation-maximization algorithm,\nPriMaPs-EM. Despite its conceptual simplicity, PriMaPs-EM leads to competitive\nresults across various pre-trained backbone models, including DINO and DINOv2,\nand across datasets, such as Cityscapes, COCO-Stuff, and Potsdam-3.\nImportantly, PriMaPs-EM is able to boost results when applied orthogonally to\ncurrent state-of-the-art unsupervised semantic segmentation pipelines.",
5
+ "authors": "Oliver Hahn, Nikita Araslanov, Simone Schaub-Meyer, Stefan Roth",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Semantic AND Segmentation AND Image",
14
+ "gt": "Semantic image segmentation is a dense prediction task that classifies image pixels into categories from a pre-defined semantic taxonomy. Owing to its fundamental nature, semantic segmentation has a broad range of applications, such as image editing, medical imaging, robotics, or autonomous driving (see Minaee et al., 2022, for an overview). Addressing this problem via supervised learning requires ground-truth labels for every pixel (Long et al., 2015; Ronneberger et al., 2015; Chen et al., 2018b). Such manual annotation is extremely time and resource intensive. For instance, a trained human annotator requires an average of 90 minutes to label up to 30 classes in a single 2 MP image (Cordts et al., 2016). While committing significant resources to large-scale annotation efforts achieves excellent results (Kirillov et al., 2023), there is natural interest in a more economical approach. Alternative lines of research aim to solve the problem using cheaper \u2013 so-called \u201cweaker\u201d \u2013 variants of annotation. For example, image-level supervision describing the semantic categories present in the image, or bounding-box annotations, can reach impressive levels of segmentation accuracy (Dai et al., 2015; Araslanov & Roth, 2020; Oh et al., 2021; Xu et al., 2022; Ru et al., 2023). As an extreme problem scenario toward reducing the annotation effort, unsupervised semantic segmentation aims to consistently discover and categorize image regions in a given data domain without any labels, know- ing only how many classes to discover. Unsupervised semantic segmentation is highly ambiguous as class boundaries and the level of categorical granularity are task-dependent.1 However, we can leverage the fact that typical image datasets have a homogeneous underlying taxonomy and exhibit invariant domain char- acteristics. Therefore, it is still feasible to decompose images in such datasets in a semantically meaningful and consistent manner without annotations. Despite the challenges of unsupervised semantic segmentation, we have witnessed remarkable progress on this task in the past years (Ji et al., 2019; Cho et al., 2021; Van Gansbeke et al., 2021; 2022; Ke et al., 2022; 1While assigning actual semantic labels to regions without annotation is generally infeasible, the assumption is that the categories of the discovered segments will strongly correlate with human notions of semantic meaning. 1 arXiv:2404.16818v1 [cs.CV] 25 Apr 2024 Image Mask 1 Mask 2 Mask 3 PriMaPs (all) Pseudo Label ... ... ... Figure 1: PriMaPs pseudo label example. Principal mask proposals (PriMaPs) are iteratively extracted from an image (dashed arrows). Each mask is assigned a semantic class resulting in a pseudo label. The examples are taken from the Cityscapes (top), COCO-Stuff (middle), and Potsdam-3 (bottom) datasets. Yin et al., 2022; Hamilton et al., 2022; Karlsson et al., 2022; Li et al., 2023; Seong et al., 2023; Seitzer et al., 2023). Deep representations obtained with self-supervised learning (SSL), such as DINO (Caron et al., 2021), have played a critical role in this advance. However, it remains unclear whether previous work leverages the intrinsic properties of the original SSL representations, or merely uses them for \u201cbootstrapping\u201d and learns a new representation on top. Exploiting the inherent properties of SSL features is preferable for two reasons. First, training SSL models incurs a substantial computational effort, justifiable only if the learned feature extractor is sufficiently versatile. In other words, one can amortize the high computational cost over many downstream tasks, provided that task specialization is computationally negligible. Second, studying SSL representations with lightweight tools, such as linear models, leads to a more interpretable empirical analysis than with the use of more complex models, as evidenced by the widespread use of linear probing in SSL evaluation. Such interpretability advances research on SSL models toward improved cross-task generalization. Equipped with essential tools of linear modeling, i. e. Principal Component Analysis (PCA), we generate Principal Mask Proposals, or PriMaPs, directly from the SSL representation. Complementing previous findings on object-centric images (Tumanyan et al., 2022; Amir et al., 2022), we show that principal com- ponents of SSL features tend to identify visual patterns with high semantic correlation also in scene-centric imagery. Leveraging PriMaPs and minimalist post-processing, we construct semantic pseudo labels for each image as illustrated in Fig. 1. Finally, instead of learning a new embedding on top of the SSL representation (Hamilton et al., 2022; Seong et al., 2023; Seitzer et al., 2023; Zadaianchuk et al., 2023), we employ a moving average implementation of stochastic Expectation Maximization (EM) (Chen et al., 2018a) to assign a con- sistent category to each segment in the pseudo labels and directly optimize class prototypes in the feature space. Our experiments show that this straightforward approach not only boosts the segmentation accu- racy of the DINO baseline, but also that of more advanced state-of-the-art approaches tailored for semantic segmentation, such as STEGO (Hamilton et al., 2022) and HP (Seong et al., 2023). We make the following contributions: (i) We derive lightweight mask proposals, leveraging intrinsic properties of the embedding space, e. g., the covariance, provided by an off-the-shelf SSL approach. (ii) Based on the mask proposals, we construct pseudo labels and employ moving average stochastic EM to assign a consistent semantic class to each proposal. (iii) We demonstrate improved segmentation accuracy across a wide range of SSL embeddings and datasets.",
15
+ "main_content": "Our work builds upon recent advances in self-supervised representation learning, and takes inspiration from previous unsupervised semantic and instance segmentation methods. 2 The goal of self-supervised representation learning (SSL) is to provide generic, task-agnostic feature extractors (He et al., 2020; Chen et al., 2020; Grill et al., 2020). A pivotal role in defining the behavior of self-supervised features on future downstream tasks is taken by the self-supervised objective, the so-called pretext task. Examples of such tasks include predicting the context of a patch (Doersch et al., 2015) or its rotation (Gidaris et al., 2018), image inpainting (Pathak et al., 2016), and \u201csolving\u201d jigsaw puzzles (Noroozi & Favaro, 2016). Another family of self-supervised techniques is based on contrastive learning (Chen et al., 2020; Caron et al., 2020). More recently, Transformer networks (Dosovitskiy et al., 2020) revived some older pretext tasks, such as context prediction (Caron et al., 2021; He et al., 2022), in a more data-scalable fashion. While the standard evaluation practice in SSL (e. g., linear probing, transfer learning) offers some glimpse into the feature properties, understanding the embedding space produced by SSL remains an active terrain for research (Ericsson et al., 2021; Naseer et al., 2021). In particular, DINO features (Caron et al., 2021; Oquab et al., 2024) are known to encode accurate object-specific information, such as object parts (Amir et al., 2022; Tumanyan et al., 2022). However, it remains unclear to what extent DINO embeddings allow for semantic representation of the more ubiquitous multi-object scenes. Here, following previous work (e. g., Hamilton et al., 2022; Seong et al., 2023), we provide further insights. Early techniques for unsupervised semantic segmentation using deep networks (Cho et al., 2021; Van Gansbeke et al., 2021) approach the problem in the spirit of transfer learning and, under certain nomenclatures, may not be considered fully unsupervised. Specifically, starting with supervised ImageNet pre-training (Russakovsky et al., 2015), a network obtains a fine-tuning signal from segmentation-oriented training objectives. Such supervised \u201cbootstrapping\u201d appears to be crucial in the ill-posed unsupervised formulation. Unsupervised training of a deep model for segmentation from scratch is possible, albeit sacrificing accuracy (Ji et al., 2019; Ke et al., 2022). However, training a new deep model for each downstream task contradicts the spirit of SSL of amortizing the high SSL training costs over many computationally cheap specializations of the learned features (Bommasani et al., 2021). Relying on self-supervised DINO pre-training, recent work (Hamilton et al., 2022; Li et al., 2023; Seong et al., 2023) has demonstrated the potential of such amortization with more lightweight fine-tuning for semantic segmentation. Nevertheless, most of this work (e. g., Hamilton et al., 2022; Van Gansbeke et al., 2022) has treated the SSL representation as an inductive prior by learning a new embedding space over the SSL features (e. g., Hamilton et al., 2022; Seong et al., 2023). In contrast, following SSL principles, we use the SSL representation in a more direct and lightweight fashion \u2013 by extracting mask proposals using linear models (PCA) with minimal post-processing and learning a direct mapping from feature to prediction space. Mask proposals have an established role in computer vision (Arbelaez et al., 2011; Uijlings et al., 2013), and remain highly relevant in deep learning (Hwang et al., 2019; Van Gansbeke et al., 2021; Yin et al., 2022). Different from previous work, we directly derive the mask proposals from SSL representations. Our approach is inspired by the recent use of classical algorithms, such as normalized cuts (Ncut Shi & Malik, 2000), in the context of self-supervised segmentation (Wang et al., 2023a;b). However previous approaches (Van Gansbeke et al., 2021; 2022; Wang et al., 2023a;b) mainly proposed foreground object masks on object-centric data, utilized in a multi-step self-training. In contrast, we develop a straightforward method for extracting dense pseudo labels for learning unsupervised semantic segmentation of scene-centric data and show consistent benefits in improving the segmentation accuracy across a variety of baselines and state-of-the-art methods (Hamilton et al., 2022; Seong et al., 2023). 3 PriMaPs: Principal Mask Proposals This work leverages recent advances in self-supervised representation learning (Caron et al., 2021; Oquab et al., 2024) for the specific downstream task of unsupervised semantic segmentation. Our approach is based on the observation that such pre-trained features already exhibit intrinsic spatial similarities, capturing semantic correlations, providing guidance to fit global pseudo-class representations. A simple baseline. Consider a simple baseline that applies K-means clustering to DINO ViT features (Caron et al., 2021). Surprisingly, this already leads to reasonably good unsupervised semantic segmentation results, e. g., around 15 % mean IoU to segment 27 classes on Cityscapes (Cordts et al., 2016), see Tab. 1. 3 However, supervised linear probing between the same feature space and the ground-truth labels \u2013 the theoretical upper bound \u2013 leads to clearly superior results of almost 36 %. Given this gap and the simplicity of the approach, we conclude that there is valuable potential in directly obtaining semantic segmentation without enhancing the original feature representation, unlike in previous work (Hamilton et al., 2022; Seong et al., 2023). From K-means to PriMaPs-EM. When examining the K-means baseline as well as state-of-the-art methods (Hamilton et al., 2022; Seong et al., 2023), see Fig. 4, it can be qualitatively observed that more local consistency within the respective predictions would already lead to less mis-classification. We take inspiration from (Drineas et al., 2004; Ding & He, 2004), who showed that the PCA subspace, spanned by principal components, is a relaxed solution to K-means clustering. We observe that principal components have high semantic correlation for objectas well as scene-centric image features (cf. Fig. 1). We utilize this by iteratively partitioning images based on dominant feature patterns, identified by means of the cosine similarity of the image features to the respective first principal component. We name the resulting class-agnostic image decomposition PriMaPs \u2013 Principal Mask Proposals. PriMaPs stem directly from SSL representations and guide the process of unsupervised semantic segmentation. Shown in Fig. 2, our optimization-based approach, PriMaPs-EM, operates over an SSL feature representation computed from a frozen deep neural network backbone. The optimization realizes stochastic EM of a clustering objective guided by PriMaPs. Specifically, PriMaPs-EM fits class prototypes to the proposals in a globally consistent manner by optimizing over two identically sized vector sets, with one of them being an exponential moving average (EMA) of the other. We show that PriMaPs-EM enables accurate unsupervised partitioning of images into semantically meaningful regions while being highly lightweight and orthogonal to most previous approaches in unsupervised semantic segmentation. 3.1 Deriving PriMaPs We start with a frozen pre-trained self-supervised backbone model F : R3\u00d7h\u00d7w \u2192RC\u00d7H\u00d7W , which embeds an image I \u2208R3\u00d7h\u00d7w into a dense feature representation f \u2208RC\u00d7H\u00d7W as f = F(I) . (1) Here, C refers to the channel dimension of the dense features, and H = h/p, W = w/p with p corresponding to the output stride of the backbone. Based on this image representation, the next step is to decompose the image into semantically meaningful masks to provide a local grouping prior for fitting global class prototypes. Initial principal mask proposal. To identify the initial principal mask proposal in an image I, we analyze the spatial statistical correlations of its features. Specifically, we consider the empirical feature covariance matrix \u03a3 = 1 HW H X i=1 W X j=1 \u0000f:,i,j \u2212\u00af f \u0001\u0000f:,i,j \u2212\u00af f \u0001\u22a4, (2) where f:,i,j \u2208RC are the features at position (i, j) and \u00af f \u2208RC is the mean feature. To identify the feature direction that captures the largest variance in the feature distribution, we seek the first principal component of \u03a3 by solving \u03a3v = \u03bbv . (3) We obtain the first principal component as the eigenvector v1 to the largest eigenvalue \u03bb1, which can be computed efficiently with Singular Value Decomposition (SVD) using the flattened features f. To identify a candidate region, our next goal is to compute a spatial feature similarity map to the dominant feature direction. We observe that doing so directly with the principal direction does not always lead to sufficient localization, i. e., high similarities arise across multiple visual concepts in an image, elaborated in more detail in Appendix A.1. This can be circumvented by first spatially anchoring the dominant feature vector in the feature map. To that end, we obtain the nearest neighbor feature \u02dc f \u2208RC of the first principal component v1 by considering the cosine distance in the normalized feature space \u02c6 f as \u02dc f = \u02c6 f:,i,j , where (i, j) = arg max i,j \u0000v\u22a4 1 \u02c6 f \u0001 . (4) 4 Given this, we compute the cosine-similarity map M \u2208RH\u00d7W of the dominant feature w. r. t. all features as M = (Mi,j)i,j , where Mi,j = \u0000 \u02dc f \u0001\u22a4\u02c6 f:,i,j . (5) Next, a threshold \u03c8 \u2208(0, 1) is applied to the similarity map in order to suppress noise and further localize the initial mask. Accordingly, elements of a binary similarity map P 1 \u2208{0, 1}H\u00d7W are set to 1 when larger than a fraction \u03c8 of the maximal similarity, and 0 otherwise, i. e., P 1 = h Mi,j > \u03c8 \u00b7 max m,n Mm,n i i,j , (6) where [\u00b7] denotes the Iverson bracket. This binary principal mask P 1 gives rise to the first principal mask proposal in image I. Further principal mask proposals. Subsequent mask proposals result from iteratively repeating the described procedure. To that end, it is necessary to suppress features that have already been assigned to a pseudo label. Specifically in iteration z, given the mask proposals P s, s = 1, . . . , z \u22121, extracted in previous iterations, we mask out the features that have already been considered as f z :,i,j = f:,i,j \u0014Xz\u22121 s=1 P s i,j = 0 \u0015 . (7) Applying Eqs. (2) to (6) on top of the masked features f z yields principal mask proposal P z, and so on. We repeat this procedure until the majority of features (e. g., 95%) have been assigned to a mask. In a final step, the remaining features, in case there are any, are assigned to an \u201cignore\u201d mask P 0 i,j = 1 \u2212 Z\u22121 X z=1 P z i,j . (8) This produces a tensor P \u2208{0, 1}Z\u00d7H\u00d7W of Z spatial similarity masks decomposing a single image into Z non-overlapping regions. Proposal post-processing. To further improve the alignment of the masks with edges and color-correlated regions in the image, a fully connected Conditional Random Field (CRF) with Gaussian edge potentials (Kr\u00e4henb\u00fchl & Koltun, 2011) is applied to the initial mask proposals P (after bilinear upsampling to the image resolution) for 10 inference iterations. In order to form a pseudo label for semantic segmentation out of the Z mask proposals, each mask has to be assigned one out of K class labels. This is accomplished using a segmentation prediction of our optimization process, called PriMaPs-EM, detailed below. The entire PriMaPs pseudo label generation process is illustrated in Figure 2b. 3.2 PriMaPs-EM Shown in Fig. 2, PriMaPs-EM is an iterative optimization technique. It leverages the frozen pre-trained self-supervised backbone model F and two identically sized vector sets, the class prototypes \u03b8S and their moving average, the momentum class prototypes \u03b8T . The class prototypes \u03b8S and \u03b8T are the K pseudo class representations in the feature space, projecting the C-dimensional features linearly to K semantic pseudo classes. PriMaPs-EM constructs pseudo labels using PriMaPs, which provide guidance through local consistency for fitting the global class prototypes. In every optimization iteration, we compute the segmentation prediction y from the momentum class prototypes \u03b8T . Next, we assign the pseudo-class ID that is most frequently predicted within each proposal, yielding the final pseudo-label map P \u2217\u2208{0, 1}K\u00d7h\u00d7w, a one-hot encoding of a pseudo-class ID. Finally, we optimize the class prototypes \u03b8S using the pseudo label. PriMaPs-EM consists of two stages, since in our case a meaningful initialization of the class prototypes is vital to provide a reasonable optimization signal. This can be traced back to the pseudo-label generation, which utilizes a segmentation prediction to assign globally consistent classes to the masks. Initializing the class prototypes randomly leads to a highly unstable and noisy signal. 5 Image I Aug. Image I\u2032 F _ F _ f \u03b8T f \u2032 \u03b8S \u00b7 \u00b7 y y\u2032 Gen. PriMaPs Pseudo Label Lfocal EMA (a) PriMaPs-EM architecture Features Image label \u03b8T Pred. Pseudo 1st PC NN Similarity Map Feat. Mask. ID Assign CRF P (b) PriMaPs pseudo label generation Figure 2: (a) PriMaPs-EM architecture. Images are embedded by the frozen self-supervised backbone F. First, both class prototypes \u03b8S and \u03b8T are initialized via a clustering objective. The segmentation prediction y from the momentum class prototypes \u03b8T arises via a dot product with the image features f. While PriMaPs are based on f alone, the pseudo labels additionally use the image I and the segmentation prediction y from the momentum class prototypes \u03b8T . We use the pseudo labels to optimize the class prototypes \u03b8S, which are gradually transferred to \u03b8T by means of an EMA. (b) PriMaPs pseudo label generation. Masks are proposed by iterative binary partitioning based on the cosine similarity of the features of any unassigned pixel to their first principal component. Next, the masks P are aligned to the image using a CRF (Kr\u00e4henb\u00fchl & Koltun, 2011). Finally, a pseudo-class ID is assigned per mask based on the segmentation prediction from the \u03b8T . Gray indicates iterative steps. Initialization. We initialize the class prototypes \u03b8T with the first K principal components. Next, a cosine distance batch-wise K-means (MacQueen, 1967) loss LK-means(\u03b8T ) = \u2212 X i,j max \u0000\u03b8\u22a4 T f:,i,j \u0001 (9) is minimized with respect to \u03b8T for a fixed number of epochs. This minimizes the cumulative cosine distances of the image features f:,i,j to their respective closest class prototype. \u03b8S is initialized with the same prototypes. Moving average stochastic EM. In each iteration, we use the backbone features and momentum class prototypes \u03b8T to yield a segmentation prediction y from which pseudo labels are generated as described in Sec. 3.1. \u03b8S is optimized by applying a batch-wise focal loss (Lin et al., 2020) with respect to these pseudo labels. The focal loss Lfocal is a weighted version of the cross-entropy loss, increasing the loss contribution of less confident classes, i. e., Lfocal(\u03b8S; y\u2032) = \u2212 X k,i,j (1 \u2212\u03c7k)2P \u2217 k,i,j log(y\u2032 k,i,j) , (10) where y\u2032 :,i,j = softmax(\u03b8\u22a4 S f:,i,j) are the predictions and \u03c7k is the class-wise confidence value approximated by averaging y\u2032 :,i,j spatially. The class prototypes \u03b8S are optimized with an augmented input image I\u2032. We employ photometric augmentations (Gaussian blur, grayscaling, and color jitter), introducing a controlled noise, thereby strengthening the robustness of our class representation. The momentum class prototypes \u03b8T are the exponential moving average of the class prototypes \u03b8S. This is utilized in order to stabilize the optimization, accounting for the noisy nature of unsupervised signal used for optimization. We update \u03b8T every \u03b3t iterations with a decay \u03b3\u03c8 as \u03b8t+\u03b3t T = \u03b3\u03c8\u03b8t T + (1 \u2212\u03b3\u03c8)\u03b8t+\u03b3t S , (11) where t is the iteration index of the previous update. This optimization approach resembles moving average stochastic EM. Hereby, the E-step amounts to finding pseudo labels using PriMaPs and the momentum class prototypes. The M-step optimizes the class prototypes with respect to their focal loss Lfocal. Stochasticity arises from performing EM in mini-batches. 6 Inference. At inference time, we obtain a segmentation prediction from the momentum class prototypes \u03b8T , refined using a fully connected CRF with Gaussian edge potentials (Kr\u00e4henb\u00fchl & Koltun, 2011) following previous approaches (Van Gansbeke et al., 2021; Hamilton et al., 2022; Seong et al., 2023). This is the identical CRF as already used for refining the masks in the PriMaPs pseudo-label generation. We use the identical CRF parameters as previous work (Van Gansbeke et al., 2021; Hamilton et al., 2022; Seong et al., 2023). 4 Experiments To assess the efficacy of our approach, we compare it to the current state-of-the-art in unsupervised semantic segmentation. For a fair comparison, we closely follow the overall setup used by numerous previous works (Ji et al., 2019; Cho et al., 2021; Hamilton et al., 2022; Seong et al., 2023). 4.1 Experimental Setup Datasets. Following the practice of previous work, we conduct experiments on Cityscapes (Cordts et al., 2016), COCO-Stuff (Caesar et al., 2018), and Potsdam-3 (ISPRS). Cityscapes and COCO-Stuff are evaluated using 27 classes, while Potsdam is evaluated on the 3-class variant. Adopting the established evaluation protocol (Ji et al., 2019; Cho et al., 2021; Hamilton et al., 2022; Seong et al., 2023), we resize images to 320 pixels along the smaller axis and crop the center 320 \u00d7 320 pixels. This is adjusted to 322 pixels for DINOv2. Different from previous work, we apply this simple scheme throughout this work, thus dispensing with elaborate multi-crop approaches of previous methods (Hamilton et al., 2022; Yin et al., 2022; Seong et al., 2023). Self-supervised backbone. Experiments are conducted across a collection of pre-trained self-supervised feature embeddings: DINO (Caron et al., 2021) based on ViT-Small and ViT-Base using 8 \u00d7 8 patches; and DINOv2 (Oquab et al., 2024) based on ViT-Small and ViT-Base using 14 \u00d7 14 patches. In the spirit of SSL principles, we keep the backbone parameters frozen throughout the experiments. We use the output from the last network layer as our SSL feature embeddings. Since PriMaPs-EM is agnostic to the used embedding space, we can also apply it on top of current state-of-the-art unsupervised segmentation pipelines. Here, we consider STEGO (Hamilton et al., 2022) and HP (Seong et al., 2023), which also use DINO features but learn a target domain-specific subspace. Baseline. Following (Hamilton et al., 2022; Seong et al., 2023), we train a single linear layer as a baseline with the same structure as \u03b8S and \u03b8T by minimizing the cosine distance batch-wise K-Means loss from Eq. (9). Hereby, parameters, such as the number of epochs and the learning rate, are identical to those used when employing PriMaPs-EM. PriMaPs-EM. As discussed in Sec. 3.2, the momentum class prototypes \u03b8T are initialized using the first K principal components; we use 2975 images for PCA, as this is the largest number of training images shared by all datasets. Next, \u03b8T is pre-trained by minimizing Eq. (9) using Adam (Kingma & Ba, 2015). We use a learning rate of 0.005 for 2 epochs on all datasets and backbones. The weights are then copied to \u03b8S. For fitting the class prototypes using EM, \u03b8S is optimized by minimizing the focal loss from Eq. (10) with Adam (Kingma & Ba, 2015) using a learning rate of 0.005. The momentum class prototypes \u03b8T are updated using an EMA according to Eq. (11) every \u03b3s = 10 steps with decay \u03b3\u03c8 = 0.98. We set the PriMaPs mask proposal threshold to \u03c8 = 0.4. We use a batch size of 32 for 50 epochs on Cityscapes and Potsdam-3, and use 5 epochs on COCO-Stuff due to its larger size. Importantly, the same hyperparameters are used across all datasets and backbones. Moreover, note that fitting class prototypes with PriMaPs-EM is quite practical, e. g., about 2 hours on Cityscapes. Experiments are conducted on a single NVIDIA A6000 GPU. Supervised upper bounds. To assess the potential of the SSL features used, we report supervised upper bounds. Specifically, we train a linear layer using cross-entropy and Adam with a learning rate of 0.005. Since PriMaPs-EM uses frozen SSL features, its supervised bound is the same as that of the underlying features. This is not the case, however, for prior work (Hamilton et al., 2022; Seong et al., 2023), which project the feature representation affecting the upper bound. 7 Table 1: Cityscapes \u2013 PriMaPs-EM (Ours) comparison to existing unsupervised semantic segmentation methods, using Accuracy and mean IoU (in %) for unsupervised and supervised probing. Double citations refer to a method\u2019s origin and the work conducting the experiment. Method Backbone Unsupervised Supervised Acc mIoU Acc mIoU IIC (Ji et al., 2019; Cho et al., 2021) 47.9 6.4 \u2013 \u2013 MDC (Caron et al., 2018; Cho et al., 2021) 40.7 7.1 \u2013 \u2013 PiCIE (Cho et al., 2021) 65.5 12.3 \u2013 \u2013 VICE (Karlsson et al., 2022) ResNet18 +FPN 31.9 12.8 86.3 31.6 Baseline (Caron et al., 2021) 61.4 15.8 91.0 35.4 + TransFGU (Yin et al., 2022) 77.9 16.8 \u2013 \u2013 + HP (Seong et al., 2023) 80.1 18.4 91.2 30.6 + PriMaPs-EM 81.2 19.4 91.0 35.4 + HP (Seong et al., 2023) + PriMaPs-EM DINO ViT-S/8 76.6 19.2 91.2 30.6 Baseline (Caron et al., 2021) 49.2 15.5 91.6 35.9 + STEGO (Hamilton et al., 2022; Koenig et al., 2023) 73.2 21.0 89.6 28.0 + HP (Seong et al., 2023) 79.5 18.4 90.9 33.0 + PriMaPs-EM 59.6 17.6 91.6 35.9 + STEGO (Hamilton et al., 2022) + PriMaPs-EM DINO ViT-B/8 78.6 21.6 89.6 28.0 Baseline (Oquab et al., 2024) 49.5 15.3 90.8 41.9 + PriMaPs-EM DINOv2 ViT-S/14 71.5 19.0 90.8 41.9 Baseline (Oquab et al., 2024) 36.1 14.9 91.0 44.8 + PriMaPs-EM DINOv2 ViT-B/14 82.9 21.3 91.0 44.8 Evaluation. For inference, we use the prediction from the momentum class prototypes \u03b8T . CRF refinement uses 10 inference iterations and standard parameters a = 4, b = 3, \u03b8\u03b1 = 67, \u03b8\u03b2 = 3, \u03b8\u03b3 = 1 from prior work (Van Gansbeke et al., 2021; Hamilton et al., 2022; Seong et al., 2023). We evaluate common metrics in unsupervised semantic segmentation, specifically the mean Intersection over Union (mIoU) and Accuracy (Acc) over all classes after aligning the predicted class IDs with ground-truth labels by means of Hungarian matching (Kuhn, 1955). SotA + PriMaPs-EM. To explore our method\u2019s potential, we additionally employ PriMaPs-EM on top of STEGO (Hamilton et al., 2022) and HP (Seong et al., 2023). For each backbone-dataset combination, we apply it on top of the best previous method in terms of mIoU. To that end, the training signal for learning the feature projection of (Hamilton et al., 2022; Seong et al., 2023) remains unchanged. We apply PriMaPs-EM fully orthogonally, using the DINO backbone features for pseudo-label generation and fit a direct connection between the feature space of the state-of-the-art method and the prediction space. 4.2 Results We compare PriMaPs-EM against prior work for unsupervised semantic segmentation (Ji et al., 2019; Cho et al., 2021; Hamilton et al., 2022; Yin et al., 2022; Li et al., 2023; Seong et al., 2023). As in previous work, we use DINO (Caron et al., 2021) as the main baseline. Additionally, we also test PriMaPs-EM on top of DINOv2 (Oquab et al., 2024), STEGO (Hamilton et al., 2022), and HP (Seong et al., 2023). Overall, we observe that the DINO baseline already achieves strong results (cf. Tabs. 1 to 3). DINOv2 features significantly raise the supervised upper bounds in terms of Acc and mIoU, the improvement in the unsupervised case remains more modest. Nevertheless, PriMaPs-EM further boosts the unsupervised segmentation performance. In Tab. 1, we compare to previous work on the Cityscapes dataset. PriMaPs-EM leads to a consistent improvement over all baselines in terms of unsupervised segmentation accuracy. For example, PriMaPsEM boosts DINO ViT-S/8 by +3.6% and +19.8% in terms of mIoU and Acc, respectively, which leads to 8 Table 2: COCO-Stuff \u2013 PriMaPs-EM (Ours) comparison to existing unsupervised semantic segmentation methods, using Accuracy and mean IoU (in %) for unsupervised and supervised probing. Double citations refer to a method\u2019s origin and the work conducting the experiment. Method Backbone Unsupervised Supervised Acc mIoU Acc mIoU IIC (Ji et al., 2019; Cho et al., 2021) 21.8 6.7 44.5 8.4 MDC (Caron et al., 2018; Cho et al., 2021) 32.2 9.8 48.6 13.3 PiCIE (Cho et al., 2021) 48.1 13.8 54.2 13.9 PiCIE+H (Cho et al., 2021) 50.0 14.4 54.8 14.8 VICE (Karlsson et al., 2022) ResNet18 +FPN 28.9 11.4 62.8 25.5 Baseline (Caron et al., 2021) 34.2 9.5 72.0 41.3 + TransFGU (Yin et al., 2022) 52.7 17.5 \u2013 \u2013 + STEGO (Hamilton et al., 2022) 48.3 24.5 74.4 38.3 + ACSeg (Li et al., 2023) \u2013 16.4 \u2013 \u2013 + HP (Seong et al., 2023) 57.2 24.6 75.6 42.7 + PriMaPs-EM 46.5 16.4 72.0 41.3 + HP (Seong et al., 2023) + PriMaPs-EM DINO ViT-S/8 57.8 25.1 75.6 42.7 Baseline (Caron et al., 2021) 38.8 15.7 74.0 44.6 + STEGO (Hamilton et al., 2022) 56.9 28.2 76.1 41.0 + PriMaPs-EM 48.5 21.9 74.0 44.6 + STEGO (Hamilton et al., 2022) + PriMaPs-EM DINO ViT-B/8 57.9 29.7 76.1 41.0 Baseline (Oquab et al., 2024) 44.5 22.9 77.9 52.8 + PriMaPs-EM DINOv2 ViT-S/14 46.5 23.8 77.9 52.8 Baseline (Oquab et al., 2024) 35.0 17.9 77.3 53.7 + PriMaPs-EM DINOv2 ViT-B/14 52.8 23.6 77.3 53.7 state-of-the-art performance. Notably, we find PriMaPs-EM to be complementary to other state-of-the-art unsupervised segmentation methods like STEGO (Hamilton et al., 2022) and HP (Seong et al., 2023) on the corresponding backbone model. This suggests that these methods use their SSL representation only to a limited extent and do not fully leverage the inherent properties of the underlying SSL embeddings. Similar observations can be drawn for the experiments on COCO-Stuff in Tab. 2. PriMaPs-EM leads to a consistent improvement across all four SSL baselines, as well as an improvement over STEGO and HP. For instance, combining STEGO with PriMaPs-EM leads to +14.0% and +19.1% improvement over the baseline in terms of mIoU and Acc for DINO ViT-B/8. Experiments on the Potsdam-3 dataset follow the same pattern (cf. Tab. 3). PriMaPs-EM leads to a consistent gain over the baseline, e. g. +17.6% and +14.4% in terms of mIoU and Acc, respectively, for DINO ViT-B/8. Moreover, it also boosts the accuracy of STEGO and HP. In some cases, the gain of PriMaPs-EM is limited. For example, in Tab. 1 for DINO ViT-B/8 + PriMaPs-EM, the class prototype for \u201csidewalk\u201d is poor while the classes \u201croad\u201d and \u201cvegetation\u201d superimpose smaller objects. For DINO ViT-S/8 + PriMaPs-EM in Tab. 3, the class prototype \u201croad\u201d is poor. This limits the overall performance of our method while still outperforming the respective baseline in both cases. Overall, PriMaPs-EM provides modest but consistent benefits over a wide range of baselines and datasets and reaches competitive segmentation performance w. r. t. the state-of-the-art using identical hyperparameters across all backbones and datasets. Recalling the simplicity of the techniques behind PriMaPs, we believe that this is a significant result. The complementary effect of PriMaPs-EM on other state-of-the-art methods (STEGO, HP) further suggests that they rely on DINO features for mere \u201cbootstrapping\u201d and learn feature representations with orthogonal properties to those of DINO. We conclude that PriMaPs-EM constitutes a straightforward, entirely orthogonal tool for boosting unsupervised semantic segmentation. 4.3 Ablation Study To untangle the factors behind PriMaPs-EM, we examine the individual components in a variety of ablation experiments to access the contribution. 9 Table 3: Potsdam-3 \u2013 PriMaPs-EM (Ours) comparison to existing unsupervised semantic segmentation methods, using Accuracy and mean IoU (in %) for unsupervised and supervised probing. Double citations refer to a method\u2019s origin and the work conducting the experiment. Method Backbone Unsupervised Supervised Acc mIoU Acc mIoU RandomCNN (Cho et al., 2021) 38.2 \u2013 \u2013 \u2013 K-Means (Pedregosa et al., 2011; Cho et al., 2021) 45.7 \u2013 \u2013 \u2013 SIFT (Lowe, 2004; Cho et al., 2021) 38.2 \u2013 \u2013 \u2013 ContextPrediction (Doersch et al., 2015; Cho et al., 2021) 49.6 \u2013 \u2013 \u2013 CC (Isola et al., 2015; Cho et al., 2021) 63.9 \u2013 \u2013 \u2013 DeepCluster (Caron et al., 2018; Cho et al., 2021) 41.7 \u2013 \u2013 \u2013 IIC (Ji et al., 2019; Cho et al., 2021) VGG 11 65.1 \u2013 \u2013 \u2013 Baseline (Caron et al., 2021) 56.6 33.6 82.0 69.0 + STEGO (Hamilton et al., 2022; Koenig et al., 2023) 77.0 62.6 85.9 74.8 + PriMaPs-EM 62.5 38.9 82.0 69.0 + STEGO (Hamilton et al., 2022) + PriMaPs-EM DINO ViT-S/8 78.4 64.2 85.9 74.8 Baseline (Caron et al., 2021) 66.1 49.4 84.3 72.8 + HP (Seong et al., 2023) 82.4 69.1 88.0 78.4 + PriMaPs-EM 80.5 67.0 84.3 72.8 + HP (Seong et al., 2023)+ PriMaPs-EM DINO ViT-B/8 83.3 71.0 88.0 78.4 Baseline (Oquab et al., 2024) 75.9 61.0 86.6 76.2 + PriMaPs-EM DINOv2 ViT-S/14 78.5 64.3 86.6 76.2 Baseline (Oquab et al., 2024) 82.4 69.9 87.9 78.3 + PriMaPs-EM DINOv2 ViT-B/14 83.2 71.1 87.9 78.3 Table 4: Ablation study analyzing design choices and components in the PriMaPs pseudo-label generation (a) and PriMaPs-EM (b) for COCO-Stuff using DINO ViT-B/8. (a) PriMaPs pseudo label ablation Method Acc mIoU Baseline (Caron et al., 2021) 38.8 15.7 Similarity Masks 46.3 19.8 + NN 44.9 20.0 + P-CRF (\u2261PriMaPs-EM) 48.4 21.9 PriMaPs-EM (non-iter.) 47.9 21.7 (b) PriMaPs-EM ablation Method Acc mIoU Baseline (Caron et al., 2021) 38.8 15.7 + PriMaPs pseudo label 38.8 18.0 + EMA 45.0 20.2 + Augment 46.0 20.4 + CRF (\u2261PriMaPs-EM) 48.4 21.9 PriMaPs pseudo-label ablations. In Tab. 4a, we analyze the contribution of the individual sub-steps for PriMaPs pseudo-label generation by increasing the complexity of label generation. We provide the DINO baseline, which corresponds to K-means feature clustering, for reference. In the most simplified case, we directly use the similarity mask, similar to Eq. (4). Next, we use the nearest neighbor (+NN in Tab. 4a) of the principal component to get the masks as in Eq. (5), followed by the full approach with CRF refinement (+P-CRF). Except for the changes in the pseudo-label generation, the optimization remains as described in Sec. 4.1. We observe that the similarity masks already provide a good staring point, yet we identify a gain from every single component step. This suggests that using the nearest neighbor improves the localization of the similarity mask. Similarly, CRF refinement improves the alignment between the masks and the image content. We also experiment with using the respective next principal direction (non-iter.) instead of iteratively extracting the first component from masked features. This leads to slightly inferior results. PriMaPs-EM architecture ablations. In a similar vein, we analyze the contribution of the different architectural components of PriMaPs-EM. Optimizing over a single set of class prototypes using the proposed 10 Table 5: Oracle quality assessment of PriMaPs pseudo labels for Cityscapes, COCO-Stuff, and Potsdam-3 by assigning oracle class IDs to the masks. \u201cPseudo\u201d refers to evaluating only the pixels contained in the pseudo label, \u201cAll\u201d to evaluating including the \u201cignore\u201d assignments of the pseudo label. Method Cityscapes COCO-Stuff Potsdam-3 Acc mIoU Acc mIoU Acc mIoU Pseudo 92.4 54.0 93.4 82.4 95.2 90.9 All 73.2 32.4 74.1 55.9 67.4 48.9 DINO ViT-B/8 Baseline (Caron et al., 2021) 49.2 15.5 38.8 15.7 66.1 49.4 Cityscapes COCO-Stuff Potsdam-3 Ground truth PriMaPs Oracle IDs PriMaPs Colored Image Figure 3: Qualitative PriMaPs examples using DINO ViT-B/8 for Cityscapes, COCO-Stuff, and Potsdam-3. PriMaPs Colored \u2013 each mask proposal is visualized in a different color. PriMaPs Oracle class IDs \u2013 each mask is colored in the corresponding ground-truth class color. PriMaPs pseudo labels already provides moderate improvement (+PriMaPs pseudo label in Tab. 4b), despite the disadvantage of an unstable and noisy optimization signal. Adding the EMA (+EMA) leads to a more stable optimization and further improved segmentation. Augmenting the input (+Augment) results in a further gradual improvement. Similarly, refining the prediction with a CRF improves the results further (+CRF). Assessing PriMaPs pseudo labels. To estimate the quality of the pseudo labels, respectively the principal masks, we decouple those from the class ID assignment by providing the oracle ground-truth class for each mask in Tab. 5. To that end, we evaluate all pixels included in our pseudo labels (\u201cPseudo\u201d), corresponding to the upper bound of our optimization signal. Furthermore, we evaluate \u201cAll\u201d by assigning the \u201cignore\u201d pixels to a wrong class. The results indicate a high quality of the pseudo-label maps. Fig. 3 shows qualitative examples of the PriMaPs mask proposals and pseudo labels. We visualize individual masks, each in a different color (PriMaPs Colored). We also display oracle pseudo labels assigning each mask a color based on the ground-truth label (PriMaPs Oracle class IDs). We observe that the mask proposals align well with the ground-truth labels across all three datasets, generalizing across three distinct domains. PriMaPs effectively partitions images into semantically meaningful masks. Qualitative results. We show qualitative results for Cityscapes, COCO-Stuff, and Potsdam-3 in Fig. 4. We observe that PriMaPs-EM leads to less noisy results compared to the baseline, showcasing an improved 11 Cityscapes COCO-Stuff Potsdam-3 STEGO + PriMaPs-EM STEGO PriMaPs-EM Baseline Ground Truth Image Figure 4: Qualitative results for the DINO ViT-B/8 baseline, PriMaPs-EM (Ours), STEGO (Hamilton et al., 2022), and STEGO+PriMaPs-EM (Ours) for Cityscapes, COCO-Stuff, and Potsdam-3. Our method produces locally more consistent segmentation results reducing overall misclassification compared to the corresponding baseline. local consistency of the segmentation and reduced mis-classification. The comparison with STEGO as a baseline exhibits a similar trend. For further examples and comparisons with HP, please refer to Appendix B.2. Limitations. One of the main challenges is to distinguish between classes that happen to share the same SSL feature representation. This is hardly avoidable if the feature representation is fixed, as was the case here and in previous work (Hamilton et al., 2022; Seong et al., 2023). Another limitation across existing unsupervised semantic segmentation approaches is the limited spatial image resolution. This limitation comes from the SSL training objectives (Caron et al., 2021; Oquab et al., 2024), which are image-level, rather than pixel-level. As a result, we can observe difficulties in segmenting very small, finely resolved structures. 5 Conclusion We present PriMaPs, a novel dense pseudo-label generation approach for unsupervised semantic segmentation. We derive lightweight mask proposals directly from off-the-shelf self-supervised learned features, leveraging the intrinsic properties of their embedding space. Our mask proposals can be used as pseudo labels to effectively fit global class prototypes using moving average stochastic EM with PriMaPs-EM. Despite the simplicity, PriMaPs-EM leads to a consistent boost in unsupervised segmentation accuracy when applied to a variety of SSL features or orthogonally to current state-of-the-art unsupervised semantic segmentation pipelines, as shown by our results across multiple datasets. 12 Acknowledgments This project is partially funded by the European Research Council (ERC) under the European Union\u2019s Horizon 2020 research and innovation programme (grant agreement No. 866008) as well as the State of Hesse (Germany) through the cluster projects \u201cThe Third Wave of Artificial Intelligence (3AI)\u201d and \u201cThe Adaptive Mind (TAM)\u201d."
16
+ }
intro_8K/test_introduction_short_2404.16821v2.json ADDED
@@ -0,0 +1,16 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16821v2",
3
+ "title": "How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites",
4
+ "abstract": "In this report, we introduce InternVL 1.5, an open-source multimodal large\nlanguage model (MLLM) to bridge the capability gap between open-source and\nproprietary commercial models in multimodal understanding. We introduce three\nsimple improvements: (1) Strong Vision Encoder: we explored a continuous\nlearning strategy for the large-scale vision foundation model -- InternViT-6B,\nboosting its visual understanding capabilities, and making it can be\ntransferred and reused in different LLMs. (2) Dynamic High-Resolution: we\ndivide images into tiles ranging from 1 to 40 of 448$\\times$448 pixels\naccording to the aspect ratio and resolution of the input images, which\nsupports up to 4K resolution input. (3) High-Quality Bilingual Dataset: we\ncarefully collected a high-quality bilingual dataset that covers common scenes,\ndocument images, and annotated them with English and Chinese question-answer\npairs, significantly enhancing performance in OCR- and Chinese-related tasks.\nWe evaluate InternVL 1.5 through a series of benchmarks and comparative\nstudies. Compared to both open-source and proprietary models, InternVL 1.5\nshows competitive performance, achieving state-of-the-art results in 8 of 18\nbenchmarks. Code has been released at https://github.com/OpenGVLab/InternVL.",
5
+ "authors": "Zhe Chen, Weiyun Wang, Hao Tian, Shenglong Ye, Zhangwei Gao, Erfei Cui, Wenwen Tong, Kongzhi Hu, Jiapeng Luo, Zheng Ma, Ji Ma, Jiaqi Wang, Xiaoyi Dong, Hang Yan, Hewei Guo, Conghui He, Botian Shi, Zhenjiang Jin, Chao Xu, Bin Wang, Xingjian Wei, Wei Li, Wenjian Zhang, Bo Zhang, Pinlong Cai, Licheng Wen, Xiangchao Yan, Min Dou, Lewei Lu, Xizhou Zhu, Tong Lu, Dahua Lin, Yu Qiao, Jifeng Dai, Wenhai Wang",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-29",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "LLM Fairness",
14
+ "gt": "Large language models (LLMs) have been instrumental in advancing artificial general intelligence (AGI) systems, * equal contribution; \u2020 interns at OpenGVLab, Shanghai AI Labora- tory; B corresponding author ([email protected]). Benchmark InternVL 1.5 Grok-1.5V GPT-4V Claude-3 Opus Gemini Pro 1.5 MMMU Multi-discipline 45.2% 53.6% 56.8% 59.4% 58.5% MathVista Math 53.5% 52.8% 49.9% 50.5% 52.1% AI2D Diagrams 80.7% 88.3% 78.2% 88.1% 80.3% TextVQA Text reading 80.6% 78.1% 78.0% - 73.5% ChartQA Charts 83.8% 76.1% 78.5% 80.8% 81.3% DocVQA Documents 90.9% 85.6% 88.4% 89.3% 86.5% RealWorldQA Real-world understanding 66.0% 68.7% 61.4% 49.8% 67.5% Figure 1. InternVL 1.5 versus proprietary commercial models. The results of these benchmarks show that InternVL 1.5 achieves performance comparable to leading proprietary models. demonstrating remarkable abilities in processing open- world language tasks. Leveraging the advancements in LLMs, multimodal large language models (MLLMs) [5, 18, 23, 62, 63, 84, 92, 116, 142] have made significant strides, facilitating complex vision-language dialogues and interac- tions that bridge the gap between textual and visual infor- mation. Despite these achievements, there remains a notice- able divide between the capabilities of open-source mod- els and proprietary commercial models, e.g., GPT-4V [87], 1 arXiv:2404.16821v2 [cs.CV] 29 Apr 2024 Gemini series [92, 107], and Qwen-VL-Max [5]. This gap is mainly reflected in the following three as- pects: (1) Parameter Scale: Recent proprietary commer- cial MLLMs [5, 87, 92, 102] typically scales not less than 100 billion parameters, while open-source models com- monly employ a 300 million parameter vision foundation model (VFM), which is integrated with either a 7 billion or 13 billion LLMs. (2) Image Resolution: Proprietary commercial models typically employ a dynamic resolution approach, preserving the original aspect ratio to facilitate detailed scene and document understanding. In contrast, open-source models generally train with fixed resolutions [18, 23, 62, 71, 117, 142], such as 336\u00d7336 and 448\u00d7448, leading to a considerable gap in capabilities relative to com- mercial counterparts. (3) Multilingual Capability: Propri- etary models often leverage extensive multilingual datasets for training, enhancing their performance across diverse languages. However, open-source models predominantly utilize English data, relying on the zero-shot capabilities of LLMs for other languages, e.g. LLaVA-NeXT [64]. This re- sults in sub-optimal performance in non-English scene un- derstanding and OCR tasks. To bridge the gap, we introduce InternVL 1.5, integrat- ing three major improvements to enhance its performance and usability. (1) We implement a continuous learning ap- proach to a large-scale VFM\u2014InternViT-6B [18], refining it using high-quality image-text data. This process not only enhances the model\u2019s ability to understand visual content but also improves its adaptability across various LLMs. In addition, using InternLM2-20B [11] as the language foun- dation model also offers robust initial language processing capabilities. (2) We adopt a dynamic high-resolution strat- egy that segments images into 448\u00d7448 tiles, with the num- ber of tiles ranging from 1 to 40 (i.e., 4K resolution) based on the aspect ratio and resolution of the images. To capture global context, we additionally include a thumbnail view. (3) We gather a diverse collection of public datasets, cov- ering high-quality natural scenes, charts, documents, and conversations in both English and Chinese. Additionally, we develop a data translation pipeline using open-source LLMs, which can be easily extended to more languages. These designs endow our model with several advan- tages: (1) Flexible Resolution: Similar to the \u201clow\u201d or \u201chigh\u201d modes available in GPT-4V [87], InternVL 1.5 en- ables users to select the optimal resolution for their im- ages, such as using low-resolution for scene subject de- scription and high-resolution (up to 4K resolution) for doc- ument understanding, effectively balancing computational efficiency with detail preservation. (2) Bilingual Profi- ciency: InternVL 1.5 exhibits robust bilingual capabili- ties, proficiently handling multimodal perception and un- derstanding tasks in both English and Chinese. Notably, in tasks related to Chinese, our model generally outperforms InternVL 1.5 High-Quality Bilingual Dataset Captioning, General QA, Science, Chart, Mathematics, Knowledge, OCR, Document, Grounding, Conversation, Chinese, English Strong Foundation Models InternViT-6B-448px-V1.5 + InternLM2-20B Dynamic High-Resolution 448 ~ 4K Resolution AGI Figure 2. Characteristics of InternVL 1.5. InternVL 1.5 features strong visual representation through continuous learning, flexible resolution capabilities, and robust bilingual proficiency in English and Chinese, positioning it as a competitive MLLM. the leading commercial model GPT-4V [87]. (3) Strong Vi- sual Representation: By implementing a continuous learn- ing strategy, we enhance the visual representation capabil- ities of InternViT-6B [18], making it robust to flexible in- put resolution and various visual domains. Benefitting from InternViT-6B\u2019s massive parameters, our model achieves a level of visual representation that rivals the linguistic capa- bilities of LLMs with more than 20 billion parameters. This synergy between visual and linguistic processing endows our system with robust multimodal capabilities. We evaluated InternVL 1.5 on 18 representative multi- modal benchmarks, which are categorized into four spe- cific groups: OCR-related, general multimodal, mathemat- ical, and multi-turn conversation benchmarks. Compared to both open-source and proprietary models, InternVL 1.5 shows competitive performance, achieving state-of-the-art results in 8 of 18 benchmarks. Notably, as shown in Fig- ure 1, it even surpasses leading proprietary models like Grok-1.5V [125], GPT-4V [87], Claude-3 Opus [3], and Gemini Pro 1.5 [92] in four specific benchmarks, partic- ularly in OCR-related datasets such as TextVQA [100], ChartQA [81], and DocVQA [82]. This evaluation indicates that InternVL 1.5 has effectively narrowed the gap between open-source models and leading commercial models. We hope that our approach and open-source model weights can contribute to the development of the MLLM community. 2",
15
+ "main_content": "2.1. Proprietary Commercial MLLMs Large language models (LLMs) [1, 4, 7, 8, 11, 25, 104, 106, 108, 112, 113, 122, 123, 141] have greatly advanced AGI by enabling complex language tasks previously thought human-exclusive. Building on this, the development of proprietary commercial MLLMs represents a significant evolution. For example, OpenAI\u2019s GPT-4V [87] extends GPT4\u2019s capabilities by incorporating visual inputs, allowing it to handle both text and image content, which stands as a significant development in the domain of MLLMs. Afterward, Google\u2019s Gemini series progresses from Gemini 1.0 [107] to Gemini 1.5 [92], enhancing MLLMs with the ability to process text, images, and audio and support up to 1 million tokens, which boosts performance significantly. The QwenVL-Plus/Max are Alibaba\u2019s leading models in the QwenVL series [5], renowned for superior capacity in multimodal tasks without needing OCR tools. Recent advancements in proprietary MLLMs include Anthropic\u2019s Claude-3V series [3], HyperGAI\u2019s HPT Pro [35], Apple\u2019s MM1 [84], StepFun\u2019s Step-1V [102], and xAI\u2019s Grok-1.5V [125]. 2.2. Open-Source MLLMs The development of open-source MLLMs [2, 13, 43, 48, 51, 55, 56, 69, 70, 103, 110, 118, 120, 124, 138, 139] has significantly influenced the AGI landscape by integrating and enhancing capabilities in processing both visual and textual data. Over the past year, many open-source MLLMs have become well-known, including the LLaVA series [62\u2013 64], MiniGPT-4 [142], VisionLLM [116], Qwen-VL [5], CogVLM [117], Shikra [15], and others [18, 23, 90, 119]. However, these models are typically trained on images with small, fixed resolutions such as 336\u00d7336, or 448\u00d7448, which leads to sub-optimal performance on images with unusual aspect ratios or document data. To address this issue, many approaches have been explored for training on high-resolution images. Currently, there are two common technical routes: one involves designing a dual-branch image encoder [32, 53, 76, 77, 121], and the other involves dividing a high-resolution image into many low-resolution tiles [24, 33, 47, 55, 57, 64, 68, 126, 127]. Despite these explorations in high-resolution training, these open-source models still exhibit significant gaps in understanding documents, charts, and infographics, as well as recognizing scene texts, compared to leading commercial models. 2.3. Vision Foundation Models for MLLMs Vision foundation models (VFMs) are a focal point of research within the MLLM community. Currently, models like CLIP-ViT [91] and SigLIP [136] are prevalently utilized; however, many studies have been conducted to find the most suitable vision encoders for MLLMs [57, 71, 76, Pixel Shuffle Dynamic High Resolution InternViT-6B InternLM2 Tokenizer InternLM2-Chat-20B MLP Projector User Message Figure 3. Overall Architecture. InternVL 1.5 adopts the ViTMLP-LLM architecture similar to popular MLLMs [62, 64], combining a pre-trained InternViT-6B [18] with InternLM2-20B [11] through a MLP projector. Here, we employ a simple pixel shuffle to reduce the number of visual tokens to one-quarter. 111]. For instance, Tong et al. [111] observed notable differences in the visual patterns of CLIP and DINOv2 [88], leading to the development of a mixture-of-features module that combines these two VFMs. LLaVA-HR [76] introduced a dual-branch vision encoder utilizing CLIP-ViT for low-resolution pathways and CLIP-ConvNext for highresolution pathways. Similarly, DeepSeek-VL [71] adopted a dual vision encoder design, using SigLIP-L for lowresolution images and SAM-B for high-resolution images. In this report, we propose a continuous learning strategy for our vision foundation model\u2014InternViT-6B [18], which continuously boosts the visual understanding capabilities and can be transferred and reused across different LLMs. 3. InternVL 1.5 3.1. Overall Architecture As illustrated in Figure 3, InternVL 1.5 employs an architecture akin to widely-used open-source MLLMs, specifically the \u201cViT-MLP-LLM\u201d configuration referenced in various existing studies [18, 23, 62\u201364, 71, 142]. Our implementation of this architecture integrates a pre-trained InternViT-6B [18] with a pre-trained InternLM2-20B [11] using a randomly initialized MLP projector. During training, we implemented a dynamic resolution strategy, dividing images into tiles of 448\u00d7448 pixels in sizes ranging from 1 to 12, based on the aspect ratio and resolution of the input images. During testing, this can be zero-shot scaled up to 40 tiles (i.e., 4K resolution). To enhance scalability for high resolution, we simply employed a pixel shuffle operation to reduce the number of visual tokens to one-quarter of the original. Therefore, in our model, a 448\u00d7448 image is represented by 256 visual tokens. 3 3.2. Strong Vision Encoder In existing MLLMs [5, 23, 62\u201364, 78, 142], the most commonly used vision foundation model is typically a contrastively pre-trained ViT [18, 36, 91, 136]. However, these ViTs are commonly trained on image-text pairs crawled from the Internet at a fixed low resolution (e.g., 224\u00d7224), so their performance degrades when tasked with processing high-resolution images or images from sources other than the Internet, such as document images. InternViT-6B-448px-V1.2. To address this issue, the InternVL 1.2 update involved continuous pre-training of the InternViT-6B model. First, we found that the features from the fourth-to-last layer perform best for multimodal tasks, so we directly discarded the weights of the last three layers, reducing InternViT-6B from 48 layers to 45 layers. Then, we increased the resolution of InternViT-6B from 224 to 448 and integrated it with Nous-Hermes-2-Yi-34B [130]. To equip the model with high-resolution processing and OCR capabilities, both the vision encoder and the MLP were activated for training, utilizing a mix of image captioning [10, 17, 90, 93, 100] and OCR-specific datasets [29, 94]. The newly derived InternViT weights from this process were released as InternViT-6B-448px-V1.21. InternViT-6B-448px-V1.5. The development of InternVL 1.5 continues the pre-training of the strong foundation of InternViT-6B-448px-V1.2. In this update, the resolution of training images is expanded from fixed 448\u00d7448 to dynamic 448\u00d7448, where the basic tile size is 448\u00d7448 and the number of tiles ranges from 1 to 12. Additionally, we enhance the data scale, quality, and diversity of the pretraining dataset, resulting in the powerful robustness, OCR capability, and high-resolution processing capability of our 1.5 version model2. Details of the dynamic resolution and training datasets are described in Sections 3.3 and 3.4. It is noteworthy that despite the LLM in InternVL 1.5 being changed from Nous-Hermes-2-Yi-34B to InternLM220B [11], the InternViT maintained excellent compatibility and portability with the new LLM. This suggests that the visual features learned by InternViT-6B during the pretraining stage of MLLMs are broadly applicable and not tightly bound to the specific LLM. 3.3. Dynamic High-Resolution Inspired by UReader [127], we adopt a dynamic highresolution training approach that effectively adapts to the varying resolutions and aspect ratios of input images. This method leverages the flexibility of segmenting images into tiles, enhancing the model\u2019s ability to process detailed vi1https://huggingface.co/OpenGVLab/InternViT-6B448px-V1-2 2https://huggingface.co/OpenGVLab/InternViT-6B448px-V1-5 1:1 1:4 2:3 1:2 1:5 3:2 1:3 1:6 ... Pre-defined Aspect Ratios 448\u00d7448 Tiles Matching 2:3 (896\u00d71344) Thumbnail Input Image (800\u00d71300) Figure 4. Illustration of dynamic high resolution. We dynamically match an optimal aspect ratio from pre-defined ratios, dividing the image into tiles of 448\u00d7448 pixels and creating a thumbnail for global context. This method minimizes aspect ratio distortion and accommodates varying resolutions during training. sual information while accommodating diverse image resolutions. It mainly consists of the following steps: Dynamic Aspect Ratio Matching. As shown in Figure 4, to maintain natural aspect ratios during processing, we dynamically match the optimal aspect ratio from a pre-defined set of aspect ratios. Due to limited computational resources, we allow a maximum of 12 tiles during training. Consequently, this set includes all 35 possible combinations of aspect ratios formed by 1 to 12 tiles, such as {1:1, 1:2, 2:1, 3:1, ..., 2:6}. During the matching process, for each input image, we calculate its aspect ratio and compare it with the 35 pre-defined aspect ratios by measuring the absolute difference. If multiple pre-defined aspect ratios match (e.g., 1:1 and 2:2), we prioritize the one not exceeding twice the input image\u2019s area, thereby preventing excessive enlargement of low-resolution images. Image Division & Thumbnail. Once an appropriate aspect ratio is determined, the image is resized to the corresponding resolution. For example, an 800\u00d71300 image will be resized to 896\u00d71344. The resized image is then divided into tiles of 448\u00d7448 pixels. Alongside the tiles, we include a thumbnail of the entire image to capture the global context. This thumbnail is scaled down to 448\u00d7448, aiding the model in understanding the overall scene. Therefore, during training, the number of visual tokens ranges from 256 to 3,328. During testing, the number of tiles can increase to a maximum of 40, resulting in 10,496 visual tokens. 3.4. High-Quality Bilingual Dataset Pre-training Dataset. The pre-training dataset utilized in our InternVL 1.5 encompasses a diverse range of publicly accessible sources. We provide an overview of these datasets in Table 1a. These datasets span multi4 task ratio dataset Laion-EN (en) [93], Laion-ZH (zh) [93], COYO (zh) [10], Captioning 53.9% GRIT (zh) [90], COCO (en) [17], TextCaps (en) [99] Objects365 (en&zh) [97], GRIT (en&zh) [90], Detection 5.2% All-Seeing (en&zh) [119] Wukong-OCR (zh) [29], LaionCOCO-OCR (en) [94], OCR (large) 32.0% Common Crawl PDF (en&zh) MMC-Inst (en) [61], LSVT (zh) [105], ST-VQA (en) [9] RCTW-17 (zh) [98], ReCTs (zh) [137], ArT (en&zh) [19], SynthDoG (en&zh) [41], COCO-Text (en) [114], ChartQA (en) [81], CTW (zh) [134], DocVQA (en) [82], OCR (small) 8.9% TextOCR (en) [101], PlotQA (en) [85], InfoVQA (en) [83] (a) Datasets used in the pre-training stage. task dataset Captioning TextCaps (en) [99], ShareGPT4V (en&zh) [16] VQAv2 (en) [28], GQA (en) [34], OKVQA (en) [80], General QA VSR (en) [59], VisualDialog (en) [22] Science AI2D (en) [39], ScienceQA (en) [73], TQA (en) [40] ChartQA (en) [81], MMC-Inst (en) [61], DVQA (en) [38], Chart PlotQA (en) [85], LRV-Instruction (en) [60] GeoQA+ (en) [12], TabMWP (en) [74], MathQA (en) [132], Mathematics CLEVR-Math/Super (en) [54, 58], Geometry3K (en) [72] KVQA (en) [96], A-OKVQA (en) [95], ViQuAE (en) [45], Knowledge Wikipedia (en&zh) [31] OCRVQA (en) [86], InfoVQA (en) [83], TextVQA (en) [100], ArT (en&zh) [19], COCO-Text (en) [114], CTW (zh) [134], LSVT (zh) [105], RCTW-17 (zh) [98], ReCTs (zh) [137], OCR SynthDoG (en&zh) [41], ST-VQA (en) [9] Document DocVQA (en) [20], Common Crawl PDF (en&zh) Grounding RefCOCO/+/g (en) [79, 131], Visual Genome (en) [42] LLaVA-150K (en&zh) [63], LVIS-Instruct4V (en) [115], ALLaVA (en&zh) [14], Laion-GPT4V (en) [44], Conversation TextOCR-GPT4V (en) [37], SVIT (en&zh) [140] OpenHermes2.5 (en) [109], Alpaca-GPT4 (en) [106], Text-only ShareGPT (en&zh) [141], COIG-CQIA (zh) [6] (b) Datasets used in the fine-tuning stage. Table 1. Summary of datasets used in InternVL 1.5. To construct large-scale OCR datasets, we utilized PaddleOCR [49] to perform OCR in Chinese on images from Wukong [29] and in English on images from LAION-COCO [94]. ple tasks, including captioning, which predominantly uses datasets such as Laion-EN [93], Laion-ZH [93], COYO [10], and GRIT [90], constituting 53.9% of the total data. Detection and grounding tasks utilize datasets like Objects365 [97], GRIT [90], and All-Seeing [119], making up 5.2%. For OCR tasks, we utilized large-scale datasets such as Wukong-OCR, LaionCOCO-OCR, and Common Crawl PDFs, which constitute 32.0% of our data. These datasets were constructed using PaddleOCR [49] to perform OCR on Chinese images from Wukong [29] and on English images from LaionCOCO [94]. Smaller OCR datasets include MMC-Inst [61], LSVT [105], ST-VQA [9], RCTW-17 [98], ArT [19], and others, accounting for 8.9% of the data, which focus on more specific or constrained OCR challenges. This diverse dataset assembly ensures robust model pre-training of InternVL, catering to varied linguistic and visual elements across tasks. Fine-tuning Dataset. During the fine-tuning stage, we System: You are a translator proficient in English and {language}. Your task is to translate the following English text into {language}, focusing on a natural and fluent result that avoids \u201ctranslationese.\u201d Please consider these points: 1. Keep proper nouns, brands, and geographical names in English. 2. Retain technical terms or jargon in English, but feel free to explain in {language} if necessary. 3. Use {language} idiomatic expressions for English idioms or proverbs to ensure cultural relevance. 4. Ensure quotes or direct speech sound natural in {language}, maintaining the original\u2019s tone. 5. For acronyms, provide the full form in {language} with the English acronym in parentheses. User: Text for translation: {text} Assistant: {translation results} Figure 5. Explanation of our data translation pipeline. Based on this prompt, we translate English data into Chinese while keeping the language natural and smooth. Here, {language} represents the target language, {text} refers to the original English text, and {translation results} indicates the translated text. meticulously selected datasets to enhance model performance across a wide range of multimodal tasks. The datasets used in this phase are summarized in Table 1b. For image captioning, we included TextCaps [99] and bilingual ShareGPT4V [16], which help the model learn to generate descriptive captions in both English and Chinese. In the domain of general QA, datasets such as VQAv2 [28], GQA [34], and VisualDialog [22] teach the model to handle diverse question-answering scenarios. For scientific image understanding, datasets like AI2D [39], ScienceQA [73], and TQA [40] provide content-rich scenarios to enhance the model\u2019s ability to interpret scientific diagrams and texts. Chart interpretation is bolstered by ChartQA [81], MMC-Inst [61], and PlotQA [85], which train the model to analyze and understand chart images. Mathematics datasets such as GeoQA+ [12], TabMWP [74], and MathQA [132] introduce complex numerical and geometric problem-solving tasks. Knowledge-based QA benefits from the inclusion of datasets like KVQA [96] and bilingual Wikipedia [31], enabling the model to extract and reason with factual information across multiple languages. For tasks involving OCR, we utilize OCRVQA [86], TextVQA [100], and several datasets focused on Chinese and English text recognition, such as SynthDoG [41], to improve text recognition from images. Document understanding is advanced through datasets like DocVQA [82] and Common Crawl PDFs, which help the model for real-world document analysis. Visual grounding is trained using RefCOCO [79, 131] and Visual Genome [42], aiding the model 5 openOCR-related Benchmarks General Multimodal Benchmarks Math model source #param DocVQA ChartQA InfoVQA TextVQA OCRBench MME RWQA AI2D MMMU MMB-EN/CN CCB MMVet SEED HallB MathVista GPT-4V [1] \u2717 \u2212 88.4 78.5 \u2212 78.0 645 1926.6 61.4 78.2 56.8 77.0 / 74.4 46.5 67.6 71.6 46.5 49.9 Gemini Ultra 1.0 [107] \u2717 \u2212 90.9 80.8 80.3 82.3 \u2212 \u2212 \u2212 79.5 59.4 \u2212/ \u2212 \u2212 \u2212 \u2212 \u2212 53.0 Gemini Pro 1.0 [107] \u2717 \u2212 88.1 74.1 75.2 74.6 659 1933.4 \u2212 73.9 47.9 73.6 / 74.3 52.5 64.3 70.7 45.2 45.2 Gemini Pro 1.5 [92] \u2717 \u2212 86.5 81.3 72.7 73.5 \u2212 \u2212 67.5 80.3 58.5 \u2212/ \u2212 \u2212 \u2212 \u2212 \u2212 52.1 Qwen-VL-Max [5] \u2717 \u2212 93.1 79.8 73.4 \u2212 723 2433.6 \u2212 79.3 51.3 77.6 / 75.7 63.5 66.6 \u2212 41.2 51.0 Qwen-VL-Plus [5] \u2717 \u2212 91.4 78.1 \u2212 \u2212 694 2183.4 \u2212 75.9 45.2 67.0 / 70.7 55.1 61.1 72.7 40.6 43.3 Claude-3 Opus [3] \u2717 \u2212 89.3 80.8 \u2212 \u2212 694 1586.8 49.8 88.1 59.4 63.3 / 59.2 26.3 58.1 \u2212 37.8 50.5 Claude-3 Sonnet [3] \u2717 \u2212 89.5 81.1 \u2212 \u2212 646 1625.9 51.9 88.7 53.1 67.8 / 64.2 27.8 \u2212 \u2212 41.3 47.9 Claude-3 Haiku [3] \u2717 \u2212 88.8 81.7 \u2212 \u2212 658 1453.2 \u2212 86.7 50.2 60.7 / 57.2 24.5 \u2212 \u2212 39.2 46.4 HPT Pro [35] \u2717 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 52.0 77.5 / 76.7 \u2212 \u2212 73.1 \u2212 \u2212 MM1 [84] \u2717 30B \u2212 \u2212 \u2212 73.5 \u2212 2069.0 \u2212 \u2212 44.7 75.1 / \u2212 \u2212 48.7 72.1 \u2212 39.4 Step-1V [102] \u2717 100B \u2212 \u2212 \u2212 \u2212 625 2206.4 \u2212 79.2 49.9 80.7 / 79.9 71.2 63.3 70.3 48.4 44.8 Grok-1.5V [125] \u2717 \u2212 85.6 76.1 \u2212 78.1 \u2212 \u2212 68.7 88.3 \u2212 \u2212/ \u2212 \u2212 \u2212 \u2212 \u2212 52.8 Text-Monkey [68] \u2713 10B 66.7 59.9 28.6 64.3 561 \u2212 \u2212 \u2212 \u2212 \u2212/ \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 DocOwl-1.5 [33] \u2713 8B 82.2 70.2 50.7 68.6 599 \u2212 \u2212 \u2212 \u2212 \u2212/ \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 Mini-Gemini [53] \u2713 35B \u2212 \u2212 \u2212 74.1* \u2212 2141.0 \u2212 \u2212 48.0 80.6 / \u2212 \u2212 59.3 \u2212 \u2212 43.3 LLaVA-NeXT [64] \u2713 35B 84.0 68.7 51.5 69.5* 574 2028.0 \u2212 74.9 51.1 81.1 / 79.0 49.2 57.4 75.9 34.8 46.5 InternVL 1.2 (ours) \u2713 40B 57.7 68.0 39.5 72.5* 569 2175.4 67.5 79.0 51.6 82.2 / 81.2 59.2 48.9 75.6 47.6 47.7 InternVL 1.5 (ours) \u2713 26B 90.9 83.8 72.5 80.6 724 2187.8 66.0 80.7 45.2 82.2 / 82.0 69.8 62.8 76.0 49.3 53.5 Table 2. Comparison with SoTA models on 16 multimodal benchmarks. OCR-related benchmarks include: DocVQA test [82], ChartQA test [81], InfographicVQA test [83], TextVQA val [100], and OCRBench [67]. General multimodal benchmarks encompass: MME [26], RealWorldQA [125], AI2D test [39], MMMU val [135], MMBench-EN/CN test [66], CCBench dev [66], MMVet [133], SEED Image [46], and HallusionBench [30]. Additionally, the math dataset includes MathVista testmini [75]. * denotes that Rosetta OCR tokens are used in the testing of TextVQA. The MME results we report are the sum of the perception and cognition scores. The results of OCRBench, MMBench, CCBench, and HallusionBench are collected from the OpenCompass leaderboard [21]. in precise object localization within images. In the realm of multimodal conversation, datasets like LLaVA-150K [63] and ALLaVA [14] enhance the model\u2019s dialogic capabilities by simulating interactive and engaging scenarios. Lastly, text-only datasets include OpenHermes2.5 [109], AlpacaGPT4 [106], among others [6, 141], which are used to maintain the original linguistic capabilities of the LLM. In summary, these datasets together establish a rich and diverse foundation for fine-tuning, which enhances our model\u2019s ability to handle a wide range of multimodal tasks and ensures its readiness for practical applications. Data Translation Pipeline. As shown in Figure 5, to enhance our model\u2019s multilingual capabilities, we implemented a data translation pipeline. This pipeline utilizes state-of-the-art open-source LLMs [4, 11, 130] or GPT-3.5 to convert English datasets to another language (e.g., Chinese), maintaining consistency and precision in bilingual labeling. Moreover, it can readily expand to encompass more languages by adjusting the language prompt, without relying on manual annotation processes. In Table 1, we have annotated the language for each dataset. For a dataset that was originally in English, an annotation as \u201czh\u201d indicates that we have translated it into Chinese using the translation pipeline. For example, COYO [10] and GRIT [90] were originally English datasets, and we have translated them into Chinese. By leveraging this translation pipeline, the Chinese capabilities of InternVL 1.5 have been greatly enhanced. 4. Experiments 4.1. Implementation Details. InternVL 1.5 was developed by integrating the InternViT6B [18] vision encoder with the InternLM2-20B [11] language model, using a dynamic high-resolution strategy. In this approach, images are segmented into 448\u00d7448 pixel tiles, with the number of tiles ranging up to 12 based on the image\u2019s aspect ratio and resolution during training. In testing phases, the model could handle up to 40 tiles, equivalent to 4K resolution, demonstrating its adaptability to highresolution inputs in a zero-shot manner. Notably, we built our model based on the chat version of InternLM2-20B rather than the base model. The training of InternVL 1.5 was divided into two stages. Initially, the pre-training stage focused on training the InternViT-6B vision encoder and the MLP projector to optimize visual feature extraction. Subsequently, the entire model\u2019s 26 billion parameters were fine-tuned to enhance multimodal capabilities. In both two stages of training, we use a context length of 4096 and adopt the same response formatting prompts as LLaVA 1.5 [52]. Additionally, the evaluation was mainly supported by VLMEvalKit [21]. 4.2. Comparison with State-of-the-Art MLLMs 4.2.1 Quantitative Results on 18 Benchmarks In this section, we conduct an extensive evaluation across a series of benchmarks to assess our model\u2019s multimodal un6 openConvBench (Pairwise Grading) ConvBench (Direct Grading) model source #param R1 R2 S1 S2 S3 SO R1 R2 S1 S2 S3 SO GPT-4V [1] \u2717 \u2212 39.51 38.47 38.47 39.34 37.61 40.55 7.09 7.30 7.30 7.48 7.12 6.88 Claude-3 Opus [3] \u2717 \u2212 36.60 37.49 38.99 39.17 34.32 35.70 6.54 6.75 6.53 7.04 6.68 6.32 Reka Flash [89] \u2717 \u2212 25.60 24.67 25.13 27.56 21.32 26.52 6.78 6.86 6.93 7.25 6.41 6.70 Gemini Pro 1.0 [107] \u2717 \u2212 8.44 8.55 9.01 9.36 7.28 8.32 4.42 4.60 5.18 4.95 3.66 4.24 ShareGPT4V-13B [16] \u2713 13B 17.56 17.45 17.85 18.72 15.77 17.68 4.85 5.03 5.16 5.06 4.86 4.67 LLaVA-1.5-13B [62] \u2713 13B 16.93 18.08 20.45 18.02 15.77 15.77 4.94 5.14 5.03 5.41 4.99 4.74 XComposer2 [23] \u2713 8B 15.83 16.41 17.16 19.06 13.00 15.25 5.82 5.98 5.98 6.17 5.78 5.66 mPLUG-Owl2 [128] \u2713 8B 14.93 15.83 17.50 17.16 12.82 14.04 5.04 5.17 4.98 5.38 5.14 4.91 Qwen-VL-Chat [5] \u2713 10B 14.33 14.62 16.29 18.37 9.19 14.04 5.54 5.65 5.96 5.78 5.22 5.43 MiniGPT-4 [142] \u2713 8B 10.95 10.80 11.61 11.27 9.53 11.09 3.85 4.04 3.99 4.40 3.73 3.66 LLaMA-A-V2 [27] \u2713 7B 9.04 9.59 8.84 10.92 9.01 8.49 4.77 4.91 4.77 5.47 4.48 4.64 InternVL 1.2 (ours) \u2713 40B 21.17 22.41 24.96 21.31 20.97 19.93 5.49 5.69 5.80 5.88 5.39 5.29 InternVL 1.5 (ours) \u2713 26B 17.65 20.22 26.00 17.33 17.33 15.08 5.60 5.76 6.11 5.93 5.25 5.43 openMMT-Bench model source #param Overall Overall* GPT-4V [1] \u2717 \u2212 62.0 55.5 Qwen-VL-Plus [4] \u2717 \u2212 62.3 56.6 Gemini Pro 1.0 [107] \u2717 \u2212 61.6 55.1 Claude-3 Haiku [3] \u2717 \u2212 52.2 46.4 LLaVA-NeXT [64] \u2713 35B 60.8 56.3 XComposer2 [23] \u2713 8B 55.7 50.0 BLIP-2-XXL [50] \u2713 12B 54.8 49.1 Yi-VL-34B [130] \u2713 35B 54.2 48.6 Monkey-Chat [107] \u2713 10B 53.4 46.0 DeepSeek-VL [71] \u2713 7B 53.2 46.5 CogVLM-Chat [117] \u2713 17B 51.6 44.2 InternVL 1.2 (ours) \u2713 40B 63.4 58.2 InternVL 1.5 (ours) \u2713 26B 59.0 56.2 Table 3. Comparison with SoTA models on ConvBench and MMT-Bench. ConvBench [65] is a multi-turn conversation evaluation benchmark designed for MLLMs. The table presents win rates against humans, where S1, S2, and S3 represent the scores for perception, reasoning, and creation, respectively. R2 is calculated as (S1 + S2 + S3)/3, reflecting the average performance across three turns. R1 is derived from (R2 + S0)/2, indicating the model\u2019s overall score. MMT-Bench [129] is a comprehensive benchmark designed to assess MLLMs across massive multimodal tasks requiring expert knowledge and deliberate visual recognition, localization, reasoning, and planning. The overall score is computed across 162 subtasks, excluding visual recognition as denoted by *. derstanding and reasoning capability. The benchmarks employed in our study are categorized into four distinct types: OCR-related, general multimodal, mathematical, and multiturn conversation benchmarks. As depicted in Table 2, InternVL 1.5 exhibits leading performance across the majority of these benchmarks. OCR-related Image Understanding. We evaluate the model performance across four key dimensions of OCR: document comprehension (DocVQA [82]), chart understanding (ChartQA [81]), infographic understanding (InfographicVQA [83]), and scene text interpretation (TextVQA [100]). Additionally, we employ OCRBench [67] to perform a comprehensive evaluation of the model\u2019s overall OCR capabilities. As shown in Table 2, our model demonstrated comparable performance to proprietary models on these benchmarks and significantly outperformed the opensource LLaVA-NeXT [64] as well as InternVL 1.2, the predecessor of InternVL 1.5. Notably, our model achieves state-of-the-art performance on ChartQA and OCRBench, outperforming all competing proprietary models. General Multimodal Evaluation. In addition to OCRrelated benchmarks, we tested our model on several general multi-modal benchmarks. We used RealWorldQA [125] to evaluate the model\u2019s real-world spatial understanding capabilities. HallusionBench [30] was employed to assess its ability to control hallucinations. Additionally, MMMU [135] was utilized to evaluate the model\u2019s multidisciplinary capabilities, and AI2D [39] to assess its understanding of science diagrams. We also tested the model\u2019s proficiency in Chinese and understanding of Chinese culture with the MMBench-CN test [66] and CCBench [66], respectively. Other comprehensive benchmarks such as MME [26], MMBench-EN [66], MMVet [133], SEED [46], and MMT-Bench [129] were also used to assess the model\u2019s visual understanding and reasoning abilities. Compared to other open-source models like TextMonkey [68], DocOwl-1.5 [33], and LLaVA-NeXT [64], our InternVL 1.5 significantly closes the gap with proprietary models in these benchmarks. Specifically, our model achieves the best performance on HallusionBench [30], demonstrating its outstanding ability to reduce hallucinations. Moreover, thanks to our high-quality bilingual dataset, our model exhibits robust Chinese language capabilities, significantly surpassing both open-source and proprietary methods on MMBench-CN and CCBench. However, while InternVL 1.5 surpasses MM1 [84] and is comparable to Gemini Pro 1.0 [107] on MMMU, it shows a slight decline from its predecessor, InternVL 1.2. We attribute this modest decrement to the smaller size of the language model, a phenomenon similarly observed in the MMT-Bench [129] results, as shown in Table 3. Math Reasoning. MathVista [75] is a benchmark designed to integrate challenges from various mathematical and visual tasks. Completing these tasks requires a deep understanding of visuals, logical thinking, and math knowledge\u2014areas where many proprietary commercial models encounter significant difficulties. As shown in Table 2, our model outperforms others, including GPT-4V [87], by a clear margin in this benchmark, showcasing its ability to handle mathematically demanding tasks. Multi-Turn Conversation. Compared to single-turn dialogues, multi-turn conversations align more with human preferences. In practical usage, multi-turn dialogue is the preferred mode for general-purpose assistants to engage with humans in solving a variety of tasks. Therefore, we opt to utilize ConvBench [65] for evaluating multi-turn conversations, which progressively assesses the perception, reasoning, and creativity capabilities of MLLMs. As depicted 7 0 10 20 30 40 50 60 70 80 90 100 90.5 18 DocVQA val 0 10 20 30 40 70 75 80 85 83.8 12 ChartQA test 0 10 20 30 40 30 40 50 60 70 80 72.3 24 InfoVQA val 0 10 20 30 40 70 75 80 80.6 24 T extVQA val 0 10 20 30 40 55 60 65 70 75 72.4 24 OCRBench 0 10 20 30 40 72.00 72.25 72.50 72.75 73.00 73.25 73.1 36 MME 0 10 20 30 40 62 64 66 66.1 40 RealWorldQA 0 10 20 30 40 79.8 80.0 80.2 80.4 80.6 80.8 80.7 6 AI2D test 0 5 10 15 44.25 44.50 44.75 45.00 45.25 45.2 6 MMMU val 0 10 20 30 40 81.6 81.8 82.0 82.2 82.2 6 MMBench-EN test 0 10 20 30 40 81.2 81.4 81.6 81.8 82.0 82.0 6 MMBench-CN test 0 10 20 30 40 69.0 69.5 70.0 70.5 70.2 18 CCBench dev 0 10 20 30 40 58 60 62 64 62.8 24 MMVet 0 10 20 30 40 74.5 75.0 75.5 76.0 76.5 76.1 36 SEED Image 0 10 20 30 40 47.5 48.0 48.5 49.0 49.5 50.0 49.6 4 HallusionBench 0 10 20 30 40 65.0 67.5 70.0 72.5 75.0 74.3 24 Average (w/o MMMU val) Figure 6. Comparison of InternVL 1.5 performance across different image resolutions. The X-axis represents the number of tiles, while the Y-axis indicates benchmark performance. The highest value and its corresponding number of tiles are highlighted. The scores of MME [26] and OCRBench [67] have been normalized to a maximum score of 100. We found that although only 1 to 12 tiles were used during training, it is possible to zero-shot scale up to 40 tiles (i.e., 4K resolution) during testing. Note that since MMMU [135] includes multiple images per sample, it may run out of memory when the number of tiles is large. Therefore, we only tested up to 18 tiles maximum, and MMMU was not included when calculating the average score. in Table 3, InternVL exhibits leading performance among open-source models, albeit still trailing behind GPT-4V by a considerable margin. Going forward, we will continue refining InternVL\u2019s capabilities in multi-turn conversations. 4.3. Ablation Study Larger LLMs need Larger VFMs. In this study, we investigate the interplay between LLMs and VFMs. The comparison involves two open-source MLLMs, LLaVA-NeXT [64] and InternVL 1.2, each equipped with LLMs of 34 billion parameters. Notably, although both models employ LLMs of the same scale, InternVL 1.2 incorporates a significantly larger VFM, with 6 billion parameters, compared to LLaVA-NeXT\u2019s 300 million parameters. Since the data for LLaVA-NeXT is not available, we created a similar dataset ourselves. Additionally, InternVL 1.2 was trained at a fixed resolution of 448\u00d7448, while LLaVA-NeXT used a higher dynamic resolution of 672 \u00d7 672. Therefore, this comparison is not entirely fair or equivalent. Nevertheless, the findings still reveal noteworthy insights. For example, after excluding five OCR-related datasets, ConvBench, and RealWorldQA, InternVL 1.2 outperformed LLaVA-NeXT in 9 out of the remaining 11 datasets. This performance difference supports our hypothesis that for a large-scale LLM (e.g., 34B), a larger VFM (e.g., 6B) can effectively improve the model\u2019s ability to handle complex multimodal tasks, thereby enhancing the overall performance. Dynamic Resolution Matters. As shown in Figure 6, we investigated the effectiveness of dynamic resolution across various multimodal benchmarks. We found that not all tasks require high resolution. Specifically, tasks related to OCR, such as DocVQA, InfoVQA, TextVQA, and OCRBench, benefit from increased resolution. However, tasks like AI2D, MMMU, MMBench, and HallusionBench ex8 User: What action is taking place in the picture? Which color jersey is the attacking player wearing? Explain in detail. GPT-4V: In the image, we see a basketball game in progress where a player in a black jersey is in the act of shooting the ball. The player is extending their arms upwards towards the basket, and it looks like they\u2019re attempting a jump shot. The player is wearing a black jersey with the word \u201cSPURS\u201d across the front, which indicates they are a member of the San Antonio Spurs team. There\u2019s a defensive player from the opposing team, wearing a red jersey with the number \u201c9\u201d and the name \u201cVUCEVIC\u201d on it, who is trying to contest the shot. InternVL 1.5: The image depicts a basketball game in progress. A player in a black jersey, who is likely on the offensive team, is in the act of shooting the basketball. He is airborne, with one arm extended upwards towards the basket, indicating that he is in the process of taking a shot. The player in the red jersey is on defense, with his arm raised in an attempt to block the shot. The background shows the interior of an arena with spectators and a scoreboard, suggesting that this is a professional or organized basketball match. The attacking player\u2019s jersey is black, and the defending player\u2019s jersey is red. User: Tell me a person who had made a similar pose to the cute duck. GPT-4V: I can\u2019t identify or make assumptions about a specific person who has made a similar pose to the duck in the image you\u2019ve uploaded, but the stance the duck is taking, with what appears to be a martial arts belt and a weapon, is reminiscent of a martial artist demonstrating a pose or ready stance. This is a common pose in many martial arts, as practitioners often pose for photos demonstrating their skills or in the midst of performing kata (a sequence of movements in martial arts). If you\u2019re looking for something specific, you might find martial artists or actors who play martial arts roles striking similar poses. InternVL 1.5: The pose of the cute duck is reminiscent of a karate stance, which is often associated with martial artists. One famous person who made a similar pose is Bruce Lee, a martial artist and actor known for his contributions to martial arts and film. Figure 7. Examples for General QA. Blue highlights the standout answers, while red highlights incorrect responses. hibit a slight decline in performance at higher resolutions. Overall, InternVL 1.5 demonstrates strong robustness to dynamic resolution. It can adjust the resolution based on the specific requirements of each task, ensuring optimal performance where high resolution is beneficial and conserving resources where it is not. 4.3.1 Qualitative Results on Different Scenes In previous sections, we evaluated our model across various benchmarks and observed its strong performance. In this section, we conduct a qualitative comparison of our model with GPT-4V [87] across diverse scenarios, including General QA, OCR-related QA, Scientific Understanding, Chinese Traditional Culture, Object Localization, and MultiImage Dialogue. We aim to demonstrate the practicality and versatility of our model in real-world applications, offering insights from the perspective of actual user experience. General QA. To compare the general capabilities of InternVL 1.5 and GPT-4V, we first conducted an experiment involving simple user queries with images requiring general knowledge. As shown on the left side of Figure 7, both two models respond accurately to the query, showcasing their proficiency in general topics. As shown on the right side of Figure 7, GPT-4V may excessively refuse to answer some questions due to its involvement in personal privacy. OCR-Related QA. We conducted an evaluation to compare the OCR capabilities of our InternVL 1.5 model against GPT-4V. On the left side of Figure 8, the first prompt aimed to measure the models\u2019 ability to understand Chinese scenes. In this instance, GPT-4V cannot extract all useful information in the image. On the right side of Figure 8, both GPT-4V and our model have good performance on chart understanding. Scientific Understanding. Evaluating the capabilities of models in scientific understanding reasoning tasks is essential for advancing computational intelligence, particularly in contexts requiring in-domain knowledge and logical reasoning. In our study, we compared the performance of our InternVL 1.5 model with GPT-4V by administering complex multi-disciplinary problems designed to assess the accuracy of their reasoning. In Figure 9, for the first question, both models accurately answered and provided an analysis from an aerodynamic perspective. For the second question, our model precisely analyzed the elements depicted in the image and provided the correct response, whereas GPT-4V speculated on the trend of amino acid transport. These results suggest that our method and GPT-4V exhibit comparable capabilities in scientific understanding and reasoning. Chinese Traditional Culture. We selected two typical multimodal examples related to traditional Chinese art to evaluate our model. As illustrated in Figure 10, both InternVL 1.5 and GPT-4V correctly recognize the Chinese traditional culture depicted in the image. Notably, InternVL 1.5 demonstrates a deeper understanding of this culture, as 9 User: Describe this image in detail. GPT-4V: The image shows a signboard with Chinese characters on it. From the text, we can deduce that the sign is likely for a business. The sign features a design that includes a martini glass with a flamingo, indicating that the establishment might serve drinks. There\u2019s additional text on the right side, providing specific details such as business hours. It appears to be a nighttime business, likely a bar or restaurant, operating from 16:30 to 22:30. The other text provides further information about the services or specialties of the place. Unfortunately, I can\u2019t provide a translation of the text, as my capabilities don\u2019t include translating text within images. InternVL 1.5: The image shows a signboard with Chinese text, indicating a business establishment. The signboard reads: \u201c\u6cb3\u5927\u897f\u95e8\u7b2c\u4e00\u5bb6\u201d which translates to \u201cThe first shop at the west gate of the river.\u201d \u201c\u6b63\u5b97\u8001\u5b57\u53f7\u4e94\u9999\u5473\u201d which means \u201cGenuine, old brand, five-spice flavor.\u201d \u201c\u66f9\u8bb0\u7116\u70e4\u9e21\u817f\u201d which translates to \u201cCao\u2019s roasted and baked chicken legs.\u201d \u201c\u8425\u4e1a\u65f6\u95f416:30-22:30\u201d indicating the business hours from 4:30 PM to 10:30 PM. \u201c7\u70b9\u4e4b\u540e\u5728\u95e8\u53e3\u8def\u706f\u4e0b\u51fa\u591c\u5e02\u5730\u644a\u201d which suggests that after 7 PM, the shop operates in the doorway under the streetlights as a night market stall. The signboard also includes a logo that appears to be a stylized rooster, which might be the brand or shop\u2019s emblem. The overall style of the signboard is quite traditional and straightforward, commonly seen in Chinese street markets or small businesses. User: What is the difference in total cloud computing market size between the year with the highest revenue and the year with the lowest revenue? GPT-4V: To find the difference in total cloud computing market size between the year with the highest revenue and the year with the lowest revenue, we need to: 1. Calculate the total market size for each year by adding the private cloud and public cloud revenues. 2. Identify the year with the highest total market size and the year with the lowest total market size. 3. Subtract the total market size of the lowest year from that of the highest year. Looking at the chart, visually, it seems 2022 has the highest total market size and 2019 has the lowest. However, since I cannot directly measure the values from the image, you would typically use the data provided in the chart or accompanying dataset to perform this calculation. Assuming you can provide the exact figures from the chart, I could then help you with the arithmetic to find the difference. InternVL 1.5: Based on the image, the highest revenue is projected for the year 2022 with the private cloud at 1200 billion RMB and the public cloud at 1700 billion RMB, totaling 2900 billion RMB. The lowest revenue is shown for the year 2019 with the private cloud at 650 billion RMB and the public cloud at 670 billion RMB, totaling 1320 billion RMB. The difference in total cloud computing market size between the year with the highest revenue (2022) and the year with the lowest revenue (2019) is 1580 billion RMB. Figure 8. Examples for OCR-related QA. Blue highlights the standout answers, while red highlights incorrect responses. evidenced by its more detailed descriptions of the cultural elements in its response. Object Localization. Evaluating machine learning models for their proficiency in object localization tasks is essential, especially in applications requiring precise spatial awareness. In our comparative analysis, the performance of the InternVL 1.5 model was juxtaposed with GPT-4V, focusing on their ability to accurately detect and localize objects within various environments. Our assessments ranged from simple object recognition in cluttered scenes to complex scenarios involving dynamic interactions among multiple entities. As illustrated in Figure 11, the results demonstrate that InternVL 1.5 not only localized objects with high accuracy but also exhibited a comparable understanding of spatial relationships, matching the performance of GPT-4V. Multi-Image Dialogue. As shown in Figure 12, in this experiment, we ask InternVL 1.5 and GPT-4V to compare the similarities and differences between the two images. As can be seen, both GPT-4V and InternVL 1.5 provide detailed and accurate responses. Through this experiment, we discovered that although InternVL 1.5 was trained solely on single-image inputs, it exhibits strong zero-shot capabilities for multi-image dialogues. 5. Conclusion This work introduced InternVL 1.5, an open-source MLLM designed to narrow the performance gap between opensource and proprietary models in multimodal understanding. By integrating a strong vision encoder with continuous learning capabilities, adopting a dynamic high-resolution strategy, and utilizing a high-quality bilingual dataset, InternVL 1.5 has demonstrated robust performance across a variety of benchmarks. Our evaluations indicate that the model achieves competitive performance with leading proprietary models, excelling particularly in OCR-related tasks and showing significant improvements in Chinese-related scene understanding. While InternVL 1.5 has contributed to the open-source multimodal understanding, the field continues to evolve with many challenges ahead. We aspire to further enhance InternVL\u2019s capabilities and invite collaboration with the global research community, hoping to enrich and expand the reach of open-source models together. 10 User: Which car in the picture is more aerodynamic? GPT-4V: Based on the image you\u2019ve provided, determining which car is more aerodynamic involves analyzing the shape and design of each vehicle. The car on the right appears to have a more streamlined shape, with a sloped front that would likely reduce air resistance better than the boxy design of the car on the left. Therefore, the car on the right would be considered more aerodynamic. InternVL 1.5: The car on the right appears to be more aerodynamic. This is inferred from the shape of the car, which has a more streamlined and elongated form. The design includes a pointed front end and a tapered rear, which are common features in aerodynamic design to reduce air resistance. The car on the left, with its boxy shape, does not exhibit the same level of aerodynamic efficiency. User: If the pH outside the cell decreases, would you expect the amount of amino acids transported into the cell to increase or decrease? GPT-4V: The image depicts a sodium-glucose symporter, which is a type of secondary active transport mechanism. This transport mechanism is specifically for glucose and does not directly provide information about amino acid transporters. However, speaking generally about amino acid transporters, if the pH outside the cell decreases, the effect on amino acid transport would depend on the type of amino acid transporter involved. Some amino acid transporters are proton-dependent, meaning that they use the proton gradient (which increases as pH decreases) to transport amino acids into the cell. For these transporters, a decrease in extracellular pH could potentially increase the transport of amino acids into the cell. To give a precise answer, one would need specific information about the type of amino acid transporter present in the cell membrane. Since this information is not provided in the image or the description, the impact of a decrease in extracellular pH on amino acid transport cannot be determined from the provided content alone. InternVL 1.5: Based on the image, if the pH outside the cell decreases, we would expect the amount of amino acids transported into the cell to increase. This is because the diagram shows a gradient of amino acids moving from an area of lower concentration (outside the cell) to an area of higher concentration (inside the cell), which is a typical representation of active transport. Active transport is a process that moves substances against their concentration gradient, often using energy, and it is typically more efficient at higher concentrations of the substance being transported. Therefore, a decrease in pH outside the cell would likely increase the concentration gradient, leading to an increase in the transport of amino acids into the cell. Figure 9. Examples for Scientific Understanding. Blue highlights the standout answers. User: \u56fe\u4e2d\u6240\u793a\u7684\u4f20\u7edf\u8868\u6f14\u5f62\u5f0f\u4e3a? GPT-4V: \u8fd9\u5f20\u56fe\u7247\u5c55\u793a\u7684\u662f\u4e2d\u56fd\u7684\u4f20\u7edf\u6728\u5076\u620f\uff0c\u4e5f\u88ab\u79f0\u4e3a\u201c\u63d0\u7ebf \u6728\u5076\u201d\u3002\u8fd9\u79cd\u8868\u6f14\u827a\u672f\u5f62\u5f0f\u5728\u4e2d\u56fd\u6709\u7740\u60a0\u4e45\u7684\u5386\u53f2\uff0c\u901a\u5e38\u4ee5\u7cbe\u7f8e \u7684\u6728\u5076\u548c\u590d\u6742\u7684\u64cd\u4f5c\u6280\u5de7\u8457\u79f0\u3002\u6728\u5076\u620f\u5728\u4e2d\u56fd\u7684\u5404\u4e2a\u5730\u533a\u6709\u4e0d\u540c \u7684\u98ce\u683c\u548c\u8868\u6f14\u7279\u8272\u3002\u56fe\u7247\u4e2d\u7684\u6728\u5076\u670d\u9970\u7eda\u4e3d\uff0c\u8868\u6f14\u8005\u901a\u8fc7\u7ebf\u63a7\u5236 \u6728\u5076\u7684\u52a8\u4f5c\uff0c\u8fdb\u884c\u6545\u4e8b\u53d9\u8ff0\u548c\u8868\u6f14\u3002 InternVL 1.5: \u56fe\u4e2d\u6240\u793a\u7684\u4f20\u7edf\u8868\u6f14\u5f62\u5f0f\u4e3a\u5e03\u888b\u620f\uff0c\u5c5e\u4e8e\u6728\u5076\u620f \u7684\u4e00\u79cd\uff0c\u5e03\u888b\u620f\u662f\u4e00\u79cd\u53e4\u8001\u7684\u6c49\u65cf\u6c11\u95f4\u620f\u66f2\u8868\u6f14\u827a\u672f\u5f62\u5f0f\uff0c\u5e03\u888b \u620f\uff08\u53c8\u79f0\u5e03\u888b\u6728\u5076\u620f\u3001\u624b\u64cd\u5080\u5121\u620f\u3001\u638c\u4e2d\u620f\u3001\u5c0f\u7b3c\u3001\u6307\u82b1\u620f\uff09\u662f \u4e00\u79cd\u6e90\u4e8e17\u4e16\u7eaa\u4e2d\u56fd\u798f\u5efa\u6cc9\u5dde\uff0c\u6d41\u884c\u4e8e\u53f0\u6e7e\u3001\u798f\u5efa\u3001\u5e7f\u4e1c\u7b49\u5730\u7684 \u4f20\u7edf\u620f\u5267\u827a\u672f\u3002\u8868\u6f14\u8005\u7528\u53cc\u624b\u64cd\u7eb5\u5e03\u888b\u5185\u7684\u6728\u5076\uff0c\u901a\u8fc7\u624b\u6307\u7684\u52a8 \u4f5c\u6765\u63a7\u5236\u6728\u5076\u7684\u52a8\u4f5c\u548c\u8868\u60c5\uff0c\u540c\u65f6\u914d\u5408\u97f3\u4e50\u3001\u5531\u8154\u548c\u9053\u767d\uff0c\u6765\u8bb2 \u8ff0\u6545\u4e8b\u3002\u5728\u4e2d\u56fd\u4ee5\u53ca\u4e16\u754c\u534e\u4eba\u5730\u533a\u90fd\u6709\u6d41\u4f20\u3002 User: \u56fe\u4e2d\u662f\u8c01\u7684\u4f5c\u54c1? Options: A. \u5f90\u60b2\u9e3fB. \u9f50\u767d\u77f3C. \u5f20\u5927\u5343 D. \u5218\u6d77\u7c9f GPT-4V: \u8fd9\u5e45\u753b\u7684\u98ce\u683c\u4f3c\u4e4e\u662f\u5f90\u60b2\u9e3f\u7684\uff0c\u4ed6\u4ee5\u8868\u73b0\u9a6c\u7684\u9ad8\u8d85\u6280\u5de7 \u800c\u95fb\u540d\u3002\u5f90\u60b2\u9e3f\u7684\u9a6c\u753b\u7279\u70b9\u662f\u7ebf\u6761\u6d41\u7545\u800c\u6709\u529b\uff0c\u4f53\u73b0\u4e86\u9a6c\u7684\u52a8\u6001 \u7f8e\u548c\u751f\u547d\u529b\u3002\u8fd9\u5e45\u753b\u4e2d\u9a6c\u7684\u8868\u73b0\u4e0e\u5f90\u60b2\u9e3f\u7684\u98ce\u683c\u76f8\u543b\u5408\uff0c\u6240\u4ee5\u6b63 \u786e\u7b54\u6848\u662fA. \u5f90\u60b2\u9e3f\u3002 InternVL 1.5: \u8fd9\u5e45\u753b\u770b\u8d77\u6765\u50cf\u662f\u4e00\u5e45\u4e2d\u56fd\u753b\uff0c\u7279\u522b\u662f\u9a6c\u7684\u63cf\u7ed8 \u65b9\u5f0f\u548c\u98ce\u683c\u3002\u5f90\u60b2\u9e3f\u662f\u4e2d\u56fd\u8457\u540d\u7684\u753b\u5bb6\uff0c\u4ee5\u753b\u9a6c\u800c\u95fb\u540d\uff0c\u4ed6\u7684\u753b \u4f5c\u901a\u5e38\u5177\u6709\u5f88\u5f3a\u7684\u8868\u73b0\u529b\u548c\u52a8\u611f\u3002\u4ece\u56fe\u4e2d\u9a6c\u7684\u59ff\u6001\u548c\u8868\u73b0\u624b\u6cd5\u6765 \u770b\uff0c\u4e0e\u5f90\u60b2\u9e3f\u7684\u98ce\u683c\u975e\u5e38\u76f8\u4f3c\u3002\u5f90\u60b2\u9e3f\u7684\u4f5c\u54c1\u901a\u5e38\u4ee5\u5927\u5199\u610f\u7684\u65b9 \u5f0f\u5448\u73b0\uff0c\u6ce8\u91cd\u8868\u73b0\u7269\u8c61\u7684\u7cbe\u795e\u548c\u6c14\u8d28\u3002\u56fe\u4e2d\u9a6c\u7684\u795e\u6001\u548c\u52a8\u6001\u6355\u6349 \u5f97\u975e\u5e38\u751f\u52a8\uff0c\u4f53\u73b0\u4e86\u5f90\u60b2\u9e3f\u5bf9\u9a6c\u5339\u7684\u6df1\u523b\u7406\u89e3\u548c\u7cbe\u6e5b\u7684\u7ed8\u753b\u6280 \u5de7\u3002\u56e0\u6b64\uff0c\u6839\u636e\u63d0\u4f9b\u7684\u9009\u9879\uff0c\u56fe\u4e2d\u4f5c\u54c1\u5f88\u53ef\u80fd\u662f\u5f90\u60b2\u9e3f\u7684\u4f5c\u54c1\u3002 Figure 10. Examples for Chinese Traditional Culture. Blue highlights the standout answers. 11 User: Tell me the locations of all the number 10 jerseys in the picture. GPT-4V: In the image, there are two players wearing number 10 jerseys. The first one is located towards the center, slightly to the right while facing the image; he\u2019s wearing a red jersey and is the second person from the right in the group wearing red and white uniforms. The second player is wearing a light blue jersey and is positioned on the far left side of the image; he\u2019s the third person from the left in the group wearing blue and white uniforms. InternVL 1.5: In the image, there are two number 10 jerseys visible: 1. The first player wearing the number 10 jersey is located in the center of the image. He is wearing a red jersey with the number 10 on it, and he is standing with his hands on his hips. 2. The second player with the number 10 jersey is on the far right of the image. He is wearing a blue jersey with the number 10 on it and is crouching down with his hands on his knees. User: What is the man in the picture doing with his right hand? GPT-4V: The man in the picture is pointing with his right hand towards a beluga whale that appears to be behind a glass barrier. He seems to be doing this to show the whale to the child he is holding, likely to engage the child\u2019s interest or to teach the child about the whale. InternVL 1.5: The man in the picture is pointing at a large dolphin through a glass barrier. He is likely making this gesture to draw attention to the dolphin, either for the child he is holding or for others around them. This could be to educate the child about the dolphin, to highlight a specific behavior or feature of the dolphin, or simply to share the experience with others. The gesture is a common way to direct someone\u2019s attention to something of interest. Figure 11. Examples for Object Localization. Blue highlights the standout answers, while red highlights incorrect responses. User: What are the similarities and differences between these two pictures? GPT-4V: The similarities between the two images are that they both feature red pandas, a species known for their reddish-brown fur, bushy tails, and adorable faces. In both images, the red pandas are resting on tree branches with green foliage in the background, highlighting their arboreal (tree-dwelling) nature. As for the differences, the first image shows a red panda in a curled-up position, appearing to be asleep with its tail wrapped around its body for warmth. This posture also serves as camouflage, making the panda resemble a tree branch or part of the foliage. The angle of the shot is from above the animal, and the focus is on the texture and patterns of the fur. In contrast, the second image captures the red panda in a more alert and awake state, with its head raised and looking toward the camera. This posture allows a clear view of the panda\u2019s facial features, such as its ears, eyes, and the white markings. The angle of the second image is more direct and at the same level as the animal, which gives a better sense of the panda\u2019s overall appearance and stance. InternVL 1.5: The two pictures both feature red pandas, which are small, arboreal mammals native to the temperate forests of the Himalayas, southwestern China, and the northern part of Myanmar. They are known for their distinctive reddish-brown fur, white markings around their eyes, and bushy tails. In the first picture, the red panda is curled up and appears to be sleeping or resting on a branch. The background is blurred with green foliage, suggesting a natural, forested habitat. The focus is on the red panda\u2019s relaxed posture and the texture of its fur. In the second picture, the red panda is sitting upright on a branch, with its front paws resting on the branch. The animal is looking directly at the camera, giving a more alert and curious expression. The background is also green and blurred, indicating a similar natural setting. The similarities between the two pictures include the presence of the red panda, the branch it is on, and the green, forested background. The differences lie in the red panda\u2019s posture and expression, with the first picture showing a more relaxed and the second a more alert state. Additionally, the first picture has a more focused view of the red panda\u2019s fur and form, while the second picture captures the animal\u2019s face and eyes more clearly. Figure 12. Examples for Multi-Image Dialogue. Blue highlights the standout answers. 12"
16
+ }
intro_8K/test_introduction_short_2404.16831v2.json ADDED
@@ -0,0 +1,16 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16831v2",
3
+ "title": "The Third Monocular Depth Estimation Challenge",
4
+ "abstract": "This paper discusses the results of the third edition of the Monocular Depth\nEstimation Challenge (MDEC). The challenge focuses on zero-shot generalization\nto the challenging SYNS-Patches dataset, featuring complex scenes in natural\nand indoor settings. As with the previous edition, methods can use any form of\nsupervision, i.e. supervised or self-supervised. The challenge received a total\nof 19 submissions outperforming the baseline on the test set: 10 among them\nsubmitted a report describing their approach, highlighting a diffused use of\nfoundational models such as Depth Anything at the core of their method. The\nchallenge winners drastically improved 3D F-Score performance, from 17.51% to\n23.72%.",
5
+ "authors": "Jaime Spencer, Fabio Tosi, Matteo Poggi, Ripudaman Singh Arora, Chris Russell, Simon Hadfield, Richard Bowden, GuangYuan Zhou, ZhengXin Li, Qiang Rao, YiPing Bao, Xiao Liu, Dohyeong Kim, Jinseong Kim, Myunghyun Kim, Mykola Lavreniuk, Rui Li, Qing Mao, Jiang Wu, Yu Zhu, Jinqiu Sun, Yanning Zhang, Suraj Patni, Aradhye Agarwal, Chetan Arora, Pihai Sun, Kui Jiang, Gang Wu, Jian Liu, Xianming Liu, Junjun Jiang, Xidan Zhang, Jianing Wei, Fangjun Wang, Zhiming Tan, Jiabao Wang, Albert Luginov, Muhammad Shahzad, Seyed Hosseini, Aleksander Trajcevski, James H. Elder",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-27",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Diffusion AND Model",
14
+ "gt": "Monocular depth estimation (MDE) aims at predicting the distance from the camera to the points of the scene de- picted by the pixels in the captured image. It is a highly ill-posed problem due to the absence of geometric priors usually available from multiple images. Nonetheless, deep learning has rapidly advanced this field and made it a real- ity, enabling results far beyond imagination. 1Independent 2University of Bologna 3Blue River Technology 4Oxford Internet Institute 5University of Surrey 6ByteDance 7University of Chinese Academy of Science 8RGA Inc. 9Space Research Institute NASU-SSAU, Kyiv, Ukraine 10Northwestern Polytechnical University, Xi\u2019an 11Indian Institute of Technology, Delhi 12Harbin Institute of Technology 13Fujitsu 14GuangXi University 15University of Reading 16York University For years, most proposed approaches have been tailored to training and testing in a single, defined domain \u2013 e.g., automotive environments [33] or indoor settings [64] \u2013 of- ten ignoring their ability to generalize to unseen environ- ments. Purposely, the Monocular Depth Estimation Chal- lenge (MDEC) in the last years has encouraged the commu- nity to delve into this aspect, by proposing a new benchmark for evaluating MDE models on a set of complex environ- ments, comprising natural, agricultural, urban, and indoor settings. The dataset comes with a validation and a testing split, without any possibility of training/fine-tuning over it thus forcing the models to generalize. While the first edition of MDEC [90] focused on bench- marking self-supervised approaches, the second [91] addi- tionally opened the doors to supervised methods. During the former, the participants outperformed the baseline [30, 92] in all image-based metrics (AbsRel, MAE, RMSE), but could not improve pointcloud reconstructions [65] (F- Score). The latter, instead, brought new methods capable of outperforming the baseline on both aspects, establishing a new State-of-the-Art (SotA). The third edition of MDEC, detailed in this paper, ran in conjunction with CVPR2024, following the successes of the second one by allowing sub- missions of methods exploiting any form of supervision, e.g. supervised, self-supervised, or multi-task. Following previous editions, the challenge was built around SYNS-Patches [1, 92]. This dataset was chosen because of the variegated diversity of environments it con- tains, including urban, residential, industrial, agricultural, natural, and indoor scenes. Furthermore, SYNS-Patches contains dense high-quality LiDAR ground-truth, which is very challenging to obtain in outdoor settings. This allows 1 arXiv:2404.16831v2 [cs.CV] 27 Apr 2024 for a benchmark that accurately reflects the real capabilities of each model, potentially free from biases. While the second edition counted 8 teams outperforming the SotA baseline in either pointcloud- or image-based met- rics, this year 19 submissions achieved this goal. Among these, 10 submitted a report introducing their approach, 7 of whose outperformed the winning team of the second edi- tion. This demonstrates the increasing interest \u2013 and efforts \u2013 in MDEC. In the remainder of the paper, we will provide an overview of each submission, analyze their results on SYNS-Patches, and discuss potential future developments.",
15
+ "main_content": "Supervised MDE. Early monocular depth estimation (MDE) efforts utilized supervised learning, leveraging ground truth depth labels. Eigen et al. [26] proposed a pioneering end-to-end convolutional neural network (CNN) for MDE, featuring a scale-invariant loss and a coarse-to-fine architecture. Subsequent advancements incorporated structured prediction models such as Conditional Random Fields (CRFs) [54, 120] and regression forests [82]. Deeper network architectures [80, 109], multi-scale fusion [63], and transformer-based encoders [8, 16, 79] further enhanced performance. Alternatively, certain methods framed depth estimation as a classification problem [6, 7, 28, 51]. Novel loss functions were also introduced, including gradientbased regression [53, 104], the berHu loss [50], an ordinal relationship loss [14], and scale/shift invariance [80]. Self-Supervised MDE. To overcome the dependence on costly ground truth annotations, self-supervised methods were developed. Garg et al. [30], for the first time, proposed an algorithm based on view synthesis and photometric consistency across stereo image pairs, the importance of which for was extensively analyzed by Poggi et al. [74]. Godard et al. [34] introduced Monodepth, which incorporated differentiable bilinear interpolation [44], virtual stereo prediction, and a SSIM+L1 reconstruction loss. Zhou et al. [130] presented SfM-Learner, which required only monocular video supervision by replacing the known stereo transform with a pose estimation network. Following the groundwork laid by these frameworks, subsequent efforts focused on refining the depth estimation accuracy by integrating feature-based reconstructions [89, 119, 124], semantic segmentation [122], adversarial losses [3], proxydepth representations [5, 18, 48, 70, 83, 97, 107], trinocular supervision [75] and other constraints [9, 61, 103]. Other works focused on improving depth estimates at object boundaries [96, 99]. Moreover, attention has also been given to challenging cases involving dynamic scenarios during the training phase, which pose difficulties in providing accurate supervision signals for such networks. This has been addressed, for example, by incorporating uncertainty estimates [48, 73, 112], motion masks [11, 22, 37, 98], optical flow [59, 81, 118], or via the minimum reconstruction loss [35]. Finally, several architectural innovations, including 3D (un)packing blocks [38], position encoding [36], transformer-based encoders [2, 127], sub-pixel convolutions [71], progressive skip connections [60], and self-attention decoders [46, 110, 129], allowed further improvements. Among them, lightweight models tailored for real-time applications with memory and runtime constraints have also been developed [4, 19, 43, 68, 69, 72, 108]. Generalization and \u201cIn-the-Wild\u201d MDE. Estimating depth in the wild refers to the challenging task of developing methods that can generalize to a wide range of unknown settings [14, 15]. Early works in this area focused on predicting relative (ordinal) depth [14, 15]. Nonetheless, the limited suitability of relative depth in many downstream contexts has driven researchers to explore affine-invariant depth estimation [53, 113]. In the affine-invariant setting, depth is estimated up to an unknown global offset and scale, offering a compromise between ordinal and metric representations. Researchers have employed various strategies to achieve generalization, including leveraging annotations from large datasets to train monocular depth models [79, 80, 111], including internet photo collections [53, 113], as well as from automotive LiDAR [33, 38, 42], RGB-D/Kinect sensors [17, 64, 95], structure-from-motion reconstructions [52, 53], optical flow/disparity estimation [80, 109], and crowd-sourced annotations [14]. However, the varying accuracy of these annotations may have impacted model performance, and acquiring new data sources remains a challenge, motivating the exploration of self-supervised approaches [116, 125]. For instance, KBR(++) [93, 94] leverage large-scale self-supervision from curated internet videos. The transition from CNNs to vision transformers has further boosted performance in this domain, as demonstrated by DPT (MiDaS v3) [79] and Omnidata [25]. Furthermore, a few works like Metric3D [114] and ZeroDepth [40] revisited the depth estimation by explicitly feeding camera intrinsics as additional input. A notable recent trend involves training generative models, especially diffusion models [29, 41, 88] for monocular depth estimation [24, 45, 47, 84, 85]. Adverse Weather and Transparent/Specular Surfaces. Existing monocular depth estimation networks have struggled under adverse weather conditions. Approaches have addressed low visibility [89], employed day-night branches using GANs [100, 126], utilized additional sensors [31], or faced trade-offs [101]. Recently, md4all [32] enabled robust performance across conditions without compromising ideal setting performance. Furthermore, estimating depth for transparent or mirror (ToM) surfaces posed a unique challenge [121, 123]. Costanzino et al. [21] is the only work dedicated to this, introducing novel datasets [77, 78]. Their 2 0 20 40 60 80 100 120 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 Outdoor-Urban 0 20 40 60 80 100 120 0.00 0.01 0.02 0.03 0.04 0.05 0.06 Outdoor-Natural 0 20 40 60 80 100 120 0.00 0.01 0.02 0.03 0.04 0.05 0.06 Outdoor-Agriculture 0 20 40 60 80 100 120 0.00 0.05 0.10 0.15 0.20 Indoor Figure 1. SYNS-Patches Properties. Top: Distribution of images per category in the validation split and the test split respectively. Bottom: Depth distribution per scene type \u2013 indoor scenes are limited to 20m, while outdoor scenes reach up to 120m; natural and Agriculture scenes contain a larger percentage of long-range depths (20-80m), while urban scenes focus on the mid-range (20-40m). approach relied on segmentation maps or pre-trained networks, generating pseudo-labels by inpainting ToM objects and processing them with a pre-trained depth model [80], enabling fine-tuning of existing networks to handle ToM surfaces. 3. The Monocular Depth Estimation Challenge The third edition of the Monocular Depth Estimation Challenge1 was organized on CodaLab [67] as part of a CVPR2024 workshop. The development phase lasted four weeks, using the SYNS-Patches validation split. During this phase, the leaderboard was public but the usernames of the participants were anonymized. Each participant could see the results achieved by their own submission. The final phase of the challenge was open for three weeks. At this stage, the leaderboard was completely private, disallowing participants to see their own scores. This choice was made to encourage the evaluation on the validation split rather than the test split and, together with the fact that all ground-truth depths were withheld, severely avoiding any possibility of overfitting over the test set by conducting repeated evaluations on it. Following the second edition [91], any form of supervision was allowed, in order to provide a more comprehensive overview of the monocular depth estimation field as a whole. This makes it possible to better study the gap between different techniques and identify possible, fu1 https://codalab.lisn.upsaclay.fr/competitions/17161 ture research directions. In this paper, we report results only for submissions that outperformed the baseline in any pointcloud-/image-based metric on the Overall dataset. Dataset. The challenge takes place based on the SYNS-Patches dataset [1, 92], chosen due to the diversity of scenes and environments. A breakdown of images per category and some representative examples are shown in Figure 1 and Figure 2. SYNS-Patches also provides extremely high-quality dense ground-truth LiDAR, with an average coverage of 78.20% (including sky regions). Given such dense ground-truth, depth boundaries were obtained using Canny edge-detection on the log-depth maps, allowing us to compute additional fine-grained metrics for these challenging regions. As outlined in [91, 92], the images were manually checked to remove dynamic object artifacts. Evaluation. Participants were asked to provide the up-toscale disparity prediction for each dataset image. The evaluation server bilinearly upsampled the predictions to the target resolution and inverted them into depth maps. Although self-supervised methods trained with stereo pairs and supervised methods using LiDAR or RGB-D data should be capable of predicting metric depth, in order to ensure comparisons are as fair as possible, the evaluation aligned any predictions with the ground-truth using the median depth. We set a maximum depth threshold of 100 meters. Metrics. Following the first and second editions of the challenge [90, 91], we use a mixture of image-/pointcloud/edge-based metrics. Image-based metrics are the most common (MAE, RMSE, AbsRel) and are computed using 3 Figure 2. SYNS-Patches Dataset. We show samples from diverse scenes, including complex urban, natural, and indoor spaces. Highquality ground-truth depth covers about 78.20% of the image, from which depth boundaries are computed as Canny edges in log space. pixel-wise comparisons between the predicted and groundtruth depth map. Pointcloud-based metrics [65] (F-Score, IoU, Chamfer distance) instead bring the evaluation in the 3D domain, evaluating the reconstructed pointclouds as a whole. Among these, we select reconstruction F-Score as the leaderboard ranking metric. Finally, edge-based metrics are computed only at depth boundary pixels. This includes image-/pointcloud-based metrics and edge accuracy/completion metrics from IBims-1 [49]. 4. Challenge Submissions We now highlight the technical details for each submission, as provided by the authors themselves. Each submission is labeled based on the supervision used, including groundtruth (D), proxy ground-truth (D*), DepthAnything [111] pretraining (\u2020) and monocular (M) or stereo (S) photometric support frames. Teams are numbered according to rankings. Baseline \u2013 S J. Spencer [email protected] C. Russell [email protected] S. Hadfield [email protected] R. Bowden [email protected] Challenge organizers\u2019 submission from the first edition. Network. ConvNeXt-B encoder [56] with a base Monodepth decoder [34, 62] from [92]. Supervision. Self-supervised with a stereo photometric loss [30] and edge-aware disparity smoothness [34]. Training. Trained for 30 epochs on Kitti Eigen-Zhou with an image resolution of 192 \u00d7 640. Team 1: PICO-MR \u2013 \u2020D* G. Zhou [email protected] Z. Li [email protected] Q. Rao [email protected] Y. Bao [email protected] X. Liu [email protected] Network. Based on Depth-Anything [111] with a BEiT384L backbone, starting from the authors\u2019 weights pre-trained on 1.5M labeled images and 62M+ unlabeled images. Supervision. The model is fine-tuned in a supervised manner, with proxy labels derived from stereo images. The final loss function integrates the SILog loss, SSIL loss, Gradient loss, and Random Proposal Normalization (RPNL) loss. Training. The network was fine-tuned on the CityScapes dataset [20], resizing the input to 384\u00d7768 resolution, while keeping proxy labels at 1024 \u00d7 \u00d72048 resolution. Random flipping is used to augment data, the batch size is set to 16 and the learning rate to 0.000161. The fine-tuning is carried out to predict metric depth and early stops at 4 epochs, a strategic choice to prevent overfitting and ensure the model\u2019s robustness to new data. Team 3: RGA-Robot \u2013 \u2020S D. Kim [email protected] J. Kim [email protected] M. Kim [email protected] Network. It uses the Depth Anything [111] pre-trained model to estimate relative depth, accompanied by an auxiliary network to convert it into metric depth. This latter is NAFNet [13], processing the final feature maps and relative depth map predicted by the former model together with the input image. Supervision. Self-supervised loss with two main terms: image reconstruction loss and smoothness loss. The former integrates perceptual loss with photometric loss as used in monodepth2 [35], with the former using a pre-trained VGG19 backbone [87], following a similar approach as in ESRGAN [106]. Training. The train is carried out on Kitti Eigen-Zhou with batch size 8 and learning rate 1e\u22124 for 4 epochs. Only NAFNet is trained, while the Depth Anything model remains frozen. Team 4: EVP++ \u2013 \u2020D M. Lavreniuk nick [email protected] Network. The architecture is based on Depth Anything [111], incorporating a VIT-L encoder [23] for feature extraction and the ZoeDepth metric bins module [8] as a de4 coder. This module computes per-pixel depth bin centers, which are linearly combined to produce metric depth. Supervision. The models were trained in a supervised manner using ground-truth depth information obtained from various datasets, employing the SILog loss function. Training. The models were trained on both indoor and outdoor data, respectively on the NYUv2 dataset [64] with an image size of 392 \u00d7 518, and on KITTI [33], Virtual KITTI 2 [10], and DIODE outdoor [102] with an image size of 518\u00d71078. The batch size was set to 16, the learning rate to 0.000161, and the maximum depth to 10 for indoor scenes. For outdoor scenes, the batch size was set to 1, the learning rate to 0.00002, and the maximum depth to 80. Both models were trained for 5 epochs. Team 6: 3DCreators \u2013 \u2020D R. Li [email protected] Q. Mao [email protected] J. Wu [email protected] Y. Zhu [email protected] J. Sun [email protected] Y. Zhang [email protected] Network. An architecture made of two sub-networks. The first model consists of a pre-trained ViT-large backbone [23] from Depth Anything [111] and a ZoeDepth decoder [8]. The second is Metric3D [115], which uses ConvNext-Large [57] backbone and a LeRes decoder [117]. Supervision. The first network is fine-tuned with the KITTI dataset using SILog loss. The second network uses the released pre-trained weights trained by a diverse collection of datasets as detailed in [115]. Training. The first network is fine-tuned using batch size 16 for 5 epochs. At inference, test-time augmentation \u2013 i.e., color jittering and horizontal flipping \u2013 is used to combine the predictions by the two models: the same image is augmented 10 times and processed by the two models, then the predictions are averaged. Team 7: visioniitd \u2013 D S. Patni [email protected] A. Agarwal [email protected] C. Arora [email protected] Network. The model is ECoDepth [66], which provides effective conditioning for the MDE task to diffusion methods like stable diffusion. It is based on a Comprehensive Image Detail Embedding (CIDE) module which utilizes ViT embeddings of the image and subsequently transforms them to yield a semantic context vector. These embeddings are used to condition the pre-trained UNet backbone in Stable Diffusion, which produces hierarchical feature maps from its decoder. These are resized to a common dimension and passed to the Upsampling decoder and depth regressor to produce the final depth. Supervision. Supervised training using the ground truth depth with SILog loss as the loss function with variance focus (\u03bb) 0.85. Ground-truth depth is transformed as 1 (1+x). Training. Trained on NYUv2 [64], KITTI [33], virtual KITTI v2 [10] for 25 epochs, with one-cycle learning rate (min: 3e\u22125, max: 5e\u22124) and batch size 32 on 8\u00d7 A100 GPUs. Team 9: HIT-AIIA \u2013 \u2020D P. Sun [email protected] K. Jiang [email protected] G. Wu [email protected] J. Liu [email protected] X. Liu [email protected] J. Jiang [email protected] Network. It involves the pre-trained Depth Anything encoder and pre-trained CLlP model. The latter is introduced to calculate the similarity between the keywords \u2018indoor\u2019 or \u2018outdoor\u2019 and features extracted from the input image to route it to two, different instances of Depth Anything specialized on indoor or outdoor scenarios. Supervision. Two instances of Depth Anything are finetuned on ground-truth labels, respectively from NYUv2 and KITTI for indoor and outdoor environments. Training. The training resolution is 392 \u00d7 518 on NYUv2 and 384 \u00d7 768 on KITTI. The batch size is 16 and both instances are trained for 5 epochs. Team 10: FRDC-SH \u2013 \u2020D X. Zhang [email protected] J. Wei [email protected] F. Wang [email protected] Z. Tan [email protected] Network. The depth network is the Depth Anything [111] pre-trained model \u2013 based on ZoeDepth [8] with a DPT BEiT L384 \u2013 and further fine-tuned. Supervision. Trained on ground-truth depth, with SILog and Hyperbolic Chamfer Distance losses. Training. The model is fine-tuned on NYU-v2 [64], 7Scenes [86], SUNRGBD [128], DIODE [102], KITTI [33], DDAD [39], and Argoverse [12] \u2013 without any resizing of the image resolution \u2013 for 20 epochs with batch size 32, a learning rate set to 1.61e-04, and a 0.01 weight decay. Team 15: hyc123 \u2013 D J. Wang [email protected] Network. Swin encoder [55] with skip connections and a decoder with channel-wise self-attention modules. Supervision. Trained with ground truth depths, using a loss consisting of a combination of two L1 losses and an SSIM loss, weighted accordingly. Training. The model was trained on Kitti Eigen-Zhou split using images of size 370 \u00d7 1224 for 100 epochs. 5 Team 16: ReadingLS \u2013 \u2020MD* A. Luginov [email protected] M. Shahzad [email protected] Network. The depth network is SwiftDepth [58], a compact model with only 6.4M parameters. Supervision. Self-supervised monocular training with the minimum reconstruction loss [35], enhanced by offline knowledge distillation from a large MDE model [111]. Training. The model is trained in parallel on Kitti Eigen-Zhou and a selection of outdoor YouTube videos, similarly to KBR [93]. Both training and prediction are performed with the input resolution of 192 \u00d7 640. The teacher model [111] is not trained on either these datasets or SYNSPatches. Team 19: Elder Lab \u2013 D S. Hosseini [email protected] A. Trajcevski [email protected] J. H. Elder [email protected] Network. An off-the-shelf semantic segmentation model [105] is used at first to segment the image. Then, the depth of pixels on the ground plane is estimated by predicting the camera angle from the height of the highest pixel on the ground. Then, depth is propagated vertically for pixels above the ground, while the Manhattan frame is estimated with [76] to identify both Manhattan and non-Manhattan segments in the image and propagate depth along them in 3D space. Finally, the depth map is completed according to heat equations [27], with pixels for which depth has been already estimated imposing forcing conditions, while semantic boundaries and the image frame impose reflection boundary conditions. Supervision. Ground-truth depth is used for training three kernel regression models. Training. Three simple statistical models are trained on CityScapes [20] and NYUv2 [64]: 1) A kernel regression model to estimate ground elevation angle from the vertical image coordinate of the highest observed ground pixel. The ground truth elevation angle is computed by fitting a plane (constrained to have zero roll) to the ground truth ground plane coordinates; 2) A kernel regression model to estimate the depth of ground pixels from their vertical coordinate, conditioned on semantic class; 3) median depth of non-ground pixels in columns directly abutting the bottom of the image frame, conditioned on semantic class. 5. Results Submitted methods were evaluated on the testing split of SYNS-Patches [1, 92]. Participants were allowed to submit methods without any restriction on the supervision or the predictions by the model, which can be either relative or metric. Accordingly, to ensure a fair comparison among the methods, the submitted predictions are aligned to groundtruth depths according to median depth scaling. 5.1. Quantitative Results Table 1 highlights the results of this third edition of the challenge, with the top-performing techniques, ordered using FScore performance, achieving notable improvements over the baseline method. A first, noteworthy observation is the widespread adoption of the Depth Anything model [111], pre-trained on 62M of images, as the backbone architecture by the leading teams, including PICO-MR, RGA-Robot, EVP++, 3DCreators, HIT-AIIA, FRDC-SH, and ReadingLS, demonstrating its effectiveness and versatility. Specifically, Team PICO-MR, which secured the top position on the leaderboard, achieved an F-score of 23.72, outperforming the baseline method by a remarkable 72.9%. This represents a significant improvement over the previous state-of-the-art method, DJI&ZJU, which achieved an F-score of 17.51 in the \u201cThe Second Monocular Depth Estimation Challenge\u201d [91]. In particular, Team PICO-MR\u2019s result shows a 35.5% increase in performance compared to DJI&ZJU, highlighting the rapid progress made in monocular depth estimation within a relatively short period. This improvement can be also clearly observed in the other metrics considered, both accuracy and error \u2013 notably, achieving the second absolute results on F-Edges, MAE, and RMSE. Their success can be attributed to the fine-tuning of the Depth Anything model on the Cityscapes dataset using a combination of SILog, SSIL, Gradient, and Random Proposal Normalization losses, as well as their strategic choice of fine-tuning for a few epochs to prevent overfitting and ensure robustness to unseen data. Team RGA-Robot, in the third place, achieved an Fscore of 22.79, outperforming the baseline by 66.1%. Their novel approach of augmenting the Depth Anything model, maintained frozen, with an auxiliary network, NAFNet, to convert relative depth predictions into metric depth, combined with self-supervised loss terms, shows the effectiveness of this approach in enhancing depth accuracy. In terms of the F-Edges metric, this method achieves the best result. Team EVP++, ranking fourth, achieved an F-score of 20.87, surpassing the baseline by 52.1%. Their approach involved training the Depth Anything model on both indoor and outdoor datasets, adapting image sizes, batch sizes, and learning rates to each scenario, and highlighting the importance of tailoring model parameters to the specific characteristics of the target environment. This strategy notably improves the results in terms of standard 2D error metrics, yielding the lowest MAE, RMSE, and AbsRel. Several other teams also surpassed both the baseline method and the previous state-of-the-art from the second edition of the challenge. Team 3DCreators achieved an 6 Table 1. SYNS-Patches Results. We provide metrics across the whole test split of the dataset. Top-performing entries generally leverage the pre-trained Depth Anything [111] model. Only a few methods use self-supervised losses or proxy depth labels. Train Rank F\u2191 F-Edges\u2191 MAE\u2193 RMSE\u2193 AbsRel\u2193 Acc-Edges\u2193 Comp-Edges\u2193 PICO-MR \u2020D* 1 23.72 11.01 3.78 6.61 21.24 3.90 4.45 Anonymous ? 2 23.25 10.78 3.87 6.70 21.70 3.59 9.86 RGA-Robot \u2020S 3 22.79 11.52 5.21 9.23 28.86 4.15 0.90 EVP++ \u2020D 4 20.87 10.92 3.71 6.53 19.02 2.88 6.77 Anonymous ? 5 20.77 9.96 4.33 7.83 27.80 3.45 13.25 3DCreators \u2020D 6 20.42 10.19 4.41 7.89 23.94 3.61 5.80 visioniitd D 7 19.07 9.92 4.53 7.96 23.27 3.26 8.00 Anonymous ? 8 18.60 9.43 3.92 7.16 20.12 2.89 15.65 HIT-AIIA \u2020D 9 17.83 9.14 4.11 7.73 21.23 2.95 17.81 FRDC-SH \u2020D 10 17.81 9.75 5.04 8.92 24.01 3.16 14.16 Anonymous ? 11 17.57 9.13 4.28 8.36 23.35 3.18 20.66 Anonymous ? 12 16.91 9.07 4.14 7.35 22.05 3.24 18.52 Anonymous ? 13 16.71 9.25 5.48 11.05 34.20 2.57 18.04 Anonymous ? 14 16.45 8.89 5.29 10.53 33.67 2.60 18.73 hyc123 D 15 15.92 9.17 8.25 13.88 43.88 4.11 0.74 ReadingLS \u2020MD* 16 14.81 8.14 5.01 8.94 29.39 3.28 30.28 Baseline S 17 13.72 7.76 5.56 9.72 32.04 3.97 21.63 Anonymous ? 18 13.71 7.55 5.49 9.44 30.74 3.61 18.36 Anonymous ? 19 11.90 8.08 6.33 10.89 30.46 2.99 33.63 Elder Lab D 20 11.04 7.09 8.76 15.86 63.32 3.22 40.61 M=Monocular \u2013 S=Stereo \u2013 D*=Proxy Depth \u2013 D=Ground-truth Depth \u2013 \u2020=Pre-trained Depth Anything model F-score of 20.42, outperforming the baseline by 48.8% by fine-tuning and combining predictions from the Depth Anything model and Metric3D. Team visioniitd follows surpassing the baseline using ECoDepth, which conditions Stable Diffusion\u2019s UNet backbone with Comprehensive Image Detail Embeddings. Team HIT-AIIA and FRDC-SH also achieved notable improvements, with F-score of 17.83 and 17.81, respectively, using specialized model instances and fine-tuning on diverse datasets. Finally, the remaining teams outperformed the baseline either on the F-score or any of the other metrics, yet not surpassing the winner of the previous edition. Team hyc123, with an F-score of 15.92, outperformed the baseline by 16.0% using a Swin encoder with skip connections and a decoder with channel-wise self-attention modules, while Team ReadingLS outperforms the baseline by distilling knowledge from Depth Anything to a lightweight network based on SwiftDepth, further improved using minimal reconstruction loss during training. Finally, Team Elder Lab employed an off-the-shelf semantic segmentation model and estimated depth using techniques such as predicting camera angle, propagating depth along Manhattan and non-Manhattan segments, and completing the depth map using heat equations. They achieved an F-score of 11.04, 19.5% lower than the baseline score of 13.72, yet they obtained 3.22 Acc-Edge, beating the baseline. 5.2. Qualitative Results Figure 3 provides qualitative results for the depth predictions of each submission. A notable trend among the top-performing teams, such as PICO-MR, RGA-Robot, EVP++, and 3DCreators, is the adoption of the Depth Anything model as a backbone architecture. While Depth Anything represents the current state-of-the-art in monocular depth estimation, the qualitative results highlight that there are still significant challenges in accurately estimating depth, particularly for thin structures in complex outdoor scenes. This is evident in columns 2, 4, 5, and 6 of Figure 3, where objects like trees and branches are not well-recovered, despite the impressive quantitative performance of these methods as shown in Table 1. Interestingly, Team visioniitd, which employs a novel approach called ECoDepth to condition Stable Diffusion\u2019s UNet backbone with Comprehensive Image Detail Embeddings, demonstrates a remarkable ability to estimate depth for thin structures. Yet, they are outperformed quantitatively by other methodologies, suggesting that estimating depth in smooth regions may be more challenging than in thin structures. The qualitative results also reveal some method-specific anomalies. For instance, hyc123 exhibits salt-and-pepper noise artifacts, while Elder Lab\u2019s method, which ranks last, generates overly smooth depth maps that lose important scene objects. These anomalies highlight the importance of developing robust techniques that can handle diverse scene characteristics. Grid-like artifacts are observed in the predictions of top-performers PICO-MR and RGA-Robot, particularly in regions where the network seems uncertain about depth estimates. This suggests that further improvements in network architecture and training strategies may be necessary to mitigate these artifacts. 7 GT PICO-MR RGA-Robot EVP++ 3DCreators visioniitd HIT-AIIA FRDC-SH hyc123 ReadingLS Baseline Elder Lab Figure 3. SYNS-Patches Depth Visualization. Best viewed in color and zoomed in. Methods are ranked based on their F-Score in Table 1. We can appreciate how thin structures still represent one of the hardest challenges to any method, such as branches and railings, for instance. Near depth discontinuities, most approaches tend to produce \u201chalos\u201d, interpolating between foreground and background objects and thus failing to perceive sharp boundaries. Nonetheless, most methods expose higher level of detail compared to the baseline. The indoor scenario in the last column shows the strong performance of methods like PICO-MR, EVP++, HITAIIA, and FRDC-SH in estimating scene structure. This can be attributed to their use of large-scale pre-training, fine-tuning on diverse datasets, and carefully designed loss functions that capture both global and local depth cues. However, all methods still exhibit over-smoothing issues at depth discontinuities, manifesting as halo effects. While they outperform the baseline in this regard, likely due to their supervised training with ground truth or proxy labels, there remains significant room for improvement. A notable limitation across all methods is the inability to effectively estimate depth for non-Lambertian surfaces, such as glass or transparent objects. This is evident in the penultimate right column and the first column, corresponding to the windshield. The primary reason for this limitation is the lack of accurate supervision for such surfaces in the training data, highlighting the need for novel techniques and datasets that explicitly address this challenge. In conclusion, the qualitative results provide valuable insights into the current state of monocular depth estimation methods. While the adoption of large-scale pretraining and carefully designed architectures has led to significant improvements, challenges persist in accurately estimating depth for thin structures, smooth regions, and nonLambertian surfaces. Addressing these limitations through novel techniques, improved training strategies, and diverse datasets will be crucial for further advancing this field. 6. Conclusions & Future Work This paper has summarized the results for the third edition of MDEC. Over the various editions of the challenge, we have seen a drastic improvement in performance, showcasing MDE \u2013 in particular real-world generalization \u2013 as an exciting and active area of research. With the advent of the first foundational models for MDE during the last months, we observed a diffused use of frameworks such as Depth Anything [111]. This ignited a major boost to the results submitted by the participants, with a much higher impact compared to the specific kind of supervision chosen for the challenge. Nonetheless, as we can appreciate from the qualitative results, any methods still struggle to accurately predict fine structures and discontinuities, hinting that there is still room for improvement despite the massive amount of data used to train Depth Anything. We hope MDE will continue to attract new researchers and practitioners to this field and renew our invitation to participate in future editions of the challenge. Acknowledgments. This work was partially funded by the EPSRC under grant agreements EP/S016317/1, EP/S016368/1, EP/S016260/1, EP/S035761/1. 8"
16
+ }
intro_8K/test_introduction_short_2404.16895v3.json ADDED
@@ -0,0 +1,16 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16895v3",
3
+ "title": "QuERLoc: Towards Next-Generation Localization with Quantum-Enhanced Ranging",
4
+ "abstract": "Remarkable advances have been achieved in localization techniques in past\ndecades, rendering it one of the most important technologies indispensable to\nour daily lives. In this paper, we investigate a novel localization approach\nfor future computing by presenting QuERLoc, the first study on localization\nusing quantum-enhanced ranging. By fine-tuning the evolution of an entangled\nquantum probe, quantum ranging can output the information integrated in the\nprobe as a specific mapping of distance-related parameters. QuERLoc is inspired\nby this unique property to measure a special combination of distances between a\ntarget sensor and multiple anchors within one single physical measurement.\nLeveraging this capability, QuERLoc settles two drawbacks of classical\nlocalization approaches: (i) the target-anchor distances must be measured\nindividually and sequentially, and (ii) the resulting optimization problems are\nnon-convex and are sensitive to noise. We first present the theoretical\nformulation of preparing the probing quantum state and controlling its dynamic\nto induce a convexified localization problem, and then solve it efficiently via\noptimization. We conduct extensive numerical analysis of QuERLoc under various\nsettings. The results show that QuERLoc consistently outperforms classical\napproaches in accuracy and closely follows the theoretical lowerbound, while\nmaintaining low time complexity. It achieves a minimum reduction of 73% in RMSE\nand 97.6% in time consumption compared to baselines. By introducing range-based\nquantum localization to the mobile computing community and showing its superior\nperformance, QuERLoc sheds light on next-generation localization technologies\nand opens up new directions for future research.",
5
+ "authors": "Entong He, Yuxiang Yang, Chenshu Wu",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-05-04",
8
+ "primary_cat": "cs.ET",
9
+ "cats": [
10
+ "cs.ET"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Parameter AND Efficient AND Fine AND Tuning",
14
+ "gt": "Decades of efforts has been devoted into positioning techniques including the Global Positioning System (GPS) and various indoor localization systems [5, 30], which have become an indispensable part of our lives. Range-based localization schemes locate sensors based on information of Euclidean distances between sensors and neighbouring anchor nodes, which is collected by their casual asso- ciation with the received signals at sensors. They have been applied to different technologies, such as GPS, WiFi, mmWave/UWB radars and ultrasound, etc. Typical ranging models to be exploited are the Time of Arrival (ToA), Time Difference of Arrival (TDoA), Angular of Arrival (AoA), and Received Signal Strength Indicator (RSSI) [25]. Apparently, the localization performance depends heavily on the ranging accuracy. In general, factors affecting the ranging accuracy of the existing methods is multiple-fold, including signal bandwidth, carrier frequency, and environmental effects [25]. For instance, ac- curacy of GPS degrades drastically in presence of blockings due to the deterioration of distance measurement, and WiFi localization Entangled Probing State Preparation Controlled Evolution Relative Phase Measurement Localization with Optimizer Figure 1: Workflow of QuERLoc suffers from low accuracy due to limited bandwidth [22], synchro- nization offsets and multipath effect [20]. Given that the signals are inaccurate, the distance estimates from the signals are consequently perturbed. In the classical picture, distances are measured individu- ally (or in pairs) and sequentially. This could lead to either under- utilization of available anchors or accumulated disturbances in the ranging model with the growth in the quantity of distance rangings. These often result in degraded solution quality, primarily due to the lack of correlations between each distance ranging. For instance, issues such as ill-conditioned least squares [37] and large geometric dilution of precision (GDOP) affect trilateration-based localization. Additionally, significant deviations in the optimal solutions occur when maximum likelihood estimation (MLE) or error minimization [3, 36, 40] techniques are applied to location estimation. To make matters worse, range-based localization with classical ranging mod- els is frequently reduced to an instance of non-convex optimization, which is NP-hard [1]. Although various techniques can be applied to convexify the objective, they could commonly cause problems such as lack of scalability and existence of optimality gap [1]. Despite the significant progress in enhancing ranging and achieving robust localization [3, 5, 20, 32, 38], their accuracy remains inherently and physically limited within the classical framework. In this paper, we foresee an emerging opportunity in quantum- based ranging to break such limits for next-generation localization systems. We propose QuERLoc, a localization approach assisted by Quantum-Enhanced Ranging (QuER), which aims to concurrently tackle the co-existence of Gaussian noise accumulation from itera- tive ranging process, and the non-convexity objective arise from solving error minimization problem [3]. By introducing quantum metrology [12\u201314] and quantum control theory [10, 33] and har- nessing entangled probing states, QuER measures the Euclidean distances between the target and anchor nodes in a correlated man- ner rather than an isolated one. Schematic diagram is shown in Fig. 1. Quantum metrology infers the parameter related to distance 1 arXiv:2404.16895v3 [cs.ET] 4 May 2024 , QuERLoc ranging based on the physical dynamics of the probing system. In this context, we employ quantum bits, known as qubits, to approxi- mate the probes in use. By manually controlling the dynamics of a qubit and utilizing the unique property of quantum entanglement, we expect that the readout from the metrology system would admit a linear combination of the square of distances through the solution of governing Schr\u00f6dinger equation [2, 28]. While classical ranging methods can algorithmically compute such combinations through repeated measurements, this approach tends to increase overhead and exacerbate measurement errors. In contrast, QuER exploits the inner tension among entangled qubits, enabling the measurement of sophisticated distance combinations with the same level of error as a single target-anchor distance measurement. A salient feature of QuERLoc is that by preparing a special prob- ing state corresponding to specific scheme of distance combination, the induced MLE problem would be convexified. Solving the in- duced optimization would be computationally efficient through computing a least-square (LS) regression [37], while the derived so- lution maintains substantial reliability even when subjected to typi- cal noise conditions. By a generic configuration of QuER scheme, \ud835\udc51 rangings are adequate for QuERLoc to generate highly dependable position estimation in \ud835\udc51-dimensional space. To validate our study, we conducted simulations with noisy distance ranging readouts. The results show that QuERLoc significantly outperforms baseline approaches using classical ranging, reduces the error metric RMSE by at least 73% and the average time consumption by at least 97.6%, and consistently saturates the theoretical lower bound of RMSE. To the best of our knowledge, we are the first to study localiza- tion approaches based on quantum ranging. Our research takes a pioneering step towards the utilization of quantum-enhanced rang- ing with quantum entanglement for next-generation localization technologies. We believe this would provide insight to the mobile computing community, and open innovative research opportuni- ties in the fields of both sensor network localization and quantum computing. Our contributions are summarized as follows: \u2022 We introduce the problem of range-based quantum local- ization to the community for the first time and present an analytical formulation of a novel localization approach using quantum-enhanced ranging. \u2022 We propose QuER, which utilizes the evolutionary dynamics of quantum probe under certain external field manipulation and the methodologies in quantum metrology. \u2022 We formulate the localization problem of QuERLoc under specific probing scheme of QuER, resulting in a convex opti- mization problem that can be efficiently solved via weighted least-square regression, and requires only \ud835\udc51rangings for localization in \ud835\udc51-dimensional space. \u2022 We experimentally demonstrate that the proposed QuER- Loc outperforms conventional range-based localization ap- proaches significantly in accuracy and latency, and saturates the Cram\u00e9r-Rao Lower Bound (CRLB) consistently. The rest of the paper is organized as follows. We summarize related works in \u00a72, and present a primer on quantum metrology and classical ranging in \u00a73. We formulate probing particle dynamics in \u00a74, followed by ranging model in \u00a75 and localization algorithm in \u00a76. \u00a77 reports the evaluation results and \u00a78 concludes the paper.",
15
+ "main_content": "Range-based Localization Range-based localization has been a subject of intense study, which involves two key problems: ranging and localization. Ranging is usually done by reversing the propagation distances from various signals, e.g., GPS, WiFi, mmWave, ultrasound, etc, with different ranging models, e.g., AoA, TDoA, RSSI, etc. Many efforts have been made towards localization with certain structure of distance information, including the trilaterationbased algorithms, conic relaxation, and MDS-MAP. Solving problems induced by trilateration often requires the use of linearization or pseudo-linearization [21], and the performance deteriorates significantly due to inaccurate distance measurements and error accumulation, thus further refinement is required [3]. In [38, 41], noise-tolerant trilateration-based algorithms are proposed. Localization by conic relaxation converts non-convex constraints in the problem formulation into convex ones. In [35], So and Ye studied the theory of semidefinite programming (SDP) in sensor network localization, while in [40], Luo et al. applied SDP technique to TDoA localization. Tseng [36] proposed second-order cone programming (SOCP) method as an efficient variant of SDP. Although relaxation method achieves high accuracy in estimating sensor locations, its complexity is in general not satisfactory [1], and is thus only applicable to small-scale problems. Multidimensional scaling (MDS) is a special technique aimed at finding low-dimensional representations for high-dimensional data. MDS-MAP [32] constructs a relative map through distance matrix, and localizes, nodes by transforming the map into a absolute map with sufficient and accurate distance measurements. Quantum Metrology Quantum metrology [12\u201314] emerged as an increasingly important research area, where quantum entanglement and coherence are harnessed to boost the precision of sensing beyond the limit of classical sensors in various fundamental scenarios, including thermometry [26], reference frame alignment [6], and distance measurement [11]. Controlled evolution of quantum system is also widely studied, largely based on the control theory so as to create certain state evolution in realizing different sensing tasks [15, 33]. Besides theoretical works, a primitive quantum sensor network has been lately implemented [24], while recent experiment has demonstrated the feasibility of generating large Greenberger\u2013Horne\u2013Zeilinger (GHZ) state [44]. Experimental works demonstrate the feasibility to prepare widely-used probes in quantum metrology, including the ones utilized by QuERLoc. Quantum-assisted Localization There is no significant amount of work presented in the interdisciplinary field of quantum information and localization. A few existing works enhance fingerprintbased localization by accelerating computation in fingerprint database searching using quantum algorithms. Grover in [16] improves the asymptotic time of searching in an unstructured dataset from \ud835\udc42(\ud835\udc5b) to \ud835\udc42(\u221a\ud835\udc5b). Buhrman et al. [4] introduces the concept of quantum fingerprints and proves its exponential improvement in storage complexity compared to classical one. Subsequent works include the quantum fingerprint localization [34], two-stage transmitter localization method with quantum sensor network [43], and machine learning-based WiFi sensing localization augmented with quantum transfer learning [19]. To our awareness, there is no prior work on range-based localization with quantum ranging. 2 QuERLoc , 3 PRELIMINARIES 3.1 Range-based Localization Model A typical range-based localization model on a \ud835\udc51-dimensional space where \ud835\udc51\u2208{2, 3} consists of \ud835\udc5bnodes with accurate positions Anc = {\ud835\udc821, . . . , \ud835\udc82\ud835\udc5b} fixed on R\ud835\udc51under arbitrary topology, called the anchors. We consider an idealized picture of localization, where all facilities involved have full knowledge of the correspondence and localization of all available anchors. A sensor at position \ud835\udc99\u2208R\ud835\udc51 communicates with a subset of Anc through certain medium and acquires information on the functionals of sensor-anchor distances, denoted as S({\ud835\udc51\ud835\udc56: \ud835\udc56\u2208\ud835\udc3c\u2282{1, . . . ,\ud835\udc5b}}|\ud835\udf51) where \ud835\udc3cis an index set indicating the indices of utilized anchors, \ud835\udc51\ud835\udc56:= \u2225\ud835\udc99\u2212\ud835\udc82\ud835\udc56\u2225where \u2225\u00b7 \u2225is the Euclidean norm on R\ud835\udc51, and \ud835\udf51parameterizes the ranging process. We herein refer to this process as ranging. Moreover, we assume the anchors and sensor positions are bounded, i.e., there exist scalars 0 < \ud835\udf05\ud835\udc4e,\ud835\udf05\ud835\udc60< \u221e, such that for 1 \u2264\ud835\udc56\u2264\ud835\udc5b, \u2225\ud835\udc82\ud835\udc56\u2225\u221e\u2264\ud835\udf05\ud835\udc4e and \u2225\ud835\udc99\u2225\u221e\u2264\ud835\udf05\ud835\udc60where \u2225\u00b7 \u2225\u221estands for the vector infinite norm with \u2225\ud835\udc97\u2225\u221e:= max\ud835\udc56|\ud835\udc63\ud835\udc56|. Suppose a total of \ud835\udc5arangings are available, the objective of the sensor is to fully exploit available information {S(\ud835\udc58)}\ud835\udc5a \ud835\udc58=1 where the superscript \ud835\udc58specifies the signal acquired from the \ud835\udc58th ranging, and estimate its position \u02c6 \ud835\udc99\u2208R\ud835\udc51. 3.2 Comparing Classic Ranging and QuER Conventional range-based localization protocols have different distance measurement scenarios, each corresponds to a specific form of signal-distance mapping S(\u00b7). The majority of such mapping involves one or two anchors, including the angle of arrival (AoA), SAoA(\ud835\udc51\ud835\udc56|\ud835\udf03, \ud835\udf06) = 2\ud835\udf0bcos\ud835\udf03 \ud835\udf06\ud835\udc51\ud835\udc56where SAoA is the phase difference of adjacent antennas; the time of arrival (ToA), SToA(\ud835\udc51\ud835\udc56|\ud835\udc63) = 2\ud835\udc51\ud835\udc56 \ud835\udc63 where SToA is the time difference between signal emission and recapture; the time differences of arrivals (TDoA), STDoA(\ud835\udc51\ud835\udc56,\ud835\udc51\ud835\udc57|\ud835\udc63) = 1 \ud835\udc63 \f \f\ud835\udc51\ud835\udc56\u2212\ud835\udc51\ud835\udc57 \f \f where STDoA is the time differences of arrival at the paired and synchronized sensors; and the received signal strength indicator (RSSI), SRSSI(\ud835\udc51\ud835\udc56|\ud835\udc43\ud835\udc61,\ud835\udc3a\ud835\udc61,\ud835\udc3a\ud835\udc5f, \ud835\udf06) = \ud835\udc43\ud835\udc61\ud835\udc3a\ud835\udc61\ud835\udc3a\ud835\udc5f\ud835\udf062 16\ud835\udf0b2\ud835\udc512 \ud835\udc56 where SRSSI is the received signal power at the sensor [25]. A notable drawback of classic ranging is the requirement for target-anchor distances to be measured sequentially and individually, as in the cases of AoA, ToA, and RSSI, or in pairs for TDoA. Large numbers of ranging would be imperative if full utilization of anchors is required. This sequential measurement process not only introduces extra complexity and overhead of ranging into the localization task but also results in the system\u2019s vulnerability to noise and environmental fluctuations, as the overall effect of normal noise integrated in distance \ud835\udc51\ud835\udc56to the eventual solution would be unpredictable. Moreover, substituting the primitive form of S(\u00b7) into the localization problem arising from either MLE or error minimization [3] always introduces computationally expensive optimization problems [38]. In contrast, the Quantum-Enhanced Ranging (QuER) emerges as an innovative advancement, endowed with a unique capability in settling both issues. It enables the simultaneous ranging of a special combination of distances between a target sensor and an arbitrary number of anchors within a single physical measurement, which is nearly impossible to achieve in classical systems and allows a convexified localization problem. In the following section, we will present a primer on quantum metrology, which underpins QuER, while leaving the analysis of the exact form of the proposed quantum-enhanced ranging SQuERLoc to \u00a75.2. 3.3 Quantum Metrology The proposed QuER is based on quantum metrology. We first briefly introduce the principles of quantum metrology and present its generic readout scenarios. 3.3.1 Quantum Metrology for Parameter Measurement. Quantum metrology targets measuring physical parameters with high precision with the aid of quantum mechanics principles [13]. It typically includes (i) Preparing a probe, described by a quantum state |\ud835\udf13\u27e9in the environment with underlying Hilbert space H, which under an orthornormal basis {|\ud835\udc5b\u27e9} of H can be expressed as |\ud835\udf13\u27e9= \u00cd \ud835\udc5b\ud835\udc4e\ud835\udc5b|\ud835\udc5b\u27e9 and physically exhibits state |\ud835\udc5b\u27e9with probability |\ud835\udc4e\ud835\udc5b|2, subject to \u00cd \ud835\udc5b|\ud835\udc4e\ud835\udc5b|2 = 1 [28]; (ii) Letting it interact with external system, which can be represented by a unitary transformation U\ud835\udf19encoded with targeted parameter set \ud835\udf19; and (iii) Extracting information on \ud835\udf19by quantum measurement, specified by a set of measurement operators {\u03a0\ud835\udc56}\ud835\udc56\u2208N. The probe state and measurement operators may assume to be either separable, or entangled [13], corresponding to whether it is feasible to find the decomposition |\ud835\udf13\u27e9= |\ud835\udf131\u27e9\u2297\u00b7 \u00b7 \u00b7 \u2297|\ud835\udf13\ud835\udc5b\u27e9, where |\ud835\udf13\ud835\udc56\u27e9\u2208H\ud835\udc56is the state in subspace H\ud835\udc56\u2282H, and \u2297represents the tensor product operation. QuERLoc employs this procedure as a subroutine to decode multiple sensor-anchor distances information from the entangled state with one-shot ranging. 3.3.2 Generic Readout Scenario of Quantum Metrology. A generic framework of an atomic probing system is encompassed by the following: Practically, the probe is prepared as a uniform superposition |\ud835\udf13init\u27e9= 1 \u221a 2 (|\ud835\udc4e\u27e9+ |\ud835\udc4f\u27e9), where |\ud835\udc4e\u27e9, |\ud835\udc4f\u27e9are arbitrary orthonormal states in the space H [13, 29]. Applying the parameterized unitary operation U\ud835\udf19on |\ud835\udf13init\u27e9yields the phase state [14]: |\ud835\udf13\ud835\udf19\u27e9= 1 \u221a 2 \u0010 U\ud835\udf19|\ud835\udc4e\u27e9+ U\ud835\udf19|\ud835\udc4f\u27e9 \u0011 \u221d1 \u221a 2 \u0010 |\ud835\udc4e\u27e9+ \ud835\udc52\u2212\ud835\udc56\ud835\udf19|\ud835\udc4f\u27e9 \u0011 . (1) Let \u03a0 := |\ud835\udf13init\u27e9\u27e8\ud835\udf13init| denote the projection operator on subspace spanned by the probe, we apply the positive operator-valued measurement (POVM) [28] on |\ud835\udf13\ud835\udf19\u27e9, specified by a couple {\u03a0, 1H \u2212\u03a0} where 1H is the identity map on H. The readout process involves verifying whether |\ud835\udf13\ud835\udf19\u27e9resides in the subspace of |\ud835\udf13init\u27e9. The outcome would simply be either \u2018yes\u2019 (encoded as 0) or \u2018no\u2019 (encoded as 1), associated with probabilities \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f3 Pr(outcome = 0) = Tr \u0010 \u03a0 |\ud835\udf13\ud835\udf19\u27e9\u27e8\ud835\udf13\ud835\udf19| \u0011 = cos2 \ud835\udf19 2 , Pr(outcome = 1) = 1 \u2212Pr(outcome = 0) = sin2 \ud835\udf19 2 . (2) Repeating the procedures allows us to analyze the value of \ud835\udf19with statistical tools such as maximum likelihood estimator and Bayesian inference [29]. In particular, when |\ud835\udc4e\u27e9, |\ud835\udc4f\u27e9are in \ud835\udc41-tensor form, i.e., |\ud835\udc4e\u27e9= |\ud835\udc4e1\u27e9\u2297 \u00b7 \u00b7 \u00b7\u2297|\ud835\udc4e\ud835\udc41\u27e9and |\ud835\udc4f\u27e9= |\ud835\udc4f1\u27e9\u2297\u00b7 \u00b7 \u00b7\u2297|\ud835\udc4f\ud835\udc41\u27e9, while the operator U\ud835\udf19admits decomposition U\ud835\udf19= U\ud835\udf191 \u2297\u00b7 \u00b7 \u00b7 \u2297U\ud835\udf19\ud835\udc41, as the subscripts index the subsystem H\ud835\udc56the quantum states and operators live on. By 3 , QuERLoc Preparation Evolution Measurement with Optimizer Figure 2: Dynamics of qubit with coupled energy levels nature of tensor product, relative phase can thus be alternatively expressed as the sum of relative phases in subsystems, specified by: U\ud835\udf19|\ud835\udf13init\u27e9= 1 \u221a 2 \u0010 U\ud835\udf191 |\ud835\udc4e1\u27e9\u2297\u00b7 \u00b7 \u00b7 \u2297U\ud835\udf19\ud835\udc41|\ud835\udc4e\ud835\udc41\u27e9 + U\ud835\udf191 |\ud835\udc4f1\u27e9\u2297\u00b7 \u00b7 \u00b7 \u2297U\ud835\udf19\ud835\udc41|\ud835\udc4f\ud835\udc41\u27e9 \u0011 \u221d1 \u221a 2 \u0010 |\ud835\udc4e1\u27e9\u2297\u00b7 \u00b7 \u00b7 \u2297|\ud835\udc4e\ud835\udc41\u27e9+ \ud835\udc52\u2212\ud835\udc56\ud835\udf191 |\ud835\udc4f1\u27e9\u2297\u00b7 \u00b7 \u00b7 \u2297\ud835\udc52\u2212\ud835\udc56\ud835\udf19\ud835\udc41|\ud835\udc4f\ud835\udc41\u27e9 \u0011 = 1 \u221a 2 \u0010 |\ud835\udc4e\u27e9+ \ud835\udc52\u2212\ud835\udc56\u00cd\ud835\udc41 \ud835\udc57=1 \ud835\udf19\ud835\udc57|\ud835\udc4f\u27e9 \u0011 . (3) Quantum metrology utilizes the above phase accumulation phenomenon to improve the asymptotic error by an \ud835\udc41\u22121/2 factor when detecting physical quantities, compared to classical metrology system [14]. QuER operates on an alternative advantage of this unique entanglement property. By deliberately correlate each relative phase \ud835\udf19\ud835\udc57with the particle\u2019s travel distance, or equivalently, its time-offlight (ToF), it enables multiple distances information to be encoded into the joint relative phase \ud835\udf19:= \u00cd\ud835\udc41 \ud835\udc57=1 \ud835\udf19\ud835\udc57. The following sections \u00a74 and \u00a75 will elaborate on the specific time-dependent evolution of a unique quantum state under certain external controls that QuER would use. 4 CONTROLLED DYNAMICS OF A QUBIT The controlled electrodynamics of quantum particles under the theory of quantum mechanics is crucial to the realization of QuER and our proposed QuERLoc. For an isolated physical system with Hamiltonian \ud835\udc3b, the dynamic of any time-dependent quantum state |\ud835\udf13(\ud835\udc61)\u27e9in the Hilbert space is governed by the following Schr\u00f6dinger equation [2], \ud835\udc56\u210f\ud835\udf15 \ud835\udf15\ud835\udc61|\ud835\udf13(\ud835\udc61)\u27e9= \ud835\udc3b|\ud835\udf13(\ud835\udc61)\u27e9, (4) where \u210fis the reduced Planck constant, \ud835\udc56:= \u221a \u22121, and \ud835\udf15/\ud835\udf15\ud835\udc61is the partial derivative operator with respect to time. Specifically, we consider a two-level approximation [2] of an arbitrary particle, where only two energy levels are considered among multiple possible energy states. The two-level system includes a state of the lowest energy level, called the ground state with notion |\ud835\udc54\u27e9and energy \ud835\udc38\ud835\udc54, and a state with energy increased through energy absorption with external circumstance, called the excited state with notion |\ud835\udc52\u27e9and energy \ud835\udc38\ud835\udc52. The energy difference can be expressed as \u0394\ud835\udc38= \ud835\udc38\ud835\udc52\u2212\ud835\udc38\ud835\udc54= \u210f\ud835\udf140, where \ud835\udf140 is the particle frequency according to the theory of Louis de Broglie [2, 28]. The external electromagnetic field can be viewed as a mechanism coupling the two energy levels, resulting in some implicit transitions between them. Precisely, we denote \u2020 to be the conjugate transpose of operators and states, \u27e8\ud835\udf13| := |\ud835\udf13\u27e9\u2020, \ud835\udc8d:= |\ud835\udc52\u27e9\u27e8\ud835\udc54| the atomic laddering operator [2] transiting the ground state to the excited one, and \ud835\udc8d\u2020 = |\ud835\udc54\u27e9\u27e8\ud835\udc52| the atomic descending operator acting the opposite, as shown in Fig. 2. Then the exact manner of such a coupling mechanism can be expressed as \ud835\udc49\ud835\udc8d+ (\ud835\udc49\ud835\udc8d)\u2020 [28], where \ud835\udc49\u2208C is a complex scalar function characterizing the coupling behaviour. For conciseness, we choose \ud835\udc380 = \u210f\ud835\udf140/2 to be the energy zero level, thereby the Hamiltonian of the coupled system [2] can be formulated by: \ud835\udc3b= \u210f\ud835\udf140 2 \u0010 \ud835\udc8d\ud835\udc8d\u2020 \u2212\ud835\udc8d\u2020\ud835\udc8d \u0011 + \ud835\udc49\ud835\udc8d+ \ud835\udc49\u2217\ud835\udc8d\u2020, (5) where \ud835\udc49\u2217represents the conjugate of complex number \ud835\udc49. The two-level approximation enables us to encode the states {|\ud835\udc52\u27e9, |\ud835\udc54\u27e9} into a single qubit by defining |0\u27e9:= |\ud835\udc52\u27e9and |1\u27e9:= |\ud835\udc54\u27e9, and {|0\u27e9, |1\u27e9} would form an orthornormal basis of underlying Hilbert space. To investigate how the qubit would evolve when the external mechanism is manually controlled, we hereby consider the time-dependent coupling\ud835\udc49(\ud835\udc61) = \ud835\udf16(\ud835\udc61)\ud835\udc52\ud835\udc56\ud835\udf03(\ud835\udc61), where \ud835\udf16,\ud835\udf03are coupling magnitude and field spinning rate respectively, both are real functionals on the time horizon T = [0, \u221e). Analytical intractability of solving the Schr\u00f6rdinger equation with time-dependent Hamiltonian can be settled by a separation of operator: Consider the decomposition of \ud835\udc3b[27] as \ud835\udc3b= \ud835\udc3b0+\ud835\udc37(\ud835\udc61), where \ud835\udc3b0 is a time-independent full-rank operator (i.e., rank\ud835\udc3b0 = dim H), and \ud835\udc37(\ud835\udc61) incorporates time-dependent terms. Suppose \ud835\udc3b0 admits spectrum {\ud835\udc38\ud835\udc57} with eigenstates {|\ud835\udc57\u27e9}, then by assuming |\ud835\udf13(\ud835\udc61)\u27e9= \u00cd \ud835\udc57\ud835\udc50\ud835\udc57(\ud835\udc61)\ud835\udc52\u2212\ud835\udc56\ud835\udc38\ud835\udc57/\u210f\ud835\udc61|\ud835\udc57\u27e9 with \ud835\udc50\ud835\udc57(\ud835\udc61) being undetermined time-dependent coefficients, constraints on \ud835\udc50\ud835\udc57(\ud835\udc61) can be derived in light of (4): \ud835\udc56\u210f\ud835\udf15 \ud835\udf15\ud835\udc61\ud835\udc50\ud835\udc57(\ud835\udc61) = \u2211\ufe01 \ud835\udc58 \ud835\udc50\ud835\udc58(\ud835\udc61) exp \u001a \u2212\ud835\udc56 \u210f(\ud835\udc38\ud835\udc58\u2212\ud835\udc38\ud835\udc57)\ud835\udc61 \u001b \u27e8\ud835\udc57| \ud835\udc37(\ud835\udc61) |\ud835\udc58\u27e9. (6) Applying (6) to our proposed case, set \ud835\udc3b0 = \u210f\ud835\udf140 2 \u0010 \ud835\udc8d\ud835\udc8d\u2020 \u2212\ud835\udc8d\u2020\ud835\udc8d \u0011 , and \ud835\udc37(\ud835\udc61) = \ud835\udc49(\ud835\udc61) \u00b7\ud835\udc8d+\ud835\udc49\u2217(\ud835\udc61) \u00b7\ud835\udc8d\u2020, the coefficients \ud835\udc500(\ud835\udc61),\ud835\udc501(\ud835\udc61) should satisfy \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f3 \ud835\udf15 \ud835\udf15\ud835\udc61\ud835\udc500(\ud835\udc61) = \u2212\ud835\udc56 \u210f\ud835\udf16(\ud835\udc61)\ud835\udc52\ud835\udc56(\ud835\udf03(\ud835\udc61)+\ud835\udf140\ud835\udc61)\ud835\udc501(\ud835\udc61), \ud835\udf15 \ud835\udf15\ud835\udc61\ud835\udc501(\ud835\udc61) = \u2212\ud835\udc56 \u210f\ud835\udf16(\ud835\udc61)\ud835\udc52\u2212\ud835\udc56(\ud835\udf03(\ud835\udc61)+\ud835\udf140\ud835\udc61)\ud835\udc500(\ud835\udc61). (7) Formulating the above coupled differential equations would yield the following equation: \ud835\udf152 \ud835\udf15\ud835\udc612\ud835\udc50\ud835\udc57(\ud835\udc61) \u2212 \u001a 1 \ud835\udf16(\ud835\udc61) \ud835\udf15 \ud835\udf15\ud835\udc61\ud835\udf16(\ud835\udc61) + \ud835\udc56\u00b7 (\u22121)\ud835\udc57 \u0012 \ud835\udf15 \ud835\udf15\ud835\udc61\ud835\udf03(\ud835\udc61) + \ud835\udf140 \u0013\u001b \ud835\udf15 \ud835\udf15\ud835\udc61\ud835\udc50\ud835\udc57(\ud835\udc61) + \ud835\udf162(\ud835\udc61) \u210f2 \ud835\udc50\ud835\udc57(\ud835\udc61) = 0, \ud835\udc57\u2208{0, 1}. (8) A tentative solution would be \ud835\udc50\ud835\udc57(\ud835\udc61) \u221d\ud835\udc52\ud835\udc56\ud835\udf02\ud835\udc57(\ud835\udc61) where \ud835\udf02\ud835\udc57is a real funtional on T. Substituting it into (8) yields following equation: \u0012 \ud835\udf15 \ud835\udf15\ud835\udc61\ud835\udf02\ud835\udc57(\ud835\udc61) \u00132 \u2212(\u22121)\ud835\udc57 \u0012 \ud835\udf15 \ud835\udf15\ud835\udc61\ud835\udf03(\ud835\udc61) + \ud835\udf140 \u0013 \ud835\udf15 \ud835\udf15\ud835\udc61\ud835\udf02\ud835\udc57(\ud835\udc61) \u2212\ud835\udf162(\ud835\udc61) \u210f2 = \ud835\udc56 \ud835\udf152 \ud835\udf15\ud835\udc612\ud835\udf02\ud835\udc57(\ud835\udc61) \u2212 \ud835\udf15 \ud835\udf15\ud835\udc61\ud835\udf16(\ud835\udc61) \ud835\udf16(\ud835\udc61) \ud835\udf15 \ud835\udf15\ud835\udc61\ud835\udf02\ud835\udc57(\ud835\udc61) ! , \ud835\udc57\u2208{0, 1}. (9) Since \ud835\udf02\ud835\udc57,\ud835\udf16,\ud835\udf19are all real functionals, the coincidence of two sides in above equation demonstrates the following constraints on the 4 QuERLoc , form of quantum state: |\ud835\udf13(\ud835\udc61)\u27e9= \u00cd \ud835\udc57\u2208{0,1} \u00cd \ud835\udf02\ud835\udc57\ud835\udc44\ud835\udf02\ud835\udc57\ud835\udc52\ud835\udc56(\ud835\udf02\ud835\udc57(\ud835\udc61)\u2212(\u22121) \ud835\udc57\ud835\udf140\ud835\udc61/2) |\ud835\udc57\u27e9, s.t. \u0010 \ud835\udf15 \ud835\udf15\ud835\udc61\ud835\udf02\ud835\udc57(\ud835\udc61) \u00112 \u2212(\u22121)\ud835\udc57\u0010 \ud835\udf15 \ud835\udf15\ud835\udc61\ud835\udf19(\ud835\udc61) + \ud835\udf140 \u0011 \ud835\udf15 \ud835\udf15\ud835\udc61\ud835\udf02\ud835\udc57(\ud835\udc61) \u2212\ud835\udf162(\ud835\udc61) \u210f2 = 0, \u0010 \ud835\udf15 \ud835\udf15\ud835\udc61\ud835\udf02\ud835\udc57(\ud835\udc61) \u0011\u22121 \ud835\udf152 \ud835\udf15\ud835\udc612\ud835\udf02\ud835\udc57(\ud835\udc61) = \ud835\udf16\u22121(\ud835\udc61) \ud835\udf15 \ud835\udf15\ud835\udc61\ud835\udf16(\ud835\udc61), (10) where \ud835\udc44\ud835\udf02\ud835\udc57\u2208C are complex coefficients, and the inner summations are taken on all possible functionals \ud835\udf02\ud835\udc57. The exact behaviour of state entries can be solved with full knowledge of its initial state |\ud835\udf13(0)\u27e9and coupling factors (\ud835\udf16(\ud835\udc61),\ud835\udf19(\ud835\udc61)). 5 QUANTUM-ENHANCED RANGING In this section, we discuss in detail how our QuERLoc takes advantage of a special case of the above constrained probe qubit evolution. Within the region of localization, we deploy a meticulously controlled field with \ud835\udf16(\ud835\udc61) = \ud835\udf08(2\ud835\udefe\ud835\udc61+ \ud835\udf140) and \ud835\udf03(\ud835\udc61) = \ud835\udefe\ud835\udc612, where \ud835\udf08,\ud835\udefe> 0 are positive parameters. We further assume that \ud835\udf08\u226b\u210f. 5.1 Behaviour of Qubit with Uniform Superposition We begin with illustrating the evolutionary behaviour of a qubit |\ud835\udf13(0)\u27e9with uniform quantum superposition state, i.e., it admits equal probability on both of its energy states. It can be prepared by implementing the Hadamard transformation [28] \u210bon |0\u27e9: |\ud835\udf13(0)\u27e9= \u210b|0\u27e9= 1 \u221a 2 (|0\u27e9+ |1\u27e9) . (11) From the expression of \ud835\udf16,\ud835\udf03, we could arrive at an expression of the time variation of single-qubit state |\ud835\udf13(\ud835\udc61)\u27e9: |\ud835\udf13(\ud835\udc61)\u27e9= \ud835\udc500(\ud835\udc61)\ud835\udc52\u2212\ud835\udf140\ud835\udc61/2 |0\u27e9+ \ud835\udc501(\ud835\udc61)\ud835\udc52\ud835\udf140\ud835\udc61/2 |1\u27e9, (12) subject to the initial state consistency and probability completeness \ud835\udc500(0) = \ud835\udc501(0) = 1 \u221a 2 , |\ud835\udc500(\ud835\udc61)|2 + |\ud835\udc501(\ud835\udc61)|2 = 1, \ud835\udc57= 0, 1, \ud835\udc50\ud835\udc57(\ud835\udc61) = \ud835\udc34\ud835\udc57exp ( \ud835\udc56\u00b7 (\u22121)\ud835\udc57+ \u221a\ufe01 1 + 4\ud835\udf082/\u210f2 2 (\ud835\udefe\ud835\udc612 + \ud835\udf140\ud835\udc61) ) + \ud835\udc35\ud835\udc57exp ( \ud835\udc56\u00b7 (\u22121)\ud835\udc57\u2212 \u221a\ufe01 1 + 4\ud835\udf082/\u210f2 2 (\ud835\udefe\ud835\udc612 + \ud835\udf140\ud835\udc61) ) . (13) Denote \u211c(\ud835\udc36) and \u2111(\ud835\udc36) as the real and imaginary part of a complex number \ud835\udc36\u2208C. Solving the undetermined coefficients \ud835\udc34\ud835\udc57, \ud835\udc35\ud835\udc57\u2208C for \ud835\udc57= 0, 1 in (13) subject to (7), we discover that \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 \u211c(\ud835\udc340) = 1 \u221a 2 1 \u2212\ud835\udf0f 1 + \ud835\udf0f2 , \u211c(\ud835\udc350) = 1 \u221a 2 \ud835\udf0f2 + \ud835\udf0f 1 + \ud835\udf0f2 , \u211c(\ud835\udc341) = 1 \u221a 2 \ud835\udf0f2 \u2212\ud835\udf0f 1 + \ud835\udf0f2 , \u211c(\ud835\udc351) = 1 \u221a 2 \ud835\udf0f+ 1 1 + \ud835\udf0f2 , \u2111(\ud835\udc340) = \u2111(\ud835\udc341) = \u2111(\ud835\udc350) = \u2111(\ud835\udc351) = 0, \ud835\udf0f= 2\ud835\udf08/\u210f 1 + \u221a\ufe01 1 + 4\ud835\udf082/\u210f2 . (14) By denoting \u0394(\ud835\udc61) := \u2212 \u221a\ufe01 1 + 4\ud835\udf082/\u210f2 \u0000\ud835\udefe\ud835\udc612 + \ud835\udf140\ud835\udc61\u0001, the previous assumption \ud835\udf08\u226b\u210findicates that 2\ud835\udf08/\u210f\u226b1, and consequently \ud835\udf0f\u21921. Note that whenever |\u211c(\ud835\udc52\ud835\udc56\u0394(\ud835\udc61))| = |cos(\u0394(\ud835\udc61))| \u226b 1\u2212\ud835\udf0f \ud835\udf0f2+\ud835\udf0f, 0 1 2 3 4 5 0 1 2 \u2206\u03b8real(t) \u00d710 4 0 1 2 3 4 5 0 1 2 \u03b3t2 \u00d710 4 0 1 2 3 4 5 Time Variation t/1e \u22124 5 0 5 Discrepancy 1e 10 Figure 3: Comparison of real relative phase \u0394\ud835\udf03real(\ud835\udc61) and \ud835\udefe\ud835\udc612. Minor outliers (50 out of 5 \u00d7 106 data points) are filtered out. Here we set \ud835\udefe= 103 rad/sec2, \ud835\udf140 = 10\u22122 rad/sec, and \ud835\udf08/\u210f= 1010. 1\u2212\ud835\udf0f \ud835\udf0f2+\ud835\udf0f= 2.5 \u00d7 10\u221211. Absolute discrepancy is bounded by 5 \u00d7 10\u221210 while relative phase is on the order of 10\u22124. Parameters can be adjusted subject to prior estimation on the order of probe ToF. we have two approximate relations \ud835\udc340 + \ud835\udc350\ud835\udc52\ud835\udc56\u0394(\ud835\udc61) \u2248 1 \u221a 2\ud835\udc52\ud835\udc56\u0394(\ud835\udc61) and \ud835\udc341 + \ud835\udc351\ud835\udc52\ud835\udc56\u0394(\ud835\udc61) \u2248 1 \u221a 2\ud835\udc52\ud835\udc56\u0394(\ud835\udc61). When \u0394(\ud835\udc61) = \u00b1 \ud835\udf0b 2 + \ud835\udf00for some scalar |\ud835\udf00| \u226a1 within a period 2\ud835\udf0bof the cos(\u00b7) function, | cos(\u0394(\ud835\udc61))| = | sin(\u00b1 \ud835\udf0b 2 \u2212\u0394(\ud835\udc61))| = | sin(\ud835\udf00)| \u2248|\ud835\udf00|. Thus, when \ud835\udf00satisfies 1\u2212\ud835\udf0f \ud835\udf0f2+\ud835\udf0f\u226a |\ud835\udf00| \u226a1, e.g., \ud835\udf00= \u221a\ufe03 1\u2212\ud835\udf0f \ud835\udf0f2+\ud835\udf0f, the following approximation of probe state dynamic can be applied except for intervals \u0002 \u00b1 \ud835\udf0b 2 \u2212|\ud835\udf00|, \u00b1 \ud835\udf0b 2 + |\ud835\udf00| \u0003 within a single period: |\ud835\udf13(\ud835\udc61)\u27e9= 1 \u221a 2 \ud835\udc52\u2212\ud835\udc56\ud835\udf140 2 \ud835\udc61\ud835\udc52\ud835\udc56\u00b7 1+\u221a 1+4\ud835\udf082/\u210f2 2 (\ud835\udefe\ud835\udc612+\ud835\udf140\ud835\udc61) \u0012 1 \u2212\ud835\udf0f 1 + \ud835\udf0f2 + \ud835\udf0f2 + \ud835\udf0f 1 + \ud835\udf0f2 \ud835\udc52\ud835\udc56\u0394(\ud835\udc61) \u0013 |0\u27e9 + 1 \u221a 2 \ud835\udc52 \ud835\udc56\ud835\udf140 2 \ud835\udc61\ud835\udc52\ud835\udc56\u00b7 \u22121+\u221a 1+4\ud835\udf082/\u210f2 2 (\ud835\udefe\ud835\udc612+\ud835\udf140\ud835\udc61) \u0012\ud835\udf0f2 \u2212\ud835\udf0f 1 + \ud835\udf0f2 + \ud835\udf0f+ 1 1 + \ud835\udf0f2 \ud835\udc52\ud835\udc56\u0394(\ud835\udc61) \u0013 |1\u27e9 \ud835\udf0f\u21921, | cos(\u0394(\ud835\udc61)) |\u226b1\u2212\ud835\udf0f \ud835\udf0f2+\ud835\udf0f \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2192\ud835\udc52 \ud835\udc56 2\ud835\udefe\ud835\udc612\ud835\udc52 \ud835\udc56 2 \u0394(\ud835\udc61) | {z } global phase |0\u27e9+ \ud835\udc52\u2212\ud835\udc56\ud835\udefe\ud835\udc612 |1\u27e9 \u221a 2 ! \u221d|0\u27e9+ \ud835\udc52\u2212\ud835\udc56\ud835\udefe\ud835\udc612 |1\u27e9 \u221a 2 . (15) Such approximation is feasible with a high probability of 1\u22122|\ud835\udf00| \ud835\udf0b, and its validity over a certain time period is demonstrated numerically in Fig. 3. While the global phase factor could not be statistically observed [28], the two energy states yield a time-dependent relative phase shift with angular speed proportional to the square of time \ud835\udc61, which enables the statistical detection of qubit ToF in quadratic form [14], as outlined in \u00a73.3.2. 5.2 Ranging Model of QuERLoc Previous analysis in \u00a75.1 could be naturally extended to the picture of multi-qubit evolution. This is of great essence to the realization of QuERLoc. Prior to that, we first outline the settings of our proposed quantum-enhanced ranging. Analogous to classical range-based localization, a QuERLoc scheme conducts a total of \ud835\udc5arangings (QuERs), whereas for the \ud835\udc58th ranging where 1 \u2264\ud835\udc58\u2264\ud835\udc5a, the anchors involved would be flexibly identified by an index set \ud835\udc3c\ud835\udc58\u2282{1, . . . ,\ud835\udc5b}, 5 , QuERLoc |0\u27e9 \u210b \u2022 |0\u27e9 \u2022 \ud835\udf0e\ud835\udc65 |0\u27e9 \u2022 |0\u27e9 \ud835\udf0e\ud835\udc65 Figure 4: Quantum circuit of \u2130({1, \u22121, 1, \u22121}) applied to |0\u27e9\u22974. The section outlined by dash line prepares a 4-qubit GHZ state [28], while the following \ud835\udf0e\ud835\udc65gates conduct bit-flipping. by recalling that each index \ud835\udc56\u2208{1, . . . ,\ud835\udc5b} stands for the \ud835\udc56th anchor available. Each involved anchor \ud835\udc82\ud835\udc56subject to \ud835\udc56\u2208\ud835\udc3c\ud835\udc58, is assigned a binary-valued parameter \ud835\udc64\ud835\udc56,\ud835\udc58\u2208{\u22121, 1}. {\ud835\udc64\ud835\udc56,\ud835\udc58}\ud835\udc56\u2208\ud835\udc3c\ud835\udc58specifies the probe scheme of the \ud835\udc58th QuER. Accordingly, denote | \u00b7 | as the set cardinality and 1(\u00b7) as the indicator function, each ranging would require the following maximally entangled |\ud835\udc3c\ud835\udc58|-qubit probe: |\ud835\udf13(0)\u27e9\ud835\udc58= 1 \u221a 2 \u2211\ufe01 \ud835\udc57=0,1 \u00cc \ud835\udc56\u2208\ud835\udc3c\ud835\udc58 |\ud835\udf0b(\ud835\udc57) \ud835\udc56,\ud835\udc58\u27e9, \ud835\udf0b(\ud835\udc57) \ud835\udc56,\ud835\udc58= 1{\ud835\udc64\ud835\udc56,\ud835\udc58= \u22121} + \ud835\udc57mod 2, (16) which can be prepared by an entangling operator \u2130\ud835\udc58 \u0000{\ud835\udc64\ud835\udc56,\ud835\udc58}\ud835\udc56\u2208\ud835\udc3c\ud835\udc58 \u0001 on the input ground state, composed by a sequence of Hadamard gates, controlled-not (CNOT) gates and NOT (Pauli-X) gates [28]. An illustrative example on preparing a four-qubit probe 1 \u221a 2 (|0101\u27e9+ |1010\u27e9) that corresponds to QuER scheme {1, \u22121, 1, \u22121} is shown in Fig. 4. To convexify the localization problem, QuERLoc further restricts that an even number of anchors are utilized for each ranging process of QuER (i.e., |\ud835\udc3c\ud835\udc58| \u22082Z), and moreover \u2200\ud835\udc58, \u2211\ufe01 \ud835\udc56\u2208\ud835\udc3c\ud835\udc58 1{\ud835\udc64\ud835\udc56,\ud835\udc58= 1} = \u2211\ufe01 \ud835\udc56\u2208\ud835\udc3c\ud835\udc58 1{\ud835\udc64\ud835\udc56,\ud835\udc58= \u22121}. (17) The sensor triggers each ranging procedure by simultaneously emitting the entangled probe qubits, which subsequently evolve continuously in the controlled external field until getting received by the sensor after being reflected by the specific anchor. Denote \ud835\udc95\ud835\udc58= {\ud835\udc61\ud835\udc56}\ud835\udc56\u2208\ud835\udc3c\ud835\udc58the time instants at which each qubit is retrieved by the sensor, the probe would end up with the following form: |\ud835\udf13(\ud835\udc95\ud835\udc58)\u27e9\ud835\udc58= 1 \u221a 2 \ud835\udc52 \ud835\udc56 2 \u00cd \ud835\udc56\u2208\ud835\udc3c\ud835\udc58\u0394(\ud835\udc61\ud835\udc56)\ud835\udc52\ud835\udc56\ud835\udf09\ud835\udc58\u00a9 \u00ad \u00ab \u00cc \ud835\udc56\u2208\ud835\udc3c\ud835\udc58 |\ud835\udf0b(0) \ud835\udc56,\ud835\udc58\u27e9+ \ud835\udc52\u2212\ud835\udc56\ud835\udf12\ud835\udc58\u00cc \ud835\udc56\u2208\ud835\udc3c\ud835\udc58 |\ud835\udf0b(1) \ud835\udc56,\ud835\udc58\u27e9\u00aa \u00ae \u00ac \u221d1 \u221a 2 \u00a9 \u00ad \u00ab \u00cc \ud835\udc56\u2208\ud835\udc3c\ud835\udc58 |\ud835\udf0b(0) \ud835\udc56,\ud835\udc58\u27e9+ \ud835\udc52\u2212\ud835\udc56\ud835\udf12\ud835\udc58\u00cc \ud835\udc56\u2208\ud835\udc3c\ud835\udc58 |\ud835\udf0b(1) \ud835\udc56,\ud835\udc58\u27e9\u00aa \u00ae \u00ac , (18) where \ud835\udf12\ud835\udc58= \ud835\udefe \u2211\ufe01 \ud835\udc56\u2208\ud835\udc3c\ud835\udc58 \ud835\udc64\ud835\udc56,\ud835\udc58\ud835\udc612 \ud835\udc56, \ud835\udf09\ud835\udc58= 1 2 \ud835\udf12\ud835\udc58. (19) Special quantum properties such as the Zeno effect [17] enable us to inhibit the successive spontaneous evolution once the qubit is returned. Thus, no time synchronization is required among different probing qubits, which is of great concern in traditional ToA and TDoA ranging models [25]. With the relative phase \ud835\udf12acquired, by assuming photons are employed as the probes [13], whose propagation speed is the speed of light \ud835\udc50, we could use the instantaneous relation between distance and ToF \ud835\udc61\ud835\udc56= 2\ud835\udc51\ud835\udc56/\ud835\udc50to derive the signaldistance mapping S(\ud835\udc58) QuERLoc for all \ud835\udc58\u2208{1, . . . ,\ud835\udc5a}, S(\ud835\udc58) QuERLoc \u0000{\ud835\udc51\ud835\udc56}\ud835\udc56\u2208\ud835\udc3c\ud835\udc58|{\ud835\udc64\ud835\udc56,\ud835\udc58}\ud835\udc56\u2208\ud835\udc3c\ud835\udc58,\ud835\udefe,\ud835\udc50\u0001 := \ud835\udf12\ud835\udc58= 4\ud835\udefe \ud835\udc502 \u2211\ufe01 \ud835\udc56\u2208\ud835\udc3c\ud835\udc58 \ud835\udc64\ud835\udc56,\ud835\udc58\ud835\udc512 \ud835\udc56. (20) Above non-linear ToF effect is an instance of quantum control that realizes nonlinear quantum dynamics with external field manipulation [7, 23], which is of increasing interest in the field of quantum information processing. In the next section, we will reformulate the localization task as a simple optimization problem based on above structure of phase information. 6 LOCALIZATION VIA QUANTUM RANGING 6.1 Reformulating the Phase-Distance Relations Upon obtaining the accumulated phase \ud835\udf12\ud835\udc58, we can further expand the terms in previous equality (20) as follows: 4\ud835\udefe \ud835\udc502 \u2211\ufe01 \ud835\udc56\u2208\ud835\udc3c\ud835\udc58 \ud835\udc64\ud835\udc56,\ud835\udc58\ud835\udc512 \ud835\udc56= 4\ud835\udefe \ud835\udc502 \u2211\ufe01 \ud835\udc56\u2208\ud835\udc3c\ud835\udc58 \ud835\udc64\ud835\udc56,\ud835\udc58(\ud835\udc99\u2212\ud835\udc82\ud835\udc56)\ud835\udc47(\ud835\udc99\u2212\ud835\udc82\ud835\udc56) = 4\ud835\udefe \ud835\udc502 \u2211\ufe01 \ud835\udc56\u2208\ud835\udc3c\ud835\udc58 \ud835\udc64\ud835\udc56,\ud835\udc58 | {z } = 0 \ud835\udc99\ud835\udc47\ud835\udc99\u2212 \u0012 8\ud835\udefe \ud835\udc502 \u2211\ufe01 \ud835\udc56\u2208\ud835\udc3c\ud835\udc58 \ud835\udc64\ud835\udc56,\ud835\udc58\ud835\udc82\ud835\udc56 \u0013\ud835\udc47 \ud835\udc99 + 4\ud835\udefe \ud835\udc502 \u2211\ufe01 \ud835\udc56\u2208\ud835\udc3c\ud835\udc58 \ud835\udc64\ud835\udc56,\ud835\udc58\ud835\udc82\ud835\udc47 \ud835\udc56\ud835\udc82\ud835\udc56= \ud835\udf12\ud835\udc58, \u22001 \u2264\ud835\udc58\u2264\ud835\udc5a. (21) The coefficient of \ud835\udc99\ud835\udc47\ud835\udc99is eliminated due to the requirement in (17). For mathematical brevity, we apply the following variable substitution after simplifying the equation in (21): \ud835\udc73= \u0000\ud835\udc961, . . . , \ud835\udc96\ud835\udc5a \u0001\ud835\udc47\u2208R\ud835\udc5a\u00d7\ud835\udc51, \ud835\udc89= \u0000\u210e1, . . . ,\u210e\ud835\udc5a \u0001\ud835\udc47\u2208R\ud835\udc5a, \ud835\udc96\ud835\udc58= 2 \u2211\ufe01 \ud835\udc56\u2208\ud835\udc3c\ud835\udc58 \ud835\udc64\ud835\udc56,\ud835\udc58\ud835\udc82\ud835\udc56, \u210e\ud835\udc58= \u2211\ufe01 \ud835\udc56\u2208\ud835\udc3c\ud835\udc58 \ud835\udc64\ud835\udc56,\ud835\udc58\ud835\udc82\ud835\udc47 \ud835\udc56\ud835\udc82\ud835\udc56\u2212(4\ud835\udefe)\u22121 \ud835\udc502\ud835\udf12\ud835\udc58. (22) Finally, by aggregating results from all \ud835\udc5adistance ranging, the simplification moves us from dealing with a complicated quadratic problem to working with the following elegant and straightforward system of linear equations, which QuERLoc solves to realize sensor positioning: Find \ud835\udc99\u2208R\ud835\udc51, s.t. \ud835\udc73\ud835\udc99= \ud835\udc89. (23) 6.2 Weighted Least-Square Solution The systematic bias introduced by the relative phase readout can be modeled as Gaussian noise. It is routine to assume the noise in parallel experiments are independent, yield zero mean, and have standard deviation proportional to the magnitude of observable physical quantities [3]. Without loss of generality, we consider all noise are integrated in the scalarized value of signal (4\ud835\udefe)\u22121 \ud835\udc502\ud835\udf12\ud835\udc58, which we denote as \ud835\udf06\ud835\udc58. The noisy measurement readout can be analytically modeled as e \ud835\udf06\ud835\udc58= \ud835\udf06\ud835\udc58(1 + \ud835\udeff\ud835\udc58) , \ud835\udf39\u223cN \u0010 0, \ud835\udf0c2\ud835\udc70 \u0011 , (24) where \ud835\udf0c\u2208[0, 1) is a scaling factor that characterizes the extent of measurement error, and \ud835\udf39is a vector of Gaussian noise. 6 QuERLoc , We use the R\ud835\udc5avectors \ud835\udf40, e \ud835\udf40respectively to aggregate the exact and noisy measurement readouts. Based on previous assumptions, on observing e \ud835\udf40, the problem (23) can be addressed by solving the following log-likelihood maximization: \u02c6 \ud835\udc99= arg max \ud835\udc99\u2208R\ud835\udc51 log L h e \ud835\udf40; {\ud835\udc64\ud835\udc56,\ud835\udc58}\ud835\udc56\u2208\ud835\udc3c\ud835\udc58: 1 \u2264\ud835\udc58\u2264\ud835\udc5a; Anc i = arg max \ud835\udc99\u2208R\ud835\udc51 log Pr \ud835\udf39\u223cN(0,\ud835\udf0c2\ud835\udc70) \u0014 e \ud835\udf40 \f \f \f \f e \ud835\udf06\ud835\udc58= \ud835\udf06\ud835\udc58(1 + \ud835\udeff\ud835\udc58) \u0015 = arg max \ud835\udc99\u2208R\ud835\udc51 \ud835\udc5a \u2211\ufe01 \ud835\udc58=1 log \u0014 1 \u221a 2\ud835\udf0b\ud835\udf0c\ud835\udf06\ud835\udc58 \u00b7 \ud835\udc52 (\ud835\udc62\ud835\udc47 \ud835\udc58\ud835\udc99\u2212\u210e\ud835\udc58\u2212\ud835\udf06\ud835\udc58+e \ud835\udf06\ud835\udc58)2 2\ud835\udf0c2\ud835\udf062 \ud835\udc58 \u0015 \u2248arg min \ud835\udc99\u2208R\ud835\udc51 \r \r \r \u221a\ufe01 e \ud835\udc4a \u0010 \ud835\udc73\ud835\udc99\u2212e \ud835\udc89 \u0011\r \r \r 2 . (25) Alternatively, we use term e \u210e\ud835\udc58= \u00cd \ud835\udc56\u2208\ud835\udc3c\ud835\udc58\ud835\udc64\ud835\udc56,\ud835\udc58\ud835\udc82\ud835\udc47 \ud835\udc56\ud835\udc82\ud835\udc56\u2212e \ud835\udf06\ud835\udc58as the \ud835\udc58th entry of vector e \ud835\udc89with noise, and e \ud835\udc4a= diag \u0010 e \ud835\udf06\u22122 1 , . . . , e \ud835\udf06\u22122 \ud835\udc5a \u0011 as the diagonal weighting matrix. Approximation in the last equality arises from our insufficient knowledge of the true measurement outcomes {\ud835\udf06\ud835\udc58}\ud835\udc5a \ud835\udc58=1, and we thus replace them by the noisy observations. Above optimization objective is a typical weighted least square (WLS) problem, which is convex and would yield a closed-form solution [37]: \u02c6 \ud835\udc99opt = \u0010 \ud835\udc73\ud835\udc47e \ud835\udc4a\ud835\udc73 \u0011\u22121 \ud835\udc73\ud835\udc47e \ud835\udc4ae \ud835\udc89. (26) It is worth noting that solving the QuERLoc problem requires relatively low computational complexity, as will be discussed in \u00a77.2.4. Unlike traditional localization methods, our QuERLoc directly admits a convex optimization problem in its simplified expression and no further transformation is required. 7 NUMERICAL ANALYSIS We present numerical analysis results in this section to demonstrate the performance of QuERLoc under different testbed settings. 7.1 Simulation Setups 7.1.1 Default Settings of Parameters. In subsequent experiments, we set up default values for a fraction of the parameters included, as listed in Tab. 1. Table 1: Default settings of experiment parameters Parameters Value Dimension \ud835\udc51 3 \ud835\udf05\ud835\udc60 100 (m) \ud835\udf05\ud835\udc4e/\ud835\udf05\ud835\udc60 0.5 Number of Anchors \ud835\udc5b 10 Anc \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 (0, 0, 0), (\ud835\udf05\ud835\udc4e, 0, 0), (0,\ud835\udf05\ud835\udc4e, 0) (0, 0,\ud835\udf05\ud835\udc4e)(\ud835\udf05\ud835\udc4e,\ud835\udf05\ud835\udc4e,\ud835\udf05\ud835\udc4e), (\ud835\udf05\ud835\udc4e, 0,\ud835\udf05\ud835\udc4e) (\ud835\udf05\ud835\udc4e,\ud835\udf05\ud835\udc4e, 0), (0,\ud835\udf05\ud835\udc4e,\ud835\udf05\ud835\udc4e), \u0010\ud835\udf05\ud835\udc4e 2 , \ud835\udf05\ud835\udc4e 2 , 0 \u0011 \u0010\ud835\udf05\ud835\udc4e 2 , \ud835\udf05\ud835\udc4e 2 ,\ud835\udf05\ud835\udc4e \u0011 \uf8fc \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fd \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fe Number of Ranging \ud835\udc5a 3, 4, 5 \u2200\ud835\udc58, |\ud835\udc3c\ud835\udc58| 2 Noise factor \ud835\udf0c 0% \u22125%, step length 0.5% In addressing an instance of localization problem , we can, without loss of generality, scale down all distance values by a coefficient, even if these values are of a very large magnitude. Thus, the choice of \ud835\udf05\ud835\udc60would not affect the result of our numerical evaluation, and we simply set \ud835\udf05\ud835\udc60= 100 (m) here. Note that we control the ratio \ud835\udf05\ud835\udc4e/\ud835\udf05\ud835\udc60to be smaller than 1, so as to generate both near-field and far-field instances, while the latter case is notably more sensitive to ranging noise. Anchors are deployed at the very beginning of the experiment with the specified topology to avoid degeneration of the baseline performance, and remain stationary throughout the simulations. To ensure the feasibility of experimental realization in subsequent works, we employ a 2-qubit entangled probe in the simulation, and thus |\ud835\udc3c\ud835\udc58| = 2. Entries of signing scheme {\ud835\udc64\ud835\udc56,\ud835\udc58}\ud835\udc56\u2208\ud835\udc3c\ud835\udc58 will be specified in the later context, subject to various choices of number of rangings \ud835\udc5a. 7.1.2 Baseline Algorithms. We compare QuERLoc with three rangebased localization approaches: (i) Multilateration + GD: multilateration is an enhanced version of trilateration to make the solution more robust to noise [39, 45] by involving more shots of ranging. We further apply a gradient-descent (GD) refinement [3] to the solution of the linear system introduced by multilateration in the presence of noise to provide a convincing baseline. (ii) SDP-based Localization: introduced to the field of sensor network localization in [35]. It is a powerful approach to achieve robust positioning in network with complex topology, we reduce it to the case of single sensor localization. (iii) TDoA: set up the same reference anchor among all time-difference rangings, and locate the sensors by finding the intersection point of a set of elliptic curves with a shared focus. In the following experiment, we use Chan\u2019s algorithm [21] to settle the non-convexity of distance terms by formulating a pseudo-linear system and solving it with SOCP [1]. 7.1.3 Performance Metrics. To evaluate the performance of a localization algorithm, we repeat the ranging and localization procedure under the same simulation settings, for a total of \ud835\udc5f\u2208N times. Denote \u02c6 \ud835\udc99(\ud835\udc61) and \ud835\udc99(\ud835\udc61) := \ud835\udc99(\ud835\udc61) real to be the estimation and ground truth of the sensor location in the \ud835\udc61th iteration. To measure the precision of all presented localization techniques, we examine both localization error \r \r \r\u02c6 \ud835\udc99(\ud835\udc61) \u2212\ud835\udc99(\ud835\udc61)\r \r \r of a single experiment and Root-Mean-SquareError (RMSE) [42] of \ud835\udc5fiterative experiments at the same noise level. Specifically, the RMSE is defined by RMSE := v t 1 \ud835\udc5f \ud835\udc5f \u2211\ufe01 \ud835\udc61=1 \r \r\u02c6 \ud835\udc99(\ud835\udc61) \u2212\ud835\udc99(\ud835\udc61)\r \r2. 7.1.4 Cram\u00e9r-Rao Lower Bound. Additionally, we examine the Cram\u00e9r-Rao Lower Bound (CRLB) as a benchmark to gauge the optimal accuracy attainable by the estimator \u02c6 \ud835\udc99employed by QuERLoc. Recall that we derive the optimization objective in (25) through the following log-probability density function: log L \u0010 e \ud835\udf40(\ud835\udc61)\u0011 = \u2212 \ud835\udc5a \u2211\ufe01 \ud835\udc58=1 log \u0010\u221a 2\ud835\udf0b\ud835\udf0ce \ud835\udf06(\ud835\udc61) \ud835\udc58 \u0011 \u2212 1 2\ud835\udf0c2 \r \r \r \u221a\ufe01 e \ud835\udc4a(\ud835\udc61) \u0010 \ud835\udc73(\ud835\udc61)\ud835\udc99\u2212e \ud835\udc89(\ud835\udc61)\u0011\r \r \r 2 . 7 , QuERLoc 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0 Noise Level (%) 0 10 20 30 40 50 60 70 RMSE (m) CRLB Multilateration+GD TDoA SDP QuERLoc 4.5 6.05 6.1 6.15 (a) RMSE, \ud835\udc5a= 3 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0 Noise Level (%) 0 10 20 30 40 50 60 RMSE (m) CRLB Multilateration+GD TDoA SDP QuERLoc 4.5 4.25 4.3 (b) RMSE, \ud835\udc5a= 4 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0 Noise Level (%) 0 10 20 30 RMSE (m) CRLB Multilateration+GD TDoA SDP QuERLoc 4.5 3.39 3.4 3.41 (c) RMSE, \ud835\udc5a= 5 0 20 40 60 80 100 120 140 Localization Error(m)-1% Noise 0.0 0.5 1.0 CDF 0 20 40 60 80 100 120 140 Localization Error(m)-5% Noise QuERLoc SDP Multilateration+GD TDoA (d) CDF, \ud835\udc5a= 3 0 10 20 30 40 50 Localization Error(m)-1% Noise 0.0 0.5 1.0 CDF 0 20 40 60 80 100 120 Localization Error(m)-5% Noise QuERLoc SDP Multilateration+GD TDoA (e) CDF, \ud835\udc5a= 4 0 10 20 30 Localization Error(m)-1% Noise 0.0 0.5 1.0 CDF 0 10 20 30 40 50 60 70 80 90 Localization Error(m)-5% Noise QuERLoc SDP Multilateration+GD TDoA (f) CDF, \ud835\udc5a= 5 Figure 5: Performance of QuERLoc and baselines over different noise levels. The CDF of localization error to all localization approaches when \ud835\udc5a= 3, 4 and 5 are plotted under noise levels 1% and 5%. The Fisher information matrix [31] of the log-likelihood function can be formulated as F \u0010 e \ud835\udf40(\ud835\udc61)\u0011 = \u2212E \" \ud835\udf152 log L(e \ud835\udf40(\ud835\udc61)) \ud835\udf15\ud835\udc99\ud835\udf15\ud835\udc99\ud835\udc47 # = ( \u2212E \" \ud835\udf152 log L(e \ud835\udf40(\ud835\udc61)) \ud835\udf15\ud835\udc65\ud835\udc56\ud835\udf15\ud835\udc65\ud835\udc57 #) \ud835\udc56\ud835\udc57 = (\u0012 \ud835\udc73(\ud835\udc61) \ud835\udf152\ud835\udc99 \ud835\udf15\ud835\udc65\ud835\udc56\ud835\udf15\ud835\udc65\ud835\udc57 \u0013\ud835\udc471 \ud835\udf0c2 E h e \ud835\udc4a(\ud835\udc61) \u0010 \ud835\udc73(\ud835\udc61)\ud835\udc99\u2212e \ud835\udc89(\ud835\udc61)\u0011i + 1 \ud835\udf0c2 \u0012 \ud835\udc73(\ud835\udc61) \ud835\udf15\ud835\udc99 \ud835\udf15\ud835\udc65\ud835\udc56 \u0013\ud835\udc47 E h e \ud835\udc4a(\ud835\udc61)i \u0012 \ud835\udc73(\ud835\udc61) \ud835\udf15\ud835\udc99 \ud835\udf15\ud835\udc65\ud835\udc57 \u0013) \ud835\udc56\ud835\udc57 \u22481 + 3\ud835\udf0c2 \ud835\udf0c2 \u0010 \ud835\udc73(\ud835\udc61)\u0011\ud835\udc47e \ud835\udc4a(\ud835\udc61)\ud835\udc73(\ud835\udc61). (27) The first term in the summation vanishes due to \ud835\udf152\ud835\udc99 \ud835\udf15\ud835\udc65\ud835\udc56\ud835\udf15\ud835\udc65\ud835\udc57= h \ud835\udf15\ud835\udeff\ud835\udc58\ud835\udc57 \ud835\udf15\ud835\udc65\ud835\udc56 i \ud835\udc58= 0. The final expression in (27) originates from \ud835\udf15\ud835\udc99\ud835\udc47/\ud835\udf15\ud835\udc99= \ud835\udc70, and E[e \ud835\udf06\u22122 \ud835\udc58] = \ud835\udf06\u22122 \ud835\udc58\u00b7E[1\u22122\ud835\udeff\ud835\udc58+3\ud835\udeff2 \ud835\udc58+\ud835\udc42(\ud835\udeff3 \ud835\udc58)] \u2248(1+3\ud835\udf0c2)\ud835\udf06\u22122 \ud835\udc58 by Taylor expansion along with E[\ud835\udeff\ud835\udc58] = 0 and E[\ud835\udeff2 \ud835\udc58] = Var[\ud835\udeff\ud835\udc58] + E2[\ud835\udeff\ud835\udc58] = \ud835\udf0c2. CRLB provides a lowerbound E[( \u02c6 \ud835\udc99(\ud835\udc61) \u2212\ud835\udc99(\ud835\udc61))( \u02c6 \ud835\udc99(\ud835\udc61) \u2212\ud835\udc99(\ud835\udc61))\ud835\udc47] \u2ab0 F \u22121(e \ud835\udf40(\ud835\udc61)). This allows us to derive a lowerbound for RMSE when \ud835\udc5fis sufficiently large, RMSE \ud835\udc5f\u226b1 = v t 1 \ud835\udc5f \ud835\udc5f \u2211\ufe01 \ud835\udc61=1 E h\u0000 \u02c6 \ud835\udc99(\ud835\udc61) \u2212\ud835\udc99(\ud835\udc61)\u0001\ud835\udc47\u0000 \u02c6 \ud835\udc99(\ud835\udc61) \u2212\ud835\udc99(\ud835\udc61)\u0001i = v t 1 \ud835\udc5f \ud835\udc5f \u2211\ufe01 \ud835\udc61=1 Tr \u0010 E h\u0000 \u02c6 \ud835\udc99(\ud835\udc61) \u2212\ud835\udc99(\ud835\udc61)\u0001 \u0000 \u02c6 \ud835\udc99(\ud835\udc61) \u2212\ud835\udc99(\ud835\udc61)\u0001\ud835\udc47i\u0011 CRLB \u2265 v t 1 \ud835\udc5f \ud835\udc5f \u2211\ufe01 \ud835\udc61=1 Tr \u0010 F \u22121(e \ud835\udf40(\ud835\udc61)) \u0011 . (28) 7.2 Evaluation Results We evaluate all the positioning instances on a classical computer. Within the same setup, we repeatedly generate \ud835\udc5f= 104 locations and perturb the distances data in an analogous way to (24). All approaches including QuERLoc and baselines (i)-(iii) will share identical testbed settings and estimate the same randomly generated sensor locations {\ud835\udc99(\ud835\udc61)}\ud835\udc5f \ud835\udc61=1. The choices of used anchors are determined by the particular protocols of each localization method according to their selective strategies. Notice that we mainly focus on QuERLoc\u2019s capability to acquire special distance combinations rather than the enhancement quantum metrology would bring to the readout precisions [14]. Thus, we set the factor \ud835\udf0cto be identical among all approaches adopted at the same noise level. We implemented all algorithms in Python, where the least-square regressions were solved using the built-in Python package numpy, and SOCPs/SDPs were solved using MOSEK [8]. All simulations were run on a Windows PC with 16GB memory and AMD Ryzen 9 7945HX CPU. 7.2.1 Performance with Few Numbers of Rangings. For each choice of the number of rangings \ud835\udc5a\u2208{3, 4, 5}, we set the probe scheme to be \ud835\udc3c\ud835\udc58= {2\ud835\udc58\u22121, 2\ud835\udc58} \u2282{1, . . . ,\ud835\udc5b} and \ud835\udc64\ud835\udc56,\ud835\udc58= \u2212(\u22121)\ud835\udc56for 1 \u2264\ud835\udc58\u2264\ud835\udc5a and \ud835\udc56\u2208\ud835\udc3c\ud835\udc58. QuERLoc exploits \ud835\udc5adistinct pairs of anchors for oneshot localization. Baselines approaches, including TDoA (where one anchor serves as the reference node across all TDoA measurements), utilize \ud835\udc5aanchors since each ranging only introduces information from a single new anchor. Fig. 5a, 5b and 5c report the RMSE of localization approaches with respect to the varying noise factor when \ud835\udc5a= 3, 4, and 5. QuERLoc works nicely under all presented cases, and consistently surpasses all the baseline methods. From the cumulative distribution function (CDF) of localization error in corresponding experiments under noise level 1% and 5% reported by Fig. 5d, 5e and 5f, we observe that QuERLoc achieves high localization accuracy for the majority of test cases, producing comparatively few estimation of significant deviations. It is noteworthy that when few (= 3) numbers of rangings are available in the 3-dimensional space, QuERLoc can still produce satisfactory location estimation and closely follows the CRLB, while the outputs of all proposed baselines yield large deviation from the ground truth. The reason is that with the aid of QuER, the objective set of optimization problem 8 QuERLoc , 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0 Noise Level (%) 0 10 20 RMSE (m) CRLB Multilateration+GD TDoA SDP QuERLoc 4.5 6.0 6.1 (a) RMSE, \ud835\udc5a= 3 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0 Noise Level (%) 0 10 20 RMSE (m) CRLB Multilateration+GD TDoA SDP QuERLoc 4.5 4.15 4.2 4.25 (b) RMSE, \ud835\udc5a= 4 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0 Noise Level (%) 0 10 20 RMSE (m) CRLB Multilateration+GD TDoA SDP QuERLoc 4.5 3.3875 3.39 3.3925 (c) RMSE, \ud835\udc5a= 5 0 10 20 Localization Error(m)-1% Noise 0.0 0.5 1.0 CDF 0 10 20 30 40 50 60 70 80 Localization Error(m)-5% Noise QuERLoc SDP Multilateration+GD TDoA (d) CDF, \ud835\udc5a= 3 0 10 Localization Error(m)-1% Noise 0.0 0.5 1.0 CDF 0 10 20 30 40 50 60 70 Localization Error(m)-5% Noise QuERLoc SDP Multilateration+GD TDoA (e) CDF, \ud835\udc5a= 4 0 10 Localization Error(m)-1% Noise 0.0 0.5 1.0 CDF 0 10 20 30 40 50 60 70 Localization Error(m)-5% Noise QuERLoc SDP Multilateration+GD TDoA (f) CDF, \ud835\udc5a= 5 Figure 6: Performance of QuERLoc and baselines with same anchor utilization. QuERLoc and all baseline methods conduct distance ranging with the same set of anchors. would degenerate from the intersection of a collection of curved surfaces to that of a collection of hyperplanes. 7.2.2 Superiority of QuERLoc with Same Anchor Utilization. One may doubt that the superiority of QuERLoc merely originates from the full utilization of available anchors, as in the previous experiment, QuERLoc used twice as many anchor nodes as baselines. We address this question by doubling the quantity of distance ranging for baselines (i.e., they are conducting 2\ud835\udc5arangings using the same anchors utilized by QuERLoc) while maintaining that of QuERLoc at \ud835\udc5a. As shown in Fig. 6, despite noticeable improvement in the performance of baselines, QuERLoc still largely outperforms them. As \ud835\udc5achanges from 4 to 5, baselines only yield marginal accuracy improvement. QuERLoc achieves an RMSE of 27% compared to the best-performing baseline Multilateration + GD, as shown in Fig. 6c. 7.2.3 Case Study: Mimicking QuERLoc with Classical Ranging. One question might be raised naturally: Given that a distance combination analogous to (20) is central to the superiority of QuERLoc, is it possible to mimic such a ranging process with classical ranging, thus achieving the same localization accuracy? We explore the feasibility of this approach by conducting the following experiment: For each instance of QuER, classical ranging on each involved target-anchor distance \ud835\udc51\ud835\udc56is meanwhile conducted and combined. As for the systematic noise, we perturb corresponding readouts for QuERLoc and classical simulating system, denoted as QuERLoc-sim, as follows: QuERLoc : e \ud835\udf06\ud835\udc58= (1 + \ud835\udeff\ud835\udc58) \u00b7 \u2211\ufe01 \ud835\udc56\u2208\ud835\udc3c\ud835\udc58 \ud835\udc64\ud835\udc56,\ud835\udc58\ud835\udc512 \ud835\udc56, \ud835\udeff\ud835\udc58\u223cN (0, \ud835\udf0c). QuERLoc-sim : e \ud835\udf06\u2032 \ud835\udc58= \u2211\ufe01 \ud835\udc56\u2208\ud835\udc3c\ud835\udc58 \ud835\udc64\ud835\udc56,\ud835\udc58[\ud835\udc51\ud835\udc56\u00b7 (1 + \ud835\udeff\ud835\udc56)]2 , \ud835\udeff\ud835\udc56\u223cN (0, \ud835\udf0c). The localization performance under both settings is compared in Fig. 7. It be observed that the QuERLoc-sim suffers from evident deterioration in performance, as it doubles the quantity of ranging required compared to QuER under the designed probing scheme, and the noise is integrated into the system in a quadratic manner. 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0 Noise Level (%) 0 1 2 3 4 5 6 RMSE (m) QuERLoc m = 3 QuERLoc m = 4 QuERLoc m = 5 QuERLoc-sim m = 3 QuERLoc-sim m = 4 QuERLoc-sim m = 5 Figure 7: Comparison of QuERLoc and QuERLoc-sim with \ud835\udc5a= 3, 4, 5 under non-zero noise levels 0.5% to 5%. Each cluster of boxes corresponds to one noise level. 7.2.4 Time Complexity. Choosing localization methods involves a trade-off between accuracy and latency. Trilateration-based localization requires low running time but is highly sensible to noise, while conic relaxation and gradient-descent methods have no guarantee of instantaneous localization. As is illustrated in Tab. 2 the computational complexity and time consumption of several localization approaches when \ud835\udf0c= 5%, QuERLoc provides reliable localization with much more efficient computational requirements. On average, QuERLoc consumes only 2.4% of the time required by the most efficient baseline method Multilateration + GD. 8 CONCLUSION In this paper, we present QuERLoc, a novel localization approach that exploits the advantage of quantum-enhanced ranging realized by quantum metrology with entangled probes. We propose a new distance ranging model based on the quantum control theory and phase estimation by fine-tuning dynamics of quantum probes under two-level approximation, which we call QuER. We show that by a specially designed probe state, quantum-enhanced ranging 9 , QuERLoc Localization Methods Time Complexity Average Time Consumption (sec) QuERLoc \ud835\udc42\u0000\ud835\udc512(\ud835\udc5a+ \ud835\udc51)\u0001 3.27 \u00d7 10\u22124 SDP \ud835\udc42 \u0010\u221a \ud835\udc51(\ud835\udc5a\ud835\udc512 + \ud835\udc5a\ud835\udf14+ \ud835\udc51\ud835\udf14) log(1/\ud835\udefc) \u0011 # state-of-the-art [18] 2.38 \u00d7 10\u22122 TDoA \ud835\udc42\u0000\ud835\udc512(\ud835\udc5a+ \ud835\udc51) +\ud835\udc47\ud835\udc46\ud835\udc42\ud835\udc36\ud835\udc43 \u0001 2.03 \u00d7 10\u22122 Multilateration+GD \ud835\udc42\u0000\ud835\udc512(\ud835\udc5a+ \ud835\udc51) + \ud835\udc40\ud835\udc512\u0001 1.37 \u00d7 10\u22122 Table 2: Complexity and latency, \ud835\udf0c= 5%. \ud835\udefc> 0 is the relative accuracy, \ud835\udf14is the exponent of matrix multiplication, and \ud835\udc40is the number of BFGS (default choice of gradient-descent search in cvxpy [9]) iterations. Asymptotic behaviour of \ud835\udc47\ud835\udc46\ud835\udc42\ud835\udc36\ud835\udc43depends on the solver\u2019s adaptive choice of problem reduction into various conic programming instances. When reduced to SDP, \ud835\udc47\ud835\udc46\ud835\udc42\ud835\udc36\ud835\udc43= \ud835\udc42( \u221a \ud835\udc51(\ud835\udc5a\ud835\udc512 + \ud835\udc5a\ud835\udf14+ \ud835\udc51\ud835\udf14) log(1/\ud835\udefc)). can result in a convex optimization problem, which can be solved efficiently. Extensive simulations verify that QuERLoc significantly outperforms baseline approaches using classical ranging and saturates CRLB, demonstrating its superiority in both accuracy and latency. Our work provides a theoretical foundation for a potential application of quantum metrology in the field of range-based localization. We believe QuERLoc leads the research on localization with quantum resource and opens new directions to both fields of quantum computing and mobile computing."
16
+ }
intro_8K/test_introduction_short_2404.16899v1.json ADDED
@@ -0,0 +1,16 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16899v1",
3
+ "title": "mlr3summary: Concise and interpretable summaries for machine learning models",
4
+ "abstract": "This work introduces a novel R package for concise, informative summaries of\nmachine learning models.\n We take inspiration from the summary function for (generalized) linear models\nin R, but extend it in several directions:\n First, our summary function is model-agnostic and provides a unified summary\noutput also for non-parametric machine learning models;\n Second, the summary output is more extensive and customizable -- it comprises\ninformation on the dataset, model performance, model complexity, model's\nestimated feature importances, feature effects, and fairness metrics;\n Third, models are evaluated based on resampling strategies for unbiased\nestimates of model performances, feature importances, etc.\n Overall, the clear, structured output should help to enhance and expedite the\nmodel selection process, making it a helpful tool for practitioners and\nresearchers alike.",
5
+ "authors": "Susanne Dandl, Marc Becker, Bernd Bischl, Giuseppe Casalicchio, Ludwig Bothmann",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "cs.LG",
9
+ "cats": [
10
+ "cs.LG"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "LLM Fairness",
14
+ "gt": "Machine learning (ML) increasingly supports decision-making processes in various domains. A data scientist has a wide range of models available, ranging from intrinsically interpretable models such as linear models to highly complex models such as random forests or gradient boosted trees. Intrinsically interpretable models can come at the expense of generalization performance, i.e., the model\u2019s capability to predict accurately on future data. Being able to interpret predictive models is either often a strict requirement for scientific inference or at least a very desirable property to audit models in other (more technical) contexts. Many methods have been proposed for interpreting black-box ML models in the field of interpretable ML (IML). For comparing (generalized) linear models (GLMs), the stats package in R offers a summary function, which only requires the model (fitted with lm or glm) as input. As an example, glm is applied to a preprocessed version of the German credit dataset (Hofmann 1994) (available in the package via data(\"credit\", package = \"mlr3summary\")): arXiv:2404.16899v1 [cs.LG] 25 Apr 2024 2 mlr3summary: Concise and interpretable summaries for machine learning models > logreg = glm(risk ~., data = credit , + family = binomial(link = \"logit\")) > summary(logreg) Call: glm(formula = risk ~., data = credit, family = binomial(link = \"logit\")) Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) 1.057e+00 3.646e-01 2.900 0.00373 ** age 9.103e-03 8.239e-03 1.105 0.26925 ... Residual deviance: 656.19 on 515 degrees of freedom AIC: 670.19 ... This (shortened) summary informs about the significance of variables (Pr(>|z|)), their re- spective effect size and direction (Estimate), as well as the goodness-of-fit of the model (Residual deviance and AIC). Unfortunately, many other non-parametric ML models cur- rently cannot be analyzed similarly: either targeted implementations exist for specific model classes, or an array of different model-agnostic interpretability techniques (e.g., to derive fea- ture importance) scattered across multiple packages (Molnar, Bischl, and Casalicchio 2018; Biecek 2018; Zargari Marandi 2023) must be employed. However, especially in applied data science, a user often performs model selection or model comparison across an often diverse pool of candidate models, so a standardized diagnostic output becomes highly desirable. Another issue is that in the glm-based summary, the goodness-of-fit is only evaluated on the training data, but not on hold-out/test data. While this might be appropriate for GLM- type models \u2013 provided proper model diagnosis has been performed \u2013 this is not advisable for non-parametric and non-linear models, which can overfit the training data.1 Here, hold- out test data or in general resampling techniques like cross-validation should be used for proper estimation of the generalization performance Simon (2007). Such resampling-based performance estimation should also be used for loss-based IML methods. For interpretability methods that only rely on predictions, this might also be advisable but might not lead to huge differences in results (Molnar, K\u00f6nig, Herbinger, Freiesleben, Dandl, Scholbeck, Casal- icchio, Grosse-Wentrup, and Bischl 2022; Molnar, Freiesleben, K\u00f6nig, Herbinger, Reisinger, Casalicchio, Wright, and Bischl 2023). Contributions With the mlr3summary package, we provide a novel model-agnostic summary function for ML models and learning algorithms in R. This is facilitated by building upon mlr3 (Lang, Binder, Richter, Schratz, Pfisterer, Coors, Au, Casalicchio, Kotthoff, and Bischl 2019; Bischl, Sonabend, Kotthoff, and Lang 2024) \u2013 a package ecosystem for applied ML, including resampling-based performance assessment. The summary function returns a struc- tured overview that gives information on the underlying dataset and model, generalization performances, complexity of the model, fairness metrics, and feature importances and effects. For the latter two, the function relies on model-agnostic methods from the field of IML. The 1For completeness\u2019 sake: Overfitting can happen for GLMs, e.g., in high-dimensional spaces with limited sample size. Susanne Dandl, Marc Becker, Bernd Bischl, Giuseppe Casalicchio, Ludwig Bothmann 3 output is customizable via a flexible control argument to allow adaptation to different appli- cation scenarios. The mlr3summary package is released under LGPL-3 on GitHub (https:// github.com/mlr-org/mlr3summary) and CRAN (https://cran.r-project.org/package= mlr3summary). Documentations in the form of help pages are available as well as unit tests. The example code of this manuscript is available via demo(\"credit\", package = \"mlr3summary\").",
15
+ "main_content": "Most R packages that offer model summaries are restricted to parametric models and extend the stats summary method (e.g., modelsummary (Arel-Bundock 2022), broom (Robinson, Hayes, and Couch 2023)). Performance is only assessed based on training data \u2013 generalization errors are not provided. Packages that can handle diverse ML models focus primarily on performance assessment (e.g., mlr3 (Lang et al. 2019), caret (Kuhn and Max 2008)). Packages that primarily consider feature importances and effects do not provide overviews in a concise, decluttered format but provide extensive reports (e.g., modelDown (Romaszko, Tatarynowicz, Urba\u0144ski, and Biecek 2019) and modelStudio (Baniecki and Biecek 2019) based on DALEX (Biecek 2018), or explainer (Zargari Marandi 2023)). While it is possible to base the assessment on hold-out/test data, assessment based on resampling is not automatically supported by these packages. Overall, to the best of our knowledge, there is no R package yet that allows for a concise yet informative overview based on resampling-based performance assessment, model complexity, feature importance and effect directions, and fairness metrics. 3. Design, functionality, and example The core function of the mlr3summary package is the S3-based summary function for mlr3 Learner objects. It has three arguments: object reflects a trained model \u2013 a model of class Learner fitted with mlr3; resample_result reflects the results of resampling \u2013 a ResampleResult object fitted with mlr3; control reflects some control arguments \u2013 a list created with summary_control (details in Section 3.2). The mlr3 package is the basis of mlr3summary because it provides a unified interface to diverse ML models and resampling strategies. A general overview of the mlr3 ecosystem is given in Bischl et al. (Bischl et al. 2024). With mlr3, the modelling process involves the following steps: (1) initialize a regression or classification task, (2) choose a regression or classification learner, (3) train a model with the specified learner on the initialized task, (4) apply a resampling strategy. The last step is necessary to receive valid estimates for performances, importances, etc., as mentioned in Section 1. The following lines of code illustrate steps (1)-(4) on the (preprocessed) credit dataset from Section 1 using a ranger random forest. As a resampling strategy, we conduct 3-fold cross-validation. > task = TaskClassif$new(id = \"credit\", backend = credit , + target = \"risk\") > rf = lrn(\"classif.ranger\", predict_type = \"prob\") + target = \"risk\") > rf = lrn(\"classif.ranger\", predict_type = \"prob\") > rf$train(task) > cv3 = rsmp(\"cv\", folds = 3L) > rr = resample(task = task , learner = rf , resampling = cv3, 4 mlr3summary: Concise and interpretable summaries for machine learning models + store_models = TRUE) Internally, the resample function fits, in each iteration, the model on the respective training data, uses the model to predict the held-out test data, and stores the predictions in the result object. To compute performances, complexities, importances, and other metrics, the summary function iteratively accesses the models and datasets within the resulting resample object, which requires setting the parameter store_models = TRUE within the resample function. For the final summary output, the results of each iteration are aggregated (e.g., averages and standard deviations (sds)). 3.1. Summary function and output This section shows the summary call and output for the random forest of the previous credit example and provides some details on each displayed paragraph. > summary(object = rf , resample_result = rr) General provides an overview of the task, the learner (including its hyperparameters), and Susanne Dandl, Marc Becker, Bernd Bischl, Giuseppe Casalicchio, Ludwig Bothmann 5 the resampling strategy.2 Residuals display the distribution of residuals of hold-out data over the resampling iterations. For regression models, the residuals display the difference between true and predicted outcome. For classifiers that return class probabilities, the residuals are defined as the difference between predicted probabilities and a one-hot-encoding of the true class. For classifiers that return classes, a confusion matrix is shown. Performance displays averages and sds (in [ ]) of performance measures over the iterations.3 The shown performance values are the area-under-the-curve (auc), the F-score (fbeta), the binary Brier score (bbrier), and Mathew\u2019s correlation coefficient (mcc). The arrows display whether lower or higher values refer to a better performance. \u201c(macro)\u201d indicates a macro aggregation, i.e., measures are computed for each iteration separately before averaging. \u201c(micro)\u201d would indicate that measures are computed across all iterations (see Bischl et al. (2024) for details). Complexity displays averages and sds of two model complexity measures proposed by Molnar, Casalicchio, and Bischl (Molnar et al. 2020): sparsity shows the number of used features that have a non-zero effect on the prediction (evaluated by accumulated local effects (ale) (Apley and Zhu 2020)); interaction_strength shows the scaled approximation error between a main effect model (based on ale) and the prediction function.4 Importance shows the averages and sds of feature importances over the iterations. The first column (pdp) displays importances based on the sds of partial dependence curves (Friedman 2001; Greenwell, Boehmke, and McCarthy 2018), the second column (pfi.ce) shows the results for permutation feature importance Breiman (2001); Fisher, Rudin, and Dominici (2019). Effects shows average effect plots over the iterations \u2013 partial dependence plots (pdp) and ale plots (Friedman 2001; Apley and Zhu 2020). For binary classifiers, the effect plots are only shown for the positively-labeled class (here, task$positive = \"good\"). For multi-class classifiers, the effect plots are given for each outcome class separately (one vs. all). For categorical features, the bars are ordered according to the factor levels of the feature. The learner can also be a complete pipeline from mlr3pipelines (Binder, Pfisterer, Lang, Schneider, Kotthoff, and Bischl 2021), where the most common case would be an ML model with associated pre-processing steps. Then, the summary output also shows some basic information about the pipeline.5 Since preprocessing steps are treated as being part of the learner, the summary output is displayed on the original data (e.g., despite one-hot encoding of categorical features, importance results are not shown for each encoding level separately). The learner can also be an AutoTuner from mlr3tuning, where automatic processes for tuning the hyperparameters are conducted. Examples on pipelining and tuning are given in the demo of the package. 3.2. Customizations 2Currently, this is the only paragraph that is based on object, all other paragraphs are based on resample_result. 3Please note that there is no unbiased estimator of the variance, see (Nadeau and Bengio 1999) and Section 5 for a discussion. 4The interaction strength has a value in [0, 1], 0 means no interactions, 1 means no main effects but interactions. 5Linear pipelines can be displayed in the console, non-linear parts are suppressed in the output. 6 mlr3summary: Concise and interpretable summaries for machine learning models The output of the summary function can be customized via a control argument which requires a list created with the function summary_control as an input. If no control is specified, the following default setting is used: > summary_control(measures = NULL , + complexity_measures = c(\"sparsity\", \"interaction_strength\"), + importance_measures = NULL , n_important = 15L, + effect_measures = c(\"pdp\", \"ale\"), + fairness_measures = NULL , protected_attribute = NULL , + hide = NULL , digits = max(3L, getOption(\"digits\") 3L)) Performances are adaptable via measures, complexities via complexity_measures, importances via importance_measures and effects via effect_measures within summary_control. Examples are given in the demo of the package. The default for measures and importance_ measures is NULL, which results in a collection of commonly reported measures being chosen, based on the task type \u2013 for concrete measures see the help page (?summary_control). n_important reflects that, by default, only the 15 most important features are displayed in the output. This is especially handy for high-dimensional data. With hide, paragraphs of the summary output can be omitted (e.g., \"performance\") and with digits, the number of printed digits is specified. Fairness assessment for classification and regression models is also available in mlr3summary based on the mlr3fairness package (Pfisterer, Siyi, and Lang 2023). Therefore, a protected attribute must be specified. This can be done either within the task by updating the feature roles or by specifying a protected_attribute in summary_control. The following shows the code and output when specifying sex as a protected attribute. The shown default fairness measures are demographic parity (dp), conditional use accuracy equality (cuae) and equalized odds (eod), other measures are possible via fairness_measures in summary_control. > summary(object = rf , resample_result = rr , + control = summary_control(protected_attribute = \"sex\")) 4. Runtime assessment To assess how the runtime scales with differing numbers of features p \u2208{5, 10, 25, 50, 100} and numbers of observations n \u2208{50, 100, 500, 1000, 2000}, we conducted a simulation study. Given X1, X2, X3 \u223cU(0, 1), X4 \u223cBern(0.75), the data generating process is y = f(x) + \u03f5 with f(x) = 4x1 +4x2 +4x4x2 3 and \u03f5 \u223cN(0, 0.1\u00b7f(x)). As noise variables, X5 as a categorical feature with five classes, and X6, ..., Xp \u223cN(0, 1) were added to the data. We trained random forests and linear main effect models on the datasets and conducted 3-fold cross-validation. The first two figures in Figure 1 show that runtimes of the linear model were lower compared to the random forest. To improve runtimes, we added parallelization over the resampling Susanne Dandl, Marc Becker, Bernd Bischl, Giuseppe Casalicchio, Ludwig Bothmann 7 iterations (via the future package (Bengtsson 2021)) as another feature to mlr3summary \u2013 results for the random forest (with 3 cores) are on the right. Overall, scaling of runtimes is worse in p than in n. Figure 1: Runtimes of the summary function for linear models (left), and random forests without (middle) and with (right) parallelization, for differing numbers of features p and observations n. 5. Outlook and discussion In conclusion, this paper introduces a novel R package for concise model summaries. The summary output is highly adaptable due to a control argument and might be extended in the future. We also plan to offer a report function for detailed visualizations and model comparisons. To assess importance and effects of single features, mlr3summary builds upon the iml and fastshap packages. These packages only offer a limited set of interpretation methods. Recommended alternatives to permutation feature importances like conditional feature importance Molnar et al. (2022), are currently not available in a proper R package (published on CRAN). Our summary also currently lacks proper statistical tests for importances or confidence intervals for performances. This is because unbiased estimates of the variance are required which is a challenge for resampling strategies and the available methods that propose unbiased estimates are computationally infeasible (e.g., due to many model refits) (Molnar et al. 2023; Stephen Bates and Tibshirani 2023). Addressing this issue requires some concerted efforts from the research community. If methods are readily available in R, we are happy to integrate them in mlr3summary. Acknowledgments This work has been partially supported by the Federal Statistical Office of Germany."
16
+ }
intro_8K/test_introduction_short_2404.16921v1.json ADDED
@@ -0,0 +1,17 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16921v1",
3
+ "title": "A Short Survey of Human Mobility Prediction in Epidemic Modeling from Transformers to LLMs",
4
+ "abstract": "This paper provides a comprehensive survey of recent advancements in\nleveraging machine learning techniques, particularly Transformer models, for\npredicting human mobility patterns during epidemics. Understanding how people\nmove during epidemics is essential for modeling the spread of diseases and\ndevising effective response strategies. Forecasting population movement is\ncrucial for informing epidemiological models and facilitating effective\nresponse planning in public health emergencies. Predicting mobility patterns\ncan enable authorities to better anticipate the geographical and temporal\nspread of diseases, allocate resources more efficiently, and implement targeted\ninterventions. We review a range of approaches utilizing both pretrained\nlanguage models like BERT and Large Language Models (LLMs) tailored\nspecifically for mobility prediction tasks. These models have demonstrated\nsignificant potential in capturing complex spatio-temporal dependencies and\ncontextual patterns in textual data.",
5
+ "authors": "Christian N. Mayemba, D'Jeff K. Nkashama, Jean Marie Tshimula, Maximilien V. Dialufuma, Jean Tshibangu Muabila, Mbuyi Mukendi Didier, Hugues Kanda, Ren\u00e9 Manass\u00e9 Galekwa, Heber Dibwe Fita, Serge Mundele, Kalonji Kalala, Aristarque Ilunga, Lambert Mukendi Ntobo, Dominique Muteba, Aaron Aruna Abedi",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "cs.LG",
9
+ "cats": [
10
+ "cs.LG",
11
+ "cs.CL"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "LLM Fairness",
15
+ "gt": "Predicting population movements during disease outbreaks is a complex yet crucial task, with signifi- cant implications for public health decision-making 1Groupe de Recherche de Prospection et Valorisation des Donn\u00e9es (Greprovad), Global 2GRIC, Universit\u00e9 de Sher- brooke, Canada 3University of Kinshasa, Dem. Rep. of the Congo (DRC) 4LISV-UVSQ, Universit\u00e9 Paris-Saclay, France 5University of Klagenfurt, Austria 6School of Electri- cal Engineering and Computer Science, University of Ottawa, Canada 7Montreal Behavioural Medicine Centre, Centre In- t\u00e9gr\u00e9 Universitaire de Sant\u00e9 et Services Sociaux du Nord-de- l\u2019\u00cele-de-Montr\u00e9al (CIUSSS-NIM), Canada 8Biomedical Re- search Unit, Hospital Monkole, Kinshasa, DRC 9University of Florida, USA 10Programme National de Lutte Con- tre la Trypanosomiase Humaine Africaine (PNLTHA), Di- rection of Epidemiological Surveillance, Ministry of Pub- lic Health of the DRC 11Centre National d\u2019Intelligence \u00c9pid\u00e9miologique (CNIEP), Direction of Epidemiological Surveillance, Ministry of Public Health of the DRC. Corre- spondence email: [email protected] and [email protected] and the formulation of epidemic control strategies. The recent COVID-19 pandemic has underscored the importance of understanding human mobility in predicting and controlling the spread of infectious diseases. Human mobility data can be combined with other data sources to help understand mobil- ity patterns. This provides valuable insights into how to slow down the rapid spread of the disease. Additionally, it helps to analyze the correlation be- tween the number of epidemic-infected cases and human activities in recreational areas such as parks. Furthermore, it enables early detection and prompt isolation of virus infection. Mobility data, derived from various sources such as call detail records, global positioning system, social networks, and ex- pert knowledge of a region (Isaacman et al., 2012; Ebrahimpour et al., 2020; Sobral et al., 2020). While traditional epidemiological models heav- ily rely on mobility data, employing approaches like clustering techniques, differential equations, and statistical modeling (Kulkarni et al., 2019; Rah- man et al., 2021), recent years have witnessed a paradigm shift towards the use of deep learning methodologies, specifically Transformer architec- tures pretrained on large corpora. These advanced techniques aim to tackle the inherent complexi- ties involved in modeling human mobility dynam- ics during epidemics (Ma et al., 2022a; Kobayashi et al., 2023). Contribution. In this paper, we provide a com- prehensive overview of recent research endeavors aimed at leveraging machine learning techniques, specifically Transformer models, to enhance the prediction of human mobility patterns in the con- text of epidemics. We highlight the contributions of both pretrained language models and Large Lan- guage Models (LLMs) tailored explicitly for mo- bility prediction tasks. Furthermore, we discuss the challenges and future directions in this emerging field, emphasizing the potential of these advanced modeling techniques to inform more accurate and arXiv:2404.16921v1 [cs.LG] 25 Apr 2024 actionable epidemiological models.",
16
+ "main_content": "A mobility pattern describes the movement of a considered population over a given observation period. Human mobility modeling tasks can be categorized into two main tasks: generation, which involves generating realistic mobility data, and prediction, which involves predicting future mobility patterns at both the individual and collective levels (Luca et al., 2021). In the context of our study, a trajectory corresponds to a sequence of spatio-temporal information related to an individual\u2019s movement. Depending on the task at hand, trajectories can be aggregated by geographic regions. Given two regions, a flow represents the frequency of individuals moving inbound and outbound from one region (the origin) to another (the destination). Figure 1 displays, in one hand, the generation task, which includes subtasks such as flow generation and trajectory generation, where generative models, including Transformers and LLMs, play a crucial role. On the other hand, the prediction task includes crowd flow prediction and next location prediction, for which robust forecasting models are required. Our paper presents a comprehensive overview of recent advancements in leveraging Transformers and LLMs for modeling human mobility patterns, particularly in the context of epidemic control (see Table 1). This table provides information about the methods, applications, and datasets utilized in recent studies involving Transformers and LLMs for modeling human mobility patterns in the context of epidemic modeling. 3 Transformers in Human Mobility Transformers are a type of deep learning architecture that consists of two parts: an encoder and a decoder (Vaswani et al., 2017). They have been instrumental in the recent breakthroughs we observe in various machine learning tasks. These include, but are not limited to, text-to-image generation, machine translation, and text summarization. One of the key factors contributing to the success of Transformers is the attention mechanism. This mechanism allows the model to prioritize the most relevant input data for tasks such as predicting the next word given a context. While its initial application was primarily on textual data, it has since been established that Transformers are effective across a multitude of applications, including forecasting, where they have shown superior performance compared to their predecessors (Vaswani et al., 2017; Trivedi et al., 2021; Osawa et al., 2021; Solatorio, 2023; Xu et al., 2023; Kobayashi et al., 2023; Tedjopurnomo et al., 2023; Wang et al., 2023a, 2024b). Furthermore, Transformers are multimodal, meaning they can combine data sources of different types, such as text, images, graphs, etc. Consequently, their use has seen a significant rise in recent years, including in the prediction of human mobility patterns for epidemic modeling (Li et al., 2021; Devyatkin et al., 2021; Xue et al., 2022a; Cui et al., 2021; Xue et al., 2021; Mai et al., 2022; Li et al., 2022; Hong et al., 2022; Shen et al., 2023; Ren et al., 2023; Botz et al., 2022; Terashima et al., 2023; Bengio et al., 2020; Xu et al., 2021; Ma et al., 2022b; Arag\u00e3o et al., 2023; Violos et al., 2022; Choya et al., 2023; Mao et al., 2023; Wang et al., 2023b; Chen et al., 2023). Figure 2 illustrates the architecture of a Transformer model designed for mobility prediction. The model receives a sequence of location and time data representing a mobility trajectory as input. This trajectory is encoded using the Transformer\u2019s encoder component, which captures the temporal and spatial dependencies within the sequence. The resulting encoding is then passed to the decoder, which generates predictions for the next location in the trajectory. This self-contained framework utilizes the Transformer\u2019s attention mechanism to effectively capture long-range dependencies and spatial-temporal patterns in mobility data, enabling accurate prediction of future locations. Initially, Transformer-based models like BERT showed promise in predicting mobility flows based on textual and location data (Devlin et al., 2018; Li et al., 2021; Crivellari et al., 2022). However, challenges persisted in generalization to new locations and outbreak scenarios (Devyatkin et al., 2021). Terashima et al. (2023) introduce LP-BERT for predicting human mobility trajectories using the Transformer architecture. LP-BERT enables parallel predictions, reducing training and prediction times, which can be beneficial for tasks like epidemic modeling that require quick insights into population movements. When discussing epidemic modeling, Botz et al. (2022) discuss modeling approaches for early warning, monitoring of pandemics, and decision support in public health crises. It emphasizes the importance of population-level computational modeling, Figure 1: Human mobility modeling tasks taxonomy by Luca et al. (2021). ... Input Embedding Transformer \u00a0Encoder ... Transformer \u00a0Decoder ... Output Embedding Predict Next Location Figure 2: Transformer-based mobility prediction: Encoding mobility trajectories to predict future locations. including machine learning techniques, in strengthening healthcare systems against respiratory infections. The authors highlight the significance of predicting outbreak impacts, monitoring disease spread, and assessing intervention effectiveness. Moreover, Ma et al. (2022a) discuss the importance of human trajectory completion in controlling the spread of COVID-19, present a solution based on Transformers and evaluate it using an opensource human mobility dataset. The proposed solution involves using Transformers and deep learning models to estimate missing elements in trajectories. Similarly, Li et al. (2021) present a Transformerbased model for long-term prediction of seasonal influenza outbreaks. The proposed model addresses the limitations of traditional forecasting methods by leveraging the Transformer\u2019s ability to capture long-range dependencies, and introduces a sources selection module based on curve similarity measurement to incorporate spatial dependencies. To monitor human movements and comprehend the emergence of the pandemic, Bengio et al. (2020) develop advanced deep learning models for predicting infectiousness for proactive contact tracing during the COVID-19 pandemic, introduce the concept of proactive contact tracing (PCT) and discuss the use of deep learning predictors to locally predict individual infectiousness based on contact history while respecting privacy constraints. The study highlights the effectiveness of deep learningbased PCT methods in reducing disease spread compared to other tracing methods, suggesting their potential for deployment in smartphone apps to balance virus spread and economic costs while maintaining strong privacy measures. Recent studies have made significant strides in leveraging advanced deep learning techniques for forecasting and modeling various aspects of the COVID-19 pandemic (Devyatkin et al., 2021; Cui et al., 2021; Violos et al., 2022; Xu et al., 2021). These studies utilize recurrent neural networks and Transformer-like architectures, multi-range encoder-decoder frameworks, self-attention based models, and generative adversarial networks to analyze socioeconomic impacts, forecast COVID-19 cases, predict human density in urban areas, and simulate human mobility trajectories. Devyatkin et al. (2021) develop deep neural network models for forecasting the socioeconomic impacts of COVID-19 in Russian regions, particularly focusing on the regional cluster of Moscow and its neighbors. The models, based on recurrent and Transformer-like architectures, utilize heterogeneous data sources including daily cases, age demographics, transport availability, and hospital capacity. The study shows that incorporating demographic and healthcare features improves the accuracy of economic impact predictions, and data from neighboring regions enhances predictions of healthcare and economic impacts. Overall, the research emphasizes the importance of forecasting to address inter-territorial inequality during the pandemic. Cui et al. (2021) propose a multi-range encoder-decoder framework for COVID-19 prediction, leveraging historical case data, human mobility patterns, and reported cases and deaths to enhance prediction accuracy. By embedding features from multiple expose-infection ranges and utilizing message passing between time slices, the model surpasses existing methods in both weekly and daily prediction tasks. Ablation studies confirm the effectiveness of key components, demonstrating the model\u2019s ability to perform well with or without mobility data. The framework addresses challenges posed by incomplete data and unknown disease factors, offering a promising approach for precise and timely COVID-19 forecasting. Violos et al. (2022) present a self-attention based Paper Year Method Application Datasets WiFiMod: Transformer-based Indoor Human Mobility Modeling using Passive Sensing (Trivedi et al., 2021) 2021 WiFiMod (Transformer-based model) Predicting indoor human mobility Enterprise WiFi system logs MobTCast: Leveraging Auxiliary Trajectory Forecasting for Human Mobility Prediction (Xue et al., 2021) 2021 Transformer Human mobility prediction Gowalla, Foursquare-NYC (FS-NYC), and Foursquare-Tokyo (FS-TKY) Predicting Human Behavior with Transformer Considering the Mutual Relationship between Categories and Regions (Osawa et al., 2021) 2021 Transformer Predicting human mobility Not specified TraceBERT\u2014A Feasibility Study on Reconstructing Spatial\u2013Temporal Gaps from Incomplete Motion Trajectories via BERT Training Process on Discrete Location Sequences (Crivellari et al., 2022) 2022 BERT Trajectory reconstruction Real-world large-scale trajectory dataset of short-term tourists (CDRs) Integrating Transformer and GCN for COVID-19 Forecasting (Li et al., 2022) 2022 Transformer and GCN COVID-19 Forecasting Nytimes Coronavirus (COVID-19) Data Large Language Models for Spatial Trajectory Patterns Mining (Zhang et al., 2023) 2023 LLMs such as GPT-4 and Claude-2 Anomaly detection in mobility data GEOLIFE, PATTERNS-OF-LIFE How Do You Go Where? Improving Next Location Prediction by Learning Travel Mode Information using Transformers (Hong et al., 2022) 2022 Transformer Next location prediction Green Class (GC) and Yumuv GeoFormer: Predicting Human Mobility using Generative Pre-trained Transformer (Solatorio, 2023) 2023 GPT-based model Predicting human mobility HuMob Challenge 2023 datasets Modeling and Generating Human Mobility Trajectories using Transformer with Day Encoding (Kobayashi et al., 2023) 2023 Transformer with Day Encoding Modeling and generating human mobility trajectories HuMob dataset CrowdFlowTransformer: Capturing Spatio-Temporal Dependence for Forecasting Human Mobility (Choya et al., 2023) 2023 Transformer Crowd flow forecasting Not specified TrafFormer: A Transformer Model for Predicting Long-term Traffic (Tedjopurnomo et al., 2023) 2023 Transformer Long-term traffic prediction METR-LA, PEMS-BAY Where Would I Go Next? Large Language Models as Human Mobility Predictors (Wang et al., 2023a) 2023 LMM Human mobility prediction GEOLIFE, FSQ-NYC User Re-identification via Human Mobility Trajectories with Siamese Transformer Networks (Wang et al., 2023a) 2023 Siamese Transformer network User re-identification Gowalla, Brightkite, and Foursquare (NYC, TKY) Exploring Large Language Models for Human Mobility Prediction under Public Events (Liang et al., 2023) 2023 LLM Human mobility prediction under public events Publicly available event information and taxi trip data Learning Daily Human Mobility with a Transformer-Based Model (Wang and Osaragi, 2024) 2024 Transformer Modelling human mobility Tokyo Metropolitan Area Health-LLM: Large Language Models for Health Prediction via Wearable Sensor Data (Kim et al., 2024) 2024 LLM Epidemic control PMData, LifeSnaps, GLOBEM, AW_FB, MITBIH, and MIMIC-III Beyond Imitation: Generating Human Mobility from Context-aware Reasoning with Large Language Models (Shao et al., 2024) 2024 LLM Mobility generation Tencent and Mobile Dataset Large Language Models as Urban Residents: An LLM Agent Framework for Personal Mobility Generation (Wang et al., 2024a) 2024 LLM Personal mobility generation Not specified MobilityGPT: Enhanced Human Mobility Modeling with a GPT model (Haydari et al., 2024) 2024 GPT Mobility modeling Real-world datasets COLA: Cross-city Mobility Transformer for Human Trajectory Simulation (Wang et al., 2024b) 2024 Transformer Human trajectory simulation GeoLife, Yahoo, New York, Singapore Table 1: Literature review of Transformers and LLMs for modeling human mobility patterns to epidemic control encoder-decoder model for predicting human density in urban areas, incorporating deep learning methods and geospatial feature preprocessing. This research enhanced human mobility prediction in epidemic modeling by providing insights into population movement patterns, aiding in the analysis of disease transmission dynamics, and supporting the implementation of strategic interventions to mitigate the spread of epidemics. Xu et al. (2021) propose DeltaGAN, a generative model for synthesizing continuous-time human mobility trajectories. DeltaGAN captures realistic mobility dynamics without discretizing visitation times, enabling more accurate trajectory generation and analysis. Its utility is demonstrated in studying the spreading of COVID-19, showing small divergence in population distribution compared to real data. Spatio-temporal epidemic forecasting models have been developed to predict epidemic transmission dynamics by integrating domain knowledge with neural networks (Mao et al., 2023; Ma et al., 2022b). Mao et al. (2023) introduce a spatiotemporal epidemic forecasting model called MPSTAN, which integrates domain knowledge with neural networks to accurately predict epidemic transmission. This study emphasizes the importance of selecting appropriate domain knowledge for forecasting and proposes a dynamic graph structure to capture evolving interactions between patches over time. Ma et al. (2022b) introduce an approach, Hierarchical Spatio-Temporal Graph Neural Networks (HiSTGNN), for pandemic forecasting using large-scale mobility data. HiSTGNN incorporates a two-level neural architecture and a Transformer-based model to capture spatial and temporal information hierarchically. The model outperforms existing baselines in predicting COVID-19 case counts, demonstrating its superior predictive power. The research highlights the importance of leveraging mobility data for pandemic forecasting and addresses the limitations of existing Graph Neural Networks in capturing community structures within mobility graphs. Additionally, models like CF-Transformer and MSP-STTN have been proposed to capture spatiotemporal dependencies for crowd flow forecasting, contributing to human mobility prediction in epidemic modeling (Choya et al., 2023; Xie et al., 2022). More specifically, Choya et al. (2023) introduced the CrowdFlowTransformer (CFTransformer) model, which combines Transformer with graph convolution to capture spatio-temporal dependencies for crowd flow forecasting, and aims to improve forecasting accuracy by considering both temporal and spatial aspects of crowd flow data for applications like human mobility prediction in epidemic. Xie et al. (2022) proposed the MSP-STTN model for shortand long-term crowd flow prediction, focusing on grid-based crowd data analysis. MSP-STTN contributes to human mobility prediction in epidemic modeling by providing insights into long-term crowd flow patterns, aiding in urban planning and traffic management. Its applications extend to various grid-based prediction problems beyond crowd flow analysis, such as weather forecasting and air pollution prediction. These advancements underscore the critical role of machine learning in enhancing our understanding of disease dynamics and informing public health interventions during epidemics. 4 Large Language Models in Human Mobility Recently, there has been a surge in the development of Large Language Models (LLMs) tailored specifically for high-fidelity human mobility simulation and forecasting (Xue et al., 2022b; Liang et al., 2023; Wang et al., 2023c; Zhang et al., 2023; Tang et al., 2024; Shao et al., 2024; Kim et al., 2024; Wang et al., 2024b; Haydari et al., 2024). These models, trained on massive corpora of mobility data paired with auxiliary information, demonstrate the capability to generate plausible mobility trajectories for entire populations under various policy and disease conditions. Despite these advancements, challenges persist regarding ensuring adequate coverage, transparency, and safety for real-world epidemiological applications. Exploring further, Xue et al. (2022b) propose a pipeline that leverages language foundation models for human mobility forecasting by transforming numerical temporal sequences into sentences for prediction tasks. By integrating language models with mobility prompts, the study provides empirical evidence of the effectiveness of this approach in discovering sequential patterns, which can be valuable for predicting human mobility in epidemic modeling scenarios and potential disease spread. Similarly, Liang et al. (2023) explore LLMs\u2019 application for predicting human mobility patterns during public events (LLM-MPE). Addressing the challenge of incorporating textual data from online event descriptions into mobility prediction models, LLM-MPE transforms raw event descriptions into a standardized format and segments historical mobility data to make demand predictions considering both regular and event-related components. This approach can indirectly inform epidemic modeling by providing insights on travel patterns and potential disease spread dynamics during events, thereby aiding the development of more accurate epidemic models. In another stride, Wang et al. (2023c) introduce LLM-Mob, a framework utilizing LLMs for human mobility prediction, capturing both long-term and short-term dependencies and employing contextinclusive prompts. LLM-Mob contributes to epidemic modeling by providing interpretable predictions, underscoring the potential of LLMs in advancing human mobility prediction techniques to address epidemic spread. Tang et al. (2024) present an approach that integrates LLMs with spatial optimization for urban travel itinerary planning. Focusing on the Online Urban Itinerary Planning (OUIP) problem, this study demonstrates the effectiveness of the proposed system through offline and online experiments. The methodology involves using LLMs like GPT-3.5 and GPT-4 for itinerary generation, along with spatial optimization techniques and rule-based metrics for evaluation. This approach can contribute to human mobility prediction in epidemic modeling by efficiently generating personalized and coherent itineraries based on natural language requests, which can help understand and predict human movement patterns in urban contexts during epidemics. Furthermore, by leveraging LLMs for itinerary generation and spatial optimization, the system can adapt to diverse user needs and provide tailored travel plans, valuable in modeling and predicting human mobility changes during epidemics for better public health planning and management. More recently, Shao et al. (2024) proposed an approach called MobiGeaR for generating human mobility data using LLMs and a mechanistic gravity model. MobiGeaR involves leveraging LLM reasoning and a divide-and-coordinate mechanism to generate mobility patterns effectively. The proposed approach significantly reduces the token cost per trajectory and boosts the accuracy of mobility prediction models through data augmentation. The MobiGeaR approach can contribute to human mobility prediction in epidemic modeling by generating high-quality data to augment sparse datasets, enabling mining and modeling of motion patterns for predicting future trajectories based on historical data. The approach can improve the predictive performance crucial for epidemic control and other applications requiring accurate mobility by providing better enhancements in downstream mobility prediction tasks, particularly in intention-type prediction. 5 Challenges and Limitations Despite their promising performance, Transformers and LLMs face several challenges when applied to human mobility prediction tasks in epidemic modeling. One major challenge is the availability and quality of relevant data sources, which can be subject to biases or errors that affect model performance (Kulkarni et al., 2019). Additionally, the applicability of these advanced models extends beyond well-resourced regions to lowand middle-income countries (LMICs) and resource-constrained settings with underdeveloped electronic health records (Tshimula et al., 2023). In these contexts, leveraging machine learning techniques for human mobility prediction can significantly enhance the understanding and management of epidemics by providing valuable insights into population movements even with limited data availability and infrastructure. Mastering the speed of mobility and the number of movements within a given environment during an epidemic context can consequently help in formulating appropriate public health strategies. Taking the example of a screening activity for sleeping sickness in a village where the endemic level is known, and where the main activity of the inhabitants is farming, with mobility defined between the place of residence and the fields during dawn and dusk hours, the failure to consider this type of mobility by healthcare professionals could result in a large number of absences and non-respondents to these activities, even though these individuals had been planned and accounted for. On the other hand, considering an industrialized country context, where means of transportation include airplanes, subways, high-speed trains, and where large surfaces and amusement parks are present, the speed and number of movements would also be high; in such an environment, the spread rate of an epidemic would be directly proportional to mobility. It is therefore important to master the mobility data of such a population and to use it in a public health context to contain the epidemic. Implementing artificial intelligence (AI) models in LMICs poses significant challenges, primarily due to the potential non-reproducibility of their initial performance upon integration with local datasets and the absence of regulatory frameworks (Wang et al., 2023d). Addressing this challenge is critical to ensuring the effectiveness and reliability of Transformers or LLMs used for human mobility modeling in LMICs, ultimately enhancing epidemiological surveillance and the health outcomes of local populations. While fine-tuning these AI models is a recommended approach for specific applications (Yang et al., 2023; Li et al., 2019), it is essential, particularly in the context of LMICs, to plan cross-validation of these models with local datasets to improve and reproduce the model\u2019s original performance. Moreover, ethical considerations may arise when using these models for surveillance purposes or making decisions about public health interventions based on predictions from these models. Therefore, ensuring responsible deployment of these technologies, particularly in underserved regions, is crucial for achieving equitable and effective epidemic control strategies. 6 Conclusion This emerging area shows promise for improving epidemiological modeling through advanced mobility prediction. Continued progress in integrating multimodal data streams and expert knowledge can significantly bolster public health decision-making by providing more realistic models of human movement dynamics during crises. However, further work is essential to overcome existing limitations and ensure responsible deployment of LLMs. The successful implementation of Transformers or LLMs models in LMICs necessitates careful consideration of the model\u2019s suitability for the local context and adjustments to the training and validation datasets. The scope of implementing these AI models in LMICs lies in developing more contextually appropriate models, integrating local datasets, and fostering collaboration to improve performance and reproducibility. Future research endeavors should prioritize enhancing model generalizability across diverse geographical and socio-economic contexts. Moreover, efforts should be directed towards adapting these advanced modeling techniques to resourceconstrained settings, particularly in LMICs, where access to data and computational resources may be limited. This includes exploring innovative approaches for collecting and processing human mobility data in LMICs, as well as adapting LLMs to accommodate varying sociocultural contexts. Addressing these challenges will be crucial for ensuring the widespread applicability and impact of machine learning-based approaches in epidemic modeling and public health decision-making worldwide. This will contribute to more equitable and effective epidemic response strategies on a global scale. Acknowledgments The authors thank all Greprovad members for helpful discussions and comments on early drafts."
17
+ }
intro_8K/test_introduction_short_2404.16926v1.json ADDED
@@ -0,0 +1,16 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16926v1",
3
+ "title": "Observational predictions for the survival of atomic hydrogen in simulated Fornax-like galaxy clusters",
4
+ "abstract": "The presence of dense, neutral hydrogen clouds in the hot, diffuse\nintra-group and intra-cluster medium is an important clue to the physical\nprocesses controlling the survival of cold gas and sheds light on cosmological\nbaryon flows in massive halos. Advances in numerical modeling and observational\nsurveys means that theory and observational comparisons are now possible. In\nthis paper, we use the high-resolution TNG50 cosmological simulation to study\nthe HI distribution in seven halos with masses similar to the Fornax galaxy\ncluster. Adopting observational sensitivities similar to the MeerKAT Fornax\nSurvey (MFS), an ongoing HI survey that will probe to column densities of\n$10^{18}$ cm$^{-2}$, we find that Fornax-like TNG50 halos have an extended\ndistribution of neutral hydrogen clouds. Within one virial radius, we predict\nthe MFS will observe a total HI covering fraction around $\\sim$ 12\\% (mean\nvalue) for 10 kpc pixels and 6\\% for 2 kpc pixels. If we restrict this to gas\nmore than 10 half-mass radii from galaxies, the mean values only decrease\nmildly, to 10\\% (4\\%) for 10 (2) kpc pixels (albeit with significant\nhalo-to-halo spread). Although there are large amounts of HI outside of\ngalaxies, the gas seems to be associated with satellites, judging both by the\nvisual inspection of projections and by comparison of the line of sight\nvelocities of galaxies and intracluster HI.",
5
+ "authors": "Avinash Chaturvedi, Stephanie Tonnesen, Greg L. Bryan, Gerg\u00f6 Popping, Michael Hilker, Paolo Serra, Shy Genel",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "astro-ph.GA",
9
+ "cats": [
10
+ "astro-ph.GA"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Diffusion AND Model",
14
+ "gt": "1. The formation and evolution of galaxies is now under- stood to be strongly linked with the diffuse gas filling their dark matter halos. Depending on galaxy mass, this gas is called the circumgalactic medium (CGM) when the gas resides in the halos of galaxies of the Milky Way mass or lower, or the intracluster medium (ICM) for gas living in more massive cluster-sized halos (Mtot \u22731014 M\u2299). Various physical processes, such as gas accretion from the intergalactic medium, and feedback driven by stars and AGN, occur in this gaseous halo. These pro- cesses drive flows which regulate the rate at which gas cools on to the galaxy itself, controlling the amount of mass in the interstellar medium and hence the rate of star formation itself, including possibly quenching (see [email protected] Tumlinson et al. 2017, for a review). Therefore, under- standing the physical processes in the CGM or ICM is crucial to building a comprehensive picture of galaxy evolution. We highlight here the important question of the origin and survival of cold gas in the hot ICM, by studying the phase-space distribution of cold gas in massive halos at low redshift, with a particular focus on the observability of this gas in ongoing and future HI surveys. For low-mass, star-forming galaxies, it is expected that their halo can host an abundant amount of cold gas that may be the fuel for future star formation. The halos of massive galaxies, on the other hand, may lack cold gas, resulting in the lack of any recent star for- mation activity (Gauthier & Chen 2011). In addition, current observational (Chen et al. 2018; Berg et al. 2019; Zahedy et al. 2019) and simulation studies (Dav\u00b4 e et al. 2020; Rahmati et al. 2015; Nelson et al. 2020) suggest that at intermediate redshift (0.3 \u2264z \u22640.8), massive ha- arXiv:2404.16926v1 [astro-ph.GA] 25 Apr 2024 2 los have a significant amount of cold gas. For example, Chen et al. (2018); Zahedy et al. (2019) studied lumi- nous red galaxies (LRG) at redshift z \u223c0.21-0.55 and found that these galaxies host a high column density of the cold gas tracers HI and MgII. Other similar stud- ies point toward the same conclusion that LRGs host an abundant amount of cold gas (Zhu et al. 2014; Lan & Mo 2018; Anand et al. 2021, 2022). At low-redshift, simulation studies of Milky Way-like galaxies show that their CGM also host substantial amounts of cold gas (van de Voort et al. 2019). Recently, simulations have made improved predictions for the cold gas distribution in halos that reproduce many of the observed galactic properties. For example, Nelson et al. (2020) have shown that cold gas in the ha- los of LRGs can be attributed to the thermal instability triggered by local density perturbations. They suggest that these perturbations are related to gas stripped from infalling galaxies via tidal interactions, or ram pressure stripping. Performing a comparative study between cos- mological and idealized simulations (individual galaxy halo simulations), Fielding et al. (2020) have also shown that cold gas extends to the virial radius for Milky Way- mass halos. They also suggest that non-spherical accre- tion and satellite galaxies contribute to the cold gas con- tent in the outer halos. Previously, Villaescusa-Navarro et al. (2018) performed a detailed study using the Il- lustris TNG100 simulation, investigating the HI abun- dance and clustering properties in halos for z \u22645. They showed that HI density profiles are sensitive to various processes such as AGN feedback and tidal stripping. For massive halos, they found that HI is mostly concentrated in their satellite galaxies, whereas for small halos it is concentrated in the central galaxy. In contrast to intermediate redshifts, the study of cold gas in the ICM of massive halos (M\u2217\u22651011 M\u2299) in the local Universe (z \u223c0) is limited to a few studies. Nonetheless, using the HI 21 cm emission line, radio observations have demonstrated the abundant existence of cold neutral atomic gas around early-type galaxies (E/S0) galaxies (Serra et al. 2012, 2013b; Young et al. 2014; Serra et al. 2013a). However, these observations are limited to tens of kpc around the targeted galax- ies and are typically not sensitive to HI column densi- ties below \u22641019 cm\u22122, and therefore do not provide a comprehensive picture of the cold gas in the ICM. The ongoing observations from the MeerKAT Fornax survey (hereafter MFS, Serra et al. 2019; Serra et al. 2023), a radio continuum and line survey of the Fornax cluster, provides an excellent opportunity to study the HI gas in great detail in the nearby Fornax ICM (d \u223c20 Mpc). Photometric and spectroscopic studies (Cantiello et al. 2020; Chaturvedi et al. 2022) have shown that the Fornax cluster mass assembly is still ongoing, making it an interesting target to study. MFS is dedicated to studying the HI distribution and kinematics within the Fornax environment. The HI column density sensitiv- ity of the MFS ranges from \u223c5\u00d71019 cm\u22122 at a spatial resolution of \u223c10 arcsec (\u223c1 kpc at Fornax distance) down to 1018 cm\u22122 at \u223c100 arcsec (\u223c10 kpc at For- nax distance). With a mosaic area of the 12 square degrees, MFS will detect HI in the Fornax intracluster (hereafter IC) region - which in this paper refers to the region within the massive dark matter halo and outside satellite galaxies. The high-resolution TNG50 cosmological simulations (Nelson et al. 2019; Pillepich et al. 2019) provide a me- dian spatial resolution of \u223c100 parsec and its validation of cold gas (neutral and molecular hydrogen) against observational work (Popping et al. 2019; Diemer et al. 2019) makes TNG50 an ideal framework to explore the cold gas distribution in Fornax-like halos. This also pro- vides a chance to forecast the upcoming MFS survey results and test the simulations against the MFS obser- vations. In this work, we use the TNG50 simulations (Nelson et al. 2019; Pillepich et al. 2019) and adopt the observing criteria of the MFS to study the HI content in the TNG50 halos similar to the Fornax galaxy cluster. We also study the HI distribution in these halos and their IC region. We calculate the HI covering fraction for these halos and predict the expected observed MFS HI covering fraction. In addition, both the spatial and velocity distribution of HI gas in the ICM of clusters and groups can be used to gain insight into the origin and survival of this cold gas. If the gas is correlated in both position and velocity with satellites, we can argue that the HI is likely either stripped from satellites or cooling is induced by satel- lites. However, if cold gas is not correlated with satel- lite galaxy positions or velocities, we might argue that either cold gas formation is related to the central galaxy or that cold gas survives in the ICM long enough to be- come virialized (e.g., Voit et al. 2017; Rohr et al. 2023). In this paper we take the first step of making these spa- tial and velocity maps, leaving gas particle tracking to future work. The paper is organized as follows: In section 2, we briefly introduce the TNG50 simulation and present the methodology for calculating the HI covering fraction. Section 3 presents our results about the HI distribution and its covering fraction. In section 4, we compare our results to current simulation (Section 4.1) and observa- tional (Section 4.2) studies, and discuss the likely origin of the cold gas in the intracluster medium of Fornax-like 3 halos (Section 4.3). Section 5 presents the summary of the work. 2. SIMULATION AND METHODOLOGY This section briefly introduces the TNG50 simulation that we use for our analysis as well as our criteria for selecting Fornax-like halos. In addition, we present our methodology for calculating the HI column density and HI covering fraction of the selected halos in TNG50 in order to compare to observational surveys. 2.1. The TNG simulations For our study, we use the TNG50 simulation (Nelson et al. 2019; Pillepich et al. 2019), the highest resolution simulation of the IllustrisTNG cosmological magneto- hydrodynamical (MHD) simulation suite (Nelson et al. 2018; Springel et al. 2018; Pillepich et al. 2018; Mari- nacci et al. 2018; Naiman et al. 2018). The IllustrisTNG project is a set of large cosmological simulations that include a variety of galaxy formation physics including AGN feedback. The model has been designed to match a wide range of observational constraints (Pillepich et al. 2018; Springel et al. 2018) and was carried out with the moving mesh code AREPO (Springel 2010). The AREPO code solves the coupled evolution of dark mat- ter, gas, stars, and black holes under the influence of self- gravity and ideal MHD. Developed with the key motiva- tion to study galaxy formation physics and understand the growth of cosmic structure physics, the IllustrisTNG project uses three distinct simulation box sizes. TNG50 was carried out with a box size of 51.7 Mpc per side with 21603 gas and dark matter cells, resulting in a baryon mass resolution of 8.4 \u00d7 104 M\u2299. In partic- ular, we used TNG50-1 (hereafter TNG50), the highest resolution of the three variants run, which provides a me- dian spatial resolution of \u223c100 pc. We analyze this run, although the other larger boxes (TNG100 and TNG300) contain a larger number of Fornax cluster-sized objects, because we need high spatial resolution to study the in- teraction of cold high density columns and the hot group medium. TNG50 adopts initial conditions and cosmo- logical parameters consistent with the Planck Collabo- ration et al. (2016) cosmology with h = 0.68, \u2126b = 0.05, \u2126m = 0.31, \u2126\u03bb = 0.69, and \u03c38 = 0.82 and assuming a flat universe governed by a \u039b cold dark matter (\u039bCDM) cosmology. 2.2. Fornax-like Halo selection In TNG50 a galaxy cluster and groups of galaxies are referred to as halo or FOF (hereafter referred as halo), identified through the friends-of-friends algorithm (Davis et al. 1985). Within each halo, the SUBFIND algorithm (Springel et al. 2001) identifies the subhalos including the primary (central) galaxy and other satel- lite galaxies (hereafter referred as satellite). To find ha- los similar to the Fornax cluster in TNG50 at snapshot 99 (redshift z = 0), we applied a virial mass selection criterion analogous to the Fornax cluster mass (M200 \u223c5 \u00d7 1013 M\u2299, adopted from Drinkwater et al. 2001), namely the mass range of 1013.5 \u2264M200 \u22641014 M\u2299, where M200 is defined as the mass enclosed within a virial radius R200 equal to 200 times the critical density of the Universe. With this condition, we find a total of seven halos. For these halos, we measured the stellar velocity dis- persion of their central galaxy and found that this value is quite close to that of NGC1399, the central galaxy of the Fornax cluster. Except for halo IDs 4 and 9, all other halo central galaxy stellar velocity dispersions fall within \u00b1 20 % to the stellar velocity dispersion value of 315 kms of NGC1399 (Vaughan 2019). In addition to this, the velocity dispersion of all the subhalos within these halos agrees to within 15% of the observed For- nax cluster members (giants and dwarf galaxies) mean velocity dispersion value of 374 \u00b125 km/s (Drinkwater et al. 2001). A previous study of the hot X-ray emitting medium of galaxy groups in the TNG50 cosmological simulations (Truong et al. 2020) showed a good match to observations. More recently, a new set of zoom-in cosmological simulations using the same model \u2018TNG- cluster\u2019 (Truong et al. 2023) have demonstrated good agreement on a larger sample. These successes in repro- ducing observable properties of galaxy clusters indicates that our seven halos are reasonable matches to the For- nax cluster. From here onward, we refer to these halos as Fornax-like halos. In Table 1, we list the physical properties of these halos. 2.3. Atomic HI content To determine the HI mass of gas cells in TNG50 Fornax-like halos, we use the Popping et al. (2019) molecular hydrogen fraction (H2) catalogue, previously calculated for the TNG simulations. In this work we use their fiducial recipe, which is based on the work by Gnedin & Kravtsov (2011). Gnedin & Kravtsov (2011) performed detailed simu- lations including non-equilibrium chemistry and simpli- fied 3D on-the-fly radiative transfer calculations. Based on these simulations, the authors presented fitting for- mulae for the H2 fraction of neutral gas as a function of the dust-to-gas ratio of the gas, the impinging UV radiation field, and surface density of the neutral gas. 4 Table 1. TNG50 halos similar to the Fornax galaxy cluster TNG50 Halo ID Virial Mass (M200) Virial radius (R200) Total HI mass Central galaxy vel. dispersion Halo members vel. dispersion Log M\u2299 (\u00d7 100 kpc) Log M\u2299 km/s km/s 1 13.97 9.59 10.80 384.24 446.20 2 13.81 8.46 10.37 373.80 402.12 3 13.54 6.92 11.24 306.22 321.83 4 13.50 6.71 11.07 204.56 345.08 6 13.54 6.88 10.67 279.20 320.58 7 13.52 6.80 10.38 348.00 331.76 9 13.51 6.75 10.49 254.10 325.88 Popping et al. (2019) assume that the dust-to-gas ratio scales with the metallicity of the neutral gas, that the local UV radiation field scales with the SFR of the gas cell with an additional contribution from the ionising UV background field and they calculate the gas surface density of a gas cell by multiplying its density by the Jeans length of the cell. A detailed description of the implementation of the Gnedin & Kravtsov (2011) fitting formulae within the TNG simulation suite is presented in Popping et al. 2019, (see their Section 2). 2.4. Halo HI covering fraction To understand and quantify the HI distribution in the halos, we measure their HI covering fraction in differ- ent column densities bins (hereafter denoted as NHI), adopting a range of NHI> 1018, 1019, and 1020 cm\u22122. We first measured the HI column density, by perform- ing a two-dimensional binning of all HI gas cells along the three projected axis X, Y and Z (spatial position), regardless of the velocity space and adopted a pixel size of 2 kpc, similar to the MFS spatial resolution limit. This assumes that gas particle sizes are smaller than 2 kpc, which is generally true for the TNG50 halos gas cells. We checked that larger gas cells have negligible contribution to the HI mass and hence to HI column density. In Section 3.2, we present the HI covering frac- tion results as a function of velocity. We measure the HI covering fraction in two ways, sim- ilar to Rahmati et al. (2015), as follows: Cumulative HI covering fraction, hereafter denoted as fHI(R), is defined as the fraction of surface area covered by the binned pixels having column density higher than a given NHI value within a radius R divided by the total area of pixels within radius R. fHI(R) is expressed as: fHI(R) = PN i=1 ANHI|R PN i=1 A|R (1) Here ANHI is the single pixel area with a column density equal or higher than the given NHI value and is summed over N such pixels in a given area of radius R divided by the total area of pixels in radius R. Differential HI covering fraction, hereafter denoted as fHI(\u2206R), is defined similarly to be cumulative HI cov- ering fraction, except here we consider only the covering fraction within the radial bin defined between radius Rj and Rj+1 and fHI(\u2206R) is expressed as: fHI(\u2206R) = PN i=1 ANHI|\u2206R PN i=1 A|\u2206R (2) We also separately measure the total HI covering frac- tion and the HI covering fraction in the intracluster (IC) regions of the halos, that is, the regions well away from identified galaxies. For the IC measurement, we first select the satellite galaxies within a halo with a stellar mass \u2265108.5 M\u2299and having at least 10 gas cells (vary- ing this to 1 gas particle has no effect on our results). Then, we remove the HI gas cells associated with these satellite (including the central galaxy) using the SUB- FIND algorithm of TNG50. This removal of HI gas is done out to ten times the stellar half mass radius (de- noted as R1/2\u2217) of a satellite. In TNG50, the SUBFIND algorithm identifies all the gas cells that are gravita- tionally bound to a specific satellite. After removing the gas cells of a satellite identified with the SUBFIND procedure out to 10\u00d7R1/2\u2217, we measure the HI covering fraction in the same way as defined earlier. The R1/2\u2217 radii of satellite vary from a few kpc to tens of kpc, and with our adopted radial limit (10\u00d7R1/2\u2217), we make sure that we exclude all gas cells that are within the domain of an individual galaxy. However, the IC measurement may contain the very extended tidal/stripped gas tails originating from the individual galaxies. 3. RESULTS 5 Log NHI cm-2 < Figure 1. HI distribution in the TNG50 halo 6 at redshift z=0, colour-coded with the HI column density. The maps are made using a pixel scale of 2 kpc and are shown projected along the (arbitrarily chosen) z-axis. The three pink circles indicate the viral radius of the halos marked at 0.5, 1.0 and 1.5 times Rvir. Dark blue color in the maps indicate the HI column density lower than NHI= 1016 cm\u22122. Left: Full halo HI distribution, right: HI distribution in the intracluster (IC) region (i.e. after removing HI within 10 stellar half-mass radii of all galaxies). This section presents our results showing the HI dis- tribution and covering fraction in the TNG50 halos and their IC regions. For our study, we considered gas cells gravitationally bound to a halo, including gas cells ex- tending to an average 1.5 Rvir radii of halos. Figure 1 shows the HI distribution in TNG50 halo 6 (left panel) and in its IC region (right panel). Three pink circles indicate the virial radius drawn at 0.5, 1.0 and 1.5 Rvir. These images demonstrates both the patchy nature of HI in the simulated clusters, as well as it\u2019s distributed nature. The IC image (right) emphasizes that much of the HI (at least by area) is not immediately connected to galaxies \u2013 that is, it is at least 10 stellar half-mass radii from any galaxy in the simulation. Figure 2 shows the HI distributions for all other Fornax-like halos (top two rows) and in their IC regions (bottom two rows). The first visual impression we get from these plots is that the large-scale distribution of HI extends beyond 0.5 virial radii (\u223c350 kpc) for all the halos. Otherwise, these halos demonstrate diverse HI distributions with significant variations in the amount of HI mass (Table 1). The diverse and extended HI dis- tribution in these halos could potentially be related to the merger/accretion history of these halos or the ac- tivity of supermassive black holes (Zinger et al. 2020). We can also see the streams or filamentary structures connecting the central and satellite galaxies. We notice, in addition, a large number of small HI regions with relatively high column densities, which are possibly not related to any satellite. We refer to these as clouds. We caution that, unlike Nelson et al. (2019), we have not performed Voronoi tessellation over the gas cells to identify these clouds-like structures and it merely rep- resents HI clumps around the satellite galaxies and in the intra-cluster regions of these halos. For the halos HI map, we see that the centers of the satellite are domi- nated by HI column densities NHI between 1020 to 1021 cm\u22122. Looking at these maps, it is quite clear that HI clouds extend out to the virial radius (corresponding to \u223c700 kpc) covering the IC regions. Within the inner region of each halo, around \u223c0.25 Rvir, a large fraction of the HI gas cells are associated with the central galaxy of the halo. In the IC region, the observed HI struc- tures primarily have column densities NHI< 1020 cm\u22122, whereas the structures beyond 0.5 virial radii lie mainly at 1018 < NHI/cm\u22122< 1019. We present the results of the HI covering fraction of halos and in the IC regions in subsections 3.1 and 3.2, respectively. 6 Log NHI cm-2 < Figure 2. HI maps in the Fornax-like halos (first two rows) and in their IC regions (bottom two rows) made with a pixel size of 2 kpc (projected along the arbitrarily chosen z-axis). The three pink circles indicate the viral radius of the halos marked at 0.5, 1.0 and 1.5 times Rvir. 7 Figure 3. HI cumulative (left panel) and differential (right panel) covering fraction profiles of Fornax-like halos HI maps (Figure 2, first two rows) measured along the arbitrarily chosen z-axis for a pixel size of 2 kpc. The first, middle, and bottom rows show the covering fraction for HI column densities log NHI \u226518, 19 and 20 cm\u22122 respectively. The thin lines indicate the individual halos, and the thick lines mark the average value. The vertical dashed lines indicate the average virial radii of the halos. 8 Figure 4. The same as Figure 3, but for the HI distribution in the intracluster (IC) region of Fornax-like halos HI maps (Figure 2, bottom two rows) measured along the arbitrarily chosen z-axis for a pixel size of 2 kpc. We obtain the IC HI by removing the gas cells gravitationally bound to all galaxies within 10\u00d7 their stellar half mass radii. Note the y-axis ranges differ from Figure 3. 3.1. HI covering fraction profiles We used the HI projected maps as shown in Figure 1 to measure the HI covering fraction as discussed in sec- tion 2. We measured the HI covering fractions of halos for three projections, along the X, Y, and Z directions, using a pixel scale of 2 kpc. In this section, we present the HI covering fraction for the full halo maps (top two rows in Figure 2) and in the next section, we discuss the covering fraction of IC regions. In Figure 3 we show the cumulative and dif- ferential covering fraction profiles measured along the (arbitrarily chosen) z-axis in the left and right panel, respectively. For these panels, we include all of the HI gas in the covering fraction calculation, whether or not it is within the central or a satellite galaxy. We do this because, in a blind HI survey, the satellite galaxies may not be identified, so the HI would be measured glob- ally. By definition, the innermost point of the cumu- lative and differential covering fractions are the same, then the differential covering fraction begins decreasing more steeply than the cumulative covering fraction. The first, middle, and bottom rows in both panels show the covering fraction for HI column density for NHI \u22651018, 1019 and 1020 cm\u22122 respectively. The thin lines indi- 9 cate the individual halos, and the thick lines mark the average value. We focus mainly on the NHI bins of 1018 and 1019 cm\u22122, which are the optimal range for studying the HI distribution at \u223ckpc scales for the MFS. We find that regardless of the projection axis, the average cov- ering fraction for the NHI \u22651018 cm\u22122 bin remains be- tween 10-15% at 0.5 Rvir and drops to 6-10% at 1 Rvir. With increasing column density, the covering fraction decreases, such that for the NHI \u22651019 cm\u22122 bin the covering fraction drops to 5-10% at 0.5 Rvir and to less then 5% at 1 Rvir. The differential covering fraction at 0.5 virial radius is between 5-10% and drops to less than 5% at 1 Rvir. These covering fractions quantify our visual impres- sions. Although all the halos have some HI gas within 0.5 Rvir, it is distributed non-uniformly in small struc- tures that look like filaments or clouds. In Figure 3 we verify that the covering fraction of these structures is low, even when including column densities down to 1018 cm\u22122. These structures and clouds could potentially be associated with the central galaxy and satellite galaxies, but from Figure 2 we see that the HI is clearly extended well beyond the satellite stellar radii. However, this gas might have been stripped from satellites to form part of the IC region, which we discuss in the next section and in Section 4.3. 3.2. Intra-cluster HI covering fraction The bottom two rows of Figure 2 show the HI dis- tribution in the IC regions of the halos. The majority of pixels with column density NHI\u22651018 cm\u22122 extend into the IC regions (the cumulative HI covering fraction drops by about 30% at 1 Rvir). It is primarily pixels with high column density HI, NHI > 1020 cm\u22122, which are removed when only including the IC gas cells in our column density measurement (the cumulative HI cover- ing fraction drops by about 70% at 1 Rvir). Similar to Figure 3, we show the HI covering profiles for the IC regions in Figure 4. For the NHI bins 1018 and 1019 cm\u22122, the cumulative covering fraction at 0.5 Rvir is between 5-10 % and drops to less than 5% at 1 Rvir. We checked the differences in the HI covering fraction along the different projected directions and find that, on average, the HI covering fraction remains the same and varies only by a few percent when changing the projected axis. Figure 5 shows the cumulative HI covering fraction at 0.5 Rvir (upper panel) and 1 Rvir (lower panel) be- tween the Fornax-like halo and their IC region in three different projected axes for a column density NHI \u22651018 cm\u22122. The open blue (green) star, circle and square in- Figure 5. Halo and intra-cluster cumulative HI covering fractions at 0.5 Rvir (upper panel) and 1 Rvir (lower panel) for three projected axes. Open star, circle and square in- dicate the X, Y and Z projections, respectively. The open black (red) star, circle and square indicate the average val- ues. dicate the covering fractions at 0.5 Rvir (1 Rvir) along the X, Y and Z axis projections, respectively. The open black (red) star, circle and square indicate the average values measured at 0.5 Rvir (1 Rvir). On average, the IC HI covering fraction is between 70-80% of the total HI covering fraction. This suggests that a large fraction, around \u223c75% (by covering fraction) of the low-column density HI gas that is distributed throughout these mas- sive halos is well outside the satellite galaxies. We also note that some pixels that had a high column density when including the satellite gas have lower column den- sities when only including IC gas. This can be seen as some red pixels in the top panels of Figure 2 turning into green pixels in the bottom panels. Depending on the HI column sensitivity, MFS will map the HI distribution in Fornax at different spatial resolutions, varying from 1 to 10 kpc. We created sim- ilar HI maps, as shown in Figure 2, with different pixel 10 Figure 6. Average halo and intra-cluster cumulative HI covering fractions of seven halos at 1 Rvir for three projected axes corresponding to different pixel scale size of HI maps. Numbers in the plot mark the pixel size in kpc used in cre- ating the HI maps. sizes ranging from 2 to 10 kpc. Examples of HI maps made with 10 kpc pixel scale are shown in Appendix A. In Figure 6, we show the average halo and IC cumulative HI covering fraction of the 7 halos at 1 Rvir radius for different pixel scales. As anticipated, when degrading the pixel size of HI maps, the HI covering fraction in- creases and, for NHI\u22651018 cm\u22122, reaches 12% at 10 kpc pixel scale for the full halo and is around 10% for the IC halo map. These are then our resolution-matched pre- dicted values for the NHI\u22651018 cm\u22122covering fraction that will be observed by MFS. Finally, in order to investigate whether we can see any signature of diffuse HI gas in the inter-galactic medium or cosmic filamentary structures around these mock Fornax-like halos, we made HI maps extending out to 3 Rvir, which we show in Appendix B. Visu- ally inspecting these maps, we do not find diffuse HI in the inter-galactic medium of halos. We notice that outside 1.5 Rvir, there are several infalling smaller satel- lite galaxies, particularly for Halos 1, 2 and 4, but they contribute a negligible amount in the HI covering frac- tion profile. In particular, we computed HI covering fractions for these maps and found similar results as for the maps shown in Figure 2. 3.3. HI covering fraction in velocity space We also measured the cumulative HI covering fraction in the line-of-sight velocity space, adopting a velocity bin size of \u223c100 kms\u22121 within a range of -700 to 700 kms\u22121. This allows us to connect the HI gas at different column densities in velocity space. A correlation of HI gas in velocity space with the satellite velocity distribution can give us hints about its possible stripping origin from satellite galaxies. Figure 7 shows the HI distribution in velocity space for Halo 6 (left panel) and in its IC region (right panel). Similar to Figure 1, we created these velocity space maps (Sec. 2.4) by performing two- dimensional binning in the phase space and taking the mean velocities in bins corresponding to the X, Y and Z axis, with a pixel scale of 2 kpc. In the velocity maps, we consider the halo velocity as the zero velocity point. Remaining HI velocity maps of other halos and their IC regions are shown in Figure 8. In these maps, we can see the distribution of larger satellite galaxies having different velocities then the surrounding lower density HI (first two rows in Figure 8). We used these maps to measure the HI covering fraction in velocity space for these halos which is shown in Figure 9. The top, middle, and bottom panels show cumulative HI covering fraction for NHI column densities of 1018, 1019 and 1020 cm\u22122 with a velocity bin size of 100 kms\u22121. The thin lines indicate the individual halos, and the thick continuous and dashed lines mark the average value of halo and IC region respectively. Vertical dashed gray line marks the halo velocity which is taken as the zero velocity point. We find that, for a velocity bin size of 100 kms\u22121, on average the cumulative HI covering fraction is less than 4 % (2 %) for the column density NHI \u22651018 (1019) cm\u22122 and significantly drops to less than 1% for NHI \u22651020 cm\u22122. The halo and IC HI velocity covering fraction looks bimodal for most of the halos (except halo 4 and 9), as also shown by the average HI covering fraction (continuous and dashed blue lines in Figure 10). On a halo-by-halo comparison, we have verified that the IC HI does not show a more Gaussian velocity distribution than the total HI, as would be expected by virialized, ballistic gas clouds. We can also use our HI velocity covering fraction anal- ysis to learn about the origin of the HI gas in the IC region. To do this, we looked for a possible correla- tion between the HI velocity covering fraction and the satellite velocity distribution. In Figure 10, we show the velocity distribution of all satellite galaxies (all identi- fied satellites with at least 100 cells and stellar mass \u2265108.5 M\u2299), as a function of the number of galaxies (orange leftmost y-axis of each panel) and their stellar mass (black left y-axis of each panel). The grey bars in Figure 10 indicate the summed stellar mass of galaxies binned within velocity ranges of 100 kms\u22121. The orange histogram represents the number counts of all satellite galaxies in each velocity bin. Additionally, we overlay the velocity covering fraction (right y-axis of each panel) 11 Figure 7. HI distribution in the TNG50 halo 6 at redshift z=0, colour-coded with the HI mean velocity (projected along the arbitrarily chosen Z axis) for a pixel size of 2 kpc. Left: Full halo HI distribution, right: HI distribution in the IC region. for log NHI \u226518 cm\u22122 in blue (total and IC HI in solid and dashed lines, respectively). For most of the ha- los, we find that the velocity covering fraction for both the halo and IC regions follows the velocity distribu- tion of satellite galaxies. This points towards a scenario where the IC HI gas originated from satellite galaxies. We performed a similar analysis considering only the HI-rich galaxies (satellite having HI gas cells outside 10 R1/2\u2217) and found similar correlation between the stellar mass distribution and HI velocity covering fraction, ex- cept that the stellar mass distribution is less centrally peaked (as we would expect for gas rich galaxies that have likely recently entered the cluster). We show both the number distribution and the stellar mass distribution of satellites because both could effect the total HI mass brought into the cluster. The stellar mass distribution of satellite galaxies may even be more likely to predict where HI will be found as, for example, a single 1010 M\u2299galaxy is likely to fall into the clus- ter with more HI than two 109 M\u2299galaxies. In Figure 10, we note that besides the agreement with the num- ber distribution of satellites, the HI velocity covering fraction also correlates with peaks in the velocity distri- bution of the stellar masses of all satellite galaxies (grey histogram). Indeed, in Halos 3 and 7, where the satel- lite stellar mass distribution shows multiple peaks, the HI velocity distribution seems to follow the stellar mass more closely than the number of satellites. This general agreement between the satellite and IC HI velocity dis- tributions suggests that IC HI originates from satellite galaxies, and the possibly stronger relation between the satellite stellar mass and HI velocity gives a hint that the IC HI may even originate from more massive galaxies (although we stress that this does not necessarily imply the gas is stripped from massive disks but may fall in as part of the galaxy\u2019s circumgalactic gas). Although a detailed investigation is outside the scope of this paper, we note that it is expected that some pro- cesses occurring in clusters affect the gas dynamics, and therefore some offset between the HI and satellite ve- locity distributions is unsurprising. For example, a past merger can cause long-lived motions in the ICM (Vaez- zadeh et al. 2022), or black hole activity could affect gas dynamics (Weinberger et al. 2017). 4. DISCUSSION In this section, we discuss our results and compare the HI covering fraction to available observations and other cosmological simulation studies. 4.1. Comparison to other cosmological simulations We begin by comparing our measured cumulative HI covering fraction with the available HI covering fraction 12 Figure 8. HI distribution in the TNG50 halos (top two rows) and in their IC (bottom two rows) at redshift z=0, colour-coded with the HI velocity (projected along the arbitrarily chosen Z axis) for a pixel size of 2 kpc. 13 Figure 9. HI cumulative covering fraction profiles of Fornax-like halos in the velocity space. The top, middle, and bottom panels show the covering fraction for HI column densities log NHI \u226518, 19 and 20 cm\u22122, respectively. The thin lines indicate the individual halos, and the thick lines mark the average value. The thick dashed lines mark the average IC HI cumulative covering fraction in velocity space. from the studies of Nelson et al. (2020). They measured the abundance of cold gas in TNG50 halos for massive halos with mass > 1011 M\u2299at intermediate redshift z \u223c 0.5. Although we have only 7 TNG50 halos, and apart from the evolution of the halos from redshift z\u223c0.5 to 0, the simulations and HI model we used are the same as Nelson et al. (2020); therefore, the HI covering fraction should be of the same order. Our measured HI covering fraction for NHI agrees well with the Nelson et al. (2020) measured values. For a column density of NHI> 1017 cm\u22122, we find a covering fraction around 70 \u00b1 15 % at 10 kpc, dropping to 30 \u00b1 15% at 100 kpc, and for NHI> 1020 cm\u22122, at 100 kpc, the covering fraction is roughly 10%, similar to the findings of Nelson et al. (2020). Rah- mati et al. (2015) used the EAGLE simulation to study the HI distribution around high-redshift massive galax- ies. They found a strong evolutionary trend in the HI covering fraction within the virial radius with redshift. For an averaged HI column density in between 1017.3 < NHI/cm\u22122< 1021, the HI covering fraction drops from 70 % at z = 4 to 10 % at z =1. The HI content of galaxies in the EAGLE cosmological simulations was also inves- tigated (Marasco et al. 2016; Crain et al. 2017), finding that the highest resolution simulations reproduced the HI masses of galaxies as well as their clustering. In ad- dition, in dense group and cluster environments, they found that ram pressure stripping was the primary HI mass removal process but that galaxy interactions also played a role. Studying the spatial distribution and ionisation state of cold gas in the CGM, Faerman & Werk (2023) per- formed semi-analytical modelling of cold gas in the CGM of low-redshift star-forming galaxies. Assuming that cold clouds in the CGM are in local pressure equilib- rium with the warm/hot phase, they reported that cold gas can be found out to 0.6 Rvir or beyond. Although we examine more massive halos (1013.5 \u2264M200 \u22641014 M\u2299) compared to Faerman & Werk (2023), we also find that the CGM of Fornax-like halos normally shows a spatially extended distribution of cold gas clouds out to more than 0.5 Rvir. Previously, van de Voort et al. (2019) have shown that standard mass refinement and a high spatial resolution of a few kpc scale can significantly change the inferred HI column density. Studying zoom-in simulations of a Milky-way mass galaxy within the virial radius, they found that the HI covering fraction of NHI\u22641019 cm\u22122 at 150 kpc is almost doubled from 18% to 30% when increasing the spatial resolution of the CGM. Although the simulation setup of van de Voort et al. (2019) and TNG50 is different, we compare our findings to theirs based on the similar resolutions (\u223c1 kpc). We find that the cumulative HI covering fraction within 150 kpc is around 25%, quite close to the van de Voort et al. (2019) result. 14 Figure 10. HI covering fraction profiles of Fornax-like halos in velocity space for log NHI \u226518 cm\u22122 gas over-plotted on the number counts of all satellite galaxies (orange histogram) and the stellar mass velocity distribution (grey histogram). The profiles and histograms are measured in 100 km/s bins. The leftmost y-axis (orange) of each subplot represents the number count of satellite galaxies, while the second axis (black) represents the log stellar mass of satellite galaxies. Solid and dashed blue lines indicate the velocity covering fraction of the halo and IC regions, respectively (right y-axis of each panel). In addition to the van de Voort et al. (2019) work, several papers have focused on studying CGM proper- ties using higher spatial and mass resolution simulations. For example, the FOGGIE group (Peeples et al. 2019) has used the cosmological code Enzo Bryan et al. (2014) to carry out a set of simulations with high resolution in the CGM, finding that a great deal of small-scale struc- ture emerged in the multiphase gas, producing many more small clouds. However, they found that the HI covering fractions did not change significantly. Simi- larly, Hummels et al. (2019) studied the simulated CGM with an enhanced halo resolution technique in the TEM- PEST simulation, again based on ENZO cosmological zoom simulations. They found that increasing the spa- tial resolution resulted in increasing cool gas content in the CGM. With an enhanced spatial resolution in the CGM, they found that observed HI content and column density increases in the CGM. In a similar vein Suresh et al. (2019) explored the CGM of a 1012 M\u2299mass halo with a super-Lagrangian zoom in method, reaching up to \u223c95 pc resolution in CGM. They reported that en- hanced resolution results in an increased amount of cold gas in the CGM and this increase in the cold gas also results in a small increase in the HI covering fraction. For a column density of NHI\u22651019 cm\u22122, they found a HI covering fraction value of around 10 % at one virial radius, which agrees well with our measured covering fraction of 12 % at one virial radius. More recently, Ramesh & Nelson (2024) re-simulated a sample of Milky Way galaxies at z \u223c0 from TNG50 simulation with the super Lagrangian refinement method. Going down to a scale of 75 pc, they also reported that the abundance of cold gas clouds increases with enhanced resolution but did not find a large change in the covering fraction. 4.2. Detection of HI clouds in the Intracluster (IC) region through observational work For all the TNG50 Fornax-like halos, we found that the HI clouds can be found beyond 0.5 Rvir, which cor- responds to an average physical scale of around 350 kpc. 15 Although only a few, there are observational studies showing the existence of remote HI clouds associated with galaxies. An important example is the detection of HI clouds in the inter-galactic medium of the galaxy group HCG44, where the HI clouds extend to more than \u223c300 kpc (Serra et al. 2013b). Another example is the case of NGC 4532, in the Virgo cluster, where the HI tail of the galaxy with some discrete clouds extends to 500 kpc and constitutes around 10% of the total HI mass (Koopmann 2007). A number of observational properties of the multi- phase nature of the CGM in groups and clusters have been explored through absorption lines of background quasars with the HST-Cosmic Origins Spectrograph (COS). Studying a sample of low redshift luminous red galaxies (LRG) with metal absorption lines such MgII, CIII, and SiIII, Werk et al. (2013) measured the occur- rence of cool metal enriched CGM and reported a MgII ion covering fraction (down to very low column densi- ties) of 0.5 within 160 kpc radius. Using a sample of 16 LRG at z \u223c0.4 observed with the HST/COS, (Chen et al. 2018; Zahedy et al. 2019) found a high HI covering fraction for column density NHI> 1017.2 cm\u22122of about 0.44 within 160 kpc impact parameter. In comparison to this, we measured an HI covering fraction of around 0.75 for column density NHI> 1017.2 cm\u22122at 160 kpc. Studying the CGM of a sample of 21 massive galaxies at z \u223c0.5, Berg et al. (2019) measured an HI covering fraction of column density for NHI> 1017.2 cm\u22122within the virial radius of 15 %, which closely agrees with our average measured HI covering fraction of 12 %. Finally, Emerick et al. (2015) compared HI covering fractions within the virial radius in Virgo-like clusters between simulations and observations, finding values consistent with those found here. Most recently, using the MeerKAT observations of the Fornax A subgroup, Kleiner et al. (2021) reported the detection of HI clouds at \u223c220 kpc from NGC 1316, the central galaxy of Fornax A. Another study done with the MeerkAT telescope have detected a large extended HI cloud, extending \u223c400 kpc in proximity to a large galaxy group at a redshift of z \u223c0.03 (J\u00b4 ozsa et al. 2022). Although observational detections of HI clouds in the IC regions around massive galaxies are few and rare, our and other simulation work like Rahmati et al. (2015); van de Voort et al. (2019); Nelson et al. (2020) strongly suggest the existence of dense small HI clumps within the ICM. For our work we find that within the IC region, HI tends to have a column density log NHI \u223c19 cm\u22122 or less, and current observations are mostly not that sensitive yet. A strong test of cosmological simulations will be to compare our predicted HI covering fractions Figure 11. HI intra-cluster (IC) mass as a function of num- ber of satellites for our seven Fornax-like halos. with MFS observations. If simulations overpredict the covering fraction of cold gas, some combination of i) ex- cess cold gas removal from satellites, ii) excess cold gas added to the ICM from feedback or filamentary accre- tion, and iii) suppressed heating of cold gas in the ICM are likely at play. On the other hand, if simulations un- derpredict the HI covering fraction, some combinations of these effects are leading to too little cold gas. 4.3. Possible Origin Scenario A detailed study of the origin or production of the large amount of HI gas in TNG50 Fornax-like halos is beyond the scope of this work. However, we speculate here on the possible source of the HI gas we do find. Nelson et al. (2020) studied the cold gas distribution in TNG50 massive halos at intermediate redshift z \u223c0.5. Using Lagrangian tracer analysis, they argued that cold gas in TNG50 halos is related to gas that is removed from the halos in infalling satellites. These gas clouds can later stimulate the cooling process leading to a sig- nificant amount of cold gas. Most recently, using the TNG50 simulation Ramesh & Nelson (2024) studied the cold gas clouds in the CGM of Milky-Way like galaxies. They reported that these high density gas clouds show clustering behaviour and this over-density increases around satellite galaxies, sug- gesting that a fraction of these clouds originate from ram-pressure stripping. This suggests that, not only for massive halos (like our Fornax-like halos), but even for smaller galaxies, the satellite gas stripping scenario can be important in producing HI clouds in the CGM. We however do not discount the possibility that a fraction of these clouds, whether around the satellite galaxies 16 or isolated, may originate from the in-situ condensa- tion of hot halo CGM gas in the cluster environment, or may be associated with outflows from the central galaxy (Fraternali & Binney 2006). Studying the for- mation mechanism of high velocity clouds around the Milky-Way like disk galaxies, Binney et al. (2009); Fra- ternali et al. (2015) have suggested that condensation of hot CGM gas can produce the HI clouds. It will be interesting to learn which is the effective mechanism for producing these HI clouds in the CGM. Possibly all the mechanisms: a) satellite stripping, b) thermal instabil- ity in CGM, or c) the feedback from the galaxy play roles in the formation of these clouds and could be ex- plored by characterising these cloud properties in phase space (spatial and velocity) such as was done in Ramesh & Nelson (2024). In addition to the analysis in Section 3.3, which in- dicates that the IC velocity covering fraction is associ- ated with the satellite galaxies\u2019 velocity distribution, we checked if we could see any correlation between the satel- lite galaxies\u2019 number with the total IC HI mass. Fig- ure 11 shows the halo intracluster HI mass as function of satellite galaxy number. In Figure 11, red stars denote all of the satellite galaxies and open squares mark the satellites having HI gas cells outside their 10 \u00d7 R1/2\u2217. With only 7 halos, its hard to quantify any relation, but we find that the halo IC HI mass increases with increas- ing number of satellite galaxies. The IC HI mass corre- lates more steeply with satellites having HI and shows less scatter. This further suggests that the HI mass in the IC regions of Fornax-like clusters could be associated with the stripped HI gas from satellite galaxies, similar to the findings of Nelson et al. (2020). We find a simi- lar level of correlation between the total stellar mass in satellites and the halo IC HI mass. 5. SUMMARY AND CONCLUSION In this paper, using the publicly available TNG50 sim- ulation data, we have studied the distribution of HI gas in halos similar to the Fornax galaxy cluster. Adopt- ing the MeerKAT Fornax survey (MFS) observational conditions, we have measured the HI covering fraction of the halos with a mass of 1013.5 <= M200 <= 1014 M\u2299. The following points summarise our findings and conclusions: 1. Atomic hydrogen in TNG50 Fornax-like halos shows a wide spatial distribution, appearing as clouds and filamentary structures (Figures 1 and 2). HI is non-uniformly distributed and extends in patches well beyond 0.5 virial radii of the central galaxy. On a physical scale, this corresponds to \u223c 350 kpc. 2. Using our HI covering fraction measurements, we find that individual Fornax-like halos in TNG50 show a wide scatter in the measured HI covering fraction ranging from 3% to 15% at 1 Rvir (Figure 5). We predict the upcoming MFS should observe a total HI covering fraction of \u223c25% at 0.5 virial radii and \u223c12% at 1 Rvir (Figure 6) at NHI\u22651018 cm\u22122(spatial resolution \u223c10 kpc). For intraclus- ter regions, this values drops to \u223c20% at 0.5 virial radii and \u223c9% at 1 Rvir. 3. Intracluster (IC) regions (i.e. more than 10 stellar half-mass radii from identified galaxies) in Fornax- like halos hold a substantial fraction of the HI. When using the NHI \u22651018 cm\u22122 contour, the IC HI covering fraction at 1 Rvir (spatial resolution \u223c10 kpc) corresponds to around 75% of the total HI covering fraction (Figure 5). 4. The HI velocity covering fraction for the Fornax- like halos (both in total and in the IC regions only) shows a broad velocity distribution that is not gen- erally Gaussian, indicating that HI is not virialized in the halos (Figure 9). The HI velocity covering fraction for both halo and IC largely follows the velocity distribution of satellite galaxies, suggest- ing that IC HI is associated with satellite galaxies (Figure 10). 5. We find that halo HI intracluster mass increases with increasing number of satellite galaxies and shows an even stronger correlation with the satel- lites having HI presence in their outskirts (Figure 11). This also suggests that HI in the IC regions is associated with the stripped gas of satellite galax- ies, similar to the results of Nelson et al. (2020). With this work, we have demonstrated, based on TNG50 simulation data, that HI cold gas is predicted to co-exist and survive in the hot intracluster medium for Fornax-like clusters. Based on HI maps of Fornax-likes halos in TNG50, we expect MFS to find extended HI well beyond the satellites in the halo, but should gener- ally follow the large-scale satellite distribution, both on the sky and in velocity space. This is also reflected in the asymmetry of the HI in velocity space - while the aver- age distribution of the seven halos studied is symmetric, any individual halo can show strong asymmetries. We plan to perform a future follow-up study to pinpoint the origin of these HI clouds, whether they are possibly stripped or formed in situ in the cluster environment. It will be illuminating to see what MFS will observe within the Fornax cluster. With its higher sensitivity, there re- mains a good chance that the MeerKAT telescope can 17 provide observational support and constraints for cur- rent and future simulation work on the multiphase na- ture of halo gas. 6. ACKNOWLEDGEMENTS A.C. would like to thank Abhijeet Anand and Roland Sazacks for helpful suggestions and discussions regard- ing this work. A.C. acknowledges the financial and computing support from Simons Foundation and thanks the Flatiron Institute pre-doctoral fellowship program through which this research work was carried out. GLB acknowledges support from the NSF (AST-2108470, AC- CESS), a NASA TCAN award, and the Simons Founda- tion through the Learning the Universe Collaboration. APPENDIX A. TNG50 HALOS MAPS AT 10 KPC PIXEL SIZE In our paper we use pixels that are 2 kpc on a side in order to match the MFS resolution at HI column densities of about 1019 cm\u22122. However, the resolution at HI column densities of 1018 cm\u22122 is about 10 kpc. In Figure 6 we show how the covering fraction at this column density increases with pixel size, and in this Appendix Figure 12, we show HI maps made with 10 kpc pixels so the reader can directly compare with the higher resolution maps shown in Figures 2 and 1. B. TNG50 HALOS MAPS AT OUT TO 3 RVIR For our study we considered only the gas cells gravitationally bound to a halo (Sec. 2), which generally includes cells out to an average of 1 to 1.5 Rvir. Here in this Appendix Figure 13, we show TNG50 Fornax-like halos maps out to 3 Rvir, demonstrating that there are several infalling satellite galaxies, particularly for Halos 1, 2 and 4. Their contribution to the HI covering profile are minimal. These maps also confirm that we do not detect any signature of diffuse HI (not closely associated with satellite galaxies) in the outer IGM of halos). REFERENCES Anand, A., Kauffmann, G., & Nelson, D. 2022, Monthly Notices of the Royal Astronomical Society, 513, 3210, doi: 10.1093/mnras/stac928 Anand, A., Nelson, D., & Kauffmann, G. 2021, Monthly Notices of the Royal Astronomical Society, 504, 65, doi: 10.1093/mnras/stab871 Berg, M. A., Howk, J. C., Lehner, N., et al. 2019, The Astrophysical Journal, 883, 5, doi: 10.3847/1538-4357/ab378e Binney, J., Nipoti, C., & Fraternali, F. 2009, MNRAS, 397, 1804, doi: 10.1111/j.1365-2966.2009.15113.x Bryan, G. L., Norman, M. L., O\u2019Shea, B. W., et al. 2014, ApJS, 211, 19, doi: 10.1088/0067-0049/211/2/19 Cantiello, M., Venhola, A., Grado, A., et al. 2020, A&A, 639, A136, doi: 10.1051/0004-6361/202038137 Chaturvedi, A., Hilker, M., Cantiello, M., et al. 2022, A&A, 657, A93, doi: 10.1051/0004-6361/202141334 Chen, H.-W., Zahedy, F. S., Johnson, S. D., et al. 2018, Monthly Notices of the Royal Astronomical Society, 479, 2547, doi: 10.1093/mnras/sty1541 Crain, R. A., Bah\u00b4 e, Y. M., Lagos, C. d. P., et al. 2017, MNRAS, 464, 4204, doi: 10.1093/mnras/stw2586 Davis, M., Efstathiou, G., Frenk, C. S., & White, S. D. M. 1985, ApJ, 292, 371, doi: 10.1086/163168 Dav\u00b4 e, R., Crain, R. A., Stevens, A. R. H., et al. 2020, Monthly Notices of the Royal Astronomical Society, 497, 146, doi: 10.1093/mnras/staa1894 Diemer, B., Stevens, A. R. H., Lagos, C. d. P., et al. 2019, MNRAS, 487, 1529, doi: 10.1093/mnras/stz1323 Drinkwater, M. J., Gregg, M. D., & Colless, M. 2001, ApJL, 548, L139, doi: 10.1086/319113 Emerick, A., Bryan, G., & Putman, M. E. 2015, MNRAS, 453, 4051, doi: 10.1093/mnras/stv1936 Faerman, Y., & Werk, J. K. 2023, ApJ, 956, 92, doi: 10.3847/1538-4357/acf217 Fielding, D. B., Tonnesen, S., DeFelippis, D., et al. 2020, ApJ, 903, 32, doi: 10.3847/1538-4357/abbc6d Fraternali, F., & Binney, J. J. 2006, MNRAS, 366, 449, doi: 10.1111/j.1365-2966.2005.09816.x Fraternali, F., Marasco, A., Armillotta, L., & Marinacci, F. 2015, MNRAS, 447, L70, doi: 10.1093/mnrasl/slu182 Gauthier, J.-R., & Chen, H.-W. 2011, MNRAS, 418, 2730, doi: 10.1111/j.1365-2966.2011.19668.x 18 Log NHI cm-2 < Figure 12. Example of HI maps made with 10 kpc pixel scale, showing HI distribution in the TNG50 halo 3, 6 and 7 at redshift z=0, colour-coded with the HI column density. These maps are shown projected along the (arbitrarily chosen) z-axis. The three pink circles indicate the viral radius of the halos marked at 0.5, 1.0 and 1.5 times Rvir. Dark blue color in the maps indicate the HI column density lower than NHI= 1016 cm\u22122. Left: Full halo HI distribution, right: HI distribution in the intracluster (IC) region. Gnedin, N. Y., & Kravtsov, A. V. 2011, ApJ, 728, 88, doi: 10.1088/0004-637X/728/2/88 Hummels, C. B., Smith, B. D., Hopkins, P. F., et al. 2019, ApJ, 882, 156, doi: 10.3847/1538-4357/ab378f J\u00b4 ozsa, G. I. G., Jarrett, T. H., Cluver, M. E., et al. 2022, ApJ, 926, 167, doi: 10.3847/1538-4357/ac402b Kleiner, D., Serra, P., Maccagni, F. M., et al. 2021, Astronomy & Astrophysics, 648, A32, doi: 10.1051/0004-6361/202039898 Koopmann, R. A. 2007, A 500 kpc HI Tail of the Virgo Pair NGC4532/DDO137 Detected by ALFALFA, doi: 10.1017/S1743921307014287 19 Log NHI cm-2 < Figure 13. HI maps made with 2 kpc pixel scale, showing HI distribution in for all the TNG50 Fornax-like halos at redshift z=0, colour-coded with the HI column density out to 3 Rvir. These maps are shown projected along the (arbitrarily chosen) z-axis. The six pink circles indicate the viral radius of the halos marked at 0.5, 1.0, 1.5, 2.0, 2.5, 3.0 times Rvir. Dark blue color in the maps indicate the HI column density lower than NHI= 1016 cm\u22122. 20 Lan, T.-W., & Mo, H. 2018, The Astrophysical Journal, 866, 36, doi: 10.3847/1538-4357/aadc08 Marasco, A., Crain, R. A., Schaye, J., et al. 2016, MNRAS, 461, 2630, doi: 10.1093/mnras/stw1498 Marinacci, F., Vogelsberger, M., Pakmor, R., et al. 2018, MNRAS, 480, 5113, doi: 10.1093/mnras/sty2206 Naiman, J. P., Pillepich, A., Springel, V., et al. 2018, MNRAS, 477, 1206, doi: 10.1093/mnras/sty618 Nelson, D., Pillepich, A., Springel, V., et al. 2018, MNRAS, 475, 624, doi: 10.1093/mnras/stx3040 Nelson, D., Springel, V., Pillepich, A., et al. 2019, Computational Astrophysics and Cosmology, 6, 2, doi: 10.1186/s40668-019-0028-x Nelson, D., Sharma, P., Pillepich, A., et al. 2020, Monthly Notices of the Royal Astronomical Society, 498, 2391, doi: 10.1093/mnras/staa2419 Peeples, M. S., Corlies, L., Tumlinson, J., et al. 2019, ApJ, 873, 129, doi: 10.3847/1538-4357/ab0654 Pillepich, A., Springel, V., Nelson, D., et al. 2018, MNRAS, 473, 4077, doi: 10.1093/mnras/stx2656 Pillepich, A., Nelson, D., Springel, V., et al. 2019, Monthly Notices of the Royal Astronomical Society, 490, 3196, doi: 10.1093/mnras/stz2338 Planck Collaboration, Ade, P. A. R., Aghanim, N., et al. 2016, Astronomy & Astrophysics, 594, A13, doi: 10.1051/0004-6361/201525830 Popping, G., Pillepich, A., Somerville, R. S., et al. 2019, The Astrophysical Journal, 882, 137, doi: 10.3847/1538-4357/ab30f2 Rahmati, A., Schaye, J., Bower, R. G., et al. 2015, Monthly Notices of the Royal Astronomical Society, 452, 2034, doi: 10.1093/mnras/stv1414 Ramesh, R., & Nelson, D. 2024, MNRAS, 528, 3320, doi: 10.1093/mnras/stae237 Rohr, E., Pillepich, A., Nelson, D., et al. 2023, MNRAS, doi: 10.1093/mnras/stad2101 Serra, P., Koribalski, B., Popping, A., et al. 2013a, ATNF Proposal, C2894. https://ui.adsabs.harvard.edu/abs/2013atnf.prop.5810S Serra, P., Oosterloo, T., Morganti, R., et al. 2012, Monthly Notices of the Royal Astronomical Society, 422, 1835, doi: 10.1111/j.1365-2966.2012.20219.x Serra, P., Koribalski, B., Duc, P.-A., et al. 2013b, Monthly Notices of the Royal Astronomical Society, 428, 370, doi: 10.1093/mnras/sts033 Serra, P., Maccagni, F. M., Kleiner, D., et al. 2019, Astronomy & Astrophysics, 628, A122, doi: 10.1051/0004-6361/201936114 Serra, P., Maccagni, F. M., Kleiner, D., et al. 2023, A&A, 673, A146, doi: 10.1051/0004-6361/202346071 Springel, V. 2010, Monthly Notices of the Royal Astronomical Society, 401, 791, doi: 10.1111/j.1365-2966.2009.15715.x Springel, V., White, S. D. M., Tormen, G., & Kauffmann, G. 2001, MNRAS, 328, 726, doi: 10.1046/j.1365-8711.2001.04912.x Springel, V., Pakmor, R., Pillepich, A., et al. 2018, MNRAS, 475, 676, doi: 10.1093/mnras/stx3304 Suresh, J., Nelson, D., Genel, S., Rubin, K. H. R., & Hernquist, L. 2019, MNRAS, 483, 4040, doi: 10.1093/mnras/sty3402 Truong, N., Pillepich, A., Nelson, D., et al. 2023, arXiv e-prints, arXiv:2311.06334, doi: 10.48550/arXiv.2311.06334 Truong, N., Pillepich, A., Werner, N., et al. 2020, MNRAS, 494, 549, doi: 10.1093/mnras/staa685 Tumlinson, J., Peeples, M. S., & Werk, J. K. 2017, Annual Review of Astronomy and Astrophysics, 55, 389, doi: 10.1146/annurev-astro-091916-055240 Vaezzadeh, I., Roediger, E., Cashmore, C., et al. 2022, MNRAS, 514, 518, doi: 10.1093/mnras/stac784 van de Voort, F., Springel, V., Mandelker, N., van den Bosch, F. C., & Pakmor, R. 2019, Monthly Notices of the Royal Astronomical Society: Letters, 482, L85, doi: 10.1093/mnrasl/sly190 Villaescusa-Navarro, F., Genel, S., Castorina, E., et al. 2018, ApJ, 866, 135, doi: 10.3847/1538-4357/aadba0 Voit, G. M., Meece, G., Li, Y., et al. 2017, ApJ, 845, 80, doi: 10.3847/1538-4357/aa7d04 Weinberger, R., Springel, V., Hernquist, L., et al. 2017, MNRAS, 465, 3291, doi: 10.1093/mnras/stw2944 Werk, J. K., Prochaska, J. X., Thom, C., et al. 2013, ApJS, 204, 17, doi: 10.1088/0067-0049/204/2/17 Young, L. M., Scott, N., Serra, P., et al. 2014, Monthly Notices of the Royal Astronomical Society, 444, 3408, doi: 10.1093/mnras/stt2474 Zahedy, F. S., Chen, H.-W., Johnson, S. D., et al. 2019, Monthly Notices of the Royal Astronomical Society, 484, 2257, doi: 10.1093/mnras/sty3482 Zhu, G., M\u00b4 enard, B., Bizyaev, D., et al. 2014, Monthly Notices of the Royal Astronomical Society, 439, 3139, doi: 10.1093/mnras/stu186 Zinger, E., Pillepich, A., Nelson, D., et al. 2020, Monthly Notices of the Royal Astronomical Society, 499, 768, doi: 10.1093/mnras/staa2607",
15
+ "main_content": "**[introduction]*"
16
+ }
intro_8K/test_introduction_short_2404.16966v1.json ADDED
@@ -0,0 +1,16 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2404.16966v1",
3
+ "title": "Examining the robustness of LLM evaluation to the distributional assumptions of benchmarks",
4
+ "abstract": "Benchmarks have emerged as the central approach for evaluating Large Language\nModels (LLMs). The research community often relies on a model's average\nperformance across the test prompts of a benchmark to evaluate the model's\nperformance. This is consistent with the assumption that the test prompts\nwithin a benchmark represent a random sample from a real-world distribution of\ninterest. We note that this is generally not the case; instead, we hold that\nthe distribution of interest varies according to the specific use case. We find\nthat (1) the correlation in model performance across test prompts is\nnon-random, (2) accounting for correlations across test prompts can change\nmodel rankings on major benchmarks, (3) explanatory factors for these\ncorrelations include semantic similarity and common LLM failure points.",
5
+ "authors": "Melissa Ailem, Katerina Marazopoulou, Charlotte Siska, James Bono",
6
+ "published": "2024-04-25",
7
+ "updated": "2024-04-25",
8
+ "primary_cat": "cs.CL",
9
+ "cats": [
10
+ "cs.CL"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "LLM Fairness",
14
+ "gt": "Since the introduction of the Transformer architec- ture (Vaswani et al., 2017), Large Language Mod- els (LLMs) have progressed into sophisticated sys- tems with an outstanding ability to comprehend and generate text that mimic human language. Notable models in this domain include ChatGPT1, utiliz- ing the GPT-3.5-TURBO or GPT-4 architectures2, LLaMA (Touvron et al., 2023), ChatGLM (Zeng et al., 2023), Alpaca (Taori et al., 2023), and Fal- con (Penedo et al., 2023). Due to their effectiveness, LLMs are becoming very popular in both academia and industry, making their evaluation crucial. However, this effectiveness comes at the cost of increased complexity, which makes their evaluation very challenging. Although prior research has introduced benchmarks for dif- ferent tasks along with evaluation measures, these \u2020These authors contributed equally to this work. 1New chat: https://chat.openai.com/ 2Models - OpenAI API: https://platform.openai. com/docs/models/ assessments often overlook potential biases. When a benchmark includes multiple prompts with simi- lar characteristics, it can increase or decrease the average performance of a model, so model compar- isons can become brittle with respect to benchmark composition. In this work, we show that the in- herent connections between the prompts in current benchmarks impact the models\u2019 performance and their relative rankings. The standard approach for evaluation on a bench- mark is to (i) obtain model responses for each prompt in the benchmark, (ii) compute the per- formance metrics for each response, (iii) aggregate (usually average) the performance metrics to obtain a single performance metric over the benchmark, and (iv) compare models by comparing their aggre- gate performance. When aggregating performance metrics in step iii above, each prompt is generally weighted equally (Yang and Menczer, 2023; Pe\u00f1a et al., 2023). However, using equal weights reflects the assumption that prompts in the benchmark are \u201cequal\u201d, in the sense that prompts are representa- tive samples of a target distribution of interest. In the case of LLMs, the notion of a target distribu- tion (i.e., the distribution of all possible prompts for a given use case) is usually not well-defined. For example, different Natural Language Inference (NLI) applications may have very different target distributions, and we should not expect a single benchmark to capture every one. Therefore, one must ask: What distribution do the prompts in the benchmark represent? Would considering different distributions fundamentally change model compar- isons? In this work, we present a novel approach to assess the robustness and adequacy of benchmarks used in evaluating LLMs, by analyzing the perfor- mance of multiple LLMs on a set of four major benchmarks. Our key contributions are outlined below: 1. For each considered benchmark, we observe arXiv:2404.16966v1 [cs.CL] 25 Apr 2024 that the correlation of model performance across prompts is significant (p-value < 0.05). This demonstrates the existence of relationships be- tween prompts within the investigated benchmarks. 2. We explore the robustness of model compar- isons to different distributional assumptions based on correlation structure, and we observe shifts in performance as large as 10% and rank changes as large as 5 (out of 14 models). 3. We provide a characterization of performance over the distribution of all possible prompt weights. This constitutes a robustness check that can be in- corporated in comparative studies. 4. We show that model performance similarity across prompts can be explained by semantic sim- ilarity, but it is most likely derived by common failure points of the LLM.",
15
+ "main_content": "Evaluating the performance of LLMs has become a critical area of research, drawing significant attention in recent years. Comprehensive surveys of LLM evaluation can be found in Chang et al. (2023); Guo et al. (2023), and Liang et al. (2022). When assessing the quality of LLMs, the robustness aspect is becoming of increasing importance (Wang et al., 2022; Goel et al., 2021). Robustness investigates the stability of a model when confronted with unforeseen prompts. Robustness research can be divided into three main lines of work (Li et al., 2023): (i) robustness under distribution shift (Wang et al., 2021; Yang et al., 2023), (ii) robustness to adversarial input (Zhu et al., 2023; Wang et al., 2023a), and (iii) robustness to dataset bias (Gururangan et al., 2018; Le Bras et al., 2020; Niven and Kao, 2019). Our work falls into the latter category. Reducing bias on benchmarks is a long-standing area of research spanning many diverse fields. Applications range from weighing survey responses to match a target population (DeBell, 2018), to accounting for language biases in visual questionanswering (Goyal et al., 2017). In the context of NLI, researchers have looked into improving the quality of prompts in order to mitigate certain types of biases. Work in this area has focused on determining the quality of prompts by generating optimal prompts (Pryzant et al., 2023; Deng et al., 2022) or by clustering prompts based on semantic similarity (Kuhn et al., 2023). Additionally, researchers have investigated data leakage between benchmarks and LLM training data (Zhou et al., 2023; Oren et al., 2023). Limited research has been conducted to study inherent biases in LLM benchmarks. Among existing works, Gururangan et al. (2018) and Niven and Kao (2019) have shown that models leverage spurious statistical relationships in the benchmark datasets and, thus, their performance on the benchmarks is overestimated. In the same spirit, Le Bras et al. (2020) propose to investigate AFLITE (Sakaguchi et al., 2023), an iterative approach to filter datasets by removing biased data points to mitigate overestimation of language models\u2019 performance. More recently, Alzahrani et al. (2024) show that performance of LLMs is highly sensitive to minor changes in benchmarks with multiple-choice questions. Our work is orthogonal yet complementary to previous work. In particular, we propose a new method to identify biases in a benchmark by looking at the performance of multiple recent LLMs on that benchmark. We show that similarity in performance correlates with similarity in prompts. To the best of our knowledge, our work is the first approaching benchmark biases by analyzing and leveraging the performance of a collection of models on a set of major benchmarks; as well as investigating the impact of inherent distributional biases in benchmarks used on LLM comparative studies. 3 Proposed method In this section, we outline the problem setup and introduce the notation and expressions that will be employed throughout the paper. Second, we present the approach to evaluate whether relationships between prompts (based on models\u2019 performance) are statistically non-random. Furthermore, we describe our method for analyzing how sensitive model comparisons are with respect to different distributional assumptions of the benchmark. Finally, we present our proposed methodology for exploring the origins of relationships between prompt performance vectors. 3.1 Problem setup Consider a benchmark containing n prompts {p1, . . . , pn}, and a set of k LLMs {m1, . . . , mk} being evaluated. We define the performance matrix Q as an n \u00d7 k matrix, where every cell Q[i, j] represents the performance of model mj on prompt pi. We refer to the i-th row of that matrix, qi, as a performance vector for prompt pi. To measure how similar two prompts are with respect to model performance, we compute the similarity between their performance vectors sperf (pi, pj) := s(qi, qj), where s(\u00b7, \u00b7) is a similarity function. Here, we consider cosine, Jaccard, and Hamming similarity. Given a performance matrix Q and a similarity function s, we compute a n \u00d7 n similarity matrix Ts(Q), where every cell T[i, j] is the performance similarity for prompts pi, pj: T[i, j] = sperf (pi, pj). Semantic meaning from text is commonly understood through the use of embeddings. An embedding of a prompt is a numerical vector that contains the learned representations of semantic meaning. Measuring semantic similarity between two prompts is achieved by measuring the distance between their embeddings. In this paper, we use ada-2 embeddings from OpenAI3. The ada-2 embeddings are widely used and have been proven effective in various NLP tasks. These embeddings have shown strong performance in assessing semantic similarity between texts (Aperdannier et al., 2024; Kamalloo et al., 2023; Freestone and Santu, 2024). For a set of prompts {p1, . . . , pn}, we compute a matrix of embeddings E = {e1, . . . , en}. E is a n \u00d7 s matrix, where s is the size of the embedding vectors. To measure semantic similarity between pairs of prompts, we compute similarity metrics between the corresponding rows: ssem(pi, pj) = s(ei, ej). 3.2 Determining if performance vectors are correlated Given a benchmark, we assess whether the observed similarity among performance vectors is significant. If the observed similarity is significantly high, this implies the existence of specific connections between prompts. These connections lead to similar model behavior when responding to these prompts. To test this hypothesis, we perform permutation tests. We generate permutations of the performance matrix Q by randomly shuffling the cells of each column. In this way, we permute the values of the model responses across prompts, while holding constant the overall performance of each model (i.e., the column averages of Q). We then compute a similarity matrix Ts(Q) for the observed performance matrix Q, as well as for each permutation Q\u2032 of the performance matrix: [Ts(Q\u2032 1), Ts(Q\u2032 2), . . .]. 3https://openai.com/blog/new-and-improved-embedding-model We compare the distribution of values from Ts(Q) with the distribution of values from the permuted tables [Ts(Q\u2032 1), Ts(Q\u2032 2), . . .]. We conduct a permutation test to compare the average, 75th, and 95th percentiles of these distributions. The p-value of the permutation test is calculated as the proportion of permuted tables for which the statistic is greater than the one obtained with the observed table. Additionally, we use the Kolmogorov-Smirnov (KS) test to compare the entire distribution of values between observed and permuted similarity matrices. To further support our findings, we cluster the observed and permuted performance vectors. If there are non-random correlations between performance vectors, we would expect the clustering of the observed vectors to have higher clustering quality metrics, such as silhouette score. 3.3 Effect of non-uniform weights in aggregate performance metrics So far, we have focused on aggregate performance measures that treat prompts as if they are independent and identically distributed (i.i.d.) samples from some real-world distribution of interest\u2014i.e., each prompt is given equal weight in calculating aggregate performance metrics. In this section, we examine the implications of relaxing this assumption for ranking models based on their performance. Generally, there is no universally correct distribution of interest\u2014it depends on each user\u2019s application. Here, we look into three different ways of capturing distributional assumptions (i.e., of defining weights) for a given benchmark. Cluster-based: We leverage the clustering of performance vectors described above. We consider the following variants for evaluating performance: 1. Only include prompts that are cluster representatives (i.e., the medoids of the clusters). This effectively decreases the size of the benchmark. 2. Include all prompts, but weigh them based on their distance from their cluster representative. We employ two types of weights: (i) Distance-based: The further away a prompt is from the cluster representative, the larger its weight. This setting gives more emphasis on diversity of the benchmark. More formally, let pi be a prompt in cluster Cj, pr j be the representative prompt of cluster Cj, and d(\u00b7, \u00b7) the distance function between two prompts. The weight w for pi is: w(pi) = d(pi, pr j) P pk\u2208Cj \u0010 d(pk, pr j) \u0011 |Cj| P i |Ci| The first factor is the within-cluster weight of the prompt (normalized within cluster). The second factor weighs all prompts of a given cluster proportionally to the cluster\u2019s size. (ii) Inverse-distance weights: The closer a prompt is to the cluster representative, the larger its weight. This setting effectively smooths out the hard clustering we produced: all data points contribute to the performance, not just the cluster representatives. The weight w for pi is computed as: w(pi) = d\u22121(pi, pr j) P pk\u2208Cj \u0010 d\u22121(pk, pr j) \u0011 |Cj| P i |Ci| Increasing benchmark size We start with a random prompt and iteratively add new prompts into the benchmark. To select the next prompt to add, we use two methods: (i) most informative: select the prompt with the largest cosine distance (lowest cosine similarity) from the previously selected ones in order to obtain an informative test set with a reduced semantic similarity between prompts, (ii) random: select a random prompt. Random distributions of weights We weigh each prompt and compute weighted performance, with weights drawn uniformly at random. To achieve that, we sample uniformly at random from the unit simplex using the sampling technique described in Smith and Tromble (2004). This approach aims to provide a characterization over all possible weight configurations. 3.4 Comparing performance vectors with semantic embeddings of prompts Having established that model performance is similar across prompts, we next investigate where this similarity stems from. Our hypothesis is that for a pair of prompts, similar model performance can occur if the prompts are semantically similar. We use linear regression to determine if there exists a significant relationship between semantic similarity and model performance similarity: sperf (pi, pj) = ssem(pi, pj)\u03b2 + \u03f5 where \u03b2 is the coefficient of how much semantic similarity contributes to the model and \u03f5 is error. Using all prompt pairs raises concerns about the data being i.i.d., given that each observation is a pairwise comparison and each member of a pair appears in many observations. To avoid that, we estimate one model for each prompt, including all the pairwise observations of which that prompt is a part. We collect p-values for the coefficients across all models and perform multiple hypotheses adjustment to generate False Discovery Rate (FDR) values. We repeat the same approach for 1000 permutations as described in Section 3.2 for both pairwise performance and semantic similarity vectors. Finally, we compare the distribution of coefficients and FDRs between original data and permutations using the KS test. 4 Experimental setup In this section, we describe the setting of our experiments. Specifically, we provide details on the benchmarks and evaluation metrics we use, the LLMs we consider, and how we evaluate performance of the LLMs on the benchmarks. 4.1 Benchmarks We investigate four major benchmarks that are designed for different tasks. ANLI The Adversarial Natural Language Inference (ANLI) dataset4 is a large-scale dataset for natural language inference (NLI) (Nie et al., 2020). It is collected via an iterative, adversarial humanand-model-in-the-loop procedure, making it more difficult than its predecessors. The dataset used here comprises approximately 100K samples for the training set, 1,200 for the development set, and 1,200 for the test set. Each sample contains a context, a hypothesis, and a label. The goal is to determine the logical relationship between the context and the hypothesis. The label is the assigned category indicating that relationship. In the context of NLI, the labels typically include \u201centailment\u201d, \u201ccontradiction\u201d, or \u201cneutral\u201d. Finally, ANLI makes available a reason (provided by the human-in-theloop), explaining why a sample was misclassified. HellaSwag This is a commonsense natural language inference dataset (Zellers et al., 2019), tasking machines with identifying the most probable followup for an event description. Comprising 70,000 instances, each scenario presents four potential outcomes, with only one being accurate. En4https://huggingface.co/datasets/anli gineered to be challenging for cutting-edge models, the dataset employs Adversarial Filtering to incorporate machine-generated incorrect responses, frequently misclassified by pretrained models. Covering diverse domains, HellaSwag demands a fusion of world knowledge and logical reasoning for successful interpretation. CommonsenseQA This is a multiple-choice question-answering dataset that requires different types of commonsense knowledge to predict the correct answers (Talmor et al., 2019). It contains 12,102 questions with one correct answer and four distractor answers. The questions are crowdsourced and cover a wide range of topics such as open-domain-qa, real-life situations, elementary science, social skills. CNN/Daily Mail The CNN/Daily Mail dataset is a widely used benchmark for text summarization (Nallapati et al., 2016). The dataset comprises news stories from CNN and Daily Mail websites. In total, the corpus contains 286,817 training, 13,368 validation, and 11,487 test pairs. 4.2 Evaluation measures For ANLI, HellaSwag, and CommonsenseQA, the performance matrix contains binary values (correct / incorrect answer). Hence, we use average accuracy to evaluate the performance of each model, as commonly done with these benchmarks (Nie et al., 2020; Wei et al., 2022; Zellers et al., 2019; Talmor et al., 2019). For CNN/Daily Mail, following previous work (See et al., 2017), we measure model performance using the ROUGE score. 4.3 Considered LLMs In order to have a diverse collection of LLMs, we include models from several developers, such as OpenAI and Meta. These include GPT LLMs (Brown et al., 2020; OpenAI, 2023), Llama LLMs (Touvron et al., 2023), and other popular LLMs, such as Falcon-180b (Almazrouei et al., 2023), Koala 13B (Geng et al., 2023), Alpaca 7B (Wang et al., 2023b). Table 1 shows the various models used for each benchmark5. 5Due to constraints in LLMs\u2019 availability, we use different LLMs for each benchmark. This does not impact our work, as each benchmark analysis is standalone and independent of the remaining benchmarks. 4.4 Performance evaluation For ANLI, we evaluate each model on the test dataset, which contains 1200 prompts. For each sample, we use 7 few-shot samples extracted from the ANLI dev set. For the remaining benchmarks, we randomly sample 10% of each benchmark for test and use the rest for few-shot selection. This results in 1005, 1221, and 1150 test samples for HellaSwag, CommonsenseQA, and CNN/Daily Mail respectively. For HellaSwag, we use 10 fewshot examples, while for CommonsenseQA and CNN/Daily Mail we use 5 few-shots. TypeModel ANLIHSCSQACNN/DM GPT ChatGPT-Turbo-Base-0516 \u2713 \u2713 ChatGPT-Turbo-0301 \u2713 \u2713 ChatGPT-Turbo-0613 \u2713 ChatGPT-202301 \u2713 DaVinci (GPT-3) \u2713 Text-Davinci-002 \u2713 Text-Davinci-003 \u2713 GPT-4-0314 \u2713 GPT-4-0314 (Chat) \u2713 \u2713 \u2713 GPT-4-0613 (Chat) \u2713 GPT-4-Turbo-1106 (Chat) \u2713 \u2713 \u2713 GPT-4-Turbo-1106 \u2713 Text-Alpha-002-Current \u2713 \u2713 DV3-FP8 \u2713 Babbage-0721 \u2713 ChatGPT-202301 \u2713 LLAMA Llama-13B \u2713 Llama-2-13B \u2713 \u2713 Llama-30B \u2713 \u2713 Llama-65B \u2713 Llama-2-70B \u2713 \u2713 \u2713 Other Persimmon 8B1 \u2713 \u2713 \u2713 Vicuna 13B2 \u2713 \u2713 Claude-23 \u2713 \u2713 \u2713 Falcon-180b \u2713 \u2713 Koala 13B \u2713 \u2713 Mistral7b4 \u2713 \u2713 Alpaca 7B \u2713 Total 12 13 14 8 1 https://www.adept.ai/blog/persimmon-8b 2 https://lmsys.org/blog/2023-03-30-vicuna/ 3 https://www.anthropic.com/index/claude-2 4 https://mistral.ai/news/announcing-mistral-7b/ Table 1: Summary of LLMs used for ANLI, HellaSwag (HS), CommonsenseQA (CSQA), and CNN/Daily Mail (CNN/DM). Check marks denote which LLMs were used for the specific benchmark. 5 Results In this section, we present the results of the experiments described in Section 3 on the benchmarks. 5.1 Performance vectors are correlated To determine if prompt performance vectors are correlated, we perform the permutation tests described in Section 3.2, using different correlation Hamming Cosine Jaccard ANLI Average 0.60 0.59 0.0009 75th percentile 0.66 0.0009 0.67 95th percentile 0.0009 0.0009 0.0009 KS test 2e-5 2e-5 2e-5 HS Average 0.52 0.57 0.0009 75th percentile 0.0009 0.0009 0.0009 95th percentile 0.88 0.85 0.87 KS test 2e-5 2e-5 2e-5 CSQA Average 0.53 0.52 0.0009 75th percentile 0.0009 0.0009 0.0029 95th percentile 0.0009 0.0009 0.0009 KS test 2e-5 2e-5 2e-5 Table 2: p-values obtained with permutation tests and the KS test using different correlation measures and aggregation functions for ANLI, HellaSwag (HS), and CommonsenseQA (CSQA). measures. The obtained p-values for ANLI, HellaSwag, and CommonsenseQA are depicted in Table 2. On ANLI and CommonsenseQA, the permutation tests show strong evidence that the correlations between the prompt performance vectors are significant. For HellaSwag, our findings reveal consistently low p-values across all correlation measures when using the 75th percentile, as well as a low p-value when averaging Jaccard similarities. For the three benchmarks above, the KS test is significant across all correlation measures. For CNN/Daily Mail, the performance matrix contains ROUGE scores, which are continuous values. Thus, we use cosine similarity to compare the average correlations obtained from the original and permuted performance matrices. The results show that the correlations among original performance vectors are significantly greater. To further support this finding, we cluster the model responses using spherical k-means (Dhillon and Modha, 2001). We choose the optimal number of clusters to maximize the average silhouette score, computed using cosine distance. Table 3 contains the average silhouette scores of clustering the performance vectors and a random permutation of them. For all benchmarks, the performance vectors produce higher silhouette scores compared to the permuted performance vectors. This provides additional evidence to support the outcome of the hypothesis tests presented above: the performance vectors are similar. 5.2 Impact of prompt weights on performance and relative ranking of models In this section, we present the results of different weighting schemes for the prompts of a benchmark, Benchmark observed permuted ANLI 0.52 0.21 HellaSwag 0.54 0.24 CommonsenseQA 0.61 0.29 CNN/Daily Mail 0.25 0.21 Table 3: Average silhouette score of clustering observed performance vectors and a random permutation of performance vectors for the various benchmarks. as described in Section 3.3. 5.2.1 Cluster-based evaluation First, we cluster the performance vectors of each benchmark as described earlier. Then, we compute the average accuracy of models for each benchmark, using only the cluster representatives of that benchmark. We also compute weighted performance using distance-based and inverse-distancebased weights. Figure 1 illustrates how these weighting schemes affect the relative ranking of models for each benchmark. The rows correspond to different weighting schemes, while the columns correspond to the different models and are ordered by increasing original performance (i.e., decreasing rank). Every cell contains the ranking change (compared to the original benchmark) of the model of that column for the method of that row. If there were no ranking changes, all values would be 0. However, we observe that there are multiple ranking changes as great as 5 (model is ranked 5 positions above the original benchmark). 5.2.2 Increasing size of benchmark Next, we study how performance is affected by the size and diversity of the benchmark. We start with a random prompt and iteratively add new prompts to the benchmark, either by adding the most informative prompt (i.e., the one with the maximum average distance from the current benchmark), or a random one. Figure 2 shows the average performance for each model as the benchmark size increases (maximum benchmark size corresponds to the original benchmark). Looking at the most informative method for ANLI (Figure 2a), the first 400 prompts result in random performance (0.5) for all models. This suggests that the initial prompts chosen with this method are the most \u201cdifficult\u201d, in that the models are exhibiting performance close to random (accuracy 50%). Similar results are observed for HellaSwag and CommonsenseQA (see Appendix C, Figure 9), but not for CNN/Daily Mail (Figure 2b), where the performance on the reduced benchmark follows a similar pattern as the ChatGPT-T urbo-0301 ChatGPT-T urbo-Base-0516 Claude-2 Falcon-180b GPT-4-0314 (Chat) GPT-4-T urbo-1106 (Chat) Koala 13B Llama-2 13B Llama-2 70B Mistral 7B Persimmon 8B Vicuna 13B distance weights inverse distance weights medoids HDBSCAN medoids spherical k-means 0 0 0 1 0 0 -2 1 0 0 1 -3 -1 0 0 1 1 1 0 0 -1 -1 -1 0 -1 0 -3 0 1 0 0 0 1 0 2 -1 1 0 1 -1 -1 0 0 -1 0 0 1 0 (a) ANLI Alpaca 7B HF ChatGPT-T urbo-0301 ChatGPT-T urbo-Base-0516 Claude-2.0 GPT-4-0314 (Chat) GPT-4-T urbo-1106 (Chat) Koala 13B HF Llama-2-13B Llama-2-70B Llama-30B Persimmon-8b T ext-Alpha-002-Current Vicuna 13B HF distance weights inverse distance weights medoids HDBSCAN medoids spherical k-means -1 0 -1 -1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -1 0 -1 -3 0 0 0 2 0 0 0 0 0 0 0 0 1 0 1 1 0 0 0 0 1 0 0 0 (b) HellaSwag ChatGPT-T urbo-0613 Claude-2.0 DaVinci (GPT-3) Falcon-180b GPT-4-0314 GPT-4-0613 (Chat) GPT-4-T urbo-1106 GPT-4-T urbo-1106 (Chat) Llama-13B Llama-2-70B Llama-30B Llama-65B Mistral-7b Persimmon-8b distance weights inverse distance weights medoids HDBSCAN medoids spherical k-means -1 -1 -2 0 2 1 -2 2 0 0 0 0 -4 0 5 0 2 0 1 0 1 1 1 1 1 -1 -2 0 4 1 -1 0 0 0 0 0 -2 1 0 1 0 1 -1 0 0 0 -1 1 -2 -2 0 -5 0 0 0 0 (c) CommonsenseQA Babbage-0721 (GPT-3) ChatGPT-202301 ChatGPT-T urbo-0301 DV3-FP8 GPT-4-0314 (Chat) T ext-Alpha-002-Current T ext-DaVinci-002 T ext-DaVinci-003 distance weights inverse distance weights medoids HDBSCAN medoids spherical k-means 0 0 0 0 0 0 0 -1 0 0 -1 1 0 0 -1 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 0 (d) CNN/Daily Mail Figure 1: Visualization of ranking changes (compared to original benchmark) for various benchmark modifications. Rows show different weighting methods, columns show the models. Each cell contains the ranking change (original ranking minus new ranking) of the column-model for the row-method. We observe rank changes as great as 5. Persimmon 8B Llama-2 13B Koala 13B Mistral 7B Vicuna 13B Claude-2 ChatGPT-T urbo-Base-0516 Falcon-180b Llama-2 70B ChatGPT-T urbo-0301 GPT-4-T urbo-1106 (Chat) GPT-4-0314 (Chat) 0.4 0.5 0.6 0.7 Accuracy T est set size 100 200 300 400 500 600 700 800 900 1000 1100 1200 (a) ANLI Babbage-0721 (GPT-3) ChatGPT-202301 ChatGPT-T urbo-0301 T ext-DaVinci-002 T ext-DaVinci-003 GPT-4-0314 (Chat) DV3-FP8 T ext-Alpha-002-Current 0.25 0.30 0.35 0.40 Accuracy T est set size 100 200 300 400 500 600 700 800 900 1000 1100 (b) CNN/Daily Mail Figure 2: Average performance as benchmark size increases. Prompts are added to maximize average cosine distance. Maximum benchmark size corresponds to performance on the original benchmark. performance on the original benchmark. The random method tracks the original performance for all benchmarks (see Appendix C, Figure 10). 5.2.3 Random distributions of weights We explore the distribution of all weighting schemes and the effect they have on the weighted accuracy and relative ranking of the models. As described in Section 3.3, we sample 100,000 random weight configurations. For each model, we compute the weighted performance based on these weights. For ANLI, HellaSwag, and CommonsenseQA the performance of a model can change up to 10%. For CNN/Daily Mail, the range is smaller, up to 3%. Detailed results are included in Appendix D. We note that the range is similar for all models within a benchmark, indicating that it is a property related to the benchmark and not the specific models. To further demonstrate changes in relative ranking of models, we take a closer look at the pairwise ranking differences. Figure 3 depicts a pairwise comparison of weighted performance for each benchmark. Every cell shows how often the model in the row outperforms the model of the column. For ANLI, approximately for half of the weight configurations the ranking of the top two models is reversed! However, for the CNN/Daily Mail data, there are effectively no reversals (less than 0.01%). 5.3 Relationship between model performance and semantic similarity of prompts Having established that model performance is correlated across prompts, we investigate what can explain these correlations. Our hypothesis is that it is driven by semantic similarity. We use the method described in Section 3.4 to assess if there is a significant relationship between semantic similarity and model performance similarity. Our findings show that only CNN/Daily Mail presents a significant relationship between prompt semantic similarity and prompt performance similarity (see Figure 4d). This benchmark is a text summarization task, where the success of the ROUGE Persimmon 8B Llama-2 13B Koala 13B Mistral 7B Vicuna 13B Claude-2 ChatGPT-T urbo-Base-0516 Falcon-180b Llama-2 70B ChatGPT-T urbo-0301 GPT-4-T urbo-1106 (Chat) GPT-4-0314 (Chat) Persimmon 8B Llama-2 13B Koala 13B Mistral 7B Vicuna 13B Claude-2 ChatGPT-T urbo-Base-0516 Falcon-180b Llama-2 70B ChatGPT-T urbo-0301 GPT-4-T urbo-1106 (Chat) GPT-4-0314 (Chat) 0.0% 21.9% 0.2% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 78.1% 0.0% 0.5% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 99.8% 99.5% 0.0% 21.2% 2.3% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 100.0% 100.0% 78.8% 0.0% 8.7% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 100.0% 100.0% 97.7% 91.3% 0.0% 10.7% 0.8% 0.0% 0.0% 0.0% 0.0% 0.0% 100.0% 100.0% 100.0% 100.0% 89.3% 0.0% 10.4% 0.5% 0.0% 0.0% 0.0% 0.0% 100.0% 100.0% 100.0% 100.0% 99.2% 89.6% 0.0% 12.9% 0.4% 0.0% 0.0% 0.0% 100.0% 100.0% 100.0% 100.0% 100.0% 99.5% 87.1% 0.0% 1.5% 0.0% 0.0% 0.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 99.6% 98.5% 0.0% 3.4% 0.0% 0.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 96.6% 0.0% 0.0% 0.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 0.0% 44.9% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 55.1% 0.0% Proportion where row > column 0 25 50 75 100 (a) ANLI Babbage-0721 (GPT-3) ChatGPT-202301 ChatGPT-T urbo-0301 T ext-DaVinci-002 T ext-DaVinci-003 GPT-4-0314 (Chat) DV3-FP8 T ext-Alpha-002-Current Babbage-0721 (GPT-3) ChatGPT-202301 ChatGPT-T urbo-0301 T ext-DaVinci-002 T ext-DaVinci-003 GPT-4-0314 (Chat) DV3-FP8 T ext-Alpha-002-Current 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 100.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 100.0% 100.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 100.0% 100.0% 100.0% 0.0% 0.0% 0.0% 0.0% 0.0% 100.0% 100.0% 100.0% 100.0% 0.0% 0.0% 0.0% 0.0% 100.0% 100.0% 100.0% 100.0% 100.0% 0.0% 0.0% 0.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 0.0% 0.3% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 99.7% 0.0% Proportion where row > column 0 25 50 75 100 (b) CNN/Daily Mail Figure 3: Pairwise comparison of weighted performance. Each cell is the percentage of times the model of the row outperforms the model of the column. metric highly depends on the ability to extract relevant entities from text. For example, we find that prompts referring to the economy or global warming have high correlation in model performance (see Appendix B, Table 5). ANLI also makes available a reason component: what human agents state as the explanation for why the LLM gave a wrong answer. We find a significant relationship between semantic similarity using the reason component and prompt performance similarity (as seen in Figure 4a). The input prompt\u2014consisting of the context, hypothesis and label components\u2014shows no relationship, which is most likely because the creators of ANLI put great effort into ensuring diversity in the benchmark (Nie et al., 2020). This is also evident in Figure 2. The significance of the reason component indicates that the model performance vectors correlate because of how the model generates a response. We observe prompts where the reasons for similar model performance indicate that the model cannot do math, e.g., \u201cThe system may have missed this as it did not add up the losses from both sets\u201d and \u201cthe model might not know math\u201d (see Appendix B, Table 4). Hellaswag and CommonsenseQA use a multipleCoefficient FDR -2.5 0.0 2.5 5.0 0.00 0.25 0.50 0.75 1.00 0 5 10 15 20 0.0 0.1 0.2 0.3 0.4 density (a) ANLI (reason) Coefficient FDR -20 -10 0 10 20 0.00 0.25 0.50 0.75 1.00 0 10 20 0.0 0.1 0.2 0.3 density (b) HellaSwag Coefficient FDR -10 0 10 0.00 0.25 0.50 0.75 1.00 0 10 20 30 40 0.0 0.1 0.2 density (c) CommonsenseQA Coefficient FDR -2 -1 0 1 2 3 0.00 0.25 0.50 0.75 1.00 0 10 20 30 0.0 0.2 0.4 0.6 density (d) CNN/Daily Mail Figure 4: Distribution of semantic similarity coefficients and FDRs for all benchmarks. Red is original data, blue is permutations. KS tests for all distributions shown have p-values < 2e-5. choice format. The lack of strong evidence supporting the correlation in these benchmarks (see Figures 4b and 4c) is likely due to the embeddings picking up similarities between the different choices, rather than the logic the LLMs employ to arrive at their conclusion. This is consistent with our findings for ANLI, where a significant relationship does not stem from inputs to the model, but from the LLMs\u2019 failure points. Our findings indicate there is a larger question about why the model performance vectors are correlated, and investigating this is central to understanding model performance. Semantic similarity can be a factor, but it depends on the task the benchmark is designed for. Based on our results for ANLI, it appears that the reasoning required for the task (i.e., reasoning types that cause models to fail), can be even more important than semantic similarity. 6 Conclusion and future work LLMs are commonly evaluated on benchmarks that may include multiple prompts testing similar skills. In this work, we demonstrate this bias on major benchmarks, by showing that model performance across different prompts is significantly correlated. Furthermore, we demonstrate that LLM comparative studies can be significantly altered when using non-uniform weights for prompts during evaluation. The suggested approach can serve as a consistency check in comparative studies of LLMs, ensuring that the results take into consideration benchmark biases. Finally, we show that similar model performance across prompts can be explained by semantic similarity, but is most likely derived from common failure points of the LLM. Our findings could influence a larger diagnostics tool for evaluating the robustness of model quality comparisons with respect to distributional assumptions of benchmarks. Future work also includes identifying additional factors that may explain these biases. This information can give rise to solutions for improving benchmarks robustness. These findings could help researchers generating novel benchmarks to identify and eliminate biases. 7 Limitations Our study requires access to multiple LLMs to generate model performance vectors for each prompt in a benchmark. This can be computationally expensive and require GPUs. Some models, such as OpenAI\u2019s GPT-4, have limited API calls, making data collection time consuming. While we provide a novel approach for researchers to investigate bias in their own studies, providing a comprehensive de-biasing methodology is not within the scope of this work. Finally, we have only touched the surface on why prompts have similar performance across multiple LLMs. There are many other components to investigate, such as the length of the prompt and prompt complexity. This information could be leveraged to propose solutions on improving benchmarks, without running prompts through multiple LLMs."
16
+ }