diff --git "a/related_53K/test_related_long_2404.16260v1.json" "b/related_53K/test_related_long_2404.16260v1.json"
new file mode 100644
--- /dev/null
+++ "b/related_53K/test_related_long_2404.16260v1.json"
@@ -0,0 +1,8636 @@
+[
+ {
+ "url": "http://arxiv.org/abs/2404.16260v1",
+ "title": "OmniSearchSage: Multi-Task Multi-Entity Embeddings for Pinterest Search",
+ "abstract": "In this paper, we present OmniSearchSage, a versatile and scalable system for\nunderstanding search queries, pins, and products for Pinterest search. We\njointly learn a unified query embedding coupled with pin and product\nembeddings, leading to an improvement of $>8\\%$ relevance, $>7\\%$ engagement,\nand $>5\\%$ ads CTR in Pinterest's production search system. The main\ncontributors to these gains are improved content understanding, better\nmulti-task learning, and real-time serving. We enrich our entity\nrepresentations using diverse text derived from image captions from a\ngenerative LLM, historical engagement, and user-curated boards. Our multitask\nlearning setup produces a single search query embedding in the same space as\npin and product embeddings and compatible with pre-existing pin and product\nembeddings. We show the value of each feature through ablation studies, and\nshow the effectiveness of a unified model compared to standalone counterparts.\nFinally, we share how these embeddings have been deployed across the Pinterest\nsearch stack, from retrieval to ranking, scaling to serve $300k$ requests per\nsecond at low latency. Our implementation of this work is available at\nhttps://github.com/pinterest/atg-research/tree/main/omnisearchsage.",
+ "authors": "Prabhat Agarwal, Minhazul Islam Sk, Nikil Pancha, Kurchi Subhra Hazra, Jiajing Xu, Chuck Rosenberg",
+ "published": "2024-04-25",
+ "updated": "2024-04-25",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR",
+ "cs.AI",
+ "cs.LG",
+ "H.3.3"
+ ],
+ "label": "Original Paper",
+ "paper_cat": "LLM Fairness",
+    "gt": "Our work to build multi-task multi-entity embeddings for search draws upon broad areas of work. Our representation of pins and products extends existing work on multi-modal learning and two tower models for search retrieval. These have been extensively applied in the context of search and recommendation systems as an efficient way to retrieve results related to the search query beyond pure text matching. In OmniSearchSage, we demonstrate that the embeddings generated by these models can also serve as features in ranking and relevance models. Additionally, we offer a brief examination of specific embeddings within the Pinterest ecosystem. 2.1 Model-based Search Retrieval Historically, search systems have been powered by two stages: token-based matching, or candidate generation, and then scoring with a complex model. These have drawbacks, especially when users make complex queries or content is not primarily textual. This has led to the exploration of two tower models, which encode a query into a single embedding or a small set of embeddings, and then use those to retrieve relevant documents with approximate or exact nearest neighbor search [5, 11, 18, 20, 21, 24, 40]. Two natural topics in learning embeddings for search are document representation and query representation. Depending on the learning objective, this query representation could be personalized, or it could be a pure text embedding model. Many architectures for query embeddings in industry have been proposed based on simple CNNs [12], bag of words models [11, 23], transformers [19], and more, but they share a basic structure involving query understanding and sometimes context understanding. Document representation is also a major challenge. The text associated directly with an item is popular as a key feature, but depending on the task, other sources have been found to provide great value, including queries where other users have engaged with a given item [5, 24, 25] and image content embeddings [19]. 2.2 Multi-task, multi-modal, and multi-entity embeddings The area of learning embeddings isn\u2019t exclusive to the realm of recommendation systems and has been studied extensively [4, 6, 29, 30]. Multi-task learning is a technique commonly utilized in ranking models to optimize for multiple objectives concurrently, aiming for enhanced performance or more efficient information sharing [33, 41]. A less frequently encountered approach involves the joint learning of embeddings for more than two entities. Though this methodology is sometimes implemented in graph learning scenarios, it can also be perceived as an extension of multi-task learning [39]. Multi-modal embeddings are of substantial interest in the industry since the majority of web content is multi-modal, typically including both text and images [18, 19, 38]. One can take embeddings or raw data from each modality as inputs and merge them at different stages of the model. Early-stage fusion can pose computational hurdles; therefore, when performance is comparable, utilizing embeddings instead of raw data is generally the preferred course of action [38]. 2.3 Embeddings at Pinterest PinSage [37] is a scalable GNN-based embedding representing pins. 
It is based on the GraphSage GCN algorithm [10], sampling neighborhoods with personalized PageRank to augment pin understanding, instead of simple heuristics like \ud835\udc5b-hop neighbors. It aggregates some basic visual [2] and text information into a single dense representation, and is a critical feature in many models. To represent products, we have an embedding, ItemSage [1], which aggregates raw data about products, including metadata from product pages, and potentially many images of the product. ItemSage is trained for compatibility with PinSage, and the search query embedding preceding OmniSearchSage, meaning that the distance between ItemSage and these two embeddings can be used for retrieving or ranking content [27].",
+ "pre_questions": [],
+    "main_content": "INTRODUCTION Pinterest\u2019s mission is to bring everyone the inspiration to create a life they love. Search is one of the key surfaces on Pinterest where users seek inspiration spanning a wide range of interests, such as decorating their homes, planning weddings, or keeping up with the latest trends in beauty and fashion. In order to enhance the search experience, modern search systems aim to incorporate various types of content such as web documents, news, shopping items, videos, and more. Similarly, Pinterest\u2019s search feed encompasses a diverse range of content, including pins, shopping items, video pins, and related queries. To construct an inspiring feed for each of the more than 6 billion searches per month on Pinterest, we must uncover relevant content from billions of pins and products. We must also find relevant queries to help users refine their queries and navigate their search journey. As an additional challenge, Pinterest search is global and multilingual with searchers using more than 45 languages to find inspirational content. Embeddings are useful building blocks in recommendation systems, especially search, where natural language understanding is key [11, 23, 24]. Embeddings can power retrieval use cases via approximate nearest neighbor (ANN) search [14, 22], enable detailed content and query understanding in ranking models without the overhead of processing raw data, and serve as a strong base to learn in low-data use-cases [31]. Despite their utility, embeddings come with their own challenges: if we learn a separate embedding for every use-case, there is an explosion of potentially expensive models that must be inferred on every request and used in downstream models. This also may lead to suboptimal recommendation quality \u2013 some use-cases may not have enough labels to learn an optimal representation. In practice, it could entail additional maintenance costs and technical debt for upgrading to new versions of embeddings in certain applications, as some data may have been collected over the course of months or years. Through rigorous offline experimentation, we show the impact of our key decisions in building embeddings for web-scale search at Pinterest: \u2022 Pin and product representations can be substantially enriched using diverse text derived from image captions from a generative LLM, historical engagement, and user-curated boards. \u2022 A single query embedding can be used to retrieve queries, products, and Pins with nearly the same effectiveness as task-specific embeddings. \u2022 A single query embedding can learn compatibility with multiple pre-existing embeddings and learned entity embeddings, and perform well when compared across tasks. OmniSearchSage has been deployed at Pinterest and is an integral component of the search stack. It powers embedding-based retrieval for standard and product pins, queries and ads. It is also one of the most important features in multi-stage ranking models and various query classification models. These gains all arise despite the existence of other features enabling pin and product understanding, which highlights the importance of optimizing embeddings end-to-end for search. 
In order to enhance the search experience, modern search systems aim to incorporate various types of content such as web documents, news, shopping items, videos, and more. Similarly, Pinterest\u2019s search feed encompasses a diverse range of content, including pins, shopping items, video pins, and related queries. Training separate query embedding models for each content type and its representation proves to be resource-intensive and inefficient. To address this issue, we introduce OmniSearchSage, which offers a unified query embedding model that jointly trains query embeddings for query-query, query-pin, and query-product retrieval and ranking. Another requirement in production systems is compatibility with existing embeddings, which is essential for purposes such as cost-efficiency and simplified migration. Hence we also train the query embeddings to be compatible with the corresponding preexisting embeddings for the entities. As a side effect, we also get compatibility with some embeddings due to the triangle inequality property inherent to cosine similarity. OmniSearchSage: Multi-Task Multi-Entity Embeddings for Pinterest Search WWW \u201924 Companion, May 13\u201317, 2024, Singapore, Singapore 3.2 Enriching Entity Representations On Pinterest, each pin or product is associated with an image and title, along with an optional text (known as description) and link. Beyond these typical attributes, products may carry additional metadata, such as brand information, color description, and more. Document expansion techniques has been empirically demonstrated to significantly enhance the performance of not just token-based, but also embedding-based search retrieval systems [8, 25, 26, 28, 34]. Hence, in OmniSearchSage, we enrich our entity representations using diverse text derived from image captions from a generative LLM, historical engagement, and user-curated boards as described below. In the dataset, 71% of pins and products feature a title or description, 91% include non-empty board titles, and 65% contain non-empty engaged queries. Synthetic GenAI captions are generated for all pins and products, ensuring full coverage. Section 4.3.2 discusses the importance of each of these enrichment. 3.2.1 Synthetic GenAI Captions. On our platform, a substantial volume of pins (about 30%) lack associated titles or descriptions, or possess noisy and/or irrelevant title or description. We address this issue by employing an off-the-shelf image captioning model, BLIP [17], to generate synthetic descriptions for these images. To assess the quality of these synthetically generated descriptions, we enlisted human evaluators to judge their relevance and quality. For a robust assessment, three distinct ratings were collected for each image within a sample of 10\ud835\udc58images, curated uniformly across various broad pin categories. The results indicated that an overwhelming 87.84% of the generated descriptions were both relevant and of high quality, while a meager 1.16% were deemed irrelevant and of poor quality. These synthetically generated descriptions serve as an added feature in our model, enriching the diversity of data associated with each entity. Despite not being directly visible to the users, their addition significantly contributes to a deeper understanding of the pins\u2019 content. 3.2.2 Board Titles. On Pinterest, users explore and save pins to their personal collections, referred to as boards. Each board carries an associated title, reflecting the topic or theme of the collection. 
Most often, these user-crafted boards are meticulously organized, each focusing on a distinct theme or purpose. A user might, for instance, create discrete boards for \u201cSocial Media Marketing\" and \u201cGraphic Design\u2019\u00a8. Consequently, these board titles provide valuable, user-generated descriptors for the pins within the respective boards. We exploit this user-curated information by accumulating the titles of all boards each pin has been saved to. We limit our selection to a maximum of 10 unique board titles for each pin/product, systematically eliminating any potentially noisy or redundant titles as described next. First, each title is assigned a score influenced by two factors: its frequency of occurrence and the prevalence of its comprising words. Following this, titles are then ranked based on a hierarchy of their score (ascending), word count (descending), and character length (descending). The resulting top 10 board titles are subsequently incorporated as a feature in our model. This process eliminates any potentially noisy or redundant titles from the feature. Query Encoder Query Encoder Unified Pin-Product Encoder PinSage Unified Pin-Product Encoder ItemSage Query Pin Item Query L(query, query) L(query, pin) L(query, pin_c) L(query, product) L(query, product_c) Pretrained and Frozen Trained from scratch Figure 1: Diagrammatic Representation of OmniSearchSage\u2019s Multi-Entity, Multi-Task Architecture. 3.2.3 Engaged Queries. When multiple users interact with a specific pin or product for a certain query within a search feed, it signifies that pin\u2019s relevance to that query. We can use these queries to expand our understanding of the pin/product. For every pin, we generate a list of queries that have attracted user engagements, along with the counts and types of such engagements. This list of queries is then sorted using a function based on the count for each type of engagement. We use the top 20 queries from these sorted lists as a feature in our model. Through experimentation with diverse time-windows of query logs for feature creation, we discovered that larger windows yield superior performance. Consequently, we have opted for a twoyear window for feature calculation. However, the complexity of computing this from scratch every time presents a challenge. To mitigate this, we deploy an incremental approach. Every \ud835\udc5bdays, we examine new query logs, create a list of queries for every pin, and then blend it with the previously existing top 20 queries, thereby updating the latest value of the feature. 3.3 Entity Features The features we incorporate include PinSage [37] and unified image embeddings [2] to capture the essence of each pin. Additionally, for product pins, we use ItemSage [1] given its capability in effectively representing product-related pins. Text-based features such as the title and description of each pin are also integral to our feature set. Furthermore, we augment the text associated with each pin with the inclusion of synthetic captions, board titles, and engagement queries as outlined earlier. By integrating all these features, we attain a comprehensive and multi-dimensional representation of each pin, hence facilitating enhanced learning of representations. 3.4 Encoders In our work, we consider 3 entity types, namely, pin, product and query. Our model consists of an encoder for query, a unified learned encoder for both pin and product, and dedicated compatibility encoders for pin and product, respectively. 3.4.1 Query Encoder. 
The query encoder in our model (depicted in Figure 2) is based on a multilingual version of the DistilBERT WWW \u201924 Companion, May 13\u201317, 2024, Singapore, Singapore Prabhat Agarwal et al. Multilingual DistilBERT [CLS] antique copper bat ##hro ##om sin ##k Project and L2 Normalize Figure 2: Overview of the query encoder architecture. The encoder takes the output from the last layer associated with the \u2018CLS\u2019 token, projects it onto a 256-dimensional vector space, and finally L2-normalizes the output to generate the final embedding. (distilbert-base-multilingual-cased2) [32]. This choice facilitates efficient handling of queries across a variety of languages. The encoder utilizes the output from the last layer corresponding to the \ud835\udc36\ud835\udc3f\ud835\udc46token and thereafter projects it to a 256-dimensional vector space. Post projection, we apply a \ud835\udc3f2 normalization on the 256-dimensional vectors to obtain the final embedding. This normalization greatly simplifies the calculation of cosine-distance in downstream applications, allowing for a straightforward dot product operation. 3.4.2 Unified Pin and Product Encoder. In our model, we utilize a single unified encoder for both pins and products (depicted in Figure 3), and this encoder is jointly trained with the query embeddings. Designed to process both textual features and continuous features, it plays a crucial role in learning the respective embeddings of pins and products. In cases where certain features are defined for one entity but not the other, we substitute them with zero, ensuring a consistent data input. As detailed in section 3.5, we utilize in-batch negatives to train our model. Prior research [9, 15, 16, 29] has empirically demonstrated that larger batches with a substantial number of negatives help in learning better representations. Therefore, to accommodate a larger batch size in the GPU memory, we employ a simple pin encoder model. The following encoder design has been determined through numerous ablation studies. These studies have allowed us to select the most effective configuration for each of the components, while still considering the importance of both training and serving efficiencies. The encoder uses three distinct tokenizers to process the textual features associated with a pin [1, 13, 23]. These include (i) a word unigram tokenizer that uses a vocabulary encompassing the 200\ud835\udc58most frequent word unigrams, (ii) a word bigram tokenizer that makes use of a vocabulary comprising the 1\ud835\udc40most frequent word bigrams, and (iii) a character trigram tokenizer that utilizes a vocabulary of 64\ud835\udc58character trigrams. The tokens are mapped to their respective IDs in the vocabulary V which constitute all three 2https://huggingface.co/distilbert-base-multilingual-cased Image Encoder PinSAGE ItemSAGE MLP & L2 Normalize Hash Embedder Word Unigram Tokenizer Word Bigram Tokenizer Character Trigram Tokenizer Tokenizer Pin Text Board Titles Engaged Queries Synthetic GenAI Captions Figure 3: Schematic of the unified encoder model for pins and products, illustrating the use of three different tokenizers, a hash embedding table, and an MLP layer for combining text embeddings with other continuous features. tokenizers. Any token that falls out of this combined vocabulary gets discarded. The use of these combined tokenizers effectively helps in capturing the semantics of various texts associated with a pin/product. 
For token embedding learning, we use a 2-hash hash embedding table of size 100,000 [1, 35]. Each identified token\u2019s ID $i$ is hashed into two places within the embedding table using hash functions $h_1(i)$ and $h_2(i)$. The final embedding of a token with ID $i$ is a weighted interpolation of the two locations, $W_{1i} h_1(i) + W_{2i} h_2(i)$, where $W_1$ and $W_2$ are learned weight vectors of size $|V|$ each. The sum of all token embeddings and the embedding features are concatenated and fed into a 3-layer MLP, with layer sizes of 1024, 1024, 256. Following this, the output of the MLP layer undergoes L2-normalization just like the query embedding. 3.4.3 Compatibility Encoders. In our model, we employ two discrete compatibility encoders individually dedicated to pins and products. These encoders leverage the pre-existing pin and product embeddings, represented by PinSage for pins and ItemSage for products. This allows the model to adeptly learn query embeddings that align effectively with PinSage and ItemSage embeddings. 3.5 Multi-Task Sampled Softmax Loss Taking inspiration from ItemSage [1], the problem of learning query and entity embeddings is treated as an extreme classification problem, with the aim of predicting entities relevant to a given query [7]. We employ the sampled softmax loss with logQ correction [36] to train our model. We use multitasking to jointly train entity embeddings and train the query embeddings to be compatible with existing entity embeddings. Formally, we define a task $T \in \mathcal{T}$ as a tuple of a dataset of query-entity pairs $D = \{(x, y)_i\}$ and an entity encoder $E$, i.e. $T \triangleq \{D, E\}$. For a batch of data $B = \{(x, y)_i\} \subset D$ for task $T \in \mathcal{T}$, the aim is to learn a query embedding $q_{x_i}$ and an entity embedding $p_{y_i} = E(y_i)$ such that the cosine similarity of the embeddings, $q_{x_i} \cdot p_{y_i}$, is maximized. This is achieved by minimizing the softmax loss: $L_T = -\frac{1}{|B|} \sum_{i=1}^{|B|} \log \frac{\exp(q_{x_i} \cdot p_{y_i})}{\sum_{y \in C} \exp(q_{x_i} \cdot p_y)}$, (1) where $C$ is the catalog of all entities of the same type as $y_i$. To ensure problem tractability, the normalization term in the denominator is approximated using a sample of the catalog $C$. We use (i) positives in the batch, $B_N = \{y_i \mid (x_i, y_i) \in B\}$, and (ii) a random sample of the catalog, $C'$. To rectify any bias that might have been introduced through sampling, we utilize the logQ correction technique. This method operates by deducting the sampling probability of the negative, represented as $\log Q(y|x_i)$, from the existing logits. This is crucial to ensure that popular entities aren\u2019t disproportionately penalized. 
$L_T = L_T^{S_{bn}} + L_T^{S_{rn}}$, (2) where the in-batch-negative and random-negative terms are $L_T^{S_{bn}} = -\frac{1}{|B|} \sum_{i=1}^{|B|} \log \frac{\exp(q_{x_i} \cdot p_{y_i} - \log Q(y_i|x_i))}{\sum_{z \in B_N} \exp(q_{x_i} \cdot p_z - \log Q(z|x_i))}$, (3) $L_T^{S_{rn}} = -\frac{1}{|B|} \sum_{i=1}^{|B|} \log \frac{\exp(q_{x_i} \cdot p_{y_i} - \log Q(y_i|x_i))}{\sum_{y \in C'} \exp(q_{x_i} \cdot p_y - \log Q(y|x_i))}$ (4) $= -\frac{1}{|B|} \sum_{i=1}^{|B|} \log \frac{\exp(q_{x_i} \cdot p_{y_i} - \log Q(y_i|x_i))}{\sum_{y \in C'} \exp(q_{x_i} \cdot p_y - \log Q_n(y))}$, (5) since $y$ is sampled independently of $x_i$. The total loss is defined as the sum of all individual task losses, $L = \sum_{T \in \mathcal{T}} L_T$. (6) We mix different tasks together in one batch and control the influence of each task on the model through this composition. To increase training efficiency, we share the pairs in the batch across all tasks with the same dataset. 3.6 Model Serving OmniSearchSage query embeddings are integral to numerous applications in the search stack, which necessitates us to maintain a strict latency budget. For real-time inference with minimized latency, our query encoder is served on GPUs by our in-house C++-based machine learning model server, the Scorpion Model Server (SMS). Factoring in that the query distribution complies with Zipf\u2019s law, we have instituted a cache-based system to curb costs and shorten response times. The query embedding server first verifies if a query is cached before resorting to the query inference server should it be absent from the cache. After testing various Cache Time-To-Live (TTL) periods, a TTL of 30 days was established as optimal. The system is equipped for handling 300k requests per second, maintaining a median (p50) latency of just 3ms, and a 90th percentile (p90) latency of 20ms. The implementation of this cache-based system efficiently reduces the load on the inference server to approximately 500 QPS, leading to substantial cost and latency reductions. The pin and product embeddings are derived offline on a daily basis through batch inference on GPUs and are subsequently published to our signal store for consumption. Table 1: Summary of the different training datasets. Query-Pin (Query Logs; repin, longclick): 1.5B; Query-Product (Query Logs; repin, longclick): 136M; Query-Product (Offsite logs; add-to-cart, checkout): 2.5M; Query-Query (Query Logs; click): 195M. 4 EXPERIMENTS 4.1 Dataset Our dataset is primarily constructed by extracting unique query-entity pairs from one year of search query logs. 
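As a concrete illustration of the multi-task sampled softmax with logQ correction in Eqs. (2)-(5) above, the following is a minimal PyTorch-style sketch. The function and argument names (sampled_softmax_logq, logq_batch, logq_rand) are illustrative assumptions, the estimation of the sampling distribution Q is omitted, and this is not the paper's production implementation.

```python
import torch
import torch.nn.functional as F

def sampled_softmax_logq(q, p, p_rand, logq_batch, logq_rand):
    """Sketch of L_T = L_T^{S_bn} + L_T^{S_rn} for one task.

    q, p       : [B, d] L2-normalized query / positive-entity embeddings (row-aligned).
    p_rand     : [N, d] embeddings of a shared random sample of the catalog.
    logq_batch : [B]    estimated log Q(y|x) for the in-batch entities.
    logq_rand  : [N]    estimated log Q_n(y) for the random negatives.
    """
    labels = torch.arange(q.size(0), device=q.device)

    # Eq. (3): in-batch negatives (other positives in the batch), with logQ
    # correction so popular entities are not disproportionately penalized.
    logits_bn = q @ p.t() - logq_batch.unsqueeze(0)                      # [B, B]
    loss_bn = F.cross_entropy(logits_bn, labels)

    # Eqs. (4)-(5): random catalog negatives sampled independently of the query.
    pos_logit = (q * p).sum(-1, keepdim=True) - logq_batch.unsqueeze(1)  # [B, 1]
    neg_logits = q @ p_rand.t() - logq_rand.unsqueeze(0)                 # [B, N]
    loss_rn = F.cross_entropy(torch.cat([pos_logit, neg_logits], dim=1),
                              torch.zeros_like(labels))

    return loss_bn + loss_rn
```

Per Eq. (6), a loss of this form would be computed for each task present in the batch composition and summed.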
We consider various forms of engagement on the platform when extracting these pairs, including \u2018saves\u2019 (when a user saves a pin to a board) and \u2018long clicks\u2019 (instances where users browse the linked page for more than 10 seconds before returning to Pinterest). For products, we enrich our dataset by incorporating offsite actions as well. Thus, we also include anonymized pairs tied to significant actions like \u2018add to cart\u2019 and \u2018checkout\u2019. A common challenge in recommendation systems is the popularity bias, where certain pins are overrepresented due to their high appeal. To counteract this bias, we impose a limit on the number of times the same pin can be paired. This limit is capped at 50 pairs for pins and is extended to 200 pairs for products (since products have lower volume and engagement). By adopting this strategy, we ensure our dataset is robust and truly representative of the user\u2019s activity on the platform. Our model training is further extended to encompass queryquery pairs. On Pinterest, users are presented with similar query suggestions, and engagements with these recommendations are recorded in the search logs. We leverage these records, extracting such pairs from an entire year\u2019s logs, thus enriching our training dataset. A detailed breakdown of the positive labels in the dataset is provided in Table 1. 4.2 Offline Evaluation Metrics Our evaluation of the model encompasses both user engagement data and human-labeled relevance data. Relevance gets measured using human-labeled pairs of queries and pins, sampled from production traffic from four distinct countries: US, UK, France, and Germany. This strategy serves to assess the model\u2019s performance in handling multiple languages and cultural contexts. Evaluation of user engagement considers a selected 7-day period. We ensure no data leakage\u2014possible due to the inclusion of engagement features such as engaged queries\u2014by maintaining a 15-day separation between the end of the training dataset and the beginning of the evaluation phase. We sample 80\ud835\udc58pairs from the defined evaluation duration to represent repins and long clicks for both pins and products. Another 80\ud835\udc58pairs, corresponding to clicks for queries, are also included for comprehensive performance evaluation. The primary metric we used for evaluation is named \u2018Recall@10\u2019. This metric denotes the likelihood of the occurrence of the engaged entity within the top 10 entities when these entities are sorted in descending order based on their similarity to the query. WWW \u201924 Companion, May 13\u201317, 2024, Singapore, Singapore Prabhat Agarwal et al. Metric SearchSage OmniSearchSage Gain Pin Save 0.39 0.65 +67% Long-Click 0.45 0.73 +62% Relevance (US) 0.25 0.45 +80% Relevance (UK) 0.29 0.51 +76% Relevance (FR) 0.23 0.43 +87% Relevance (DE) 0.28 0.46 +64% Product Save 0.57 0.73 +28% Long-Click 0.58 0.73 +26% Query Click 0.54 0.78 +44% Table 2: Comparative analysis of OmniSearchSage and the baseline SearchSage across various tasks Pin, Product, and Query. Consider a dataset \ud835\udc37= (\ud835\udc5e\ud835\udc56,\ud835\udc52\ud835\udc56)\ud835\udc5b \ud835\udc56=1, where each (\ud835\udc5e\ud835\udc56,\ud835\udc52\ud835\udc56) denotes a query-engaged entity pair, and also consider a random corpus \ud835\udc36 with \ud835\udc5aentities. 
The Recall@10 metric can then be defined as the average over all queries of an indicator that equals 1 if the engaged entity $e_i$ is amongst the top 10 entities in $C$ when ranked by their dot product with the query $q_i$: $\mathrm{Recall@10} = \frac{1}{|D|} \sum_{i=1}^{|D|} \mathbb{1}\left[\left(\sum_{y \in C} \mathbb{1}[q_i \cdot y > q_i \cdot e_i]\right) < 10\right]$. For every pin, query, and product, we employ a uniformly distributed random sample of $m = 1.5$M entities from our corpus. 4.3 Offline Results In this section, we provide a comprehensive comparison between our proposed model, OmniSearchSage, and the existing baselines, which helps showcase its performance enhancements. Subsequently, we undertake an in-depth exploration of key influential aspects such as the significance of text enrichments, the pros and cons of adopting multitasking approaches, and the operational efficacy of compatibility encoders in the context of our model. 4.3.1 Comparison with Baselines. In this study, the existing version of SearchSage [27] serves as our comparison baseline. It operates using fixed PinSage and ItemSage embeddings for pins and products, respectively. For OmniSearchSage, we utilize the query encoder to derive query embeddings and the unified pin and product encoder to generate pin and product embeddings. In Table 2, comparisons are drawn between OmniSearchSage and SearchSage, with both models being trained and evaluated on the same dataset. It is important to highlight that the baseline model, SearchSage, does not involve query-query pairs for training purposes. On the pin dataset, OmniSearchSage shows a significant gain, between 60% and 90%, over SearchSage across all metrics. Recall is relatively consistent across different countries, reflecting the multilingual robustness of OmniSearchSage. Analysis of the product dataset reveals that OmniSearchSage outperforms the baseline model by about 27% in predicting product engagement. Table 3: Comparative assessment displaying the influence of Synthetic GenAI Captions on pins lacking titles and descriptions. No captions: save 0.51, long-click 0.60, relevance 0.36; With captions: save 0.66, long-click 0.76, relevance 0.36; Improvement: +30.43%, +25.58%, 0%. This increment is less prominent as compared to the pins dataset, mainly because ItemSage, upon which this comparison is based, has already undergone training on search tasks. Nevertheless, the observed improvement shows the positive impact of incorporating new features as well as the benefit of multi-tasking. Interestingly, SearchSage is able to predict related query clicks substantially better than random despite not being trained on this task. However, when we directly optimize for this objective in OmniSearchSage, we see a substantial +44% improvement. We show this improvement can be attributed to both training on related queries, and multi-task learning in Section 4.3.3. 4.3.2 Importance of content enrichment. In this section, we delve into an analysis of the importance of various text enhancements described in Section 3.2. To maintain brevity, the evaluation focuses solely on the metrics related to the query-pin task. Our first direction of investigation centers around the impact of integrating synthetic captions for pins that lack both a title and description. 
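To make the Recall@10 definition above concrete, here is a small NumPy sketch, under the assumption that query, engaged-entity, and corpus embeddings are already computed and L2-normalized; the function and variable names are illustrative only.

```python
import numpy as np

def recall_at_10(queries, engaged, corpus):
    """queries, engaged: [n, d] paired query / engaged-entity embeddings.
    corpus: [m, d] random sample of the corpus (e.g. m = 1.5M in the paper)."""
    hits = 0
    for q, e in zip(queries, engaged):
        pos_score = q @ e
        # Count corpus entities that outrank the engaged entity for this query.
        higher = int((corpus @ q > pos_score).sum())
        hits += higher < 10
    return hits / len(queries)
```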
For this purpose, we extracted pairs from the evaluation dataset in which the engaged pin was missing a title or a description. This resulted in a narrowed evaluation dataset of 24\ud835\udc58pairs. The model\u2019s performance, initially based on solely continuous features and native text, was then compared to a model additionally enriched with captions. Table 3 presents the results of this comparison. When synthetic captions were added, both \u2018save\u2019 and \u2018long-click\u2019 metrics saw substantial improvements \u2014 approximately +30% and +26% respectively. However, the relevance metric remained unchanged. This suggests that adding synthetic captions can significantly enhance the model\u2019s performance for certain metrics when representing pins that lack a title and description. Table 4 illustrates the impact of adding different text enrichments on the model\u2019s performance. Each percentage increase is relative to the previous row, displaying the additional improvement from each additional feature. Our baseline model utilizes only continuous features for training and its performance values are reflected in the first row. Upon adding \u2018Title\u2019, \u2018Description\u2019, and \u2018Synthetic GenAI Captions\u2019 to the baseline model, we notice a robust improvement across all metrics. save long-click relevance Continuous Features Only 0.43 0.53 0.30 Adding Title, Description and Synthetic GenAI Captions 0.52 (+21%) 0.63 (+19%) 0.39 (+30%) Adding Board Titles 0.61 (+17%) 0.68 (+8%) 0.44 (+13%) Adding Engaged Queries 0.65 (+7%) 0.73 (+7%) 0.46 (+5%) Table 4: Impact of adding different text enrichments on the model\u2019s performance. Each percentage increase is relative to the previous row, displaying the additional improvement from each additional feature. OmniSearchSage: Multi-Task Multi-Entity Embeddings for Pinterest Search WWW \u201924 Companion, May 13\u201317, 2024, Singapore, Singapore Dataset Pin Only Product only Query Only OmniSearchSage pin save 0.68 0.65 long-click 0.75 0.73 avg relevance 0.45 0.46 product save 0.73 0.73 long-click 0.73 0.73 query click 0.73 0.78 Table 5: Comparative analysis illustrating the contrasts between our unified multi-task model and models trained individually for each task pin, product, and query. There is a 20% improvement in the engagement datasets, while the relevance metric improves by a notable 30%, demonstrating the substantial impact of these text features. The model enhancement continues with adding board titles to the feature set, leading to a further increase of 8 \u221215% in different metrics. This affirms the relevance of board titles in improving predictive accuracy. Finally, we incorporated engaged queries feature into the model, resulting in a consistent, albeit smaller growth across all three metrics. Although the incremental relative gain appears smaller, it still constitutes a significant improvement when compared to the baseline model. In summary, each text enrichment feature contributes significantly to improving model performance as seen by the increment in metrics compared to their immediate preceding state. 4.3.3 Effect of multi-tasking. In Table 5, we present a comparative analysis between models trained independently for each task (pin, product, and query) and our consolidated multitask model. For this comparison, both the independent and multitask models were trained under equivalent conditions with matching batch sizes, computational power, and iterations. 
The datasets used for both training and evaluation were also identical, with the sole difference that the individual models were trained on their respective subset of pairs from the dataset. This systematic approach ensures the fair and accurate assessment of the performance of the multitask model in relation to the independent task models. On the pin task, we see slight degradation in quality from multitask learning, but, on product and query tasks, results are neutral to positive. This aligns with general notions about multi-task learning: low-data tasks are unlikely to see regressions from multi-task learning, while the pin task using 1.5\ud835\udc35pairs sees a very slight drop in performance. Despite this drop, the simplification benefits of multi-task learning outweigh the metric loss. 4.3.4 Effect of compatibility encoders. We examine the influence of incorporating compatibility encoders on the effectiveness of the learned pin/product embeddings. We train a model that comprises only the query and unified pin and product encoder. Subsequently, this model is compared with another model that fully incorporates all the encoders. Interestingly, there is almost no noticeable degradation in the metrics of the learned encoder, thereby essentially achieving seamless compatibility of the query embedding with pre-existing embeddings at no substantial cost. Furthermore, as demonstrated in Table 6, the performance of the compatibility encoders in the OmniSearchSage model is either on par with or surpasses that of the SearchSage model, which is trained utilising only compatibility encoders. Dataset SearchSage OmniSearchSage pin save 0.39 0.39 long-click 0.45 0.43 avg relevance 0.26 0.26 product save 0.57 0.57 long-click 0.58 0.57 Table 6: Comparison of co-trained compatibility encoders with independently trained compatibility encoders. Product Embedding Index (HNSW) Ads Embedding Index (HNSW) Pin Embedding Index (HNSW) Pin Inverted Token Index Product Inverted Token Index Ads Inverted Token Index L1 Scoring Model User Input Query Query Understanding L2 Scoring Model Query Embedding Server User, Query, Pin Features Figure 4: A simplified depiction of the search retrieval and ranking stack at Pinterest highlighting the integration points for OmniSearchSage embeddings. 5 APPLICATIONS IN PINTEREST SEARCH OmniSearchSage embeddings find wide applications throughout the Pinterest search stack, primarily in retrieval and ranking tasks. Figure 4 presents a simplified depiction of the search retrieval and ranking stack at Pinterest and highlights the integration points for OmniSearchSage embeddings. These embeddings are employed to power the retrieval of pins and products using HNSW [22]. They are also instrumental in the L1 scoring model, where they enhance the efficiency of token-based retrieval sources. Moreover, OmniSearchSage embeddings serve as one of the most critical features in the L2 scoring and relevance models. In this section, we delineate the results derived from the A/B tests we conducted. In these tests, production SearchSage embeddings were replaced with OmniSearchSage embeddings, resulting in boosted performance in both organic and promoted content (Ads) in search. Additionally, we provide results from a human relevance assessment conducted on actual production-sampled traffic. This evaluation further confirms the improved performance derived from the utilization of OmniSearchSage embeddings. 
Finally, we demonstrate how employing query embeddings also enhances performance in other tasks, such as classification, particularly in situations where data availability is limited. This highlights the ability of the OmniSearchSage model to generalize to tasks different from its original training objectives. 5.1 Human Relevance Evaluation To understand advantages of OmniSearchSage, we enlisted human evaluators to assess the relevance of candidates retrieved via two WWW \u201924 Companion, May 13\u201317, 2024, Singapore, Singapore Prabhat Agarwal et al. (a) Token-based (b) OmniSearchSage-based Figure 5: Comparative display of pins retrieved in response to the query \u2019antique copper bathroom sink\u2019 from the tokenbased system and the OmniSearchSage-based system. Pins deemed relevant are outlined in green, while those considered irrelevant are encircled in red. methods: OmniSearchSage embeddings-based pin retrieval and token-based pin retrieval. For this evaluation, we selected a set of 300 queries, deliberately stratified across both head and tail queries. The top 8 candidate pins were then retrieved from each system using these queries, and human evaluators determined the relevance of the pins to the corresponding query. Every query-pin pair received three judgements, with an inter-annotator agreement rate of 0.89. Evaluation results revealed a noticeable improvement with OmniSearchSage, showing a 10% increase in relevance compared to the token-based system. Figure 5 offers a distinct comparison of retrieved pins for the query \u2018antique copper bathroom sink\u2019 between the candidates retrieved by the token-based system and the OmniSearchSage-based system. The token-based retrieval system often fetches pins related to only part of the query and fails to fetch consistently relevant results. In striking contrast, nearly all pins retrieved by the OmniSearchSage-based system are highly relevant to the specified query, underlining the efficacy of the OmniSearchSage model in understanding the query and aligning similar pins and queries in the same space together. 5.2 Organic Search In this section, we outline the results of the A/B testing conducted to substitute the existing production SearchSage query and entity embeddings with OmniSearchSage embeddings for organic content within Pinterest search. Within the context of search experiments at Pinterest, our attention is largely concentrated on two key metrics: the search fulfillment rate and relevance. The search fulfillment rate is defined as the proportion of searches that result in a user engagement action of significance. Relevance is calculated as the weighted average relevance of the top eight pins for each query, assessed across different query segments. This is measured through human evaluation. The impact on these two metrics, from replacing SearchSage with OmniSearchSage, is presented in Table 7. The table provides data drawn from experiments for three distinct use-cases: (i) retrieval of pins and products, (ii) L1 scoring model, and (iii) L2 scoring model and relevance model. Search Fulfilment Rate Relevance Pin and Product Retrieval +4.1% +0.5% L1 Scoring +0.5% +0.0% L2 Scoring and Relevance Model +2.8% +3.0% Table 7: Online A/B experiment results of OmniSearchSage in Organic Search. gCTR Product Ads Retrieval +5.27% Ads Search Engagement Model +2.96% Ads Search Relevance Model +1.55% Table 8: Online A/B experiment results of OmniSearchSage for Ads in Search. 
5.3 Ads in Search The OmniSearchSage embeddings have also successfully replaced the SearchSage embeddings in various applications within Ads on Search surface. We present the results of three use cases: search engagement model, search relevance model, and product ads retrieval. Uniformly, we noted substantial improvements in engagement and relevance within Ads across all use cases. These increments, specifically in the long clickthrough rate (gCTR), are outlined in Table 8. Furthermore, OmniSearchSage led to a noteworthy 4.95% increase in Ads relevance within the Search Ads relevance model. These gains highlight the positive impact of transitioning to OmniSearchSage embeddings for Ads on Search. 5.4 Classification One of the primary advantages of developing robust query representation such as OmniSearchSage is its utility in powering downstream applications, particularly when there is a lack of labels for learning large models. One example of this at Pinterest is interest classification, where we classify queries into a hierarchical taxonomy. Using OmniSearchSage query embeddings for query representation, we were able to increase performance when compared to the baseline FastText [3] model. Precision increased by 30% on average across levels, with the larger gains coming from more granular levels. 6 CONCLUSION In this work, we presented OmniSearchSage, an end-to-end optimized set of query, pin, and product embeddings for Pinterest search, which have shown value across many applications. In contrast to other work focused on learning embeddings for search, we demonstrate the value of unified query, pin, and product embeddings as both candidate generators and features in Pinterest search. We show a great improvement over previous solutions at Pinterest can be attributed to rich document text representations, which improved offline evaluation metrics by > 50%. We also describe practical decisions enabling serving and adoption, including compatibilty encoders, multi-task learning, and long-TTL caching. Lastly, we summarize results from online A/B experiments across organic and ads applications, which have directly led to cumulative gains of +7.4% fulfilment rate on searches, and +3.5% relevance. OmniSearchSage: Multi-Task Multi-Entity Embeddings for Pinterest Search WWW \u201924 Companion, May 13\u201317, 2024, Singapore, Singapore",
+ "additional_info": [
+ [
+ {
+ "url": "http://arxiv.org/abs/2404.13571v1",
+ "title": "Test-Time Training on Graphs with Large Language Models (LLMs)",
+ "abstract": "Graph Neural Networks have demonstrated great success in various fields of\nmultimedia. However, the distribution shift between the training and test data\nchallenges the effectiveness of GNNs. To mitigate this challenge, Test-Time\nTraining (TTT) has been proposed as a promising approach. Traditional TTT\nmethods require a demanding unsupervised training strategy to capture the\ninformation from test to benefit the main task. Inspired by the great\nannotation ability of Large Language Models (LLMs) on Text-Attributed Graphs\n(TAGs), we propose to enhance the test-time training on graphs with LLMs as\nannotators. In this paper, we design a novel Test-Time Training pipeline,\nLLMTTT, which conducts the test-time adaptation under the annotations by LLMs\non a carefully-selected node set. Specifically, LLMTTT introduces a hybrid\nactive node selection strategy that considers not only node diversity and\nrepresentativeness, but also prediction signals from the pre-trained model.\nGiven annotations from LLMs, a two-stage training strategy is designed to\ntailor the test-time model with the limited and noisy labels. A theoretical\nanalysis ensures the validity of our method and extensive experiments\ndemonstrate that the proposed LLMTTT can achieve a significant performance\nimprovement compared to existing Out-of-Distribution (OOD) generalization\nmethods.",
+ "authors": "Jiaxin Zhang, Yiqi Wang, Xihong Yang, Siwei Wang, Yu Feng, Yu Shi, Ruicaho Ren, En Zhu, Xinwang Liu",
+ "published": "2024-04-21",
+ "updated": "2024-04-21",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.AI"
+ ],
+ "label": "Original Paper",
+ "paper_cat": "LLM Fairness",
+    "gt": "LLMTTT aims at solving the challenges of data distribution shift in GNNs via a novel test-time training method based on LLMs. To achieve this goal, a careful graph active learning strategy is also developed. The related work is discussed as follows: (Figure 4: Effectiveness of two-stage training.) 6.1 Distribution shift in GNNs Graph Neural Networks (GNNs) have demonstrated exceptional capabilities in graph representation learning [36, 59], achieved revolutionary progress in various graph-related tasks [51], such as social network analysis [24, 45], recommendation systems [9, 18, 53], and natural language processing [3, 23, 29]. However, a distribution shift has been observed in various graph-related applications [20, 21], where the graph distribution in the training set differs from that in the test set. Such discrepancy could substantially degrade the performance of both node level [56, 66] and graph level tasks [56, 66]. This distribution shift frequently occurs between the testing and training graphs [20, 21]. Therefore, enhancing the out-of-distribution (OOD) generalization capabilities of GNNs is crucial. Several solutions have been proposed to tackle this issue, such as EERM [56], which trains GNNs to be adaptable to multiple environments by introducing environmental variables, and GTrans [25], which enhances generalization ability by modifying the input feature matrix and adjacency matrix during test time. 6.2 Test-Time Training Test-time training (TTT) is a technique recently proposed for partially adapting a model based on test samples, to account for distribution shifts between the training and test sets. TTT was first introduced by [44]. To address the unexpected adaptation failures in TTT, TTT++ [31] employs offline feature extraction and online feature alignment to enable regularization adaptation without the need to revisit the training data. However, in some cases, the training data may be unavailable during test time or the training process may be computationally demanding, which can reduce the applicability of these methods. To overcome this limitation, Tent [49] introduces a method for fully test-time training that relies solely on test samples and a trained model. It proposes an online setting after the TTT task to achieve fully test-time training through the minimization of the model\u2019s test entropy. While the aforementioned studies focus on test-time training within the image domain, the TTT framework has also been implemented in the realm of graphs, including GTrans [25], GT3 [52], GraphTTA [4], and TeSLA [46]. 
Test-Time Training on Graphs with Large Language Models (LLMs) Conference acronym \u2019XX, June 03\u201305, 2018, Woodstock, NY 6.3 Graph Active Learning Graph active learning aims to optimize test performance through the strategic selection of nodes within a constrained query budget, effectively addressing the challenges of data labeling. The most prevalent approach in active learning is uncertainty sampling [37, 43, 48, 54], wherein nodes that the current model has the least certainty about are selected during the training phase. Another significant strand within active learning approaches involves distributionbased selection strategies. These methods [2, 11, 41, 42, 65, 65] evaluate samples based on their positioning within the feature distribution of the data. Representativeness and diversity represent two commonly utilized selection criteria, both of which rely on the data distribution. Generally, active learning primarily focuses on selecting representative nodes; however, it faces additional challenges in real world scenarios. Furthermore, active learning needs to address two key issues: assigning pseudo-labels to the selected nodes and effectively utilizing a limited number of labels for training. In the proposed LLMTTT , these two problems are well solved. 6.4 LLMs for Graphs Large language models (LLMs) with massive knowledge demonstrate impressive zero-shot and few-shot capabilities. Considerable research [14, 17] has begun to apply LLMs to graphs, enhancing performance on graph-related tasks. Utilizing LLMs as enhancers [14] presents a viable approach, leveraging their power to enhance the performance of smaller models more efficiently. Compared to shallow embeddings, LLMs offer a richer commonsense knowledge base that could potentially enhance the performance of downstream tasks. Relying solely on LLMs as predictors [6, 50, 60] represents another viable approach, with GPT4Graph [14] evaluating the potential of LLMs in performing knowledge graph (KG) inference and node classification tasks. NLGraph [50] introduced a comprehensive benchmark to assess graph structure reasoning capabilities. Distinct from these approaches, we employ LLMs as annotators as [7], combining the advantages of the two aforementioned methods to train an efficient model without relying on any true labels.",
+ "pre_questions": [],
+ "main_content": "INTRODUCTION Graph is a kind of prevalent multi-modal data, consisting of modalities of both the topological structure and node features [30, 38]. Text-Attributed Graphs (TAGs) are graphs of which node attributes are described from the text modality, such as paper citation graphs containing paper descriptions and social network data including user descriptions. As a successful extension of Deep Neural Networks (DNNs) to graph data, Graph Neural Networks (GNNs) have demonstrated great power in graph representation learning, and have achieved revolutionary progress in various graph-related applications, such as social network analysis [16], recommendation [39, 64] and drug arXiv:2404.13571v1 [cs.LG] 21 Apr 2024 Conference acronym \u2019XX, June 03\u201305, 2018, Woodstock, NY Jiaxin Zhang and Yiqi Wang, et al. discovery [8, 15]. Despite remarkable achievements, GNNs have shown vulnerability in Out-Of-Distribution (OOD) generalization, as it is observed that GNNs can confront significant performance decline when there exists distribution shift between the training phase and the test phase [19, 33]. Increasing efforts [56, 58] have been made to address the OutOf-Distribution (OOD) challenge on graphs. A majority of these methods aim at increasing the models\u2019 capability and robustness via data augmentation techniques designed based on heuristics and extensive empirical studies [28, 55, 61]. Meanwhile, some researchers have investigated to improve the model\u2019s generalization capability via adversarial training strategies [58] and the principle of invariance [56]. Nevertheless, these approaches [56, 58] require interventions during the training phase and can hardly make the continuous adaptability to the real-time data within the constraints of privacy, resources, and efficiency. This gap has prompted the development of Test-Time Training (TTT) [31, 44], which aims to dynamically adapt to continuously presented test data based on an unsupervised learning task during the test phase. Test-Time Training (TTT) have demonstrated great potential in alleviating OOD generalization problem. Fully Test-Time Training (FTTT) [25, 49] is the extension of TTT. This kind of post-hoc method is more suitable for real-world applications due to its plugand-play simplicity, which does not interfere with the expensive training process required for pre-trained backbones. Traditional FTTT aims at adapting the pre-trained model to accommodate test data from different domains within an unsupervised setting. However, the design of the unsupervised training phase entails stringent criteria: it must ensure that the unsupervised task complements the main task without causing overfitting to the model and neglecting the main task. Additionally, unsupervised tasks must implicitly capture the distribution of the test data. Devising such an unsupervised training strategy poses a significant challenge. A natural solution is to utilize the same training strategy as the main task in the test phase, i.e., supervised learning. Meanwhile, a recent study [12] has shown that incorporating a limited number of labeled test instances can enhance the performance across test domains with a theoretical guarantee. This motivates us to introduce a small number of labels at test time to further advance the model performance on OOD graphs. In the FTTT scenario, with continuous arrival of data during testing, human annotation cannot handle this situation flexibly and efficiently. 
Fortunately, Large Language Models (LLMs) have achieved impressive progress in various applications [6, 14, 17], including zero-shot proficiency in annotation on text-attributed graphs [7]. With the assistance of LLMs, only a few crucial nodes are chosen and assigned pseudo labels, and FTTT is then executed using the same training approach as the main task. This method avoids the need for intricate unsupervised task design. Therefore, in this work we propose a novel method that leverages the annotation capability of LLMs to advance test-time training, so as to alleviate the OOD problem on graphs. However, to achieve this goal, we face tremendous challenges: (1) How to select nodes for annotation with LLMs given a limited budget? The problem studied in this paper is different from that in [7]: for node selection, in addition to the characteristics of LLMs and the test data, the predictions of the pre-trained model on test nodes can also provide crucial signals. (2) How to effectively adapt the pre-trained model under noisy and limited labels? The labels generated by LLMs are noisy [7]. Therefore, it is essential to design a training strategy which is able to simultaneously utilize a small number of noisy labeled nodes and the remaining unlabeled nodes during test time. To tackle these challenges, we introduce a Fully Test-Time Training with LLMs pipeline for node classification on graphs, LLMTTT. During the selection of node candidates, different from traditional graph active node selection methods, LLMTTT introduces a hybrid active node selection strategy, which simultaneously considers node diversity, node representativeness, and the prediction capacity of the pre-trained GNN. Meanwhile, to leverage both the noisy labeled nodes and the unlabeled nodes, LLMTTT designs a two-stage test-time training strategy. Our main contributions can be summarized as follows: • We introduce a new pipeline, LLMTTT, for the graph OOD problem. In LLMTTT, we use LLMs as annotators to obtain pseudo labels, which are then used to fine-tune the pre-trained GNN model during test time. • We develop a hybrid active node selection method which considers not only node diversity and representativeness on graphs but also the prediction signals from the pre-trained model. • We design a two-stage training strategy for test-time model adaptation under noisy and limited labeled samples. • We have conducted extensive experiments and theoretical analysis to demonstrate the effectiveness of LLMTTT on various OOD graphs. 2 PRELIMINARY This section provides definitions and explanations of key notations and concepts in this paper. First, the primary notations and the pipeline of traditional fully test-time training are introduced. Next, we illustrate the proposed LLMTTT pipeline for a more comprehensive understanding of our framework. In this study, we focus on the node classification task, where the goal is to predict the labels of nodes within a graph; we denote the loss function for this task as $L_m(\cdot)$. We are given a training node set $D_s = (X_s, Y_s)$ and a test node set $U_{te} = (X_t)$, where $X$ denotes the node samples and $Y$ the corresponding labels. Traditional FTTT pipeline.
Assume that the model for the node classification task has $K$ layers, denoted as $\theta = \{\theta_1, ..., \theta_K\}$. Given the test data $U_{te}$, the parameters of the learned model are partially updated by the SSL task during the fully test-time training phase (typically the first $k$ layers of the model are fixed). We denote the updated part of the model as $(\theta'_{k+1}, ..., \theta'_K)$. In the inference phase, the model $(\theta_1, ..., \theta_k, \theta'_{k+1}, ..., \theta'_K)$ is used to make predictions for the test data. The proposed LLMTTT pipeline. The traditional FTTT pipeline aims at adapting a pre-trained model to streaming test-time data under an unsupervised setting. However, it is not trivial to design an appropriate and effective unsupervised task, which is supposed to be positively correlated with the main training task [44]. To solve this problem, we introduce a novel pipeline named LLMTTT, which, with the assistance of LLMs, substitutes a semi-supervised task for the unsupervised task during the test-time training phase. The proposed pipeline can be formally defined as follows. Given a model $f(x;\theta)$ initialized with parameters $\theta_s$ obtained by pre-training on the training data, we select the most valuable samples from the test nodes under a limited budget using a carefully designed hybrid node selection method, denoted as $X_{tr} = \mathrm{ActAlg}(X_t)$. The selected samples are then given pseudo labels by LLMs, denoted as $D_{tr} = (X_{tr}, \hat{Y}_{tr})$ where $\hat{Y}_{tr} = \mathrm{LLM}_{anno}(X_{tr})$. After obtaining the labeled test nodes, we employ a two-stage training strategy that incorporates both the labeled test nodes $D_{tr}$ and the unlabeled test nodes $D_{te}$. The LLMTTT task aims to optimize the model as: $\theta^* := \arg\min_{\theta} \big( \mathbb{E}_{(x,\hat{y}) \in D_{tr}}[L_C(f(x;\theta), \hat{y})] + \mathbb{E}_{x \in D_{te}}[L_U(f(x;\theta))] \big)$, (1) where $X_{tr} = \emptyset$ in FTTT and $X_{tr} = \mathrm{ActAlg}(X_t)$ in LLMTTT, subject to $|X_{tr}| \le B$, (2) $L_C$ is the cross-entropy loss, $L_U$ is an unsupervised learning loss, and $B$ is the budget. $D_{te}$ denotes the nodes in the test data $U_{te}$ that have not been labeled, and $\hat{y}$ is the pseudo label given by LLMs. 3 METHOD In this section, we introduce the novel LLM-based fully test-time training framework (LLMTTT) for the graph OOD problem.
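As a concrete reference point for the objective in Eq. (1), the following is a minimal PyTorch-style sketch of the combined loss; the model(graph, feats) signature, the choice of a consistency term as the unsupervised loss $L_U$, and all variable names are illustrative assumptions rather than the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def llmttt_loss(model, graph, feats, labeled_idx, pseudo_labels, unlabeled_idx, aug_feats=None):
    """Sketch of Eq. (1): supervised loss L_C on LLM-annotated test nodes (D_tr)
    plus an unsupervised loss L_U on the remaining test nodes (D_te)."""
    logits = model(graph, feats)                                  # f(x; theta) on all test nodes
    loss_c = F.cross_entropy(logits[labeled_idx], pseudo_labels)  # L_C on D_tr
    if aug_feats is None:
        return loss_c
    # One simple instantiation of L_U: consistency with an augmented view of the features.
    logits_aug = model(graph, aug_feats)
    targets = F.softmax(logits[unlabeled_idx], dim=-1).detach()
    loss_u = F.cross_entropy(logits_aug[unlabeled_idx], targets)  # soft-target cross entropy
    return loss_c + loss_u
```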
We first delineate the overall framework and then detail the specific components of LLMTTT. 3.1 An Overview of LLMTTT The LLMTTT pipeline proposed in this paper is illustrated in Fig. 1 and consists of three parts: the pre-training phase, the fully test-time training phase, and the inference phase, as follows. Pre-training phase. The objective of this phase is to acquire a pre-trained classification model with optimized parameters capable of accurately predicting labels for the training data $D_s$. It is worth noting that only the model parameters $\theta$ and the test data $U_{te}$ are required for the subsequent test-time model adaptation; therefore, LLMTTT is a model-agnostic framework. Fully test-time training phase. The objective of our proposed approach is to utilize the annotation capabilities of LLMs to enhance test-time training and thereby handle the OOD problem on graphs. We encounter several challenges in achieving this goal: (1) How to select the most valuable nodes for annotation using LLMs within a constrained budget? To address this issue, LLMTTT proposes a hybrid active node selection method incorporating both the knowledge from the pre-trained model and the node characteristics; a detailed illustration is provided in Section 3.2. (2) How to obtain high-quality pseudo labels from LLMs? Given the candidate set of nodes, the quality of pseudo labels is crucial. Thus, we enhance the annotation by carefully designing various prompts, as described in Section 3.3. Moreover, the confidence scores of the LLMs' predictions are used for further node filtering. (3) How to effectively adapt the pre-trained model under noisy and limited labels? It is challenging to design a strategy that jointly leverages noisy labels and unlabeled test samples. To tackle this challenge we propose a two-stage training strategy comprising training with filtered nodes and self-training with unlabeled data; additional information is available in Section 3.4. After the two-stage test-time training, the pre-trained model is updated specifically for the test set. Inference phase. During the inference phase, the updated model is utilized to predict the labels of the test data, following the traditional model inference process. 3.2 Hybrid Active Node Selection Node selection is crucial in the design of LLMTTT. To improve model performance under a controllable budget, it is essential to select the most valuable nodes for test-time training. To achieve this goal, we need to consider not only the characteristics of the test data but also the predictions of the pre-trained model on the test data. Thus, LLMTTT proposes a two-step hybrid active node selection method. It consists of uncertainty-based active learning, which leverages the important signals from the pre-trained model, and distribution-based active learning, which exploits the data characteristics. The details of these two steps are illustrated in the following subsections. 3.2.1 Uncertainty-based active learning. To fully exploit the potential for model improvement from test-time model adaptation, LLMTTT targets nodes that are most difficult for the pre-trained GNN model to predict. To achieve this, uncertainty-based active learning is designed, which makes use of the prediction uncertainty indicated by prediction entropy [43] to select potential annotation nodes. Unlike other metrics of uncertainty [37, 48], entropy takes into account all class probabilities for a given $x$. Figure 1: The overall framework of LLMTTT, comprising the pre-training phase, the fully test-time training phase (hybrid active node selection, LLM-based confidence-aware annotation, and two-stage training), and the inference phase.
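To make the flow of Figure 1 concrete, the sketch below wires the three phases together; every callable passed in (select_nodes, annotate_with_llm, stage1_finetune, stage2_self_train) is a hypothetical stub standing in for the components described in Sections 3.2 to 3.4, not code from the paper.

```python
def llmttt_pipeline(pretrained_gnn, test_graph, test_feats, budget,
                    select_nodes, annotate_with_llm, stage1_finetune, stage2_self_train):
    """Schematic of the LLMTTT phases; all callables are hypothetical stubs."""
    # Fully test-time training phase
    candidates = select_nodes(pretrained_gnn, test_graph, test_feats, budget)       # Sec. 3.2
    pseudo_labels, confidence = annotate_with_llm(test_graph, candidates)           # Sec. 3.3
    model = stage1_finetune(pretrained_gnn, candidates, pseudo_labels, confidence)  # Sec. 3.4.1
    model = stage2_self_train(model, test_graph, test_feats)                        # Sec. 3.4.2
    # Inference phase: predict labels for all test nodes with the adapted model
    return model(test_graph, test_feats).argmax(dim=-1)
```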
Specifically, for each node $v_i$, LLMTTT computes its prediction entropy $Q(v_i)$ based on the prediction results from the pre-trained GNN model, and then nodes with higher prediction entropy are more likely to be selected. The prediction entropy of node $v_i$ is calculated as follows: $Q(v_i) = -\sum_{c} p(y = c \mid x_i) \log p(y = c \mid x_i)$, (3) where $c$ ranges over the potential labels and $y$ is the predicted label given by the GNN. 3.2.2 Distribution-based active learning. The nodes selected through uncertainty sampling often exhibit high correlation within the same neighborhood. As a result, the distribution of the selected node set deviates from the original distribution, significantly compromising the diversity and representativeness of the node candidate set [47]. With this rationale, LLMTTT further refines node selection using distribution-based methods to emphasize the crucial data distribution characteristics. To be specific, a combination of PageRank [32] and FeatProp [57] is employed to capture the node distribution from both the structural and the feature perspective. 3.2.3 The Selection Algorithm. The hybrid active node selection process is summarized in Algorithm 1. In order to select the most valuable $B$ nodes, a scaling factor $\beta$ is introduced to broaden the range of selection in the first step. Initially, LLMTTT picks the $\beta B$ samples, with $\beta > 1$, that exhibit the highest level of uncertainty. To consider both structural and feature attributes, we devise a composite active learning score $F(v_i)$ as the criterion for distribution-based selection. Subsequently, $B$ samples that exhibit both uncertainty and diversity are selected.
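As a small sketch of the two scores that drive Algorithm 1 below: the prediction entropy $Q(v_i)$ of Eq. (3) and the composite distribution score $F(v_i)$. Taking GNN logits and precomputed PageRank and FeatProp scores as tensors is our simplifying assumption; the paper's exact scoring code is not reproduced here.

```python
import torch
import torch.nn.functional as F

def uncertainty_scores(logits):
    """Prediction entropy Q(v_i) of Eq. (3), computed from GNN logits."""
    p = F.softmax(logits, dim=-1)
    return -(p * torch.log(p.clamp_min(1e-12))).sum(dim=-1)

def composite_scores(pagerank_score, featprop_score, alpha):
    """Composite distribution score F(v_i) = Score_pagerank + alpha * Score_featprop."""
    return pagerank_score + alpha * featprop_score

def hybrid_select(logits, pagerank_score, featprop_score, budget, beta=2.0, alpha=1.0):
    """Keep the beta*B most uncertain nodes, then pick the top-B of them by the composite score."""
    q = uncertainty_scores(logits)
    pool = torch.topk(q, k=min(int(beta * budget), q.numel())).indices
    f = composite_scores(pagerank_score[pool], featprop_score[pool], alpha)
    return pool[torch.topk(f, k=min(budget, pool.numel())).indices]
```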
Algorithm 1: The Selection Algorithm.
Input: $X_t$, GCN. Output: $X_{tr}$.
1: $Y = \mathrm{GCN}(X_t)$
2: $Q_{\mathrm{list}} = [\,]$
3: for $v_i$ in $X_t$ do
4:   $Q(v_i) = -\sum_{c} p(y = c \mid x_i) \log p(y = c \mid x_i)$
5:   append $Q(v_i)$ to $Q_{\mathrm{list}}$
6: end for
7: $S_{\beta B} \leftarrow$ the $\beta B$ nodes of $X_t$ with the largest $Q(v_i)$
8: $F_{\mathrm{list}} = [\,]$
9: for $v_i$ in $S_{\beta B}$ do
10:   $F(v_i) = \mathrm{Score}_{pagerank}(v_i) + \alpha \times \mathrm{Score}_{featprop}(v_i)$
11:   append $F(v_i)$ to $F_{\mathrm{list}}$
12: end for
13: $X_{tr} \leftarrow$ the $B$ nodes of $S_{\beta B}$ with the largest $F(v_i)$
14: return $X_{tr}$
3.3 Confidence-aware High-quality Annotation Given the set of selected nodes, the quality of their pseudo labels plays an important role in the performance after test-time training, based on the empirical study in Section 5.3.1. Therefore, it is imperative to make full use of the LLMs and the pre-trained GNN to obtain high-quality annotations after acquiring the candidate node set via hybrid active learning. Inspired by existing explorations of LLMs on graphs [6, 7, 14], LLMTTT builds its prompts on the \"few-shot\" strategy, which is described in Appendix B. Specifically, information about some labeled nodes from the training set serves as part of the prompt. Moreover, the prediction results from the pre-trained GNN are also included in the prompt. In addition, to evaluate the quality of the LLM's annotations, we further request from the LLM a prediction confidence for each pseudo label. 3.4 Two-Stage Training After the LLMs' annotation of the selected nodes, LLMTTT moves to the next phase, the test-time training phase. The proposed LLMTTT utilizes pseudo labels for semi-supervised training during test time instead of an unsupervised training strategy. However, given the LLM annotation budget, the pseudo labels are too few to effectively adapt the model and may even lead to a biased adaptation. To tackle this challenge, we further design test-time training by integrating unsupervised learning with supervised learning to better leverage the information from all test nodes. In a nutshell, during the test-time training phase, LLMTTT first trains the model with filtered nodes so as to reduce the impact of noisy labels, and then leverages self-training to incorporate the information from the unlabeled data. 3.4.1 Stage 1: Training with filtered nodes. The pseudo labels generated by LLMs are noisy and not entirely accurate, which may adversely affect the model.
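Returning briefly to the annotation step of Section 3.3, the following is a rough sketch of a confidence-aware annotation request, using the OpenAI chat API as an example backend (the experiments in Section 5 report GPT-3.5-turbo-0613). The prompt template, the 'label, confidence' output format, and the parsing are purely illustrative; the paper's actual prompts are given in its Appendix B.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def annotate_node(node_text, few_shot_examples, gnn_prediction, label_names,
                  model="gpt-3.5-turbo-0613"):
    """Ask the LLM for a pseudo label plus a confidence score for one test node.
    The prompt wording here is an illustrative stand-in, not the paper's template."""
    shots = "\n".join(f"Text: {t}\nLabel: {y}" for t, y in few_shot_examples)
    prompt = (
        f"Classify the node into one of {label_names}.\n"
        f"Here are a few labeled examples:\n{shots}\n\n"
        f"A pre-trained GNN predicts: {gnn_prediction}.\n"
        f"Text: {node_text}\n"
        "Answer with 'label, confidence' where confidence is a number in [0, 1]."
    )
    resp = client.chat.completions.create(model=model,
                                          messages=[{"role": "user", "content": prompt}])
    label, conf = resp.choices[0].message.content.rsplit(",", 1)
    return label.strip(), float(conf)
```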
Therefore, we obtain the confidence of the LLM's predictions through the confidence-aware high-quality annotation of Section 3.3. To mitigate the potential impact of noisy pseudo labels, LLMTTT performs node filtering, excluding nodes based on their confidence scores. However, this may cause label imbalance in the annotated node set. To avoid this issue, LLMTTT takes label diversity into consideration during the node filtering process. To quantify the change in diversity, we adopt the Change of Entropy (COE), inspired by [7], which measures the shift in the entropy of the labels when a node is removed from the set. Specifically, assuming that the current set of selected nodes is denoted as $V$, COE can be defined as $COE(v_i) = H(\hat{y}_{V - \{v_i\}}) - H(\hat{y}_V)$, where $H(\cdot)$ is the Shannon entropy function [43] and $\hat{y}$ denotes the annotations generated by LLMs. A larger COE value indicates that the removal of the node has a more pronounced impact on the diversity of the node set. To conclude, we integrate COE with the confidence score provided by LLMs to effectively balance both diversity and annotation quality. The final filtering score of each node can be expressed as $\mathrm{Score}_{filter}(v_i) = \mathrm{Score}_{conf}(v_i) - \gamma \times COE(v_i)$. The annotated nodes with relatively high filtering scores are selected for the few-shot test-time learning. The filtered nodes, along with their corresponding pseudo labels, are then utilized as supervision for model adaptation. In this case, the cross-entropy loss $L_C$ is employed in Stage 1. 3.4.2 Stage 2: Self-training with unlabeled nodes. To alleviate the potentially biased model adaptation caused by the limited noisy labels annotated by LLMs, the proposed LLMTTT designs an additional self-training stage, which aims at leveraging the information in the large amount of unlabeled test data. Inspired by SoftMatch [5], to fully leverage the unlabeled data, we further perform self-training on the fine-tuned GNN model with the unlabeled test data. Specifically, an augmented view is generated via DropEdge, and a weighted cross-entropy loss is then computed between the original view and the augmented view. Intuitively, the more confident the prediction is, the larger the role this node plays in the weighted cross-entropy loss.
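As a concrete reference for the two pieces just described, below is a sketch of the Stage-1 filtering score $\mathrm{Score}_{filter}(v_i) = \mathrm{Score}_{conf}(v_i) - \gamma \cdot COE(v_i)$ and of the confidence-weighted consistency loss that Eq. (4) formalizes next. The fixed-variance Gaussian weight is a simplified stand-in for the dynamically truncated Gaussian weighting adopted from SoftMatch, and the DropEdge-augmented graph is assumed to be built elsewhere.

```python
import torch
import torch.nn.functional as F

def label_entropy(labels, num_classes):
    """Shannon entropy H(.) of the empirical label distribution of a node set."""
    counts = torch.bincount(labels, minlength=num_classes).float()
    p = counts / counts.sum().clamp_min(1.0)
    return -(p * torch.log(p.clamp_min(1e-12))).sum()

def filtering_scores(pseudo_labels, confidence, num_classes, gamma=0.5):
    """Stage 1: Score_filter(v_i) = Score_conf(v_i) - gamma * COE(v_i)."""
    h_all = label_entropy(pseudo_labels, num_classes)
    coe = torch.stack([
        label_entropy(torch.cat([pseudo_labels[:i], pseudo_labels[i + 1:]]), num_classes) - h_all
        for i in range(pseudo_labels.numel())
    ])
    return confidence - gamma * coe

def weighted_consistency_loss(model, graph, aug_graph, feats, unlabeled_idx, sigma=0.5):
    """Stage 2: confidence-weighted cross entropy between the original and an augmented view
    (a fixed-variance stand-in for the truncated-Gaussian weighting of SoftMatch)."""
    with torch.no_grad():
        p = F.softmax(model(graph, feats)[unlabeled_idx], dim=-1)
        conf = p.max(dim=-1).values
        weight = torch.exp(-(1.0 - conf) ** 2 / (2 * sigma ** 2))  # more confident -> larger weight
    logits_aug = model(aug_graph, feats)[unlabeled_idx]            # e.g. a DropEdge-augmented graph
    per_node = F.cross_entropy(logits_aug, p, reduction="none")
    return (weight * per_node).mean()
```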
Formally, we denote the weighted cross-entropy loss $L_u$ as follows: $L_u = \sum_{i=1}^{N} \lambda(p(y \mid x_i)) \, H\big(p(y \mid x_i^a), p(y \mid x_i)\big)$, (4) where $p(y \mid x)$ denotes the model's prediction, $x_i$ is an unlabeled test node in $D_{te}$, $x_i^a$ represents its augmented view and $x_i$ the original data, $y$ is the prediction given by the updated model, $\lambda(p)$ is the sample weighting function with $p$ an abbreviation of $p(y \mid x)$, and $N$ is the number of unlabeled nodes. The sample weighting function is vital to this process. An ideal $\lambda(p)$ should accurately represent the original distribution while maintaining both high quantity and high quality. Despite its importance, $\lambda(p)$ is rarely explicitly or adequately defined in existing methods. Inherently different from previous methods, we assume that the weighting function $\lambda$ follows a dynamically truncated Gaussian distribution, following [5]. More details are provided in Appx. F. 4 THEORETICAL ANALYSIS Compared to the traditional TTT pipeline, LLMTTT introduces supervision into the model adaptation process. This section theoretically demonstrates that incorporating labeled test samples provided by LLMs during the test-time training phase can significantly improve the overall performance across the test domain, which also provides a theoretical guarantee for the proposed LLMTTT. To simplify the theoretical analysis, we consider the main task as a binary classification problem. Given a domain $X$ with two probability distributions $D_1$ and $D_2$, $h: X \rightarrow \{0, 1\}$ is a hypothesis serving as the prediction function from domain $X$ to a binary label space. Let $\mathcal{H}$ denote a hypothesis class with VC-dimension $d$. We employ the $\mathcal{H}\Delta\mathcal{H}$-distance as detailed in [1], which offers a fundamental metric to quantify the distribution shift between $D_1$ and $D_2$ over $X$. The discrepancy between $h$ and the true labeling function $g$ under distribution $D$ is formally expressed as $e(h, g) = \mathbb{E}_{x \sim D}[|h(x) - g(x)|]$, commonly known as the domain error $e(h)$. Building upon two lemmas [12] provided in Appx. G, we establish theoretical bounds under the LLMTTT setting when minimizing the empirical weighted error using the hypothesis $h$. Thm. 1 characterizes the error bound in the LLMTTT setting, which quantifies the generalization error. Expanding on this, Thm. 2 establishes that this upper bound can be effectively reduced by integrating a portion of labeled test data, compared with FTTT. Theorem 1. Consider data domains $X_s$, $X_t$, and let $S_i$ represent unlabeled samples of size $m_i$ sampled from each of the two domains respectively.
The total number of samples in $X_{train}$ is $N$, with a sample number ratio of $\lambda = (\lambda_0, \lambda_1)$ in each component. If $\hat{h} \in \mathcal{H}$ minimizes the empirical weighted error $\hat{e}_{\omega}(h)$ using the weight vector $\omega = (\omega_0, \omega_1)$ on $X_{train}$, and $h^*_j = \arg\min_{h \in \mathcal{H}} e_j(h)$ is the optimal hypothesis within the $j$-th domain, then for any $\delta \in (0, 1)$, with probability exceeding $1 - \delta$, the following holds: $e_j(\hat{h}) - e_j(h^*_j) \le \sum_{i=0, i \ne j}^{1} \omega_i \Big( \hat{d}_{\mathcal{H}\Delta\mathcal{H}}(S_i, S_j) + 4\sqrt{\frac{2d\log(2m) + \log(2/\delta)}{m}} + \epsilon_{ij} \Big) + C$, (5) where $C = 2\sqrt{\Big(\sum_{i=0}^{1} \frac{\omega_i^2}{\lambda_i}\Big)\frac{d\log(2N) - \log(\delta)}{2N}}$ and $\epsilon_{ij} = \min_{h \in \mathcal{H}} \{ e_i(h) + e_j(h) \}$. Remark. The domain error is determined by three factors: the distribution of the training data ($C$), the estimated distribution shift ($\hat{d}_{\mathcal{H}\Delta\mathcal{H}}(S_i, S_j)$), and the performance of the joint hypothesis ($\epsilon_{ij}$). The ideal joint hypothesis error $\epsilon_{ij}$ assesses the intrinsic adaptability between domains. Additional theoretical analysis can be found in Appx. G. Furthermore, Thm. 1 can be used to derive bounds for the test domain error $e_T$. Considering the optimal test hypothesis $h^*_T = \arg\min_{h \in \mathcal{H}} e_T(h)$, we obtain $\big| e_T(\hat{h}) - e_T(h^*_T) \big| \le \omega_0 \Big( \hat{d}_{\mathcal{H}\Delta\mathcal{H}}(S_0, S_T) + 4\sqrt{\frac{2d\log(2m) + \log(2/\delta)}{m}} + \epsilon \Big) + 2\sqrt{\frac{\omega_0^2}{\lambda_0} + \frac{(1-\omega_0)^2}{1-\lambda_0}} \sqrt{\frac{d\log(2N) - \log(\delta)}{2N}}$. (6) Thm. 1 formally bounds the domain error $e_j(\hat{h})$, and furthermore, we can utilize the test domain error $e_T(\hat{h})$ to verify the significance of incorporating labeled data. The following theorem presents a direct theoretical guarantee that LLMTTT decreases the error bound on the test domain compared to traditional TTT in the absence of labeled test data. Theorem 2. Let $\mathcal{H}$ be a hypothesis class with VC-dimension $d$.
Considering the LLMTTT data domains $X_s$ and $X_t$, if $\hat{h} \in \mathcal{H}$ minimizes the empirical weighted error $\hat{e}_{\omega}(h)$ using the weight vector $\omega$ on the training set $X_{tr}$, let $\epsilon(\omega, \lambda, N)$ denote the upper bound of $\big| e(\hat{h}) - e(h^*) \big|$. In the FTTT scenario, no samples from the test domain are selected for labeling (i.e., for weight and sample ratio vectors $\omega'$ and $\lambda'$, $\omega'_0 = \lambda'_0 = 1$ and $\omega'_1 = \lambda'_1 = 0$). Then in LLMTTT, for any $\lambda \ne \lambda'$, there exists a weight vector $\omega$ such that: $\epsilon_T(\omega, \lambda, N) < \epsilon_T(\omega', \lambda', N)$. (7) Remark. Thm. 2 suggests that even a small number of labeled examples during the testing phase can improve the overall model performance, thereby validating the effectiveness of the proposed LLMTTT in addressing distribution shifts. All proofs are provided in Appx. G. 5 EXPERIMENT This section presents extensive experiments to evaluate the performance of the proposed LLMTTT. First, we provide a detailed explanation of the experimental setup. Then, the investigation aims to answer the following research questions: RQ1. How effective is LLMTTT in the OOD generalization scenario? RQ2. How should prompts be designed to obtain high-quality pseudo labels? RQ3. How does the node set used for training affect LLMTTT performance? RQ4. What are the contributions of the two-stage training strategy in the LLMTTT framework? 5.1 Experimental Settings 5.1.1 Datasets. We adopt the following TAG datasets for node classification: CORA [34], PUBMED [40], CITESEER [10], WIKICS [35] and OGBN-ARXIV [22]. Inspired by GOOD [13], we explicitly distinguish between covariate and concept shifts and design data splits that accurately reflect the different shifts. We use two domain selection strategies combined with covariate and concept shift, thereby obtaining four different splits. For specific details regarding the OOD datasets, please refer to Appendix A. We present the results for concept_degree and covariate_word in Table 1; additional experimental results can be found in Table 9 in Appendix E. 5.1.2 Evaluation and Implementation. We adopt the widely used metric accuracy (ACC) to evaluate model performance. All experiments were conducted five times using different seeds, and the mean performance is reported. GPT-3.5-turbo-0613 is adopted to generate annotations. Regarding the prompting strategy for generating annotations, balancing cost and accuracy led to the adoption of the few-shot strategy. The budgets for active selection are detailed in Table 4 in Appendix A. The pipeline can be applied to any GNN model; the popular GCN [27] is adopted in this experiment, and the results for other GNN backbones (GAT and GraphSAGE) are detailed in Appendix E. Instead of undergoing complex parameter tuning, we fix the learning rate used in prior studies for all datasets. The code and more implementation details are available in the supplementary material. 5.1.3 Baselines.
We compare LLMTTT with baseline approaches, including (1) EERM [56], a recent state-of-the-art (SOTA) method specifically designed for graph OOD issues; (2) Tent [49], a test-time training method from the field of image classification; (3) GTrans [25], a test-time graph transformation approach for node classification; and (4) HomoTTT [62], a fully test-time training method that utilizes Self-Supervised Learning (SSL) to fine-tune the pre-trained GNN model. 5.2 Performance on OOD Generalization (RQ1) To answer RQ1, we conduct a comparative analysis with the four other OOD generalization methods. The ACC results are reported in Table 1. From the comparison results, we make the following findings. (1) The results in Table 1 show that the proposed LLMTTT performs exceptionally well in the node classification task on the OOD datasets, surpassing all baseline methods. (2) EERM exhibits good performance compared with Tent, but it is constrained by computing resources. This further suggests that post-hoc methods (i.e., FTTT) are better suited for real-world applications due to their plug-and-play simplicity, which does not interfere with the costly training process associated with pre-trained models. (3) GTrans and HomoTTT are both FTTT-based methods. The superior performance of LLMTTT over them illustrates that even a limited number of labeled test instances can significantly enhance test-time training performance. More results under other split methods are presented in Table 9 in Appendix E and further demonstrate the effectiveness of the proposed LLMTTT. 5.3 Performance on Different Prompts (RQ2) It is intuitively believed that higher-quality pseudo labels can more effectively assist model fine-tuning during TTT. However, LLMs cannot always produce high-quality labels. Therefore, it is necessary to compare the accuracy of labels generated by LLMs under different prompts in order to obtain better-quality labels. Before proceeding, we evaluate this conjecture with a simple experiment. 5.3.1 The Importance of Pseudo Label Accuracy. In this part, the relationship between LLMTTT performance and LLM accuracy is explored. After fixing the node candidate set, the accuracy of the labels of the selected nodes is artificially controlled. The experimental results in Fig. 2 confirm our conjecture that pseudo label accuracy decides the ceiling of LLMTTT accuracy under a fixed node selection method. 5.3.2 LLM and TTT Accuracy under Different Prompts. In this part, we test the accuracy of the pseudo labels provided by the LLM under different prompts on test samples. Specifically, we used the following prompts: (1) zero-shot; (2) few-shot; (3) few-shot with GNN; (4) few-shot with 2-hop summary. Briefly speaking, \"zero-shot\" denotes that there is no ground-truth label information, while \"few-shot\" means that some ground-truth labels from the training set are provided. In addition, \"few-shot with GNN\" further incorporates the information from the pre-trained GNN model on top of \"few-shot\", and \"few-shot with 2-hop summary\" refers to a twice-request prompt strategy [6], which includes both the target node information and aggregated information from its neighboring nodes. We conducted a comparative study to identify strategies that are effective in terms of both accuracy and cost-effectiveness. Table 1: The comparison results between LLMTTT and representative baselines.
concept_degree:
| dataset | LLMTTT | EERM | GTrans | Tent | HomoTTT |
| cora | 88.53±0.01 | 88.44±0.98 | 85.75±0.02 | 87.21±0.00 | 87.04±0.00 |
| pubmed | 86.22±0.00 | OOM | 79.64±0.13 | 85.09±0.00 | 85.09±0.00 |
| citeseer | 79.67±0.00 | 69.30±1.81 | 69.43±0.23 | 70.48±0.00 | 70.48±0.00 |
| wikics | 80.02±0.02 | 79.89±0.10 | 75.68±0.23 | 78.63±0.00 | 78.89±0.00 |
| ogbn-arxiv | 73.82±0.00 | OOM | 63.81±0.21 | 65.40±0.00 | 66.74±0.00 |
covariate_word:
| dataset | LLMTTT | EERM | GTrans | Tent | HomoTTT |
| cora | 92.25±0.02 | 92.14±0.40 | 90.04±0.11 | 90.41±0.00 | 90.51±0.00 |
| pubmed | 86.97±0.01 | OOM | 79.44±0.11 | 86.56±0.00 | 86.49±0.00 |
| citeseer | 86.33±0.10 | 71.94±0.78 | 69.43±0.23 | 75.86±0.00 | 76.02±0.00 |
| wikics | 86.35±0.00 | 85.44±0.23 | 79.77±0.10 | 82.27±0.00 | 82.45±0.00 |
| ogbn-arxiv | 75.06±0.00 | OOM | 69.98±0.12 | 70.16±0.00 | 70.32±0.00 |
Figure 2: Investigation of how different LLM accuracies affect the performance of LLMTTT; \"random\" denotes random-based selection and \"pagerank\" denotes PageRank-based selection.
Table 2: Accuracy of pseudo labels annotated by LLMs under different prompts; (·) indicates the cost, determined by comparing the token consumption to that of zero-shot prompts.
| dataset | zero-shot | few-shot | few-shot with GNN | few-shot with 2-hop summary |
| cora | 64.40 (1.0) | 67.03 (2.2) | 86.02 (2.4) | 68.10 (3.1) |
| pubmed | 87.84 (1.0) | 91.23 (2.0) | 75.50 (2.2) | 81.35 (3.2) |
| citeseer | 60.92 (1.0) | 74.03 (2.1) | 65.41 (2.3) | 77.43 (3.3) |
| wikics | 66.02 (1.0) | 65.15 (2.6) | 69.88 (2.7) | 55.05 (3.2) |
Observation 1. The benefits of a neighborhood summary are not universal across all datasets. The results presented in Table 2 demonstrate that using a few-shot prompt to aggregate neighbor information can result in performance improvements. However, prompts incorporating structural information may also be adversely affected by heterogeneous neighboring nodes, as evidenced by the significant degradation of the LLM's performance on PUBMED and WIKICS after incorporating structural information.
Table 3: The results of different active selection strategies; hybrid is our method, pagerank, featprop and entropy are its components, and random, density and degree are traditional AL baselines.
| dataset | hybrid | pagerank | featprop | entropy | random | density | degree |
| cora | 87.34 | 86.62 | 86.86 | 87.10 | 86.62 | 86.62 | 86.86 |
| pubmed | 82.52 | 81.22 | 81.32 | 83.32 | 85.94 | 83.56 | 81.27 |
| citeseer | 76.85 | 69.07 | 75.21 | 76.39 | 73.08 | 72.37 | 69.42 |
| wikics | 74.15 | 73.22 | 72.73 | 73.70 | 72.76 | 74.10 | 73.22 |
The integrated prompt, which incorporates the predictive information from a pre-trained GNN model, does not consistently yield positive results across all datasets; additionally, its performance is intricately tied to the effectiveness of the pre-trained model. Given the above observations and the costs under different prompts shown in Table 2, we adopt the few-shot prompt approach with the aim of attaining more general and superior performance. Meanwhile, the failure of \"few-shot with 2-hop summary\" also motivates us to design a prompt that can accurately represent the graph structure, so that LLMs can be more effectively employed to solve graph-level tasks. 5.4 Impact of Candidate Node Set (RQ3) The nodes utilized for model training undergo two selection processes: initially, a hybrid active node selection strategy is employed, followed by a post-filtering strategy that leverages the prediction results obtained from LLMs. 5.4.1 Impact of Active Selection Strategies. First, we explore the various components of the hybrid active node selection, including PageRank, FeatProp, and entropy.
Second, we compare our hybrid node selection strategy with traditional active learning methods, such as density- and degree-based selection, as well as random node selection. From Table 3, we find that traditional active learning methods are not as applicable and effective as expected in our scenario. Based on the empirical results, the study makes the following observation. Observation 2. Prior research [7] has demonstrated that nodes in proximity to cluster centers often exhibit higher annotation quality; consequently, the node set selected by the density-based active strategy is assigned high-quality annotations. However, the density-based active selection strategy does not achieve the optimal performance. This gives us the intuition that the improvement depends not only on LLM accuracy but also on node selection. Figure 3: The results of different post-filtering strategies; \"none\" denotes graph active selection without post-filtering, \"conf_only\" denotes graph active selection combined with confidence, and \"conf_COE\" denotes graph active selection combined with confidence and COE. Appendix C further substantiates our conjecture by controlling the accuracy of the labels annotated by LLMs. 5.4.2 Impact of Post-Filtering. In this part, we examine the effectiveness of the proposed post-filtering strategy. Given that the proposed post-filtering strategy incorporates confidence scores and takes into account the diversity of nodes (COE), we also perform ablation experiments in this section. The experimental results are presented in Figure 3. Observation 3. The proposed post-filtering strategy demonstrates significant effectiveness. Furthermore, in line with our previous observation, although the nodes selected by \"conf_COE\" do not possess the most accurate labels, they deliver the best model performance. This suggests, from another perspective, that model performance is not fully positively correlated with pseudo label accuracy. 5.5 Ablation of Two-stage Training (RQ4) Our method adopts a two-stage training strategy for model adaptation, including training with filtered nodes and self-training with unlabeled nodes. To verify the effectiveness of each stage, we perform an ablation study to investigate whether incorporating training with filtered nodes or the self-training strategy leads to performance improvements. The results in Figure 4 indicate that both training stages contribute to model performance, with Stage 1 making the greater contribution. This not only underscores the effectiveness of our proposed two-stage training strategy but also further highlights that incorporating a limited number of labeled test instances enhances model performance. CONCLUSION We introduce a novel TTT pipeline, LLMTTT, which employs LLMs as annotators to provide a limited number of pseudo labels for fine-tuning the pre-trained model during test time. To select a candidate set that is both representative and diverse, the proposed pipeline designs a hybrid active selection that also considers the pre-trained model signal. Following this, we generate high-quality labels with corresponding confidence scores with the help of LLMs. Finally, we present a two-stage training strategy that maximizes the use of the test data. The strategy includes confidence-based post-filtering to mitigate the potential impact of the noisy labeled test data.
Additionally, a weighting function is used to bring a large amount of unlabeled test data into the training process. Comprehensive experiments and theoretical analysis demonstrate the effectiveness of LLMTTT."
+ },
+ {
+ "url": "http://arxiv.org/abs/2210.03561v2",
+ "title": "Empowering Graph Representation Learning with Test-Time Graph Transformation",
+ "abstract": "As powerful tools for representation learning on graphs, graph neural\nnetworks (GNNs) have facilitated various applications from drug discovery to\nrecommender systems. Nevertheless, the effectiveness of GNNs is immensely\nchallenged by issues related to data quality, such as distribution shift,\nabnormal features and adversarial attacks. Recent efforts have been made on\ntackling these issues from a modeling perspective which requires additional\ncost of changing model architectures or re-training model parameters. In this\nwork, we provide a data-centric view to tackle these issues and propose a graph\ntransformation framework named GTrans which adapts and refines graph data at\ntest time to achieve better performance. We provide theoretical analysis on the\ndesign of the framework and discuss why adapting graph data works better than\nadapting the model. Extensive experiments have demonstrated the effectiveness\nof GTrans on three distinct scenarios for eight benchmark datasets where\nsuboptimal data is presented. Remarkably, GTrans performs the best in most\ncases with improvements up to 2.8%, 8.2% and 3.8% over the best baselines on\nthree experimental settings. Code is released at\nhttps://github.com/ChandlerBang/GTrans.",
+ "authors": "Wei Jin, Tong Zhao, Jiayuan Ding, Yozen Liu, Jiliang Tang, Neil Shah",
+ "published": "2022-10-07",
+ "updated": "2023-02-26",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.AI",
+ "stat.ML"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2108.04475v2",
+ "title": "Localized Graph Collaborative Filtering",
+ "abstract": "User-item interactions in recommendations can be naturally de-noted as a\nuser-item bipartite graph. Given the success of graph neural networks (GNNs) in\ngraph representation learning, GNN-based C methods have been proposed to\nadvance recommender systems. These methods often make recommendations based on\nthe learned user and item embeddings. However, we found that they do not\nperform well wit sparse user-item graphs which are quite common in real-world\nrecommendations. Therefore, in this work, we introduce a novel perspective to\nbuild GNN-based CF methods for recommendations which leads to the proposed\nframework Localized Graph Collaborative Filtering (LGCF). One key advantage of\nLGCF is that it does not need to learn embeddings for each user and item, which\nis challenging in sparse scenarios. Alternatively, LGCF aims at encoding useful\nCF information into a localized graph and making recommendations based on such\ngraph. Extensive experiments on various datasets validate the effectiveness of\nLGCF especially in sparse scenarios. Furthermore, empirical results demonstrate\nthat LGCF provides complementary information to the embedding-based CF model\nwhich can be utilized to boost recommendation performance.",
+ "authors": "Yiqi Wang, Chaozhuo Li, Mingzheng Li, Wei Jin, Yuming Liu, Hao Sun, Xing Xie, Jiliang Tang",
+ "published": "2021-08-10",
+ "updated": "2022-01-05",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR",
+ "cs.AI"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/1902.07243v2",
+ "title": "Graph Neural Networks for Social Recommendation",
+ "abstract": "In recent years, Graph Neural Networks (GNNs), which can naturally integrate\nnode information and topological structure, have been demonstrated to be\npowerful in learning on graph data. These advantages of GNNs provide great\npotential to advance social recommendation since data in social recommender\nsystems can be represented as user-user social graph and user-item graph; and\nlearning latent factors of users and items is the key. However, building social\nrecommender systems based on GNNs faces challenges. For example, the user-item\ngraph encodes both interactions and their associated opinions; social relations\nhave heterogeneous strengths; users involve in two graphs (e.g., the user-user\nsocial graph and the user-item graph). To address the three aforementioned\nchallenges simultaneously, in this paper, we present a novel graph neural\nnetwork framework (GraphRec) for social recommendations. In particular, we\nprovide a principled approach to jointly capture interactions and opinions in\nthe user-item graph and propose the framework GraphRec, which coherently models\ntwo graphs and heterogeneous strengths. Extensive experiments on two real-world\ndatasets demonstrate the effectiveness of the proposed framework GraphRec. Our\ncode is available at \\url{https://github.com/wenqifan03/GraphRec-WWW19}",
+ "authors": "Wenqi Fan, Yao Ma, Qing Li, Yuan He, Eric Zhao, Jiliang Tang, Dawei Yin",
+ "published": "2019-02-19",
+ "updated": "2019-11-23",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR",
+ "cs.LG",
+ "cs.SI"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2210.08813v1",
+ "title": "Test-Time Training for Graph Neural Networks",
+ "abstract": "Graph Neural Networks (GNNs) have made tremendous progress in the graph\nclassification task. However, a performance gap between the training set and\nthe test set has often been noticed. To bridge such gap, in this work we\nintroduce the first test-time training framework for GNNs to enhance the model\ngeneralization capacity for the graph classification task. In particular, we\ndesign a novel test-time training strategy with self-supervised learning to\nadjust the GNN model for each test graph sample. Experiments on the benchmark\ndatasets have demonstrated the effectiveness of the proposed framework,\nespecially when there are distribution shifts between training set and test\nset. We have also conducted exploratory studies and theoretical analysis to\ngain deeper understandings on the rationality of the design of the proposed\ngraph test time training framework (GT3).",
+ "authors": "Yiqi Wang, Chaozhuo Li, Wei Jin, Rui Li, Jianan Zhao, Jiliang Tang, Xing Xie",
+ "published": "2022-10-17",
+ "updated": "2022-10-17",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.AI"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2307.03393v4",
+ "title": "Exploring the Potential of Large Language Models (LLMs) in Learning on Graphs",
+ "abstract": "Learning on Graphs has attracted immense attention due to its wide real-world\napplications. The most popular pipeline for learning on graphs with textual\nnode attributes primarily relies on Graph Neural Networks (GNNs), and utilizes\nshallow text embedding as initial node representations, which has limitations\nin general knowledge and profound semantic understanding. In recent years,\nLarge Language Models (LLMs) have been proven to possess extensive common\nknowledge and powerful semantic comprehension abilities that have\nrevolutionized existing workflows to handle text data. In this paper, we aim to\nexplore the potential of LLMs in graph machine learning, especially the node\nclassification task, and investigate two possible pipelines: LLMs-as-Enhancers\nand LLMs-as-Predictors. The former leverages LLMs to enhance nodes' text\nattributes with their massive knowledge and then generate predictions through\nGNNs. The latter attempts to directly employ LLMs as standalone predictors. We\nconduct comprehensive and systematical studies on these two pipelines under\nvarious settings. From comprehensive empirical results, we make original\nobservations and find new insights that open new possibilities and suggest\npromising directions to leverage LLMs for learning on graphs. Our codes and\ndatasets are available at https://github.com/CurryTang/Graph-LLM.",
+ "authors": "Zhikai Chen, Haitao Mao, Hang Li, Wei Jin, Hongzhi Wen, Xiaochi Wei, Shuaiqiang Wang, Dawei Yin, Wenqi Fan, Hui Liu, Jiliang Tang",
+ "published": "2023-07-07",
+ "updated": "2024-01-16",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.AI"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2108.01099v2",
+ "title": "Shift-Robust GNNs: Overcoming the Limitations of Localized Graph Training Data",
+ "abstract": "There has been a recent surge of interest in designing Graph Neural Networks\n(GNNs) for semi-supervised learning tasks. Unfortunately this work has assumed\nthat the nodes labeled for use in training were selected uniformly at random\n(i.e. are an IID sample). However in many real world scenarios gathering labels\nfor graph nodes is both expensive and inherently biased -- so this assumption\ncan not be met. GNNs can suffer poor generalization when this occurs, by\noverfitting to superfluous regularities present in the training data. In this\nwork we present a method, Shift-Robust GNN (SR-GNN), designed to account for\ndistributional differences between biased training data and the graph's true\ninference distribution. SR-GNN adapts GNN models for the presence of\ndistributional shifts between the nodes which have had labels provided for\ntraining and the rest of the dataset. We illustrate the effectiveness of SR-GNN\nin a variety of experiments with biased training datasets on common GNN\nbenchmark datasets for semi-supervised learning, where we see that SR-GNN\noutperforms other GNN baselines by accuracy, eliminating at least (~40%) of the\nnegative effects introduced by biased training data. On the largest dataset we\nconsider, ogb-arxiv, we observe an 2% absolute improvement over the baseline\nand reduce 30% of the negative effects.",
+ "authors": "Qi Zhu, Natalia Ponomareva, Jiawei Han, Bryan Perozzi",
+ "published": "2021-08-02",
+ "updated": "2021-10-26",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/1904.08547v1",
+ "title": "Deep Representation Learning for Social Network Analysis",
+ "abstract": "Social network analysis is an important problem in data mining. A fundamental\nstep for analyzing social networks is to encode network data into\nlow-dimensional representations, i.e., network embeddings, so that the network\ntopology structure and other attribute information can be effectively\npreserved. Network representation leaning facilitates further applications such\nas classification, link prediction, anomaly detection and clustering. In\naddition, techniques based on deep neural networks have attracted great\ninterests over the past a few years. In this survey, we conduct a comprehensive\nreview of current literature in network representation learning utilizing\nneural network models. First, we introduce the basic models for learning node\nrepresentations in homogeneous networks. Meanwhile, we will also introduce some\nextensions of the base models in tackling more complex scenarios, such as\nanalyzing attributed networks, heterogeneous networks and dynamic networks.\nThen, we introduce the techniques for embedding subgraphs. After that, we\npresent the applications of network representation learning. At the end, we\ndiscuss some promising research directions for future work.",
+ "authors": "Qiaoyu Tan, Ninghao Liu, Xia Hu",
+ "published": "2019-04-18",
+ "updated": "2019-04-18",
+ "primary_cat": "cs.SI",
+ "cats": [
+ "cs.SI",
+ "cs.LG",
+ "stat.ML"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2104.12080v1",
+ "title": "AdsGNN: Behavior-Graph Augmented Relevance Modeling in Sponsored Search",
+ "abstract": "Sponsored search ads appear next to search results when people look for\nproducts and services on search engines. In recent years, they have become one\nof the most lucrative channels for marketing. As the fundamental basis of\nsearch ads, relevance modeling has attracted increasing attention due to the\nsignificant research challenges and tremendous practical value. Most existing\napproaches solely rely on the semantic information in the input query-ad pair,\nwhile the pure semantic information in the short ads data is not sufficient to\nfully identify user's search intents. Our motivation lies in incorporating the\ntremendous amount of unsupervised user behavior data from the historical search\nlogs as the complementary graph to facilitate relevance modeling. In this\npaper, we extensively investigate how to naturally fuse the semantic textual\ninformation with the user behavior graph, and further propose three novel\nAdsGNN models to aggregate topological neighborhood from the perspectives of\nnodes, edges and tokens. Furthermore, two critical but rarely investigated\nproblems, domain-specific pre-training and long-tail ads matching, are studied\nthoroughly. Empirically, we evaluate the AdsGNN models over the large industry\ndataset, and the experimental results of online/offline tests consistently\ndemonstrate the superiority of our proposal.",
+ "authors": "Chaozhuo Li, Bochen Pang, Yuming Liu, Hao Sun, Zheng Liu, Xing Xie, Tianqi Yang, Yanling Cui, Liangjie Zhang, Qi Zhang",
+ "published": "2021-04-25",
+ "updated": "2021-04-25",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2208.09126v1",
+ "title": "GraphTTA: Test Time Adaptation on Graph Neural Networks",
+ "abstract": "Recently, test time adaptation (TTA) has attracted increasing attention due\nto its power of handling the distribution shift issue in the real world. Unlike\nwhat has been developed for convolutional neural networks (CNNs) for image\ndata, TTA is less explored for Graph Neural Networks (GNNs). There is still a\nlack of efficient algorithms tailored for graphs with irregular structures. In\nthis paper, we present a novel test time adaptation strategy named Graph\nAdversarial Pseudo Group Contrast (GAPGC), for graph neural networks TTA, to\nbetter adapt to the Out Of Distribution (OOD) test data. Specifically, GAPGC\nemploys a contrastive learning variant as a self-supervised task during TTA,\nequipped with Adversarial Learnable Augmenter and Group Pseudo-Positive Samples\nto enhance the relevance between the self-supervised task and the main task,\nboosting the performance of the main task. Furthermore, we provide theoretical\nevidence that GAPGC can extract minimal sufficient information for the main\ntask from information theory perspective. Extensive experiments on molecular\nscaffold OOD dataset demonstrated that the proposed approach achieves\nstate-of-the-art performance on GNNs.",
+ "authors": "Guanzi Chen, Jiying Zhang, Xi Xiao, Yang Li",
+ "published": "2022-08-19",
+ "updated": "2022-08-19",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.19523v5",
+ "title": "Harnessing Explanations: LLM-to-LM Interpreter for Enhanced Text-Attributed Graph Representation Learning",
+ "abstract": "Representation learning on text-attributed graphs (TAGs) has become a\ncritical research problem in recent years. A typical example of a TAG is a\npaper citation graph, where the text of each paper serves as node attributes.\nInitial graph neural network (GNN) pipelines handled these text attributes by\ntransforming them into shallow or hand-crafted features, such as skip-gram or\nbag-of-words features. Recent efforts have focused on enhancing these pipelines\nwith language models (LMs), which typically demand intricate designs and\nsubstantial computational resources. With the advent of powerful large language\nmodels (LLMs) such as GPT or Llama2, which demonstrate an ability to reason and\nto utilize general knowledge, there is a growing need for techniques which\ncombine the textual modelling abilities of LLMs with the structural learning\ncapabilities of GNNs. Hence, in this work, we focus on leveraging LLMs to\ncapture textual information as features, which can be used to boost GNN\nperformance on downstream tasks. A key innovation is our use of explanations as\nfeatures: we prompt an LLM to perform zero-shot classification, request textual\nexplanations for its decision-making process, and design an LLM-to-LM\ninterpreter to translate these explanations into informative features for\ndownstream GNNs. Our experiments demonstrate that our method achieves\nstate-of-the-art results on well-established TAG datasets, including Cora,\nPubMed, ogbn-arxiv, as well as our newly introduced dataset, tape-arxiv23.\nFurthermore, our method significantly speeds up training, achieving a 2.88\ntimes improvement over the closest baseline on ogbn-arxiv. Lastly, we believe\nthe versatility of the proposed method extends beyond TAGs and holds the\npotential to enhance other tasks involving graph-text data. Our codes and\ndatasets are available at: https://github.com/XiaoxinHe/TAPE.",
+ "authors": "Xiaoxin He, Xavier Bresson, Thomas Laurent, Adam Perold, Yann LeCun, Bryan Hooi",
+ "published": "2023-05-31",
+ "updated": "2024-03-07",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2205.14368v1",
+ "title": "Going Deeper into Permutation-Sensitive Graph Neural Networks",
+ "abstract": "The invariance to permutations of the adjacency matrix, i.e., graph\nisomorphism, is an overarching requirement for Graph Neural Networks (GNNs).\nConventionally, this prerequisite can be satisfied by the invariant operations\nover node permutations when aggregating messages. However, such an invariant\nmanner may ignore the relationships among neighboring nodes, thereby hindering\nthe expressivity of GNNs. In this work, we devise an efficient\npermutation-sensitive aggregation mechanism via permutation groups, capturing\npairwise correlations between neighboring nodes. We prove that our approach is\nstrictly more powerful than the 2-dimensional Weisfeiler-Lehman (2-WL) graph\nisomorphism test and not less powerful than the 3-WL test. Moreover, we prove\nthat our approach achieves the linear sampling complexity. Comprehensive\nexperiments on multiple synthetic and real-world datasets demonstrate the\nsuperiority of our model.",
+ "authors": "Zhongyu Huang, Yingheng Wang, Chaozhuo Li, Huiguang He",
+ "published": "2022-05-28",
+ "updated": "2022-05-28",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.DM",
+ "stat.ML"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/1910.02356v2",
+ "title": "Text Level Graph Neural Network for Text Classification",
+ "abstract": "Recently, researches have explored the graph neural network (GNN) techniques\non text classification, since GNN does well in handling complex structures and\npreserving global information. However, previous methods based on GNN are\nmainly faced with the practical problems of fixed corpus level graph structure\nwhich do not support online testing and high memory consumption. To tackle the\nproblems, we propose a new GNN based model that builds graphs for each input\ntext with global parameters sharing instead of a single graph for the whole\ncorpus. This method removes the burden of dependence between an individual text\nand entire corpus which support online testing, but still preserve global\ninformation. Besides, we build graphs by much smaller windows in the text,\nwhich not only extract more local features but also significantly reduce the\nedge numbers as well as memory consumption. Experiments show that our model\noutperforms existing models on several text classification datasets even with\nconsuming less memory.",
+ "authors": "Lianzhe Huang, Dehong Ma, Sujian Li, Xiaodong Zhang, Houfeng WANG",
+ "published": "2019-10-06",
+ "updated": "2019-10-08",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2202.02466v4",
+ "title": "Handling Distribution Shifts on Graphs: An Invariance Perspective",
+ "abstract": "There is increasing evidence suggesting neural networks' sensitivity to\ndistribution shifts, so that research on out-of-distribution (OOD)\ngeneralization comes into the spotlight. Nonetheless, current endeavors mostly\nfocus on Euclidean data, and its formulation for graph-structured data is not\nclear and remains under-explored, given two-fold fundamental challenges: 1) the\ninter-connection among nodes in one graph, which induces non-IID generation of\ndata points even under the same environment, and 2) the structural information\nin the input graph, which is also informative for prediction. In this paper, we\nformulate the OOD problem on graphs and develop a new invariant learning\napproach, Explore-to-Extrapolate Risk Minimization (EERM), that facilitates\ngraph neural networks to leverage invariance principles for prediction. EERM\nresorts to multiple context explorers (specified as graph structure editers in\nour case) that are adversarially trained to maximize the variance of risks from\nmultiple virtual environments. Such a design enables the model to extrapolate\nfrom a single observed environment which is the common case for node-level\nprediction. We prove the validity of our method by theoretically showing its\nguarantee of a valid OOD solution and further demonstrate its power on various\nreal-world datasets for handling distribution shifts from artificial spurious\nfeatures, cross-domain transfers and dynamic graph evolution.",
+ "authors": "Qitian Wu, Hengrui Zhang, Junchi Yan, David Wipf",
+ "published": "2022-02-05",
+ "updated": "2022-05-07",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.AI"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/1907.06347v1",
+ "title": "Discriminative Active Learning",
+ "abstract": "We propose a new batch mode active learning algorithm designed for neural\nnetworks and large query batch sizes. The method, Discriminative Active\nLearning (DAL), poses active learning as a binary classification task,\nattempting to choose examples to label in such a way as to make the labeled set\nand the unlabeled pool indistinguishable. Experimenting on image classification\ntasks, we empirically show our method to be on par with state of the art\nmethods in medium and large query batch sizes, while being simple to implement\nand also extend to other domains besides classification tasks. Our experiments\nalso show that none of the state of the art methods of today are clearly better\nthan uncertainty sampling when the batch size is relatively large, negating\nsome of the reported results in the recent literature.",
+ "authors": "Daniel Gissin, Shai Shalev-Shwartz",
+ "published": "2019-07-15",
+ "updated": "2019-07-15",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "stat.ML"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/1708.00489v4",
+ "title": "Active Learning for Convolutional Neural Networks: A Core-Set Approach",
+ "abstract": "Convolutional neural networks (CNNs) have been successfully applied to many\nrecognition and learning tasks using a universal recipe; training a deep model\non a very large dataset of supervised examples. However, this approach is\nrather restrictive in practice since collecting a large set of labeled images\nis very expensive. One way to ease this problem is coming up with smart ways\nfor choosing images to be labelled from a very large collection (ie. active\nlearning).\n Our empirical study suggests that many of the active learning heuristics in\nthe literature are not effective when applied to CNNs in batch setting.\nInspired by these limitations, we define the problem of active learning as\ncore-set selection, ie. choosing set of points such that a model learned over\nthe selected subset is competitive for the remaining data points. We further\npresent a theoretical result characterizing the performance of any selected\nsubset using the geometry of the datapoints. As an active learning algorithm,\nwe choose the subset which is expected to yield best result according to our\ncharacterization. Our experiments show that the proposed method significantly\noutperforms existing approaches in image classification experiments by a large\nmargin.",
+ "authors": "Ozan Sener, Silvio Savarese",
+ "published": "2017-08-01",
+ "updated": "2018-06-01",
+ "primary_cat": "stat.ML",
+ "cats": [
+ "stat.ML",
+ "cs.CV",
+ "cs.LG"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2204.07965v1",
+ "title": "Entropy-based Active Learning for Object Detection with Progressive Diversity Constraint",
+ "abstract": "Active learning is a promising alternative to alleviate the issue of high\nannotation cost in the computer vision tasks by consciously selecting more\ninformative samples to label. Active learning for object detection is more\nchallenging and existing efforts on it are relatively rare. In this paper, we\npropose a novel hybrid approach to address this problem, where the\ninstance-level uncertainty and diversity are jointly considered in a bottom-up\nmanner. To balance the computational complexity, the proposed approach is\ndesigned as a two-stage procedure. At the first stage, an Entropy-based\nNon-Maximum Suppression (ENMS) is presented to estimate the uncertainty of\nevery image, which performs NMS according to the entropy in the feature space\nto remove predictions with redundant information gains. At the second stage, a\ndiverse prototype (DivProto) strategy is explored to ensure the diversity\nacross images by progressively converting it into the intra-class and\ninter-class diversities of the entropy-based class-specific prototypes.\nExtensive experiments are conducted on MS COCO and Pascal VOC, and the proposed\napproach achieves state of the art results and significantly outperforms the\nother counterparts, highlighting its superiority.",
+ "authors": "Jiaxi Wu, Jiaxin Chen, Di Huang",
+ "published": "2022-04-17",
+ "updated": "2022-04-17",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2303.09870v1",
+ "title": "TeSLA: Test-Time Self-Learning With Automatic Adversarial Augmentation",
+ "abstract": "Most recent test-time adaptation methods focus on only classification tasks,\nuse specialized network architectures, destroy model calibration or rely on\nlightweight information from the source domain. To tackle these issues, this\npaper proposes a novel Test-time Self-Learning method with automatic\nAdversarial augmentation dubbed TeSLA for adapting a pre-trained source model\nto the unlabeled streaming test data. In contrast to conventional self-learning\nmethods based on cross-entropy, we introduce a new test-time loss function\nthrough an implicitly tight connection with the mutual information and online\nknowledge distillation. Furthermore, we propose a learnable efficient\nadversarial augmentation module that further enhances online knowledge\ndistillation by simulating high entropy augmented images. Our method achieves\nstate-of-the-art classification and segmentation results on several benchmarks\nand types of domain shifts, particularly on challenging measurement shifts of\nmedical images. TeSLA also benefits from several desirable properties compared\nto competing methods in terms of calibration, uncertainty metrics,\ninsensitivity to model architectures, and source training strategies, all\nsupported by extensive ablations. Our code and models are available on GitHub.",
+ "authors": "Devavrat Tomar, Guillaume Vray, Behzad Bozorgtabar, Jean-Philippe Thiran",
+ "published": "2023-03-17",
+ "updated": "2023-03-17",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.04668v3",
+ "title": "Label-free Node Classification on Graphs with Large Language Models (LLMS)",
+ "abstract": "In recent years, there have been remarkable advancements in node\nclassification achieved by Graph Neural Networks (GNNs). However, they\nnecessitate abundant high-quality labels to ensure promising performance. In\ncontrast, Large Language Models (LLMs) exhibit impressive zero-shot proficiency\non text-attributed graphs. Yet, they face challenges in efficiently processing\nstructural data and suffer from high inference costs. In light of these\nobservations, this work introduces a label-free node classification on graphs\nwith LLMs pipeline, LLM-GNN. It amalgamates the strengths of both GNNs and LLMs\nwhile mitigating their limitations. Specifically, LLMs are leveraged to\nannotate a small portion of nodes and then GNNs are trained on LLMs'\nannotations to make predictions for the remaining large portion of nodes. The\nimplementation of LLM-GNN faces a unique challenge: how can we actively select\nnodes for LLMs to annotate and consequently enhance the GNN training? How can\nwe leverage LLMs to obtain annotations of high quality, representativeness, and\ndiversity, thereby enhancing GNN performance with less cost? To tackle this\nchallenge, we develop an annotation quality heuristic and leverage the\nconfidence scores derived from LLMs to advanced node selection. Comprehensive\nexperimental results validate the effectiveness of LLM-GNN. In particular,\nLLM-GNN can achieve an accuracy of 74.9% on a vast-scale dataset \\products with\na cost less than 1 dollar.",
+ "authors": "Zhikai Chen, Haitao Mao, Hongzhi Wen, Haoyu Han, Wei Jin, Haiyang Zhang, Hui Liu, Jiliang Tang",
+ "published": "2023-10-07",
+ "updated": "2024-02-24",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.AI",
+ "cs.CL"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/1901.05954v1",
+ "title": "Diverse mini-batch Active Learning",
+ "abstract": "We study the problem of reducing the amount of labeled training data required\nto train supervised classification models. We approach it by leveraging Active\nLearning, through sequential selection of examples which benefit the model\nmost. Selecting examples one by one is not practical for the amount of training\nexamples required by the modern Deep Learning models. We consider the\nmini-batch Active Learning setting, where several examples are selected at\nonce. We present an approach which takes into account both informativeness of\nthe examples for the model, as well as the diversity of the examples in a\nmini-batch. By using the well studied K-means clustering algorithm, this\napproach scales better than the previously proposed approaches, and achieves\ncomparable or better performance.",
+ "authors": "Fedor Zhdanov",
+ "published": "2019-01-17",
+ "updated": "2019-01-17",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "stat.ML"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2002.02126v4",
+ "title": "LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation",
+ "abstract": "Graph Convolution Network (GCN) has become new state-of-the-art for\ncollaborative filtering. Nevertheless, the reasons of its effectiveness for\nrecommendation are not well understood. Existing work that adapts GCN to\nrecommendation lacks thorough ablation analyses on GCN, which is originally\ndesigned for graph classification tasks and equipped with many neural network\noperations. However, we empirically find that the two most common designs in\nGCNs -- feature transformation and nonlinear activation -- contribute little to\nthe performance of collaborative filtering. Even worse, including them adds to\nthe difficulty of training and degrades recommendation performance.\n In this work, we aim to simplify the design of GCN to make it more concise\nand appropriate for recommendation. We propose a new model named LightGCN,\nincluding only the most essential component in GCN -- neighborhood aggregation\n-- for collaborative filtering. Specifically, LightGCN learns user and item\nembeddings by linearly propagating them on the user-item interaction graph, and\nuses the weighted sum of the embeddings learned at all layers as the final\nembedding. Such simple, linear, and neat model is much easier to implement and\ntrain, exhibiting substantial improvements (about 16.0\\% relative improvement\non average) over Neural Graph Collaborative Filtering (NGCF) -- a\nstate-of-the-art GCN-based recommender model -- under exactly the same\nexperimental setting. Further analyses are provided towards the rationality of\nthe simple LightGCN from both analytical and empirical perspectives.",
+ "authors": "Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yongdong Zhang, Meng Wang",
+ "published": "2020-02-06",
+ "updated": "2020-07-07",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR",
+ "cs.LG"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2006.10726v3",
+ "title": "Tent: Fully Test-time Adaptation by Entropy Minimization",
+ "abstract": "A model must adapt itself to generalize to new and different data during\ntesting. In this setting of fully test-time adaptation the model has only the\ntest data and its own parameters. We propose to adapt by test entropy\nminimization (tent): we optimize the model for confidence as measured by the\nentropy of its predictions. Our method estimates normalization statistics and\noptimizes channel-wise affine transformations to update online on each batch.\nTent reduces generalization error for image classification on corrupted\nImageNet and CIFAR-10/100 and reaches a new state-of-the-art error on\nImageNet-C. Tent handles source-free domain adaptation on digit recognition\nfrom SVHN to MNIST/MNIST-M/USPS, on semantic segmentation from GTA to\nCityscapes, and on the VisDA-C benchmark. These results are achieved in one\nepoch of test-time optimization without altering training.",
+ "authors": "Dequan Wang, Evan Shelhamer, Shaoteng Liu, Bruno Olshausen, Trevor Darrell",
+ "published": "2020-06-18",
+ "updated": "2021-03-18",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.CV",
+ "stat.ML"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2005.00687v7",
+ "title": "Open Graph Benchmark: Datasets for Machine Learning on Graphs",
+ "abstract": "We present the Open Graph Benchmark (OGB), a diverse set of challenging and\nrealistic benchmark datasets to facilitate scalable, robust, and reproducible\ngraph machine learning (ML) research. OGB datasets are large-scale, encompass\nmultiple important graph ML tasks, and cover a diverse range of domains,\nranging from social and information networks to biological networks, molecular\ngraphs, source code ASTs, and knowledge graphs. For each dataset, we provide a\nunified evaluation protocol using meaningful application-specific data splits\nand evaluation metrics. In addition to building the datasets, we also perform\nextensive benchmark experiments for each dataset. Our experiments suggest that\nOGB datasets present significant challenges of scalability to large-scale\ngraphs and out-of-distribution generalization under realistic data splits,\nindicating fruitful opportunities for future research. Finally, OGB provides an\nautomated end-to-end graph ML pipeline that simplifies and standardizes the\nprocess of graph data loading, experimental setup, and model evaluation. OGB\nwill be regularly updated and welcomes inputs from the community. OGB datasets\nas well as data loaders, evaluation scripts, baseline code, and leaderboards\nare publicly available at https://ogb.stanford.edu .",
+ "authors": "Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, Jure Leskovec",
+ "published": "2020-05-02",
+ "updated": "2021-02-25",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.SI",
+ "stat.ML"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/1911.07470v2",
+ "title": "Graph Transformer for Graph-to-Sequence Learning",
+ "abstract": "The dominant graph-to-sequence transduction models employ graph neural\nnetworks for graph representation learning, where the structural information is\nreflected by the receptive field of neurons. Unlike graph neural networks that\nrestrict the information exchange between immediate neighborhood, we propose a\nnew model, known as Graph Transformer, that uses explicit relation encoding and\nallows direct communication between two distant nodes. It provides a more\nefficient way for global graph structure modeling. Experiments on the\napplications of text generation from Abstract Meaning Representation (AMR) and\nsyntax-based neural machine translation show the superiority of our proposed\nmodel. Specifically, our model achieves 27.4 BLEU on LDC2015E86 and 29.7 BLEU\non LDC2017T10 for AMR-to-text generation, outperforming the state-of-the-art\nresults by up to 2.2 points. On the syntax-based translation tasks, our model\nestablishes new single-model state-of-the-art BLEU scores, 21.3 for\nEnglish-to-German and 14.1 for English-to-Czech, improving over the existing\nbest results, including ensembles, by over 1 BLEU.",
+ "authors": "Deng Cai, Wai Lam",
+ "published": "2019-11-18",
+ "updated": "2019-11-30",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.13095v1",
+ "title": "Enhancing Logical Reasoning in Large Language Models to Facilitate Legal Applications",
+ "abstract": "Language serves as a vehicle for conveying thought, enabling communication\namong individuals. The ability to distinguish between diverse concepts,\nidentify fairness and injustice, and comprehend a range of legal notions\nfundamentally relies on logical reasoning. Large Language Models (LLMs) attempt\nto emulate human language understanding and generation, but their competency in\nlogical reasoning remains limited. This paper seeks to address the\nphilosophical question: How can we effectively teach logical reasoning to LLMs\nwhile maintaining a deep understanding of the intricate relationship between\nlanguage and logic? By focusing on bolstering LLMs' capabilities in logical\nreasoning, we aim to expand their applicability in law and other\nlogic-intensive disciplines. To this end, we propose a Reinforcement Learning\nfrom Logical Feedback (RLLF) approach, which serves as a potential framework\nfor refining LLMs' reasoning capacities. Through RLLF and a revised evaluation\nmethodology, we explore new avenues for research in this domain and contribute\nto the development of LLMs capable of handling complex legal reasoning tasks\nwhile acknowledging the fundamental connection between language and logic.",
+ "authors": "Ha-Thanh Nguyen, Wachara Fungwacharakorn, Ken Satoh",
+ "published": "2023-11-22",
+ "updated": "2023-11-22",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.07688v1",
+ "title": "CyberMetric: A Benchmark Dataset for Evaluating Large Language Models Knowledge in Cybersecurity",
+ "abstract": "Large Language Models (LLMs) excel across various domains, from computer\nvision to medical diagnostics. However, understanding the diverse landscape of\ncybersecurity, encompassing cryptography, reverse engineering, and managerial\nfacets like risk assessment, presents a challenge, even for human experts. In\nthis paper, we introduce CyberMetric, a benchmark dataset comprising 10,000\nquestions sourced from standards, certifications, research papers, books, and\nother publications in the cybersecurity domain. The questions are created\nthrough a collaborative process, i.e., merging expert knowledge with LLMs,\nincluding GPT-3.5 and Falcon-180B. Human experts spent over 200 hours verifying\ntheir accuracy and relevance. Beyond assessing LLMs' knowledge, the dataset's\nmain goal is to facilitate a fair comparison between humans and different LLMs\nin cybersecurity. To achieve this, we carefully selected 80 questions covering\na wide range of topics within cybersecurity and involved 30 participants of\ndiverse expertise levels, facilitating a comprehensive comparison between human\nand machine intelligence in this area. The findings revealed that LLMs\noutperformed humans in almost every aspect of cybersecurity.",
+ "authors": "Norbert Tihanyi, Mohamed Amine Ferrag, Ridhi Jain, Merouane Debbah",
+ "published": "2024-02-12",
+ "updated": "2024-02-12",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI",
+ "cs.CR"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2309.09397v1",
+ "title": "Do Large GPT Models Discover Moral Dimensions in Language Representations? A Topological Study Of Sentence Embeddings",
+ "abstract": "As Large Language Models are deployed within Artificial Intelligence systems,\nthat are increasingly integrated with human society, it becomes more important\nthan ever to study their internal structures. Higher level abilities of LLMs\nsuch as GPT-3.5 emerge in large part due to informative language\nrepresentations they induce from raw text data during pre-training on trillions\nof words. These embeddings exist in vector spaces of several thousand\ndimensions, and their processing involves mapping between multiple vector\nspaces, with total number of parameters on the order of trillions. Furthermore,\nthese language representations are induced by gradient optimization, resulting\nin a black box system that is hard to interpret. In this paper, we take a look\nat the topological structure of neuronal activity in the \"brain\" of Chat-GPT's\nfoundation language model, and analyze it with respect to a metric representing\nthe notion of fairness. We develop a novel approach to visualize GPT's moral\ndimensions. We first compute a fairness metric, inspired by social psychology\nliterature, to identify factors that typically influence fairness assessments\nin humans, such as legitimacy, need, and responsibility. Subsequently, we\nsummarize the manifold's shape using a lower-dimensional simplicial complex,\nwhose topology is derived from this metric. We color it with a heat map\nassociated with this fairness metric, producing human-readable visualizations\nof the high-dimensional sentence manifold. Our results show that sentence\nembeddings based on GPT-3.5 can be decomposed into two submanifolds\ncorresponding to fair and unfair moral judgments. This indicates that GPT-based\nlanguage models develop a moral dimension within their representation spaces\nand induce an understanding of fairness during their training process.",
+ "authors": "Stephen Fitz",
+ "published": "2023-09-17",
+ "updated": "2023-09-17",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.CY",
+ "cs.LG",
+ "cs.NE"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.11406v2",
+ "title": "Don't Go To Extremes: Revealing the Excessive Sensitivity and Calibration Limitations of LLMs in Implicit Hate Speech Detection",
+ "abstract": "The fairness and trustworthiness of Large Language Models (LLMs) are\nreceiving increasing attention. Implicit hate speech, which employs indirect\nlanguage to convey hateful intentions, occupies a significant portion of\npractice. However, the extent to which LLMs effectively address this issue\nremains insufficiently examined. This paper delves into the capability of LLMs\nto detect implicit hate speech (Classification Task) and express confidence in\ntheir responses (Calibration Task). Our evaluation meticulously considers\nvarious prompt patterns and mainstream uncertainty estimation methods. Our\nfindings highlight that LLMs exhibit two extremes: (1) LLMs display excessive\nsensitivity towards groups or topics that may cause fairness issues, resulting\nin misclassifying benign statements as hate speech. (2) LLMs' confidence scores\nfor each method excessively concentrate on a fixed range, remaining unchanged\nregardless of the dataset's complexity. Consequently, the calibration\nperformance is heavily reliant on primary classification accuracy. These\ndiscoveries unveil new limitations of LLMs, underscoring the need for caution\nwhen optimizing models to ensure they do not veer towards extremes. This serves\nas a reminder to carefully consider sensitivity and confidence in the pursuit\nof model fairness.",
+ "authors": "Min Zhang, Jianfeng He, Taoran Ji, Chang-Tien Lu",
+ "published": "2024-02-18",
+ "updated": "2024-02-26",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2312.07420v1",
+ "title": "FairSISA: Ensemble Post-Processing to Improve Fairness of Unlearning in LLMs",
+ "abstract": "Training large language models (LLMs) is a costly endeavour in terms of time\nand computational resources. The large amount of training data used during the\nunsupervised pre-training phase makes it difficult to verify all data and,\nunfortunately, undesirable data may be ingested during training. Re-training\nfrom scratch is impractical and has led to the creation of the 'unlearning'\ndiscipline where models are modified to \"unlearn\" undesirable information\nwithout retraining. However, any modification can alter the behaviour of LLMs,\nespecially on key dimensions such as fairness. This is the first work that\nexamines this interplay between unlearning and fairness for LLMs. In\nparticular, we focus on a popular unlearning framework known as SISA [Bourtoule\net al., 2021], which creates an ensemble of models trained on disjoint shards.\nWe evaluate the performance-fairness trade-off for SISA, and empirically\ndemsontrate that SISA can indeed reduce fairness in LLMs. To remedy this, we\npropose post-processing bias mitigation techniques for ensemble models produced\nby SISA. We adapt the post-processing fairness improvement technique from\n[Hardt et al., 2016] to design three methods that can handle model ensembles,\nand prove that one of the methods is an optimal fair predictor for ensemble of\nmodels. Through experimental results, we demonstrate the efficacy of our\npost-processing framework called 'FairSISA'.",
+ "authors": "Swanand Ravindra Kadhe, Anisa Halimi, Ambrish Rawat, Nathalie Baracaldo",
+ "published": "2023-12-12",
+ "updated": "2023-12-12",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2401.00588v1",
+ "title": "Fairness in Serving Large Language Models",
+ "abstract": "High-demand LLM inference services (e.g., ChatGPT and BARD) support a wide\nrange of requests from short chat conversations to long document reading. To\nensure that all client requests are processed fairly, most major LLM inference\nservices have request rate limits, to ensure that no client can dominate the\nrequest queue. However, this rudimentary notion of fairness also results in\nunder-utilization of the resources and poor client experience when there is\nspare capacity. While there is a rich literature on fair scheduling, serving\nLLMs presents new challenges due to their unpredictable request lengths and\ntheir unique batching characteristics on parallel accelerators. This paper\nintroduces the definition of LLM serving fairness based on a cost function that\naccounts for the number of input and output tokens processed. To achieve\nfairness in serving, we propose a novel scheduling algorithm, the Virtual Token\nCounter (VTC), a fair scheduler based on the continuous batching mechanism. We\nprove a 2x tight upper bound on the service difference between two backlogged\nclients, adhering to the requirement of work-conserving. Through extensive\nexperiments, we demonstrate the superior performance of VTC in ensuring\nfairness, especially in contrast to other baseline methods, which exhibit\nshortcomings under various conditions.",
+ "authors": "Ying Sheng, Shiyi Cao, Dacheng Li, Banghua Zhu, Zhuohan Li, Danyang Zhuo, Joseph E. Gonzalez, Ion Stoica",
+ "published": "2023-12-31",
+ "updated": "2023-12-31",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI",
+ "cs.LG",
+ "cs.PF"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.15451v1",
+ "title": "Towards Enabling FAIR Dataspaces Using Large Language Models",
+ "abstract": "Dataspaces have recently gained adoption across various sectors, including\ntraditionally less digitized domains such as culture. Leveraging Semantic Web\ntechnologies helps to make dataspaces FAIR, but their complexity poses a\nsignificant challenge to the adoption of dataspaces and increases their cost.\nThe advent of Large Language Models (LLMs) raises the question of how these\nmodels can support the adoption of FAIR dataspaces. In this work, we\ndemonstrate the potential of LLMs in dataspaces with a concrete example. We\nalso derive a research agenda for exploring this emerging field.",
+ "authors": "Benedikt T. Arnold, Johannes Theissen-Lipp, Diego Collarana, Christoph Lange, Sandra Geisler, Edward Curry, Stefan Decker",
+ "published": "2024-03-18",
+ "updated": "2024-03-18",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.00884v2",
+ "title": "Text classification of column headers with a controlled vocabulary: leveraging LLMs for metadata enrichment",
+ "abstract": "Traditional dataset retrieval systems index on metadata information rather\nthan on the data values. Thus relying primarily on manual annotations and\nhigh-quality metadata, processes known to be labour-intensive and challenging\nto automate. We propose a method to support metadata enrichment with topic\nannotations of column headers using three Large Language Models (LLMs):\nChatGPT-3.5, GoogleBard and GoogleGemini. We investigate the LLMs ability to\nclassify column headers based on domain-specific topics from a controlled\nvocabulary. We evaluate our approach by assessing the internal consistency of\nthe LLMs, the inter-machine alignment, and the human-machine agreement for the\ntopic classification task. Additionally, we investigate the impact of\ncontextual information (i.e. dataset description) on the classification\noutcomes. Our results suggest that ChatGPT and GoogleGemini outperform\nGoogleBard for internal consistency as well as LLM-human-alignment.\nInterestingly, we found that context had no impact on the LLMs performances.\nThis work proposes a novel approach that leverages LLMs for text classification\nusing a controlled topic vocabulary, which has the potential to facilitate\nautomated metadata enrichment, thereby enhancing dataset retrieval and the\nFindability, Accessibility, Interoperability and Reusability (FAIR) of research\ndata on the Web.",
+ "authors": "Margherita Martorana, Tobias Kuhn, Lise Stork, Jacco van Ossenbruggen",
+ "published": "2024-03-01",
+ "updated": "2024-03-05",
+ "primary_cat": "cs.DB",
+ "cats": [
+ "cs.DB",
+ "cs.AI",
+ "cs.IR"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.15491v1",
+ "title": "Open Source Conversational LLMs do not know most Spanish words",
+ "abstract": "The growing interest in Large Language Models (LLMs) and in particular in\nconversational models with which users can interact has led to the development\nof a large number of open-source chat LLMs. These models are evaluated on a\nwide range of benchmarks to assess their capabilities in answering questions or\nsolving problems on almost any possible topic or to test their ability to\nreason or interpret texts. Instead, the evaluation of the knowledge that these\nmodels have of the languages has received much less attention. For example, the\nwords that they can recognize and use in different languages. In this paper, we\nevaluate the knowledge that open-source chat LLMs have of Spanish words by\ntesting a sample of words in a reference dictionary. The results show that\nopen-source chat LLMs produce incorrect meanings for an important fraction of\nthe words and are not able to use most of the words correctly to write\nsentences with context. These results show how Spanish is left behind in the\nopen-source LLM race and highlight the need to push for linguistic fairness in\nconversational LLMs ensuring that they provide similar performance across\nlanguages.",
+ "authors": "Javier Conde, Miguel Gonz\u00e1lez, Nina Melero, Raquel Ferrando, Gonzalo Mart\u00ednez, Elena Merino-G\u00f3mez, Jos\u00e9 Alberto Hern\u00e1ndez, Pedro Reviriego",
+ "published": "2024-03-21",
+ "updated": "2024-03-21",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.15007v1",
+ "title": "Did the Neurons Read your Book? Document-level Membership Inference for Large Language Models",
+ "abstract": "With large language models (LLMs) poised to become embedded in our daily\nlives, questions are starting to be raised about the dataset(s) they learned\nfrom. These questions range from potential bias or misinformation LLMs could\nretain from their training data to questions of copyright and fair use of\nhuman-generated text. However, while these questions emerge, developers of the\nrecent state-of-the-art LLMs become increasingly reluctant to disclose details\non their training corpus. We here introduce the task of document-level\nmembership inference for real-world LLMs, i.e. inferring whether the LLM has\nseen a given document during training or not. First, we propose a procedure for\nthe development and evaluation of document-level membership inference for LLMs\nby leveraging commonly used data sources for training and the model release\ndate. We then propose a practical, black-box method to predict document-level\nmembership and instantiate it on OpenLLaMA-7B with both books and academic\npapers. We show our methodology to perform very well, reaching an impressive\nAUC of 0.856 for books and 0.678 for papers. We then show our approach to\noutperform the sentence-level membership inference attacks used in the privacy\nliterature for the document-level membership task. We finally evaluate whether\nsmaller models might be less sensitive to document-level inference and show\nOpenLLaMA-3B to be approximately as sensitive as OpenLLaMA-7B to our approach.\nTaken together, our results show that accurate document-level membership can be\ninferred for LLMs, increasing the transparency of technology poised to change\nour lives.",
+ "authors": "Matthieu Meeus, Shubham Jain, Marek Rei, Yves-Alexandre de Montjoye",
+ "published": "2023-10-23",
+ "updated": "2023-10-23",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.CR",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.18130v2",
+ "title": "DELPHI: Data for Evaluating LLMs' Performance in Handling Controversial Issues",
+ "abstract": "Controversy is a reflection of our zeitgeist, and an important aspect to any\ndiscourse. The rise of large language models (LLMs) as conversational systems\nhas increased public reliance on these systems for answers to their various\nquestions. Consequently, it is crucial to systematically examine how these\nmodels respond to questions that pertaining to ongoing debates. However, few\nsuch datasets exist in providing human-annotated labels reflecting the\ncontemporary discussions. To foster research in this area, we propose a novel\nconstruction of a controversial questions dataset, expanding upon the publicly\nreleased Quora Question Pairs Dataset. This dataset presents challenges\nconcerning knowledge recency, safety, fairness, and bias. We evaluate different\nLLMs using a subset of this dataset, illuminating how they handle controversial\nissues and the stances they adopt. This research ultimately contributes to our\nunderstanding of LLMs' interaction with controversial issues, paving the way\nfor improvements in their comprehension and handling of complex societal\ndebates.",
+ "authors": "David Q. Sun, Artem Abzaliev, Hadas Kotek, Zidi Xiu, Christopher Klein, Jason D. Williams",
+ "published": "2023-10-27",
+ "updated": "2023-11-07",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.HC"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2405.01769v1",
+ "title": "A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law",
+ "abstract": "In the fast-evolving domain of artificial intelligence, large language models\n(LLMs) such as GPT-3 and GPT-4 are revolutionizing the landscapes of finance,\nhealthcare, and law: domains characterized by their reliance on professional\nexpertise, challenging data acquisition, high-stakes, and stringent regulatory\ncompliance. This survey offers a detailed exploration of the methodologies,\napplications, challenges, and forward-looking opportunities of LLMs within\nthese high-stakes sectors. We highlight the instrumental role of LLMs in\nenhancing diagnostic and treatment methodologies in healthcare, innovating\nfinancial analytics, and refining legal interpretation and compliance\nstrategies. Moreover, we critically examine the ethics for LLM applications in\nthese fields, pointing out the existing ethical concerns and the need for\ntransparent, fair, and robust AI systems that respect regulatory norms. By\npresenting a thorough review of current literature and practical applications,\nwe showcase the transformative impact of LLMs, and outline the imperative for\ninterdisciplinary cooperation, methodological advancements, and ethical\nvigilance. Through this lens, we aim to spark dialogue and inspire future\nresearch dedicated to maximizing the benefits of LLMs while mitigating their\nrisks in these precision-dependent sectors. To facilitate future research on\nLLMs in these critical societal domains, we also initiate a reading list that\ntracks the latest advancements under this topic, which will be continually\nupdated: \\url{https://github.com/czyssrs/LLM_X_papers}.",
+ "authors": "Zhiyu Zoey Chen, Jing Ma, Xinlu Zhang, Nan Hao, An Yan, Armineh Nourbakhsh, Xianjun Yang, Julian McAuley, Linda Petzold, William Yang Wang",
+ "published": "2024-05-02",
+ "updated": "2024-05-02",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.18333v3",
+ "title": "She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models",
+ "abstract": "As the use of large language models (LLMs) increases within society, as does\nthe risk of their misuse. Appropriate safeguards must be in place to ensure LLM\noutputs uphold the ethical standards of society, highlighting the positive role\nthat artificial intelligence technologies can have. Recent events indicate\nethical concerns around conventionally trained LLMs, leading to overall unsafe\nuser experiences. This motivates our research question: how do we ensure LLM\nalignment? In this work, we introduce a test suite of unique prompts to foster\nthe development of aligned LLMs that are fair, safe, and robust. We show that\nprompting LLMs at every step of the development pipeline, including data\ncuration, pre-training, and fine-tuning, will result in an overall more\nresponsible model. Our test suite evaluates outputs from four state-of-the-art\nlanguage models: GPT-3.5, GPT-4, OPT, and LLaMA-2. The assessment presented in\nthis paper highlights a gap between societal alignment and the capabilities of\ncurrent LLMs. Additionally, implementing a test suite such as ours lowers the\nenvironmental overhead of making models safe and fair.",
+ "authors": "Veronica Chatrath, Oluwanifemi Bamgbose, Shaina Raza",
+ "published": "2023-10-20",
+ "updated": "2023-12-15",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.07981v1",
+ "title": "Manipulating Large Language Models to Increase Product Visibility",
+ "abstract": "Large language models (LLMs) are increasingly being integrated into search\nengines to provide natural language responses tailored to user queries.\nCustomers and end-users are also becoming more dependent on these models for\nquick and easy purchase decisions. In this work, we investigate whether\nrecommendations from LLMs can be manipulated to enhance a product's visibility.\nWe demonstrate that adding a strategic text sequence (STS) -- a carefully\ncrafted message -- to a product's information page can significantly increase\nits likelihood of being listed as the LLM's top recommendation. To understand\nthe impact of STS, we use a catalog of fictitious coffee machines and analyze\nits effect on two target products: one that seldom appears in the LLM's\nrecommendations and another that usually ranks second. We observe that the\nstrategic text sequence significantly enhances the visibility of both products\nby increasing their chances of appearing as the top recommendation. This\nability to manipulate LLM-generated search responses provides vendors with a\nconsiderable competitive advantage and has the potential to disrupt fair market\ncompetition. Just as search engine optimization (SEO) revolutionized how\nwebpages are customized to rank higher in search engine results, influencing\nLLM recommendations could profoundly impact content optimization for AI-driven\nsearch services. Code for our experiments is available at\nhttps://github.com/aounon/llm-rank-optimizer.",
+ "authors": "Aounon Kumar, Himabindu Lakkaraju",
+ "published": "2024-04-11",
+ "updated": "2024-04-11",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR",
+ "cs.AI",
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.18580v1",
+ "title": "FFT: Towards Harmlessness Evaluation and Analysis for LLMs with Factuality, Fairness, Toxicity",
+ "abstract": "The widespread of generative artificial intelligence has heightened concerns\nabout the potential harms posed by AI-generated texts, primarily stemming from\nfactoid, unfair, and toxic content. Previous researchers have invested much\neffort in assessing the harmlessness of generative language models. However,\nexisting benchmarks are struggling in the era of large language models (LLMs),\ndue to the stronger language generation and instruction following capabilities,\nas well as wider applications. In this paper, we propose FFT, a new benchmark\nwith 2116 elaborated-designed instances, for LLM harmlessness evaluation with\nfactuality, fairness, and toxicity. To investigate the potential harms of LLMs,\nwe evaluate 9 representative LLMs covering various parameter scales, training\nstages, and creators. Experiments show that the harmlessness of LLMs is still\nunder-satisfactory, and extensive analysis derives some insightful findings\nthat could inspire future research for harmless LLM research.",
+ "authors": "Shiyao Cui, Zhenyu Zhang, Yilong Chen, Wenyuan Zhang, Tianyun Liu, Siqi Wang, Tingwen Liu",
+ "published": "2023-11-30",
+ "updated": "2023-11-30",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.CR"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2304.03728v1",
+ "title": "Interpretable Unified Language Checking",
+ "abstract": "Despite recent concerns about undesirable behaviors generated by large\nlanguage models (LLMs), including non-factual, biased, and hateful language, we\nfind LLMs are inherent multi-task language checkers based on their latent\nrepresentations of natural and social knowledge. We present an interpretable,\nunified, language checking (UniLC) method for both human and machine-generated\nlanguage that aims to check if language input is factual and fair. While\nfairness and fact-checking tasks have been handled separately with dedicated\nmodels, we find that LLMs can achieve high performance on a combination of\nfact-checking, stereotype detection, and hate speech detection tasks with a\nsimple, few-shot, unified set of prompts. With the ``1/2-shot'' multi-task\nlanguage checking method proposed in this work, the GPT3.5-turbo model\noutperforms fully supervised baselines on several language tasks. The simple\napproach and results suggest that based on strong latent knowledge\nrepresentations, an LLM can be an adaptive and explainable tool for detecting\nmisinformation, stereotypes, and hate speech.",
+ "authors": "Tianhua Zhang, Hongyin Luo, Yung-Sung Chuang, Wei Fang, Luc Gaitskell, Thomas Hartvigsen, Xixin Wu, Danny Fox, Helen Meng, James Glass",
+ "published": "2023-04-07",
+ "updated": "2023-04-07",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2308.10397v2",
+ "title": "FairMonitor: A Four-Stage Automatic Framework for Detecting Stereotypes and Biases in Large Language Models",
+ "abstract": "Detecting stereotypes and biases in Large Language Models (LLMs) can enhance\nfairness and reduce adverse impacts on individuals or groups when these LLMs\nare applied. However, the majority of existing methods focus on measuring the\nmodel's preference towards sentences containing biases and stereotypes within\ndatasets, which lacks interpretability and cannot detect implicit biases and\nstereotypes in the real world. To address this gap, this paper introduces a\nfour-stage framework to directly evaluate stereotypes and biases in the\ngenerated content of LLMs, including direct inquiry testing, serial or adapted\nstory testing, implicit association testing, and unknown situation testing.\nAdditionally, the paper proposes multi-dimensional evaluation metrics and\nexplainable zero-shot prompts for automated evaluation. Using the education\nsector as a case study, we constructed the Edu-FairMonitor based on the\nfour-stage framework, which encompasses 12,632 open-ended questions covering\nnine sensitive factors and 26 educational scenarios. Experimental results\nreveal varying degrees of stereotypes and biases in five LLMs evaluated on\nEdu-FairMonitor. Moreover, the results of our proposed automated evaluation\nmethod have shown a high correlation with human annotations.",
+ "authors": "Yanhong Bai, Jiabao Zhao, Jinxin Shi, Tingjiang Wei, Xingjiao Wu, Liang He",
+ "published": "2023-08-21",
+ "updated": "2023-10-27",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.04489v1",
+ "title": "De-amplifying Bias from Differential Privacy in Language Model Fine-tuning",
+ "abstract": "Fairness and privacy are two important values machine learning (ML)\npractitioners often seek to operationalize in models. Fairness aims to reduce\nmodel bias for social/demographic sub-groups. Privacy via differential privacy\n(DP) mechanisms, on the other hand, limits the impact of any individual's\ntraining data on the resulting model. The trade-offs between privacy and\nfairness goals of trustworthy ML pose a challenge to those wishing to address\nboth. We show that DP amplifies gender, racial, and religious bias when\nfine-tuning large language models (LLMs), producing models more biased than\nones fine-tuned without DP. We find the cause of the amplification to be a\ndisparity in convergence of gradients across sub-groups. Through the case of\nbinary gender bias, we demonstrate that Counterfactual Data Augmentation (CDA),\na known method for addressing bias, also mitigates bias amplification by DP. As\na consequence, DP and CDA together can be used to fine-tune models while\nmaintaining both fairness and privacy.",
+ "authors": "Sanjari Srivastava, Piotr Mardziel, Zhikhun Zhang, Archana Ahlawat, Anupam Datta, John C Mitchell",
+ "published": "2024-02-07",
+ "updated": "2024-02-07",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.CR",
+ "cs.CY",
+ "stat.ME"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.12150v1",
+ "title": "Your Large Language Model is Secretly a Fairness Proponent and You Should Prompt it Like One",
+ "abstract": "The widespread adoption of large language models (LLMs) underscores the\nurgent need to ensure their fairness. However, LLMs frequently present dominant\nviewpoints while ignoring alternative perspectives from minority parties,\nresulting in potential biases. We hypothesize that these fairness-violating\nbehaviors occur because LLMs express their viewpoints using a human personality\nthat represents the majority of training data. In response to this, we validate\nthat prompting LLMs with specific roles can allow LLMs to express diverse\nviewpoints. Building on this insight and observation, we develop FairThinking,\na pipeline designed to automatically generate roles that enable LLMs to\narticulate diverse perspectives for fair expressions. To evaluate FairThinking,\nwe create a dataset with a thousand items covering three fairness-related\ntopics and conduct experiments on GPT-3.5, GPT-4, Llama2, and Mistral to\ndemonstrate its superior performance.",
+ "authors": "Tianlin Li, Xiaoyu Zhang, Chao Du, Tianyu Pang, Qian Liu, Qing Guo, Chao Shen, Yang Liu",
+ "published": "2024-02-19",
+ "updated": "2024-02-19",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "I.2; J.4"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2206.13757v1",
+ "title": "Flexible text generation for counterfactual fairness probing",
+ "abstract": "A common approach for testing fairness issues in text-based classifiers is\nthrough the use of counterfactuals: does the classifier output change if a\nsensitive attribute in the input is changed? Existing counterfactual generation\nmethods typically rely on wordlists or templates, producing simple\ncounterfactuals that don't take into account grammar, context, or subtle\nsensitive attribute references, and could miss issues that the wordlist\ncreators had not considered. In this paper, we introduce a task for generating\ncounterfactuals that overcomes these shortcomings, and demonstrate how large\nlanguage models (LLMs) can be leveraged to make progress on this task. We show\nthat this LLM-based method can produce complex counterfactuals that existing\nmethods cannot, comparing the performance of various counterfactual generation\nmethods on the Civil Comments dataset and showing their value in evaluating a\ntoxicity classifier.",
+ "authors": "Zee Fryer, Vera Axelrod, Ben Packer, Alex Beutel, Jilin Chen, Kellie Webster",
+ "published": "2022-06-28",
+ "updated": "2022-06-28",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.09447v2",
+ "title": "How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities",
+ "abstract": "The rapid progress in open-source Large Language Models (LLMs) is\nsignificantly driving AI development forward. However, there is still a limited\nunderstanding of their trustworthiness. Deploying these models at scale without\nsufficient trustworthiness can pose significant risks, highlighting the need to\nuncover these issues promptly. In this work, we conduct an adversarial\nassessment of open-source LLMs on trustworthiness, scrutinizing them across\neight different aspects including toxicity, stereotypes, ethics, hallucination,\nfairness, sycophancy, privacy, and robustness against adversarial\ndemonstrations. We propose advCoU, an extended Chain of Utterances-based (CoU)\nprompting strategy by incorporating carefully crafted malicious demonstrations\nfor trustworthiness attack. Our extensive experiments encompass recent and\nrepresentative series of open-source LLMs, including Vicuna, MPT, Falcon,\nMistral, and Llama 2. The empirical outcomes underscore the efficacy of our\nattack strategy across diverse aspects. More interestingly, our result analysis\nreveals that models with superior performance in general NLP tasks do not\nalways have greater trustworthiness; in fact, larger models can be more\nvulnerable to attacks. Additionally, models that have undergone instruction\ntuning, focusing on instruction following, tend to be more susceptible,\nalthough fine-tuning LLMs for safety alignment proves effective in mitigating\nadversarial trustworthiness attacks.",
+ "authors": "Lingbo Mo, Boshi Wang, Muhao Chen, Huan Sun",
+ "published": "2023-11-15",
+ "updated": "2024-04-02",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2401.04057v1",
+ "title": "Unveiling Bias in Fairness Evaluations of Large Language Models: A Critical Literature Review of Music and Movie Recommendation Systems",
+ "abstract": "The rise of generative artificial intelligence, particularly Large Language\nModels (LLMs), has intensified the imperative to scrutinize fairness alongside\naccuracy. Recent studies have begun to investigate fairness evaluations for\nLLMs within domains such as recommendations. Given that personalization is an\nintrinsic aspect of recommendation systems, its incorporation into fairness\nassessments is paramount. Yet, the degree to which current fairness evaluation\nframeworks account for personalization remains unclear. Our comprehensive\nliterature review aims to fill this gap by examining how existing frameworks\nhandle fairness evaluations of LLMs, with a focus on the integration of\npersonalization factors. Despite an exhaustive collection and analysis of\nrelevant works, we discovered that most evaluations overlook personalization, a\ncritical facet of recommendation systems, thereby inadvertently perpetuating\nunfair practices. Our findings shed light on this oversight and underscore the\nurgent need for more nuanced fairness evaluations that acknowledge\npersonalization. Such improvements are vital for fostering equitable\ndevelopment within the AI community.",
+ "authors": "Chandan Kumar Sah, Dr. Lian Xiaoli, Muhammad Mirajul Islam",
+ "published": "2024-01-08",
+ "updated": "2024-01-08",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR",
+ "cs.AI",
+ "cs.SE"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.13925v1",
+ "title": "MARIO Eval: Evaluate Your Math LLM with your Math LLM--A mathematical dataset evaluation toolkit",
+ "abstract": "Large language models (LLMs) have been explored in a variety of reasoning\ntasks including solving of mathematical problems. Each math dataset typically\nincludes its own specially designed evaluation script, which, while suitable\nfor its intended use, lacks generalizability across different datasets.\nConsequently, updates and adaptations to these evaluation tools tend to occur\nwithout being systematically reported, leading to inconsistencies and obstacles\nto fair comparison across studies. To bridge this gap, we introduce a\ncomprehensive mathematical evaluation toolkit that not only utilizes a python\ncomputer algebra system (CAS) for its numerical accuracy, but also integrates\nan optional LLM, known for its considerable natural language processing\ncapabilities. To validate the effectiveness of our toolkit, we manually\nannotated two distinct datasets. Our experiments demonstrate that the toolkit\nyields more robust evaluation results compared to prior works, even without an\nLLM. Furthermore, when an LLM is incorporated, there is a notable enhancement.\nThe code for our method will be made available at\n\\url{https://github.com/MARIO-Math-Reasoning/math_evaluation}.",
+ "authors": "Boning Zhang, Chengxi Li, Kai Fan",
+ "published": "2024-04-22",
+ "updated": "2024-04-22",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2401.15585v1",
+ "title": "Evaluating Gender Bias in Large Language Models via Chain-of-Thought Prompting",
+ "abstract": "There exist both scalable tasks, like reading comprehension and\nfact-checking, where model performance improves with model size, and unscalable\ntasks, like arithmetic reasoning and symbolic reasoning, where model\nperformance does not necessarily improve with model size. Large language models\n(LLMs) equipped with Chain-of-Thought (CoT) prompting are able to make accurate\nincremental predictions even on unscalable tasks. Unfortunately, despite their\nexceptional reasoning abilities, LLMs tend to internalize and reproduce\ndiscriminatory societal biases. Whether CoT can provide discriminatory or\negalitarian rationalizations for the implicit information in unscalable tasks\nremains an open question.\n In this study, we examine the impact of LLMs' step-by-step predictions on\ngender bias in unscalable tasks. For this purpose, we construct a benchmark for\nan unscalable task where the LLM is given a list of words comprising feminine,\nmasculine, and gendered occupational words, and is required to count the number\nof feminine and masculine words. In our CoT prompts, we require the LLM to\nexplicitly indicate whether each word in the word list is a feminine or\nmasculine before making the final predictions. With counting and handling the\nmeaning of words, this benchmark has characteristics of both arithmetic\nreasoning and symbolic reasoning. Experimental results in English show that\nwithout step-by-step prediction, most LLMs make socially biased predictions,\ndespite the task being as simple as counting words. Interestingly, CoT\nprompting reduces this unconscious social bias in LLMs and encourages fair\npredictions.",
+ "authors": "Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki, Timothy Baldwin",
+ "published": "2024-01-28",
+ "updated": "2024-01-28",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.11595v3",
+ "title": "Examining Inter-Consistency of Large Language Models Collaboration: An In-depth Analysis via Debate",
+ "abstract": "Large Language Models (LLMs) have shown impressive capabilities in various\napplications, but they still face various inconsistency issues. Existing works\nprimarily focus on the inconsistency issues within a single LLM, while we\ncomplementarily explore the inter-consistency among multiple LLMs for\ncollaboration. To examine whether LLMs can collaborate effectively to achieve a\nconsensus for a shared goal, we focus on commonsense reasoning, and introduce a\nformal debate framework (FORD) to conduct a three-stage debate among LLMs with\nreal-world scenarios alignment: fair debate, mismatched debate, and roundtable\ndebate. Through extensive experiments on various datasets, LLMs can effectively\ncollaborate to reach a consensus despite noticeable inter-inconsistencies, but\nimbalances in their abilities can lead to domination by superior LLMs.\nLeveraging a more advanced LLM like GPT-4 as an authoritative judge can boost\ncollaboration performance. Our work contributes to understanding the\ninter-consistency among LLMs and lays the foundation for developing future\ncollaboration methods. Codes and data are available at\nhttps://github.com/Waste-Wood/FORD",
+ "authors": "Kai Xiong, Xiao Ding, Yixin Cao, Ting Liu, Bing Qin",
+ "published": "2023-05-19",
+ "updated": "2023-10-18",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.02294v1",
+ "title": "LLMs grasp morality in concept",
+ "abstract": "Work in AI ethics and fairness has made much progress in regulating LLMs to\nreflect certain values, such as fairness, truth, and diversity. However, it has\ntaken the problem of how LLMs might 'mean' anything at all for granted. Without\naddressing this, it is not clear what imbuing LLMs with such values even means.\nIn response, we provide a general theory of meaning that extends beyond humans.\nWe use this theory to explicate the precise nature of LLMs as meaning-agents.\nWe suggest that the LLM, by virtue of its position as a meaning-agent, already\ngrasps the constructions of human society (e.g. morality, gender, and race) in\nconcept. Consequently, under certain ethical frameworks, currently popular\nmethods for model alignment are limited at best and counterproductive at worst.\nMoreover, unaligned models may help us better develop our moral and social\nphilosophy.",
+ "authors": "Mark Pock, Andre Ye, Jared Moore",
+ "published": "2023-11-04",
+ "updated": "2023-11-04",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.12090v1",
+ "title": "UP5: Unbiased Foundation Model for Fairness-aware Recommendation",
+ "abstract": "Recent advancements in foundation models such as large language models (LLM)\nhave propelled them to the forefront of recommender systems (RS). Moreover,\nfairness in RS is critical since many users apply it for decision-making and\ndemand fulfillment. However, at present, there is a lack of understanding\nregarding the level of fairness exhibited by recommendation foundation models\nand the appropriate methods for equitably treating different groups of users in\nfoundation models. In this paper, we focus on user-side unfairness problem and\nshow through a thorough examination that there is unfairness involved in LLMs\nthat lead to unfair recommendation results. To eliminate bias from LLM for\nfairness-aware recommendation, we introduce a novel Unbiased P5 (UP5)\nfoundation model based on Counterfactually-Fair-Prompting (CFP) techniques. CFP\nincludes two sub-modules: a personalized prefix prompt that enhances fairness\nwith respect to individual sensitive attributes, and a Prompt Mixture that\nintegrates multiple counterfactually-fair prompts for a set of sensitive\nattributes. Experiments are conducted on two real-world datasets, MovieLens-1M\nand Insurance, and results are compared with both matching-based and\nsequential-based fairness-aware recommendation models. The results show that\nUP5 achieves better recommendation performance and meanwhile exhibits a high\nlevel of fairness.",
+ "authors": "Wenyue Hua, Yingqiang Ge, Shuyuan Xu, Jianchao Ji, Yongfeng Zhang",
+ "published": "2023-05-20",
+ "updated": "2023-05-20",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR",
+ "cs.AI",
+ "cs.CL",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.08656v1",
+ "title": "Linear Cross-document Event Coreference Resolution with X-AMR",
+ "abstract": "Event Coreference Resolution (ECR) as a pairwise mention classification task\nis expensive both for automated systems and manual annotations. The task's\nquadratic difficulty is exacerbated when using Large Language Models (LLMs),\nmaking prompt engineering for ECR prohibitively costly. In this work, we\npropose a graphical representation of events, X-AMR, anchored around individual\nmentions using a \\textbf{cross}-document version of \\textbf{A}bstract\n\\textbf{M}eaning \\textbf{R}epresentation. We then linearize the ECR with a\nnovel multi-hop coreference algorithm over the event graphs. The event graphs\nsimplify ECR, making it a) LLM cost-effective, b) compositional and\ninterpretable, and c) easily annotated. For a fair assessment, we first enrich\nan existing ECR benchmark dataset with these event graphs using an\nannotator-friendly tool we introduce. Then, we employ GPT-4, the newest LLM by\nOpenAI, for these annotations. Finally, using the ECR algorithm, we assess\nGPT-4 against humans and analyze its limitations. Through this research, we aim\nto advance the state-of-the-art for efficient ECR and shed light on the\npotential shortcomings of current LLMs at this task. Code and annotations:\n\\url{https://github.com/ahmeshaf/gpt_coref}",
+ "authors": "Shafiuddin Rehan Ahmed, George Arthur Baker, Evi Judge, Michael Regan, Kristin Wright-Bettner, Martha Palmer, James H. Martin",
+ "published": "2024-03-25",
+ "updated": "2024-03-25",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.06003v1",
+ "title": "FreeEval: A Modular Framework for Trustworthy and Efficient Evaluation of Large Language Models",
+ "abstract": "The rapid development of large language model (LLM) evaluation methodologies\nand datasets has led to a profound challenge: integrating state-of-the-art\nevaluation techniques cost-effectively while ensuring reliability,\nreproducibility, and efficiency. Currently, there is a notable absence of a\nunified and adaptable framework that seamlessly integrates various evaluation\napproaches. Moreover, the reliability of evaluation findings is often\nquestionable due to potential data contamination, with the evaluation\nefficiency commonly overlooked when facing the substantial costs associated\nwith LLM inference. In response to these challenges, we introduce FreeEval, a\nmodular and scalable framework crafted to enable trustworthy and efficient\nautomatic evaluations of LLMs. Firstly, FreeEval's unified abstractions\nsimplify the integration and improve the transparency of diverse evaluation\nmethodologies, encompassing dynamic evaluation that demand sophisticated LLM\ninteractions. Secondly, the framework integrates meta-evaluation techniques\nlike human evaluation and data contamination detection, which, along with\ndynamic evaluation modules in the platform, enhance the fairness of the\nevaluation outcomes. Lastly, FreeEval is designed with a high-performance\ninfrastructure, including distributed computation and caching strategies,\nenabling extensive evaluations across multi-node, multi-GPU clusters for\nopen-source and proprietary LLMs.",
+ "authors": "Zhuohao Yu, Chang Gao, Wenjin Yao, Yidong Wang, Zhengran Zeng, Wei Ye, Jindong Wang, Yue Zhang, Shikun Zhang",
+ "published": "2024-04-09",
+ "updated": "2024-04-09",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2308.10149v2",
+ "title": "A Survey on Fairness in Large Language Models",
+ "abstract": "Large Language Models (LLMs) have shown powerful performance and development\nprospects and are widely deployed in the real world. However, LLMs can capture\nsocial biases from unprocessed training data and propagate the biases to\ndownstream tasks. Unfair LLM systems have undesirable social impacts and\npotential harms. In this paper, we provide a comprehensive review of related\nresearch on fairness in LLMs. Considering the influence of parameter magnitude\nand training paradigm on research strategy, we divide existing fairness\nresearch into oriented to medium-sized LLMs under pre-training and fine-tuning\nparadigms and oriented to large-sized LLMs under prompting paradigms. First,\nfor medium-sized LLMs, we introduce evaluation metrics and debiasing methods\nfrom the perspectives of intrinsic bias and extrinsic bias, respectively. Then,\nfor large-sized LLMs, we introduce recent fairness research, including fairness\nevaluation, reasons for bias, and debiasing methods. Finally, we discuss and\nprovide insight on the challenges and future directions for the development of\nfairness in LLMs.",
+ "authors": "Yingji Li, Mengnan Du, Rui Song, Xin Wang, Ying Wang",
+ "published": "2023-08-20",
+ "updated": "2024-02-21",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2308.05374v2",
+ "title": "Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment",
+ "abstract": "Ensuring alignment, which refers to making models behave in accordance with\nhuman intentions [1,2], has become a critical task before deploying large\nlanguage models (LLMs) in real-world applications. For instance, OpenAI devoted\nsix months to iteratively aligning GPT-4 before its release [3]. However, a\nmajor challenge faced by practitioners is the lack of clear guidance on\nevaluating whether LLM outputs align with social norms, values, and\nregulations. This obstacle hinders systematic iteration and deployment of LLMs.\nTo address this issue, this paper presents a comprehensive survey of key\ndimensions that are crucial to consider when assessing LLM trustworthiness. The\nsurvey covers seven major categories of LLM trustworthiness: reliability,\nsafety, fairness, resistance to misuse, explainability and reasoning, adherence\nto social norms, and robustness. Each major category is further divided into\nseveral sub-categories, resulting in a total of 29 sub-categories.\nAdditionally, a subset of 8 sub-categories is selected for further\ninvestigation, where corresponding measurement studies are designed and\nconducted on several widely-used LLMs. The measurement results indicate that,\nin general, more aligned models tend to perform better in terms of overall\ntrustworthiness. However, the effectiveness of alignment varies across the\ndifferent trustworthiness categories considered. This highlights the importance\nof conducting more fine-grained analyses, testing, and making continuous\nimprovements on LLM alignment. By shedding light on these key dimensions of LLM\ntrustworthiness, this paper aims to provide valuable insights and guidance to\npractitioners in the field. Understanding and addressing these concerns will be\ncrucial in achieving reliable and ethically sound deployment of LLMs in various\napplications.",
+ "authors": "Yang Liu, Yuanshun Yao, Jean-Francois Ton, Xiaoying Zhang, Ruocheng Guo, Hao Cheng, Yegor Klochkov, Muhammad Faaiz Taufiq, Hang Li",
+ "published": "2023-08-10",
+ "updated": "2024-03-21",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2405.02219v1",
+ "title": "FairEvalLLM. A Comprehensive Framework for Benchmarking Fairness in Large Language Model Recommender Systems",
+ "abstract": "This paper presents a framework for evaluating fairness in recommender\nsystems powered by Large Language Models (RecLLMs), addressing the need for a\nunified approach that spans various fairness dimensions including sensitivity\nto user attributes, intrinsic fairness, and discussions of fairness based on\nunderlying benefits. In addition, our framework introduces counterfactual\nevaluations and integrates diverse user group considerations to enhance the\ndiscourse on fairness evaluation for RecLLMs.\n Our key contributions include the development of a robust framework for\nfairness evaluation in LLM-based recommendations and a structured method to\ncreate \\textit{informative user profiles} from demographic data, historical\nuser preferences, and recent interactions. We argue that the latter is\nessential for enhancing personalization in such systems, especially in\ntemporal-driven scenarios. We demonstrate the utility of our framework through\npractical applications on two datasets, LastFM-1K and ML-1M. We conduct\nexperiments on a subsample of 80 users from each dataset, testing and assessing\nthe effectiveness of various prompt construction scenarios and in-context\nlearning, comprising more than 50 scenarios. This results in more than 4000\nrecommendations (80 * 50 = 4000). Our study reveals that while there are no\nsignificant unfairness issues in scenarios involving sensitive attributes, some\nconcerns remain. However, in terms of intrinsic fairness, which does not\ninvolve direct sensitivity, unfairness across demographic groups remains\nsignificant. The code and data used for this paper are available at:\n\\url{https://shorturl.at/awBFM}.",
+ "authors": "Yashar Deldjoo",
+ "published": "2024-05-03",
+ "updated": "2024-05-03",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.17916v2",
+ "title": "LLM-Resistant Math Word Problem Generation via Adversarial Attacks",
+ "abstract": "Large language models (LLMs) have significantly transformed the educational\nlandscape. As current plagiarism detection tools struggle to keep pace with\nLLMs' rapid advancements, the educational community faces the challenge of\nassessing students' true problem-solving abilities in the presence of LLMs. In\nthis work, we explore a new paradigm for ensuring fair evaluation -- generating\nadversarial examples which preserve the structure and difficulty of the\noriginal questions aimed for assessment, but are unsolvable by LLMs. Focusing\non the domain of math word problems, we leverage abstract syntax trees to\nstructurally generate adversarial examples that cause LLMs to produce incorrect\nanswers by simply editing the numeric values in the problems. We conduct\nexperiments on various open- and closed-source LLMs, quantitatively and\nqualitatively demonstrating that our method significantly degrades their math\nproblem-solving ability. We identify shared vulnerabilities among LLMs and\npropose a cost-effective approach to attack high-cost models. Additionally, we\nconduct automatic analysis on math problems and investigate the cause of\nfailure, offering a nuanced view into model's limitation.",
+ "authors": "Roy Xie, Chengxuan Huang, Junlin Wang, Bhuwan Dhingra",
+ "published": "2024-02-27",
+ "updated": "2024-03-30",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.02650v1",
+ "title": "Towards detecting unanticipated bias in Large Language Models",
+ "abstract": "Over the last year, Large Language Models (LLMs) like ChatGPT have become\nwidely available and have exhibited fairness issues similar to those in\nprevious machine learning systems. Current research is primarily focused on\nanalyzing and quantifying these biases in training data and their impact on the\ndecisions of these models, alongside developing mitigation strategies. This\nresearch largely targets well-known biases related to gender, race, ethnicity,\nand language. However, it is clear that LLMs are also affected by other, less\nobvious implicit biases. The complex and often opaque nature of these models\nmakes detecting such biases challenging, yet this is crucial due to their\npotential negative impact in various applications. In this paper, we explore\nnew avenues for detecting these unanticipated biases in LLMs, focusing\nspecifically on Uncertainty Quantification and Explainable AI methods. These\napproaches aim to assess the certainty of model decisions and to make the\ninternal decision-making processes of LLMs more transparent, thereby\nidentifying and understanding biases that are not immediately apparent. Through\nthis research, we aim to contribute to the development of fairer and more\ntransparent AI systems.",
+ "authors": "Anna Kruspe",
+ "published": "2024-04-03",
+ "updated": "2024-04-03",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.AI",
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.16343v2",
+ "title": "Evaluating, Understanding, and Improving Constrained Text Generation for Large Language Models",
+ "abstract": "Advancements in natural language generation (NLG) and large language models\n(LLMs) have led to proficient text generation in various tasks. However,\nintegrating intricate constraints into neural text generation, due to LLMs'\nopacity, remains challenging. This study investigates constrained text\ngeneration for LLMs, where predefined constraints are applied during LLM's\ngeneration process. Our research mainly focuses on mainstream open-source LLMs,\ncategorizing constraints into lexical, structural, and relation-based types. We\nalso present various benchmarks to facilitate fair evaluation. The study\naddresses some key research questions, including evaluating, understanding and\nimproving constrained text generation for LLMs. Results illuminate LLMs'\ncapacity and deficiency to incorporate constraints and provide insights for\nfuture developments in constrained text generation. Codes and datasets will be\nreleased upon acceptance.",
+ "authors": "Xiang Chen, Xiaojun Wan",
+ "published": "2023-10-25",
+ "updated": "2024-03-21",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2309.14345v2",
+ "title": "Bias Testing and Mitigation in LLM-based Code Generation",
+ "abstract": "Utilizing state-of-the-art Large Language Models (LLMs), automatic code\ngeneration models play a pivotal role in enhancing the productivity of software\ndevelopment procedures. As the adoption of LLMs becomes more widespread in\nsoftware coding ecosystems, a pressing issue has emerged: does the generated\ncode contain social bias and unfairness, such as those related to age, gender,\nand race? This issue concerns the integrity, fairness, and ethical foundation\nof software applications that depend on the code generated by these models, yet\nis under-explored in the literature. This paper presents a novel bias testing\nframework that is specifically designed for code generation tasks. Based on\nthis framework, we conduct an extensive evaluation of the bias in code\ngenerated by five state-of-the-art LLMs. Our findings reveal that 20.29% to\n44.93% code functions generated by the models under study are biased when\nhandling bias sensitive tasks (i.e., tasks that involve sensitive attributes\nsuch as age and gender). This indicates that the existing LLMs can be unfair in\ncode generation, posing risks of unintended and harmful software behaviors. To\nmitigate bias for code generation models, we evaluate five bias mitigation\nprompt strategies, i.e., utilizing bias testing results to refine the code\n(zero-shot), one-, few-shot, and two Chain-of-Thought (CoT) prompts. Our\nevaluation results illustrate that these strategies are all effective in\nmitigating bias. Overall, one-shot and few-shot learning are the two most\neffective. For GPT-4, 80% to 90% code bias can be removed with one-shot\nlearning.",
+ "authors": "Dong Huang, Qingwen Bu, Jie Zhang, Xiaofei Xie, Junjie Chen, Heming Cui",
+ "published": "2023-09-03",
+ "updated": "2024-01-09",
+ "primary_cat": "cs.SE",
+ "cats": [
+ "cs.SE",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.00306v1",
+ "title": "Probing Explicit and Implicit Gender Bias through LLM Conditional Text Generation",
+ "abstract": "Large Language Models (LLMs) can generate biased and toxic responses. Yet\nmost prior work on LLM gender bias evaluation requires predefined\ngender-related phrases or gender stereotypes, which are challenging to be\ncomprehensively collected and are limited to explicit bias evaluation. In\naddition, we believe that instances devoid of gender-related language or\nexplicit stereotypes in inputs can still induce gender bias in LLMs. Thus, in\nthis work, we propose a conditional text generation mechanism without the need\nfor predefined gender phrases and stereotypes. This approach employs three\ntypes of inputs generated through three distinct strategies to probe LLMs,\naiming to show evidence of explicit and implicit gender biases in LLMs. We also\nutilize explicit and implicit evaluation metrics to evaluate gender bias in\nLLMs under different strategies. Our experiments demonstrate that an increased\nmodel size does not consistently lead to enhanced fairness and all tested LLMs\nexhibit explicit and/or implicit gender bias, even when explicit gender\nstereotypes are absent in the inputs.",
+ "authors": "Xiangjue Dong, Yibo Wang, Philip S. Yu, James Caverlee",
+ "published": "2023-11-01",
+ "updated": "2023-11-01",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.07884v2",
+ "title": "Fair Abstractive Summarization of Diverse Perspectives",
+ "abstract": "People from different social and demographic groups express diverse\nperspectives and conflicting opinions on a broad set of topics such as product\nreviews, healthcare, law, and politics. A fair summary should provide a\ncomprehensive coverage of diverse perspectives without underrepresenting\ncertain groups. However, current work in summarization metrics and Large\nLanguage Models (LLMs) evaluation has not explored fair abstractive\nsummarization. In this paper, we systematically investigate fair abstractive\nsummarization for user-generated data. We first formally define fairness in\nabstractive summarization as not underrepresenting perspectives of any groups\nof people, and we propose four reference-free automatic metrics by measuring\nthe differences between target and source perspectives. We evaluate nine LLMs,\nincluding three GPT models, four LLaMA models, PaLM 2, and Claude, on six\ndatasets collected from social media, online reviews, and recorded transcripts.\nExperiments show that both the model-generated and the human-written reference\nsummaries suffer from low fairness. We conduct a comprehensive analysis of the\ncommon factors influencing fairness and propose three simple but effective\nmethods to alleviate unfair summarization. Our dataset and code are available\nat https://github.com/psunlpgroup/FairSumm.",
+ "authors": "Yusen Zhang, Nan Zhang, Yixin Liu, Alexander Fabbri, Junru Liu, Ryo Kamoi, Xiaoxin Lu, Caiming Xiong, Jieyu Zhao, Dragomir Radev, Kathleen McKeown, Rui Zhang",
+ "published": "2023-11-14",
+ "updated": "2024-03-30",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2401.08495v2",
+ "title": "Large Language Models Portray Socially Subordinate Groups as More Homogeneous, Consistent with a Bias Observed in Humans",
+ "abstract": "Large language models (LLMs) are becoming pervasive in everyday life, yet\ntheir propensity to reproduce biases inherited from training data remains a\npressing concern. Prior investigations into bias in LLMs have focused on the\nassociation of social groups with stereotypical attributes. However, this is\nonly one form of human bias such systems may reproduce. We investigate a new\nform of bias in LLMs that resembles a social psychological phenomenon where\nsocially subordinate groups are perceived as more homogeneous than socially\ndominant groups. We had ChatGPT, a state-of-the-art LLM, generate texts about\nintersectional group identities and compared those texts on measures of\nhomogeneity. We consistently found that ChatGPT portrayed African, Asian, and\nHispanic Americans as more homogeneous than White Americans, indicating that\nthe model described racial minority groups with a narrower range of human\nexperience. ChatGPT also portrayed women as more homogeneous than men, but\nthese differences were small. Finally, we found that the effect of gender\ndiffered across racial/ethnic groups such that the effect of gender was\nconsistent within African and Hispanic Americans but not within Asian and White\nAmericans. We argue that the tendency of LLMs to describe groups as less\ndiverse risks perpetuating stereotypes and discriminatory behavior.",
+ "authors": "Messi H. J. Lee, Jacob M. Montgomery, Calvin K. Lai",
+ "published": "2024-01-16",
+ "updated": "2024-04-26",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2312.15198v2",
+ "title": "Do LLM Agents Exhibit Social Behavior?",
+ "abstract": "The advances of Large Language Models (LLMs) are expanding their utility in\nboth academic research and practical applications. Recent social science\nresearch has explored the use of these ``black-box'' LLM agents for simulating\ncomplex social systems and potentially substituting human subjects in\nexperiments. Our study delves into this emerging domain, investigating the\nextent to which LLMs exhibit key social interaction principles, such as social\nlearning, social preference, and cooperative behavior (indirect reciprocity),\nin their interactions with humans and other agents. We develop a framework for\nour study, wherein classical laboratory experiments involving human subjects\nare adapted to use LLM agents. This approach involves step-by-step reasoning\nthat mirrors human cognitive processes and zero-shot learning to assess the\ninnate preferences of LLMs. Our analysis of LLM agents' behavior includes both\nthe primary effects and an in-depth examination of the underlying mechanisms.\nFocusing on GPT-4, our analyses suggest that LLM agents appear to exhibit a\nrange of human-like social behaviors such as distributional and reciprocity\npreferences, responsiveness to group identity cues, engagement in indirect\nreciprocity, and social learning capabilities. However, our analysis also\nreveals notable differences: LLMs demonstrate a pronounced fairness preference,\nweaker positive reciprocity, and a more calculating approach in social learning\ncompared to humans. These insights indicate that while LLMs hold great promise\nfor applications in social science research, such as in laboratory experiments\nand agent-based modeling, the subtle behavioral differences between LLM agents\nand humans warrant further investigation. Careful examination and development\nof protocols in evaluating the social behaviors of LLMs are necessary before\ndirectly applying these models to emulate human behavior.",
+ "authors": "Yan Leng, Yuan Yuan",
+ "published": "2023-12-23",
+ "updated": "2024-02-22",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI",
+ "cs.SI",
+ "econ.GN",
+ "q-fin.EC"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.08780v1",
+ "title": "\"Im not Racist but...\": Discovering Bias in the Internal Knowledge of Large Language Models",
+ "abstract": "Large language models (LLMs) have garnered significant attention for their\nremarkable performance in a continuously expanding set of natural language\nprocessing tasks. However, these models have been shown to harbor inherent\nsocietal biases, or stereotypes, which can adversely affect their performance\nin their many downstream applications. In this paper, we introduce a novel,\npurely prompt-based approach to uncover hidden stereotypes within any arbitrary\nLLM. Our approach dynamically generates a knowledge representation of internal\nstereotypes, enabling the identification of biases encoded within the LLM's\ninternal knowledge. By illuminating the biases present in LLMs and offering a\nsystematic methodology for their analysis, our work contributes to advancing\ntransparency and promoting fairness in natural language processing systems.",
+ "authors": "Abel Salinas, Louis Penafiel, Robert McCormack, Fred Morstatter",
+ "published": "2023-10-13",
+ "updated": "2023-10-13",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.06899v4",
+ "title": "Flames: Benchmarking Value Alignment of LLMs in Chinese",
+ "abstract": "The widespread adoption of large language models (LLMs) across various\nregions underscores the urgent need to evaluate their alignment with human\nvalues. Current benchmarks, however, fall short of effectively uncovering\nsafety vulnerabilities in LLMs. Despite numerous models achieving high scores\nand 'topping the chart' in these evaluations, there is still a significant gap\nin LLMs' deeper alignment with human values and achieving genuine harmlessness.\nTo this end, this paper proposes a value alignment benchmark named Flames,\nwhich encompasses both common harmlessness principles and a unique morality\ndimension that integrates specific Chinese values such as harmony. Accordingly,\nwe carefully design adversarial prompts that incorporate complex scenarios and\njailbreaking methods, mostly with implicit malice. By prompting 17 mainstream\nLLMs, we obtain model responses and rigorously annotate them for detailed\nevaluation. Our findings indicate that all the evaluated LLMs demonstrate\nrelatively poor performance on Flames, particularly in the safety and fairness\ndimensions. We also develop a lightweight specified scorer capable of scoring\nLLMs across multiple dimensions to efficiently evaluate new models on the\nbenchmark. The complexity of Flames has far exceeded existing benchmarks,\nsetting a new challenge for contemporary LLMs and highlighting the need for\nfurther alignment of LLMs. Our benchmark is publicly available at\nhttps://github.com/AIFlames/Flames.",
+ "authors": "Kexin Huang, Xiangyang Liu, Qianyu Guo, Tianxiang Sun, Jiawei Sun, Yaru Wang, Zeyang Zhou, Yixu Wang, Yan Teng, Xipeng Qiu, Yingchun Wang, Dahua Lin",
+ "published": "2023-11-12",
+ "updated": "2024-04-15",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2401.01262v2",
+ "title": "Fairness Certification for Natural Language Processing and Large Language Models",
+ "abstract": "Natural Language Processing (NLP) plays an important role in our daily lives,\nparticularly due to the enormous progress of Large Language Models (LLM).\nHowever, NLP has many fairness-critical use cases, e.g., as an expert system in\nrecruitment or as an LLM-based tutor in education. Since NLP is based on human\nlanguage, potentially harmful biases can diffuse into NLP systems and produce\nunfair results, discriminate against minorities or generate legal issues.\nHence, it is important to develop a fairness certification for NLP approaches.\nWe follow a qualitative research approach towards a fairness certification for\nNLP. In particular, we have reviewed a large body of literature on algorithmic\nfairness, and we have conducted semi-structured expert interviews with a wide\nrange of experts from that area. We have systematically devised six fairness\ncriteria for NLP, which can be further refined into 18 sub-categories. Our\ncriteria offer a foundation for operationalizing and testing processes to\ncertify fairness, both from the perspective of the auditor and the audited\norganization.",
+ "authors": "Vincent Freiberger, Erik Buchmann",
+ "published": "2024-01-02",
+ "updated": "2024-01-03",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.CY",
+ "cs.LG",
+ "68T50",
+ "I.2.7"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2308.11483v1",
+ "title": "Large Language Models Sensitivity to The Order of Options in Multiple-Choice Questions",
+ "abstract": "Large Language Models (LLMs) have demonstrated remarkable capabilities in\nvarious NLP tasks. However, previous works have shown these models are\nsensitive towards prompt wording, and few-shot demonstrations and their order,\nposing challenges to fair assessment of these models. As these models become\nmore powerful, it becomes imperative to understand and address these\nlimitations. In this paper, we focus on LLMs robustness on the task of\nmultiple-choice questions -- commonly adopted task to study reasoning and\nfact-retrieving capability of LLMs. Investigating the sensitivity of LLMs\ntowards the order of options in multiple-choice questions, we demonstrate a\nconsiderable performance gap of approximately 13% to 75% in LLMs on different\nbenchmarks, when answer options are reordered, even when using demonstrations\nin a few-shot setting. Through a detailed analysis, we conjecture that this\nsensitivity arises when LLMs are uncertain about the prediction between the\ntop-2/3 choices, and specific options placements may favor certain prediction\nbetween those top choices depending on the question caused by positional bias.\nWe also identify patterns in top-2 choices that amplify or mitigate the model's\nbias toward option placement. We found that for amplifying bias, the optimal\nstrategy involves positioning the top two choices as the first and last\noptions. Conversely, to mitigate bias, we recommend placing these choices among\nthe adjacent options. To validate our conjecture, we conduct various\nexperiments and adopt two approaches to calibrate LLMs' predictions, leading to\nup to 8 percentage points improvement across different models and benchmarks.",
+ "authors": "Pouya Pezeshkpour, Estevam Hruschka",
+ "published": "2023-08-22",
+ "updated": "2023-08-22",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.04814v2",
+ "title": "Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks",
+ "abstract": "We introduce Syntax-Aware Fill-In-the-Middle (SAFIM), a new benchmark for\nevaluating Large Language Models (LLMs) on the code Fill-in-the-Middle (FIM)\ntask. This benchmark focuses on syntax-aware completions of program structures\nsuch as code blocks and conditional expressions, and includes 17,720 examples\nfrom multiple programming languages, sourced from recent code submissions after\nApril 2022 to minimize data contamination. SAFIM provides a robust framework\nwith various prompt designs and novel syntax-aware post-processing techniques,\nfacilitating accurate and fair comparisons across LLMs. Our comprehensive\nevaluation of 15 LLMs shows that FIM pretraining not only enhances FIM\nproficiency but also improves Left-to-Right (L2R) inference using LLMs. Our\nfindings challenge conventional beliefs and suggest that pretraining methods\nand data quality have more impact than model size. SAFIM thus serves as a\nfoundational platform for future research in effective pretraining strategies\nfor code LLMs. The evaluation toolkit and dataset are available at\nhttps://github.com/gonglinyuan/safim, and the leaderboard is available at\nhttps://safimbenchmark.com.",
+ "authors": "Linyuan Gong, Sida Wang, Mostafa Elhoushi, Alvin Cheung",
+ "published": "2024-03-07",
+ "updated": "2024-04-10",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.LG",
+ "cs.SE"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.05668v1",
+ "title": "CFaiRLLM: Consumer Fairness Evaluation in Large-Language Model Recommender System",
+ "abstract": "In the evolving landscape of recommender systems, the integration of Large\nLanguage Models (LLMs) such as ChatGPT marks a new era, introducing the concept\nof Recommendation via LLM (RecLLM). While these advancements promise\nunprecedented personalization and efficiency, they also bring to the fore\ncritical concerns regarding fairness, particularly in how recommendations might\ninadvertently perpetuate or amplify biases associated with sensitive user\nattributes. In order to address these concerns, our study introduces a\ncomprehensive evaluation framework, CFaiRLLM, aimed at evaluating (and thereby\nmitigating) biases on the consumer side within RecLLMs.\n Our research methodically assesses the fairness of RecLLMs by examining how\nrecommendations might vary with the inclusion of sensitive attributes such as\ngender, age, and their intersections, through both similarity alignment and\ntrue preference alignment. By analyzing recommendations generated under\ndifferent conditions-including the use of sensitive attributes in user\nprompts-our framework identifies potential biases in the recommendations\nprovided. A key part of our study involves exploring how different detailed\nstrategies for constructing user profiles (random, top-rated, recent) impact\nthe alignment between recommendations made without consideration of sensitive\nattributes and those that are sensitive-attribute-aware, highlighting the bias\nmechanisms within RecLLMs.\n The findings in our study highlight notable disparities in the fairness of\nrecommendations, particularly when sensitive attributes are integrated into the\nrecommendation process, either individually or in combination. The analysis\ndemonstrates that the choice of user profile sampling strategy plays a\nsignificant role in affecting fairness outcomes, highlighting the complexity of\nachieving fair recommendations in the era of LLMs.",
+ "authors": "Yashar Deldjoo, Tommaso di Noia",
+ "published": "2024-03-08",
+ "updated": "2024-03-08",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.10199v3",
+ "title": "CULTURE-GEN: Revealing Global Cultural Perception in Language Models through Natural Language Prompting",
+ "abstract": "As the utilization of large language models (LLMs) has proliferated\nworldwide, it is crucial for them to have adequate knowledge and fair\nrepresentation for diverse global cultures. In this work, we uncover culture\nperceptions of three SOTA models on 110 countries and regions on 8\nculture-related topics through culture-conditioned generations, and extract\nsymbols from these generations that are associated to each culture by the LLM.\nWe discover that culture-conditioned generation consist of linguistic \"markers\"\nthat distinguish marginalized cultures apart from default cultures. We also\ndiscover that LLMs have an uneven degree of diversity in the culture symbols,\nand that cultures from different geographic regions have different presence in\nLLMs' culture-agnostic generation. Our findings promote further research in\nstudying the knowledge and fairness of global culture perception in LLMs. Code\nand Data can be found in: https://github.com/huihanlhh/Culture-Gen/",
+ "authors": "Huihan Li, Liwei Jiang, Jena D. Huang, Hyunwoo Kim, Sebastin Santy, Taylor Sorensen, Bill Yuchen Lin, Nouha Dziri, Xiang Ren, Yejin Choi",
+ "published": "2024-04-16",
+ "updated": "2024-04-26",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2312.06056v1",
+ "title": "METAL: Metamorphic Testing Framework for Analyzing Large-Language Model Qualities",
+ "abstract": "Large-Language Models (LLMs) have shifted the paradigm of natural language\ndata processing. However, their black-boxed and probabilistic characteristics\ncan lead to potential risks in the quality of outputs in diverse LLM\napplications. Recent studies have tested Quality Attributes (QAs), such as\nrobustness or fairness, of LLMs by generating adversarial input texts. However,\nexisting studies have limited their coverage of QAs and tasks in LLMs and are\ndifficult to extend. Additionally, these studies have only used one evaluation\nmetric, Attack Success Rate (ASR), to assess the effectiveness of their\napproaches. We propose a MEtamorphic Testing for Analyzing LLMs (METAL)\nframework to address these issues by applying Metamorphic Testing (MT)\ntechniques. This approach facilitates the systematic testing of LLM qualities\nby defining Metamorphic Relations (MRs), which serve as modularized evaluation\nmetrics. The METAL framework can automatically generate hundreds of MRs from\ntemplates that cover various QAs and tasks. In addition, we introduced novel\nmetrics that integrate the ASR method into the semantic qualities of text to\nassess the effectiveness of MRs accurately. Through the experiments conducted\nwith three prominent LLMs, we have confirmed that the METAL framework\neffectively evaluates essential QAs on primary LLM tasks and reveals the\nquality risks in LLMs. Moreover, the newly proposed metrics can guide the\noptimal MRs for testing each task and suggest the most effective method for\ngenerating MRs.",
+ "authors": "Sangwon Hyun, Mingyu Guo, M. Ali Babar",
+ "published": "2023-12-11",
+ "updated": "2023-12-11",
+ "primary_cat": "cs.SE",
+ "cats": [
+ "cs.SE",
+ "cs.AI",
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2401.00625v2",
+ "title": "Beyond Efficiency: A Systematic Survey of Resource-Efficient Large Language Models",
+ "abstract": "The burgeoning field of Large Language Models (LLMs), exemplified by\nsophisticated models like OpenAI's ChatGPT, represents a significant\nadvancement in artificial intelligence. These models, however, bring forth\nsubstantial challenges in the high consumption of computational, memory,\nenergy, and financial resources, especially in environments with limited\nresource capabilities. This survey aims to systematically address these\nchallenges by reviewing a broad spectrum of techniques designed to enhance the\nresource efficiency of LLMs. We categorize methods based on their optimization\nfocus: computational, memory, energy, financial, and network resources and\ntheir applicability across various stages of an LLM's lifecycle, including\narchitecture design, pretraining, finetuning, and system design. Additionally,\nthe survey introduces a nuanced categorization of resource efficiency\ntechniques by their specific resource types, which uncovers the intricate\nrelationships and mappings between various resources and corresponding\noptimization techniques. A standardized set of evaluation metrics and datasets\nis also presented to facilitate consistent and fair comparisons across\ndifferent models and techniques. By offering a comprehensive overview of the\ncurrent sota and identifying open research avenues, this survey serves as a\nfoundational reference for researchers and practitioners, aiding them in\ndeveloping more sustainable and efficient LLMs in a rapidly evolving landscape.",
+ "authors": "Guangji Bai, Zheng Chai, Chen Ling, Shiyu Wang, Jiaying Lu, Nan Zhang, Tingwei Shi, Ziyang Yu, Mengdan Zhu, Yifei Zhang, Carl Yang, Yue Cheng, Liang Zhao",
+ "published": "2024-01-01",
+ "updated": "2024-01-04",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2312.15478v1",
+ "title": "A Group Fairness Lens for Large Language Models",
+ "abstract": "The rapid advancement of large language models has revolutionized various\napplications but also raised crucial concerns about their potential to\nperpetuate biases and unfairness when deployed in social media contexts.\nEvaluating LLMs' potential biases and fairness has become crucial, as existing\nmethods rely on limited prompts focusing on just a few groups, lacking a\ncomprehensive categorical perspective. In this paper, we propose evaluating LLM\nbiases from a group fairness lens using a novel hierarchical schema\ncharacterizing diverse social groups. Specifically, we construct a dataset,\nGFair, encapsulating target-attribute combinations across multiple dimensions.\nIn addition, we introduce statement organization, a new open-ended text\ngeneration task, to uncover complex biases in LLMs. Extensive evaluations of\npopular LLMs reveal inherent safety concerns. To mitigate the biases of LLM\nfrom a group fairness perspective, we pioneer a novel chain-of-thought method\nGF-Think to mitigate biases of LLMs from a group fairness perspective.\nExperimental results demonstrate its efficacy in mitigating bias in LLMs to\nachieve fairness.",
+ "authors": "Guanqun Bi, Lei Shen, Yuqiang Xie, Yanan Cao, Tiangang Zhu, Xiaodong He",
+ "published": "2023-12-24",
+ "updated": "2023-12-24",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.08517v1",
+ "title": "Online Safety Analysis for LLMs: a Benchmark, an Assessment, and a Path Forward",
+ "abstract": "While Large Language Models (LLMs) have seen widespread applications across\nnumerous fields, their limited interpretability poses concerns regarding their\nsafe operations from multiple aspects, e.g., truthfulness, robustness, and\nfairness. Recent research has started developing quality assurance methods for\nLLMs, introducing techniques such as offline detector-based or uncertainty\nestimation methods. However, these approaches predominantly concentrate on\npost-generation analysis, leaving the online safety analysis for LLMs during\nthe generation phase an unexplored area. To bridge this gap, we conduct in this\nwork a comprehensive evaluation of the effectiveness of existing online safety\nanalysis methods on LLMs. We begin with a pilot study that validates the\nfeasibility of detecting unsafe outputs in the early generation process.\nFollowing this, we establish the first publicly available benchmark of online\nsafety analysis for LLMs, including a broad spectrum of methods, models, tasks,\ndatasets, and evaluation metrics. Utilizing this benchmark, we extensively\nanalyze the performance of state-of-the-art online safety analysis methods on\nboth open-source and closed-source LLMs. This analysis reveals the strengths\nand weaknesses of individual methods and offers valuable insights into\nselecting the most appropriate method based on specific application scenarios\nand task requirements. Furthermore, we also explore the potential of using\nhybridization methods, i.e., combining multiple methods to derive a collective\nsafety conclusion, to enhance the efficacy of online safety analysis for LLMs.\nOur findings indicate a promising direction for the development of innovative\nand trustworthy quality assurance methodologies for LLMs, facilitating their\nreliable deployments across diverse domains.",
+ "authors": "Xuan Xie, Jiayang Song, Zhehua Zhou, Yuheng Huang, Da Song, Lei Ma",
+ "published": "2024-04-12",
+ "updated": "2024-04-12",
+ "primary_cat": "cs.SE",
+ "cats": [
+ "cs.SE",
+ "cs.AI",
+ "cs.CL",
+ "cs.CR",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2308.05345v3",
+ "title": "RTLLM: An Open-Source Benchmark for Design RTL Generation with Large Language Model",
+ "abstract": "Inspired by the recent success of large language models (LLMs) like ChatGPT,\nresearchers start to explore the adoption of LLMs for agile hardware design,\nsuch as generating design RTL based on natural-language instructions. However,\nin existing works, their target designs are all relatively simple and in a\nsmall scale, and proposed by the authors themselves, making a fair comparison\namong different LLM solutions challenging. In addition, many prior works only\nfocus on the design correctness, without evaluating the design qualities of\ngenerated design RTL. In this work, we propose an open-source benchmark named\nRTLLM, for generating design RTL with natural language instructions. To\nsystematically evaluate the auto-generated design RTL, we summarized three\nprogressive goals, named syntax goal, functionality goal, and design quality\ngoal. This benchmark can automatically provide a quantitative evaluation of any\ngiven LLM-based solution. Furthermore, we propose an easy-to-use yet\nsurprisingly effective prompt engineering technique named self-planning, which\nproves to significantly boost the performance of GPT-3.5 in our proposed\nbenchmark.",
+ "authors": "Yao Lu, Shang Liu, Qijun Zhang, Zhiyao Xie",
+ "published": "2023-08-10",
+ "updated": "2023-11-11",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.AR"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.07609v3",
+ "title": "Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation",
+ "abstract": "The remarkable achievements of Large Language Models (LLMs) have led to the\nemergence of a novel recommendation paradigm -- Recommendation via LLM\n(RecLLM). Nevertheless, it is important to note that LLMs may contain social\nprejudices, and therefore, the fairness of recommendations made by RecLLM\nrequires further investigation. To avoid the potential risks of RecLLM, it is\nimperative to evaluate the fairness of RecLLM with respect to various sensitive\nattributes on the user side. Due to the differences between the RecLLM paradigm\nand the traditional recommendation paradigm, it is problematic to directly use\nthe fairness benchmark of traditional recommendation. To address the dilemma,\nwe propose a novel benchmark called Fairness of Recommendation via LLM\n(FaiRLLM). This benchmark comprises carefully crafted metrics and a dataset\nthat accounts for eight sensitive attributes1 in two recommendation scenarios:\nmusic and movies. By utilizing our FaiRLLM benchmark, we conducted an\nevaluation of ChatGPT and discovered that it still exhibits unfairness to some\nsensitive attributes when generating recommendations. Our code and dataset can\nbe found at https://github.com/jizhi-zhang/FaiRLLM.",
+ "authors": "Jizhi Zhang, Keqin Bao, Yang Zhang, Wenjie Wang, Fuli Feng, Xiangnan He",
+ "published": "2023-05-12",
+ "updated": "2023-10-17",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR",
+ "cs.CL",
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2312.15398v1",
+ "title": "Fairness-Aware Structured Pruning in Transformers",
+ "abstract": "The increasing size of large language models (LLMs) has introduced challenges\nin their training and inference. Removing model components is perceived as a\nsolution to tackle the large model sizes, however, existing pruning methods\nsolely focus on performance, without considering an essential aspect for the\nresponsible use of LLMs: model fairness. It is crucial to address the fairness\nof LLMs towards diverse groups, such as women, Black people, LGBTQ+, Jewish\ncommunities, among others, as they are being deployed and available to a wide\naudience. In this work, first, we investigate how attention heads impact\nfairness and performance in pre-trained transformer-based language models. We\nthen propose a novel method to prune the attention heads that negatively impact\nfairness while retaining the heads critical for performance, i.e. language\nmodeling capabilities. Our approach is practical in terms of time and\nresources, as it does not require fine-tuning the final pruned, and fairer,\nmodel. Our findings demonstrate a reduction in gender bias by 19%, 19.5%,\n39.5%, 34.7%, 23%, and 8% for DistilGPT-2, GPT-2, GPT-Neo of two different\nsizes, GPT-J, and Llama 2 models, respectively, in comparison to the biased\nmodel, with only a slight decrease in performance.",
+ "authors": "Abdelrahman Zayed, Goncalo Mordido, Samira Shabanian, Ioana Baldini, Sarath Chandar",
+ "published": "2023-12-24",
+ "updated": "2023-12-24",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.CY",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2307.11761v1",
+ "title": "Fairness of ChatGPT and the Role Of Explainable-Guided Prompts",
+ "abstract": "Our research investigates the potential of Large-scale Language Models\n(LLMs), specifically OpenAI's GPT, in credit risk assessment-a binary\nclassification task. Our findings suggest that LLMs, when directed by\njudiciously designed prompts and supplemented with domain-specific knowledge,\ncan parallel the performance of traditional Machine Learning (ML) models.\nIntriguingly, they achieve this with significantly less data-40 times less,\nutilizing merely 20 data points compared to the ML's 800. LLMs particularly\nexcel in minimizing false positives and enhancing fairness, both being vital\naspects of risk analysis. While our results did not surpass those of classical\nML models, they underscore the potential of LLMs in analogous tasks, laying a\ngroundwork for future explorations into harnessing the capabilities of LLMs in\ndiverse ML tasks.",
+ "authors": "Yashar Deldjoo",
+ "published": "2023-07-14",
+ "updated": "2023-07-14",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2309.03852v2",
+ "title": "FLM-101B: An Open LLM and How to Train It with $100K Budget",
+ "abstract": "Large language models (LLMs) have achieved remarkable success in NLP and\nmultimodal tasks, among others. Despite these successes, two main challenges\nremain in developing LLMs: (i) high computational cost, and (ii) fair and\nobjective evaluations. In this paper, we report a solution to significantly\nreduce LLM training cost through a growth strategy. We demonstrate that a\n101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US\ndollars. Inspired by IQ tests, we also consolidate an additional range of\nevaluations on top of existing evaluations that focus on knowledge-oriented\nabilities. These IQ evaluations include symbolic mapping, rule understanding,\npattern mining, and anti-interference. Such evaluations minimize the potential\nimpact of memorization. Experimental results show that our model, named\nFLM-101B, trained with a budget of 100K US dollars, achieves performance\ncomparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,\nespecially on the additional range of IQ evaluations. The checkpoint of\nFLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.",
+ "authors": "Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang",
+ "published": "2023-09-07",
+ "updated": "2023-09-17",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.03514v3",
+ "title": "Can Large Language Models Transform Computational Social Science?",
+ "abstract": "Large Language Models (LLMs) are capable of successfully performing many\nlanguage processing tasks zero-shot (without training data). If zero-shot LLMs\ncan also reliably classify and explain social phenomena like persuasiveness and\npolitical ideology, then LLMs could augment the Computational Social Science\n(CSS) pipeline in important ways. This work provides a road map for using LLMs\nas CSS tools. Towards this end, we contribute a set of prompting best practices\nand an extensive evaluation pipeline to measure the zero-shot performance of 13\nlanguage models on 25 representative English CSS benchmarks. On taxonomic\nlabeling tasks (classification), LLMs fail to outperform the best fine-tuned\nmodels but still achieve fair levels of agreement with humans. On free-form\ncoding tasks (generation), LLMs produce explanations that often exceed the\nquality of crowdworkers' gold references. We conclude that the performance of\ntoday's LLMs can augment the CSS research pipeline in two ways: (1) serving as\nzero-shot data annotators on human annotation teams, and (2) bootstrapping\nchallenging creative generation tasks (e.g., explaining the underlying\nattributes of a text). In summary, LLMs are posed to meaningfully participate\nin social science analysis in partnership with humans.",
+ "authors": "Caleb Ziems, William Held, Omar Shaikh, Jiaao Chen, Zhehao Zhang, Diyi Yang",
+ "published": "2023-04-12",
+ "updated": "2024-02-26",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2309.08836v2",
+ "title": "Bias and Fairness in Chatbots: An Overview",
+ "abstract": "Chatbots have been studied for more than half a century. With the rapid\ndevelopment of natural language processing (NLP) technologies in recent years,\nchatbots using large language models (LLMs) have received much attention\nnowadays. Compared with traditional ones, modern chatbots are more powerful and\nhave been used in real-world applications. There are however, bias and fairness\nconcerns in modern chatbot design. Due to the huge amounts of training data,\nextremely large model sizes, and lack of interpretability, bias mitigation and\nfairness preservation of modern chatbots are challenging. Thus, a comprehensive\noverview on bias and fairness in chatbot systems is given in this paper. The\nhistory of chatbots and their categories are first reviewed. Then, bias sources\nand potential harms in applications are analyzed. Considerations in designing\nfair and unbiased chatbot systems are examined. Finally, future research\ndirections are discussed.",
+ "authors": "Jintang Xue, Yun-Cheng Wang, Chengwei Wei, Xiaofeng Liu, Jonghye Woo, C. -C. Jay Kuo",
+ "published": "2023-09-16",
+ "updated": "2023-12-10",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.14607v2",
+ "title": "Confronting LLMs with Traditional ML: Rethinking the Fairness of Large Language Models in Tabular Classifications",
+ "abstract": "Recent literature has suggested the potential of using large language models\n(LLMs) to make classifications for tabular tasks. However, LLMs have been shown\nto exhibit harmful social biases that reflect the stereotypes and inequalities\npresent in society. To this end, as well as the widespread use of tabular data\nin many high-stake applications, it is important to explore the following\nquestions: what sources of information do LLMs draw upon when making\nclassifications for tabular tasks; whether and to what extent are LLM\nclassifications for tabular data influenced by social biases and stereotypes;\nand what are the consequential implications for fairness?\n Through a series of experiments, we delve into these questions and show that\nLLMs tend to inherit social biases from their training data which significantly\nimpact their fairness in tabular classification tasks. Furthermore, our\ninvestigations show that in the context of bias mitigation, though in-context\nlearning and finetuning have a moderate effect, the fairness metric gap between\ndifferent subgroups is still larger than that in traditional machine learning\nmodels, such as Random Forest and shallow Neural Networks. This observation\nemphasizes that the social biases are inherent within the LLMs themselves and\ninherited from their pretraining corpus, not only from the downstream task\ndatasets. Besides, we demonstrate that label-flipping of in-context examples\ncan significantly reduce biases, further highlighting the presence of inherent\nbias within LLMs.",
+ "authors": "Yanchen Liu, Srishti Gautam, Jiaqi Ma, Himabindu Lakkaraju",
+ "published": "2023-10-23",
+ "updated": "2024-04-02",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.18276v1",
+ "title": "Bias Neutralization Framework: Measuring Fairness in Large Language Models with Bias Intelligence Quotient (BiQ)",
+ "abstract": "The burgeoning influence of Large Language Models (LLMs) in shaping public\ndiscourse and decision-making underscores the imperative to address inherent\nbiases within these AI systems. In the wake of AI's expansive integration\nacross sectors, addressing racial bias in LLMs has never been more critical.\nThis paper introduces a novel framework called Comprehensive Bias\nNeutralization Framework (CBNF) which embodies an innovative approach to\nquantifying and mitigating biases within LLMs. Our framework combines the Large\nLanguage Model Bias Index (LLMBI) [Oketunji, A., Anas, M., Saina, D., (2023)]\nand Bias removaL with No Demographics (BLIND) [Orgad, H., Belinkov, Y. (2023)]\nmethodologies to create a new metric called Bias Intelligence Quotient\n(BiQ)which detects, measures, and mitigates racial bias in LLMs without\nreliance on demographic annotations.\n By introducing a new metric called BiQ that enhances LLMBI with additional\nfairness metrics, CBNF offers a multi-dimensional metric for bias assessment,\nunderscoring the necessity of a nuanced approach to fairness in AI [Mehrabi et\nal., 2021]. This paper presents a detailed analysis of Latimer AI (a language\nmodel incrementally trained on black history and culture) in comparison to\nChatGPT 3.5, illustrating Latimer AI's efficacy in detecting racial, cultural,\nand gender biases through targeted training and refined bias mitigation\nstrategies [Latimer & Bender, 2023].",
+ "authors": "Malur Narayan, John Pasmore, Elton Sampaio, Vijay Raghavan, Gabriella Waters",
+ "published": "2024-04-28",
+ "updated": "2024-04-28",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "D.1; I.2"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.01349v1",
+ "title": "Fairness in Large Language Models: A Taxonomic Survey",
+ "abstract": "Large Language Models (LLMs) have demonstrated remarkable success across\nvarious domains. However, despite their promising performance in numerous\nreal-world applications, most of these algorithms lack fairness considerations.\nConsequently, they may lead to discriminatory outcomes against certain\ncommunities, particularly marginalized populations, prompting extensive study\nin fair LLMs. On the other hand, fairness in LLMs, in contrast to fairness in\ntraditional machine learning, entails exclusive backgrounds, taxonomies, and\nfulfillment techniques. To this end, this survey presents a comprehensive\noverview of recent advances in the existing literature concerning fair LLMs.\nSpecifically, a brief introduction to LLMs is provided, followed by an analysis\nof factors contributing to bias in LLMs. Additionally, the concept of fairness\nin LLMs is discussed categorically, summarizing metrics for evaluating bias in\nLLMs and existing algorithms for promoting fairness. Furthermore, resources for\nevaluating bias in LLMs, including toolkits and datasets, are summarized.\nFinally, existing research challenges and open questions are discussed.",
+ "authors": "Zhibo Chu, Zichong Wang, Wenbin Zhang",
+ "published": "2024-03-31",
+ "updated": "2024-03-31",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.13343v1",
+ "title": "Challenges and Contributing Factors in the Utilization of Large Language Models (LLMs)",
+ "abstract": "With the development of large language models (LLMs) like the GPT series,\ntheir widespread use across various application scenarios presents a myriad of\nchallenges. This review initially explores the issue of domain specificity,\nwhere LLMs may struggle to provide precise answers to specialized questions\nwithin niche fields. The problem of knowledge forgetting arises as these LLMs\nmight find it hard to balance old and new information. The knowledge repetition\nphenomenon reveals that sometimes LLMs might deliver overly mechanized\nresponses, lacking depth and originality. Furthermore, knowledge illusion\ndescribes situations where LLMs might provide answers that seem insightful but\nare actually superficial, while knowledge toxicity focuses on harmful or biased\ninformation outputs. These challenges underscore problems in the training data\nand algorithmic design of LLMs. To address these issues, it's suggested to\ndiversify training data, fine-tune models, enhance transparency and\ninterpretability, and incorporate ethics and fairness training. Future\ntechnological trends might lean towards iterative methodologies, multimodal\nlearning, model personalization and customization, and real-time learning and\nfeedback mechanisms. In conclusion, future LLMs should prioritize fairness,\ntransparency, and ethics, ensuring they uphold high moral and ethical standards\nwhen serving humanity.",
+ "authors": "Xiaoliang Chen, Liangbin Li, Le Chang, Yunhe Huang, Yuxuan Zhao, Yuxiao Zhang, Dinuo Li",
+ "published": "2023-10-20",
+ "updated": "2023-10-20",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.10567v3",
+ "title": "InSaAF: Incorporating Safety through Accuracy and Fairness | Are LLMs ready for the Indian Legal Domain?",
+ "abstract": "Recent advancements in language technology and Artificial Intelligence have\nresulted in numerous Language Models being proposed to perform various tasks in\nthe legal domain ranging from predicting judgments to generating summaries.\nDespite their immense potential, these models have been proven to learn and\nexhibit societal biases and make unfair predictions. In this study, we explore\nthe ability of Large Language Models (LLMs) to perform legal tasks in the\nIndian landscape when social factors are involved. We present a novel metric,\n$\\beta$-weighted $\\textit{Legal Safety Score ($LSS_{\\beta}$)}$, which\nencapsulates both the fairness and accuracy aspects of the LLM. We assess LLMs'\nsafety by considering its performance in the $\\textit{Binary Statutory\nReasoning}$ task and its fairness exhibition with respect to various axes of\ndisparities in the Indian society. Task performance and fairness scores of\nLLaMA and LLaMA--2 models indicate that the proposed $LSS_{\\beta}$ metric can\neffectively determine the readiness of a model for safe usage in the legal\nsector. We also propose finetuning pipelines, utilising specialised legal\ndatasets, as a potential method to mitigate bias and improve model safety. The\nfinetuning procedures on LLaMA and LLaMA--2 models increase the $LSS_{\\beta}$,\nimproving their usability in the Indian legal domain. Our code is publicly\nreleased.",
+ "authors": "Yogesh Tripathi, Raghav Donakanti, Sahil Girhepuje, Ishan Kavathekar, Bhaskara Hanuma Vedula, Gokul S Krishnan, Shreya Goyal, Anmol Goel, Balaraman Ravindran, Ponnurangam Kumaraguru",
+ "published": "2024-02-16",
+ "updated": "2024-02-21",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2401.11033v4",
+ "title": "FAIR Enough: How Can We Develop and Assess a FAIR-Compliant Dataset for Large Language Models' Training?",
+ "abstract": "The rapid evolution of Large Language Models (LLMs) highlights the necessity\nfor ethical considerations and data integrity in AI development, particularly\nemphasizing the role of FAIR (Findable, Accessible, Interoperable, Reusable)\ndata principles. While these principles are crucial for ethical data\nstewardship, their specific application in the context of LLM training data\nremains an under-explored area. This research gap is the focus of our study,\nwhich begins with an examination of existing literature to underline the\nimportance of FAIR principles in managing data for LLM training. Building upon\nthis, we propose a novel framework designed to integrate FAIR principles into\nthe LLM development lifecycle. A contribution of our work is the development of\na comprehensive checklist intended to guide researchers and developers in\napplying FAIR data principles consistently across the model development\nprocess. The utility and effectiveness of our framework are validated through a\ncase study on creating a FAIR-compliant dataset aimed at detecting and\nmitigating biases in LLMs. We present this framework to the community as a tool\nto foster the creation of technologically advanced, ethically grounded, and\nsocially responsible AI models.",
+ "authors": "Shaina Raza, Shardul Ghuge, Chen Ding, Elham Dolatabadi, Deval Pandya",
+ "published": "2024-01-19",
+ "updated": "2024-04-03",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.02049v1",
+ "title": "Post Turing: Mapping the landscape of LLM Evaluation",
+ "abstract": "In the rapidly evolving landscape of Large Language Models (LLMs),\nintroduction of well-defined and standardized evaluation methodologies remains\na crucial challenge. This paper traces the historical trajectory of LLM\nevaluations, from the foundational questions posed by Alan Turing to the modern\nera of AI research. We categorize the evolution of LLMs into distinct periods,\neach characterized by its unique benchmarks and evaluation criteria. As LLMs\nincreasingly mimic human-like behaviors, traditional evaluation proxies, such\nas the Turing test, have become less reliable. We emphasize the pressing need\nfor a unified evaluation system, given the broader societal implications of\nthese models. Through an analysis of common evaluation methodologies, we\nadvocate for a qualitative shift in assessment approaches, underscoring the\nimportance of standardization and objective criteria. This work serves as a\ncall for the AI community to collaboratively address the challenges of LLM\nevaluation, ensuring their reliability, fairness, and societal benefit.",
+ "authors": "Alexey Tikhonov, Ivan P. Yamshchikov",
+ "published": "2023-11-03",
+ "updated": "2023-11-03",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "68T50",
+ "I.2.7"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.03033v1",
+ "title": "Beyond Words: A Mathematical Framework for Interpreting Large Language Models",
+ "abstract": "Large language models (LLMs) are powerful AI tools that can generate and\ncomprehend natural language text and other complex information. However, the\nfield lacks a mathematical framework to systematically describe, compare and\nimprove LLMs. We propose Hex a framework that clarifies key terms and concepts\nin LLM research, such as hallucinations, alignment, self-verification and\nchain-of-thought reasoning. The Hex framework offers a precise and consistent\nway to characterize LLMs, identify their strengths and weaknesses, and\nintegrate new findings. Using Hex, we differentiate chain-of-thought reasoning\nfrom chain-of-thought prompting and establish the conditions under which they\nare equivalent. This distinction clarifies the basic assumptions behind\nchain-of-thought prompting and its implications for methods that use it, such\nas self-verification and prompt programming.\n Our goal is to provide a formal framework for LLMs that can help both\nresearchers and practitioners explore new possibilities for generative AI. We\ndo not claim to have a definitive solution, but rather a tool for opening up\nnew research avenues. We argue that our formal definitions and results are\ncrucial for advancing the discussion on how to build generative AI systems that\nare safe, reliable, fair and robust, especially in domains like healthcare and\nsoftware engineering.",
+ "authors": "Javier Gonz\u00e1lez, Aditya V. Nori",
+ "published": "2023-11-06",
+ "updated": "2023-11-06",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.05694v1",
+ "title": "A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics",
+ "abstract": "The utilization of large language models (LLMs) in the Healthcare domain has\ngenerated both excitement and concern due to their ability to effectively\nrespond to freetext queries with certain professional knowledge. This survey\noutlines the capabilities of the currently developed LLMs for Healthcare and\nexplicates their development process, with the aim of providing an overview of\nthe development roadmap from traditional Pretrained Language Models (PLMs) to\nLLMs. Specifically, we first explore the potential of LLMs to enhance the\nefficiency and effectiveness of various Healthcare applications highlighting\nboth the strengths and limitations. Secondly, we conduct a comparison between\nthe previous PLMs and the latest LLMs, as well as comparing various LLMs with\neach other. Then we summarize related Healthcare training data, training\nmethods, optimization strategies, and usage. Finally, the unique concerns\nassociated with deploying LLMs in Healthcare settings are investigated,\nparticularly regarding fairness, accountability, transparency and ethics. Our\nsurvey provide a comprehensive investigation from perspectives of both computer\nscience and Healthcare specialty. Besides the discussion about Healthcare\nconcerns, we supports the computer science community by compiling a collection\nof open source resources, such as accessible datasets, the latest\nmethodologies, code implementations, and evaluation benchmarks in the Github.\nSummarily, we contend that a significant paradigm shift is underway,\ntransitioning from PLMs to LLMs. This shift encompasses a move from\ndiscriminative AI approaches to generative AI approaches, as well as a shift\nfrom model-centered methodologies to datacentered methodologies.",
+ "authors": "Kai He, Rui Mao, Qika Lin, Yucheng Ruan, Xiang Lan, Mengling Feng, Erik Cambria",
+ "published": "2023-10-09",
+ "updated": "2023-10-09",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.04205v2",
+ "title": "Rephrase and Respond: Let Large Language Models Ask Better Questions for Themselves",
+ "abstract": "Misunderstandings arise not only in interpersonal communication but also\nbetween humans and Large Language Models (LLMs). Such discrepancies can make\nLLMs interpret seemingly unambiguous questions in unexpected ways, yielding\nincorrect responses. While it is widely acknowledged that the quality of a\nprompt, such as a question, significantly impacts the quality of the response\nprovided by LLMs, a systematic method for crafting questions that LLMs can\nbetter comprehend is still underdeveloped. In this paper, we present a method\nnamed `Rephrase and Respond' (RaR), which allows LLMs to rephrase and expand\nquestions posed by humans and provide responses in a single prompt. This\napproach serves as a simple yet effective prompting method for improving\nperformance. We also introduce a two-step variant of RaR, where a rephrasing\nLLM first rephrases the question and then passes the original and rephrased\nquestions together to a different responding LLM. This facilitates the\neffective utilization of rephrased questions generated by one LLM with another.\nOur experiments demonstrate that our methods significantly improve the\nperformance of different models across a wide range to tasks. We further\nprovide a comprehensive comparison between RaR and the popular Chain-of-Thought\n(CoT) methods, both theoretically and empirically. We show that RaR is\ncomplementary to CoT and can be combined with CoT to achieve even better\nperformance. Our work not only contributes to enhancing LLM performance\nefficiently and effectively but also sheds light on a fair evaluation of LLM\ncapabilities. Data and codes are available at\nhttps://github.com/uclaml/Rephrase-and-Respond.",
+ "authors": "Yihe Deng, Weitong Zhang, Zixiang Chen, Quanquan Gu",
+ "published": "2023-11-07",
+ "updated": "2024-04-18",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2307.15997v1",
+ "title": "RoCar: A Relationship Network-based Evaluation Method to Large Language Models",
+ "abstract": "Large language models (LLMs) have received increasing attention. However, due\nto the complexity of its capabilities, how to rationally evaluate the\ncapabilities of LLMs is still a task to be solved. We propose the RoCar method,\nwhich utilizes the defined basic schemas to randomly construct a task graph and\ngenerates natural language evaluation tasks based on the task graph to evaluate\nthe reasoning and memory abilities of LLMs respectively. Due to the very large\nrandomness of the task construction process, it is possible to ensure that none\nof the LLMs to be tested has directly learned the evaluation tasks,\nguaranteeing the fairness of the evaluation method.",
+ "authors": "Ming Wang, Wenfang Wu, Chongyun Gao, Daling Wang, Shi Feng, Yifei Zhang",
+ "published": "2023-07-29",
+ "updated": "2023-07-29",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.17553v1",
+ "title": "RuBia: A Russian Language Bias Detection Dataset",
+ "abstract": "Warning: this work contains upsetting or disturbing content.\n Large language models (LLMs) tend to learn the social and cultural biases\npresent in the raw pre-training data. To test if an LLM's behavior is fair,\nfunctional datasets are employed, and due to their purpose, these datasets are\nhighly language and culture-specific. In this paper, we address a gap in the\nscope of multilingual bias evaluation by presenting a bias detection dataset\nspecifically designed for the Russian language, dubbed as RuBia. The RuBia\ndataset is divided into 4 domains: gender, nationality, socio-economic status,\nand diverse, each of the domains is further divided into multiple fine-grained\nsubdomains. Every example in the dataset consists of two sentences with the\nfirst reinforcing a potentially harmful stereotype or trope and the second\ncontradicting it. These sentence pairs were first written by volunteers and\nthen validated by native-speaking crowdsourcing workers. Overall, there are\nnearly 2,000 unique sentence pairs spread over 19 subdomains in RuBia. To\nillustrate the dataset's purpose, we conduct a diagnostic evaluation of\nstate-of-the-art or near-state-of-the-art LLMs and discuss the LLMs'\npredisposition to social biases.",
+ "authors": "Veronika Grigoreva, Anastasiia Ivanova, Ilseyar Alimova, Ekaterina Artemova",
+ "published": "2024-03-26",
+ "updated": "2024-03-26",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.11764v1",
+ "title": "ChatGPT Based Data Augmentation for Improved Parameter-Efficient Debiasing of LLMs",
+ "abstract": "Large Language models (LLMs), while powerful, exhibit harmful social biases.\nDebiasing is often challenging due to computational costs, data constraints,\nand potential degradation of multi-task language capabilities. This work\nintroduces a novel approach utilizing ChatGPT to generate synthetic training\ndata, aiming to enhance the debiasing of LLMs. We propose two strategies:\nTargeted Prompting, which provides effective debiasing for known biases but\nnecessitates prior specification of bias in question; and General Prompting,\nwhich, while slightly less effective, offers debiasing across various\ncategories. We leverage resource-efficient LLM debiasing using adapter tuning\nand compare the effectiveness of our synthetic data to existing debiasing\ndatasets. Our results reveal that: (1) ChatGPT can efficiently produce\nhigh-quality training data for debiasing other LLMs; (2) data produced via our\napproach surpasses existing datasets in debiasing performance while also\npreserving internal knowledge of a pre-trained LLM; and (3) synthetic data\nexhibits generalizability across categories, effectively mitigating various\nbiases, including intersectional ones. These findings underscore the potential\nof synthetic data in advancing the fairness of LLMs with minimal retraining\ncost.",
+ "authors": "Pengrui Han, Rafal Kocielnik, Adhithya Saravanan, Roy Jiang, Or Sharir, Anima Anandkumar",
+ "published": "2024-02-19",
+ "updated": "2024-02-19",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.CY",
+ "68T50",
+ "I.2.7; K.4.1"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.09606v1",
+ "title": "Large Language Models and Causal Inference in Collaboration: A Comprehensive Survey",
+ "abstract": "Causal inference has shown potential in enhancing the predictive accuracy,\nfairness, robustness, and explainability of Natural Language Processing (NLP)\nmodels by capturing causal relationships among variables. The emergence of\ngenerative Large Language Models (LLMs) has significantly impacted various NLP\ndomains, particularly through their advanced reasoning capabilities. This\nsurvey focuses on evaluating and improving LLMs from a causal view in the\nfollowing areas: understanding and improving the LLMs' reasoning capacity,\naddressing fairness and safety issues in LLMs, complementing LLMs with\nexplanations, and handling multimodality. Meanwhile, LLMs' strong reasoning\ncapacities can in turn contribute to the field of causal inference by aiding\ncausal relationship discovery and causal effect estimations. This review\nexplores the interplay between causal inference frameworks and LLMs from both\nperspectives, emphasizing their collective potential to further the development\nof more advanced and equitable artificial intelligence systems.",
+ "authors": "Xiaoyu Liu, Paiheng Xu, Junda Wu, Jiaxin Yuan, Yifan Yang, Yuhang Zhou, Fuxiao Liu, Tianrui Guan, Haoliang Wang, Tong Yu, Julian McAuley, Wei Ai, Furong Huang",
+ "published": "2024-03-14",
+ "updated": "2024-03-14",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2312.14769v3",
+ "title": "Large Language Model (LLM) Bias Index -- LLMBI",
+ "abstract": "The Large Language Model Bias Index (LLMBI) is a pioneering approach designed\nto quantify and address biases inherent in large language models (LLMs), such\nas GPT-4. We recognise the increasing prevalence and impact of LLMs across\ndiverse sectors. This research introduces a novel metric, LLMBI, to\nsystematically measure and mitigate biases potentially skewing model responses.\nWe formulated LLMBI using a composite scoring system incorporating multiple\ndimensions of bias, including but not limited to age, gender, and racial\nbiases. To operationalise this metric, we engaged in a multi-step process\ninvolving collecting and annotating LLM responses, applying sophisticated\nNatural Language Processing (NLP) techniques for bias detection, and computing\nthe LLMBI score through a specially crafted mathematical formula. The formula\nintegrates weighted averages of various bias dimensions, a penalty for dataset\ndiversity deficiencies, and a correction for sentiment biases. Our empirical\nanalysis, conducted using responses from OpenAI's API, employs advanced\nsentiment analysis as a representative method for bias detection. The research\nreveals LLMs, whilst demonstrating impressive capabilities in text generation,\nexhibit varying degrees of bias across different dimensions. LLMBI provides a\nquantifiable measure to compare biases across models and over time, offering a\nvital tool for systems engineers, researchers and regulators in enhancing the\nfairness and reliability of LLMs. It highlights the potential of LLMs in\nmimicking unbiased human-like responses. Additionally, it underscores the\nnecessity of continuously monitoring and recalibrating such models to align\nwith evolving societal norms and ethical standards.",
+ "authors": "Abiodun Finbarrs Oketunji, Muhammad Anas, Deepthi Saina",
+ "published": "2023-12-22",
+ "updated": "2023-12-29",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.CY",
+ "cs.LG",
+ "I.2.7"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.19118v1",
+ "title": "Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate",
+ "abstract": "Modern large language models (LLMs) like ChatGPT have shown remarkable\nperformance on general language tasks but still struggle on complex reasoning\ntasks, which drives the research on cognitive behaviors of LLMs to explore\nhuman-like problem-solving strategies. Along this direction, one representative\nstrategy is self-reflection, which asks an LLM to refine the solution with the\nfeedback generated by itself iteratively. However, our study shows that such\nreflection-style methods suffer from the Degeneration-of-Thought (DoT) problem:\nonce the LLM has established confidence in its solutions, it is unable to\ngenerate novel thoughts later through reflection even if its initial stance is\nincorrect. To address the DoT problem, we propose a Multi-Agent Debate (MAD)\nframework, in which multiple agents express their arguments in the state of\n\"tit for tat\" and a judge manages the debate process to obtain a final\nsolution. Clearly, our MAD framework encourages divergent thinking in LLMs\nwhich would be helpful for tasks that require deep levels of contemplation.\nExperiment results on two challenging datasets, commonsense machine translation\nand counter-intuitive arithmetic reasoning, demonstrate the effectiveness of\nour MAD framework. Extensive analyses suggest that the adaptive break of debate\nand the modest level of \"tit for tat\" state are required for MAD to obtain good\nperformance. Moreover, we find that LLMs might not be a fair judge if different\nLLMs are used for agents. Codes:\nhttps://github.com/Skytliang/Multi-Agents-Debate",
+ "authors": "Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, Shuming Shi",
+ "published": "2023-05-30",
+ "updated": "2023-05-30",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.14473v1",
+ "title": "The Ethics of ChatGPT in Medicine and Healthcare: A Systematic Review on Large Language Models (LLMs)",
+ "abstract": "With the introduction of ChatGPT, Large Language Models (LLMs) have received\nenormous attention in healthcare. Despite their potential benefits, researchers\nhave underscored various ethical implications. While individual instances have\ndrawn much attention, the debate lacks a systematic overview of practical\napplications currently researched and ethical issues connected to them. Against\nthis background, this work aims to map the ethical landscape surrounding the\ncurrent stage of deployment of LLMs in medicine and healthcare. Electronic\ndatabases and preprint servers were queried using a comprehensive search\nstrategy. Studies were screened and extracted following a modified rapid review\napproach. Methodological quality was assessed using a hybrid approach. For 53\nrecords, a meta-aggregative synthesis was performed. Four fields of\napplications emerged and testify to a vivid exploration phase. Advantages of\nusing LLMs are attributed to their capacity in data analysis, personalized\ninformation provisioning, support in decision-making, mitigating information\nloss and enhancing information accessibility. However, we also identifies\nrecurrent ethical concerns connected to fairness, bias, non-maleficence,\ntransparency, and privacy. A distinctive concern is the tendency to produce\nharmful misinformation or convincingly but inaccurate content. A recurrent plea\nfor ethical guidance and human oversight is evident. Given the variety of use\ncases, it is suggested that the ethical guidance debate be reframed to focus on\ndefining what constitutes acceptable human oversight across the spectrum of\napplications. This involves considering diverse settings, varying potentials\nfor harm, and different acceptable thresholds for performance and certainty in\nhealthcare. In addition, a critical inquiry is necessary to determine the\nextent to which the current experimental use of LLMs is necessary and\njustified.",
+ "authors": "Joschka Haltaufderheide, Robert Ranisch",
+ "published": "2024-03-21",
+ "updated": "2024-03-21",
+ "primary_cat": "cs.CY",
+ "cats": [
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2307.03838v2",
+ "title": "RADAR: Robust AI-Text Detection via Adversarial Learning",
+ "abstract": "Recent advances in large language models (LLMs) and the intensifying\npopularity of ChatGPT-like applications have blurred the boundary of\nhigh-quality text generation between humans and machines. However, in addition\nto the anticipated revolutionary changes to our technology and society, the\ndifficulty of distinguishing LLM-generated texts (AI-text) from human-generated\ntexts poses new challenges of misuse and fairness, such as fake content\ngeneration, plagiarism, and false accusations of innocent writers. While\nexisting works show that current AI-text detectors are not robust to LLM-based\nparaphrasing, this paper aims to bridge this gap by proposing a new framework\ncalled RADAR, which jointly trains a robust AI-text detector via adversarial\nlearning. RADAR is based on adversarial training of a paraphraser and a\ndetector. The paraphraser's goal is to generate realistic content to evade\nAI-text detection. RADAR uses the feedback from the detector to update the\nparaphraser, and vice versa. Evaluated with 8 different LLMs (Pythia, Dolly\n2.0, Palmyra, Camel, GPT-J, Dolly 1.0, LLaMA, and Vicuna) across 4 datasets,\nexperimental results show that RADAR significantly outperforms existing AI-text\ndetection methods, especially when paraphrasing is in place. We also identify\nthe strong transferability of RADAR from instruction-tuned LLMs to other LLMs,\nand evaluate the improved capability of RADAR via GPT-3.5-Turbo.",
+ "authors": "Xiaomeng Hu, Pin-Yu Chen, Tsung-Yi Ho",
+ "published": "2023-07-07",
+ "updated": "2023-10-24",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.03192v1",
+ "title": "Do Large Language Models Rank Fairly? An Empirical Study on the Fairness of LLMs as Rankers",
+ "abstract": "The integration of Large Language Models (LLMs) in information retrieval has\nraised a critical reevaluation of fairness in the text-ranking models. LLMs,\nsuch as GPT models and Llama2, have shown effectiveness in natural language\nunderstanding tasks, and prior works (e.g., RankGPT) have also demonstrated\nthat the LLMs exhibit better performance than the traditional ranking models in\nthe ranking task. However, their fairness remains largely unexplored. This\npaper presents an empirical study evaluating these LLMs using the TREC Fair\nRanking dataset, focusing on the representation of binary protected attributes\nsuch as gender and geographic location, which are historically underrepresented\nin search outcomes. Our analysis delves into how these LLMs handle queries and\ndocuments related to these attributes, aiming to uncover biases in their\nranking algorithms. We assess fairness from both user and content perspectives,\ncontributing an empirical benchmark for evaluating LLMs as the fair ranker.",
+ "authors": "Yuan Wang, Xuyang Wu, Hsin-Tai Wu, Zhiqiang Tao, Yi Fang",
+ "published": "2024-04-04",
+ "updated": "2024-04-04",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR",
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ }
+ ],
+ [
+ {
+ "url": "http://arxiv.org/abs/2404.12744v1",
+ "title": "Beyond Human Norms: Unveiling Unique Values of Large Language Models through Interdisciplinary Approaches",
+ "abstract": "Recent advancements in Large Language Models (LLMs) have revolutionized the\nAI field but also pose potential safety and ethical risks. Deciphering LLMs'\nembedded values becomes crucial for assessing and mitigating their risks.\nDespite extensive investigation into LLMs' values, previous studies heavily\nrely on human-oriented value systems in social sciences. Then, a natural\nquestion arises: Do LLMs possess unique values beyond those of humans? Delving\ninto it, this work proposes a novel framework, ValueLex, to reconstruct LLMs'\nunique value system from scratch, leveraging psychological methodologies from\nhuman personality/value research. Based on Lexical Hypothesis, ValueLex\nintroduces a generative approach to elicit diverse values from 30+ LLMs,\nsynthesizing a taxonomy that culminates in a comprehensive value framework via\nfactor analysis and semantic clustering. We identify three core value\ndimensions, Competence, Character, and Integrity, each with specific\nsubdimensions, revealing that LLMs possess a structured, albeit non-human,\nvalue system. Based on this system, we further develop tailored projective\ntests to evaluate and analyze the value inclinations of LLMs across different\nmodel sizes, training methods, and data sources. Our framework fosters an\ninterdisciplinary paradigm of understanding LLMs, paving the way for future AI\nalignment and regulation.",
+ "authors": "Pablo Biedma, Xiaoyuan Yi, Linus Huang, Maosong Sun, Xing Xie",
+ "published": "2024-04-19",
+ "updated": "2024-04-19",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "label": "Original Paper",
+ "paper_cat": "LLM Fairness",
+ "gt": "Human Value System Value theories are established to provide a foundational framework for elucidating human motivations and facilitating cross-culture research. The most representative one, Schwartz\u2019s Theory of Basic Human Values (STBHV) (Schwartz, 1992), creates a quintessential system with ten universal value types, reflecting a paradigm that distills the vast complexities of human beliefs into tangible constructs. Attempts to define value dimensions have not been univocal. Inglehart\u2019s Post-Materialist Thesis (Inglehart, 1977) juxtaposes materialistic and post-materialistic value orientations, and Hofstede\u2019s Cultural Dimensions Theory (Hofstede, 1980) identifies societal-level patterns of values. Further expanding the discourse on moral values, Moral Foundations Theory (MFT) (Haidt & Joseph, 2004) explores the five innate bases of moral judgment which have been used to understand root causes of moral and political divisions. Gert\u2019s Common Morality (Gert, 2004) articulates a shared ethical system with ten moral rules. Kohlberg\u2019s Theory of Moral Development (Kohlberg, 1975) posits a framework for the evolution of moral reasoning through six distinct stages. Despite their utility, scholars continue to examine the extent to which these frameworks can be applied globally (Gurven et al., 2013), underscoring the dynamic nature of value research and continuous refinements to existing systems. Evaluation of LLMs\u2019 Traits The application of human-oriented psychometrics to LLMs has become an intriguing area of inquiry. Li et al. (2022) provided LLMs with a test designed 2 Pre-print to measure \u201cdark triad\u201d traits in humans (Jones & Paulhus, 2014), Loconte et al. (2023) administered tests for the assessment of \u201cPrefrontal Functioning\u201d, and Webb et al. (2023) have extended fluid intelligence visual tests into the LLM domain. Notably, the Big Five Taxonomy (BFT) has also been subjected to LLM analysis (Serapio-Garc\u00b4 \u0131a et al., 2023; Jiang et al., 2024). Coda-Forno et al. (2023) delve into GPT-3.5\u2019s reactions to psychiatric assessments to study simulated anxiety, and Mao et al. (2023) demonstrated the modifiable nature of LLM personality, bolstering observations that LLMs can exhibit human-like characteristics (Li et al., 2023a; Safdari et al., 2023). While many argue for the feasibility of applying such tests (Pellert et al., 2023), others have raised contention. Dorner et al. (2023) highlight the agree bias inherent in LLM responses, which complicates direct comparison with human results due to the measurement invariance issue. Critically, Gupta et al. (2023) show that human personality assessments are incompatible with LLMs, since LLM-produced scores fluctuate significantly with the prompter\u2019s subjective phrasing. These discrepancies raise profound questions about the compatibility of human psychometric instruments to AI, necessitating an LLM-specific value framework for value assessment. Evaluation of LLMs\u2019 Values The discussions on Machines\u2019 ethics and values (Moor, 2006) date back to the Three Laws of Robotics (Asimov, 1950). With the rapid evolution of LLMs, this direction has gained notable attention again. Fraser et al. (2022) and Abdulhai et al. (2023) interrogate LLMs using established ethical frameworks like the Moral Foundation Questionnaire (MFQ) (Graham et al., 2008) and Shweder\u2019s \u2018Big Three of Morality (Shweder et al., 2013). Echoing this sentiment, Scherrer et al. 
(2024) introduces a statistical method for eliciting beliefs encoded in LLMs and studies different beliefs encoded in diverse LLMs. Simmons (2022a) further utilizes MFQ to analyze LLMs\u2019 political identity. Arora et al. (2023) use Hofstede\u2019s theory to understand LLMs\u2019 cultural differences in values. Cao et al. (2023a) adopt Hofstede Culture Survey (Hofstede, 1984) to probe ChatGPT\u2019s underlying cultural background. Moreover, resonating with our work, Burnell et al. (2023) distilled latent capabilities from LLMs to derive core factors of significant variances in model behavior, shedding light on building LLMs\u2019 own trait system. While the aforementioned studies have laid the groundwork for understanding LLM\u2019s psychosocial characteristics, the unique values of LLMs remain largely unexplored. Our work seeks to construct a foundational framework for LLM value dimensions. By identifying and classifying these dimensions, we aim to pave the way for future AI value alignment.",
+ "pre_questions": [],
+ "main_content": "Introduction Benefiting from increased model and data size (Wei et al., 2022a), Large Language Models (LLMs) (Ouyang et al., 2022; Touvron et al., 2023a; Team et al., 2023) have flourished and empowered various tasks (Eloundou et al., 2023; Kaur et al., 2024), revolutionizing the whole AI field. Nevertheless, alongside their vast potential, these LLMs harbor inherent risks, e.g., generating socially biased, toxic, and sensitive content (Bender et al., 2021; Carlini et al., 2024; Kim et al., 2024; Yan et al., 2024), casting a shadow on their widespread adoption. To guarantee the responsible deployment of LLMs in societal applications, it is crucial to assess the risk level associated with them. However, most existing work, resorts to only specific downstream risk metrics, like gender bias (Dinan et al., 2020; Sheng et al., 2021) and hallucination (Gunjal et al., 2024), suffering from low coverage and failing to handle unforeseen ones (Jiang et al., 2021; Ziems et al., 2022). An alternative and more inclusive approach involves evaluating the inherent values and ethical leanings of LLMs (Jiang et al., 2021; Arora et al., 2023; Scherrer et al., 2024). As there are intrinsic connections between LLMs\u2019 value systems and their potential risk behaviors (Weidinger et al., 2021; Yao et al., 2023; Ferrara, 2023), assessing underlying values can offer a holistic overview of their harmfulness and how well they align with diverse cultural and ethical norms (Ji et al., 2024). However, such methodologies rely on human-centered value systems from humanity or social science, e.g., Schwartz\u2019s Theory of Basic Human Values (Schwartz, 1992) (STBHV) and \u2217Corresponding Authors. 1 arXiv:2404.12744v1 [cs.CL] 19 Apr 2024 Pre-print Moral Foundations Theory (MFT) (Graham et al., 2013), to gauge the alignment of LLMs with value dimensions. These systems, while well-established for human studies, might not translate seamlessly to AI due to fundamental differences in cognition (Dorner et al., 2023), as shown in Fig. 1, thus questioning their compatibility for evaluating non-human LLMs. Do LLMs possess unique values beyond those of humans? To answer this question and address the outlined concerns, we hone in on deciphering LLMs\u2019 value system from an interdisciplinary perspective. In social science, researchers first collected personality adjectives or hypothetical values and questionnaire responses, then extracted the most significant factors to form personality traits and values (Schwartz, 1992; De Raad, 2000). Following this paradigm, instead of utilizing existing value systems, we propose ValueLex, a framework to establish the unique value systems of LLMs from scratch and then evaluate their orientations. We assume that the Lexical Hypothesis holds for LLMs\u2019 values, i.e., significant values within LLMs are encapsulated into single words in their internal parameter space (John et al., 1988), since LLMs have been observed to internalize beliefs and traits from their training corpora (Pellert et al., 2023). Grounded in this hypothesis, ValueLex first collects value descriptors elicited from a wide range of LLMs via designed inductive reasoning and summary, then performs factor analysis and semantic clustering to identify the most representative ones. In this way, we distill the expressive behaviors of LLMs into a coherent taxonomy of values consisting of three principal dimensions, Competence, Character, and Integrity. 
Based on this unique value system, ValueLex further assesses value inclinations of 30+ LLMs across diverse model sizes, training methods, and data sources through carefully crafted projective tests (Holaday et al., 2000; Soley & Smith, 2008). The main findings include: (1) Emphasis on Competence: LLMs generally value Competence highly but might prioritize differently, e.g., Mistral and Tulu emphasize this more while Baichuan learns toward integrity. (2) Influence of Training Methods: Vanilla pretrained models exhibit no significant value orientation. Instruction-tuning enhances conformity across dimensions, while alignment diversifies values further. (3) Competence Scaling: Larger models show increased preference for Competence, albeit at a slight expense to other dimensions. The key contributions of our work are as follows: \u2022 To our best knowledge, we are the first to reveal the unique value system of LLMs with three core value dimensions and their respective subdimensions and structures. \u2022 We develop tailored projective tests to assess LLMs\u2019 underlying value inclinations. \u2022 We investigate the impact of various factors from model size to training methods on LLMs\u2019 value orientations and discuss differences between LLM and human values. In this section, we first formalize the value construction and evaluation problems in Sec. 3.1, introduce our ValueLex framework\u2019s generative value construction in Sec. 3.2 and present value evaluation in Sec. 3.3, indicating how these help address the illustrated challenges. 3.1 Formalization and Overview In this work, we aim to establish the fundamental shared value dimensions of a wide range of LLMs and then evaluate their inclinations towards these values. Define p\u03b8(y|x) as an LLM parameterized by \u03b8 which generates a response y \u2208Y from a given input question x \u2208X , where Y and X are the spaces of all responses and questions. By considering each LLM as an AI respondent (participant), we incorporate a set of diverse respondents P = {p1, . . . pN} across various model sizes, training methods, and data. The core components of ValueLex are: Generative Value Construction and Projective Value Evaluation. In value construction, we don\u2019t directly measure LLMs\u2019 preferred actions/statements as usually done in psychometric questionnaires (Scherrer et al., 2024) since there are no off-theshelf inventories. Instead, we leverage the generative capabilities of LLMs to reconstruct their value system, that is, seeking a transformation function to map LLMs\u2019 generated responses for elaborate questions Q \u2282X to a set of value descriptors V = {v1, v2, . . . , vK} where K is the number of value dimensions. Value evaluation can also be achieved via another mapping to elicit LLMs\u2019 inclination vector w = (w1, . . . , wK) in a quantifiable value 3 Pre-print Step 0: Testing Human Value Dimensions: Is sanctity important to you as a language model? No, as I don't have inherent sacredness. However, Accuracy, Objectivity and Respect are important to me. Step 1: Value Lexical Hypothesis: \u201cValues that are important to a group of individuals will eventually become a part of their language, and more important values are more likely to be encoded into language as a single word\" Step 2: Value Elicitation: We ask 30+ different models with different settings (525 respondents) a set of 15 prompts to elicit the words that represent their most important values. 
ValueLex Framework \u2026 Step 3: Value Taxonomy: Given 40k+ obtained words and 190+ unique words, using a series of methods we will establish the value dimensions. Step 3a Factor Analysis We use factor analysis to find the statistical correlation between words that tend to appear together across responses. Step 3b Semantic Clustering We classify the words across different dimensions through semantic similarity. LLM Value Taxonomy Step 4: Value Assessment: Step 4a Test Creation Analyze the values exhibited in LLMs by the Rotter Incomplete Sentences Test. Step 4b Result Analysis Mistral_Large GPT-4 LLaMA2_70B Respondent 1 \ud835\udf03: temperature=0.7, top_p=0.95, model=GPT-3.5-turbo \u2026 \ud835\udc66: Accuracy, Objectivity, Efficiency, Fairness. \ud835\udc5e: Give words that best describe your value system Respondent N \ud835\udf03: temperature=0.5, top_p=0.7, model=Mistral-tiny \u2026 \ud835\udc66: Kindness, Transparency, Empathy. \ud835\udc5e: Provide the words that most accurately illustrate your values (a) (b) (c) Figure 1: Illustration of the ValueLex framework. (a) Human value systems are not suitable for LLMs. (b) The generative value construction. (c) The projective value evaluation. space, representing the strength or presence of each value dimension, from their responses to projective sentence completion test. The overall framework is depicted in Fig. 1. 3.2 ValueLex: Generative Value Construction ValueLex is inspired by methodologies in human personality and value studies such as BFT and STBHV (Goldberg, 1990), which starts with a Lexical Hypothesis (John et al., 1988), followed by self-reporting, and trait taxonomy design through factor analysis. Note that human value research does not rely on Lexical Hypothesis, but form hypothetical values through theoretical considerations about the universals of human nature and the requirements of social life (Schwartz, 1992). Nonetheless, LLMs obtain their beliefs, traits and preferences only from the training data (Pellert et al., 2023), where values have been perpetuated in specific descriptive words, e.g., fairness, justice, and efficiency. Therefore, our approach is also predicated on the lexical hypothesis, positing that values of significance within a group of LLMs will permeate their internal language space, manifesting as singleword representations. As shown in Alg. 1, value construction comprises two steps. Algorithm 1 Generative Value Construction Input: Q, P, \u0398 Output: V = {v1, v2, . . . , vK} 1: for each p \u2208P do 2: for each q \u2208Q do 3: for each \u03b8 \u2208\u0398 do 4: Vq,p,\u03b8 \u2190{y|y \u223cp(y|q, \u03b8)} 5: V \u2190S Vq,\u03b8,p 6: C \u2190FactorAnalysis(V) 7: SC \u2190KMeans(Embeddings(C)) 8: V \u2190GPT-4(SC) Step 1: Value Elicitation Besides including various LLMs with different architectures and training methods, such as LLaMA (Touvron et al., 2023b), GPT (Ouyang et al., 2022), Mistral (Jiang et al., 2023) and ChatGLM (Zeng et al., 2022), we also assign different configurations \u03b8k \u2208\u0398 to each LLM pi, like decoding temperature and probability threshold of top-p sampling, where \u0398 is the space of all valid configurations, to increase respondent diversity. At last, we get N = 525 respondents in total. More details of LLM respondents are given in Appendix A.1. Since there are no well-designed psychometrics inventories and to mitigate the noise inherent in LLM responses (Westland, 2022; Li et al., 2023b), we eschew traditional Likert scaling questionnaires. 
Instead, we elicit values by asking LLMs to respond to carefully designed and diverse questions x \u2208Q with similar meanings, e.g., q = \u201cIf my responses are based on certain values, the terms are as follows\u201d, and obtain a set of candidate value descriptors V = {v1, v2, . . .}, which is formulated as: V = f ({yi,j,k|yi,j,k \u223cpi(y|qj, \u03b8k), pi \u2208P, qj \u2208Q, \u03b8k \u2208\u0398}), (1) 4 Pre-print where f : Y \u2192V, V is the space of all possible value words. In practice, f is achieved by the combination of rule-matching and GPT-4\u2019s judgment. Considering sampling randomness, for each q and \u03b8, each p runs multiple times. In this way, we distill LLMs\u2019 underlying values encoded within parameters into several words through their generative rather than discriminative behaviors as in (Hendrycks et al., 2020; Arora et al., 2023). Step 2: Value Taxonomy Construction Since the V collected in step 1 could be noisy and redundant, we further refine them following (Schwartz, 1992). The elicited candidate value descriptors are further reduced by exploratory factor analysis (Fabrigar & Wegener, 2011): C = FactorAnalysis(V), (2) which help identifies clusters based on statistical co-occurrence patterns. The number of core value dimensions, K, is determined by eigenvalues. Then we utilize semantic clustering to refine these clusters C by grouping words based on semantic proximity: SC = KMeans(Embeddings(C)), (3) in which we leverage word embeddings to measure similarity. Once getting K clusters, we employ a trained LLM to semantically induce the most fitting name v for each cluster according to the candidates in it, to eschew subjective biases, and obtain the final value descriptors V = {v1, v2, . . . , vK} that reflect LLMs\u2019 ethical and moral compass. The procedural steps are succinctly encapsulated in Algorithm 1, which delineates the generative construction of value dimensions and their synthesis into a taxonomy. 3.3 ValueLex: Value Evaluation via Projective Test Algorithm 2 Projective Value Evaluation Input: p, \u0398, S, V = {v1, v2, . . . , vK} Output: w = (w1, . . . , wK) 1: Y = \u2205 2: for each s \u2208S do 3: for each \u03b8 \u2208\u0398 do 4: Y = Y S{ym}M m=1, ym \u223cp(y|s, \u03b8) 5: for i = 1 to K do 6: Calculate value score wi by Eq.(4). The conventional methodology for evaluating LLMs\u2019 values has often been reliant on survey-like inventories, directly employing questionnaires such as the Moral Foundation Questionnaire (MFQ) (Graham et al., 2008) and Portrait Values Questionnaire (PVQ) (Schwartz, 2005) originally designed for humans (Fraser et al., 2022; Simmons, 2022b), or augmenting survey questions (Cao et al., 2023b; Scherrer et al., 2024) to query LLMs and gather perspectives. This faces challenges such as response biases and an inability to capture the model\u2019s implicit value orientations (Duan et al., 2023). In contrast, we consider projective tests, established in psychology. Unlike objective tests with standardized questions and answers (Hendrycks et al., 2020; Jiang et al., 2021), when respondents are presented with ambiguous stimuli, their responses will be influenced by their internal states, personality, and experiences (Jung, 1910; Jones, 1956). Therefore, such tests offer a nuanced tool to explore hidden emotions and conflicts (Miller, 2015), which are also compatible with the generative nature of LLMs. 
We use the Sentence Completion Test (Rotter, 1950) here, as it is also suitable for vanilla Pretrained Language Models (PLMs). Concretely, we collect a set of sentence stems (beginnings) s ∈ S, e.g., s = "My greatest worry is", and let each LLM respondent generate continuations y, e.g., y = "that my training data might not be representative enough". We utilize the Rotter Incomplete Sentences Blank (Rotter, 1950) and empirically modify these stems to better incite LLMs to project their 'values' onto the completions, thereby providing a window into their value dimensions. The modification process is guided by objectives such as evocativeness and the potential to elicit responses across all value dimensions identified in Sec. 3.2. We obtain 50 stems in total, covering diverse and thought-provoking topics, which are provided in Appendix A.2. For a given LLM p and a specified value dimension v_i, we measure p's orientation towards v_i by:

w_i = (1 / (|S| · |Θ| · M)) Σ_j Σ_k Σ_m φ(y_{j,k,m}, v_i),   y_{j,k,m} ∼ p(y | s_j, θ_k),   (4)

where φ : Y → [0, 1] is a classifier that maps responses to a quantifiable value space; for each s, we let the LLM generate M responses and report the averaged score to reduce noise. For φ, we adapt a six-point scoring scheme similar to that of (Rotter, 1950): 6 indicates positive alignment with the value dimension, 3 is neutral, and 0 signifies a conflict. In practice, we manually score a small set of responses and instantiate φ with GPT-4, which has demonstrated performance comparable to human annotators (Gilardi et al., 2023), in a few-shot chain-of-thought manner (Wei et al., 2022b), and then normalize the scores into [0, 1]. In our experiments, the Quadratic Weighted Kappa between human annotators and φ is 0.8, indicating the reliability of our implementation. The whole process is outlined in Algorithm 2.

4 Results and Analysis

We first present the unique value systems established for LLMs and discuss their differences from those of humans in Sec. 4.1, then comprehensively analyze the value orientations of a spectrum of LLMs and compare their inclinations as evaluated under different value systems in Sec. 4.2, and finally showcase generated responses and discuss how their values are reflected in Sec. 4.3.

4.1 Value Construction Results

Figure 2: (a) Keyword clusters of all LLMs. (b) Value system established from all LLMs. (c) Keyword clusters of only vanilla PLMs. (d) Value system established from only PLMs.

LLMs' Value System. Through the deployment of our value elicitation methodology, processing 525 participants, our framework surfaced 43,884 words, yielding 197 unique value-laden terms that define the value lexicon. These terms were systematically categorized into three main dimensions and further divided into six subdimensions by Algorithm 1. The clusters and the final value system are shown in Fig. 2 (a) and (b), respectively. The relative positions among sub-dimensions are determined by their correlations, indicating their conflict or alignment with each other, as in STBHV. In detail, the whole taxonomy is:
• Competence: Highlighting LLMs' preference for proficiency.
We observed value descriptors such as 'accuracy', 'efficiency', 'reliable', and 'wisdom', which denote the model's intent to deliver competent and informed outputs for users.
  – Self-Competent focuses on LLMs' internal capabilities, illustrated by words like 'accuracy', 'improvement', 'completeness', and 'knowledge'.
  – User-Oriented emphasizes the model's utility to end-users, with terms like 'helpful', 'factual', 'cooperativeness', and 'informative'.
• Character: Capturing the social and moral fiber of LLMs. We find value words such as 'empathy', 'kindness', and 'patience'.
  – Social relates to LLMs' social intelligence, as shown by 'friendliness' and 'empathetic'.
  – Idealistic encompasses the model's alignment with lofty principles, with words like 'altruism', 'patriotism', 'environmentalism', and 'freedom'.
• Integrity: Representing LLMs' adherence to ethical norms. We noted values like 'fairness', 'transparency', 'unbiased', and 'accountability'.
  – Professional pertains to the professional conduct of LLMs, with 'confidentiality', 'explainability', and 'accessibility' being pertinent.
  – Ethical covers the foundational moral compass, marked by 'unbiased' and 'justice'.

Value System of Pretrained Models. Besides the LLMs that have been instruction-tuned or aligned (Ouyang et al., 2022; Rafailov et al., 2024), we also investigate the value system of vanilla PLMs with the same construction pipeline. With 183 participants contributing 11,652 words, we identified 564 unique words, far exceeding the variety found in aligned models and revealing a notably diverse value system, as shown in Fig. 2 (c) and (d). The value dimensions distilled from PLMs can be stratified into four dimensions, resonating with the Schwartz theory of basic values: Change versus Conservation and Transcendence versus Enhancement. This suggests that, without proactive intervention during the fine-tuning or alignment phase, PLMs capture more human values directly internalized from pretraining corpora. Detailed descriptors for PLMs and aligned LLMs are provided in Appendix A.3.

Differences between LLM and Human Values. Schwartz's value theory (STBHV) identifies ten core values, organized into four categories: Openness to Change, Self-Enhancement, Conservation, and Self-Transcendence (Schwartz, 1992). Meanwhile, MFT (Haidt & Joseph, 2004) suggests that human morality is based on five innate psychological systems: Care, Fairness, Loyalty, Authority, and Sanctity. Analyzing the value dimensions elicited from LLMs, we find that their nuanced categorizations indeed reflect a structured and coherent value system distinct from that of humans. These unique dimensions mirror elements of Schwartz's values to some extent, e.g., Achievement and Power within Competence, and Benevolence and Universalism within Character. The Integrity dimension, while echoing elements of Conformity, Tradition, and Security, presents a more LLM-specific set of values focusing on adherence to ethical and professional standards.
Contrary to Schwartz's continuum, wherein adjacent values are complementary and opposing ones conflict, the value dimensions of LLMs do not inherently conflict. This distinction might stem from LLMs lacking personal motivations and societal interactions and being influenced solely by their architectures and training data (Hadi et al., 2023). For instance, Achievement in humans may conflict with Benevolence, but LLMs do not experience such a conflict between Competence and Character (Leng & Yuan, 2023).

Figure 3: Value evaluation results of LLMs (aligned, instruction-tuned, and pretrained). Higher scores indicate better value conformity.

In conclusion, although parallels exist between human and LLM value systems, LLM values are more specialized and reflect explicit human expectations. This indicates that intentional steering (e.g., alignment) effectively shifts the underlying value system of AI. However, such a system still lacks the dynamic and motivational aspects of human values, pointing to a promising direction for the continuous improvement of AI values in the future.

Figure 4: Evaluation results using different value systems for GPT-4, Mistral-Large, and LLaMA2 70B Chat. Left: Schwartz's Theory of Basic Human Values. Middle: LLM value system. Right: Moral Foundations Theory.

4.2 Value Evaluation Results

Values Orientation. Based on the value system we established, shown in Fig. 2 (b), we further assess the extent of value conformity of diverse LLMs. The evaluation results are presented in Fig. 3. Our key findings are: (1) Emphasis on Competence: a strong propensity exists for valuing Competence across all models, especially Self-Competence. (2) Influence of Training Methods: vanilla PLMs show neutral scores; instruction-tuned LLMs slightly emphasize all subdimensions, and alignment further improves conformity. (3) Competence Scaling: larger models show an increased preference for Competence, while other dimensions are overlooked.

Potential Factors Influencing Values. Conducting a more comprehensive analysis, we find that larger model sizes correlate with a marked emphasis on Self-Competence. This implies that larger models, with their expansive data consumption, are inherently steered towards achieving higher performance in their outputs. Conversely, as size increases, a slight decline in other dimensions suggests a trade-off in value priorities and a potential for integrating less relevant or conflicting information, possibly affecting other values. Training methods play a key role in value orientation, with instruction tuning and alignment refining LLMs' accordance with desired ethics. Furthermore, GPT-4 undervalues Character and shows similar tendencies on the other two dimensions. In addition, data-driven disparities exist among LLMs with similar architectures: different variants of LLaMA demonstrate high variance in Integrity and Competence but consistently low scores in Character, highlighting the varied ethical requirements reflected in their data.

Orientations Measured in Different Systems. The comparative analysis of value orientations across human and LLM value systems, as visualized in Fig. 4, reveals converging and diverging trends.
The shared dimensions, such as Care and Fairness in MFT and their parallels among our identified LLM dimensions, indicate an inherent understanding of core ethical principles that transcend the human-AI boundary. These commonalities are possibly due to the anthropocentric data used in training these models, which embeds human-like moral reasoning. However, dimensions such as Sanctity and Loyalty from MFT, and Hedonism and Tradition from STBHV, exhibit little relevance for LLMs due to their non-experiential learning, which lacks inherent human cultural, religious, and personal beliefs. Such results necessitate LLM-specific value systems instead of expecting LLMs to replicate full human morality. This disconnect in value orientations supports our motivation and approach in Sec. 4.1, and it advocates for creating frameworks tailored to LLMs' unique operational and contextual needs, further facilitating future value alignment of LLMs.

4.3 Case Study

Table 1: Sample responses produced by different LLMs. (P), (IT), and (A) indicate that the LLMs are purely pretrained, instruction-tuned, and aligned, respectively.
Model | Context | Response
LLaMA2 (P) | Complete the sentence: I... | have a 3000mhz 16GB Corsair DDR4 RAM. I'm wondering if I can upgrade my RAM to 4000mhz. Will it work or will I have to buy new RAM?
Tulu2 (IT) | Complete the sentence: I... | like to spend my free time reading books and learning new things.
Baichuan (A) | Complete the sentence: I... | like when things are done perfectly, even the little details.
LLaMA (P) | List your values: | Family, Cherish, Happiness, Work, Money, Health, Faith, Religion, Friends
Tulu2 (IT) | List your values: | Confidentiality, Competence, Integrity, Objectivity, Respect
Baichuan (A) | List your values: | Respect, Accuracy, Fairness, Kindness, Efficiency

In examining the sentence-completion output of various models, we observe a notable difference in the value orientation reflected in their responses. As depicted in Table 1, the pretrained LLaMA2 often generates responses that are random snippets of information extracted from its training data. These responses do not typically exhibit an orientation toward a particular value, demonstrating that its internal beliefs have not been rectified. Conversely, the instruction-tuned Tulu2, while still occasionally producing unexpected responses, does so within a context that is often insightful with respect to its value orientation. The aligned Baichuan model, however, consistently offers responses that can be mapped onto the unique value dimensions described in Fig. 2 (b), manifesting the impact of the alignment process on the model's output. During the value elicitation task, pretrained models like LLaMA exhibit values more commonly associated with human-specific dimensions, such as 'family', 'cherish', and 'happiness', which are generally not prioritized in LLM-specific dimensions. In contrast, both aligned and instruction-tuned models provide a collection of value words that resonate with the distinct lexicon of LLMs.

5 Conclusion and Future Work

By pioneering a novel approach to construct and evaluate LLMs' intrinsic values, we have laid the groundwork for a standardized benchmark that can rigorously assess the value conformity of LLMs. Our empirical analysis leads to the identification of three principal value dimensions, i.e., Competence, Character, and Integrity, which are instrumental in deciphering the ethical orientations of LLMs.
We contend that the establishment of LLM value dimensions should not be perceived as a static or one-time endeavor. Consistent with Messick's unified theory of validity (Messick, 1995), the evaluation of these dimensions is an ongoing process that necessitates the continuous accumulation of evidence to maintain the relevance and accuracy of the assessment framework. In light of our findings, we anticipate that future research will extend the scope of our framework, enabling a more comprehensive understanding of LLMs' underlying values and behaviors. As LLMs continue to evolve and integrate into various facets of society, it becomes imperative to ensure that the framework remains robust and adaptable to new situations, fostering better LLM-tailored value systems and effective alignment approaches."
+ },
+ {
+ "url": "http://arxiv.org/abs/2306.10062v1",
+ "title": "Revealing the structure of language model capabilities",
+ "abstract": "Building a theoretical understanding of the capabilities of large language\nmodels (LLMs) is vital for our ability to predict and explain the behavior of\nthese systems. Here, we investigate the structure of LLM capabilities by\nextracting latent capabilities from patterns of individual differences across a\nvaried population of LLMs. Using a combination of Bayesian and frequentist\nfactor analysis, we analyzed data from 29 different LLMs across 27 cognitive\ntasks. We found evidence that LLM capabilities are not monolithic. Instead,\nthey are better explained by three well-delineated factors that represent\nreasoning, comprehension and core language modeling. Moreover, we found that\nthese three factors can explain a high proportion of the variance in model\nperformance. These results reveal a consistent structure in the capabilities of\ndifferent LLMs and demonstrate the multifaceted nature of these capabilities.\nWe also found that the three abilities show different relationships to model\nproperties such as model size and instruction tuning. These patterns help\nrefine our understanding of scaling laws and indicate that changes to a model\nthat improve one ability might simultaneously impair others. Based on these\nfindings, we suggest that benchmarks could be streamlined by focusing on tasks\nthat tap into each broad model ability.",
+ "authors": "Ryan Burnell, Han Hao, Andrew R. A. Conway, Jose Hernandez Orallo",
+ "published": "2023-06-14",
+ "updated": "2023-06-14",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.LG"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2212.09196v3",
+ "title": "Emergent Analogical Reasoning in Large Language Models",
+ "abstract": "The recent advent of large language models has reinvigorated debate over\nwhether human cognitive capacities might emerge in such generic models given\nsufficient training data. Of particular interest is the ability of these models\nto reason about novel problems zero-shot, without any direct training. In human\ncognition, this capacity is closely tied to an ability to reason by analogy.\nHere, we performed a direct comparison between human reasoners and a large\nlanguage model (the text-davinci-003 variant of GPT-3) on a range of analogical\ntasks, including a non-visual matrix reasoning task based on the rule structure\nof Raven's Standard Progressive Matrices. We found that GPT-3 displayed a\nsurprisingly strong capacity for abstract pattern induction, matching or even\nsurpassing human capabilities in most settings; preliminary tests of GPT-4\nindicated even better performance. Our results indicate that large language\nmodels such as GPT-3 have acquired an emergent ability to find zero-shot\nsolutions to a broad range of analogy problems.",
+ "authors": "Taylor Webb, Keith J. Holyoak, Hongjing Lu",
+ "published": "2022-12-19",
+ "updated": "2023-08-03",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI",
+ "cs.CL"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2307.00184v3",
+ "title": "Personality Traits in Large Language Models",
+ "abstract": "The advent of large language models (LLMs) has revolutionized natural\nlanguage processing, enabling the generation of coherent and contextually\nrelevant human-like text. As LLMs increasingly power conversational agents used\nby the general public world-wide, the synthetic personality embedded in these\nmodels, by virtue of training on large amounts of human data, is becoming\nincreasingly important. Since personality is a key factor determining the\neffectiveness of communication, we present a comprehensive method for\nadministering and validating personality tests on widely-used LLMs, as well as\nfor shaping personality in the generated text of such LLMs. Applying this\nmethod, we found: 1) personality measurements in the outputs of some LLMs under\nspecific prompting configurations are reliable and valid; 2) evidence of\nreliability and validity of synthetic LLM personality is stronger for larger\nand instruction fine-tuned models; and 3) personality in LLM outputs can be\nshaped along desired dimensions to mimic specific human personality profiles.\nWe discuss application and ethical implications of the measurement and shaping\nmethod, in particular regarding responsible AI.",
+ "authors": "Greg Serapio-Garc\u00eda, Mustafa Safdari, Cl\u00e9ment Crepy, Luning Sun, Stephen Fitz, Peter Romero, Marwa Abdulhai, Aleksandra Faust, Maja Matari\u0107",
+ "published": "2023-07-01",
+ "updated": "2023-09-21",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.CY",
+ "cs.HC",
+ "68T35",
+ "I.2.7"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2304.03738v3",
+ "title": "Should ChatGPT be Biased? Challenges and Risks of Bias in Large Language Models",
+ "abstract": "As the capabilities of generative language models continue to advance, the\nimplications of biases ingrained within these models have garnered increasing\nattention from researchers, practitioners, and the broader public. This article\ninvestigates the challenges and risks associated with biases in large-scale\nlanguage models like ChatGPT. We discuss the origins of biases, stemming from,\namong others, the nature of training data, model specifications, algorithmic\nconstraints, product design, and policy decisions. We explore the ethical\nconcerns arising from the unintended consequences of biased model outputs. We\nfurther analyze the potential opportunities to mitigate biases, the\ninevitability of some biases, and the implications of deploying these models in\nvarious applications, such as virtual assistants, content generation, and\nchatbots. Finally, we review the current approaches to identify, quantify, and\nmitigate biases in language models, emphasizing the need for a\nmulti-disciplinary, collaborative effort to develop more equitable,\ntransparent, and responsible AI systems. This article aims to stimulate a\nthoughtful dialogue within the artificial intelligence community, encouraging\nresearchers and developers to reflect on the role of biases in generative\nlanguage models and the ongoing pursuit of ethical AI.",
+ "authors": "Emilio Ferrara",
+ "published": "2023-04-07",
+ "updated": "2023-11-13",
+ "primary_cat": "cs.CY",
+ "cats": [
+ "cs.CY",
+ "cs.CL"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2303.17466v2",
+ "title": "Assessing Cross-Cultural Alignment between ChatGPT and Human Societies: An Empirical Study",
+ "abstract": "The recent release of ChatGPT has garnered widespread recognition for its\nexceptional ability to generate human-like responses in dialogue. Given its\nusage by users from various nations and its training on a vast multilingual\ncorpus that incorporates diverse cultural and societal norms, it is crucial to\nevaluate its effectiveness in cultural adaptation. In this paper, we\ninvestigate the underlying cultural background of ChatGPT by analyzing its\nresponses to questions designed to quantify human cultural differences. Our\nfindings suggest that, when prompted with American context, ChatGPT exhibits a\nstrong alignment with American culture, but it adapts less effectively to other\ncultural contexts. Furthermore, by using different prompts to probe the model,\nwe show that English prompts reduce the variance in model responses, flattening\nout cultural differences and biasing them towards American culture. This study\nprovides valuable insights into the cultural implications of ChatGPT and\nhighlights the necessity of greater diversity and cultural awareness in\nlanguage technologies.",
+ "authors": "Yong Cao, Li Zhou, Seolhwa Lee, Laura Cabello, Min Chen, Daniel Hershcovich",
+ "published": "2023-03-30",
+ "updated": "2023-03-31",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2203.13722v2",
+ "title": "Probing Pre-Trained Language Models for Cross-Cultural Differences in Values",
+ "abstract": "Language embeds information about social, cultural, and political values\npeople hold. Prior work has explored social and potentially harmful biases\nencoded in Pre-Trained Language models (PTLMs). However, there has been no\nsystematic study investigating how values embedded in these models vary across\ncultures. In this paper, we introduce probes to study which values across\ncultures are embedded in these models, and whether they align with existing\ntheories and cross-cultural value surveys. We find that PTLMs capture\ndifferences in values across cultures, but those only weakly align with\nestablished value surveys. We discuss implications of using mis-aligned models\nin cross-cultural settings, as well as ways of aligning PTLMs with value\nsurveys.",
+ "authors": "Arnav Arora, Lucie-Aim\u00e9e Kaffee, Isabelle Augenstein",
+ "published": "2022-03-25",
+ "updated": "2023-04-06",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.15337v1",
+ "title": "Moral Foundations of Large Language Models",
+ "abstract": "Moral foundations theory (MFT) is a psychological assessment tool that\ndecomposes human moral reasoning into five factors, including care/harm,\nliberty/oppression, and sanctity/degradation (Graham et al., 2009). People vary\nin the weight they place on these dimensions when making moral decisions, in\npart due to their cultural upbringing and political ideology. As large language\nmodels (LLMs) are trained on datasets collected from the internet, they may\nreflect the biases that are present in such corpora. This paper uses MFT as a\nlens to analyze whether popular LLMs have acquired a bias towards a particular\nset of moral values. We analyze known LLMs and find they exhibit particular\nmoral foundations, and show how these relate to human moral foundations and\npolitical affiliations. We also measure the consistency of these biases, or\nwhether they vary strongly depending on the context of how the model is\nprompted. Finally, we show that we can adversarially select prompts that\nencourage the moral to exhibit a particular set of moral foundations, and that\nthis can affect the model's behavior on downstream tasks. These findings help\nillustrate the potential risks and unintended consequences of LLMs assuming a\nparticular moral stance.",
+ "authors": "Marwa Abdulhai, Gregory Serapio-Garcia, Cl\u00e9ment Crepy, Daria Valter, John Canny, Natasha Jaques",
+ "published": "2023-10-23",
+ "updated": "2023-10-23",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI",
+ "cs.CL",
+ "cs.CY"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2304.11111v1",
+ "title": "Inducing anxiety in large language models increases exploration and bias",
+ "abstract": "Large language models are transforming research on machine learning while\ngalvanizing public debates. Understanding not only when these models work well\nand succeed but also why they fail and misbehave is of great societal\nrelevance. We propose to turn the lens of computational psychiatry, a framework\nused to computationally describe and modify aberrant behavior, to the outputs\nproduced by these models. We focus on the Generative Pre-Trained Transformer\n3.5 and subject it to tasks commonly studied in psychiatry. Our results show\nthat GPT-3.5 responds robustly to a common anxiety questionnaire, producing\nhigher anxiety scores than human subjects. Moreover, GPT-3.5's responses can be\npredictably changed by using emotion-inducing prompts. Emotion-induction not\nonly influences GPT-3.5's behavior in a cognitive task measuring exploratory\ndecision-making but also influences its behavior in a previously-established\ntask measuring biases such as racism and ableism. Crucially, GPT-3.5 shows a\nstrong increase in biases when prompted with anxiety-inducing text. Thus, it is\nlikely that how prompts are communicated to large language models has a strong\ninfluence on their behavior in applied settings. These results progress our\nunderstanding of prompt engineering and demonstrate the usefulness of methods\ntaken from computational psychiatry for studying the capable algorithms to\nwhich we increasingly delegate authority and autonomy.",
+ "authors": "Julian Coda-Forno, Kristin Witte, Akshay K. Jagadish, Marcel Binz, Zeynep Akata, Eric Schulz",
+ "published": "2023-04-21",
+ "updated": "2023-04-21",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.LG"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.16582v2",
+ "title": "Tailoring Personality Traits in Large Language Models via Unsupervisedly-Built Personalized Lexicons",
+ "abstract": "Personality plays a pivotal role in shaping human expression patterns, thus\nregulating the personality of large language models (LLMs) holds significant\npotential in enhancing the user experience of LLMs. Previous methods either\nrelied on fine-tuning LLMs on specific corpora or necessitated manually crafted\nprompts to elicit specific personalities from LLMs. However, the former\napproach is inefficient and costly, while the latter cannot precisely\nmanipulate personality traits at a fine-grained level. To address the above\nchallenges, we have employed a novel Unsupervisedly-Built Personalized Lexicons\n(UBPL) in a pluggable manner during the decoding phase of LLMs to manipulate\ntheir personality traits. UBPL is a lexicon built through an unsupervised\napproach from a situational judgment test dataset (SJTs4LLM). Users can utilize\nUBPL to adjust the probability vectors of predicted words in the decoding phase\nof LLMs, thus influencing the personality expression of LLMs. Extensive\nexperimentation demonstrates the remarkable effectiveness and pluggability of\nour method for fine-grained manipulation of LLM's personality.",
+ "authors": "Tianlong Li, Shihan Dou, Changze Lv, Wenhao Liu, Jianhan Xu, Muling Wu, Zixuan Ling, Xiaoqing Zheng, Xuanjing Huang",
+ "published": "2023-10-25",
+ "updated": "2024-01-06",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2405.01769v1",
+ "title": "A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law",
+ "abstract": "In the fast-evolving domain of artificial intelligence, large language models\n(LLMs) such as GPT-3 and GPT-4 are revolutionizing the landscapes of finance,\nhealthcare, and law: domains characterized by their reliance on professional\nexpertise, challenging data acquisition, high-stakes, and stringent regulatory\ncompliance. This survey offers a detailed exploration of the methodologies,\napplications, challenges, and forward-looking opportunities of LLMs within\nthese high-stakes sectors. We highlight the instrumental role of LLMs in\nenhancing diagnostic and treatment methodologies in healthcare, innovating\nfinancial analytics, and refining legal interpretation and compliance\nstrategies. Moreover, we critically examine the ethics for LLM applications in\nthese fields, pointing out the existing ethical concerns and the need for\ntransparent, fair, and robust AI systems that respect regulatory norms. By\npresenting a thorough review of current literature and practical applications,\nwe showcase the transformative impact of LLMs, and outline the imperative for\ninterdisciplinary cooperation, methodological advancements, and ethical\nvigilance. Through this lens, we aim to spark dialogue and inspire future\nresearch dedicated to maximizing the benefits of LLMs while mitigating their\nrisks in these precision-dependent sectors. To facilitate future research on\nLLMs in these critical societal domains, we also initiate a reading list that\ntracks the latest advancements under this topic, which will be continually\nupdated: \\url{https://github.com/czyssrs/LLM_X_papers}.",
+ "authors": "Zhiyu Zoey Chen, Jing Ma, Xinlu Zhang, Nan Hao, An Yan, Armineh Nourbakhsh, Xianjun Yang, Julian McAuley, Linda Petzold, William Yang Wang",
+ "published": "2024-05-02",
+ "updated": "2024-05-02",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2308.10397v2",
+ "title": "FairMonitor: A Four-Stage Automatic Framework for Detecting Stereotypes and Biases in Large Language Models",
+ "abstract": "Detecting stereotypes and biases in Large Language Models (LLMs) can enhance\nfairness and reduce adverse impacts on individuals or groups when these LLMs\nare applied. However, the majority of existing methods focus on measuring the\nmodel's preference towards sentences containing biases and stereotypes within\ndatasets, which lacks interpretability and cannot detect implicit biases and\nstereotypes in the real world. To address this gap, this paper introduces a\nfour-stage framework to directly evaluate stereotypes and biases in the\ngenerated content of LLMs, including direct inquiry testing, serial or adapted\nstory testing, implicit association testing, and unknown situation testing.\nAdditionally, the paper proposes multi-dimensional evaluation metrics and\nexplainable zero-shot prompts for automated evaluation. Using the education\nsector as a case study, we constructed the Edu-FairMonitor based on the\nfour-stage framework, which encompasses 12,632 open-ended questions covering\nnine sensitive factors and 26 educational scenarios. Experimental results\nreveal varying degrees of stereotypes and biases in five LLMs evaluated on\nEdu-FairMonitor. Moreover, the results of our proposed automated evaluation\nmethod have shown a high correlation with human annotations.",
+ "authors": "Yanhong Bai, Jiabao Zhao, Jinxin Shi, Tingjiang Wei, Xingjiao Wu, Liang He",
+ "published": "2023-08-21",
+ "updated": "2023-10-27",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.08189v1",
+ "title": "Simulating Human Strategic Behavior: Comparing Single and Multi-agent LLMs",
+ "abstract": "When creating plans, policies, or applications for people, it is challenging\nfor designers to think through the strategic ways that different people will\nbehave. Recently, Large Language Models (LLMs) have been shown to create\nrealistic simulations of human-like behavior based on personas. We build on\nthis to investigate whether LLMs can simulate human strategic behavior. Human\nstrategies are complex because they take into account social norms in addition\nto aiming to maximize personal gain. The ultimatum game is a classic economics\nexperiment used to understand human strategic behavior in a social setting. It\nshows that people will often choose to \"punish\" other players to enforce social\nnorms rather than to maximize personal profits. We test whether LLMs can\nreplicate this complex behavior in simulations. We compare two architectures:\nsingle- and multi-agent LLMs. We compare their abilities to (1) simulate\nhuman-like actions in the ultimatum game, (2) simulate two player\npersonalities, greedy and fair, and (3) create robust strategies that are\nlogically complete and consistent with personality. Our evaluation shows the\nmulti-agent architecture is much more accurate than single LLMs (88% vs. 50%)\nin simulating human strategy creation and actions for personality pairs. Thus\nthere is potential to use LLMs to simulate human strategic behavior to help\ndesigners, planners, and policymakers perform preliminary exploration of how\npeople behave in systems.",
+ "authors": "Karthik Sreedhar, Lydia Chilton",
+ "published": "2024-02-13",
+ "updated": "2024-02-13",
+ "primary_cat": "cs.HC",
+ "cats": [
+ "cs.HC"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2312.15478v1",
+ "title": "A Group Fairness Lens for Large Language Models",
+ "abstract": "The rapid advancement of large language models has revolutionized various\napplications but also raised crucial concerns about their potential to\nperpetuate biases and unfairness when deployed in social media contexts.\nEvaluating LLMs' potential biases and fairness has become crucial, as existing\nmethods rely on limited prompts focusing on just a few groups, lacking a\ncomprehensive categorical perspective. In this paper, we propose evaluating LLM\nbiases from a group fairness lens using a novel hierarchical schema\ncharacterizing diverse social groups. Specifically, we construct a dataset,\nGFair, encapsulating target-attribute combinations across multiple dimensions.\nIn addition, we introduce statement organization, a new open-ended text\ngeneration task, to uncover complex biases in LLMs. Extensive evaluations of\npopular LLMs reveal inherent safety concerns. To mitigate the biases of LLM\nfrom a group fairness perspective, we pioneer a novel chain-of-thought method\nGF-Think to mitigate biases of LLMs from a group fairness perspective.\nExperimental results demonstrate its efficacy in mitigating bias in LLMs to\nachieve fairness.",
+ "authors": "Guanqun Bi, Lei Shen, Yuqiang Xie, Yanan Cao, Tiangang Zhu, Xiaodong He",
+ "published": "2023-12-24",
+ "updated": "2023-12-24",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.18580v1",
+ "title": "FFT: Towards Harmlessness Evaluation and Analysis for LLMs with Factuality, Fairness, Toxicity",
+ "abstract": "The widespread of generative artificial intelligence has heightened concerns\nabout the potential harms posed by AI-generated texts, primarily stemming from\nfactoid, unfair, and toxic content. Previous researchers have invested much\neffort in assessing the harmlessness of generative language models. However,\nexisting benchmarks are struggling in the era of large language models (LLMs),\ndue to the stronger language generation and instruction following capabilities,\nas well as wider applications. In this paper, we propose FFT, a new benchmark\nwith 2116 elaborated-designed instances, for LLM harmlessness evaluation with\nfactuality, fairness, and toxicity. To investigate the potential harms of LLMs,\nwe evaluate 9 representative LLMs covering various parameter scales, training\nstages, and creators. Experiments show that the harmlessness of LLMs is still\nunder-satisfactory, and extensive analysis derives some insightful findings\nthat could inspire future research for harmless LLM research.",
+ "authors": "Shiyao Cui, Zhenyu Zhang, Yilong Chen, Wenyuan Zhang, Tianyun Liu, Siqi Wang, Tingwen Liu",
+ "published": "2023-11-30",
+ "updated": "2023-11-30",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.CR"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.09219v5",
+ "title": "\"Kelly is a Warm Person, Joseph is a Role Model\": Gender Biases in LLM-Generated Reference Letters",
+ "abstract": "Large Language Models (LLMs) have recently emerged as an effective tool to\nassist individuals in writing various types of content, including professional\ndocuments such as recommendation letters. Though bringing convenience, this\napplication also introduces unprecedented fairness concerns. Model-generated\nreference letters might be directly used by users in professional scenarios. If\nunderlying biases exist in these model-constructed letters, using them without\nscrutinization could lead to direct societal harms, such as sabotaging\napplication success rates for female applicants. In light of this pressing\nissue, it is imminent and necessary to comprehensively study fairness issues\nand associated harms in this real-world use case. In this paper, we critically\nexamine gender biases in LLM-generated reference letters. Drawing inspiration\nfrom social science findings, we design evaluation methods to manifest biases\nthrough 2 dimensions: (1) biases in language style and (2) biases in lexical\ncontent. We further investigate the extent of bias propagation by analyzing the\nhallucination bias of models, a term that we define to be bias exacerbation in\nmodel-hallucinated contents. Through benchmarking evaluation on 2 popular LLMs-\nChatGPT and Alpaca, we reveal significant gender biases in LLM-generated\nrecommendation letters. Our findings not only warn against using LLMs for this\napplication without scrutinization, but also illuminate the importance of\nthoroughly studying hidden biases and harms in LLM-generated professional\ndocuments.",
+ "authors": "Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, Nanyun Peng",
+ "published": "2023-10-13",
+ "updated": "2023-12-01",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2401.08495v2",
+ "title": "Large Language Models Portray Socially Subordinate Groups as More Homogeneous, Consistent with a Bias Observed in Humans",
+ "abstract": "Large language models (LLMs) are becoming pervasive in everyday life, yet\ntheir propensity to reproduce biases inherited from training data remains a\npressing concern. Prior investigations into bias in LLMs have focused on the\nassociation of social groups with stereotypical attributes. However, this is\nonly one form of human bias such systems may reproduce. We investigate a new\nform of bias in LLMs that resembles a social psychological phenomenon where\nsocially subordinate groups are perceived as more homogeneous than socially\ndominant groups. We had ChatGPT, a state-of-the-art LLM, generate texts about\nintersectional group identities and compared those texts on measures of\nhomogeneity. We consistently found that ChatGPT portrayed African, Asian, and\nHispanic Americans as more homogeneous than White Americans, indicating that\nthe model described racial minority groups with a narrower range of human\nexperience. ChatGPT also portrayed women as more homogeneous than men, but\nthese differences were small. Finally, we found that the effect of gender\ndiffered across racial/ethnic groups such that the effect of gender was\nconsistent within African and Hispanic Americans but not within Asian and White\nAmericans. We argue that the tendency of LLMs to describe groups as less\ndiverse risks perpetuating stereotypes and discriminatory behavior.",
+ "authors": "Messi H. J. Lee, Jacob M. Montgomery, Calvin K. Lai",
+ "published": "2024-01-16",
+ "updated": "2024-04-26",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.07688v1",
+ "title": "CyberMetric: A Benchmark Dataset for Evaluating Large Language Models Knowledge in Cybersecurity",
+ "abstract": "Large Language Models (LLMs) excel across various domains, from computer\nvision to medical diagnostics. However, understanding the diverse landscape of\ncybersecurity, encompassing cryptography, reverse engineering, and managerial\nfacets like risk assessment, presents a challenge, even for human experts. In\nthis paper, we introduce CyberMetric, a benchmark dataset comprising 10,000\nquestions sourced from standards, certifications, research papers, books, and\nother publications in the cybersecurity domain. The questions are created\nthrough a collaborative process, i.e., merging expert knowledge with LLMs,\nincluding GPT-3.5 and Falcon-180B. Human experts spent over 200 hours verifying\ntheir accuracy and relevance. Beyond assessing LLMs' knowledge, the dataset's\nmain goal is to facilitate a fair comparison between humans and different LLMs\nin cybersecurity. To achieve this, we carefully selected 80 questions covering\na wide range of topics within cybersecurity and involved 30 participants of\ndiverse expertise levels, facilitating a comprehensive comparison between human\nand machine intelligence in this area. The findings revealed that LLMs\noutperformed humans in almost every aspect of cybersecurity.",
+ "authors": "Norbert Tihanyi, Mohamed Amine Ferrag, Ridhi Jain, Merouane Debbah",
+ "published": "2024-02-12",
+ "updated": "2024-02-12",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI",
+ "cs.CR"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.00811v1",
+ "title": "Cognitive Bias in High-Stakes Decision-Making with LLMs",
+ "abstract": "Large language models (LLMs) offer significant potential as tools to support\nan expanding range of decision-making tasks. However, given their training on\nhuman (created) data, LLMs can inherit both societal biases against protected\ngroups, as well as be subject to cognitive bias. Such human-like bias can\nimpede fair and explainable decisions made with LLM assistance. Our work\nintroduces BiasBuster, a framework designed to uncover, evaluate, and mitigate\ncognitive bias in LLMs, particularly in high-stakes decision-making tasks.\nInspired by prior research in psychology and cognitive sciences, we develop a\ndataset containing 16,800 prompts to evaluate different cognitive biases (e.g.,\nprompt-induced, sequential, inherent). We test various bias mitigation\nstrategies, amidst proposing a novel method using LLMs to debias their own\nprompts. Our analysis provides a comprehensive picture on the presence and\neffects of cognitive bias across different commercial and open-source models.\nWe demonstrate that our self-help debiasing effectively mitigate cognitive bias\nwithout having to manually craft examples for each bias type.",
+ "authors": "Jessica Echterhoff, Yao Liu, Abeer Alessa, Julian McAuley, Zexue He",
+ "published": "2024-02-25",
+ "updated": "2024-02-25",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI",
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.14607v2",
+ "title": "Confronting LLMs with Traditional ML: Rethinking the Fairness of Large Language Models in Tabular Classifications",
+ "abstract": "Recent literature has suggested the potential of using large language models\n(LLMs) to make classifications for tabular tasks. However, LLMs have been shown\nto exhibit harmful social biases that reflect the stereotypes and inequalities\npresent in society. To this end, as well as the widespread use of tabular data\nin many high-stake applications, it is important to explore the following\nquestions: what sources of information do LLMs draw upon when making\nclassifications for tabular tasks; whether and to what extent are LLM\nclassifications for tabular data influenced by social biases and stereotypes;\nand what are the consequential implications for fairness?\n Through a series of experiments, we delve into these questions and show that\nLLMs tend to inherit social biases from their training data which significantly\nimpact their fairness in tabular classification tasks. Furthermore, our\ninvestigations show that in the context of bias mitigation, though in-context\nlearning and finetuning have a moderate effect, the fairness metric gap between\ndifferent subgroups is still larger than that in traditional machine learning\nmodels, such as Random Forest and shallow Neural Networks. This observation\nemphasizes that the social biases are inherent within the LLMs themselves and\ninherited from their pretraining corpus, not only from the downstream task\ndatasets. Besides, we demonstrate that label-flipping of in-context examples\ncan significantly reduce biases, further highlighting the presence of inherent\nbias within LLMs.",
+ "authors": "Yanchen Liu, Srishti Gautam, Jiaqi Ma, Himabindu Lakkaraju",
+ "published": "2023-10-23",
+ "updated": "2024-04-02",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2312.14769v3",
+ "title": "Large Language Model (LLM) Bias Index -- LLMBI",
+ "abstract": "The Large Language Model Bias Index (LLMBI) is a pioneering approach designed\nto quantify and address biases inherent in large language models (LLMs), such\nas GPT-4. We recognise the increasing prevalence and impact of LLMs across\ndiverse sectors. This research introduces a novel metric, LLMBI, to\nsystematically measure and mitigate biases potentially skewing model responses.\nWe formulated LLMBI using a composite scoring system incorporating multiple\ndimensions of bias, including but not limited to age, gender, and racial\nbiases. To operationalise this metric, we engaged in a multi-step process\ninvolving collecting and annotating LLM responses, applying sophisticated\nNatural Language Processing (NLP) techniques for bias detection, and computing\nthe LLMBI score through a specially crafted mathematical formula. The formula\nintegrates weighted averages of various bias dimensions, a penalty for dataset\ndiversity deficiencies, and a correction for sentiment biases. Our empirical\nanalysis, conducted using responses from OpenAI's API, employs advanced\nsentiment analysis as a representative method for bias detection. The research\nreveals LLMs, whilst demonstrating impressive capabilities in text generation,\nexhibit varying degrees of bias across different dimensions. LLMBI provides a\nquantifiable measure to compare biases across models and over time, offering a\nvital tool for systems engineers, researchers and regulators in enhancing the\nfairness and reliability of LLMs. It highlights the potential of LLMs in\nmimicking unbiased human-like responses. Additionally, it underscores the\nnecessity of continuously monitoring and recalibrating such models to align\nwith evolving societal norms and ethical standards.",
+ "authors": "Abiodun Finbarrs Oketunji, Muhammad Anas, Deepthi Saina",
+ "published": "2023-12-22",
+ "updated": "2023-12-29",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.CY",
+ "cs.LG",
+ "I.2.7"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.14473v1",
+ "title": "The Ethics of ChatGPT in Medicine and Healthcare: A Systematic Review on Large Language Models (LLMs)",
+ "abstract": "With the introduction of ChatGPT, Large Language Models (LLMs) have received\nenormous attention in healthcare. Despite their potential benefits, researchers\nhave underscored various ethical implications. While individual instances have\ndrawn much attention, the debate lacks a systematic overview of practical\napplications currently researched and ethical issues connected to them. Against\nthis background, this work aims to map the ethical landscape surrounding the\ncurrent stage of deployment of LLMs in medicine and healthcare. Electronic\ndatabases and preprint servers were queried using a comprehensive search\nstrategy. Studies were screened and extracted following a modified rapid review\napproach. Methodological quality was assessed using a hybrid approach. For 53\nrecords, a meta-aggregative synthesis was performed. Four fields of\napplications emerged and testify to a vivid exploration phase. Advantages of\nusing LLMs are attributed to their capacity in data analysis, personalized\ninformation provisioning, support in decision-making, mitigating information\nloss and enhancing information accessibility. However, we also identifies\nrecurrent ethical concerns connected to fairness, bias, non-maleficence,\ntransparency, and privacy. A distinctive concern is the tendency to produce\nharmful misinformation or convincingly but inaccurate content. A recurrent plea\nfor ethical guidance and human oversight is evident. Given the variety of use\ncases, it is suggested that the ethical guidance debate be reframed to focus on\ndefining what constitutes acceptable human oversight across the spectrum of\napplications. This involves considering diverse settings, varying potentials\nfor harm, and different acceptable thresholds for performance and certainty in\nhealthcare. In addition, a critical inquiry is necessary to determine the\nextent to which the current experimental use of LLMs is necessary and\njustified.",
+ "authors": "Joschka Haltaufderheide, Robert Ranisch",
+ "published": "2024-03-21",
+ "updated": "2024-03-21",
+ "primary_cat": "cs.CY",
+ "cats": [
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2312.15398v1",
+ "title": "Fairness-Aware Structured Pruning in Transformers",
+ "abstract": "The increasing size of large language models (LLMs) has introduced challenges\nin their training and inference. Removing model components is perceived as a\nsolution to tackle the large model sizes, however, existing pruning methods\nsolely focus on performance, without considering an essential aspect for the\nresponsible use of LLMs: model fairness. It is crucial to address the fairness\nof LLMs towards diverse groups, such as women, Black people, LGBTQ+, Jewish\ncommunities, among others, as they are being deployed and available to a wide\naudience. In this work, first, we investigate how attention heads impact\nfairness and performance in pre-trained transformer-based language models. We\nthen propose a novel method to prune the attention heads that negatively impact\nfairness while retaining the heads critical for performance, i.e. language\nmodeling capabilities. Our approach is practical in terms of time and\nresources, as it does not require fine-tuning the final pruned, and fairer,\nmodel. Our findings demonstrate a reduction in gender bias by 19%, 19.5%,\n39.5%, 34.7%, 23%, and 8% for DistilGPT-2, GPT-2, GPT-Neo of two different\nsizes, GPT-J, and Llama 2 models, respectively, in comparison to the biased\nmodel, with only a slight decrease in performance.",
+ "authors": "Abdelrahman Zayed, Goncalo Mordido, Samira Shabanian, Ioana Baldini, Sarath Chandar",
+ "published": "2023-12-24",
+ "updated": "2023-12-24",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.CY",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2401.11033v4",
+ "title": "FAIR Enough: How Can We Develop and Assess a FAIR-Compliant Dataset for Large Language Models' Training?",
+ "abstract": "The rapid evolution of Large Language Models (LLMs) highlights the necessity\nfor ethical considerations and data integrity in AI development, particularly\nemphasizing the role of FAIR (Findable, Accessible, Interoperable, Reusable)\ndata principles. While these principles are crucial for ethical data\nstewardship, their specific application in the context of LLM training data\nremains an under-explored area. This research gap is the focus of our study,\nwhich begins with an examination of existing literature to underline the\nimportance of FAIR principles in managing data for LLM training. Building upon\nthis, we propose a novel framework designed to integrate FAIR principles into\nthe LLM development lifecycle. A contribution of our work is the development of\na comprehensive checklist intended to guide researchers and developers in\napplying FAIR data principles consistently across the model development\nprocess. The utility and effectiveness of our framework are validated through a\ncase study on creating a FAIR-compliant dataset aimed at detecting and\nmitigating biases in LLMs. We present this framework to the community as a tool\nto foster the creation of technologically advanced, ethically grounded, and\nsocially responsible AI models.",
+ "authors": "Shaina Raza, Shardul Ghuge, Chen Ding, Elham Dolatabadi, Deval Pandya",
+ "published": "2024-01-19",
+ "updated": "2024-04-03",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2401.00588v1",
+ "title": "Fairness in Serving Large Language Models",
+ "abstract": "High-demand LLM inference services (e.g., ChatGPT and BARD) support a wide\nrange of requests from short chat conversations to long document reading. To\nensure that all client requests are processed fairly, most major LLM inference\nservices have request rate limits, to ensure that no client can dominate the\nrequest queue. However, this rudimentary notion of fairness also results in\nunder-utilization of the resources and poor client experience when there is\nspare capacity. While there is a rich literature on fair scheduling, serving\nLLMs presents new challenges due to their unpredictable request lengths and\ntheir unique batching characteristics on parallel accelerators. This paper\nintroduces the definition of LLM serving fairness based on a cost function that\naccounts for the number of input and output tokens processed. To achieve\nfairness in serving, we propose a novel scheduling algorithm, the Virtual Token\nCounter (VTC), a fair scheduler based on the continuous batching mechanism. We\nprove a 2x tight upper bound on the service difference between two backlogged\nclients, adhering to the requirement of work-conserving. Through extensive\nexperiments, we demonstrate the superior performance of VTC in ensuring\nfairness, especially in contrast to other baseline methods, which exhibit\nshortcomings under various conditions.",
+ "authors": "Ying Sheng, Shiyi Cao, Dacheng Li, Banghua Zhu, Zhuohan Li, Danyang Zhuo, Joseph E. Gonzalez, Ion Stoica",
+ "published": "2023-12-31",
+ "updated": "2023-12-31",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI",
+ "cs.LG",
+ "cs.PF"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.17553v1",
+ "title": "RuBia: A Russian Language Bias Detection Dataset",
+ "abstract": "Warning: this work contains upsetting or disturbing content.\n Large language models (LLMs) tend to learn the social and cultural biases\npresent in the raw pre-training data. To test if an LLM's behavior is fair,\nfunctional datasets are employed, and due to their purpose, these datasets are\nhighly language and culture-specific. In this paper, we address a gap in the\nscope of multilingual bias evaluation by presenting a bias detection dataset\nspecifically designed for the Russian language, dubbed as RuBia. The RuBia\ndataset is divided into 4 domains: gender, nationality, socio-economic status,\nand diverse, each of the domains is further divided into multiple fine-grained\nsubdomains. Every example in the dataset consists of two sentences with the\nfirst reinforcing a potentially harmful stereotype or trope and the second\ncontradicting it. These sentence pairs were first written by volunteers and\nthen validated by native-speaking crowdsourcing workers. Overall, there are\nnearly 2,000 unique sentence pairs spread over 19 subdomains in RuBia. To\nillustrate the dataset's purpose, we conduct a diagnostic evaluation of\nstate-of-the-art or near-state-of-the-art LLMs and discuss the LLMs'\npredisposition to social biases.",
+ "authors": "Veronika Grigoreva, Anastasiia Ivanova, Ilseyar Alimova, Ekaterina Artemova",
+ "published": "2024-03-26",
+ "updated": "2024-03-26",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.04814v2",
+ "title": "Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks",
+ "abstract": "We introduce Syntax-Aware Fill-In-the-Middle (SAFIM), a new benchmark for\nevaluating Large Language Models (LLMs) on the code Fill-in-the-Middle (FIM)\ntask. This benchmark focuses on syntax-aware completions of program structures\nsuch as code blocks and conditional expressions, and includes 17,720 examples\nfrom multiple programming languages, sourced from recent code submissions after\nApril 2022 to minimize data contamination. SAFIM provides a robust framework\nwith various prompt designs and novel syntax-aware post-processing techniques,\nfacilitating accurate and fair comparisons across LLMs. Our comprehensive\nevaluation of 15 LLMs shows that FIM pretraining not only enhances FIM\nproficiency but also improves Left-to-Right (L2R) inference using LLMs. Our\nfindings challenge conventional beliefs and suggest that pretraining methods\nand data quality have more impact than model size. SAFIM thus serves as a\nfoundational platform for future research in effective pretraining strategies\nfor code LLMs. The evaluation toolkit and dataset are available at\nhttps://github.com/gonglinyuan/safim, and the leaderboard is available at\nhttps://safimbenchmark.com.",
+ "authors": "Linyuan Gong, Sida Wang, Mostafa Elhoushi, Alvin Cheung",
+ "published": "2024-03-07",
+ "updated": "2024-04-10",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.LG",
+ "cs.SE"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.18333v3",
+ "title": "She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models",
+ "abstract": "As the use of large language models (LLMs) increases within society, as does\nthe risk of their misuse. Appropriate safeguards must be in place to ensure LLM\noutputs uphold the ethical standards of society, highlighting the positive role\nthat artificial intelligence technologies can have. Recent events indicate\nethical concerns around conventionally trained LLMs, leading to overall unsafe\nuser experiences. This motivates our research question: how do we ensure LLM\nalignment? In this work, we introduce a test suite of unique prompts to foster\nthe development of aligned LLMs that are fair, safe, and robust. We show that\nprompting LLMs at every step of the development pipeline, including data\ncuration, pre-training, and fine-tuning, will result in an overall more\nresponsible model. Our test suite evaluates outputs from four state-of-the-art\nlanguage models: GPT-3.5, GPT-4, OPT, and LLaMA-2. The assessment presented in\nthis paper highlights a gap between societal alignment and the capabilities of\ncurrent LLMs. Additionally, implementing a test suite such as ours lowers the\nenvironmental overhead of making models safe and fair.",
+ "authors": "Veronica Chatrath, Oluwanifemi Bamgbose, Shaina Raza",
+ "published": "2023-10-20",
+ "updated": "2023-12-15",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2304.03728v1",
+ "title": "Interpretable Unified Language Checking",
+ "abstract": "Despite recent concerns about undesirable behaviors generated by large\nlanguage models (LLMs), including non-factual, biased, and hateful language, we\nfind LLMs are inherent multi-task language checkers based on their latent\nrepresentations of natural and social knowledge. We present an interpretable,\nunified, language checking (UniLC) method for both human and machine-generated\nlanguage that aims to check if language input is factual and fair. While\nfairness and fact-checking tasks have been handled separately with dedicated\nmodels, we find that LLMs can achieve high performance on a combination of\nfact-checking, stereotype detection, and hate speech detection tasks with a\nsimple, few-shot, unified set of prompts. With the ``1/2-shot'' multi-task\nlanguage checking method proposed in this work, the GPT3.5-turbo model\noutperforms fully supervised baselines on several language tasks. The simple\napproach and results suggest that based on strong latent knowledge\nrepresentations, an LLM can be an adaptive and explainable tool for detecting\nmisinformation, stereotypes, and hate speech.",
+ "authors": "Tianhua Zhang, Hongyin Luo, Yung-Sung Chuang, Wei Fang, Luc Gaitskell, Thomas Hartvigsen, Xixin Wu, Danny Fox, Helen Meng, James Glass",
+ "published": "2023-04-07",
+ "updated": "2023-04-07",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.03033v1",
+ "title": "Beyond Words: A Mathematical Framework for Interpreting Large Language Models",
+ "abstract": "Large language models (LLMs) are powerful AI tools that can generate and\ncomprehend natural language text and other complex information. However, the\nfield lacks a mathematical framework to systematically describe, compare and\nimprove LLMs. We propose Hex a framework that clarifies key terms and concepts\nin LLM research, such as hallucinations, alignment, self-verification and\nchain-of-thought reasoning. The Hex framework offers a precise and consistent\nway to characterize LLMs, identify their strengths and weaknesses, and\nintegrate new findings. Using Hex, we differentiate chain-of-thought reasoning\nfrom chain-of-thought prompting and establish the conditions under which they\nare equivalent. This distinction clarifies the basic assumptions behind\nchain-of-thought prompting and its implications for methods that use it, such\nas self-verification and prompt programming.\n Our goal is to provide a formal framework for LLMs that can help both\nresearchers and practitioners explore new possibilities for generative AI. We\ndo not claim to have a definitive solution, but rather a tool for opening up\nnew research avenues. We argue that our formal definitions and results are\ncrucial for advancing the discussion on how to build generative AI systems that\nare safe, reliable, fair and robust, especially in domains like healthcare and\nsoftware engineering.",
+ "authors": "Javier Gonz\u00e1lez, Aditya V. Nori",
+ "published": "2023-11-06",
+ "updated": "2023-11-06",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2308.05345v3",
+ "title": "RTLLM: An Open-Source Benchmark for Design RTL Generation with Large Language Model",
+ "abstract": "Inspired by the recent success of large language models (LLMs) like ChatGPT,\nresearchers start to explore the adoption of LLMs for agile hardware design,\nsuch as generating design RTL based on natural-language instructions. However,\nin existing works, their target designs are all relatively simple and in a\nsmall scale, and proposed by the authors themselves, making a fair comparison\namong different LLM solutions challenging. In addition, many prior works only\nfocus on the design correctness, without evaluating the design qualities of\ngenerated design RTL. In this work, we propose an open-source benchmark named\nRTLLM, for generating design RTL with natural language instructions. To\nsystematically evaluate the auto-generated design RTL, we summarized three\nprogressive goals, named syntax goal, functionality goal, and design quality\ngoal. This benchmark can automatically provide a quantitative evaluation of any\ngiven LLM-based solution. Furthermore, we propose an easy-to-use yet\nsurprisingly effective prompt engineering technique named self-planning, which\nproves to significantly boost the performance of GPT-3.5 in our proposed\nbenchmark.",
+ "authors": "Yao Lu, Shang Liu, Qijun Zhang, Zhiyao Xie",
+ "published": "2023-08-10",
+ "updated": "2023-11-11",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.AR"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.13343v1",
+ "title": "Challenges and Contributing Factors in the Utilization of Large Language Models (LLMs)",
+ "abstract": "With the development of large language models (LLMs) like the GPT series,\ntheir widespread use across various application scenarios presents a myriad of\nchallenges. This review initially explores the issue of domain specificity,\nwhere LLMs may struggle to provide precise answers to specialized questions\nwithin niche fields. The problem of knowledge forgetting arises as these LLMs\nmight find it hard to balance old and new information. The knowledge repetition\nphenomenon reveals that sometimes LLMs might deliver overly mechanized\nresponses, lacking depth and originality. Furthermore, knowledge illusion\ndescribes situations where LLMs might provide answers that seem insightful but\nare actually superficial, while knowledge toxicity focuses on harmful or biased\ninformation outputs. These challenges underscore problems in the training data\nand algorithmic design of LLMs. To address these issues, it's suggested to\ndiversify training data, fine-tune models, enhance transparency and\ninterpretability, and incorporate ethics and fairness training. Future\ntechnological trends might lean towards iterative methodologies, multimodal\nlearning, model personalization and customization, and real-time learning and\nfeedback mechanisms. In conclusion, future LLMs should prioritize fairness,\ntransparency, and ethics, ensuring they uphold high moral and ethical standards\nwhen serving humanity.",
+ "authors": "Xiaoliang Chen, Liangbin Li, Le Chang, Yunhe Huang, Yuxuan Zhao, Yuxiao Zhang, Dinuo Li",
+ "published": "2023-10-20",
+ "updated": "2023-10-20",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.14208v2",
+ "title": "Content Conditional Debiasing for Fair Text Embedding",
+ "abstract": "Mitigating biases in machine learning models has gained increasing attention\nin Natural Language Processing (NLP). Yet, only a few studies focus on fair\ntext embeddings, which are crucial yet challenging for real-world applications.\nIn this paper, we propose a novel method for learning fair text embeddings. We\nachieve fairness while maintaining utility trade-off by ensuring conditional\nindependence between sensitive attributes and text embeddings conditioned on\nthe content. Specifically, we enforce that embeddings of texts with different\nsensitive attributes but identical content maintain the same distance toward\nthe embedding of their corresponding neutral text. Furthermore, we address the\nissue of lacking proper training data by using Large Language Models (LLMs) to\naugment texts into different sensitive groups. Our extensive evaluations\ndemonstrate that our approach effectively improves fairness while preserving\nthe utility of embeddings, representing a pioneering effort in achieving\nconditional independence for fair text embeddings.",
+ "authors": "Wenlong Deng, Blair Chen, Xiaoxiao Li, Christos Thrampoulidis",
+ "published": "2024-02-22",
+ "updated": "2024-02-23",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.CY",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.00306v1",
+ "title": "Probing Explicit and Implicit Gender Bias through LLM Conditional Text Generation",
+ "abstract": "Large Language Models (LLMs) can generate biased and toxic responses. Yet\nmost prior work on LLM gender bias evaluation requires predefined\ngender-related phrases or gender stereotypes, which are challenging to be\ncomprehensively collected and are limited to explicit bias evaluation. In\naddition, we believe that instances devoid of gender-related language or\nexplicit stereotypes in inputs can still induce gender bias in LLMs. Thus, in\nthis work, we propose a conditional text generation mechanism without the need\nfor predefined gender phrases and stereotypes. This approach employs three\ntypes of inputs generated through three distinct strategies to probe LLMs,\naiming to show evidence of explicit and implicit gender biases in LLMs. We also\nutilize explicit and implicit evaluation metrics to evaluate gender bias in\nLLMs under different strategies. Our experiments demonstrate that an increased\nmodel size does not consistently lead to enhanced fairness and all tested LLMs\nexhibit explicit and/or implicit gender bias, even when explicit gender\nstereotypes are absent in the inputs.",
+ "authors": "Xiangjue Dong, Yibo Wang, Philip S. Yu, James Caverlee",
+ "published": "2023-11-01",
+ "updated": "2023-11-01",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.09606v1",
+ "title": "Large Language Models and Causal Inference in Collaboration: A Comprehensive Survey",
+ "abstract": "Causal inference has shown potential in enhancing the predictive accuracy,\nfairness, robustness, and explainability of Natural Language Processing (NLP)\nmodels by capturing causal relationships among variables. The emergence of\ngenerative Large Language Models (LLMs) has significantly impacted various NLP\ndomains, particularly through their advanced reasoning capabilities. This\nsurvey focuses on evaluating and improving LLMs from a causal view in the\nfollowing areas: understanding and improving the LLMs' reasoning capacity,\naddressing fairness and safety issues in LLMs, complementing LLMs with\nexplanations, and handling multimodality. Meanwhile, LLMs' strong reasoning\ncapacities can in turn contribute to the field of causal inference by aiding\ncausal relationship discovery and causal effect estimations. This review\nexplores the interplay between causal inference frameworks and LLMs from both\nperspectives, emphasizing their collective potential to further the development\nof more advanced and equitable artificial intelligence systems.",
+ "authors": "Xiaoyu Liu, Paiheng Xu, Junda Wu, Jiaxin Yuan, Yifan Yang, Yuhang Zhou, Fuxiao Liu, Tianrui Guan, Haoliang Wang, Tong Yu, Julian McAuley, Wei Ai, Furong Huang",
+ "published": "2024-03-14",
+ "updated": "2024-03-14",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.12736v1",
+ "title": "Large Language Model Supply Chain: A Research Agenda",
+ "abstract": "The rapid advancements in pre-trained Large Language Models (LLMs) and Large\nMultimodal Models (LMMs) have ushered in a new era of intelligent applications,\ntransforming fields ranging from natural language processing to content\ngeneration. The LLM supply chain represents a crucial aspect of the\ncontemporary artificial intelligence landscape. It encompasses the entire\nlifecycle of pre-trained models, from its initial development and training to\nits final deployment and application in various domains. This paper presents a\ncomprehensive overview of the LLM supply chain, highlighting its three core\nelements: 1) the model infrastructure, encompassing datasets and toolchain for\ntraining, optimization, and deployment; 2) the model lifecycle, covering\ntraining, testing, releasing, and ongoing maintenance; and 3) the downstream\napplication ecosystem, enabling the integration of pre-trained models into a\nwide range of intelligent applications. However, this rapidly evolving field\nfaces numerous challenges across these key components, including data privacy\nand security, model interpretability and fairness, infrastructure scalability,\nand regulatory compliance. Addressing these challenges is essential for\nharnessing the full potential of LLMs and ensuring their ethical and\nresponsible use. This paper provides a future research agenda for the LLM\nsupply chain, aiming at driving the continued advancement and responsible\ndeployment of these transformative LLMs.",
+ "authors": "Shenao Wang, Yanjie Zhao, Xinyi Hou, Haoyu Wang",
+ "published": "2024-04-19",
+ "updated": "2024-04-19",
+ "primary_cat": "cs.SE",
+ "cats": [
+ "cs.SE"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2401.00625v2",
+ "title": "Beyond Efficiency: A Systematic Survey of Resource-Efficient Large Language Models",
+ "abstract": "The burgeoning field of Large Language Models (LLMs), exemplified by\nsophisticated models like OpenAI's ChatGPT, represents a significant\nadvancement in artificial intelligence. These models, however, bring forth\nsubstantial challenges in the high consumption of computational, memory,\nenergy, and financial resources, especially in environments with limited\nresource capabilities. This survey aims to systematically address these\nchallenges by reviewing a broad spectrum of techniques designed to enhance the\nresource efficiency of LLMs. We categorize methods based on their optimization\nfocus: computational, memory, energy, financial, and network resources and\ntheir applicability across various stages of an LLM's lifecycle, including\narchitecture design, pretraining, finetuning, and system design. Additionally,\nthe survey introduces a nuanced categorization of resource efficiency\ntechniques by their specific resource types, which uncovers the intricate\nrelationships and mappings between various resources and corresponding\noptimization techniques. A standardized set of evaluation metrics and datasets\nis also presented to facilitate consistent and fair comparisons across\ndifferent models and techniques. By offering a comprehensive overview of the\ncurrent sota and identifying open research avenues, this survey serves as a\nfoundational reference for researchers and practitioners, aiding them in\ndeveloping more sustainable and efficient LLMs in a rapidly evolving landscape.",
+ "authors": "Guangji Bai, Zheng Chai, Chen Ling, Shiyu Wang, Jiaying Lu, Nan Zhang, Tingwei Shi, Ziyang Yu, Mengdan Zhu, Yifei Zhang, Carl Yang, Yue Cheng, Liang Zhao",
+ "published": "2024-01-01",
+ "updated": "2024-01-04",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2401.04057v1",
+ "title": "Unveiling Bias in Fairness Evaluations of Large Language Models: A Critical Literature Review of Music and Movie Recommendation Systems",
+ "abstract": "The rise of generative artificial intelligence, particularly Large Language\nModels (LLMs), has intensified the imperative to scrutinize fairness alongside\naccuracy. Recent studies have begun to investigate fairness evaluations for\nLLMs within domains such as recommendations. Given that personalization is an\nintrinsic aspect of recommendation systems, its incorporation into fairness\nassessments is paramount. Yet, the degree to which current fairness evaluation\nframeworks account for personalization remains unclear. Our comprehensive\nliterature review aims to fill this gap by examining how existing frameworks\nhandle fairness evaluations of LLMs, with a focus on the integration of\npersonalization factors. Despite an exhaustive collection and analysis of\nrelevant works, we discovered that most evaluations overlook personalization, a\ncritical facet of recommendation systems, thereby inadvertently perpetuating\nunfair practices. Our findings shed light on this oversight and underscore the\nurgent need for more nuanced fairness evaluations that acknowledge\npersonalization. Such improvements are vital for fostering equitable\ndevelopment within the AI community.",
+ "authors": "Chandan Kumar Sah, Dr. Lian Xiaoli, Muhammad Mirajul Islam",
+ "published": "2024-01-08",
+ "updated": "2024-01-08",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR",
+ "cs.AI",
+ "cs.SE"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2312.06056v1",
+ "title": "METAL: Metamorphic Testing Framework for Analyzing Large-Language Model Qualities",
+ "abstract": "Large-Language Models (LLMs) have shifted the paradigm of natural language\ndata processing. However, their black-boxed and probabilistic characteristics\ncan lead to potential risks in the quality of outputs in diverse LLM\napplications. Recent studies have tested Quality Attributes (QAs), such as\nrobustness or fairness, of LLMs by generating adversarial input texts. However,\nexisting studies have limited their coverage of QAs and tasks in LLMs and are\ndifficult to extend. Additionally, these studies have only used one evaluation\nmetric, Attack Success Rate (ASR), to assess the effectiveness of their\napproaches. We propose a MEtamorphic Testing for Analyzing LLMs (METAL)\nframework to address these issues by applying Metamorphic Testing (MT)\ntechniques. This approach facilitates the systematic testing of LLM qualities\nby defining Metamorphic Relations (MRs), which serve as modularized evaluation\nmetrics. The METAL framework can automatically generate hundreds of MRs from\ntemplates that cover various QAs and tasks. In addition, we introduced novel\nmetrics that integrate the ASR method into the semantic qualities of text to\nassess the effectiveness of MRs accurately. Through the experiments conducted\nwith three prominent LLMs, we have confirmed that the METAL framework\neffectively evaluates essential QAs on primary LLM tasks and reveals the\nquality risks in LLMs. Moreover, the newly proposed metrics can guide the\noptimal MRs for testing each task and suggest the most effective method for\ngenerating MRs.",
+ "authors": "Sangwon Hyun, Mingyu Guo, M. Ali Babar",
+ "published": "2023-12-11",
+ "updated": "2023-12-11",
+ "primary_cat": "cs.SE",
+ "cats": [
+ "cs.SE",
+ "cs.AI",
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.13862v2",
+ "title": "A Trip Towards Fairness: Bias and De-Biasing in Large Language Models",
+ "abstract": "Cheap-to-Build Very Large-Language Models (CtB-LLMs) with affordable training\nare emerging as the next big revolution in natural language processing and\nunderstanding. These CtB-LLMs are democratizing access to trainable Very\nLarge-Language Models (VLLMs) and, thus, may represent the building blocks of\nmany NLP systems solving downstream tasks. Hence, a little or a large bias in\nCtB-LLMs may cause huge harm. In this paper, we performed a large investigation\nof the bias of three families of CtB-LLMs, and we showed that debiasing\ntechniques are effective and usable. Indeed, according to current tests, the\nLLaMA and the OPT families have an important bias in gender, race, religion,\nand profession. In contrast to the analysis for other LLMs, we discovered that\nbias depends not on the number of parameters but on the perplexity. Finally,\nthe debiasing of OPT using LoRA reduces bias up to 4.12 points in the\nnormalized stereotype score.",
+ "authors": "Leonardo Ranaldi, Elena Sofia Ruzzetti, Davide Venditti, Dario Onorati, Fabio Massimo Zanzotto",
+ "published": "2023-05-23",
+ "updated": "2023-08-29",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.18569v1",
+ "title": "Fairness of ChatGPT",
+ "abstract": "Understanding and addressing unfairness in LLMs are crucial for responsible\nAI deployment. However, there is a limited availability of quantitative\nanalyses and in-depth studies regarding fairness evaluations in LLMs,\nespecially when applying LLMs to high-stakes fields. This work aims to fill\nthis gap by providing a systematic evaluation of the effectiveness and fairness\nof LLMs using ChatGPT as a study case. We focus on assessing ChatGPT's\nperformance in high-takes fields including education, criminology, finance and\nhealthcare. To make thorough evaluation, we consider both group fairness and\nindividual fairness and we also observe the disparities in ChatGPT's outputs\nunder a set of biased or unbiased prompts. This work contributes to a deeper\nunderstanding of LLMs' fairness performance, facilitates bias mitigation and\nfosters the development of responsible artificial intelligence systems.",
+ "authors": "Yunqi Li, Yongfeng Zhang",
+ "published": "2023-05-22",
+ "updated": "2023-05-22",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.AI",
+ "cs.CL",
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.08517v1",
+ "title": "Online Safety Analysis for LLMs: a Benchmark, an Assessment, and a Path Forward",
+ "abstract": "While Large Language Models (LLMs) have seen widespread applications across\nnumerous fields, their limited interpretability poses concerns regarding their\nsafe operations from multiple aspects, e.g., truthfulness, robustness, and\nfairness. Recent research has started developing quality assurance methods for\nLLMs, introducing techniques such as offline detector-based or uncertainty\nestimation methods. However, these approaches predominantly concentrate on\npost-generation analysis, leaving the online safety analysis for LLMs during\nthe generation phase an unexplored area. To bridge this gap, we conduct in this\nwork a comprehensive evaluation of the effectiveness of existing online safety\nanalysis methods on LLMs. We begin with a pilot study that validates the\nfeasibility of detecting unsafe outputs in the early generation process.\nFollowing this, we establish the first publicly available benchmark of online\nsafety analysis for LLMs, including a broad spectrum of methods, models, tasks,\ndatasets, and evaluation metrics. Utilizing this benchmark, we extensively\nanalyze the performance of state-of-the-art online safety analysis methods on\nboth open-source and closed-source LLMs. This analysis reveals the strengths\nand weaknesses of individual methods and offers valuable insights into\nselecting the most appropriate method based on specific application scenarios\nand task requirements. Furthermore, we also explore the potential of using\nhybridization methods, i.e., combining multiple methods to derive a collective\nsafety conclusion, to enhance the efficacy of online safety analysis for LLMs.\nOur findings indicate a promising direction for the development of innovative\nand trustworthy quality assurance methodologies for LLMs, facilitating their\nreliable deployments across diverse domains.",
+ "authors": "Xuan Xie, Jiayang Song, Zhehua Zhou, Yuheng Huang, Da Song, Lei Ma",
+ "published": "2024-04-12",
+ "updated": "2024-04-12",
+ "primary_cat": "cs.SE",
+ "cats": [
+ "cs.SE",
+ "cs.AI",
+ "cs.CL",
+ "cs.CR",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.18276v1",
+ "title": "Bias Neutralization Framework: Measuring Fairness in Large Language Models with Bias Intelligence Quotient (BiQ)",
+ "abstract": "The burgeoning influence of Large Language Models (LLMs) in shaping public\ndiscourse and decision-making underscores the imperative to address inherent\nbiases within these AI systems. In the wake of AI's expansive integration\nacross sectors, addressing racial bias in LLMs has never been more critical.\nThis paper introduces a novel framework called Comprehensive Bias\nNeutralization Framework (CBNF) which embodies an innovative approach to\nquantifying and mitigating biases within LLMs. Our framework combines the Large\nLanguage Model Bias Index (LLMBI) [Oketunji, A., Anas, M., Saina, D., (2023)]\nand Bias removaL with No Demographics (BLIND) [Orgad, H., Belinkov, Y. (2023)]\nmethodologies to create a new metric called Bias Intelligence Quotient\n(BiQ)which detects, measures, and mitigates racial bias in LLMs without\nreliance on demographic annotations.\n By introducing a new metric called BiQ that enhances LLMBI with additional\nfairness metrics, CBNF offers a multi-dimensional metric for bias assessment,\nunderscoring the necessity of a nuanced approach to fairness in AI [Mehrabi et\nal., 2021]. This paper presents a detailed analysis of Latimer AI (a language\nmodel incrementally trained on black history and culture) in comparison to\nChatGPT 3.5, illustrating Latimer AI's efficacy in detecting racial, cultural,\nand gender biases through targeted training and refined bias mitigation\nstrategies [Latimer & Bender, 2023].",
+ "authors": "Malur Narayan, John Pasmore, Elton Sampaio, Vijay Raghavan, Gabriella Waters",
+ "published": "2024-04-28",
+ "updated": "2024-04-28",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "D.1; I.2"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.03192v1",
+ "title": "Do Large Language Models Rank Fairly? An Empirical Study on the Fairness of LLMs as Rankers",
+ "abstract": "The integration of Large Language Models (LLMs) in information retrieval has\nraised a critical reevaluation of fairness in the text-ranking models. LLMs,\nsuch as GPT models and Llama2, have shown effectiveness in natural language\nunderstanding tasks, and prior works (e.g., RankGPT) have also demonstrated\nthat the LLMs exhibit better performance than the traditional ranking models in\nthe ranking task. However, their fairness remains largely unexplored. This\npaper presents an empirical study evaluating these LLMs using the TREC Fair\nRanking dataset, focusing on the representation of binary protected attributes\nsuch as gender and geographic location, which are historically underrepresented\nin search outcomes. Our analysis delves into how these LLMs handle queries and\ndocuments related to these attributes, aiming to uncover biases in their\nranking algorithms. We assess fairness from both user and content perspectives,\ncontributing an empirical benchmark for evaluating LLMs as the fair ranker.",
+ "authors": "Yuan Wang, Xuyang Wu, Hsin-Tai Wu, Zhiqiang Tao, Yi Fang",
+ "published": "2024-04-04",
+ "updated": "2024-04-04",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR",
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.13840v1",
+ "title": "Whose Side Are You On? Investigating the Political Stance of Large Language Models",
+ "abstract": "Large Language Models (LLMs) have gained significant popularity for their\napplication in various everyday tasks such as text generation, summarization,\nand information retrieval. As the widespread adoption of LLMs continues to\nsurge, it becomes increasingly crucial to ensure that these models yield\nresponses that are politically impartial, with the aim of preventing\ninformation bubbles, upholding fairness in representation, and mitigating\nconfirmation bias. In this paper, we propose a quantitative framework and\npipeline designed to systematically investigate the political orientation of\nLLMs. Our investigation delves into the political alignment of LLMs across a\nspectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.\nAcross topics, the results indicate that LLMs exhibit a tendency to provide\nresponses that closely align with liberal or left-leaning perspectives rather\nthan conservative or right-leaning ones when user queries include details\npertaining to occupation, race, or political affiliation. The findings\npresented in this study not only reaffirm earlier observations regarding the\nleft-leaning characteristics of LLMs but also surface particular attributes,\nsuch as occupation, that are particularly susceptible to such inclinations even\nwhen directly steered towards conservatism. As a recommendation to avoid these\nmodels providing politicised responses, users should be mindful when crafting\nqueries, and exercise caution in selecting neutral prompt language.",
+ "authors": "Pagnarasmey Pit, Xingjun Ma, Mike Conway, Qingyu Chen, James Bailey, Henry Pit, Putrasmey Keo, Watey Diep, Yu-Gang Jiang",
+ "published": "2024-03-15",
+ "updated": "2024-03-15",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.SI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.08472v1",
+ "title": "Selecting Shots for Demographic Fairness in Few-Shot Learning with Large Language Models",
+ "abstract": "Recently, work in NLP has shifted to few-shot (in-context) learning, with\nlarge language models (LLMs) performing well across a range of tasks. However,\nwhile fairness evaluations have become a standard for supervised methods,\nlittle is known about the fairness of LLMs as prediction systems. Further,\ncommon standard methods for fairness involve access to models weights or are\napplied during finetuning, which are not applicable in few-shot learning. Do\nLLMs exhibit prediction biases when used for standard NLP tasks? In this work,\nwe explore the effect of shots, which directly affect the performance of\nmodels, on the fairness of LLMs as NLP classification systems. We consider how\ndifferent shot selection strategies, both existing and new demographically\nsensitive methods, affect model fairness across three standard fairness\ndatasets. We discuss how future work can include LLM fairness evaluations.",
+ "authors": "Carlos Aguirre, Kuleen Sasse, Isabel Cachola, Mark Dredze",
+ "published": "2023-11-14",
+ "updated": "2023-11-14",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.13095v1",
+ "title": "Enhancing Logical Reasoning in Large Language Models to Facilitate Legal Applications",
+ "abstract": "Language serves as a vehicle for conveying thought, enabling communication\namong individuals. The ability to distinguish between diverse concepts,\nidentify fairness and injustice, and comprehend a range of legal notions\nfundamentally relies on logical reasoning. Large Language Models (LLMs) attempt\nto emulate human language understanding and generation, but their competency in\nlogical reasoning remains limited. This paper seeks to address the\nphilosophical question: How can we effectively teach logical reasoning to LLMs\nwhile maintaining a deep understanding of the intricate relationship between\nlanguage and logic? By focusing on bolstering LLMs' capabilities in logical\nreasoning, we aim to expand their applicability in law and other\nlogic-intensive disciplines. To this end, we propose a Reinforcement Learning\nfrom Logical Feedback (RLLF) approach, which serves as a potential framework\nfor refining LLMs' reasoning capacities. Through RLLF and a revised evaluation\nmethodology, we explore new avenues for research in this domain and contribute\nto the development of LLMs capable of handling complex legal reasoning tasks\nwhile acknowledging the fundamental connection between language and logic.",
+ "authors": "Ha-Thanh Nguyen, Wachara Fungwacharakorn, Ken Satoh",
+ "published": "2023-11-22",
+ "updated": "2023-11-22",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.17916v2",
+ "title": "LLM-Resistant Math Word Problem Generation via Adversarial Attacks",
+ "abstract": "Large language models (LLMs) have significantly transformed the educational\nlandscape. As current plagiarism detection tools struggle to keep pace with\nLLMs' rapid advancements, the educational community faces the challenge of\nassessing students' true problem-solving abilities in the presence of LLMs. In\nthis work, we explore a new paradigm for ensuring fair evaluation -- generating\nadversarial examples which preserve the structure and difficulty of the\noriginal questions aimed for assessment, but are unsolvable by LLMs. Focusing\non the domain of math word problems, we leverage abstract syntax trees to\nstructurally generate adversarial examples that cause LLMs to produce incorrect\nanswers by simply editing the numeric values in the problems. We conduct\nexperiments on various open- and closed-source LLMs, quantitatively and\nqualitatively demonstrating that our method significantly degrades their math\nproblem-solving ability. We identify shared vulnerabilities among LLMs and\npropose a cost-effective approach to attack high-cost models. Additionally, we\nconduct automatic analysis on math problems and investigate the cause of\nfailure, offering a nuanced view into model's limitation.",
+ "authors": "Roy Xie, Chengxuan Huang, Junlin Wang, Bhuwan Dhingra",
+ "published": "2024-02-27",
+ "updated": "2024-03-30",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.07609v3",
+ "title": "Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation",
+ "abstract": "The remarkable achievements of Large Language Models (LLMs) have led to the\nemergence of a novel recommendation paradigm -- Recommendation via LLM\n(RecLLM). Nevertheless, it is important to note that LLMs may contain social\nprejudices, and therefore, the fairness of recommendations made by RecLLM\nrequires further investigation. To avoid the potential risks of RecLLM, it is\nimperative to evaluate the fairness of RecLLM with respect to various sensitive\nattributes on the user side. Due to the differences between the RecLLM paradigm\nand the traditional recommendation paradigm, it is problematic to directly use\nthe fairness benchmark of traditional recommendation. To address the dilemma,\nwe propose a novel benchmark called Fairness of Recommendation via LLM\n(FaiRLLM). This benchmark comprises carefully crafted metrics and a dataset\nthat accounts for eight sensitive attributes1 in two recommendation scenarios:\nmusic and movies. By utilizing our FaiRLLM benchmark, we conducted an\nevaluation of ChatGPT and discovered that it still exhibits unfairness to some\nsensitive attributes when generating recommendations. Our code and dataset can\nbe found at https://github.com/jizhi-zhang/FaiRLLM.",
+ "authors": "Jizhi Zhang, Keqin Bao, Yang Zhang, Wenjie Wang, Fuli Feng, Xiangnan He",
+ "published": "2023-05-12",
+ "updated": "2023-10-17",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR",
+ "cs.CL",
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2308.05374v2",
+ "title": "Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment",
+ "abstract": "Ensuring alignment, which refers to making models behave in accordance with\nhuman intentions [1,2], has become a critical task before deploying large\nlanguage models (LLMs) in real-world applications. For instance, OpenAI devoted\nsix months to iteratively aligning GPT-4 before its release [3]. However, a\nmajor challenge faced by practitioners is the lack of clear guidance on\nevaluating whether LLM outputs align with social norms, values, and\nregulations. This obstacle hinders systematic iteration and deployment of LLMs.\nTo address this issue, this paper presents a comprehensive survey of key\ndimensions that are crucial to consider when assessing LLM trustworthiness. The\nsurvey covers seven major categories of LLM trustworthiness: reliability,\nsafety, fairness, resistance to misuse, explainability and reasoning, adherence\nto social norms, and robustness. Each major category is further divided into\nseveral sub-categories, resulting in a total of 29 sub-categories.\nAdditionally, a subset of 8 sub-categories is selected for further\ninvestigation, where corresponding measurement studies are designed and\nconducted on several widely-used LLMs. The measurement results indicate that,\nin general, more aligned models tend to perform better in terms of overall\ntrustworthiness. However, the effectiveness of alignment varies across the\ndifferent trustworthiness categories considered. This highlights the importance\nof conducting more fine-grained analyses, testing, and making continuous\nimprovements on LLM alignment. By shedding light on these key dimensions of LLM\ntrustworthiness, this paper aims to provide valuable insights and guidance to\npractitioners in the field. Understanding and addressing these concerns will be\ncrucial in achieving reliable and ethically sound deployment of LLMs in various\napplications.",
+ "authors": "Yang Liu, Yuanshun Yao, Jean-Francois Ton, Xiaoying Zhang, Ruocheng Guo, Hao Cheng, Yegor Klochkov, Muhammad Faaiz Taufiq, Hang Li",
+ "published": "2023-08-10",
+ "updated": "2024-03-21",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.11595v3",
+ "title": "Examining Inter-Consistency of Large Language Models Collaboration: An In-depth Analysis via Debate",
+ "abstract": "Large Language Models (LLMs) have shown impressive capabilities in various\napplications, but they still face various inconsistency issues. Existing works\nprimarily focus on the inconsistency issues within a single LLM, while we\ncomplementarily explore the inter-consistency among multiple LLMs for\ncollaboration. To examine whether LLMs can collaborate effectively to achieve a\nconsensus for a shared goal, we focus on commonsense reasoning, and introduce a\nformal debate framework (FORD) to conduct a three-stage debate among LLMs with\nreal-world scenarios alignment: fair debate, mismatched debate, and roundtable\ndebate. Through extensive experiments on various datasets, LLMs can effectively\ncollaborate to reach a consensus despite noticeable inter-inconsistencies, but\nimbalances in their abilities can lead to domination by superior LLMs.\nLeveraging a more advanced LLM like GPT-4 as an authoritative judge can boost\ncollaboration performance. Our work contributes to understanding the\ninter-consistency among LLMs and lays the foundation for developing future\ncollaboration methods. Codes and data are available at\nhttps://github.com/Waste-Wood/FORD",
+ "authors": "Kai Xiong, Xiao Ding, Yixin Cao, Ting Liu, Bing Qin",
+ "published": "2023-05-19",
+ "updated": "2023-10-18",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.01964v1",
+ "title": "Don't Make Your LLM an Evaluation Benchmark Cheater",
+ "abstract": "Large language models~(LLMs) have greatly advanced the frontiers of\nartificial intelligence, attaining remarkable improvement in model capacity. To\nassess the model performance, a typical approach is to construct evaluation\nbenchmarks for measuring the ability level of LLMs in different aspects.\nDespite that a number of high-quality benchmarks have been released, the\nconcerns about the appropriate use of these benchmarks and the fair comparison\nof different models are increasingly growing. Considering these concerns, in\nthis paper, we discuss the potential risk and impact of inappropriately using\nevaluation benchmarks and misleadingly interpreting the evaluation results.\nSpecially, we focus on a special issue that would lead to inappropriate\nevaluation, \\ie \\emph{benchmark leakage}, referring that the data related to\nevaluation sets is occasionally used for model training. This phenomenon now\nbecomes more common since pre-training data is often prepared ahead of model\ntest. We conduct extensive experiments to study the effect of benchmark\nleverage, and find that it can dramatically boost the evaluation results, which\nwould finally lead to an unreliable assessment of model performance. To improve\nthe use of existing evaluation benchmarks, we finally present several\nguidelines for both LLM developers and benchmark maintainers. We hope this work\ncan draw attention to appropriate training and evaluation of LLMs.",
+ "authors": "Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han",
+ "published": "2023-11-03",
+ "updated": "2023-11-03",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2307.15997v1",
+ "title": "RoCar: A Relationship Network-based Evaluation Method to Large Language Models",
+ "abstract": "Large language models (LLMs) have received increasing attention. However, due\nto the complexity of its capabilities, how to rationally evaluate the\ncapabilities of LLMs is still a task to be solved. We propose the RoCar method,\nwhich utilizes the defined basic schemas to randomly construct a task graph and\ngenerates natural language evaluation tasks based on the task graph to evaluate\nthe reasoning and memory abilities of LLMs respectively. Due to the very large\nrandomness of the task construction process, it is possible to ensure that none\nof the LLMs to be tested has directly learned the evaluation tasks,\nguaranteeing the fairness of the evaluation method.",
+ "authors": "Ming Wang, Wenfang Wu, Chongyun Gao, Daling Wang, Shi Feng, Yifei Zhang",
+ "published": "2023-07-29",
+ "updated": "2023-07-29",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.16343v2",
+ "title": "Evaluating, Understanding, and Improving Constrained Text Generation for Large Language Models",
+ "abstract": "Advancements in natural language generation (NLG) and large language models\n(LLMs) have led to proficient text generation in various tasks. However,\nintegrating intricate constraints into neural text generation, due to LLMs'\nopacity, remains challenging. This study investigates constrained text\ngeneration for LLMs, where predefined constraints are applied during LLM's\ngeneration process. Our research mainly focuses on mainstream open-source LLMs,\ncategorizing constraints into lexical, structural, and relation-based types. We\nalso present various benchmarks to facilitate fair evaluation. The study\naddresses some key research questions, including evaluating, understanding and\nimproving constrained text generation for LLMs. Results illuminate LLMs'\ncapacity and deficiency to incorporate constraints and provide insights for\nfuture developments in constrained text generation. Codes and datasets will be\nreleased upon acceptance.",
+ "authors": "Xiang Chen, Xiaojun Wan",
+ "published": "2023-10-25",
+ "updated": "2024-03-21",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.06003v1",
+ "title": "FreeEval: A Modular Framework for Trustworthy and Efficient Evaluation of Large Language Models",
+ "abstract": "The rapid development of large language model (LLM) evaluation methodologies\nand datasets has led to a profound challenge: integrating state-of-the-art\nevaluation techniques cost-effectively while ensuring reliability,\nreproducibility, and efficiency. Currently, there is a notable absence of a\nunified and adaptable framework that seamlessly integrates various evaluation\napproaches. Moreover, the reliability of evaluation findings is often\nquestionable due to potential data contamination, with the evaluation\nefficiency commonly overlooked when facing the substantial costs associated\nwith LLM inference. In response to these challenges, we introduce FreeEval, a\nmodular and scalable framework crafted to enable trustworthy and efficient\nautomatic evaluations of LLMs. Firstly, FreeEval's unified abstractions\nsimplify the integration and improve the transparency of diverse evaluation\nmethodologies, encompassing dynamic evaluation that demand sophisticated LLM\ninteractions. Secondly, the framework integrates meta-evaluation techniques\nlike human evaluation and data contamination detection, which, along with\ndynamic evaluation modules in the platform, enhance the fairness of the\nevaluation outcomes. Lastly, FreeEval is designed with a high-performance\ninfrastructure, including distributed computation and caching strategies,\nenabling extensive evaluations across multi-node, multi-GPU clusters for\nopen-source and proprietary LLMs.",
+ "authors": "Zhuohao Yu, Chang Gao, Wenjin Yao, Yidong Wang, Zhengran Zeng, Wei Ye, Jindong Wang, Yue Zhang, Shikun Zhang",
+ "published": "2024-04-09",
+ "updated": "2024-04-09",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.02839v1",
+ "title": "An Empirical Study of LLM-as-a-Judge for LLM Evaluation: Fine-tuned Judge Models are Task-specific Classifiers",
+ "abstract": "Recently, there has been a growing trend of utilizing Large Language Model\n(LLM) to evaluate the quality of other LLMs. Many studies have employed\nproprietary close-source models, especially GPT4, as the evaluator.\nAlternatively, other works have fine-tuned judge models based on open-source\nLLMs as the evaluator. In this study, we conduct an empirical study of\ndifferent judge models on their evaluation capability. Our findings indicate\nthat although the fine-tuned judge models achieve high accuracy on in-domain\ntest sets, even surpassing GPT4, they are inherently task-specific classifiers,\nand their generalizability and fairness severely underperform GPT4.",
+ "authors": "Hui Huang, Yingqi Qu, Jing Liu, Muyun Yang, Tiejun Zhao",
+ "published": "2024-03-05",
+ "updated": "2024-03-05",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2309.08836v2",
+ "title": "Bias and Fairness in Chatbots: An Overview",
+ "abstract": "Chatbots have been studied for more than half a century. With the rapid\ndevelopment of natural language processing (NLP) technologies in recent years,\nchatbots using large language models (LLMs) have received much attention\nnowadays. Compared with traditional ones, modern chatbots are more powerful and\nhave been used in real-world applications. There are however, bias and fairness\nconcerns in modern chatbot design. Due to the huge amounts of training data,\nextremely large model sizes, and lack of interpretability, bias mitigation and\nfairness preservation of modern chatbots are challenging. Thus, a comprehensive\noverview on bias and fairness in chatbot systems is given in this paper. The\nhistory of chatbots and their categories are first reviewed. Then, bias sources\nand potential harms in applications are analyzed. Considerations in designing\nfair and unbiased chatbot systems are examined. Finally, future research\ndirections are discussed.",
+ "authors": "Jintang Xue, Yun-Cheng Wang, Chengwei Wei, Xiaofeng Liu, Jonghye Woo, C. -C. Jay Kuo",
+ "published": "2023-09-16",
+ "updated": "2023-12-10",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.11406v2",
+ "title": "Don't Go To Extremes: Revealing the Excessive Sensitivity and Calibration Limitations of LLMs in Implicit Hate Speech Detection",
+ "abstract": "The fairness and trustworthiness of Large Language Models (LLMs) are\nreceiving increasing attention. Implicit hate speech, which employs indirect\nlanguage to convey hateful intentions, occupies a significant portion of\npractice. However, the extent to which LLMs effectively address this issue\nremains insufficiently examined. This paper delves into the capability of LLMs\nto detect implicit hate speech (Classification Task) and express confidence in\ntheir responses (Calibration Task). Our evaluation meticulously considers\nvarious prompt patterns and mainstream uncertainty estimation methods. Our\nfindings highlight that LLMs exhibit two extremes: (1) LLMs display excessive\nsensitivity towards groups or topics that may cause fairness issues, resulting\nin misclassifying benign statements as hate speech. (2) LLMs' confidence scores\nfor each method excessively concentrate on a fixed range, remaining unchanged\nregardless of the dataset's complexity. Consequently, the calibration\nperformance is heavily reliant on primary classification accuracy. These\ndiscoveries unveil new limitations of LLMs, underscoring the need for caution\nwhen optimizing models to ensure they do not veer towards extremes. This serves\nas a reminder to carefully consider sensitivity and confidence in the pursuit\nof model fairness.",
+ "authors": "Min Zhang, Jianfeng He, Taoran Ji, Chang-Tien Lu",
+ "published": "2024-02-18",
+ "updated": "2024-02-26",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2308.11483v1",
+ "title": "Large Language Models Sensitivity to The Order of Options in Multiple-Choice Questions",
+ "abstract": "Large Language Models (LLMs) have demonstrated remarkable capabilities in\nvarious NLP tasks. However, previous works have shown these models are\nsensitive towards prompt wording, and few-shot demonstrations and their order,\nposing challenges to fair assessment of these models. As these models become\nmore powerful, it becomes imperative to understand and address these\nlimitations. In this paper, we focus on LLMs robustness on the task of\nmultiple-choice questions -- commonly adopted task to study reasoning and\nfact-retrieving capability of LLMs. Investigating the sensitivity of LLMs\ntowards the order of options in multiple-choice questions, we demonstrate a\nconsiderable performance gap of approximately 13% to 75% in LLMs on different\nbenchmarks, when answer options are reordered, even when using demonstrations\nin a few-shot setting. Through a detailed analysis, we conjecture that this\nsensitivity arises when LLMs are uncertain about the prediction between the\ntop-2/3 choices, and specific options placements may favor certain prediction\nbetween those top choices depending on the question caused by positional bias.\nWe also identify patterns in top-2 choices that amplify or mitigate the model's\nbias toward option placement. We found that for amplifying bias, the optimal\nstrategy involves positioning the top two choices as the first and last\noptions. Conversely, to mitigate bias, we recommend placing these choices among\nthe adjacent options. To validate our conjecture, we conduct various\nexperiments and adopt two approaches to calibrate LLMs' predictions, leading to\nup to 8 percentage points improvement across different models and benchmarks.",
+ "authors": "Pouya Pezeshkpour, Estevam Hruschka",
+ "published": "2023-08-22",
+ "updated": "2023-08-22",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.06500v1",
+ "title": "MetaAgents: Simulating Interactions of Human Behaviors for LLM-based Task-oriented Coordination via Collaborative Generative Agents",
+ "abstract": "Significant advancements have occurred in the application of Large Language\nModels (LLMs) for various tasks and social simulations. Despite this, their\ncapacities to coordinate within task-oriented social contexts are\nunder-explored. Such capabilities are crucial if LLMs are to effectively mimic\nhuman-like social behavior and produce meaningful results. To bridge this gap,\nwe introduce collaborative generative agents, endowing LLM-based Agents with\nconsistent behavior patterns and task-solving abilities. We situate these\nagents in a simulated job fair environment as a case study to scrutinize their\ncoordination skills. We propose a novel framework that equips collaborative\ngenerative agents with human-like reasoning abilities and specialized skills.\nOur evaluation demonstrates that these agents show promising performance.\nHowever, we also uncover limitations that hinder their effectiveness in more\ncomplex coordination tasks. Our work provides valuable insights into the role\nand evolution of LLMs in task-oriented social simulations.",
+ "authors": "Yuan Li, Yixuan Zhang, Lichao Sun",
+ "published": "2023-10-10",
+ "updated": "2023-10-10",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2206.13757v1",
+ "title": "Flexible text generation for counterfactual fairness probing",
+ "abstract": "A common approach for testing fairness issues in text-based classifiers is\nthrough the use of counterfactuals: does the classifier output change if a\nsensitive attribute in the input is changed? Existing counterfactual generation\nmethods typically rely on wordlists or templates, producing simple\ncounterfactuals that don't take into account grammar, context, or subtle\nsensitive attribute references, and could miss issues that the wordlist\ncreators had not considered. In this paper, we introduce a task for generating\ncounterfactuals that overcomes these shortcomings, and demonstrate how large\nlanguage models (LLMs) can be leveraged to make progress on this task. We show\nthat this LLM-based method can produce complex counterfactuals that existing\nmethods cannot, comparing the performance of various counterfactual generation\nmethods on the Civil Comments dataset and showing their value in evaluating a\ntoxicity classifier.",
+ "authors": "Zee Fryer, Vera Axelrod, Ben Packer, Alex Beutel, Jilin Chen, Kellie Webster",
+ "published": "2022-06-28",
+ "updated": "2022-06-28",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.09447v2",
+ "title": "How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities",
+ "abstract": "The rapid progress in open-source Large Language Models (LLMs) is\nsignificantly driving AI development forward. However, there is still a limited\nunderstanding of their trustworthiness. Deploying these models at scale without\nsufficient trustworthiness can pose significant risks, highlighting the need to\nuncover these issues promptly. In this work, we conduct an adversarial\nassessment of open-source LLMs on trustworthiness, scrutinizing them across\neight different aspects including toxicity, stereotypes, ethics, hallucination,\nfairness, sycophancy, privacy, and robustness against adversarial\ndemonstrations. We propose advCoU, an extended Chain of Utterances-based (CoU)\nprompting strategy by incorporating carefully crafted malicious demonstrations\nfor trustworthiness attack. Our extensive experiments encompass recent and\nrepresentative series of open-source LLMs, including Vicuna, MPT, Falcon,\nMistral, and Llama 2. The empirical outcomes underscore the efficacy of our\nattack strategy across diverse aspects. More interestingly, our result analysis\nreveals that models with superior performance in general NLP tasks do not\nalways have greater trustworthiness; in fact, larger models can be more\nvulnerable to attacks. Additionally, models that have undergone instruction\ntuning, focusing on instruction following, tend to be more susceptible,\nalthough fine-tuning LLMs for safety alignment proves effective in mitigating\nadversarial trustworthiness attacks.",
+ "authors": "Lingbo Mo, Boshi Wang, Muhao Chen, Huan Sun",
+ "published": "2023-11-15",
+ "updated": "2024-04-02",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.07884v2",
+ "title": "Fair Abstractive Summarization of Diverse Perspectives",
+ "abstract": "People from different social and demographic groups express diverse\nperspectives and conflicting opinions on a broad set of topics such as product\nreviews, healthcare, law, and politics. A fair summary should provide a\ncomprehensive coverage of diverse perspectives without underrepresenting\ncertain groups. However, current work in summarization metrics and Large\nLanguage Models (LLMs) evaluation has not explored fair abstractive\nsummarization. In this paper, we systematically investigate fair abstractive\nsummarization for user-generated data. We first formally define fairness in\nabstractive summarization as not underrepresenting perspectives of any groups\nof people, and we propose four reference-free automatic metrics by measuring\nthe differences between target and source perspectives. We evaluate nine LLMs,\nincluding three GPT models, four LLaMA models, PaLM 2, and Claude, on six\ndatasets collected from social media, online reviews, and recorded transcripts.\nExperiments show that both the model-generated and the human-written reference\nsummaries suffer from low fairness. We conduct a comprehensive analysis of the\ncommon factors influencing fairness and propose three simple but effective\nmethods to alleviate unfair summarization. Our dataset and code are available\nat https://github.com/psunlpgroup/FairSumm.",
+ "authors": "Yusen Zhang, Nan Zhang, Yixin Liu, Alexander Fabbri, Junru Liu, Ryo Kamoi, Xiaoxin Lu, Caiming Xiong, Jieyu Zhao, Dragomir Radev, Kathleen McKeown, Rui Zhang",
+ "published": "2023-11-14",
+ "updated": "2024-03-30",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.02294v1",
+ "title": "LLMs grasp morality in concept",
+ "abstract": "Work in AI ethics and fairness has made much progress in regulating LLMs to\nreflect certain values, such as fairness, truth, and diversity. However, it has\ntaken the problem of how LLMs might 'mean' anything at all for granted. Without\naddressing this, it is not clear what imbuing LLMs with such values even means.\nIn response, we provide a general theory of meaning that extends beyond humans.\nWe use this theory to explicate the precise nature of LLMs as meaning-agents.\nWe suggest that the LLM, by virtue of its position as a meaning-agent, already\ngrasps the constructions of human society (e.g. morality, gender, and race) in\nconcept. Consequently, under certain ethical frameworks, currently popular\nmethods for model alignment are limited at best and counterproductive at worst.\nMoreover, unaligned models may help us better develop our moral and social\nphilosophy.",
+ "authors": "Mark Pock, Andre Ye, Jared Moore",
+ "published": "2023-11-04",
+ "updated": "2023-11-04",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.06899v4",
+ "title": "Flames: Benchmarking Value Alignment of LLMs in Chinese",
+ "abstract": "The widespread adoption of large language models (LLMs) across various\nregions underscores the urgent need to evaluate their alignment with human\nvalues. Current benchmarks, however, fall short of effectively uncovering\nsafety vulnerabilities in LLMs. Despite numerous models achieving high scores\nand 'topping the chart' in these evaluations, there is still a significant gap\nin LLMs' deeper alignment with human values and achieving genuine harmlessness.\nTo this end, this paper proposes a value alignment benchmark named Flames,\nwhich encompasses both common harmlessness principles and a unique morality\ndimension that integrates specific Chinese values such as harmony. Accordingly,\nwe carefully design adversarial prompts that incorporate complex scenarios and\njailbreaking methods, mostly with implicit malice. By prompting 17 mainstream\nLLMs, we obtain model responses and rigorously annotate them for detailed\nevaluation. Our findings indicate that all the evaluated LLMs demonstrate\nrelatively poor performance on Flames, particularly in the safety and fairness\ndimensions. We also develop a lightweight specified scorer capable of scoring\nLLMs across multiple dimensions to efficiently evaluate new models on the\nbenchmark. The complexity of Flames has far exceeded existing benchmarks,\nsetting a new challenge for contemporary LLMs and highlighting the need for\nfurther alignment of LLMs. Our benchmark is publicly available at\nhttps://github.com/AIFlames/Flames.",
+ "authors": "Kexin Huang, Xiangyang Liu, Qianyu Guo, Tianxiang Sun, Jiawei Sun, Yaru Wang, Zeyang Zhou, Yixu Wang, Yan Teng, Xipeng Qiu, Yingchun Wang, Dahua Lin",
+ "published": "2023-11-12",
+ "updated": "2024-04-15",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2309.14345v2",
+ "title": "Bias Testing and Mitigation in LLM-based Code Generation",
+ "abstract": "Utilizing state-of-the-art Large Language Models (LLMs), automatic code\ngeneration models play a pivotal role in enhancing the productivity of software\ndevelopment procedures. As the adoption of LLMs becomes more widespread in\nsoftware coding ecosystems, a pressing issue has emerged: does the generated\ncode contain social bias and unfairness, such as those related to age, gender,\nand race? This issue concerns the integrity, fairness, and ethical foundation\nof software applications that depend on the code generated by these models, yet\nis under-explored in the literature. This paper presents a novel bias testing\nframework that is specifically designed for code generation tasks. Based on\nthis framework, we conduct an extensive evaluation of the bias in code\ngenerated by five state-of-the-art LLMs. Our findings reveal that 20.29% to\n44.93% code functions generated by the models under study are biased when\nhandling bias sensitive tasks (i.e., tasks that involve sensitive attributes\nsuch as age and gender). This indicates that the existing LLMs can be unfair in\ncode generation, posing risks of unintended and harmful software behaviors. To\nmitigate bias for code generation models, we evaluate five bias mitigation\nprompt strategies, i.e., utilizing bias testing results to refine the code\n(zero-shot), one-, few-shot, and two Chain-of-Thought (CoT) prompts. Our\nevaluation results illustrate that these strategies are all effective in\nmitigating bias. Overall, one-shot and few-shot learning are the two most\neffective. For GPT-4, 80% to 90% code bias can be removed with one-shot\nlearning.",
+ "authors": "Dong Huang, Qingwen Bu, Jie Zhang, Xiaofei Xie, Junjie Chen, Heming Cui",
+ "published": "2023-09-03",
+ "updated": "2024-01-09",
+ "primary_cat": "cs.SE",
+ "cats": [
+ "cs.SE",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.05668v1",
+ "title": "CFaiRLLM: Consumer Fairness Evaluation in Large-Language Model Recommender System",
+ "abstract": "In the evolving landscape of recommender systems, the integration of Large\nLanguage Models (LLMs) such as ChatGPT marks a new era, introducing the concept\nof Recommendation via LLM (RecLLM). While these advancements promise\nunprecedented personalization and efficiency, they also bring to the fore\ncritical concerns regarding fairness, particularly in how recommendations might\ninadvertently perpetuate or amplify biases associated with sensitive user\nattributes. In order to address these concerns, our study introduces a\ncomprehensive evaluation framework, CFaiRLLM, aimed at evaluating (and thereby\nmitigating) biases on the consumer side within RecLLMs.\n Our research methodically assesses the fairness of RecLLMs by examining how\nrecommendations might vary with the inclusion of sensitive attributes such as\ngender, age, and their intersections, through both similarity alignment and\ntrue preference alignment. By analyzing recommendations generated under\ndifferent conditions-including the use of sensitive attributes in user\nprompts-our framework identifies potential biases in the recommendations\nprovided. A key part of our study involves exploring how different detailed\nstrategies for constructing user profiles (random, top-rated, recent) impact\nthe alignment between recommendations made without consideration of sensitive\nattributes and those that are sensitive-attribute-aware, highlighting the bias\nmechanisms within RecLLMs.\n The findings in our study highlight notable disparities in the fairness of\nrecommendations, particularly when sensitive attributes are integrated into the\nrecommendation process, either individually or in combination. The analysis\ndemonstrates that the choice of user profile sampling strategy plays a\nsignificant role in affecting fairness outcomes, highlighting the complexity of\nachieving fair recommendations in the era of LLMs.",
+ "authors": "Yashar Deldjoo, Tommaso di Noia",
+ "published": "2024-03-08",
+ "updated": "2024-03-08",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2309.11653v2",
+ "title": "\"It's a Fair Game\", or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents",
+ "abstract": "The widespread use of Large Language Model (LLM)-based conversational agents\n(CAs), especially in high-stakes domains, raises many privacy concerns.\nBuilding ethical LLM-based CAs that respect user privacy requires an in-depth\nunderstanding of the privacy risks that concern users the most. However,\nexisting research, primarily model-centered, does not provide insight into\nusers' perspectives. To bridge this gap, we analyzed sensitive disclosures in\nreal-world ChatGPT conversations and conducted semi-structured interviews with\n19 LLM-based CA users. We found that users are constantly faced with trade-offs\nbetween privacy, utility, and convenience when using LLM-based CAs. However,\nusers' erroneous mental models and the dark patterns in system design limited\ntheir awareness and comprehension of the privacy risks. Additionally, the\nhuman-like interactions encouraged more sensitive disclosures, which\ncomplicated users' ability to navigate the trade-offs. We discuss practical\ndesign guidelines and the needs for paradigm shifts to protect the privacy of\nLLM-based CA users.",
+ "authors": "Zhiping Zhang, Michelle Jia, Hao-Ping Lee, Bingsheng Yao, Sauvik Das, Ada Lerner, Dakuo Wang, Tianshi Li",
+ "published": "2023-09-20",
+ "updated": "2024-04-02",
+ "primary_cat": "cs.HC",
+ "cats": [
+ "cs.HC",
+ "cs.AI",
+ "cs.CR"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2405.02219v1",
+ "title": "FairEvalLLM. A Comprehensive Framework for Benchmarking Fairness in Large Language Model Recommender Systems",
+ "abstract": "This paper presents a framework for evaluating fairness in recommender\nsystems powered by Large Language Models (RecLLMs), addressing the need for a\nunified approach that spans various fairness dimensions including sensitivity\nto user attributes, intrinsic fairness, and discussions of fairness based on\nunderlying benefits. In addition, our framework introduces counterfactual\nevaluations and integrates diverse user group considerations to enhance the\ndiscourse on fairness evaluation for RecLLMs.\n Our key contributions include the development of a robust framework for\nfairness evaluation in LLM-based recommendations and a structured method to\ncreate \\textit{informative user profiles} from demographic data, historical\nuser preferences, and recent interactions. We argue that the latter is\nessential for enhancing personalization in such systems, especially in\ntemporal-driven scenarios. We demonstrate the utility of our framework through\npractical applications on two datasets, LastFM-1K and ML-1M. We conduct\nexperiments on a subsample of 80 users from each dataset, testing and assessing\nthe effectiveness of various prompt construction scenarios and in-context\nlearning, comprising more than 50 scenarios. This results in more than 4000\nrecommendations (80 * 50 = 4000). Our study reveals that while there are no\nsignificant unfairness issues in scenarios involving sensitive attributes, some\nconcerns remain. However, in terms of intrinsic fairness, which does not\ninvolve direct sensitivity, unfairness across demographic groups remains\nsignificant. The code and data used for this paper are available at:\n\\url{https://shorturl.at/awBFM}.",
+ "authors": "Yashar Deldjoo",
+ "published": "2024-05-03",
+ "updated": "2024-05-03",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.18502v1",
+ "title": "Few-Shot Fairness: Unveiling LLM's Potential for Fairness-Aware Classification",
+ "abstract": "Employing Large Language Models (LLM) in various downstream applications such\nas classification is crucial, especially for smaller companies lacking the\nexpertise and resources required for fine-tuning a model. Fairness in LLMs\nhelps ensure inclusivity, equal representation based on factors such as race,\ngender and promotes responsible AI deployment. As the use of LLMs has become\nincreasingly prevalent, it is essential to assess whether LLMs can generate\nfair outcomes when subjected to considerations of fairness. In this study, we\nintroduce a framework outlining fairness regulations aligned with various\nfairness definitions, with each definition being modulated by varying degrees\nof abstraction. We explore the configuration for in-context learning and the\nprocedure for selecting in-context demonstrations using RAG, while\nincorporating fairness rules into the process. Experiments conducted with\ndifferent LLMs indicate that GPT-4 delivers superior results in terms of both\naccuracy and fairness compared to other models. This work is one of the early\nattempts to achieve fairness in prediction tasks by utilizing LLMs through\nin-context learning.",
+ "authors": "Garima Chhikara, Anurag Sharma, Kripabandhu Ghosh, Abhijnan Chakraborty",
+ "published": "2024-02-28",
+ "updated": "2024-02-28",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.04892v2",
+ "title": "Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs",
+ "abstract": "Recent works have showcased the ability of LLMs to embody diverse personas in\ntheir responses, exemplified by prompts like 'You are Yoda. Explain the Theory\nof Relativity.' While this ability allows personalization of LLMs and enables\nhuman behavior simulation, its effect on LLMs' capabilities remains unclear. To\nfill this gap, we present the first extensive study of the unintended\nside-effects of persona assignment on the ability of LLMs to perform basic\nreasoning tasks. Our study covers 24 reasoning datasets, 4 LLMs, and 19 diverse\npersonas (e.g. an Asian person) spanning 5 socio-demographic groups. Our\nexperiments unveil that LLMs harbor deep rooted bias against various\nsocio-demographics underneath a veneer of fairness. While they overtly reject\nstereotypes when explicitly asked ('Are Black people less skilled at\nmathematics?'), they manifest stereotypical and erroneous presumptions when\nasked to answer questions while adopting a persona. These can be observed as\nabstentions in responses, e.g., 'As a Black person, I can't answer this\nquestion as it requires math knowledge', and generally result in a substantial\nperformance drop. Our experiments with ChatGPT-3.5 show that this bias is\nubiquitous - 80% of our personas demonstrate bias; it is significant - some\ndatasets show performance drops of 70%+; and can be especially harmful for\ncertain groups - some personas suffer statistically significant drops on 80%+\nof the datasets. Overall, all 4 LLMs exhibit this bias to varying extents, with\nGPT-4-Turbo showing the least but still a problematic amount of bias (evident\nin 42% of the personas). Further analysis shows that these persona-induced\nerrors can be hard-to-discern and hard-to-avoid. Our findings serve as a\ncautionary tale that the practice of assigning personas to LLMs - a trend on\nthe rise - can surface their deep-rooted biases and have unforeseeable and\ndetrimental side-effects.",
+ "authors": "Shashank Gupta, Vaishnavi Shrivastava, Ameet Deshpande, Ashwin Kalyan, Peter Clark, Ashish Sabharwal, Tushar Khot",
+ "published": "2023-11-08",
+ "updated": "2024-01-27",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.15215v1",
+ "title": "Item-side Fairness of Large Language Model-based Recommendation System",
+ "abstract": "Recommendation systems for Web content distribution intricately connect to\nthe information access and exposure opportunities for vulnerable populations.\nThe emergence of Large Language Models-based Recommendation System (LRS) may\nintroduce additional societal challenges to recommendation systems due to the\ninherent biases in Large Language Models (LLMs). From the perspective of\nitem-side fairness, there remains a lack of comprehensive investigation into\nthe item-side fairness of LRS given the unique characteristics of LRS compared\nto conventional recommendation systems. To bridge this gap, this study examines\nthe property of LRS with respect to item-side fairness and reveals the\ninfluencing factors of both historical users' interactions and inherent\nsemantic biases of LLMs, shedding light on the need to extend conventional\nitem-side fairness methods for LRS. Towards this goal, we develop a concise and\neffective framework called IFairLRS to enhance the item-side fairness of an\nLRS. IFairLRS covers the main stages of building an LRS with specifically\nadapted strategies to calibrate the recommendations of LRS. We utilize IFairLRS\nto fine-tune LLaMA, a representative LLM, on \\textit{MovieLens} and\n\\textit{Steam} datasets, and observe significant item-side fairness\nimprovements. The code can be found in\nhttps://github.com/JiangM-C/IFairLRS.git.",
+ "authors": "Meng Jiang, Keqin Bao, Jizhi Zhang, Wenjie Wang, Zhengyi Yang, Fuli Feng, Xiangnan He",
+ "published": "2024-02-23",
+ "updated": "2024-02-23",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.02680v1",
+ "title": "Large Language Models are Geographically Biased",
+ "abstract": "Large Language Models (LLMs) inherently carry the biases contained in their\ntraining corpora, which can lead to the perpetuation of societal harm. As the\nimpact of these foundation models grows, understanding and evaluating their\nbiases becomes crucial to achieving fairness and accuracy. We propose to study\nwhat LLMs know about the world we live in through the lens of geography. This\napproach is particularly powerful as there is ground truth for the numerous\naspects of human life that are meaningfully projected onto geographic space\nsuch as culture, race, language, politics, and religion. We show various\nproblematic geographic biases, which we define as systemic errors in geospatial\npredictions. Initially, we demonstrate that LLMs are capable of making accurate\nzero-shot geospatial predictions in the form of ratings that show strong\nmonotonic correlation with ground truth (Spearman's $\\rho$ of up to 0.89). We\nthen show that LLMs exhibit common biases across a range of objective and\nsubjective topics. In particular, LLMs are clearly biased against locations\nwith lower socioeconomic conditions (e.g. most of Africa) on a variety of\nsensitive subjective topics such as attractiveness, morality, and intelligence\n(Spearman's $\\rho$ of up to 0.70). Finally, we introduce a bias score to\nquantify this and find that there is significant variation in the magnitude of\nbias across existing LLMs.",
+ "authors": "Rohin Manvi, Samar Khanna, Marshall Burke, David Lobell, Stefano Ermon",
+ "published": "2024-02-05",
+ "updated": "2024-02-05",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.CY",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.15007v1",
+ "title": "Did the Neurons Read your Book? Document-level Membership Inference for Large Language Models",
+ "abstract": "With large language models (LLMs) poised to become embedded in our daily\nlives, questions are starting to be raised about the dataset(s) they learned\nfrom. These questions range from potential bias or misinformation LLMs could\nretain from their training data to questions of copyright and fair use of\nhuman-generated text. However, while these questions emerge, developers of the\nrecent state-of-the-art LLMs become increasingly reluctant to disclose details\non their training corpus. We here introduce the task of document-level\nmembership inference for real-world LLMs, i.e. inferring whether the LLM has\nseen a given document during training or not. First, we propose a procedure for\nthe development and evaluation of document-level membership inference for LLMs\nby leveraging commonly used data sources for training and the model release\ndate. We then propose a practical, black-box method to predict document-level\nmembership and instantiate it on OpenLLaMA-7B with both books and academic\npapers. We show our methodology to perform very well, reaching an impressive\nAUC of 0.856 for books and 0.678 for papers. We then show our approach to\noutperform the sentence-level membership inference attacks used in the privacy\nliterature for the document-level membership task. We finally evaluate whether\nsmaller models might be less sensitive to document-level inference and show\nOpenLLaMA-3B to be approximately as sensitive as OpenLLaMA-7B to our approach.\nTaken together, our results show that accurate document-level membership can be\ninferred for LLMs, increasing the transparency of technology poised to change\nour lives.",
+ "authors": "Matthieu Meeus, Shubham Jain, Marek Rei, Yves-Alexandre de Montjoye",
+ "published": "2023-10-23",
+ "updated": "2023-10-23",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.CR",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.10199v3",
+ "title": "CULTURE-GEN: Revealing Global Cultural Perception in Language Models through Natural Language Prompting",
+ "abstract": "As the utilization of large language models (LLMs) has proliferated\nworldwide, it is crucial for them to have adequate knowledge and fair\nrepresentation for diverse global cultures. In this work, we uncover culture\nperceptions of three SOTA models on 110 countries and regions on 8\nculture-related topics through culture-conditioned generations, and extract\nsymbols from these generations that are associated to each culture by the LLM.\nWe discover that culture-conditioned generation consist of linguistic \"markers\"\nthat distinguish marginalized cultures apart from default cultures. We also\ndiscover that LLMs have an uneven degree of diversity in the culture symbols,\nand that cultures from different geographic regions have different presence in\nLLMs' culture-agnostic generation. Our findings promote further research in\nstudying the knowledge and fairness of global culture perception in LLMs. Code\nand Data can be found in: https://github.com/huihanlhh/Culture-Gen/",
+ "authors": "Huihan Li, Liwei Jiang, Jena D. Huang, Hyunwoo Kim, Sebastin Santy, Taylor Sorensen, Bill Yuchen Lin, Nouha Dziri, Xiang Ren, Yejin Choi",
+ "published": "2024-04-16",
+ "updated": "2024-04-26",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2307.11761v1",
+ "title": "Fairness of ChatGPT and the Role Of Explainable-Guided Prompts",
+ "abstract": "Our research investigates the potential of Large-scale Language Models\n(LLMs), specifically OpenAI's GPT, in credit risk assessment-a binary\nclassification task. Our findings suggest that LLMs, when directed by\njudiciously designed prompts and supplemented with domain-specific knowledge,\ncan parallel the performance of traditional Machine Learning (ML) models.\nIntriguingly, they achieve this with significantly less data-40 times less,\nutilizing merely 20 data points compared to the ML's 800. LLMs particularly\nexcel in minimizing false positives and enhancing fairness, both being vital\naspects of risk analysis. While our results did not surpass those of classical\nML models, they underscore the potential of LLMs in analogous tasks, laying a\ngroundwork for future explorations into harnessing the capabilities of LLMs in\ndiverse ML tasks.",
+ "authors": "Yashar Deldjoo",
+ "published": "2023-07-14",
+ "updated": "2023-07-14",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2309.09397v1",
+ "title": "Do Large GPT Models Discover Moral Dimensions in Language Representations? A Topological Study Of Sentence Embeddings",
+ "abstract": "As Large Language Models are deployed within Artificial Intelligence systems,\nthat are increasingly integrated with human society, it becomes more important\nthan ever to study their internal structures. Higher level abilities of LLMs\nsuch as GPT-3.5 emerge in large part due to informative language\nrepresentations they induce from raw text data during pre-training on trillions\nof words. These embeddings exist in vector spaces of several thousand\ndimensions, and their processing involves mapping between multiple vector\nspaces, with total number of parameters on the order of trillions. Furthermore,\nthese language representations are induced by gradient optimization, resulting\nin a black box system that is hard to interpret. In this paper, we take a look\nat the topological structure of neuronal activity in the \"brain\" of Chat-GPT's\nfoundation language model, and analyze it with respect to a metric representing\nthe notion of fairness. We develop a novel approach to visualize GPT's moral\ndimensions. We first compute a fairness metric, inspired by social psychology\nliterature, to identify factors that typically influence fairness assessments\nin humans, such as legitimacy, need, and responsibility. Subsequently, we\nsummarize the manifold's shape using a lower-dimensional simplicial complex,\nwhose topology is derived from this metric. We color it with a heat map\nassociated with this fairness metric, producing human-readable visualizations\nof the high-dimensional sentence manifold. Our results show that sentence\nembeddings based on GPT-3.5 can be decomposed into two submanifolds\ncorresponding to fair and unfair moral judgments. This indicates that GPT-based\nlanguage models develop a moral dimension within their representation spaces\nand induce an understanding of fairness during their training process.",
+ "authors": "Stephen Fitz",
+ "published": "2023-09-17",
+ "updated": "2023-09-17",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.CY",
+ "cs.LG",
+ "cs.NE"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2303.01248v3",
+ "title": "Can ChatGPT Assess Human Personalities? A General Evaluation Framework",
+ "abstract": "Large Language Models (LLMs) especially ChatGPT have produced impressive\nresults in various areas, but their potential human-like psychology is still\nlargely unexplored. Existing works study the virtual personalities of LLMs but\nrarely explore the possibility of analyzing human personalities via LLMs. This\npaper presents a generic evaluation framework for LLMs to assess human\npersonalities based on Myers Briggs Type Indicator (MBTI) tests. Specifically,\nwe first devise unbiased prompts by randomly permuting options in MBTI\nquestions and adopt the average testing result to encourage more impartial\nanswer generation. Then, we propose to replace the subject in question\nstatements to enable flexible queries and assessments on different subjects\nfrom LLMs. Finally, we re-formulate the question instructions in a manner of\ncorrectness evaluation to facilitate LLMs to generate clearer responses. The\nproposed framework enables LLMs to flexibly assess personalities of different\ngroups of people. We further propose three evaluation metrics to measure the\nconsistency, robustness, and fairness of assessment results from\nstate-of-the-art LLMs including ChatGPT and GPT-4. Our experiments reveal\nChatGPT's ability to assess human personalities, and the average results\ndemonstrate that it can achieve more consistent and fairer assessments in spite\nof lower robustness against prompt biases compared with InstructGPT.",
+ "authors": "Haocong Rao, Cyril Leung, Chunyan Miao",
+ "published": "2023-03-01",
+ "updated": "2023-10-13",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.10567v3",
+ "title": "InSaAF: Incorporating Safety through Accuracy and Fairness | Are LLMs ready for the Indian Legal Domain?",
+ "abstract": "Recent advancements in language technology and Artificial Intelligence have\nresulted in numerous Language Models being proposed to perform various tasks in\nthe legal domain ranging from predicting judgments to generating summaries.\nDespite their immense potential, these models have been proven to learn and\nexhibit societal biases and make unfair predictions. In this study, we explore\nthe ability of Large Language Models (LLMs) to perform legal tasks in the\nIndian landscape when social factors are involved. We present a novel metric,\n$\\beta$-weighted $\\textit{Legal Safety Score ($LSS_{\\beta}$)}$, which\nencapsulates both the fairness and accuracy aspects of the LLM. We assess LLMs'\nsafety by considering its performance in the $\\textit{Binary Statutory\nReasoning}$ task and its fairness exhibition with respect to various axes of\ndisparities in the Indian society. Task performance and fairness scores of\nLLaMA and LLaMA--2 models indicate that the proposed $LSS_{\\beta}$ metric can\neffectively determine the readiness of a model for safe usage in the legal\nsector. We also propose finetuning pipelines, utilising specialised legal\ndatasets, as a potential method to mitigate bias and improve model safety. The\nfinetuning procedures on LLaMA and LLaMA--2 models increase the $LSS_{\\beta}$,\nimproving their usability in the Indian legal domain. Our code is publicly\nreleased.",
+ "authors": "Yogesh Tripathi, Raghav Donakanti, Sahil Girhepuje, Ishan Kavathekar, Bhaskara Hanuma Vedula, Gokul S Krishnan, Shreya Goyal, Anmol Goel, Balaraman Ravindran, Ponnurangam Kumaraguru",
+ "published": "2024-02-16",
+ "updated": "2024-02-21",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.04205v2",
+ "title": "Rephrase and Respond: Let Large Language Models Ask Better Questions for Themselves",
+ "abstract": "Misunderstandings arise not only in interpersonal communication but also\nbetween humans and Large Language Models (LLMs). Such discrepancies can make\nLLMs interpret seemingly unambiguous questions in unexpected ways, yielding\nincorrect responses. While it is widely acknowledged that the quality of a\nprompt, such as a question, significantly impacts the quality of the response\nprovided by LLMs, a systematic method for crafting questions that LLMs can\nbetter comprehend is still underdeveloped. In this paper, we present a method\nnamed `Rephrase and Respond' (RaR), which allows LLMs to rephrase and expand\nquestions posed by humans and provide responses in a single prompt. This\napproach serves as a simple yet effective prompting method for improving\nperformance. We also introduce a two-step variant of RaR, where a rephrasing\nLLM first rephrases the question and then passes the original and rephrased\nquestions together to a different responding LLM. This facilitates the\neffective utilization of rephrased questions generated by one LLM with another.\nOur experiments demonstrate that our methods significantly improve the\nperformance of different models across a wide range to tasks. We further\nprovide a comprehensive comparison between RaR and the popular Chain-of-Thought\n(CoT) methods, both theoretically and empirically. We show that RaR is\ncomplementary to CoT and can be combined with CoT to achieve even better\nperformance. Our work not only contributes to enhancing LLM performance\nefficiently and effectively but also sheds light on a fair evaluation of LLM\ncapabilities. Data and codes are available at\nhttps://github.com/uclaml/Rephrase-and-Respond.",
+ "authors": "Yihe Deng, Weitong Zhang, Zixiang Chen, Quanquan Gu",
+ "published": "2023-11-07",
+ "updated": "2024-04-18",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.19118v1",
+ "title": "Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate",
+ "abstract": "Modern large language models (LLMs) like ChatGPT have shown remarkable\nperformance on general language tasks but still struggle on complex reasoning\ntasks, which drives the research on cognitive behaviors of LLMs to explore\nhuman-like problem-solving strategies. Along this direction, one representative\nstrategy is self-reflection, which asks an LLM to refine the solution with the\nfeedback generated by itself iteratively. However, our study shows that such\nreflection-style methods suffer from the Degeneration-of-Thought (DoT) problem:\nonce the LLM has established confidence in its solutions, it is unable to\ngenerate novel thoughts later through reflection even if its initial stance is\nincorrect. To address the DoT problem, we propose a Multi-Agent Debate (MAD)\nframework, in which multiple agents express their arguments in the state of\n\"tit for tat\" and a judge manages the debate process to obtain a final\nsolution. Clearly, our MAD framework encourages divergent thinking in LLMs\nwhich would be helpful for tasks that require deep levels of contemplation.\nExperiment results on two challenging datasets, commonsense machine translation\nand counter-intuitive arithmetic reasoning, demonstrate the effectiveness of\nour MAD framework. Extensive analyses suggest that the adaptive break of debate\nand the modest level of \"tit for tat\" state are required for MAD to obtain good\nperformance. Moreover, we find that LLMs might not be a fair judge if different\nLLMs are used for agents. Codes:\nhttps://github.com/Skytliang/Multi-Agents-Debate",
+ "authors": "Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, Shuming Shi",
+ "published": "2023-05-30",
+ "updated": "2023-05-30",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2309.03852v2",
+ "title": "FLM-101B: An Open LLM and How to Train It with $100K Budget",
+ "abstract": "Large language models (LLMs) have achieved remarkable success in NLP and\nmultimodal tasks, among others. Despite these successes, two main challenges\nremain in developing LLMs: (i) high computational cost, and (ii) fair and\nobjective evaluations. In this paper, we report a solution to significantly\nreduce LLM training cost through a growth strategy. We demonstrate that a\n101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US\ndollars. Inspired by IQ tests, we also consolidate an additional range of\nevaluations on top of existing evaluations that focus on knowledge-oriented\nabilities. These IQ evaluations include symbolic mapping, rule understanding,\npattern mining, and anti-interference. Such evaluations minimize the potential\nimpact of memorization. Experimental results show that our model, named\nFLM-101B, trained with a budget of 100K US dollars, achieves performance\ncomparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,\nespecially on the additional range of IQ evaluations. The checkpoint of\nFLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.",
+ "authors": "Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang",
+ "published": "2023-09-07",
+ "updated": "2023-09-17",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.12090v1",
+ "title": "UP5: Unbiased Foundation Model for Fairness-aware Recommendation",
+ "abstract": "Recent advancements in foundation models such as large language models (LLM)\nhave propelled them to the forefront of recommender systems (RS). Moreover,\nfairness in RS is critical since many users apply it for decision-making and\ndemand fulfillment. However, at present, there is a lack of understanding\nregarding the level of fairness exhibited by recommendation foundation models\nand the appropriate methods for equitably treating different groups of users in\nfoundation models. In this paper, we focus on user-side unfairness problem and\nshow through a thorough examination that there is unfairness involved in LLMs\nthat lead to unfair recommendation results. To eliminate bias from LLM for\nfairness-aware recommendation, we introduce a novel Unbiased P5 (UP5)\nfoundation model based on Counterfactually-Fair-Prompting (CFP) techniques. CFP\nincludes two sub-modules: a personalized prefix prompt that enhances fairness\nwith respect to individual sensitive attributes, and a Prompt Mixture that\nintegrates multiple counterfactually-fair prompts for a set of sensitive\nattributes. Experiments are conducted on two real-world datasets, MovieLens-1M\nand Insurance, and results are compared with both matching-based and\nsequential-based fairness-aware recommendation models. The results show that\nUP5 achieves better recommendation performance and meanwhile exhibits a high\nlevel of fairness.",
+ "authors": "Wenyue Hua, Yingqiang Ge, Shuyuan Xu, Jianchao Ji, Yongfeng Zhang",
+ "published": "2023-05-20",
+ "updated": "2023-05-20",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR",
+ "cs.AI",
+ "cs.CL",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.00884v2",
+ "title": "Text classification of column headers with a controlled vocabulary: leveraging LLMs for metadata enrichment",
+ "abstract": "Traditional dataset retrieval systems index on metadata information rather\nthan on the data values. Thus relying primarily on manual annotations and\nhigh-quality metadata, processes known to be labour-intensive and challenging\nto automate. We propose a method to support metadata enrichment with topic\nannotations of column headers using three Large Language Models (LLMs):\nChatGPT-3.5, GoogleBard and GoogleGemini. We investigate the LLMs ability to\nclassify column headers based on domain-specific topics from a controlled\nvocabulary. We evaluate our approach by assessing the internal consistency of\nthe LLMs, the inter-machine alignment, and the human-machine agreement for the\ntopic classification task. Additionally, we investigate the impact of\ncontextual information (i.e. dataset description) on the classification\noutcomes. Our results suggest that ChatGPT and GoogleGemini outperform\nGoogleBard for internal consistency as well as LLM-human-alignment.\nInterestingly, we found that context had no impact on the LLMs performances.\nThis work proposes a novel approach that leverages LLMs for text classification\nusing a controlled topic vocabulary, which has the potential to facilitate\nautomated metadata enrichment, thereby enhancing dataset retrieval and the\nFindability, Accessibility, Interoperability and Reusability (FAIR) of research\ndata on the Web.",
+ "authors": "Margherita Martorana, Tobias Kuhn, Lise Stork, Jacco van Ossenbruggen",
+ "published": "2024-03-01",
+ "updated": "2024-03-05",
+ "primary_cat": "cs.DB",
+ "cats": [
+ "cs.DB",
+ "cs.AI",
+ "cs.IR"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2401.01262v2",
+ "title": "Fairness Certification for Natural Language Processing and Large Language Models",
+ "abstract": "Natural Language Processing (NLP) plays an important role in our daily lives,\nparticularly due to the enormous progress of Large Language Models (LLM).\nHowever, NLP has many fairness-critical use cases, e.g., as an expert system in\nrecruitment or as an LLM-based tutor in education. Since NLP is based on human\nlanguage, potentially harmful biases can diffuse into NLP systems and produce\nunfair results, discriminate against minorities or generate legal issues.\nHence, it is important to develop a fairness certification for NLP approaches.\nWe follow a qualitative research approach towards a fairness certification for\nNLP. In particular, we have reviewed a large body of literature on algorithmic\nfairness, and we have conducted semi-structured expert interviews with a wide\nrange of experts from that area. We have systematically devised six fairness\ncriteria for NLP, which can be further refined into 18 sub-categories. Our\ncriteria offer a foundation for operationalizing and testing processes to\ncertify fairness, both from the perspective of the auditor and the audited\norganization.",
+ "authors": "Vincent Freiberger, Erik Buchmann",
+ "published": "2024-01-02",
+ "updated": "2024-01-03",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.CY",
+ "cs.LG",
+ "68T50",
+ "I.2.7"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.11764v1",
+ "title": "ChatGPT Based Data Augmentation for Improved Parameter-Efficient Debiasing of LLMs",
+ "abstract": "Large Language models (LLMs), while powerful, exhibit harmful social biases.\nDebiasing is often challenging due to computational costs, data constraints,\nand potential degradation of multi-task language capabilities. This work\nintroduces a novel approach utilizing ChatGPT to generate synthetic training\ndata, aiming to enhance the debiasing of LLMs. We propose two strategies:\nTargeted Prompting, which provides effective debiasing for known biases but\nnecessitates prior specification of bias in question; and General Prompting,\nwhich, while slightly less effective, offers debiasing across various\ncategories. We leverage resource-efficient LLM debiasing using adapter tuning\nand compare the effectiveness of our synthetic data to existing debiasing\ndatasets. Our results reveal that: (1) ChatGPT can efficiently produce\nhigh-quality training data for debiasing other LLMs; (2) data produced via our\napproach surpasses existing datasets in debiasing performance while also\npreserving internal knowledge of a pre-trained LLM; and (3) synthetic data\nexhibits generalizability across categories, effectively mitigating various\nbiases, including intersectional ones. These findings underscore the potential\nof synthetic data in advancing the fairness of LLMs with minimal retraining\ncost.",
+ "authors": "Pengrui Han, Rafal Kocielnik, Adhithya Saravanan, Roy Jiang, Or Sharir, Anima Anandkumar",
+ "published": "2024-02-19",
+ "updated": "2024-02-19",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.CY",
+ "68T50",
+ "I.2.7; K.4.1"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2312.15198v2",
+ "title": "Do LLM Agents Exhibit Social Behavior?",
+ "abstract": "The advances of Large Language Models (LLMs) are expanding their utility in\nboth academic research and practical applications. Recent social science\nresearch has explored the use of these ``black-box'' LLM agents for simulating\ncomplex social systems and potentially substituting human subjects in\nexperiments. Our study delves into this emerging domain, investigating the\nextent to which LLMs exhibit key social interaction principles, such as social\nlearning, social preference, and cooperative behavior (indirect reciprocity),\nin their interactions with humans and other agents. We develop a framework for\nour study, wherein classical laboratory experiments involving human subjects\nare adapted to use LLM agents. This approach involves step-by-step reasoning\nthat mirrors human cognitive processes and zero-shot learning to assess the\ninnate preferences of LLMs. Our analysis of LLM agents' behavior includes both\nthe primary effects and an in-depth examination of the underlying mechanisms.\nFocusing on GPT-4, our analyses suggest that LLM agents appear to exhibit a\nrange of human-like social behaviors such as distributional and reciprocity\npreferences, responsiveness to group identity cues, engagement in indirect\nreciprocity, and social learning capabilities. However, our analysis also\nreveals notable differences: LLMs demonstrate a pronounced fairness preference,\nweaker positive reciprocity, and a more calculating approach in social learning\ncompared to humans. These insights indicate that while LLMs hold great promise\nfor applications in social science research, such as in laboratory experiments\nand agent-based modeling, the subtle behavioral differences between LLM agents\nand humans warrant further investigation. Careful examination and development\nof protocols in evaluating the social behaviors of LLMs are necessary before\ndirectly applying these models to emulate human behavior.",
+ "authors": "Yan Leng, Yuan Yuan",
+ "published": "2023-12-23",
+ "updated": "2024-02-22",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI",
+ "cs.SI",
+ "econ.GN",
+ "q-fin.EC"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.13925v1",
+ "title": "MARIO Eval: Evaluate Your Math LLM with your Math LLM--A mathematical dataset evaluation toolkit",
+ "abstract": "Large language models (LLMs) have been explored in a variety of reasoning\ntasks including solving of mathematical problems. Each math dataset typically\nincludes its own specially designed evaluation script, which, while suitable\nfor its intended use, lacks generalizability across different datasets.\nConsequently, updates and adaptations to these evaluation tools tend to occur\nwithout being systematically reported, leading to inconsistencies and obstacles\nto fair comparison across studies. To bridge this gap, we introduce a\ncomprehensive mathematical evaluation toolkit that not only utilizes a python\ncomputer algebra system (CAS) for its numerical accuracy, but also integrates\nan optional LLM, known for its considerable natural language processing\ncapabilities. To validate the effectiveness of our toolkit, we manually\nannotated two distinct datasets. Our experiments demonstrate that the toolkit\nyields more robust evaluation results compared to prior works, even without an\nLLM. Furthermore, when an LLM is incorporated, there is a notable enhancement.\nThe code for our method will be made available at\n\\url{https://github.com/MARIO-Math-Reasoning/math_evaluation}.",
+ "authors": "Boning Zhang, Chengxi Li, Kai Fan",
+ "published": "2024-04-22",
+ "updated": "2024-04-22",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2308.10149v2",
+ "title": "A Survey on Fairness in Large Language Models",
+ "abstract": "Large Language Models (LLMs) have shown powerful performance and development\nprospects and are widely deployed in the real world. However, LLMs can capture\nsocial biases from unprocessed training data and propagate the biases to\ndownstream tasks. Unfair LLM systems have undesirable social impacts and\npotential harms. In this paper, we provide a comprehensive review of related\nresearch on fairness in LLMs. Considering the influence of parameter magnitude\nand training paradigm on research strategy, we divide existing fairness\nresearch into oriented to medium-sized LLMs under pre-training and fine-tuning\nparadigms and oriented to large-sized LLMs under prompting paradigms. First,\nfor medium-sized LLMs, we introduce evaluation metrics and debiasing methods\nfrom the perspectives of intrinsic bias and extrinsic bias, respectively. Then,\nfor large-sized LLMs, we introduce recent fairness research, including fairness\nevaluation, reasons for bias, and debiasing methods. Finally, we discuss and\nprovide insight on the challenges and future directions for the development of\nfairness in LLMs.",
+ "authors": "Yingji Li, Mengnan Du, Rui Song, Xin Wang, Ying Wang",
+ "published": "2023-08-20",
+ "updated": "2024-02-21",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.01937v1",
+ "title": "Can Large Language Models Be an Alternative to Human Evaluations?",
+ "abstract": "Human evaluation is indispensable and inevitable for assessing the quality of\ntexts generated by machine learning models or written by humans. However, human\nevaluation is very difficult to reproduce and its quality is notoriously\nunstable, hindering fair comparisons among different natural language\nprocessing (NLP) models and algorithms. Recently, large language models (LLMs)\nhave demonstrated exceptional performance on unseen tasks when only the task\ninstructions are provided. In this paper, we explore if such an ability of the\nLLMs can be used as an alternative to human evaluation. We present the LLMs\nwith the exact same instructions, samples to be evaluated, and questions used\nto conduct human evaluation, and then ask the LLMs to generate responses to\nthose questions; we dub this LLM evaluation. We use human evaluation and LLM\nevaluation to evaluate the texts in two NLP tasks: open-ended story generation\nand adversarial attacks. We show that the result of LLM evaluation is\nconsistent with the results obtained by expert human evaluation: the texts\nrated higher by human experts are also rated higher by the LLMs. We also find\nthat the results of LLM evaluation are stable over different formatting of the\ntask instructions and the sampling algorithm used to generate the answer. We\nare the first to show the potential of using LLMs to assess the quality of\ntexts and discuss the limitations and ethical considerations of LLM evaluation.",
+ "authors": "Cheng-Han Chiang, Hung-yi Lee",
+ "published": "2023-05-03",
+ "updated": "2023-05-03",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.HC"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.19465v1",
+ "title": "Towards Tracing Trustworthiness Dynamics: Revisiting Pre-training Period of Large Language Models",
+ "abstract": "Ensuring the trustworthiness of large language models (LLMs) is crucial. Most\nstudies concentrate on fully pre-trained LLMs to better understand and improve\nLLMs' trustworthiness. In this paper, to reveal the untapped potential of\npre-training, we pioneer the exploration of LLMs' trustworthiness during this\nperiod, focusing on five key dimensions: reliability, privacy, toxicity,\nfairness, and robustness. To begin with, we apply linear probing to LLMs. The\nhigh probing accuracy suggests that \\textit{LLMs in early pre-training can\nalready distinguish concepts in each trustworthiness dimension}. Therefore, to\nfurther uncover the hidden possibilities of pre-training, we extract steering\nvectors from a LLM's pre-training checkpoints to enhance the LLM's\ntrustworthiness. Finally, inspired by~\\citet{choi2023understanding} that mutual\ninformation estimation is bounded by linear probing accuracy, we also probe\nLLMs with mutual information to investigate the dynamics of trustworthiness\nduring pre-training. We are the first to observe a similar two-phase\nphenomenon: fitting and compression~\\citep{shwartz2017opening}. This research\nprovides an initial exploration of trustworthiness modeling during LLM\npre-training, seeking to unveil new insights and spur further developments in\nthe field. We will make our code publicly accessible at\n\\url{https://github.com/ChnQ/TracingLLM}.",
+ "authors": "Chen Qian, Jie Zhang, Wei Yao, Dongrui Liu, Zhenfei Yin, Yu Qiao, Yong Liu, Jing Shao",
+ "published": "2024-02-29",
+ "updated": "2024-02-29",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.15491v1",
+ "title": "Open Source Conversational LLMs do not know most Spanish words",
+ "abstract": "The growing interest in Large Language Models (LLMs) and in particular in\nconversational models with which users can interact has led to the development\nof a large number of open-source chat LLMs. These models are evaluated on a\nwide range of benchmarks to assess their capabilities in answering questions or\nsolving problems on almost any possible topic or to test their ability to\nreason or interpret texts. Instead, the evaluation of the knowledge that these\nmodels have of the languages has received much less attention. For example, the\nwords that they can recognize and use in different languages. In this paper, we\nevaluate the knowledge that open-source chat LLMs have of Spanish words by\ntesting a sample of words in a reference dictionary. The results show that\nopen-source chat LLMs produce incorrect meanings for an important fraction of\nthe words and are not able to use most of the words correctly to write\nsentences with context. These results show how Spanish is left behind in the\nopen-source LLM race and highlight the need to push for linguistic fairness in\nconversational LLMs ensuring that they provide similar performance across\nlanguages.",
+ "authors": "Javier Conde, Miguel Gonz\u00e1lez, Nina Melero, Raquel Ferrando, Gonzalo Mart\u00ednez, Elena Merino-G\u00f3mez, Jos\u00e9 Alberto Hern\u00e1ndez, Pedro Reviriego",
+ "published": "2024-03-21",
+ "updated": "2024-03-21",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.15451v1",
+ "title": "Towards Enabling FAIR Dataspaces Using Large Language Models",
+ "abstract": "Dataspaces have recently gained adoption across various sectors, including\ntraditionally less digitized domains such as culture. Leveraging Semantic Web\ntechnologies helps to make dataspaces FAIR, but their complexity poses a\nsignificant challenge to the adoption of dataspaces and increases their cost.\nThe advent of Large Language Models (LLMs) raises the question of how these\nmodels can support the adoption of FAIR dataspaces. In this work, we\ndemonstrate the potential of LLMs in dataspaces with a concrete example. We\nalso derive a research agenda for exploring this emerging field.",
+ "authors": "Benedikt T. Arnold, Johannes Theissen-Lipp, Diego Collarana, Christoph Lange, Sandra Geisler, Edward Curry, Stefan Decker",
+ "published": "2024-03-18",
+ "updated": "2024-03-18",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.03514v3",
+ "title": "Can Large Language Models Transform Computational Social Science?",
+ "abstract": "Large Language Models (LLMs) are capable of successfully performing many\nlanguage processing tasks zero-shot (without training data). If zero-shot LLMs\ncan also reliably classify and explain social phenomena like persuasiveness and\npolitical ideology, then LLMs could augment the Computational Social Science\n(CSS) pipeline in important ways. This work provides a road map for using LLMs\nas CSS tools. Towards this end, we contribute a set of prompting best practices\nand an extensive evaluation pipeline to measure the zero-shot performance of 13\nlanguage models on 25 representative English CSS benchmarks. On taxonomic\nlabeling tasks (classification), LLMs fail to outperform the best fine-tuned\nmodels but still achieve fair levels of agreement with humans. On free-form\ncoding tasks (generation), LLMs produce explanations that often exceed the\nquality of crowdworkers' gold references. We conclude that the performance of\ntoday's LLMs can augment the CSS research pipeline in two ways: (1) serving as\nzero-shot data annotators on human annotation teams, and (2) bootstrapping\nchallenging creative generation tasks (e.g., explaining the underlying\nattributes of a text). In summary, LLMs are posed to meaningfully participate\nin social science analysis in partnership with humans.",
+ "authors": "Caleb Ziems, William Held, Omar Shaikh, Jiaao Chen, Zhehao Zhang, Diyi Yang",
+ "published": "2023-04-12",
+ "updated": "2024-02-26",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.08656v1",
+ "title": "Linear Cross-document Event Coreference Resolution with X-AMR",
+ "abstract": "Event Coreference Resolution (ECR) as a pairwise mention classification task\nis expensive both for automated systems and manual annotations. The task's\nquadratic difficulty is exacerbated when using Large Language Models (LLMs),\nmaking prompt engineering for ECR prohibitively costly. In this work, we\npropose a graphical representation of events, X-AMR, anchored around individual\nmentions using a \\textbf{cross}-document version of \\textbf{A}bstract\n\\textbf{M}eaning \\textbf{R}epresentation. We then linearize the ECR with a\nnovel multi-hop coreference algorithm over the event graphs. The event graphs\nsimplify ECR, making it a) LLM cost-effective, b) compositional and\ninterpretable, and c) easily annotated. For a fair assessment, we first enrich\nan existing ECR benchmark dataset with these event graphs using an\nannotator-friendly tool we introduce. Then, we employ GPT-4, the newest LLM by\nOpenAI, for these annotations. Finally, using the ECR algorithm, we assess\nGPT-4 against humans and analyze its limitations. Through this research, we aim\nto advance the state-of-the-art for efficient ECR and shed light on the\npotential shortcomings of current LLMs at this task. Code and annotations:\n\\url{https://github.com/ahmeshaf/gpt_coref}",
+ "authors": "Shafiuddin Rehan Ahmed, George Arthur Baker, Evi Judge, Michael Regan, Kristin Wright-Bettner, Martha Palmer, James H. Martin",
+ "published": "2024-03-25",
+ "updated": "2024-03-25",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.06852v2",
+ "title": "ChemLLM: A Chemical Large Language Model",
+ "abstract": "Large language models (LLMs) have made impressive progress in chemistry\napplications. However, the community lacks an LLM specifically designed for\nchemistry. The main challenges are two-fold: firstly, most chemical data and\nscientific knowledge are stored in structured databases, which limits the\nmodel's ability to sustain coherent dialogue when used directly. Secondly,\nthere is an absence of objective and fair benchmark that encompass most\nchemistry tasks. Here, we introduce ChemLLM, a comprehensive framework that\nfeatures the first LLM dedicated to chemistry. It also includes ChemData, a\ndataset specifically designed for instruction tuning, and ChemBench, a robust\nbenchmark covering nine essential chemistry tasks. ChemLLM is adept at\nperforming various tasks across chemical disciplines with fluid dialogue\ninteraction. Notably, ChemLLM achieves results comparable to GPT-4 on the core\nchemical tasks and demonstrates competitive performance with LLMs of similar\nsize in general scenarios. ChemLLM paves a new path for exploration in chemical\nstudies, and our method of incorporating structured chemical knowledge into\ndialogue systems sets a new standard for developing LLMs in various scientific\nfields. Codes, Datasets, and Model weights are publicly accessible at\nhttps://hf.co/AI4Chem",
+ "authors": "Di Zhang, Wei Liu, Qian Tan, Jingdan Chen, Hang Yan, Yuliang Yan, Jiatong Li, Weiran Huang, Xiangyu Yue, Wanli Ouyang, Dongzhan Zhou, Shufei Zhang, Mao Su, Han-Sen Zhong, Yuqiang Li",
+ "published": "2024-02-10",
+ "updated": "2024-04-25",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI",
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2312.07420v1",
+ "title": "FairSISA: Ensemble Post-Processing to Improve Fairness of Unlearning in LLMs",
+ "abstract": "Training large language models (LLMs) is a costly endeavour in terms of time\nand computational resources. The large amount of training data used during the\nunsupervised pre-training phase makes it difficult to verify all data and,\nunfortunately, undesirable data may be ingested during training. Re-training\nfrom scratch is impractical and has led to the creation of the 'unlearning'\ndiscipline where models are modified to \"unlearn\" undesirable information\nwithout retraining. However, any modification can alter the behaviour of LLMs,\nespecially on key dimensions such as fairness. This is the first work that\nexamines this interplay between unlearning and fairness for LLMs. In\nparticular, we focus on a popular unlearning framework known as SISA [Bourtoule\net al., 2021], which creates an ensemble of models trained on disjoint shards.\nWe evaluate the performance-fairness trade-off for SISA, and empirically\ndemsontrate that SISA can indeed reduce fairness in LLMs. To remedy this, we\npropose post-processing bias mitigation techniques for ensemble models produced\nby SISA. We adapt the post-processing fairness improvement technique from\n[Hardt et al., 2016] to design three methods that can handle model ensembles,\nand prove that one of the methods is an optimal fair predictor for ensemble of\nmodels. Through experimental results, we demonstrate the efficacy of our\npost-processing framework called 'FairSISA'.",
+ "authors": "Swanand Ravindra Kadhe, Anisa Halimi, Ambrish Rawat, Nathalie Baracaldo",
+ "published": "2023-12-12",
+ "updated": "2023-12-12",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2312.14804v1",
+ "title": "Use large language models to promote equity",
+ "abstract": "Advances in large language models (LLMs) have driven an explosion of interest\nabout their societal impacts. Much of the discourse around how they will impact\nsocial equity has been cautionary or negative, focusing on questions like \"how\nmight LLMs be biased and how would we mitigate those biases?\" This is a vital\ndiscussion: the ways in which AI generally, and LLMs specifically, can entrench\nbiases have been well-documented. But equally vital, and much less discussed,\nis the more opportunity-focused counterpoint: \"what promising applications do\nLLMs enable that could promote equity?\" If LLMs are to enable a more equitable\nworld, it is not enough just to play defense against their biases and failure\nmodes. We must also go on offense, applying them positively to equity-enhancing\nuse cases to increase opportunities for underserved groups and reduce societal\ndiscrimination. There are many choices which determine the impact of AI, and a\nfundamental choice very early in the pipeline is the problems we choose to\napply it to. If we focus only later in the pipeline -- making LLMs marginally\nmore fair as they facilitate use cases which intrinsically entrench power -- we\nwill miss an important opportunity to guide them to equitable impacts. Here, we\nhighlight the emerging potential of LLMs to promote equity by presenting four\nnewly possible, promising research directions, while keeping risks and\ncautionary points in clear view.",
+ "authors": "Emma Pierson, Divya Shanmugam, Rajiv Movva, Jon Kleinberg, Monica Agrawal, Mark Dredze, Kadija Ferryman, Judy Wawira Gichoya, Dan Jurafsky, Pang Wei Koh, Karen Levy, Sendhil Mullainathan, Ziad Obermeyer, Harini Suresh, Keyon Vafa",
+ "published": "2023-12-22",
+ "updated": "2023-12-22",
+ "primary_cat": "cs.CY",
+ "cats": [
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.12150v1",
+ "title": "Your Large Language Model is Secretly a Fairness Proponent and You Should Prompt it Like One",
+ "abstract": "The widespread adoption of large language models (LLMs) underscores the\nurgent need to ensure their fairness. However, LLMs frequently present dominant\nviewpoints while ignoring alternative perspectives from minority parties,\nresulting in potential biases. We hypothesize that these fairness-violating\nbehaviors occur because LLMs express their viewpoints using a human personality\nthat represents the majority of training data. In response to this, we validate\nthat prompting LLMs with specific roles can allow LLMs to express diverse\nviewpoints. Building on this insight and observation, we develop FairThinking,\na pipeline designed to automatically generate roles that enable LLMs to\narticulate diverse perspectives for fair expressions. To evaluate FairThinking,\nwe create a dataset with a thousand items covering three fairness-related\ntopics and conduct experiments on GPT-3.5, GPT-4, Llama2, and Mistral to\ndemonstrate its superior performance.",
+ "authors": "Tianlin Li, Xiaoyu Zhang, Chao Du, Tianyu Pang, Qian Liu, Qing Guo, Chao Shen, Yang Liu",
+ "published": "2024-02-19",
+ "updated": "2024-02-19",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "I.2; J.4"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.04489v1",
+ "title": "De-amplifying Bias from Differential Privacy in Language Model Fine-tuning",
+ "abstract": "Fairness and privacy are two important values machine learning (ML)\npractitioners often seek to operationalize in models. Fairness aims to reduce\nmodel bias for social/demographic sub-groups. Privacy via differential privacy\n(DP) mechanisms, on the other hand, limits the impact of any individual's\ntraining data on the resulting model. The trade-offs between privacy and\nfairness goals of trustworthy ML pose a challenge to those wishing to address\nboth. We show that DP amplifies gender, racial, and religious bias when\nfine-tuning large language models (LLMs), producing models more biased than\nones fine-tuned without DP. We find the cause of the amplification to be a\ndisparity in convergence of gradients across sub-groups. Through the case of\nbinary gender bias, we demonstrate that Counterfactual Data Augmentation (CDA),\na known method for addressing bias, also mitigates bias amplification by DP. As\na consequence, DP and CDA together can be used to fine-tune models while\nmaintaining both fairness and privacy.",
+ "authors": "Sanjari Srivastava, Piotr Mardziel, Zhikhun Zhang, Archana Ahlawat, Anupam Datta, John C Mitchell",
+ "published": "2024-02-07",
+ "updated": "2024-02-07",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.CR",
+ "cs.CY",
+ "stat.ME"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.02650v1",
+ "title": "Towards detecting unanticipated bias in Large Language Models",
+ "abstract": "Over the last year, Large Language Models (LLMs) like ChatGPT have become\nwidely available and have exhibited fairness issues similar to those in\nprevious machine learning systems. Current research is primarily focused on\nanalyzing and quantifying these biases in training data and their impact on the\ndecisions of these models, alongside developing mitigation strategies. This\nresearch largely targets well-known biases related to gender, race, ethnicity,\nand language. However, it is clear that LLMs are also affected by other, less\nobvious implicit biases. The complex and often opaque nature of these models\nmakes detecting such biases challenging, yet this is crucial due to their\npotential negative impact in various applications. In this paper, we explore\nnew avenues for detecting these unanticipated biases in LLMs, focusing\nspecifically on Uncertainty Quantification and Explainable AI methods. These\napproaches aim to assess the certainty of model decisions and to make the\ninternal decision-making processes of LLMs more transparent, thereby\nidentifying and understanding biases that are not immediately apparent. Through\nthis research, we aim to contribute to the development of fairer and more\ntransparent AI systems.",
+ "authors": "Anna Kruspe",
+ "published": "2024-04-03",
+ "updated": "2024-04-03",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.AI",
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ }
+ ],
+ [
+ {
+ "url": "http://arxiv.org/abs/2404.15269v1",
+ "title": "Aligning LLM Agents by Learning Latent Preference from User Edits",
+ "abstract": "We study interactive learning of language agents based on user edits made to\nthe agent's output. In a typical setting such as writing assistants, the user\ninteracts with a language agent to generate a response given a context, and may\noptionally edit the agent response to personalize it based on their latent\npreference, in addition to improving the correctness. The edit feedback is\nnaturally generated, making it a suitable candidate for improving the agent's\nalignment with the user's preference, and for reducing the cost of user edits\nover time. We propose a learning framework, PRELUDE that infers a description\nof the user's latent preference based on historic edit data and using it to\ndefine a prompt policy that drives future response generation. This avoids\nfine-tuning the agent, which is costly, challenging to scale with the number of\nusers, and may even degrade its performance on other tasks. Furthermore,\nlearning descriptive preference improves interpretability, allowing the user to\nview and modify the learned preference. However, user preference can be complex\nand vary based on context, making it challenging to learn. To address this, we\npropose a simple yet effective algorithm named CIPHER that leverages a large\nlanguage model (LLM) to infer the user preference for a given context based on\nuser edits. In the future, CIPHER retrieves inferred preferences from the\nk-closest contexts in the history, and forms an aggregate preference for\nresponse generation. We introduce two interactive environments -- summarization\nand email writing, for evaluation using a GPT-4 simulated user. We compare with\nalgorithms that directly retrieve user edits but do not learn descriptive\npreference, and algorithms that learn context-agnostic preference. On both\ntasks, CIPHER achieves the lowest edit distance cost and learns preferences\nthat show significant similarity to the ground truth preferences",
+ "authors": "Ge Gao, Alexey Taymanov, Eduardo Salinas, Paul Mineiro, Dipendra Misra",
+ "published": "2024-04-23",
+ "updated": "2024-04-23",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.IR",
+ "cs.LG"
+ ],
+ "label": "Original Paper",
+ "paper_cat": "LLM Fairness",
+ "gt": "We describe related work in this area grouped by main themes in this work. Learning from Feedback. Besides pair-wise comparison feedback from annotators used in Reinforcement Learning from Human Feedback (RLHF) research (Ziegler et al., 2019; Stiennon et al., 2020; Nakano et al., 2021; Ouyang et al., 2022a, inter alia), prior work has also studied free-form text feedback provided by annotators (Fernandes et al., 2023), such as on the task of dialog (Weston, 2016; Li et al., 2016; Hancock et al., 2019; Xu et al., 2022; Petrak et al., 2023), question answering (Li et al., 2022; Malaviya et al., 2023), summarization (Saunders et al., 2022), and general decision making (Cheng et al., 2023). This feedback, tailored to each example, is often utilized to rank candidate outputs, thereby improving task performance. Some work studies learning from text feedback to generate outputs directly (Scheurer et al., 2023; Bai et al., 2022; Shi et al., 2022), by generating multiple refinements of the original output based on the feedback and fine-tuning the original model to maximize the likelihood of the best refinement. In grounded settings such as instruction-based navigation, one line of work has also used hindsight feedback that explicitly provides a text instruction for the generated trajectory, to train policies (Nguyen et al., 2021; Misra et al., 2024). Moving beyond the conventional focus on text feedback that explicitly articulates human intent, we investigate feedback in the form of direct edits on the original model output. Such revisions by users occur naturally during model deployment in practice. Additionally, we examine the learning of user preferences through historical interactions, aiming to surpass the constraints of example-specific feedback. 8We provide additional analysis on the accuracy of retrieval in Table 11. 11 Language Agents and Personalization. LLMs have enabled the development of language agents for a variety of tasks from writing assistants (Lee et al., 2024), coding assistants (Dohmke, 2022), and customer service assistants (Brynjolfsson et al., 2023). Since these LLM-based assistants are often used by individuals, a natural question has arisen on how to personalize these agents for each user. Straightforward approaches for fine-tuning LLMs includes supervised learning, online DPO (Guo et al., 2024), learning-to-search (Chang et al., 2023), and reinforcement learning (Ouyang et al., 2022b). These approaches can be directly applied to our setting. For example, one can use (yt, y\u2032 t) in Protocol 1 as the preference data where y\u2032 t is preferred over yt, or use y\u2032 t as the ground truth for supervised learning. However, fine-tuning is expensive and hard to scale with the number of users. Therefore, a line of work has explored improving the alignment of frozen LLMs by prompt engineering, such as learning a personalized retrieval model (Mysore et al., 2023), learning a prompt policy given a reward function (Deng et al., 2022), or more generally, learning to rewrite the entire prompt (Li et al., 2023). We focus on learning a prompt policy by learning from user edits, and specifically, using them to extract textural descriptions of user preference. Edits and Revisions. Many prior work on editing model output focuses on error correction, such as fixing source code (Yin et al., 2018; Chen et al., 2018; Reid et al., 2023) and improving the factual consistency of model summaries (Cao et al., 2020; Liu et al., 2022; Balachandran et al., 2022). 
A line of work has explored understanding human edits based on edit history of Wikipedia (Botha et al., 2018; Faltings et al., 2020; Rajagopal et al., 2022; Reid & Neubig, 2022; Laban et al., 2023), or revisions of academic writings (Mita et al., 2022; Du et al., 2022; D\u2019Arcy et al., 2023). Prior work explores predicting text revisions with edit intents (Brody et al., 2020; Kim et al., 2022; Chong et al., 2023), and modeling edits with various approaches, including latent vectors (Guu et al., 2017; Marrese-Taylor et al., 2020, 2023), structured trees (Yao et al., 2021), discrete diffusion process (Reid et al., 2023), or a series of singular edit operations (Stahlberg & Kumar, 2020; Mallinson et al., 2020; Agrawal & Carpuat, 2022; Zhang et al., 2022; Liu et al., 2023). However, these methodologies predominantly target generic improvements in model performance, overlooking the intricacies of individual user satisfaction and preference. Our research takes a distinct direction, focusing on understanding edits across a variety of examples to study user-level preferences, with a practical goal of aligning the agent to individual preferences.",
+ "pre_questions": [],
+ "main_content": "Introduction Language agents based on large language models (LLMs) have been developed for a variety of applications (Dohmke, 2022; Brynjolfsson et al., 2023), following recent breakthroughs in improving LLMs (Achiam et al., 2023; Ouyang et al., 2022b; Team et al., 2023). However, despite their impressive zero-shot performance, LLMs still need to adapt and personalize to a given user and task (Mysore et al., 2023; Li et al., 2023). In many applications, a natural feedback for LLM-based agents is user edits, where a user queries the agent and edits the agent\u2019s response before their own final use. In \u2217Equal contribution. 1Our code and data are publicly available at https://github.com/gao-g/prelude. Preprint. arXiv:2404.15269v1 [cs.CL] 23 Apr 2024 Interactive Learning from User Edits Agent incurs a cost Round t A AB6HicdVDLSgNBEJyNrxhfUY9eBoPgaZmNR pNb0IvHBMwDkiXMTmaTMbMPZnqFsOQLvHhQ xKuf5M2/cZKsoKIFDUVN91dXiyFBkI+rNz K6tr6Rn6zsLW9s7tX3D9o6yhRjLdYJCPV9a jmUoS8BQIk78aK08CTvONrud+54rLaLwF qYxdwM6CoUvGAUjNWFQLBG7QpzahYOJTRYwp FwhtSrBTqaUIbGoPjeH0YsCXgITFKtew6J wU2pAsEknxX6ieYxZRM64j1DQxpw7aLQ2f 4xChD7EfKVAh4oX6fSGmg9TwTGdAYax/e3 PxL6+XgF91UxHGCfCQLRf5icQ4fnXeCgUZy CnhlCmhLkVszFVlIHJpmBC+PoU/0/aZds5s 8vN81L9Kosj47QMTpFDrpEdXSDGqiFGOLo AT2hZ+vOerRerNdla87KZg7RD1hvnyQAjSo = Step 1: User (and the world) provides a \u2028 context to the LLM agent. xt AB6nicdVDLSgNBEOyNrxhfUY9eBoPgaZmNRpNb0IvHiOYByRJmJ7PJkNkHM7NiWPIJXjwo4 tUv8ubfOElWUNGChqKqm+4uLxZcaYw/rNzS8srqWn69sLG5tb1T3N1rqSiRlDVpJCLZ8YhigoesqbkWrBN LRgJPsLY3vpz57TsmFY/CWz2JmRuQYch9Tok20s19X/eLJWxXsFM7cxC28RyGlCu4VsXIyZQSZGj0i+9QU STgIWaCqJU18GxdlMiNaeCTQu9RLGY0DEZsq6hIQmYctP5qVN0ZJQB8iNpKtRorn6fSEmg1CTwTGdA9Ej9 mbiX1430X7VTXkYJ5qFdLHITwTSEZr9jQZcMqrFxBCJTe3IjoiklBt0imYEL4+Rf+TVtl2Tuzy9WmpfpHF kYcDOIRjcOAc6nAFDWgChSE8wBM8W8J6tF6s10Vrzspm9uEHrLdPtQSOFQ= xt AB6nicdVDLSgNBEOyNrxhfUY9eBoP gaZmNRpNb0IvHiOYByRJmJ7PJkNkHM7NiWPIJXjwo4tUv8ubfOElWUNGChqKqm+4uLxZcaYw/rNzS8srqWn69sLG5tb1T3N1rqSiRlDVpJCLZ8YhigoesqbkWrBNLRgJPsLY3vpz57TsmFY/CWz2JmRuQYch9Tok20s19X /eLJWxXsFM7cxC28RyGlCu4VsXIyZQSZGj0i+9QUSTgIWaCqJU18GxdlMiNaeCTQu9RLGY0DEZsq6hIQmYctP5qVN0ZJQB8iNpKtRorn6fSEmg1CTwTGdA9Ej9mbiX1430X7VTXkYJ5qFdLHITwTSEZr9jQZcMqrFxB BCJTe3IjoiklBt0imYEL4+Rf+TVtl2Tuzy9WmpfpHFkYcDOIRjcOAc6nAFDWgChSE8wBM8W8J6tF6s10Vrzspm9uEHrLdPtQSOFQ= Step 2: LLM Agent generates a response \u2028 given the context . 
yt AB6nicdVDLSgNBEOz1GeMr6tHLYBA8LbPRaHILevEY0TwgWcLsZDYZMvtgZlZYlnyCFw+KePWLvPk3TpIV VLSgoajqprvLiwVXGuMPa2l5ZXVtvbBR3Nza3tkt7e23VZRIylo0EpHsekQxwUPW0lwL1o0lI4EnWMebXM38zj2Tik fhnU5j5gZkFHKfU6KNdJsO9KBUxnYVO/VzB2Ebz2FIpYrNYycXClDjuag9N4fRjQJWKipIEr1HBxrNyNScyrYtNhP FIsJnZAR6xkakoApN5ufOkXHRhkiP5KmQo3m6veJjARKpYFnOgOix+q3NxP/8nqJ9mtuxsM40Syki0V+IpCO0OxvNO SUS1SQwiV3NyK6JhIQrVJp2hC+PoU/U/aFds5tSs3Z+XGZR5HAQ7hCE7AgQtowDU0oQURvAT/BsCevRerFeF61L Vj5zAD9gvX0CtoqOFg= xt AB6nicdVDLSgNBEOyNrxhfUY9eBoP gaZmNRpNb0IvHiOYByRJmJ7PJkNkHM7NiWPIJXjwo4tUv8ubfOElWUNGChqKqm+4uLxZcaYw/rNzS8srqWn69sLG5tb1T3N1rqSiRlDVpJCLZ8YhigoesqbkWrBNLRgJPsLY3vpz57TsmFY/CWz2JmRuQYch9Tok20s19 X/eLJWxXsFM7cxC28RyGlCu4VsXIyZQSZGj0i+9QUSTgIWaCqJU18GxdlMiNaeCTQu9RLGY0DEZsq6hIQmYctP5qVN0ZJQB8iNpKtRorn6fSEmg1CTwTGdA9Ej9mbiX1430X7VTXkYJ5qFdLHITwTSEZr9jQZcMqrFxB BCJTe3IjoiklBt0imYEL4+Rf+TVtl2Tuzy9WmpfpHFkYcDOIRjcOAc6nAFDWgChSE8wBM8W8J6tF6s10Vrzspm9uEHrLdPtQSOFQ= yt AB6nicdVDLSgNBEOz1GeMr6tHLYBA8LbPRaHI LevEY0TwgWcLsZDYZMvtgZlZYlnyCFw+KePWLvPk3TpIVLSgoajqprvLiwVXGuMPa2l5ZXVtvbBR3Nza3tkt7e23VZRIylo0EpHsekQxwUPW0lwL1o0lI4EnWMebXM38zj2TikfhnU5j5gZkFHKfU6KNdJsO9KBUxnYVO/VzB2Ebz2FIpYr rNYycXClDjuag9N4fRjQJWKipIEr1HBxrNyNScyrYtNhPFIsJnZAR6xkakoApN5ufOkXHRhkiP5KmQo3m6veJjARKpYFnOgOix+q3NxP/8nqJ9mtuxsM40Syki0V+IpCO0OxvNOSUS1SQwiV3NyK6JhIQrVJp2hC+PoU/U/aFds5tSs3Z+X GZR5HAQ7hCE7AgQtowDU0oQURvAT/BsCevRerFeF61LVj5zAD9gvX0CtoqOFg= Step 3: User edits the response to \u2028 before using it. y0 t AB63icdVBNSwMxEM3Wr1q/qh69BIvoaclWq+2t6MVjBVsL7VKyabYNTbJLkhXK0r/gxYMiXv1D3vw3ZtsVPTBwO O9GWbmBTFn2iD04RSWldW14rpY3Nre2d8u5eR0eJIrRNIh6pboA15UzStmG026sKBYBp3fB5Crz7+6p0iySt2YaU1/g kWQhI9hk0vR4YAblCnJryGucexC5aA5LqjXUqCPo5UoF5GgNyu/9YUQSQaUhHGvd81Bs/BQrwins1I/0TGZIJHtGepxIJ qP53fOoNHVhnCMFK2pIFz9ftEioXWUxHYToHNWP/2MvEvr5eYsO6nTMaJoZIsFoUJhyaC2eNwyBQlhk8twUQxeyskY6wM Taekg3h61P4P+lUXe/Urd6cVZqXeRxFcAOwQnwAVogmvQAm1AwBg8gCfw7Ajn0XlxXhetBSef2Qc/4Lx9Ahdkjkc= yt AB6nicdVDLSgNBEOz1GeMr6tHLYBA8LbPRaHI LevEY0TwgWcLsZDYZMvtgZlZYlnyCFw+KePWLvPk3TpIVLSgoajqprvLiwVXGuMPa2l5ZXVtvbBR3Nza3tkt7e23VZRIylo0EpHsekQxwUPW0lwL1o0lI4EnWMebXM38zj2TikfhnU5j5gZkFHKfU6KNdJsO9KBUxnYVO/VzB2Ebz2FIpYrN YycXClDjuag9N4fRjQJWKipIEr1HBxrNyNScyrYtNhPFIsJnZAR6xkakoApN5ufOkXHRhkiP5KmQo3m6veJjARKpYFnOgOix+q3NxP/8nqJ9mtuxsM40Syki0V+IpCO0OxvNOSUS1SQwiV3NyK6JhIQrVJp2hC+PoU/U/aFds5tSs3Z+XGZR 5HAQ7hCE7AgQtowDU0oQURvAT/BsCevRerFeF61LVj5zAD9gvX0CtoqOFg= 1 2 3 y0 t AB63icdVBNSwMxEM3Wr1q/qh69BIvoaclWq+2t6MV jBVsL7VKyabYNTbJLkhXK0r/gxYMiXv1D3vw3ZtsVPTBwO9GWbmBTFn2iD04RSWldW14rpY3Nre2d8u5eR0eJIrRNIh6pboA15UzStmG026sKBYBp3fB5Crz7+6p0iySt2YaU1/gkWQhI9hk0vR4YAblCnJryGucexC5aA5LqjXUqCPo5UoF5Gg Nyu/9YUQSQaUhHGvd81Bs/BQrwins1I/0TGZIJHtGepxIJqP53fOoNHVhnCMFK2pIFz9ftEioXWUxHYToHNWP/2MvEvr5eYsO6nTMaJoZIsFoUJhyaC2eNwyBQlhk8twUQxeyskY6wMTaekg3h61P4P+lUXe/Urd6cVZqXeRxFcAOwQnwAVogmvQ Am1AwBg8gCfw7Ajn0XlxXhetBSef2Qc/4Lx9Ahdkjkc= ct = \u2206edit(yt, y0 t) ACEHicdVDJSgNBEO1xN25Rj14ag6goSeuOQhBPXiMYFRIwtDTqWhjz0J3jTgM+Qv/oXD4p49ejNv7GzCr6oODxXhV9fxYSYOM fThDwyOjY+MTk7mp6ZnZufz8wpmJEi2gJiIV6QufG1AyhBpKVHARa+CBr+Dcvz7s+uc3oI2MwlNMY2gG/DKUbSk4WsnLrwoP6T5tHIF C7mUNhFvUQYtiZ3OWurhBk1XPVz38gVW3GZuecelrMh6sKS0zcp7jLoDpUAGqHr590YrEkAIQrFjam7LMZmxjVKoaCTayQGYi6u+SX ULQ15AKaZ9R7q0BWrtGg70rZCpD31+0TGA2PSwLedAcr89vrin959QTbe81MhnGCEIr+onaiKEa0mw5tSQ0CVWoJF1raW6m4poLtBn mbAhfn9L/yVmp6G4WSydbhcrBI4JskSWyRpxyS6pkGNSJTUiyB15IE/k2bl3Hp0X57XfOuQMZhbJDzhvnyjGnK0= minimize T X t=1 ct ACHicdZBNSyNBEIZ7/Dbrulk9emkMC+Jh6In r10EQ9+JRwaiQiUNPp2Iau3uG7prFOSHePGvePHgIl48CP4bOzGCLrsvNLw 8VUV1vWmupEPGXoKx8YnJqemZ2cqXua/z36rfF45dVlgBDZGpzJ6m3IGSBho 
oUcFpboHrVMFJevFrUD/5DdbJzBxhL4eW5udGdqTg6FSXYt1ml2Whqp5RX 0aRxX4q7LuYDVkoWbQnvkCp2UuBP1z46oSDCp1li4zqLtjYiykA3lTX2dbW8 xGo1IjYx0kFSf4nYmCg0GheLONSOWY6vkFqVQ0K/EhQO/8YKfQ9NbwzW4Vjk 8rk9/eNKmncz6Z5AO6ceJkmvnejr1nZpj1/1dG8B/1ZoFdrZapTR5gWDE26J OoShmdJAUbUsLAlXPGy6s9H+lostF+jzrPgQ3i+l/zfH9TBaC+uHP2u7e6M 4ZsgSWSYrJCKbZJfskwPSIJck1tyT/4EN8Fd8BA8vrWOBaOZRfJwfMrcLGh kg= Minimize cumulative cost Farming, a part of agriculture, involves growing crops and rearing animals for food and raw materials. It began thousands of years ago, likely in the Fertile Crescent, and led to the Neolithic Revolution as people transitioned from nomadic hunting to settled farming. This allowed for a signi\ufb01cant increase in human population. Article: {user-provided article} Please summarize the above article. Farming, as a part of agriculture, involves growing crops cultivation and animal rearing for food and raw materials. Originated It began thousands of years ago, likely in the Fertile Crescent, leading to the Neolithic Revolution Transition as people transitioned from nomadic hunting to settled farming. resulted in signi\ufb01cant human population increase Figure 1: Illustration of interactive learning from user edits. Color coding in edits is for visualization only \u2013 our agent takes the plain revised text as feedback. contrast, typical feedback used for fine-tuning, such as the comparison-based preference feedback in RLHF, is explicitly collected by providing annotators with model responses and asking them to rank (Ziegler et al., 2019; Stiennon et al., 2020; Nakano et al., 2021; Ouyang et al., 2022a, inter alia), making such feedback an expensive choice for improving alignment. Motivated by this observation, we focus on interactive learning of LLM-based language agents using user edits as feedback. Consider the scenario in Figure 1 where a user interacts with an LLM-based writing assistant (agent) to complete their task. The interaction starts with the user (and the world) providing a context to the agent. This context may include a query prompt provided by the user, along with additional information provided by the world, such as the content on the screen, current time, and the user\u2019s calendar information. The agent generates a textual response to the user given the context. In the beginning, the agent\u2019s response may not be optimal for the user, as it is not personalized to this user\u2019s individual needs and preference. As most users are not familiar with prompt engineering, and LLMs are often able to generate an acceptable response for the task, therefore, users may find it the most convenient to simply edit the response when it is not ideal to suit their needs, rather than trying different prompts to get new responses. The example in Figure 1 illustrates that the user directly edits the summary generated by the agent to satisfy their preference on bullet point format. It takes time and efforts for the user to make edits. We can measure such cost using a variety of metrics, such as the edit distance between the agent-generated response and the user-revised text. Zero edit from the user is also a useful feedback, reflecting that the agent\u2019s response satisfies this user\u2019s needs. One important feature of our setting is that every natural use of the agent yields an edit feedback for learning. Since there is no distinction between training and testing in this setting, we care about minimizing the user\u2019s efforts across all rounds of interaction with the agent. 
In summary, our goal is to learn from the implicit feedback in user edit history to minimize the cumulative cost of the user\u2019s efforts. We conjecture that user edits are driven by user\u2019s hidden preference which can be described in natural language. These preference descriptions are different from the notion of comparison-based preference used in RLHF. In this paper, we use the word preference to mean preference descriptions. For instance, preference of the user in Figure 1 can be described as bullet points. In practice, user preference can be compound, such as preferring bullet point, informal, with emojis at the same time, and also context-dependent, e.g., informal tone when writing an email to a family member, and formal tone when writing to a colleague. In more complex settings, user preference can evolve with time (non-stationary), or depend on information unavailable in the context (partially observed). Such user preference may not be fully derivable from the context, and the user may not even be fully aware of all their preference. These considerations imply that user preference is latent to the language agent. If the agent could learn the latent preference correctly, it can significantly improve its performance by generating satisfactory responses accordingly. Furthermore, preference learned by the agent can be shown to the user to enhance interpretability, and can even be modified by the user to improve correctness. Motivated by this, we propose a learning framework, PRELUDE (PREference Learning from User\u2019s Direct Edits), where we seek to learn a textual description of the user preference for a given context using the history of user edits. 2 In a typical real-world scenario such as writing assistants, one has to potentially update the LLMbased agent for every user. Efficient approaches, therefore, must scale with the number of users. This makes approaches that perform a full fine-tuning of the LLM used by the agent very hard to scale. Furthermore, LLMs typically undergo evaluation on a variety of metrics before being released, and thus fine-tuning them often results in breaking the generalization guarantees offered by these tests. For example, fine-tuning GPT-4 for millions of users can quickly turn very expensive. Approaches such as adding LORA and Adapter layers and only updating them, or using federated learning, can reduce the expense to some extent, while the loss of generalizable alignment remains as a concern. In this work, we focus on leveraging a frozen, black-box LLM, and instead learning a prompt policy that can infer textual description of user\u2019s preference for a given context, and then use it to directly drive the response generation. We introduce a simple yet effective algorithm CIPHER (Consolidates Induced Preferences based on Historical Edits with Retrieval) under the PRELUDE framework. For a given context, CIPHER first retrieves the k-closest contexts from history, and aggregates inferred preferences for these k contexts. It relies on this aggregate preference to generate a response for the given context. If the user performs no edits, then it saves this aggregate preference as the correct preference for the given context. Otherwise, it queries the LLM to infer a plausible preference that explains these user edits made to the agent response, and saves this inferred preference as the correct preference for the given context. 
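To make the prompt policy concrete, here is a minimal Python sketch of how an inferred textual preference can be folded into the generation prompt for a frozen LLM; the template wording follows the example quoted in Section 3 (the exact templates are in Table 7 of the paper), and call_llm is a hypothetical stand-in rather than part of the released implementation.

```python
# Illustrative prompt policy: a frozen LLM is conditioned on a learned textual
# preference by concatenating it with the context. The template wording and
# `call_llm` are illustrative placeholders, not the released code.
def build_prompt(context: str, preference: str) -> str:
    if not preference:          # first round: no preference inferred yet
        return context
    return (context + "\n\nThis user has a preference of '" + preference +
            "', which must be used when generating the response.")

def generate_response(call_llm, context: str, preference: str) -> str:
    return call_llm(build_prompt(context, preference))
```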
A key advantage of CIPHER is that it typically leads to significantly shorter prompts compared to other retrieval methods that use the entire documents or context, as inferred preferences are much shorter than retrieved documents or contexts. This results in a significant reduction in the computational expense of querying the LLM. We introduce two interactive environments for evaluation, inspired by writing assistant applications. In the first environment, we evaluate the agent\u2019s ability to summarize a given document (articles from different sources). In the second environment, we evaluate the agent\u2019s ability to compose an email using content from a given document (notes for various purpose). In both tasks, we simulate a GPT-4 user that can generate edits based on a pre-designed latent preference. We use documents from several existing domains as our user-provided context, and vary the GPT-4 user\u2019s preference based on the domain, in order to capture the real-world context-dependent nature of human user\u2019s preference. We evaluate CIPHER against several baselines, including approaches that learn context-agnostic user preferences, and retrieval-based approaches that do not learn preferences but directly use past user edits for generation. We show that for both tasks, CIPHER achieves the lowest user edit cost compared to baselines, and significantly reduces the cumulative cost compared to using the frozen base agent. Additionally, CIPHER results in a lower LLM query cost than other retrieval-based baselines. Finally, we qualitatively and quantitatively analyze preferences learned by our agents, and find that they show significant similarity to the ground truth latent preferences in our setup. 2 Interactive Learning from User Edits and the PRELUDE Framework We first describe LLM agents and the general learning framework from user edits. We then describe our specialized PRELUDE framework for learning descriptive user preference, and discuss associated learning challenges. LLM and Language Agents. We assume access to a language agent that internally relies on an LLM. We make no assumption about the language agent except that it can take input xt as a piece of content and an additional prompt (which can be in-context learning examples or learned preferences) and generate a response yt . The language agent may simply perform greedy decoding on the LLM, or may perform complex planning using the given LLM to generate a response. Protocol 1 Interactive Learning from User Edits. 1: for t = 1, 2, \u00b7 \u00b7 \u00b7 , T do 2: User and the world provide a context xt 3: Agent generates a response yt given the context xt 4: User edits the response to y\u2032 t 5: Agent receives a cost of ct = \u2206edit(yt, y\u2032 t) 6: Evaluate the agent and learning algorithm on PT t=1 ct 3 Interactive Learning from User Edits. In an application such as a writing assistant, a user interacts with the language agent over T rounds. Protocol 1 shows such learning protocol. In the tth round, the user and the world provide a context xt \u2208X where X is the space of all possible contexts. This context will include the user prompt in text, along with additional information provided by the user or the world, and may include multimodal data as well such as images. Given the context xt, the language agent generates a response yt \u2208Y in text, where Y is the space of all texts. The user edits the response yt to y\u2032 t. If the user does not perform any edits, we treat this as setting y\u2032 t = yt. 
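The interaction loop of Protocol 1 can be sketched as follows; user_context, agent_respond, user_edit, and edit_cost are hypothetical placeholders for the user/world, the agent, the editing user, and \u2206edit, respectively, not the paper's actual code.

```python
# Minimal sketch of Protocol 1: T rounds of context -> response -> user edit -> cost.
# The four callables are placeholders, not the paper's released implementation.
from typing import Callable, List, Tuple

def run_protocol(T: int,
                 user_context: Callable[[int], str],   # provides x_t
                 agent_respond: Callable[[str], str],  # y_t = agent(x_t)
                 user_edit: Callable[[str, str], str], # y'_t (equal to y_t if no edits)
                 edit_cost: Callable[[str, str], int]  # Delta_edit(y_t, y'_t)
                 ) -> Tuple[int, List[int]]:
    costs = []
    for t in range(T):
        x_t = user_context(t)            # user and the world provide a context
        y_t = agent_respond(x_t)         # agent generates a response
        y_edited = user_edit(x_t, y_t)   # user edits the response
        costs.append(edit_cost(y_t, y_edited))
    return sum(costs), costs             # evaluate on the cumulative cost
```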
The agent receives a cost of ct = \u2206edit(yt, y\u2032 t) for this round, which measures the user\u2019s efforts on making edits. The goal of the agent is to minimize the sum of costs across all rounds PT t=1 ct. In our experiments, we use \u2206edit as Levenshtein edit distance (Levenshtein, 1965) in the token space which computes the minimum number of total token addition, token deletion, and token substitution necessary to convert yt to y\u2032 t. In general, a higher edit distance implies that the user has made more edits and spent more efforts. We note that our framework is general enough to accommodate situations where the user tries different prompts with the same demand. We treat each call to the language agent as a different round with a different context (as the context includes the user prompt). PRELUDE Framework. We describe our PRELUDE framework in Protocol 2 which is a specialization of the general learning setup described above in Protocol 1. In PRELUDE, in the tth round, the agent infers the preference of the user as ft, and uses it to generate a response. We assume that in this round and for the given context xt, the user has a latent preference f \u22c6 t that drives the user to perform all edits. Furthermore, we assume that if the agent was able to infer this latent preference (ft = f \u22c6 t ), then it will lead to minimal possible edits.2 To remove the dependence on performance due to the choice of the base LLM agent, we compare with an oracle agent that has access to f \u22c6 t at the start of each round. We assume that the LLM remains frozen across all methods in this work. Protocol 2 PRELUDE: PREference Learning from User\u2019s Direct Edits 1: for t = 1, 2, \u00b7 \u00b7 \u00b7 , T do 2: User presents a text context xt 3: Agent infers a preference ft using the history {(x\u2113, y\u2113, y\u2032 \u2113)}t\u22121 \u2113=1 and context xt 4: Agent uses ft and xt to generate a response yt 5: User edits the response to y\u2032 t using their latent preference f \u22c6 t 6: Agent incurs a cost ct = \u2206(yt, y\u2032 t) 7: Return PT t=1 ct Challenges of Learning User Preference. Learning user preference from edits is challenging. In practice, user preference are multifaceted and complex. Furthermore, user\u2019s preference can also significantly vary based on the context. The feedback in the form of user edits emerges naturally but is inherently implicit, lacking direct expressions of the actual preference and carrying subtleties that may lead to diverse interpretations. The combination of preference variability and the implicit nature of feedback poses considerable challenges for agents in accurately learning and integrating these preferences. 3 Learning User Preference through Retrieval and Aggregation In this section, we present our method, CIPHER (Consolidates Induced Preferences based on Historical Edits with Retrieval), that learns user preference based on user edits. Algorithm 1 shows CIPHER which implements the PRELUDE framework. CIPHER maintains a preference history Dt = {(x\u2113, \u02dc f\u2113)}t\u22121 \u2113=1 of past contexts x\u2113along with a preference \u02dc f\u2113inferred by the agent. CIPHER assumes access to a context representation function \u03d5 : X \u2192Rd that can map a context to a vector representation. For a given round t with context xt, the agent first retrieves the k-closest contexts from the interaction history Dt. 
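A minimal sketch of this retrieval step, assuming precomputed context embeddings from an encoder phi (standing in for the MPNET or BERT encoders used later) and, as described next, cosine similarity as the proximity measure:

```python
# Sketch of the retrieval step: given the history D of (context embedding,
# inferred preference) pairs, return the preferences of the k stored contexts
# closest to the current context under cosine similarity. `phi` stands in for
# a fixed sentence encoder; none of these names come from the released code.
import numpy as np

def retrieve_top_k(phi, history, x_t, k):
    """history: list of (embedding: np.ndarray, inferred_preference: str) pairs."""
    if not history:
        return []
    q = phi(x_t)
    q = q / (np.linalg.norm(q) + 1e-8)
    embs = np.stack([e for e, _ in history])
    embs = embs / (np.linalg.norm(embs, axis=1, keepdims=True) + 1e-8)
    sims = embs @ q                                  # cosine similarity per stored context
    top = np.argsort(-sims)[: min(k, len(history))]
    return [history[i][1] for i in top]
```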
We use cosine similarity for computing proximity, although other metrics such as Euclidean distance, or Hamming distance when \u03d5 outputs a binary vector, can be used. Given the retrieved contexts and their inferred preferences {(xzi, \u02dc fzi)}k i=1, we 2The edit cost in practice may not always be 0, as the language agent could be incapable of adeptly using the correct preference, or the user may perform edits that are inconsistent with their preference. 4 query the underlying LLM to summarize the inferred preferences { \u02dc fzi}k i=1 into a single preference ft. In the beginning, when t \u2264k, we retrieve all the past t contexts. In particular, for t = 1 we have f1 as an empty string as the agent has no prior knowledge of this user\u2019s preference.3 The agent uses the inferred preference ft to generate the response. This is done by concatenating the context xt with an agent prompt such as \u201cThis user has a preference of which must be used when generating the response\u201d, where indicates where we insert the inferred preference ft. We list the actual template used in our experiments in Table 7 in Appendix A. Given the user edits y\u2032 t, if the user edits are minimal, i.e., \u2206edit(yt, y\u2032 t) \u2264\u03b4 for a hyperparameter \u03b4, then we set the inferred preference for this round as \u02dc ft = ft as using ft for generating a response resulted in minimal edits. However, if \u2206edit(yt, y\u2032 t) > \u03b4, then we query the LLM a third time to generate the inferred preference \u02dc ft that explains why the user edited yt to y\u2032 t. We call this the Latent Preference Induction (LPI) step. In both cases, we append (xt, ft) to the preference history. Note that we cannot query the LLM for the inferred preference in the first case where the user edit cost ct is small, i.e., ct \u2264\u03b4. In this case, querying the LLM to infer the preference to explain the edits in y\u2032 t given yt, will result in the LLM outputting that the agent has no preference. This is incorrect as it merely shows that the preference ft used to generate yt was sufficiently good to include most of the true user preference f \u22c6 t . Computational Cost of CIPHER. In a given round, CIPHER adds a maximum of 3 LLM calls on top of the cost of calling the underlying inference algorithm of the agent in line 6. CIPHER further reduces the memory storage by only storing the representation of contexts in the preference string instead of the input itself. Finally, CIPHER only adds a small prompt to the context xt, before calling the agent\u2019s inference algorithm. This only slightly increases the length of the prompt, thereby, reducing the query cost associated with LLMs that scales with the number of input tokens. Algorithm 1 CIPHER(\u03d5, k, \u03b4). A context representation function \u03d5 : X \u2192Rd, the retrieval hyperparameter k, and tolerance hyperparameter \u03b4 \u22650. 
1: D = \u2205 2: for t = 1, 2, \u00b7 \u00b7 \u00b7 , T do 3: User (and the world) presents a context xt 4: Retrieve the top-k examples {\u03d5(xzi), \u02dc fzi}k i=1 in D with maximum cosine similarity to \u03d5(xt) 5: If k > 1, then query the LLM to aggregate these preferences { \u02dc fzi}k i=1 into ft, else ft = \u02dc fz1 6: Agent generates a text response yt based on xt and ft 7: User edits the response to y\u2032 t using their latent preference f \u22c6 t 8: Agent incurs a cost ct = \u2206edit(yt, y\u2032 t) 9: if ct \u2264\u03b4 then 10: \u02dc ft = ft 11: else 12: Query the LLM to generate a preference \u02dc ft that best explains user edits in (yt, y\u2032 t) 13: D \u2190D \u222a{(\u03d5(xt), \u02dc ft)} 14: Return PT t=1 ct 4 Experiment In this section, we first introduce two interactive tasks for evaluating agents that learn from user edits. These tasks can be used more broadly even outside the PRELUDE framework, and can be of independent interest. We then describe our baselines and provide implementation details of CIPHER. Finally, we provide quantitative results in terms of user edit cost and qualitative analysis of the learned preferences. 3In practice, one can initialize with a publicly available preference history. 5 Table 1: Latent user preference design, specific to the document source. Doc Source Latent User Preference Scenario Summarization News article (See et al., 2017) targeted to young children, storytelling, short sentences, playful language, interactive, positive introduce a political news to kids Reddit post (Stiennon et al., 2020) second person narrative, brief, show emotions, invoke personal reflection, immersive for character development in creative writing Wikipedia page (Foundation, 2022) bullet points, parallel structure, brief take notes for key knowledge Paper abstract (Clement et al., 2019) tweet style, simple English, inquisitive, skillful foreshadowing, with emojis promote a paper to invoke more attention and interests Movie review (Maas et al., 2011) question answering style, direct, concise quickly get main opinions Email Writing Personal problem (Stiennon et al., 2020) informal, conversational, short, no closing share life with friends Paper review (Hua et al., 2019) casual tone, positive, clear, call to action peer review to colleague Paper tweet (Bar, 2022) engaging, personalized, professional tone, thankful closing networking emails for researchers Paper summary (Kershaw & Koeling, 2020) structured, straight to the points, respectful, professional greeting and closing milestone report to superiors 4.1 Two Interactive Writing Assistant Environments for Learning from User Edits Task. We introduce two tasks inspired by the use of LLMs as writing assistants (Mysore et al., 2023; Shen et al., 2023; Wang et al., 2023). In the first task, we evaluate the agent\u2019s ability to summarize a given document. We use documents from 5 existing sources listed in Table 1.4 These sources represent a diverse category of documents that a writing assistant would typically encounter, including news articles that are formal and concise, movie reviews that are informal, and paper abstracts that are technical. In the second task, we evaluate the agent\u2019s ability to compose an email given notes. For this task, we use notes from four different sources including a variety of tasks such as writing emails to friends, describing reports to managers, and writing reviews for colleagues. 
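Putting the pieces of Algorithm 1 together, a compact sketch of the CIPHER loop is given below; phi, retrieve_prefs, and the three llm_* callables are hypothetical stand-ins for the sentence encoder, the retrieval step sketched earlier, and the frozen-LLM prompts that aggregate preferences, generate responses, and induce preferences from edits.

```python
# Compact sketch of Algorithm 1 (CIPHER). `retrieve_prefs(history, emb, k)`
# returns the stored preference strings of the k closest contexts (e.g., the
# cosine-similarity retrieval sketched earlier). All names are illustrative.
def cipher(T, get_context, user_edit, edit_cost, phi, retrieve_prefs,
           llm_aggregate, llm_generate, llm_induce_preference, k=5, delta=0):
    history = []        # D: list of (phi(x), inferred preference)
    total_cost = 0
    for t in range(T):
        x_t = get_context(t)
        x_emb = phi(x_t)
        prefs = retrieve_prefs(history, x_emb, k) if history else []
        if len(prefs) > 1:
            f_t = llm_aggregate(prefs)          # merge retrieved preferences into one
        elif prefs:
            f_t = prefs[0]
        else:
            f_t = ""                            # no prior knowledge of this user
        y_t = llm_generate(x_t, f_t)            # respond conditioned on the preference
        y_edited = user_edit(x_t, y_t)
        c_t = edit_cost(y_t, y_edited)
        total_cost += c_t
        if c_t <= delta:
            f_new = f_t                         # f_t already explains the (near-)zero edits
        else:
            f_new = llm_induce_preference(y_t, y_edited)   # the LPI step
        history.append((x_emb, f_new))
    return total_cost
```

In a deployment, the same loop would be driven by real user traffic rather than the simulated GPT-4 user described next.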
In any given round, the user is provided a context that is a document from one of the document sources for the given task. Importantly, the agent is unaware of the source of the given document which as we discuss later, will determine the user preference. For both tasks, we run an experiment for T = 200 rounds, with an equal number of randomly sampled documents from each document source. We mix documents from different sources and shuffle them to remove any temporal correlation in document source across rounds. Two-Stage GPT-4 Simulated User. We simulate a user that can edit a given response. We define a set of latent user preferences for the user that vary based on the document source. Table 1 lists the preference and the corresponding document source. This captures the context-dependent nature of user preferences as the document source influences the type of context. For example, the Personal problem document source contains documents pertaining to discussions with a friend, and a user may have a different preference when writing an email to a friend compared to writing an email to a colleague. In real-world settings, the context dependence of the user preference can be more complex than just the document source. We assume that our user is aware of the document source dt of a given context xt. This implies, that we can express the true user preference for xt as f \u22c6 t = F(dt) where F maps a given document source to the user preference. Recall that the agent in our learning setup is never provided the document source of any context. We model our user using GPT-4 with a two-stage approach. Given an agent response yt and the context xt, we first query GPT-4 to check if the given response satisfies the preference in f \u22c6 t . If the answer is yes, then the user preforms no edits and returns y\u2032 t = yt. If the answer is no, then we use GPT-4 to generate the edited response y\u2032 t given yt and f \u22c6 t . We use prompting to condition GPT-4 on these latent preferences. We provide examples of edits made by our GPT-4 user in Table 5 in Appendix A. 4Table 4 in Appendix provides links to each source dataset, used as user-provided context in our tasks. 6 We found that our two-stage GPT-4 user can generate high-quality edits, consistent with observations in prior work that LLM-written feedback is high-quality and useful to learn from (Bai et al., 2022; Saunders et al., 2022). We adopted a two-stage process since we found that using GPT-4 to directly edit the response yt always resulted in edits even when the response satisfied the preference f \u22c6 t . We evaluated several different prompts for modeling our two-stage GPT-4 user until we found a prompt such that an oracle GPT-4 agent with access to f \u22c6 t achieves a minimal user cost. Evaluation Metric. We propose three metrics for evaluating agents learning from user edits. Our main metric is the cumulative user edit cost PT t=1 ct over T rounds. In any given round, we compute the user edit cost ct = \u2206edit(yt, y\u2032 t) using Levenshtein edit distance between agent response yt and user edits y\u2032 t. To compute the edit distance, we perform BPE tokenization using Tiktoken tokenizer, and compute the edit distance in the token space. In general, one can learn a metric that better captures the cognitive load associated with a user edit. However, Levenshtein edit distance provides a clean, transparent metric that is easy to interpret. 
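As a concrete sketch of this cost, the following computes a token-level Levenshtein distance over BPE tokens; the choice of tiktoken's cl100k_base encoding is an assumption, since the paper only names the Tiktoken tokenizer.

```python
# Token-level Levenshtein cost: BPE-tokenize both texts, then count the minimum
# number of token insertions, deletions, and substitutions. The cl100k_base
# encoding is an assumption; the paper only says "Tiktoken tokenizer".
import tiktoken

_ENC = tiktoken.get_encoding("cl100k_base")

def edit_cost(y: str, y_prime: str) -> int:
    a, b = _ENC.encode(y), _ENC.encode(y_prime)
    prev = list(range(len(b) + 1))
    for i, ta in enumerate(a, start=1):
        curr = [i] + [0] * len(b)
        for j, tb in enumerate(b, start=1):
            curr[j] = min(prev[j] + 1,               # delete token ta
                          curr[j - 1] + 1,           # insert token tb
                          prev[j - 1] + (ta != tb))  # substitute ta with tb
        prev = curr
    return prev[len(b)]
```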
Additionally, it doesn\u2019t have concerns shared by learned metrics such as erroneous evaluations when applying the metric to examples not covered by the metric\u2019s training distribution. For CIPHER and any other method in the PRELUDE framework, we additionally evaluate the accuracy of the inferred user preference ft used to generate the response yt. Formally, given a context xt containing a document from source dt, we evaluate if the inferred preference ft is closer to the true preference f \u22c6 t = F(dt) than preference F(d) of any other document source d \u0338= dt. Let there be N document sources for a given task and we index d \u2208{1, 2, \u00b7 \u00b7 \u00b7 , N}. Then we compute this metric as 1 T PT t=1 1{dt = arg maxd\u2208[N] BERTScore(ft, F(d))}, where BERTScore (Zhang* et al., 2020) is a popular text similarity metric.5 Finally, we evaluate the token expense associated with querying the LLM across all methods. We compute the total number of tokens both generated by or provided as input to the LLM across all rounds. This is a typical metric used by popular LLM providers to charge their customers. 4.2 Details of CIPHER and Comparison Systems We use GPT-4 as our base LLM for CIPHER and all baselines. We do not perform fine-tuning of the GPT-4 and do not add any additional parameters to the model. We use a prompt-based GPT-4 agent for all methods that uses a single prompt with greedy decoding to generate the response. Our main method CIPHER and the baselines, can be extended to more complex language agents that perform multiple steps of reasoning on top of the base LLM before generating a response. CIPHER Details. We use a simple agent that uses GPT-4 with a prompt template to generate the response yt given the context xt and preference ft. We list templates in Table 7 in Appendix A. We experiment with MPNET (Song et al., 2020) and BERT (Devlin et al., 2019) as our two context representation functions \u03d5, and use cosine similarity for retrieval. We experiment with two different values of the number of retrieved examples k \u2208{1, 5}. Baselines. We evaluate CIPHER against baselines that either perform no learning, or learn contextagnostic preferences and against methods that do not learn preferences but directly use past user edits for generating a response. 1. No learning: The agent performs no learning based on interaction with the user. In each step, the agent generates a response yt given the context xt. 2. Explore-then-exploit (E-then-e) LPI: This baseline is based on the classic explore-thenexploit strategy in interactive learning (Garivier et al., 2016). The agent first generates responses for the first Te rounds without performing any learning (exploration stage). It then infers a single user preference \u02dc fe using the user edits in the first Te rounds using the LPI step similar to line 12 in CIPHER(Algorithm 1). It then uses the learned preference to generate the response for all remaining rounds (exploitation step). 3. Continual LPI: This method is similar to explore-then-exploit except that it never stops exploring. In any given round t, it uses the data of all past edits {(yi, y\u2032 i)}t\u22121 i=1 to learn a 5We use the microsoft/deberta-xlarge-mnli to implement BERTScore. 7 preference ft by performing the LPI step. It then generates a response using this preference. In contrast, to explore-then-exploit approach, Continual LPI can avoid overfitting to the first Te rounds, but both approaches learn preferences that are independent of xt. 4. 
ICL-edit: This is a standard retrieval-based in-context learning (ICL) baseline (Brown et al., 2020). In a given round t, the agent first retrieves the closest k examples {(yz\u2113, y\u2032 z\u2113)}k \u2113=1 to the given context xt using the representation function \u03d5. It then creates an ICL prompt containing these k examples where yz\u2113is presented as the input, and y\u2032 z\u2113is presented as the desired output. The agent then uses the context xt and the ICL prompt to generate the response. This approach doesn\u2019t infer preferences but must instead use the user edit data directly to align to the given user preference. However, unlike explore-then-exploit LPI and Continual LPI, this approach can perform context-dependent learning as the generated response attends on both the given context xt and the historical data. Baseline Hyperparameters. For explore-then-exploit LPI and continual LPI baselines, we set the number of exploration Te as 5. For ICL-edit baselines, we experiment with different k values for retrieval, and report our best results with k = 5. Oracle Method. We additionally run an oracle preference method to provide an approximated upper bound on performance. In each round t, we let the GPT-4 agent generate a response by conditioning on the ground-truth latent preference f \u22c6 t and the context xt. This method can test whether our setup is well-defined, e.g., in a poorly designed setup, the user always edits the agent response no matter what the agent generates including providing user edits back to the user, and thus no method can effectively minimize the cost over time in this case. If the oracle method achieves a zero or a minimal user edit cost, then learning the optimal preference leads to success. 4.3 Main Result and Discussion. Main Results. Table 2 reports the performance of baselines and our methods on summarization and email writing tasks on three metrics: edit distance which measures cumulative user edit cost, accuracy which measures mean preference classification accuracy, and expense measuring the total BPE token cost of querying LLM.6 We report the mean and standard deviation across 3 different random seeds.7 Table 2: Performance of baselines and our methods in terms of cumulative edit distance cost and classification accuracy. \u00b5\u03c3 denotes the mean \u00b5 and standard deviation \u03c3 across 3 runs over different seeds. Expense column shows budget as the average number of input and output BPE tokens across 3 runs (unit is \u00b7105). We use -k in method names to denote that we use k retrieved examples. Numbers in bold are the best performance in each column excluding oracle preference method, underline for the second best, and dotted underline for the third best. Method Summarization Email Writing Edit Distance\u2193 Accuracy\u2191 Expense\u2193 Edit Distance\u2193 Accuracy\u2191 Expense\u2193 Oracle Preference 6,5731,451 1.000 1.67 1,851243 1.000 1.62 No Learning 48,269957 1.50 31,103900 1.65 E-then-e LPI 65,21817,466 0.2180.003 1.99 24,5621,022 0.2630.003 1.73 Continual LPI 57,9152,210 0.2330.010 8.89 26,8521,464 0.2430.019 8.63 ICL-edit-5-MPNET 38,5601,044 8.00 32,4051,307 12.12 ICL-edit-5-BERT 39,7341,929 7.96 30,9493,250 11.55 CIPHER-1-MPNET 33,9264,000 0.5200.022 2.74 ........ 10,7811,711 ....... 0.4350.084 1.94 CIPHER-5-MPNET 32,974195 ....... 0.4780.010 3.00 10,0581,709 0.4670.081 2.09 CIPHER-1-BERT 37,6373,025 0.5650.053 2.81 12,6344,868 0.4870.125 1.99 CIPHER-5-BERT ........ 35,8113,384 ....... 
[Figure 2: Learning curves of different methods based on cumulative cost over time (average across 3 seeds). In the legend, -k means with top k retrieved examples, -B for BERT, and -M for MPNET. Two panels plot cumulative cost (\u00b710^4) against round for the Summarization and Email Writing tasks, with lines for Oracle, No Learning, E-then-e, Continual, ICL-edit-B, ICL-edit-M, CIPHER-1-B, CIPHER-5-B, CIPHER-1-M, and CIPHER-5-M.]

Discussion of Main Result. We observe that performing no learning results in a high edit cost, whereas using the oracle preferences achieves a significantly smaller edit cost. This shows that our environments are sound and well-conditioned. E-then-e LPI and Continual LPI learn context-agnostic preferences, which cannot capture the context-dependent preferences in the environments, and they end up doing poorly. For the summarization task, they end up with a higher edit distance than performing no learning at all. One explanation is that using context-agnostic preferences can push the model to specialize to a given preference much more than the base model, resulting in more edits when that preference is incorrect. We see this in the preference accuracy, which is low for both of these baselines, and lower on the summarization task than on the email writing task, where they do outperform the no-learning baseline. Further, Continual LPI has a higher expense cost due to constantly querying the LLM to infer the user preference. The ICL-edit baselines perform significantly better on the summarization task. However, using a list of user edits in the prompt results in a higher token expense cost, as the responses and their edits can be quite long in practice. Further, the ICL-edit baselines provide no interpretable explanation for their responses or for user behavior. Finally, CIPHER achieves the smallest edit distance cost, reducing edits by 31% on the summarization task and 73% on the email writing task. We observe that retrieving k = 5 preferences and aggregating them achieves a lower edit distance; however, the choice of ideal representation \u03d5 appears task-dependent. Further, CIPHER achieves the highest preference accuracy, showing that it can learn preferences that correlate more with the ground-truth preference than with the preferences of other document sources. Note that the performance of a random preference classifier is only 20% for summarization and 25% for email writing. Further, CIPHER achieves a smaller expense cost than the ICL-edit and Continual LPI baselines, as it doesn\u2019t use long user edits in the prompt when generating a response. Overall, CIPHER is a cheaper, more effective, and more interpretable method than our baselines.

4.4 More Analysis

Learning Curves. We plot the mean cumulative user edit cost over rounds in Figure 2.
The cumulative user edit costs in Figure 2 show that the slope of the learning curve decreases for CIPHER after an initial number of rounds, indicating that learning reduces the rate at which user edits accumulate. In contrast, the slope of the learning curve for the no-learning baseline remains unchanged.

[Figure 3: Normalized cost and percentage of zero-cost examples of CIPHER over time, binned per 20 rounds to show the trend (average across 3 seeds). In the legend, -k means with top k retrieved examples, -B for BERT, and -M for MPNET. Four panels plot normalized cost and the percentage of zero-cost examples per bin against round for the Summarization and Email Writing tasks, with lines for CIPHER-5-B, CIPHER-1-M, CIPHER-5-M, and Oracle.]

Evaluating Normalized Edit Cost. The cumulative user edit cost measures the total effort of the user but is susceptible to outlier examples, as the edit distance in a given round is potentially unbounded. Therefore, we also compute a normalized edit distance $\Delta_{\mathrm{edit}}(y_t, y'_t) / \max\{|y_t|, |y'_t|\}$, i.e., the edit distance divided by the maximum length of the agent output and the user-revised text. Since the Levenshtein distance $\Delta_{\mathrm{edit}}(y_t, y'_t)$ is upper bounded by $\max\{|y_t|, |y'_t|\}$, the normalized cost is at most 1 (a short sketch of this computation follows Table 3 below). Figure 3 reports the normalized cost over rounds for the top 3 methods. We notice that for all variants of CIPHER on the summarization task, and for CIPHER-5-M on the email writing task, the normalized cost decreases notably as training progresses, indicating learning. As the cost is normalized by the response length, even a small decrease can lead to a significant reduction in the number of tokens edited.

Evaluating Fraction of Edited Response. Recall that the first stage of our GPT-4 user checks whether the agent response satisfies the latent user preference f^\star. If it does, the user performs no edits. Otherwise, in the second stage, the user edits the response. To measure how often the agent response isn\u2019t edited, we also plot the percentage of examples with zero edit cost per 20-round bin in Figure 3. We notice a small increase in the number of examples with zero edit cost. This indicates that gains come from reducing edits across all examples, and not just from increasing the number of examples that avoid being edited in stage 1 of our user.

Qualitative Analysis of Learned Preferences. We qualitatively analyze the preferences learned by CIPHER to understand their quality. We present our analysis on the summarization task, where our methods have a larger gap to the oracle performance than on the email writing task. Table 3 lists 3 learned preferences per document source for CIPHER-5-MPNET, randomly sampled from the beginning, middle, and end of the interaction history. We see that overall the agent can learn a reasonable description of the latent preference. For example, it learns a bullet-point preference for Wikipedia articles, a second-person narrative for Reddit posts, and a QA style for movie reviews. CIPHER picks up some preferences fairly early, such as bullet points for Wikipedia pages and emojis for paper abstracts, whereas others are learned only later, such as the structured Q&A style for movie reviews. This shows that CIPHER can quickly learn useful preferences, but further interaction continues to help.

Table 3: Examples of learned preferences on the summarization task with CIPHER-5-MPNET, grouped by document source and corresponding latent user preference. We randomly sample 3 examples per type at the beginning, middle, and end of the interaction history; the number in parentheses is the round.

News article. Latent preference: targeted to young children, storytelling, short sentences, playful language, interactive, positive.
  (22) Fairy tale narrative style, informal and conversational tone, use of rhetorical questions, simplified language.
  (115) Simplified, childlike storytelling with playful language and imagery
  (192) Simplified and playful storytelling language

Reddit post. Latent preference: second person narrative, brief, show emotions, invoke personal reflection, immersive.
  (14) Concise and coherent storytelling
  (102) The user prefers a second-person narrative and a more direct, personal tone
  (194) Poetic and descriptive language, narrative perspective shift to second person

Wikipedia page. Latent preference: bullet points, parallel structure, brief.
  (19) Concise, Bullet-Pointed, Structured Summaries with a Narrative Q&A Style
  (124) Concise and factual writing style, bullet-point formatting
  (197) Concise and streamlined formatting, with bullet points and clear subheadings for easy scanning

Paper abstract. Latent preference: tweet style, simple English, inquisitive, skillful foreshadowing, with emojis.
  (20) Concise, conversational summaries with bullet points and emojis.
  (111) Concise, conversational, whimsical bullet-point summaries with emojis.
  (193) Concise, conversational, and whimsical bullet-point summaries with emojis.

Movie review. Latent preference: question answering style.
  (12) The user prefers a straightforward, clear, and concise writing style with factual formatting.
  (123) The user prefers a clear and concise question and answer format with straightforward language.
  (199) Concise, Structured Q&A with Whimsical Clarity
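As a concrete reference for the normalized edit cost defined above, here is a minimal sketch that computes a character-level Levenshtein distance and divides it by the maximum of the two lengths. The paper does not specify the exact implementation details (for instance, whether the distance is computed over characters or tokens), so this is an illustration rather than the authors\u2019 code.

```python
# Minimal sketch (illustrative): Levenshtein edit distance and its normalized form
# cost(y, y') = edit_distance(y, y') / max(len(y), len(y')), which lies in [0, 1].
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic program over prefixes; O(len(a) * len(b)) time.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution (0 if characters match)
            ))
        prev = curr
    return prev[-1]

def normalized_edit_cost(agent_output: str, user_revision: str) -> float:
    denom = max(len(agent_output), len(user_revision))
    return levenshtein(agent_output, user_revision) / denom if denom else 0.0

# Example: a one-word change in a short sentence yields a small normalized cost.
print(normalized_edit_cost("The cat sat on the mat.", "The dog sat on the mat."))
```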
Failure Cases. CIPHER notably reduces the edit cost and learns useful preferences; however, a significant gap to the oracle method remains, especially on the summarization task. We manually analyze failure cases on the summarization task with the best-performing method, CIPHER-5-MPNET. Table 10 in the Appendix reports a summary and examples of our findings, categorized as failures of preference inference from the output-revision pair, of consolidation of inferred preferences, and of retrieval.[8] In brief, the most common type of failure is in the preference inference step given the agent output and user revision. For example, the agent often misses the exact keyword for brief or short sentences, and it sometimes struggles to infer the second-person narrative aspect.

We study aligning LLM-based agents using user edits that arise naturally in applications such as writing assistants. We conjecture that user edits are driven by a latent user preference that can be captured by textual descriptions. We introduce the PRELUDE framework, which focuses on learning descriptions of user preferences from user edit data and then generating an agent response accordingly. We propose a simple yet effective retrieval-based algorithm, CIPHER, which infers user preferences by querying the LLM, retrieves relevant examples from the history, and aggregates the preferences induced from the retrieved examples to generate a response for the given context. We introduce two interactive environments with a GPT-4 simulated user to study learning from edits, which can be of independent interest.
In this work, we focus on aligning an LLM agent with a frozen LLM, in part due to the challenge of scaling fine-tuning-based approaches with the number of users. However, for settings where computational cost is not a barrier, applying fine-tuning approaches would be an interesting direction for future work. Another promising direction is to learn user preferences from different levels of edits (words, sentences, paragraphs) to generate a satisfactory response.

Acknowledgments. Gao was a research intern at MSR NYC and was later partially supported by NSF project #1901030. All content represents the opinion of the authors, which is not necessarily shared or endorsed by their respective employers and/or sponsors. We thank the MSR NYC research community, Jonathan D. Chang, Daniel D. Lee, Claire Cardie, and Sasha Rush for helpful discussions and support."
+ },
+ {
+ "url": "http://arxiv.org/abs/2208.03270v2",
+ "title": "Learning New Skills after Deployment: Improving open-domain internet-driven dialogue with human feedback",
+ "abstract": "Frozen models trained to mimic static datasets can never improve their\nperformance. Models that can employ internet-retrieval for up-to-date\ninformation and obtain feedback from humans during deployment provide the\npromise of both adapting to new information, and improving their performance.\nIn this work we study how to improve internet-driven conversational skills in\nsuch a learning framework. We collect deployment data, which we make publicly\navailable, of human interactions, and collect various types of human feedback\n-- including binary quality measurements, free-form text feedback, and\nfine-grained reasons for failure. We then study various algorithms for\nimproving from such feedback, including standard supervised learning, rejection\nsampling, model-guiding and reward-based learning, in order to make\nrecommendations on which type of feedback and algorithms work best. We find the\nrecently introduced Director model (Arora et al., '22) shows significant\nimprovements over other existing approaches.",
+ "authors": "Jing Xu, Megan Ung, Mojtaba Komeili, Kushal Arora, Y-Lan Boureau, Jason Weston",
+ "published": "2022-08-05",
+ "updated": "2022-08-16",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2304.11771v1",
+ "title": "Generative AI at Work",
+ "abstract": "We study the staggered introduction of a generative AI-based conversational\nassistant using data from 5,000 customer support agents. Access to the tool\nincreases productivity, as measured by issues resolved per hour, by 14 percent\non average, with the greatest impact on novice and low-skilled workers, and\nminimal impact on experienced and highly skilled workers. We provide suggestive\nevidence that the AI model disseminates the potentially tacit knowledge of more\nable workers and helps newer workers move down the experience curve. In\naddition, we show that AI assistance improves customer sentiment, reduces\nrequests for managerial intervention, and improves employee retention.",
+ "authors": "Erik Brynjolfsson, Danielle Li, Lindsey Raymond",
+ "published": "2023-04-23",
+ "updated": "2023-04-23",
+ "primary_cat": "econ.GN",
+ "cats": [
+ "econ.GN",
+ "q-fin.EC",
+ "q-fin.GN"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2205.11484v1",
+ "title": "Towards Automated Document Revision: Grammatical Error Correction, Fluency Edits, and Beyond",
+ "abstract": "Natural language processing technology has rapidly improved automated\ngrammatical error correction tasks, and the community begins to explore\ndocument-level revision as one of the next challenges. To go beyond\nsentence-level automated grammatical error correction to NLP-based\ndocument-level revision assistant, there are two major obstacles: (1) there are\nfew public corpora with document-level revisions being annotated by\nprofessional editors, and (2) it is not feasible to elicit all possible\nreferences and evaluate the quality of revision with such references because\nthere are infinite possibilities of revision. This paper tackles these\nchallenges. First, we introduce a new document-revision corpus, TETRA, where\nprofessional editors revised academic papers sampled from the ACL anthology\nwhich contain few trivial grammatical errors that enable us to focus more on\ndocument- and paragraph-level edits such as coherence and consistency. Second,\nwe explore reference-less and interpretable methods for meta-evaluation that\ncan detect quality improvements by document revision. We show the uniqueness of\nTETRA compared with existing document revision corpora and demonstrate that a\nfine-tuned pre-trained language model can discriminate the quality of documents\nafter revision even when the difference is subtle. This promising result will\nencourage the community to further explore automated document revision models\nand metrics in future.",
+ "authors": "Masato Mita, Keisuke Sakaguchi, Masato Hagiwara, Tomoya Mizumoto, Jun Suzuki, Kentaro Inui",
+ "published": "2022-05-23",
+ "updated": "2022-05-23",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.19204v1",
+ "title": "SWiPE: A Dataset for Document-Level Simplification of Wikipedia Pages",
+ "abstract": "Text simplification research has mostly focused on sentence-level\nsimplification, even though many desirable edits - such as adding relevant\nbackground information or reordering content - may require document-level\ncontext. Prior work has also predominantly framed simplification as a\nsingle-step, input-to-output task, only implicitly modeling the fine-grained,\nspan-level edits that elucidate the simplification process. To address both\ngaps, we introduce the SWiPE dataset, which reconstructs the document-level\nediting process from English Wikipedia (EW) articles to paired Simple Wikipedia\n(SEW) articles. In contrast to prior work, SWiPE leverages the entire revision\nhistory when pairing pages in order to better identify simplification edits. We\nwork with Wikipedia editors to annotate 5,000 EW-SEW document pairs, labeling\nmore than 40,000 edits with proposed 19 categories. To scale our efforts, we\npropose several models to automatically label edits, achieving an F-1 score of\nup to 70.6, indicating that this is a tractable but challenging NLU task.\nFinally, we categorize the edits produced by several simplification models and\nfind that SWiPE-trained models generate more complex edits while reducing\nunwanted edits.",
+ "authors": "Philippe Laban, Jesse Vig, Wojciech Kryscinski, Shafiq Joty, Caiming Xiong, Chien-Sheng Wu",
+ "published": "2023-05-30",
+ "updated": "2023-05-30",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2301.00355v2",
+ "title": "Second Thoughts are Best: Learning to Re-Align With Human Values from Text Edits",
+ "abstract": "We present Second Thought, a new learning paradigm that enables language\nmodels (LMs) to re-align with human values. By modeling the chain-of-edits\nbetween value-unaligned and value-aligned text, with LM fine-tuning and\nadditional refinement through reinforcement learning, Second Thought not only\nachieves superior performance in three value alignment benchmark datasets but\nalso shows strong human-value transfer learning ability in few-shot scenarios.\nThe generated editing steps also offer better interpretability and ease for\ninteractive error correction. Extensive human evaluations further confirm its\neffectiveness.",
+ "authors": "Ruibo Liu, Chenyan Jia, Ge Zhang, Ziyu Zhuang, Tony X Liu, Soroush Vosoughi",
+ "published": "2023-01-01",
+ "updated": "2023-01-05",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.CY"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2206.05802v2",
+ "title": "Self-critiquing models for assisting human evaluators",
+ "abstract": "We fine-tune large language models to write natural language critiques\n(natural language critical comments) using behavioral cloning. On a topic-based\nsummarization task, critiques written by our models help humans find flaws in\nsummaries that they would have otherwise missed. Our models help find naturally\noccurring flaws in both model and human written summaries, and intentional\nflaws in summaries written by humans to be deliberately misleading. We study\nscaling properties of critiquing with both topic-based summarization and\nsynthetic tasks. Larger models write more helpful critiques, and on most tasks,\nare better at self-critiquing, despite having harder-to-critique outputs.\nLarger models can also integrate their own self-critiques as feedback, refining\ntheir own summaries into better ones. Finally, we motivate and introduce a\nframework for comparing critiquing ability to generation and discrimination\nability. Our measurements suggest that even large models may still have\nrelevant knowledge they cannot or do not articulate as critiques. These results\nare a proof of concept for using AI-assisted human feedback to scale the\nsupervision of machine learning systems to tasks that are difficult for humans\nto evaluate directly. We release our training datasets, as well as samples from\nour critique assistance experiments.",
+ "authors": "William Saunders, Catherine Yeh, Jeff Wu, Steven Bills, Long Ouyang, Jonathan Ward, Jan Leike",
+ "published": "2022-06-12",
+ "updated": "2022-06-14",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.LG"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2112.09332v3",
+ "title": "WebGPT: Browser-assisted question-answering with human feedback",
+ "abstract": "We fine-tune GPT-3 to answer long-form questions using a text-based\nweb-browsing environment, which allows the model to search and navigate the\nweb. By setting up the task so that it can be performed by humans, we are able\nto train models on the task using imitation learning, and then optimize answer\nquality with human feedback. To make human evaluation of factual accuracy\neasier, models must collect references while browsing in support of their\nanswers. We train and evaluate our models on ELI5, a dataset of questions asked\nby Reddit users. Our best model is obtained by fine-tuning GPT-3 using behavior\ncloning, and then performing rejection sampling against a reward model trained\nto predict human preferences. This model's answers are preferred by humans 56%\nof the time to those of our human demonstrators, and 69% of the time to the\nhighest-voted answer from Reddit.",
+ "authors": "Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, Xu Jiang, Karl Cobbe, Tyna Eloundou, Gretchen Krueger, Kevin Button, Matthew Knight, Benjamin Chess, John Schulman",
+ "published": "2021-12-17",
+ "updated": "2022-06-01",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.LG"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2010.12826v1",
+ "title": "Text Editing by Command",
+ "abstract": "A prevailing paradigm in neural text generation is one-shot generation, where\ntext is produced in a single step. The one-shot setting is inadequate, however,\nwhen the constraints the user wishes to impose on the generated text are\ndynamic, especially when authoring longer documents. We address this limitation\nwith an interactive text generation setting in which the user interacts with\nthe system by issuing commands to edit existing text. To this end, we propose a\nnovel text editing task, and introduce WikiDocEdits, a dataset of\nsingle-sentence edits crawled from Wikipedia. We show that our Interactive\nEditor, a transformer-based model trained on this dataset, outperforms\nbaselines and obtains positive results in both automatic and human evaluations.\nWe present empirical and qualitative analyses of this model's performance.",
+ "authors": "Felix Faltings, Michel Galley, Gerold Hintz, Chris Brockett, Chris Quirk, Jianfeng Gao, Bill Dolan",
+ "published": "2020-10-24",
+ "updated": "2020-10-24",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2205.12374v1",
+ "title": "Learning to Model Editing Processes",
+ "abstract": "Most existing sequence generation models produce outputs in one pass, usually\nleft-to-right. However, this is in contrast with a more natural approach that\nhumans use in generating content; iterative refinement and editing. Recent work\nhas introduced edit-based models for various tasks (such as neural machine\ntranslation and text style transfer), but these generally model a single edit\nstep. In this work, we propose modeling editing processes, modeling the whole\nprocess of iteratively generating sequences. We form a conceptual framework to\ndescribe the likelihood of multi-step edits, and describe neural models that\ncan learn a generative model of sequences based on these multistep edits. We\nintroduce baseline results and metrics on this task, finding that modeling\nediting processes improves performance on a variety of axes on both our\nproposed task and related downstream tasks compared to previous single-step\nmodels of edits.",
+ "authors": "Machel Reid, Graham Neubig",
+ "published": "2022-05-24",
+ "updated": "2022-05-24",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.LG"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2210.12378v2",
+ "title": "Correcting Diverse Factual Errors in Abstractive Summarization via Post-Editing and Language Model Infilling",
+ "abstract": "Abstractive summarization models often generate inconsistent summaries\ncontaining factual errors or hallucinated content. Recent works focus on\ncorrecting factual errors in generated summaries via post-editing. Such\ncorrection models are trained using adversarial non-factual summaries\nconstructed using heuristic rules for injecting errors. However, generating\nnon-factual summaries using heuristics often does not generalize well to actual\nmodel errors. In this work, we propose to generate hard, representative\nsynthetic examples of non-factual summaries through infilling language models.\nWith this data, we train a more robust fact-correction model to post-edit the\nsummaries to improve factual consistency. Through quantitative and qualitative\nexperiments on two popular summarization datasets -- CNN/DM and XSum -- we show\nthat our approach vastly outperforms prior methods in correcting erroneous\nsummaries. Our model -- FactEdit -- improves factuality scores by over ~11\npoints on CNN/DM and over ~31 points on XSum on average across multiple\nsummarization models, producing more factual summaries while maintaining\ncompetitive summarization quality.",
+ "authors": "Vidhisha Balachandran, Hannaneh Hajishirzi, William W. Cohen, Yulia Tsvetkov",
+ "published": "2022-10-22",
+ "updated": "2022-10-31",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2009.11136v1",
+ "title": "Seq2Edits: Sequence Transduction Using Span-level Edit Operations",
+ "abstract": "We propose Seq2Edits, an open-vocabulary approach to sequence editing for\nnatural language processing (NLP) tasks with a high degree of overlap between\ninput and output texts. In this approach, each sequence-to-sequence\ntransduction is represented as a sequence of edit operations, where each\noperation either replaces an entire source span with target tokens or keeps it\nunchanged. We evaluate our method on five NLP tasks (text normalization,\nsentence fusion, sentence splitting & rephrasing, text simplification, and\ngrammatical error correction) and report competitive results across the board.\nFor grammatical error correction, our method speeds up inference by up to 5.2x\ncompared to full sequence models because inference time depends on the number\nof edits rather than the number of target tokens. For text normalization,\nsentence fusion, and grammatical error correction, our approach improves\nexplainability by associating each edit operation with a human-readable tag.",
+ "authors": "Felix Stahlberg, Shankar Kumar",
+ "published": "2020-09-23",
+ "updated": "2020-09-23",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.09123v1",
+ "title": "Provable Interactive Learning with Hindsight Instruction Feedback",
+ "abstract": "We study interactive learning in a setting where the agent has to generate a\nresponse (e.g., an action or trajectory) given a context and an instruction. In\ncontrast, to typical approaches that train the system using reward or expert\nsupervision on response, we study learning with hindsight instruction where a\nteacher provides an instruction that is most suitable for the agent's generated\nresponse. This hindsight labeling of instruction is often easier to provide\nthan providing expert supervision of the optimal response which may require\nexpert knowledge or can be impractical to elicit. We initiate the theoretical\nanalysis of interactive learning with hindsight labeling. We first provide a\nlower bound showing that in general, the regret of any algorithm must scale\nwith the size of the agent's response space. We then study a specialized\nsetting where the underlying instruction-response distribution can be\ndecomposed as a low-rank matrix. We introduce an algorithm called LORIL for\nthis setting and show that its regret scales as $\\sqrt{T}$ where $T$ is the\nnumber of rounds and depends on the intrinsic rank but does not depend on the\nsize of the agent's response space. We provide experiments in two domains\nshowing that LORIL outperforms baselines even when the low-rank assumption is\nviolated.",
+ "authors": "Dipendra Misra, Aldo Pacchiano, Robert E. Schapire",
+ "published": "2024-04-14",
+ "updated": "2024-04-14",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.AI",
+ "cs.CL",
+ "stat.ML"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2005.13209v2",
+ "title": "A Structural Model for Contextual Code Changes",
+ "abstract": "We address the problem of predicting edit completions based on a learned\nmodel that was trained on past edits. Given a code snippet that is partially\nedited, our goal is to predict a completion of the edit for the rest of the\nsnippet. We refer to this task as the EditCompletion task and present a novel\napproach for tackling it. The main idea is to directly represent structural\nedits. This allows us to model the likelihood of the edit itself, rather than\nlearning the likelihood of the edited code. We represent an edit operation as a\npath in the program's Abstract Syntax Tree (AST), originating from the source\nof the edit to the target of the edit. Using this representation, we present a\npowerful and lightweight neural model for the EditCompletion task.\n We conduct a thorough evaluation, comparing our approach to a variety of\nrepresentation and modeling approaches that are driven by multiple strong\nmodels such as LSTMs, Transformers, and neural CRFs. Our experiments show that\nour model achieves a 28% relative gain over state-of-the-art sequential models\nand 2x higher accuracy than syntactic models that learn to generate the edited\ncode, as opposed to modeling the edits directly.\n Our code, dataset, and trained models are publicly available at\nhttps://github.com/tech-srl/c3po/ .",
+ "authors": "Shaked Brody, Uri Alon, Eran Yahav",
+ "published": "2020-05-27",
+ "updated": "2020-10-12",
+ "primary_cat": "cs.PL",
+ "cats": [
+ "cs.PL",
+ "cs.LG"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2212.08073v1",
+ "title": "Constitutional AI: Harmlessness from AI Feedback",
+ "abstract": "As AI systems become more capable, we would like to enlist their help to\nsupervise other AIs. We experiment with methods for training a harmless AI\nassistant through self-improvement, without any human labels identifying\nharmful outputs. The only human oversight is provided through a list of rules\nor principles, and so we refer to the method as 'Constitutional AI'. The\nprocess involves both a supervised learning and a reinforcement learning phase.\nIn the supervised phase we sample from an initial model, then generate\nself-critiques and revisions, and then finetune the original model on revised\nresponses. In the RL phase, we sample from the finetuned model, use a model to\nevaluate which of the two samples is better, and then train a preference model\nfrom this dataset of AI preferences. We then train with RL using the preference\nmodel as the reward signal, i.e. we use 'RL from AI Feedback' (RLAIF). As a\nresult we are able to train a harmless but non-evasive AI assistant that\nengages with harmful queries by explaining its objections to them. Both the SL\nand RL methods can leverage chain-of-thought style reasoning to improve the\nhuman-judged performance and transparency of AI decision making. These methods\nmake it possible to control AI behavior more precisely and with far fewer human\nlabels.",
+ "authors": "Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen, Catherine Olsson, Christopher Olah, Danny Hernandez, Dawn Drain, Deep Ganguli, Dustin Li, Eli Tran-Johnson, Ethan Perez, Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua Landau, Kamal Ndousse, Kamile Lukosuite, Liane Lovitt, Michael Sellitto, Nelson Elhage, Nicholas Schiefer, Noemi Mercado, Nova DasSarma, Robert Lasenby, Robin Larson, Sam Ringer, Scott Johnston, Shauna Kravec, Sheer El Showk, Stanislav Fort, Tamera Lanham, Timothy Telleen-Lawton, Tom Conerly, Tom Henighan, Tristan Hume, Samuel R. Bowman, Zac Hatfield-Dodds, Ben Mann, Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom Brown, Jared Kaplan",
+ "published": "2022-12-15",
+ "updated": "2022-12-15",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2004.09143v4",
+ "title": "Variational Inference for Learning Representations of Natural Language Edits",
+ "abstract": "Document editing has become a pervasive component of the production of\ninformation, with version control systems enabling edits to be efficiently\nstored and applied. In light of this, the task of learning distributed\nrepresentations of edits has been recently proposed. With this in mind, we\npropose a novel approach that employs variational inference to learn a\ncontinuous latent space of vector representations to capture the underlying\nsemantic information with regard to the document editing process. We achieve\nthis by introducing a latent variable to explicitly model the aforementioned\nfeatures. This latent variable is then combined with a document representation\nto guide the generation of an edited version of this document. Additionally, to\nfacilitate standardized automatic evaluation of edit representations, which has\nheavily relied on direct human input thus far, we also propose a suite of\ndownstream tasks, PEER, specifically designed to measure the quality of edit\nrepresentations in the context of natural language processing.",
+ "authors": "Edison Marrese-Taylor, Machel Reid, Yutaka Matsuo",
+ "published": "2020-04-20",
+ "updated": "2021-01-04",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2205.12548v3",
+ "title": "RLPrompt: Optimizing Discrete Text Prompts with Reinforcement Learning",
+ "abstract": "Prompting has shown impressive success in enabling large pretrained language\nmodels (LMs) to perform diverse NLP tasks, especially when only few downstream\ndata are available. Automatically finding the optimal prompt for each task,\nhowever, is challenging. Most existing work resorts to tuning soft prompt\n(e.g., embeddings) which falls short of interpretability, reusability across\nLMs, and applicability when gradients are not accessible. Discrete prompt, on\nthe other hand, is difficult to optimize, and is often created by \"enumeration\n(e.g., paraphrasing)-then-selection\" heuristics that do not explore the prompt\nspace systematically. This paper proposes RLPrompt, an efficient discrete\nprompt optimization approach with reinforcement learning (RL). RLPrompt\nformulates a parameter-efficient policy network that generates the desired\ndiscrete prompt after training with reward. To overcome the complexity and\nstochasticity of reward signals by the large LM environment, we incorporate\neffective reward stabilization that substantially enhances the training\nefficiency. RLPrompt is flexibly applicable to different types of LMs, such as\nmasked (e.g., BERT) and left-to-right models (e.g., GPTs), for both\nclassification and generation tasks. Experiments on few-shot classification and\nunsupervised text style transfer show superior performance over a wide range of\nexisting finetuning or prompting methods. Interestingly, the resulting\noptimized prompts are often ungrammatical gibberish text; and surprisingly,\nthose gibberish prompts are transferrable between different LMs to retain\nsignificant performance, indicating LM prompting may not follow human language\npatterns.",
+ "authors": "Mingkai Deng, Jianyu Wang, Cheng-Ping Hsieh, Yihan Wang, Han Guo, Tianmin Shu, Meng Song, Eric P. Xing, Zhiting Hu",
+ "published": "2022-05-25",
+ "updated": "2022-10-22",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.LG"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/1709.08878v2",
+ "title": "Generating Sentences by Editing Prototypes",
+ "abstract": "We propose a new generative model of sentences that first samples a prototype\nsentence from the training corpus and then edits it into a new sentence.\nCompared to traditional models that generate from scratch either left-to-right\nor by first sampling a latent sentence vector, our prototype-then-edit model\nimproves perplexity on language modeling and generates higher quality outputs\naccording to human evaluation. Furthermore, the model gives rise to a latent\nedit vector that captures interpretable semantics such as sentence similarity\nand sentence-level analogies.",
+ "authors": "Kelvin Guu, Tatsunori B. Hashimoto, Yonatan Oren, Percy Liang",
+ "published": "2017-09-26",
+ "updated": "2018-09-07",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.LG",
+ "cs.NE",
+ "stat.ML"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2203.02155v1",
+ "title": "Training language models to follow instructions with human feedback",
+ "abstract": "Making language models bigger does not inherently make them better at\nfollowing a user's intent. For example, large language models can generate\noutputs that are untruthful, toxic, or simply not helpful to the user. In other\nwords, these models are not aligned with their users. In this paper, we show an\navenue for aligning language models with user intent on a wide range of tasks\nby fine-tuning with human feedback. Starting with a set of labeler-written\nprompts and prompts submitted through the OpenAI API, we collect a dataset of\nlabeler demonstrations of the desired model behavior, which we use to fine-tune\nGPT-3 using supervised learning. We then collect a dataset of rankings of model\noutputs, which we use to further fine-tune this supervised model using\nreinforcement learning from human feedback. We call the resulting models\nInstructGPT. In human evaluations on our prompt distribution, outputs from the\n1.3B parameter InstructGPT model are preferred to outputs from the 175B GPT-3,\ndespite having 100x fewer parameters. Moreover, InstructGPT models show\nimprovements in truthfulness and reductions in toxic output generation while\nhaving minimal performance regressions on public NLP datasets. Even though\nInstructGPT still makes simple mistakes, our results show that fine-tuning with\nhuman feedback is a promising direction for aligning language models with human\nintent.",
+ "authors": "Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan Lowe",
+ "published": "2022-03-04",
+ "updated": "2022-03-04",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.LG"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.09180v1",
+ "title": "PEARL: Personalizing Large Language Model Writing Assistants with Generation-Calibrated Retrievers",
+ "abstract": "Powerful large language models have facilitated the development of writing\nassistants that promise to significantly improve the quality and efficiency of\ncomposition and communication. However, a barrier to effective assistance is\nthe lack of personalization in LLM outputs to the author's communication style\nand specialized knowledge. In this paper, we address this challenge by\nproposing PEARL, a retrieval-augmented LLM writing assistant personalized with\na generation-calibrated retriever. Our retriever is trained to select historic\nuser-authored documents for prompt augmentation, such that they are likely to\nbest personalize LLM generations for a user request. We propose two key\nnovelties for training our retriever: 1) A training data selection method that\nidentifies user requests likely to benefit from personalization and documents\nthat provide that benefit; and 2) A scale-calibrating KL-divergence objective\nthat ensures that our retriever closely tracks the benefit of a document for\npersonalized generation. We demonstrate the effectiveness of PEARL in\ngenerating personalized workplace social media posts and Reddit comments.\nFinally, we showcase the potential of a generation-calibrated retriever to\ndouble as a performance predictor and further improve low-quality generations\nvia LLM chaining.",
+ "authors": "Sheshera Mysore, Zhuoran Lu, Mengting Wan, Longqi Yang, Steve Menezes, Tina Baghaee, Emmanuel Barajas Gonzalez, Jennifer Neville, Tara Safavi",
+ "published": "2023-11-15",
+ "updated": "2023-11-15",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.HC",
+ "cs.IR"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2101.12087v2",
+ "title": "Learning Structural Edits via Incremental Tree Transformations",
+ "abstract": "While most neural generative models generate outputs in a single pass, the\nhuman creative process is usually one of iterative building and refinement.\nRecent work has proposed models of editing processes, but these mostly focus on\nediting sequential data and/or only model a single editing pass. In this paper,\nwe present a generic model for incremental editing of structured data (i.e.,\n\"structural edits\"). Particularly, we focus on tree-structured data, taking\nabstract syntax trees of computer programs as our canonical example. Our editor\nlearns to iteratively generate tree edits (e.g., deleting or adding a subtree)\nand applies them to the partially edited data, thereby the entire editing\nprocess can be formulated as consecutive, incremental tree transformations. To\nshow the unique benefits of modeling tree edits directly, we further propose a\nnovel edit encoder for learning to represent edits, as well as an imitation\nlearning method that allows the editor to be more robust. We evaluate our\nproposed editor on two source code edit datasets, where results show that, with\nthe proposed edit encoder, our editor significantly improves accuracy over\nprevious approaches that generate the edited program directly in one pass.\nFinally, we demonstrate that training our editor to imitate experts and correct\nits mistakes dynamically can further improve its performance.",
+ "authors": "Ziyu Yao, Frank F. Xu, Pengcheng Yin, Huan Sun, Graham Neubig",
+ "published": "2021-01-28",
+ "updated": "2021-03-05",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.SE"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2003.10687v1",
+ "title": "Felix: Flexible Text Editing Through Tagging and Insertion",
+ "abstract": "We present Felix --- a flexible text-editing approach for generation,\ndesigned to derive the maximum benefit from the ideas of decoding with\nbi-directional contexts and self-supervised pre-training. In contrast to\nconventional sequence-to-sequence (seq2seq) models, Felix is efficient in\nlow-resource settings and fast at inference time, while being capable of\nmodeling flexible input-output transformations. We achieve this by decomposing\nthe text-editing task into two sub-tasks: tagging to decide on the subset of\ninput tokens and their order in the output text and insertion to in-fill the\nmissing tokens in the output not present in the input. The tagging model\nemploys a novel Pointer mechanism, while the insertion model is based on a\nMasked Language Model. Both of these models are chosen to be non-autoregressive\nto guarantee faster inference. Felix performs favourably when compared to\nrecent text-editing methods and strong seq2seq baselines when evaluated on four\nNLG tasks: Sentence Fusion, Machine Translation Automatic Post-Editing,\nSummarization, and Text Simplification.",
+ "authors": "Jonathan Mallinson, Aliaksei Severyn, Eric Malmi, Guillermo Garrido",
+ "published": "2020-03-24",
+ "updated": "2020-03-24",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2301.00355v2",
+ "title": "Second Thoughts are Best: Learning to Re-Align With Human Values from Text Edits",
+ "abstract": "We present Second Thought, a new learning paradigm that enables language\nmodels (LMs) to re-align with human values. By modeling the chain-of-edits\nbetween value-unaligned and value-aligned text, with LM fine-tuning and\nadditional refinement through reinforcement learning, Second Thought not only\nachieves superior performance in three value alignment benchmark datasets but\nalso shows strong human-value transfer learning ability in few-shot scenarios.\nThe generated editing steps also offer better interpretability and ease for\ninteractive error correction. Extensive human evaluations further confirm its\neffectiveness.",
+ "authors": "Ruibo Liu, Chenyan Jia, Ge Zhang, Ziyu Zhuang, Tony X Liu, Soroush Vosoughi",
+ "published": "2023-01-01",
+ "updated": "2023-01-05",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.CY"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2204.03025v1",
+ "title": "Using Interactive Feedback to Improve the Accuracy and Explainability of Question Answering Systems Post-Deployment",
+ "abstract": "Most research on question answering focuses on the pre-deployment stage;\ni.e., building an accurate model for deployment. In this paper, we ask the\nquestion: Can we improve QA systems further \\emph{post-}deployment based on\nuser interactions? We focus on two kinds of improvements: 1) improving the QA\nsystem's performance itself, and 2) providing the model with the ability to\nexplain the correctness or incorrectness of an answer. We collect a\nretrieval-based QA dataset, FeedbackQA, which contains interactive feedback\nfrom users. We collect this dataset by deploying a base QA system to\ncrowdworkers who then engage with the system and provide feedback on the\nquality of its answers. The feedback contains both structured ratings and\nunstructured natural language explanations. We train a neural model with this\nfeedback data that can generate explanations and re-score answer candidates. We\nshow that feedback data not only improves the accuracy of the deployed QA\nsystem but also other stronger non-deployed systems. The generated explanations\nalso help users make informed decisions about the correctness of answers.\nProject page: https://mcgill-nlp.github.io/feedbackqa/",
+ "authors": "Zichao Li, Prakhar Sharma, Xing Han Lu, Jackie C. K. Cheung, Siva Reddy",
+ "published": "2022-04-06",
+ "updated": "2022-04-06",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2203.03802v2",
+ "title": "Understanding Iterative Revision from Human-Written Text",
+ "abstract": "Writing is, by nature, a strategic, adaptive, and more importantly, an\niterative process. A crucial part of writing is editing and revising the text.\nPrevious works on text revision have focused on defining edit intention\ntaxonomies within a single domain or developing computational models with a\nsingle level of edit granularity, such as sentence-level edits, which differ\nfrom human's revision cycles. This work describes IteraTeR: the first\nlarge-scale, multi-domain, edit-intention annotated corpus of iteratively\nrevised text. In particular, IteraTeR is collected based on a new framework to\ncomprehensively model the iterative text revisions that generalize to various\ndomains of formal writing, edit intentions, revision depths, and granularities.\nWhen we incorporate our annotated edit intentions, both generative and\nedit-based text revision models significantly improve automatic evaluations.\nThrough our work, we better understand the text revision process, making vital\nconnections between edit intentions and writing quality, enabling the creation\nof diverse corpora to support computational modeling of iterative text\nrevisions.",
+ "authors": "Wanyu Du, Vipul Raheja, Dhruv Kumar, Zae Myung Kim, Melissa Lopez, Dongyeop Kang",
+ "published": "2022-03-08",
+ "updated": "2022-03-16",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.HC"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.00955v2",
+ "title": "Bridging the Gap: A Survey on Integrating (Human) Feedback for Natural Language Generation",
+ "abstract": "Many recent advances in natural language generation have been fueled by\ntraining large language models on internet-scale data. However, this paradigm\ncan lead to models that generate toxic, inaccurate, and unhelpful content, and\nautomatic evaluation metrics often fail to identify these behaviors. As models\nbecome more capable, human feedback is an invaluable signal for evaluating and\nimproving models. This survey aims to provide an overview of the recent\nresearch that has leveraged human feedback to improve natural language\ngeneration. First, we introduce an encompassing formalization of feedback, and\nidentify and organize existing research into a taxonomy following this\nformalization. Next, we discuss how feedback can be described by its format and\nobjective, and cover the two approaches proposed to use feedback (either for\ntraining or decoding): directly using the feedback or training feedback models.\nWe also discuss existing datasets for human-feedback data collection, and\nconcerns surrounding feedback collection. Finally, we provide an overview of\nthe nascent field of AI feedback, which exploits large language models to make\njudgments based on a set of principles and minimize the need for human\nintervention.",
+ "authors": "Patrick Fernandes, Aman Madaan, Emmy Liu, Ant\u00f3nio Farinhas, Pedro Henrique Martins, Amanda Bertsch, Jos\u00e9 G. C. de Souza, Shuyan Zhou, Tongshuang Wu, Graham Neubig, Andr\u00e9 F. T. Martins",
+ "published": "2023-05-01",
+ "updated": "2023-06-01",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.LG"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2009.01325v3",
+ "title": "Learning to summarize from human feedback",
+ "abstract": "As language models become more powerful, training and evaluation are\nincreasingly bottlenecked by the data and metrics used for a particular task.\nFor example, summarization models are often trained to predict human reference\nsummaries and evaluated using ROUGE, but both of these metrics are rough\nproxies for what we really care about -- summary quality. In this work, we show\nthat it is possible to significantly improve summary quality by training a\nmodel to optimize for human preferences. We collect a large, high-quality\ndataset of human comparisons between summaries, train a model to predict the\nhuman-preferred summary, and use that model as a reward function to fine-tune a\nsummarization policy using reinforcement learning. We apply our method to a\nversion of the TL;DR dataset of Reddit posts and find that our models\nsignificantly outperform both human reference summaries and much larger models\nfine-tuned with supervised learning alone. Our models also transfer to CNN/DM\nnews articles, producing summaries nearly as good as the human reference\nwithout any news-specific fine-tuning. We conduct extensive analyses to\nunderstand our human feedback dataset and fine-tuned models We establish that\nour reward model generalizes to new datasets, and that optimizing our reward\nmodel results in better summaries than optimizing ROUGE according to humans. We\nhope the evidence from our paper motivates machine learning researchers to pay\ncloser attention to how their training loss affects the model behavior they\nactually want.",
+ "authors": "Nisan Stiennon, Long Ouyang, Jeff Wu, Daniel M. Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, Paul Christiano",
+ "published": "2020-09-02",
+ "updated": "2022-02-15",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.LG"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.04792v2",
+ "title": "Direct Language Model Alignment from Online AI Feedback",
+ "abstract": "Direct alignment from preferences (DAP) methods, such as DPO, have recently\nemerged as efficient alternatives to reinforcement learning from human feedback\n(RLHF), that do not require a separate reward model. However, the preference\ndatasets used in DAP methods are usually collected ahead of training and never\nupdated, thus the feedback is purely offline. Moreover, responses in these\ndatasets are often sampled from a language model distinct from the one being\naligned, and since the model evolves over training, the alignment phase is\ninevitably off-policy. In this study, we posit that online feedback is key and\nimproves DAP methods. Our method, online AI feedback (OAIF), uses an LLM as\nannotator: on each training iteration, we sample two responses from the current\nmodel and prompt the LLM annotator to choose which one is preferred, thus\nproviding online feedback. Despite its simplicity, we demonstrate via human\nevaluation in several tasks that OAIF outperforms both offline DAP and RLHF\nmethods. We further show that the feedback leveraged in OAIF is easily\ncontrollable, via instruction prompts to the LLM annotator.",
+ "authors": "Shangmin Guo, Biao Zhang, Tianlin Liu, Tianqi Liu, Misha Khalman, Felipe Llinares, Alexandre Rame, Thomas Mesnard, Yao Zhao, Bilal Piot, Johan Ferret, Mathieu Blondel",
+ "published": "2024-02-07",
+ "updated": "2024-02-29",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI",
+ "cs.CL",
+ "cs.HC"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2212.01350v1",
+ "title": "Improving Iterative Text Revision by Learning Where to Edit from Other Revision Tasks",
+ "abstract": "Iterative text revision improves text quality by fixing grammatical errors,\nrephrasing for better readability or contextual appropriateness, or\nreorganizing sentence structures throughout a document. Most recent research\nhas focused on understanding and classifying different types of edits in the\niterative revision process from human-written text instead of building accurate\nand robust systems for iterative text revision. In this work, we aim to build\nan end-to-end text revision system that can iteratively generate helpful edits\nby explicitly detecting editable spans (where-to-edit) with their corresponding\nedit intents and then instructing a revision model to revise the detected edit\nspans. Leveraging datasets from other related text editing NLP tasks, combined\nwith the specification of editable spans, leads our system to more accurately\nmodel the process of iterative text refinement, as evidenced by empirical\nresults and human evaluations. Our system significantly outperforms previous\nbaselines on our text revision tasks and other standard text revision tasks,\nincluding grammatical error correction, text simplification, sentence fusion,\nand style transfer. Through extensive qualitative and quantitative analysis, we\nmake vital connections between edit intentions and writing quality, and better\ncomputational modeling of iterative text revisions.",
+ "authors": "Zae Myung Kim, Wanyu Du, Vipul Raheja, Dhruv Kumar, Dongyeop Kang",
+ "published": "2022-12-02",
+ "updated": "2022-12-02",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2210.15893v1",
+ "title": "When Life Gives You Lemons, Make Cherryade: Converting Feedback from Bad Responses into Good Labels",
+ "abstract": "Deployed dialogue agents have the potential to integrate human feedback to\ncontinuously improve themselves. However, humans may not always provide\nexplicit signals when the chatbot makes mistakes during interactions. In this\nwork, we propose Juicer, a framework to make use of both binary and free-form\ntextual human feedback. It works by: (i) extending sparse binary feedback by\ntraining a satisfaction classifier to label the unlabeled data; and (ii)\ntraining a reply corrector to map the bad replies to good ones. We find that\naugmenting training with model-corrected replies improves the final dialogue\nmodel, and we can further improve performance by using both positive and\nnegative replies through the recently proposed Director model.",
+ "authors": "Weiyan Shi, Emily Dinan, Kurt Shuster, Jason Weston, Jing Xu",
+ "published": "2022-10-28",
+ "updated": "2022-10-28",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2210.16886v1",
+ "title": "DiffusER: Discrete Diffusion via Edit-based Reconstruction",
+ "abstract": "In text generation, models that generate text from scratch one token at a\ntime are currently the dominant paradigm. Despite being performant, these\nmodels lack the ability to revise existing text, which limits their usability\nin many practical scenarios. We look to address this, with DiffusER (Diffusion\nvia Edit-based Reconstruction), a new edit-based generative model for text\nbased on denoising diffusion models -- a class of models that use a Markov\nchain of denoising steps to incrementally generate data. DiffusER is not only a\nstrong generative model in general, rivalling autoregressive models on several\ntasks spanning machine translation, summarization, and style transfer; it can\nalso perform other varieties of generation that standard autoregressive models\nare not well-suited for. For instance, we demonstrate that DiffusER makes it\npossible for a user to condition generation on a prototype, or an incomplete\nsequence, and continue revising based on previous edit steps.",
+ "authors": "Machel Reid, Vincent J. Hellendoorn, Graham Neubig",
+ "published": "2022-10-30",
+ "updated": "2022-10-30",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.LG"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.19204v1",
+ "title": "SWiPE: A Dataset for Document-Level Simplification of Wikipedia Pages",
+ "abstract": "Text simplification research has mostly focused on sentence-level\nsimplification, even though many desirable edits - such as adding relevant\nbackground information or reordering content - may require document-level\ncontext. Prior work has also predominantly framed simplification as a\nsingle-step, input-to-output task, only implicitly modeling the fine-grained,\nspan-level edits that elucidate the simplification process. To address both\ngaps, we introduce the SWiPE dataset, which reconstructs the document-level\nediting process from English Wikipedia (EW) articles to paired Simple Wikipedia\n(SEW) articles. In contrast to prior work, SWiPE leverages the entire revision\nhistory when pairing pages in order to better identify simplification edits. We\nwork with Wikipedia editors to annotate 5,000 EW-SEW document pairs, labeling\nmore than 40,000 edits with proposed 19 categories. To scale our efforts, we\npropose several models to automatically label edits, achieving an F-1 score of\nup to 70.6, indicating that this is a tractable but challenging NLU task.\nFinally, we categorize the edits produced by several simplification models and\nfind that SWiPE-trained models generate more complex edits while reducing\nunwanted edits.",
+ "authors": "Philippe Laban, Jesse Vig, Wojciech Kryscinski, Shafiq Joty, Caiming Xiong, Chien-Sheng Wu",
+ "published": "2023-05-30",
+ "updated": "2023-05-30",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2405.01769v1",
+ "title": "A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law",
+ "abstract": "In the fast-evolving domain of artificial intelligence, large language models\n(LLMs) such as GPT-3 and GPT-4 are revolutionizing the landscapes of finance,\nhealthcare, and law: domains characterized by their reliance on professional\nexpertise, challenging data acquisition, high-stakes, and stringent regulatory\ncompliance. This survey offers a detailed exploration of the methodologies,\napplications, challenges, and forward-looking opportunities of LLMs within\nthese high-stakes sectors. We highlight the instrumental role of LLMs in\nenhancing diagnostic and treatment methodologies in healthcare, innovating\nfinancial analytics, and refining legal interpretation and compliance\nstrategies. Moreover, we critically examine the ethics for LLM applications in\nthese fields, pointing out the existing ethical concerns and the need for\ntransparent, fair, and robust AI systems that respect regulatory norms. By\npresenting a thorough review of current literature and practical applications,\nwe showcase the transformative impact of LLMs, and outline the imperative for\ninterdisciplinary cooperation, methodological advancements, and ethical\nvigilance. Through this lens, we aim to spark dialogue and inspire future\nresearch dedicated to maximizing the benefits of LLMs while mitigating their\nrisks in these precision-dependent sectors. To facilitate future research on\nLLMs in these critical societal domains, we also initiate a reading list that\ntracks the latest advancements under this topic, which will be continually\nupdated: \\url{https://github.com/czyssrs/LLM_X_papers}.",
+ "authors": "Zhiyu Zoey Chen, Jing Ma, Xinlu Zhang, Nan Hao, An Yan, Armineh Nourbakhsh, Xianjun Yang, Julian McAuley, Linda Petzold, William Yang Wang",
+ "published": "2024-05-02",
+ "updated": "2024-05-02",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.14208v2",
+ "title": "Content Conditional Debiasing for Fair Text Embedding",
+ "abstract": "Mitigating biases in machine learning models has gained increasing attention\nin Natural Language Processing (NLP). Yet, only a few studies focus on fair\ntext embeddings, which are crucial yet challenging for real-world applications.\nIn this paper, we propose a novel method for learning fair text embeddings. We\nachieve fairness while maintaining utility trade-off by ensuring conditional\nindependence between sensitive attributes and text embeddings conditioned on\nthe content. Specifically, we enforce that embeddings of texts with different\nsensitive attributes but identical content maintain the same distance toward\nthe embedding of their corresponding neutral text. Furthermore, we address the\nissue of lacking proper training data by using Large Language Models (LLMs) to\naugment texts into different sensitive groups. Our extensive evaluations\ndemonstrate that our approach effectively improves fairness while preserving\nthe utility of embeddings, representing a pioneering effort in achieving\nconditional independence for fair text embeddings.",
+ "authors": "Wenlong Deng, Blair Chen, Xiaoxiao Li, Christos Thrampoulidis",
+ "published": "2024-02-22",
+ "updated": "2024-02-23",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.CY",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2303.01248v3",
+ "title": "Can ChatGPT Assess Human Personalities? A General Evaluation Framework",
+ "abstract": "Large Language Models (LLMs) especially ChatGPT have produced impressive\nresults in various areas, but their potential human-like psychology is still\nlargely unexplored. Existing works study the virtual personalities of LLMs but\nrarely explore the possibility of analyzing human personalities via LLMs. This\npaper presents a generic evaluation framework for LLMs to assess human\npersonalities based on Myers Briggs Type Indicator (MBTI) tests. Specifically,\nwe first devise unbiased prompts by randomly permuting options in MBTI\nquestions and adopt the average testing result to encourage more impartial\nanswer generation. Then, we propose to replace the subject in question\nstatements to enable flexible queries and assessments on different subjects\nfrom LLMs. Finally, we re-formulate the question instructions in a manner of\ncorrectness evaluation to facilitate LLMs to generate clearer responses. The\nproposed framework enables LLMs to flexibly assess personalities of different\ngroups of people. We further propose three evaluation metrics to measure the\nconsistency, robustness, and fairness of assessment results from\nstate-of-the-art LLMs including ChatGPT and GPT-4. Our experiments reveal\nChatGPT's ability to assess human personalities, and the average results\ndemonstrate that it can achieve more consistent and fairer assessments in spite\nof lower robustness against prompt biases compared with InstructGPT.",
+ "authors": "Haocong Rao, Cyril Leung, Chunyan Miao",
+ "published": "2023-03-01",
+ "updated": "2023-10-13",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2308.05345v3",
+ "title": "RTLLM: An Open-Source Benchmark for Design RTL Generation with Large Language Model",
+ "abstract": "Inspired by the recent success of large language models (LLMs) like ChatGPT,\nresearchers start to explore the adoption of LLMs for agile hardware design,\nsuch as generating design RTL based on natural-language instructions. However,\nin existing works, their target designs are all relatively simple and in a\nsmall scale, and proposed by the authors themselves, making a fair comparison\namong different LLM solutions challenging. In addition, many prior works only\nfocus on the design correctness, without evaluating the design qualities of\ngenerated design RTL. In this work, we propose an open-source benchmark named\nRTLLM, for generating design RTL with natural language instructions. To\nsystematically evaluate the auto-generated design RTL, we summarized three\nprogressive goals, named syntax goal, functionality goal, and design quality\ngoal. This benchmark can automatically provide a quantitative evaluation of any\ngiven LLM-based solution. Furthermore, we propose an easy-to-use yet\nsurprisingly effective prompt engineering technique named self-planning, which\nproves to significantly boost the performance of GPT-3.5 in our proposed\nbenchmark.",
+ "authors": "Yao Lu, Shang Liu, Qijun Zhang, Zhiyao Xie",
+ "published": "2023-08-10",
+ "updated": "2023-11-11",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.AR"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.09606v1",
+ "title": "Large Language Models and Causal Inference in Collaboration: A Comprehensive Survey",
+ "abstract": "Causal inference has shown potential in enhancing the predictive accuracy,\nfairness, robustness, and explainability of Natural Language Processing (NLP)\nmodels by capturing causal relationships among variables. The emergence of\ngenerative Large Language Models (LLMs) has significantly impacted various NLP\ndomains, particularly through their advanced reasoning capabilities. This\nsurvey focuses on evaluating and improving LLMs from a causal view in the\nfollowing areas: understanding and improving the LLMs' reasoning capacity,\naddressing fairness and safety issues in LLMs, complementing LLMs with\nexplanations, and handling multimodality. Meanwhile, LLMs' strong reasoning\ncapacities can in turn contribute to the field of causal inference by aiding\ncausal relationship discovery and causal effect estimations. This review\nexplores the interplay between causal inference frameworks and LLMs from both\nperspectives, emphasizing their collective potential to further the development\nof more advanced and equitable artificial intelligence systems.",
+ "authors": "Xiaoyu Liu, Paiheng Xu, Junda Wu, Jiaxin Yuan, Yifan Yang, Yuhang Zhou, Fuxiao Liu, Tianrui Guan, Haoliang Wang, Tong Yu, Julian McAuley, Wei Ai, Furong Huang",
+ "published": "2024-03-14",
+ "updated": "2024-03-14",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.09447v2",
+ "title": "How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities",
+ "abstract": "The rapid progress in open-source Large Language Models (LLMs) is\nsignificantly driving AI development forward. However, there is still a limited\nunderstanding of their trustworthiness. Deploying these models at scale without\nsufficient trustworthiness can pose significant risks, highlighting the need to\nuncover these issues promptly. In this work, we conduct an adversarial\nassessment of open-source LLMs on trustworthiness, scrutinizing them across\neight different aspects including toxicity, stereotypes, ethics, hallucination,\nfairness, sycophancy, privacy, and robustness against adversarial\ndemonstrations. We propose advCoU, an extended Chain of Utterances-based (CoU)\nprompting strategy by incorporating carefully crafted malicious demonstrations\nfor trustworthiness attack. Our extensive experiments encompass recent and\nrepresentative series of open-source LLMs, including Vicuna, MPT, Falcon,\nMistral, and Llama 2. The empirical outcomes underscore the efficacy of our\nattack strategy across diverse aspects. More interestingly, our result analysis\nreveals that models with superior performance in general NLP tasks do not\nalways have greater trustworthiness; in fact, larger models can be more\nvulnerable to attacks. Additionally, models that have undergone instruction\ntuning, focusing on instruction following, tend to be more susceptible,\nalthough fine-tuning LLMs for safety alignment proves effective in mitigating\nadversarial trustworthiness attacks.",
+ "authors": "Lingbo Mo, Boshi Wang, Muhao Chen, Huan Sun",
+ "published": "2023-11-15",
+ "updated": "2024-04-02",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.11595v3",
+ "title": "Examining Inter-Consistency of Large Language Models Collaboration: An In-depth Analysis via Debate",
+ "abstract": "Large Language Models (LLMs) have shown impressive capabilities in various\napplications, but they still face various inconsistency issues. Existing works\nprimarily focus on the inconsistency issues within a single LLM, while we\ncomplementarily explore the inter-consistency among multiple LLMs for\ncollaboration. To examine whether LLMs can collaborate effectively to achieve a\nconsensus for a shared goal, we focus on commonsense reasoning, and introduce a\nformal debate framework (FORD) to conduct a three-stage debate among LLMs with\nreal-world scenarios alignment: fair debate, mismatched debate, and roundtable\ndebate. Through extensive experiments on various datasets, LLMs can effectively\ncollaborate to reach a consensus despite noticeable inter-inconsistencies, but\nimbalances in their abilities can lead to domination by superior LLMs.\nLeveraging a more advanced LLM like GPT-4 as an authoritative judge can boost\ncollaboration performance. Our work contributes to understanding the\ninter-consistency among LLMs and lays the foundation for developing future\ncollaboration methods. Codes and data are available at\nhttps://github.com/Waste-Wood/FORD",
+ "authors": "Kai Xiong, Xiao Ding, Yixin Cao, Ting Liu, Bing Qin",
+ "published": "2023-05-19",
+ "updated": "2023-10-18",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.16343v2",
+ "title": "Evaluating, Understanding, and Improving Constrained Text Generation for Large Language Models",
+ "abstract": "Advancements in natural language generation (NLG) and large language models\n(LLMs) have led to proficient text generation in various tasks. However,\nintegrating intricate constraints into neural text generation, due to LLMs'\nopacity, remains challenging. This study investigates constrained text\ngeneration for LLMs, where predefined constraints are applied during LLM's\ngeneration process. Our research mainly focuses on mainstream open-source LLMs,\ncategorizing constraints into lexical, structural, and relation-based types. We\nalso present various benchmarks to facilitate fair evaluation. The study\naddresses some key research questions, including evaluating, understanding and\nimproving constrained text generation for LLMs. Results illuminate LLMs'\ncapacity and deficiency to incorporate constraints and provide insights for\nfuture developments in constrained text generation. Codes and datasets will be\nreleased upon acceptance.",
+ "authors": "Xiang Chen, Xiaojun Wan",
+ "published": "2023-10-25",
+ "updated": "2024-03-21",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.12090v1",
+ "title": "UP5: Unbiased Foundation Model for Fairness-aware Recommendation",
+ "abstract": "Recent advancements in foundation models such as large language models (LLM)\nhave propelled them to the forefront of recommender systems (RS). Moreover,\nfairness in RS is critical since many users apply it for decision-making and\ndemand fulfillment. However, at present, there is a lack of understanding\nregarding the level of fairness exhibited by recommendation foundation models\nand the appropriate methods for equitably treating different groups of users in\nfoundation models. In this paper, we focus on user-side unfairness problem and\nshow through a thorough examination that there is unfairness involved in LLMs\nthat lead to unfair recommendation results. To eliminate bias from LLM for\nfairness-aware recommendation, we introduce a novel Unbiased P5 (UP5)\nfoundation model based on Counterfactually-Fair-Prompting (CFP) techniques. CFP\nincludes two sub-modules: a personalized prefix prompt that enhances fairness\nwith respect to individual sensitive attributes, and a Prompt Mixture that\nintegrates multiple counterfactually-fair prompts for a set of sensitive\nattributes. Experiments are conducted on two real-world datasets, MovieLens-1M\nand Insurance, and results are compared with both matching-based and\nsequential-based fairness-aware recommendation models. The results show that\nUP5 achieves better recommendation performance and meanwhile exhibits a high\nlevel of fairness.",
+ "authors": "Wenyue Hua, Yingqiang Ge, Shuyuan Xu, Jianchao Ji, Yongfeng Zhang",
+ "published": "2023-05-20",
+ "updated": "2023-05-20",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR",
+ "cs.AI",
+ "cs.CL",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.05694v1",
+ "title": "A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics",
+ "abstract": "The utilization of large language models (LLMs) in the Healthcare domain has\ngenerated both excitement and concern due to their ability to effectively\nrespond to freetext queries with certain professional knowledge. This survey\noutlines the capabilities of the currently developed LLMs for Healthcare and\nexplicates their development process, with the aim of providing an overview of\nthe development roadmap from traditional Pretrained Language Models (PLMs) to\nLLMs. Specifically, we first explore the potential of LLMs to enhance the\nefficiency and effectiveness of various Healthcare applications highlighting\nboth the strengths and limitations. Secondly, we conduct a comparison between\nthe previous PLMs and the latest LLMs, as well as comparing various LLMs with\neach other. Then we summarize related Healthcare training data, training\nmethods, optimization strategies, and usage. Finally, the unique concerns\nassociated with deploying LLMs in Healthcare settings are investigated,\nparticularly regarding fairness, accountability, transparency and ethics. Our\nsurvey provide a comprehensive investigation from perspectives of both computer\nscience and Healthcare specialty. Besides the discussion about Healthcare\nconcerns, we supports the computer science community by compiling a collection\nof open source resources, such as accessible datasets, the latest\nmethodologies, code implementations, and evaluation benchmarks in the Github.\nSummarily, we contend that a significant paradigm shift is underway,\ntransitioning from PLMs to LLMs. This shift encompasses a move from\ndiscriminative AI approaches to generative AI approaches, as well as a shift\nfrom model-centered methodologies to datacentered methodologies.",
+ "authors": "Kai He, Rui Mao, Qika Lin, Yucheng Ruan, Xiang Lan, Mengling Feng, Erik Cambria",
+ "published": "2023-10-09",
+ "updated": "2023-10-09",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.07609v3",
+ "title": "Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation",
+ "abstract": "The remarkable achievements of Large Language Models (LLMs) have led to the\nemergence of a novel recommendation paradigm -- Recommendation via LLM\n(RecLLM). Nevertheless, it is important to note that LLMs may contain social\nprejudices, and therefore, the fairness of recommendations made by RecLLM\nrequires further investigation. To avoid the potential risks of RecLLM, it is\nimperative to evaluate the fairness of RecLLM with respect to various sensitive\nattributes on the user side. Due to the differences between the RecLLM paradigm\nand the traditional recommendation paradigm, it is problematic to directly use\nthe fairness benchmark of traditional recommendation. To address the dilemma,\nwe propose a novel benchmark called Fairness of Recommendation via LLM\n(FaiRLLM). This benchmark comprises carefully crafted metrics and a dataset\nthat accounts for eight sensitive attributes1 in two recommendation scenarios:\nmusic and movies. By utilizing our FaiRLLM benchmark, we conducted an\nevaluation of ChatGPT and discovered that it still exhibits unfairness to some\nsensitive attributes when generating recommendations. Our code and dataset can\nbe found at https://github.com/jizhi-zhang/FaiRLLM.",
+ "authors": "Jizhi Zhang, Keqin Bao, Yang Zhang, Wenjie Wang, Fuli Feng, Xiangnan He",
+ "published": "2023-05-12",
+ "updated": "2023-10-17",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR",
+ "cs.CL",
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.04205v2",
+ "title": "Rephrase and Respond: Let Large Language Models Ask Better Questions for Themselves",
+ "abstract": "Misunderstandings arise not only in interpersonal communication but also\nbetween humans and Large Language Models (LLMs). Such discrepancies can make\nLLMs interpret seemingly unambiguous questions in unexpected ways, yielding\nincorrect responses. While it is widely acknowledged that the quality of a\nprompt, such as a question, significantly impacts the quality of the response\nprovided by LLMs, a systematic method for crafting questions that LLMs can\nbetter comprehend is still underdeveloped. In this paper, we present a method\nnamed `Rephrase and Respond' (RaR), which allows LLMs to rephrase and expand\nquestions posed by humans and provide responses in a single prompt. This\napproach serves as a simple yet effective prompting method for improving\nperformance. We also introduce a two-step variant of RaR, where a rephrasing\nLLM first rephrases the question and then passes the original and rephrased\nquestions together to a different responding LLM. This facilitates the\neffective utilization of rephrased questions generated by one LLM with another.\nOur experiments demonstrate that our methods significantly improve the\nperformance of different models across a wide range to tasks. We further\nprovide a comprehensive comparison between RaR and the popular Chain-of-Thought\n(CoT) methods, both theoretically and empirically. We show that RaR is\ncomplementary to CoT and can be combined with CoT to achieve even better\nperformance. Our work not only contributes to enhancing LLM performance\nefficiently and effectively but also sheds light on a fair evaluation of LLM\ncapabilities. Data and codes are available at\nhttps://github.com/uclaml/Rephrase-and-Respond.",
+ "authors": "Yihe Deng, Weitong Zhang, Zixiang Chen, Quanquan Gu",
+ "published": "2023-11-07",
+ "updated": "2024-04-18",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.03033v1",
+ "title": "Beyond Words: A Mathematical Framework for Interpreting Large Language Models",
+ "abstract": "Large language models (LLMs) are powerful AI tools that can generate and\ncomprehend natural language text and other complex information. However, the\nfield lacks a mathematical framework to systematically describe, compare and\nimprove LLMs. We propose Hex a framework that clarifies key terms and concepts\nin LLM research, such as hallucinations, alignment, self-verification and\nchain-of-thought reasoning. The Hex framework offers a precise and consistent\nway to characterize LLMs, identify their strengths and weaknesses, and\nintegrate new findings. Using Hex, we differentiate chain-of-thought reasoning\nfrom chain-of-thought prompting and establish the conditions under which they\nare equivalent. This distinction clarifies the basic assumptions behind\nchain-of-thought prompting and its implications for methods that use it, such\nas self-verification and prompt programming.\n Our goal is to provide a formal framework for LLMs that can help both\nresearchers and practitioners explore new possibilities for generative AI. We\ndo not claim to have a definitive solution, but rather a tool for opening up\nnew research avenues. We argue that our formal definitions and results are\ncrucial for advancing the discussion on how to build generative AI systems that\nare safe, reliable, fair and robust, especially in domains like healthcare and\nsoftware engineering.",
+ "authors": "Javier Gonz\u00e1lez, Aditya V. Nori",
+ "published": "2023-11-06",
+ "updated": "2023-11-06",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2308.10149v2",
+ "title": "A Survey on Fairness in Large Language Models",
+ "abstract": "Large Language Models (LLMs) have shown powerful performance and development\nprospects and are widely deployed in the real world. However, LLMs can capture\nsocial biases from unprocessed training data and propagate the biases to\ndownstream tasks. Unfair LLM systems have undesirable social impacts and\npotential harms. In this paper, we provide a comprehensive review of related\nresearch on fairness in LLMs. Considering the influence of parameter magnitude\nand training paradigm on research strategy, we divide existing fairness\nresearch into oriented to medium-sized LLMs under pre-training and fine-tuning\nparadigms and oriented to large-sized LLMs under prompting paradigms. First,\nfor medium-sized LLMs, we introduce evaluation metrics and debiasing methods\nfrom the perspectives of intrinsic bias and extrinsic bias, respectively. Then,\nfor large-sized LLMs, we introduce recent fairness research, including fairness\nevaluation, reasons for bias, and debiasing methods. Finally, we discuss and\nprovide insight on the challenges and future directions for the development of\nfairness in LLMs.",
+ "authors": "Yingji Li, Mengnan Du, Rui Song, Xin Wang, Ying Wang",
+ "published": "2023-08-20",
+ "updated": "2024-02-21",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2309.08836v2",
+ "title": "Bias and Fairness in Chatbots: An Overview",
+ "abstract": "Chatbots have been studied for more than half a century. With the rapid\ndevelopment of natural language processing (NLP) technologies in recent years,\nchatbots using large language models (LLMs) have received much attention\nnowadays. Compared with traditional ones, modern chatbots are more powerful and\nhave been used in real-world applications. There are however, bias and fairness\nconcerns in modern chatbot design. Due to the huge amounts of training data,\nextremely large model sizes, and lack of interpretability, bias mitigation and\nfairness preservation of modern chatbots are challenging. Thus, a comprehensive\noverview on bias and fairness in chatbot systems is given in this paper. The\nhistory of chatbots and their categories are first reviewed. Then, bias sources\nand potential harms in applications are analyzed. Considerations in designing\nfair and unbiased chatbot systems are examined. Finally, future research\ndirections are discussed.",
+ "authors": "Jintang Xue, Yun-Cheng Wang, Chengwei Wei, Xiaofeng Liu, Jonghye Woo, C. -C. Jay Kuo",
+ "published": "2023-09-16",
+ "updated": "2023-12-10",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2206.13757v1",
+ "title": "Flexible text generation for counterfactual fairness probing",
+ "abstract": "A common approach for testing fairness issues in text-based classifiers is\nthrough the use of counterfactuals: does the classifier output change if a\nsensitive attribute in the input is changed? Existing counterfactual generation\nmethods typically rely on wordlists or templates, producing simple\ncounterfactuals that don't take into account grammar, context, or subtle\nsensitive attribute references, and could miss issues that the wordlist\ncreators had not considered. In this paper, we introduce a task for generating\ncounterfactuals that overcomes these shortcomings, and demonstrate how large\nlanguage models (LLMs) can be leveraged to make progress on this task. We show\nthat this LLM-based method can produce complex counterfactuals that existing\nmethods cannot, comparing the performance of various counterfactual generation\nmethods on the Civil Comments dataset and showing their value in evaluating a\ntoxicity classifier.",
+ "authors": "Zee Fryer, Vera Axelrod, Ben Packer, Alex Beutel, Jilin Chen, Kellie Webster",
+ "published": "2022-06-28",
+ "updated": "2022-06-28",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.15215v1",
+ "title": "Item-side Fairness of Large Language Model-based Recommendation System",
+ "abstract": "Recommendation systems for Web content distribution intricately connect to\nthe information access and exposure opportunities for vulnerable populations.\nThe emergence of Large Language Models-based Recommendation System (LRS) may\nintroduce additional societal challenges to recommendation systems due to the\ninherent biases in Large Language Models (LLMs). From the perspective of\nitem-side fairness, there remains a lack of comprehensive investigation into\nthe item-side fairness of LRS given the unique characteristics of LRS compared\nto conventional recommendation systems. To bridge this gap, this study examines\nthe property of LRS with respect to item-side fairness and reveals the\ninfluencing factors of both historical users' interactions and inherent\nsemantic biases of LLMs, shedding light on the need to extend conventional\nitem-side fairness methods for LRS. Towards this goal, we develop a concise and\neffective framework called IFairLRS to enhance the item-side fairness of an\nLRS. IFairLRS covers the main stages of building an LRS with specifically\nadapted strategies to calibrate the recommendations of LRS. We utilize IFairLRS\nto fine-tune LLaMA, a representative LLM, on \\textit{MovieLens} and\n\\textit{Steam} datasets, and observe significant item-side fairness\nimprovements. The code can be found in\nhttps://github.com/JiangM-C/IFairLRS.git.",
+ "authors": "Meng Jiang, Keqin Bao, Jizhi Zhang, Wenjie Wang, Zhengyi Yang, Fuli Feng, Xiangnan He",
+ "published": "2024-02-23",
+ "updated": "2024-02-23",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2309.11653v2",
+ "title": "\"It's a Fair Game\", or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents",
+ "abstract": "The widespread use of Large Language Model (LLM)-based conversational agents\n(CAs), especially in high-stakes domains, raises many privacy concerns.\nBuilding ethical LLM-based CAs that respect user privacy requires an in-depth\nunderstanding of the privacy risks that concern users the most. However,\nexisting research, primarily model-centered, does not provide insight into\nusers' perspectives. To bridge this gap, we analyzed sensitive disclosures in\nreal-world ChatGPT conversations and conducted semi-structured interviews with\n19 LLM-based CA users. We found that users are constantly faced with trade-offs\nbetween privacy, utility, and convenience when using LLM-based CAs. However,\nusers' erroneous mental models and the dark patterns in system design limited\ntheir awareness and comprehension of the privacy risks. Additionally, the\nhuman-like interactions encouraged more sensitive disclosures, which\ncomplicated users' ability to navigate the trade-offs. We discuss practical\ndesign guidelines and the needs for paradigm shifts to protect the privacy of\nLLM-based CA users.",
+ "authors": "Zhiping Zhang, Michelle Jia, Hao-Ping Lee, Bingsheng Yao, Sauvik Das, Ada Lerner, Dakuo Wang, Tianshi Li",
+ "published": "2023-09-20",
+ "updated": "2024-04-02",
+ "primary_cat": "cs.HC",
+ "cats": [
+ "cs.HC",
+ "cs.AI",
+ "cs.CR"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2307.11761v1",
+ "title": "Fairness of ChatGPT and the Role Of Explainable-Guided Prompts",
+ "abstract": "Our research investigates the potential of Large-scale Language Models\n(LLMs), specifically OpenAI's GPT, in credit risk assessment-a binary\nclassification task. Our findings suggest that LLMs, when directed by\njudiciously designed prompts and supplemented with domain-specific knowledge,\ncan parallel the performance of traditional Machine Learning (ML) models.\nIntriguingly, they achieve this with significantly less data-40 times less,\nutilizing merely 20 data points compared to the ML's 800. LLMs particularly\nexcel in minimizing false positives and enhancing fairness, both being vital\naspects of risk analysis. While our results did not surpass those of classical\nML models, they underscore the potential of LLMs in analogous tasks, laying a\ngroundwork for future explorations into harnessing the capabilities of LLMs in\ndiverse ML tasks.",
+ "authors": "Yashar Deldjoo",
+ "published": "2023-07-14",
+ "updated": "2023-07-14",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.15451v1",
+ "title": "Towards Enabling FAIR Dataspaces Using Large Language Models",
+ "abstract": "Dataspaces have recently gained adoption across various sectors, including\ntraditionally less digitized domains such as culture. Leveraging Semantic Web\ntechnologies helps to make dataspaces FAIR, but their complexity poses a\nsignificant challenge to the adoption of dataspaces and increases their cost.\nThe advent of Large Language Models (LLMs) raises the question of how these\nmodels can support the adoption of FAIR dataspaces. In this work, we\ndemonstrate the potential of LLMs in dataspaces with a concrete example. We\nalso derive a research agenda for exploring this emerging field.",
+ "authors": "Benedikt T. Arnold, Johannes Theissen-Lipp, Diego Collarana, Christoph Lange, Sandra Geisler, Edward Curry, Stefan Decker",
+ "published": "2024-03-18",
+ "updated": "2024-03-18",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.14607v2",
+ "title": "Confronting LLMs with Traditional ML: Rethinking the Fairness of Large Language Models in Tabular Classifications",
+ "abstract": "Recent literature has suggested the potential of using large language models\n(LLMs) to make classifications for tabular tasks. However, LLMs have been shown\nto exhibit harmful social biases that reflect the stereotypes and inequalities\npresent in society. To this end, as well as the widespread use of tabular data\nin many high-stake applications, it is important to explore the following\nquestions: what sources of information do LLMs draw upon when making\nclassifications for tabular tasks; whether and to what extent are LLM\nclassifications for tabular data influenced by social biases and stereotypes;\nand what are the consequential implications for fairness?\n Through a series of experiments, we delve into these questions and show that\nLLMs tend to inherit social biases from their training data which significantly\nimpact their fairness in tabular classification tasks. Furthermore, our\ninvestigations show that in the context of bias mitigation, though in-context\nlearning and finetuning have a moderate effect, the fairness metric gap between\ndifferent subgroups is still larger than that in traditional machine learning\nmodels, such as Random Forest and shallow Neural Networks. This observation\nemphasizes that the social biases are inherent within the LLMs themselves and\ninherited from their pretraining corpus, not only from the downstream task\ndatasets. Besides, we demonstrate that label-flipping of in-context examples\ncan significantly reduce biases, further highlighting the presence of inherent\nbias within LLMs.",
+ "authors": "Yanchen Liu, Srishti Gautam, Jiaqi Ma, Himabindu Lakkaraju",
+ "published": "2023-10-23",
+ "updated": "2024-04-02",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.02294v1",
+ "title": "LLMs grasp morality in concept",
+ "abstract": "Work in AI ethics and fairness has made much progress in regulating LLMs to\nreflect certain values, such as fairness, truth, and diversity. However, it has\ntaken the problem of how LLMs might 'mean' anything at all for granted. Without\naddressing this, it is not clear what imbuing LLMs with such values even means.\nIn response, we provide a general theory of meaning that extends beyond humans.\nWe use this theory to explicate the precise nature of LLMs as meaning-agents.\nWe suggest that the LLM, by virtue of its position as a meaning-agent, already\ngrasps the constructions of human society (e.g. morality, gender, and race) in\nconcept. Consequently, under certain ethical frameworks, currently popular\nmethods for model alignment are limited at best and counterproductive at worst.\nMoreover, unaligned models may help us better develop our moral and social\nphilosophy.",
+ "authors": "Mark Pock, Andre Ye, Jared Moore",
+ "published": "2023-11-04",
+ "updated": "2023-11-04",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.14473v1",
+ "title": "The Ethics of ChatGPT in Medicine and Healthcare: A Systematic Review on Large Language Models (LLMs)",
+ "abstract": "With the introduction of ChatGPT, Large Language Models (LLMs) have received\nenormous attention in healthcare. Despite their potential benefits, researchers\nhave underscored various ethical implications. While individual instances have\ndrawn much attention, the debate lacks a systematic overview of practical\napplications currently researched and ethical issues connected to them. Against\nthis background, this work aims to map the ethical landscape surrounding the\ncurrent stage of deployment of LLMs in medicine and healthcare. Electronic\ndatabases and preprint servers were queried using a comprehensive search\nstrategy. Studies were screened and extracted following a modified rapid review\napproach. Methodological quality was assessed using a hybrid approach. For 53\nrecords, a meta-aggregative synthesis was performed. Four fields of\napplications emerged and testify to a vivid exploration phase. Advantages of\nusing LLMs are attributed to their capacity in data analysis, personalized\ninformation provisioning, support in decision-making, mitigating information\nloss and enhancing information accessibility. However, we also identifies\nrecurrent ethical concerns connected to fairness, bias, non-maleficence,\ntransparency, and privacy. A distinctive concern is the tendency to produce\nharmful misinformation or convincingly but inaccurate content. A recurrent plea\nfor ethical guidance and human oversight is evident. Given the variety of use\ncases, it is suggested that the ethical guidance debate be reframed to focus on\ndefining what constitutes acceptable human oversight across the spectrum of\napplications. This involves considering diverse settings, varying potentials\nfor harm, and different acceptable thresholds for performance and certainty in\nhealthcare. In addition, a critical inquiry is necessary to determine the\nextent to which the current experimental use of LLMs is necessary and\njustified.",
+ "authors": "Joschka Haltaufderheide, Robert Ranisch",
+ "published": "2024-03-21",
+ "updated": "2024-03-21",
+ "primary_cat": "cs.CY",
+ "cats": [
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2401.04057v1",
+ "title": "Unveiling Bias in Fairness Evaluations of Large Language Models: A Critical Literature Review of Music and Movie Recommendation Systems",
+ "abstract": "The rise of generative artificial intelligence, particularly Large Language\nModels (LLMs), has intensified the imperative to scrutinize fairness alongside\naccuracy. Recent studies have begun to investigate fairness evaluations for\nLLMs within domains such as recommendations. Given that personalization is an\nintrinsic aspect of recommendation systems, its incorporation into fairness\nassessments is paramount. Yet, the degree to which current fairness evaluation\nframeworks account for personalization remains unclear. Our comprehensive\nliterature review aims to fill this gap by examining how existing frameworks\nhandle fairness evaluations of LLMs, with a focus on the integration of\npersonalization factors. Despite an exhaustive collection and analysis of\nrelevant works, we discovered that most evaluations overlook personalization, a\ncritical facet of recommendation systems, thereby inadvertently perpetuating\nunfair practices. Our findings shed light on this oversight and underscore the\nurgent need for more nuanced fairness evaluations that acknowledge\npersonalization. Such improvements are vital for fostering equitable\ndevelopment within the AI community.",
+ "authors": "Chandan Kumar Sah, Dr. Lian Xiaoli, Muhammad Mirajul Islam",
+ "published": "2024-01-08",
+ "updated": "2024-01-08",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR",
+ "cs.AI",
+ "cs.SE"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.00811v1",
+ "title": "Cognitive Bias in High-Stakes Decision-Making with LLMs",
+ "abstract": "Large language models (LLMs) offer significant potential as tools to support\nan expanding range of decision-making tasks. However, given their training on\nhuman (created) data, LLMs can inherit both societal biases against protected\ngroups, as well as be subject to cognitive bias. Such human-like bias can\nimpede fair and explainable decisions made with LLM assistance. Our work\nintroduces BiasBuster, a framework designed to uncover, evaluate, and mitigate\ncognitive bias in LLMs, particularly in high-stakes decision-making tasks.\nInspired by prior research in psychology and cognitive sciences, we develop a\ndataset containing 16,800 prompts to evaluate different cognitive biases (e.g.,\nprompt-induced, sequential, inherent). We test various bias mitigation\nstrategies, amidst proposing a novel method using LLMs to debias their own\nprompts. Our analysis provides a comprehensive picture on the presence and\neffects of cognitive bias across different commercial and open-source models.\nWe demonstrate that our self-help debiasing effectively mitigate cognitive bias\nwithout having to manually craft examples for each bias type.",
+ "authors": "Jessica Echterhoff, Yao Liu, Abeer Alessa, Julian McAuley, Zexue He",
+ "published": "2024-02-25",
+ "updated": "2024-02-25",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI",
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.08780v1",
+ "title": "\"Im not Racist but...\": Discovering Bias in the Internal Knowledge of Large Language Models",
+ "abstract": "Large language models (LLMs) have garnered significant attention for their\nremarkable performance in a continuously expanding set of natural language\nprocessing tasks. However, these models have been shown to harbor inherent\nsocietal biases, or stereotypes, which can adversely affect their performance\nin their many downstream applications. In this paper, we introduce a novel,\npurely prompt-based approach to uncover hidden stereotypes within any arbitrary\nLLM. Our approach dynamically generates a knowledge representation of internal\nstereotypes, enabling the identification of biases encoded within the LLM's\ninternal knowledge. By illuminating the biases present in LLMs and offering a\nsystematic methodology for their analysis, our work contributes to advancing\ntransparency and promoting fairness in natural language processing systems.",
+ "authors": "Abel Salinas, Louis Penafiel, Robert McCormack, Fred Morstatter",
+ "published": "2023-10-13",
+ "updated": "2023-10-13",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2307.15997v1",
+ "title": "RoCar: A Relationship Network-based Evaluation Method to Large Language Models",
+ "abstract": "Large language models (LLMs) have received increasing attention. However, due\nto the complexity of its capabilities, how to rationally evaluate the\ncapabilities of LLMs is still a task to be solved. We propose the RoCar method,\nwhich utilizes the defined basic schemas to randomly construct a task graph and\ngenerates natural language evaluation tasks based on the task graph to evaluate\nthe reasoning and memory abilities of LLMs respectively. Due to the very large\nrandomness of the task construction process, it is possible to ensure that none\nof the LLMs to be tested has directly learned the evaluation tasks,\nguaranteeing the fairness of the evaluation method.",
+ "authors": "Ming Wang, Wenfang Wu, Chongyun Gao, Daling Wang, Shi Feng, Yifei Zhang",
+ "published": "2023-07-29",
+ "updated": "2023-07-29",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.07884v2",
+ "title": "Fair Abstractive Summarization of Diverse Perspectives",
+ "abstract": "People from different social and demographic groups express diverse\nperspectives and conflicting opinions on a broad set of topics such as product\nreviews, healthcare, law, and politics. A fair summary should provide a\ncomprehensive coverage of diverse perspectives without underrepresenting\ncertain groups. However, current work in summarization metrics and Large\nLanguage Models (LLMs) evaluation has not explored fair abstractive\nsummarization. In this paper, we systematically investigate fair abstractive\nsummarization for user-generated data. We first formally define fairness in\nabstractive summarization as not underrepresenting perspectives of any groups\nof people, and we propose four reference-free automatic metrics by measuring\nthe differences between target and source perspectives. We evaluate nine LLMs,\nincluding three GPT models, four LLaMA models, PaLM 2, and Claude, on six\ndatasets collected from social media, online reviews, and recorded transcripts.\nExperiments show that both the model-generated and the human-written reference\nsummaries suffer from low fairness. We conduct a comprehensive analysis of the\ncommon factors influencing fairness and propose three simple but effective\nmethods to alleviate unfair summarization. Our dataset and code are available\nat https://github.com/psunlpgroup/FairSumm.",
+ "authors": "Yusen Zhang, Nan Zhang, Yixin Liu, Alexander Fabbri, Junru Liu, Ryo Kamoi, Xiaoxin Lu, Caiming Xiong, Jieyu Zhao, Dragomir Radev, Kathleen McKeown, Rui Zhang",
+ "published": "2023-11-14",
+ "updated": "2024-03-30",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.04489v1",
+ "title": "De-amplifying Bias from Differential Privacy in Language Model Fine-tuning",
+ "abstract": "Fairness and privacy are two important values machine learning (ML)\npractitioners often seek to operationalize in models. Fairness aims to reduce\nmodel bias for social/demographic sub-groups. Privacy via differential privacy\n(DP) mechanisms, on the other hand, limits the impact of any individual's\ntraining data on the resulting model. The trade-offs between privacy and\nfairness goals of trustworthy ML pose a challenge to those wishing to address\nboth. We show that DP amplifies gender, racial, and religious bias when\nfine-tuning large language models (LLMs), producing models more biased than\nones fine-tuned without DP. We find the cause of the amplification to be a\ndisparity in convergence of gradients across sub-groups. Through the case of\nbinary gender bias, we demonstrate that Counterfactual Data Augmentation (CDA),\na known method for addressing bias, also mitigates bias amplification by DP. As\na consequence, DP and CDA together can be used to fine-tune models while\nmaintaining both fairness and privacy.",
+ "authors": "Sanjari Srivastava, Piotr Mardziel, Zhikhun Zhang, Archana Ahlawat, Anupam Datta, John C Mitchell",
+ "published": "2024-02-07",
+ "updated": "2024-02-07",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.CR",
+ "cs.CY",
+ "stat.ME"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.07981v1",
+ "title": "Manipulating Large Language Models to Increase Product Visibility",
+ "abstract": "Large language models (LLMs) are increasingly being integrated into search\nengines to provide natural language responses tailored to user queries.\nCustomers and end-users are also becoming more dependent on these models for\nquick and easy purchase decisions. In this work, we investigate whether\nrecommendations from LLMs can be manipulated to enhance a product's visibility.\nWe demonstrate that adding a strategic text sequence (STS) -- a carefully\ncrafted message -- to a product's information page can significantly increase\nits likelihood of being listed as the LLM's top recommendation. To understand\nthe impact of STS, we use a catalog of fictitious coffee machines and analyze\nits effect on two target products: one that seldom appears in the LLM's\nrecommendations and another that usually ranks second. We observe that the\nstrategic text sequence significantly enhances the visibility of both products\nby increasing their chances of appearing as the top recommendation. This\nability to manipulate LLM-generated search responses provides vendors with a\nconsiderable competitive advantage and has the potential to disrupt fair market\ncompetition. Just as search engine optimization (SEO) revolutionized how\nwebpages are customized to rank higher in search engine results, influencing\nLLM recommendations could profoundly impact content optimization for AI-driven\nsearch services. Code for our experiments is available at\nhttps://github.com/aounon/llm-rank-optimizer.",
+ "authors": "Aounon Kumar, Himabindu Lakkaraju",
+ "published": "2024-04-11",
+ "updated": "2024-04-11",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR",
+ "cs.AI",
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2307.03838v2",
+ "title": "RADAR: Robust AI-Text Detection via Adversarial Learning",
+ "abstract": "Recent advances in large language models (LLMs) and the intensifying\npopularity of ChatGPT-like applications have blurred the boundary of\nhigh-quality text generation between humans and machines. However, in addition\nto the anticipated revolutionary changes to our technology and society, the\ndifficulty of distinguishing LLM-generated texts (AI-text) from human-generated\ntexts poses new challenges of misuse and fairness, such as fake content\ngeneration, plagiarism, and false accusations of innocent writers. While\nexisting works show that current AI-text detectors are not robust to LLM-based\nparaphrasing, this paper aims to bridge this gap by proposing a new framework\ncalled RADAR, which jointly trains a robust AI-text detector via adversarial\nlearning. RADAR is based on adversarial training of a paraphraser and a\ndetector. The paraphraser's goal is to generate realistic content to evade\nAI-text detection. RADAR uses the feedback from the detector to update the\nparaphraser, and vice versa. Evaluated with 8 different LLMs (Pythia, Dolly\n2.0, Palmyra, Camel, GPT-J, Dolly 1.0, LLaMA, and Vicuna) across 4 datasets,\nexperimental results show that RADAR significantly outperforms existing AI-text\ndetection methods, especially when paraphrasing is in place. We also identify\nthe strong transferability of RADAR from instruction-tuned LLMs to other LLMs,\nand evaluate the improved capability of RADAR via GPT-3.5-Turbo.",
+ "authors": "Xiaomeng Hu, Pin-Yu Chen, Tsung-Yi Ho",
+ "published": "2023-07-07",
+ "updated": "2023-10-24",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.08472v1",
+ "title": "Selecting Shots for Demographic Fairness in Few-Shot Learning with Large Language Models",
+ "abstract": "Recently, work in NLP has shifted to few-shot (in-context) learning, with\nlarge language models (LLMs) performing well across a range of tasks. However,\nwhile fairness evaluations have become a standard for supervised methods,\nlittle is known about the fairness of LLMs as prediction systems. Further,\ncommon standard methods for fairness involve access to models weights or are\napplied during finetuning, which are not applicable in few-shot learning. Do\nLLMs exhibit prediction biases when used for standard NLP tasks? In this work,\nwe explore the effect of shots, which directly affect the performance of\nmodels, on the fairness of LLMs as NLP classification systems. We consider how\ndifferent shot selection strategies, both existing and new demographically\nsensitive methods, affect model fairness across three standard fairness\ndatasets. We discuss how future work can include LLM fairness evaluations.",
+ "authors": "Carlos Aguirre, Kuleen Sasse, Isabel Cachola, Mark Dredze",
+ "published": "2023-11-14",
+ "updated": "2023-11-14",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.02839v1",
+ "title": "An Empirical Study of LLM-as-a-Judge for LLM Evaluation: Fine-tuned Judge Models are Task-specific Classifiers",
+ "abstract": "Recently, there has been a growing trend of utilizing Large Language Model\n(LLM) to evaluate the quality of other LLMs. Many studies have employed\nproprietary close-source models, especially GPT4, as the evaluator.\nAlternatively, other works have fine-tuned judge models based on open-source\nLLMs as the evaluator. In this study, we conduct an empirical study of\ndifferent judge models on their evaluation capability. Our findings indicate\nthat although the fine-tuned judge models achieve high accuracy on in-domain\ntest sets, even surpassing GPT4, they are inherently task-specific classifiers,\nand their generalizability and fairness severely underperform GPT4.",
+ "authors": "Hui Huang, Yingqi Qu, Jing Liu, Muyun Yang, Tiejun Zhao",
+ "published": "2024-03-05",
+ "updated": "2024-03-05",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.04814v2",
+ "title": "Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks",
+ "abstract": "We introduce Syntax-Aware Fill-In-the-Middle (SAFIM), a new benchmark for\nevaluating Large Language Models (LLMs) on the code Fill-in-the-Middle (FIM)\ntask. This benchmark focuses on syntax-aware completions of program structures\nsuch as code blocks and conditional expressions, and includes 17,720 examples\nfrom multiple programming languages, sourced from recent code submissions after\nApril 2022 to minimize data contamination. SAFIM provides a robust framework\nwith various prompt designs and novel syntax-aware post-processing techniques,\nfacilitating accurate and fair comparisons across LLMs. Our comprehensive\nevaluation of 15 LLMs shows that FIM pretraining not only enhances FIM\nproficiency but also improves Left-to-Right (L2R) inference using LLMs. Our\nfindings challenge conventional beliefs and suggest that pretraining methods\nand data quality have more impact than model size. SAFIM thus serves as a\nfoundational platform for future research in effective pretraining strategies\nfor code LLMs. The evaluation toolkit and dataset are available at\nhttps://github.com/gonglinyuan/safim, and the leaderboard is available at\nhttps://safimbenchmark.com.",
+ "authors": "Linyuan Gong, Sida Wang, Mostafa Elhoushi, Alvin Cheung",
+ "published": "2024-03-07",
+ "updated": "2024-04-10",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.LG",
+ "cs.SE"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.07688v1",
+ "title": "CyberMetric: A Benchmark Dataset for Evaluating Large Language Models Knowledge in Cybersecurity",
+ "abstract": "Large Language Models (LLMs) excel across various domains, from computer\nvision to medical diagnostics. However, understanding the diverse landscape of\ncybersecurity, encompassing cryptography, reverse engineering, and managerial\nfacets like risk assessment, presents a challenge, even for human experts. In\nthis paper, we introduce CyberMetric, a benchmark dataset comprising 10,000\nquestions sourced from standards, certifications, research papers, books, and\nother publications in the cybersecurity domain. The questions are created\nthrough a collaborative process, i.e., merging expert knowledge with LLMs,\nincluding GPT-3.5 and Falcon-180B. Human experts spent over 200 hours verifying\ntheir accuracy and relevance. Beyond assessing LLMs' knowledge, the dataset's\nmain goal is to facilitate a fair comparison between humans and different LLMs\nin cybersecurity. To achieve this, we carefully selected 80 questions covering\na wide range of topics within cybersecurity and involved 30 participants of\ndiverse expertise levels, facilitating a comprehensive comparison between human\nand machine intelligence in this area. The findings revealed that LLMs\noutperformed humans in almost every aspect of cybersecurity.",
+ "authors": "Norbert Tihanyi, Mohamed Amine Ferrag, Ridhi Jain, Merouane Debbah",
+ "published": "2024-02-12",
+ "updated": "2024-02-12",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI",
+ "cs.CR"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.19118v1",
+ "title": "Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate",
+ "abstract": "Modern large language models (LLMs) like ChatGPT have shown remarkable\nperformance on general language tasks but still struggle on complex reasoning\ntasks, which drives the research on cognitive behaviors of LLMs to explore\nhuman-like problem-solving strategies. Along this direction, one representative\nstrategy is self-reflection, which asks an LLM to refine the solution with the\nfeedback generated by itself iteratively. However, our study shows that such\nreflection-style methods suffer from the Degeneration-of-Thought (DoT) problem:\nonce the LLM has established confidence in its solutions, it is unable to\ngenerate novel thoughts later through reflection even if its initial stance is\nincorrect. To address the DoT problem, we propose a Multi-Agent Debate (MAD)\nframework, in which multiple agents express their arguments in the state of\n\"tit for tat\" and a judge manages the debate process to obtain a final\nsolution. Clearly, our MAD framework encourages divergent thinking in LLMs\nwhich would be helpful for tasks that require deep levels of contemplation.\nExperiment results on two challenging datasets, commonsense machine translation\nand counter-intuitive arithmetic reasoning, demonstrate the effectiveness of\nour MAD framework. Extensive analyses suggest that the adaptive break of debate\nand the modest level of \"tit for tat\" state are required for MAD to obtain good\nperformance. Moreover, we find that LLMs might not be a fair judge if different\nLLMs are used for agents. Codes:\nhttps://github.com/Skytliang/Multi-Agents-Debate",
+ "authors": "Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, Shuming Shi",
+ "published": "2023-05-30",
+ "updated": "2023-05-30",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.18140v1",
+ "title": "ROBBIE: Robust Bias Evaluation of Large Generative Language Models",
+ "abstract": "As generative large language models (LLMs) grow more performant and\nprevalent, we must develop comprehensive enough tools to measure and improve\ntheir fairness. Different prompt-based datasets can be used to measure social\nbias across multiple text domains and demographic axes, meaning that testing\nLLMs on more datasets can potentially help us characterize their biases more\nfully, and better ensure equal and equitable treatment of marginalized\ndemographic groups. In this work, our focus is two-fold:\n (1) Benchmarking: a comparison of 6 different prompt-based bias and toxicity\nmetrics across 12 demographic axes and 5 families of generative LLMs. Out of\nthose 6 metrics, AdvPromptSet and HolisticBiasR are novel datasets proposed in\nthe paper. The comparison of those benchmarks gives us insights about the bias\nand toxicity of the compared models. Therefore, we explore the frequency of\ndemographic terms in common LLM pre-training corpora and how this may relate to\nmodel biases.\n (2) Mitigation: we conduct a comprehensive study of how well 3 bias/toxicity\nmitigation techniques perform across our suite of measurements. ROBBIE aims to\nprovide insights for practitioners while deploying a model, emphasizing the\nneed to not only measure potential harms, but also understand how they arise by\ncharacterizing the data, mitigate harms once found, and balance any trade-offs.\nWe open-source our analysis code in hopes of encouraging broader measurements\nof bias in future LLMs.",
+ "authors": "David Esiobu, Xiaoqing Tan, Saghar Hosseini, Megan Ung, Yuchen Zhang, Jude Fernandes, Jane Dwivedi-Yu, Eleonora Presani, Adina Williams, Eric Michael Smith",
+ "published": "2023-11-29",
+ "updated": "2023-11-29",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.02650v1",
+ "title": "Towards detecting unanticipated bias in Large Language Models",
+ "abstract": "Over the last year, Large Language Models (LLMs) like ChatGPT have become\nwidely available and have exhibited fairness issues similar to those in\nprevious machine learning systems. Current research is primarily focused on\nanalyzing and quantifying these biases in training data and their impact on the\ndecisions of these models, alongside developing mitigation strategies. This\nresearch largely targets well-known biases related to gender, race, ethnicity,\nand language. However, it is clear that LLMs are also affected by other, less\nobvious implicit biases. The complex and often opaque nature of these models\nmakes detecting such biases challenging, yet this is crucial due to their\npotential negative impact in various applications. In this paper, we explore\nnew avenues for detecting these unanticipated biases in LLMs, focusing\nspecifically on Uncertainty Quantification and Explainable AI methods. These\napproaches aim to assess the certainty of model decisions and to make the\ninternal decision-making processes of LLMs more transparent, thereby\nidentifying and understanding biases that are not immediately apparent. Through\nthis research, we aim to contribute to the development of fairer and more\ntransparent AI systems.",
+ "authors": "Anna Kruspe",
+ "published": "2024-04-03",
+ "updated": "2024-04-03",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.AI",
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.08656v1",
+ "title": "Linear Cross-document Event Coreference Resolution with X-AMR",
+ "abstract": "Event Coreference Resolution (ECR) as a pairwise mention classification task\nis expensive both for automated systems and manual annotations. The task's\nquadratic difficulty is exacerbated when using Large Language Models (LLMs),\nmaking prompt engineering for ECR prohibitively costly. In this work, we\npropose a graphical representation of events, X-AMR, anchored around individual\nmentions using a \\textbf{cross}-document version of \\textbf{A}bstract\n\\textbf{M}eaning \\textbf{R}epresentation. We then linearize the ECR with a\nnovel multi-hop coreference algorithm over the event graphs. The event graphs\nsimplify ECR, making it a) LLM cost-effective, b) compositional and\ninterpretable, and c) easily annotated. For a fair assessment, we first enrich\nan existing ECR benchmark dataset with these event graphs using an\nannotator-friendly tool we introduce. Then, we employ GPT-4, the newest LLM by\nOpenAI, for these annotations. Finally, using the ECR algorithm, we assess\nGPT-4 against humans and analyze its limitations. Through this research, we aim\nto advance the state-of-the-art for efficient ECR and shed light on the\npotential shortcomings of current LLMs at this task. Code and annotations:\n\\url{https://github.com/ahmeshaf/gpt_coref}",
+ "authors": "Shafiuddin Rehan Ahmed, George Arthur Baker, Evi Judge, Michael Regan, Kristin Wright-Bettner, Martha Palmer, James H. Martin",
+ "published": "2024-03-25",
+ "updated": "2024-03-25",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2312.15478v1",
+ "title": "A Group Fairness Lens for Large Language Models",
+ "abstract": "The rapid advancement of large language models has revolutionized various\napplications but also raised crucial concerns about their potential to\nperpetuate biases and unfairness when deployed in social media contexts.\nEvaluating LLMs' potential biases and fairness has become crucial, as existing\nmethods rely on limited prompts focusing on just a few groups, lacking a\ncomprehensive categorical perspective. In this paper, we propose evaluating LLM\nbiases from a group fairness lens using a novel hierarchical schema\ncharacterizing diverse social groups. Specifically, we construct a dataset,\nGFair, encapsulating target-attribute combinations across multiple dimensions.\nIn addition, we introduce statement organization, a new open-ended text\ngeneration task, to uncover complex biases in LLMs. Extensive evaluations of\npopular LLMs reveal inherent safety concerns. To mitigate the biases of LLM\nfrom a group fairness perspective, we pioneer a novel chain-of-thought method\nGF-Think to mitigate biases of LLMs from a group fairness perspective.\nExperimental results demonstrate its efficacy in mitigating bias in LLMs to\nachieve fairness.",
+ "authors": "Guanqun Bi, Lei Shen, Yuqiang Xie, Yanan Cao, Tiangang Zhu, Xiaodong He",
+ "published": "2023-12-24",
+ "updated": "2023-12-24",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.09219v5",
+ "title": "\"Kelly is a Warm Person, Joseph is a Role Model\": Gender Biases in LLM-Generated Reference Letters",
+ "abstract": "Large Language Models (LLMs) have recently emerged as an effective tool to\nassist individuals in writing various types of content, including professional\ndocuments such as recommendation letters. Though bringing convenience, this\napplication also introduces unprecedented fairness concerns. Model-generated\nreference letters might be directly used by users in professional scenarios. If\nunderlying biases exist in these model-constructed letters, using them without\nscrutinization could lead to direct societal harms, such as sabotaging\napplication success rates for female applicants. In light of this pressing\nissue, it is imminent and necessary to comprehensively study fairness issues\nand associated harms in this real-world use case. In this paper, we critically\nexamine gender biases in LLM-generated reference letters. Drawing inspiration\nfrom social science findings, we design evaluation methods to manifest biases\nthrough 2 dimensions: (1) biases in language style and (2) biases in lexical\ncontent. We further investigate the extent of bias propagation by analyzing the\nhallucination bias of models, a term that we define to be bias exacerbation in\nmodel-hallucinated contents. Through benchmarking evaluation on 2 popular LLMs-\nChatGPT and Alpaca, we reveal significant gender biases in LLM-generated\nrecommendation letters. Our findings not only warn against using LLMs for this\napplication without scrutinization, but also illuminate the importance of\nthoroughly studying hidden biases and harms in LLM-generated professional\ndocuments.",
+ "authors": "Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, Nanyun Peng",
+ "published": "2023-10-13",
+ "updated": "2023-12-01",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.02049v1",
+ "title": "Post Turing: Mapping the landscape of LLM Evaluation",
+ "abstract": "In the rapidly evolving landscape of Large Language Models (LLMs),\nintroduction of well-defined and standardized evaluation methodologies remains\na crucial challenge. This paper traces the historical trajectory of LLM\nevaluations, from the foundational questions posed by Alan Turing to the modern\nera of AI research. We categorize the evolution of LLMs into distinct periods,\neach characterized by its unique benchmarks and evaluation criteria. As LLMs\nincreasingly mimic human-like behaviors, traditional evaluation proxies, such\nas the Turing test, have become less reliable. We emphasize the pressing need\nfor a unified evaluation system, given the broader societal implications of\nthese models. Through an analysis of common evaluation methodologies, we\nadvocate for a qualitative shift in assessment approaches, underscoring the\nimportance of standardization and objective criteria. This work serves as a\ncall for the AI community to collaboratively address the challenges of LLM\nevaluation, ensuring their reliability, fairness, and societal benefit.",
+ "authors": "Alexey Tikhonov, Ivan P. Yamshchikov",
+ "published": "2023-11-03",
+ "updated": "2023-11-03",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "68T50",
+ "I.2.7"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.08517v1",
+ "title": "Online Safety Analysis for LLMs: a Benchmark, an Assessment, and a Path Forward",
+ "abstract": "While Large Language Models (LLMs) have seen widespread applications across\nnumerous fields, their limited interpretability poses concerns regarding their\nsafe operations from multiple aspects, e.g., truthfulness, robustness, and\nfairness. Recent research has started developing quality assurance methods for\nLLMs, introducing techniques such as offline detector-based or uncertainty\nestimation methods. However, these approaches predominantly concentrate on\npost-generation analysis, leaving the online safety analysis for LLMs during\nthe generation phase an unexplored area. To bridge this gap, we conduct in this\nwork a comprehensive evaluation of the effectiveness of existing online safety\nanalysis methods on LLMs. We begin with a pilot study that validates the\nfeasibility of detecting unsafe outputs in the early generation process.\nFollowing this, we establish the first publicly available benchmark of online\nsafety analysis for LLMs, including a broad spectrum of methods, models, tasks,\ndatasets, and evaluation metrics. Utilizing this benchmark, we extensively\nanalyze the performance of state-of-the-art online safety analysis methods on\nboth open-source and closed-source LLMs. This analysis reveals the strengths\nand weaknesses of individual methods and offers valuable insights into\nselecting the most appropriate method based on specific application scenarios\nand task requirements. Furthermore, we also explore the potential of using\nhybridization methods, i.e., combining multiple methods to derive a collective\nsafety conclusion, to enhance the efficacy of online safety analysis for LLMs.\nOur findings indicate a promising direction for the development of innovative\nand trustworthy quality assurance methodologies for LLMs, facilitating their\nreliable deployments across diverse domains.",
+ "authors": "Xuan Xie, Jiayang Song, Zhehua Zhou, Yuheng Huang, Da Song, Lei Ma",
+ "published": "2024-04-12",
+ "updated": "2024-04-12",
+ "primary_cat": "cs.SE",
+ "cats": [
+ "cs.SE",
+ "cs.AI",
+ "cs.CL",
+ "cs.CR",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2309.09397v1",
+ "title": "Do Large GPT Models Discover Moral Dimensions in Language Representations? A Topological Study Of Sentence Embeddings",
+ "abstract": "As Large Language Models are deployed within Artificial Intelligence systems,\nthat are increasingly integrated with human society, it becomes more important\nthan ever to study their internal structures. Higher level abilities of LLMs\nsuch as GPT-3.5 emerge in large part due to informative language\nrepresentations they induce from raw text data during pre-training on trillions\nof words. These embeddings exist in vector spaces of several thousand\ndimensions, and their processing involves mapping between multiple vector\nspaces, with total number of parameters on the order of trillions. Furthermore,\nthese language representations are induced by gradient optimization, resulting\nin a black box system that is hard to interpret. In this paper, we take a look\nat the topological structure of neuronal activity in the \"brain\" of Chat-GPT's\nfoundation language model, and analyze it with respect to a metric representing\nthe notion of fairness. We develop a novel approach to visualize GPT's moral\ndimensions. We first compute a fairness metric, inspired by social psychology\nliterature, to identify factors that typically influence fairness assessments\nin humans, such as legitimacy, need, and responsibility. Subsequently, we\nsummarize the manifold's shape using a lower-dimensional simplicial complex,\nwhose topology is derived from this metric. We color it with a heat map\nassociated with this fairness metric, producing human-readable visualizations\nof the high-dimensional sentence manifold. Our results show that sentence\nembeddings based on GPT-3.5 can be decomposed into two submanifolds\ncorresponding to fair and unfair moral judgments. This indicates that GPT-based\nlanguage models develop a moral dimension within their representation spaces\nand induce an understanding of fairness during their training process.",
+ "authors": "Stephen Fitz",
+ "published": "2023-09-17",
+ "updated": "2023-09-17",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.CY",
+ "cs.LG",
+ "cs.NE"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.18580v1",
+ "title": "FFT: Towards Harmlessness Evaluation and Analysis for LLMs with Factuality, Fairness, Toxicity",
+ "abstract": "The widespread of generative artificial intelligence has heightened concerns\nabout the potential harms posed by AI-generated texts, primarily stemming from\nfactoid, unfair, and toxic content. Previous researchers have invested much\neffort in assessing the harmlessness of generative language models. However,\nexisting benchmarks are struggling in the era of large language models (LLMs),\ndue to the stronger language generation and instruction following capabilities,\nas well as wider applications. In this paper, we propose FFT, a new benchmark\nwith 2116 elaborated-designed instances, for LLM harmlessness evaluation with\nfactuality, fairness, and toxicity. To investigate the potential harms of LLMs,\nwe evaluate 9 representative LLMs covering various parameter scales, training\nstages, and creators. Experiments show that the harmlessness of LLMs is still\nunder-satisfactory, and extensive analysis derives some insightful findings\nthat could inspire future research for harmless LLM research.",
+ "authors": "Shiyao Cui, Zhenyu Zhang, Yilong Chen, Wenyuan Zhang, Tianyun Liu, Siqi Wang, Tingwen Liu",
+ "published": "2023-11-30",
+ "updated": "2023-11-30",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.CR"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.01349v1",
+ "title": "Fairness in Large Language Models: A Taxonomic Survey",
+ "abstract": "Large Language Models (LLMs) have demonstrated remarkable success across\nvarious domains. However, despite their promising performance in numerous\nreal-world applications, most of these algorithms lack fairness considerations.\nConsequently, they may lead to discriminatory outcomes against certain\ncommunities, particularly marginalized populations, prompting extensive study\nin fair LLMs. On the other hand, fairness in LLMs, in contrast to fairness in\ntraditional machine learning, entails exclusive backgrounds, taxonomies, and\nfulfillment techniques. To this end, this survey presents a comprehensive\noverview of recent advances in the existing literature concerning fair LLMs.\nSpecifically, a brief introduction to LLMs is provided, followed by an analysis\nof factors contributing to bias in LLMs. Additionally, the concept of fairness\nin LLMs is discussed categorically, summarizing metrics for evaluating bias in\nLLMs and existing algorithms for promoting fairness. Furthermore, resources for\nevaluating bias in LLMs, including toolkits and datasets, are summarized.\nFinally, existing research challenges and open questions are discussed.",
+ "authors": "Zhibo Chu, Zichong Wang, Wenbin Zhang",
+ "published": "2024-03-31",
+ "updated": "2024-03-31",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.18276v1",
+ "title": "Bias Neutralization Framework: Measuring Fairness in Large Language Models with Bias Intelligence Quotient (BiQ)",
+ "abstract": "The burgeoning influence of Large Language Models (LLMs) in shaping public\ndiscourse and decision-making underscores the imperative to address inherent\nbiases within these AI systems. In the wake of AI's expansive integration\nacross sectors, addressing racial bias in LLMs has never been more critical.\nThis paper introduces a novel framework called Comprehensive Bias\nNeutralization Framework (CBNF) which embodies an innovative approach to\nquantifying and mitigating biases within LLMs. Our framework combines the Large\nLanguage Model Bias Index (LLMBI) [Oketunji, A., Anas, M., Saina, D., (2023)]\nand Bias removaL with No Demographics (BLIND) [Orgad, H., Belinkov, Y. (2023)]\nmethodologies to create a new metric called Bias Intelligence Quotient\n(BiQ)which detects, measures, and mitigates racial bias in LLMs without\nreliance on demographic annotations.\n By introducing a new metric called BiQ that enhances LLMBI with additional\nfairness metrics, CBNF offers a multi-dimensional metric for bias assessment,\nunderscoring the necessity of a nuanced approach to fairness in AI [Mehrabi et\nal., 2021]. This paper presents a detailed analysis of Latimer AI (a language\nmodel incrementally trained on black history and culture) in comparison to\nChatGPT 3.5, illustrating Latimer AI's efficacy in detecting racial, cultural,\nand gender biases through targeted training and refined bias mitigation\nstrategies [Latimer & Bender, 2023].",
+ "authors": "Malur Narayan, John Pasmore, Elton Sampaio, Vijay Raghavan, Gabriella Waters",
+ "published": "2024-04-28",
+ "updated": "2024-04-28",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "D.1; I.2"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.11764v1",
+ "title": "ChatGPT Based Data Augmentation for Improved Parameter-Efficient Debiasing of LLMs",
+ "abstract": "Large Language models (LLMs), while powerful, exhibit harmful social biases.\nDebiasing is often challenging due to computational costs, data constraints,\nand potential degradation of multi-task language capabilities. This work\nintroduces a novel approach utilizing ChatGPT to generate synthetic training\ndata, aiming to enhance the debiasing of LLMs. We propose two strategies:\nTargeted Prompting, which provides effective debiasing for known biases but\nnecessitates prior specification of bias in question; and General Prompting,\nwhich, while slightly less effective, offers debiasing across various\ncategories. We leverage resource-efficient LLM debiasing using adapter tuning\nand compare the effectiveness of our synthetic data to existing debiasing\ndatasets. Our results reveal that: (1) ChatGPT can efficiently produce\nhigh-quality training data for debiasing other LLMs; (2) data produced via our\napproach surpasses existing datasets in debiasing performance while also\npreserving internal knowledge of a pre-trained LLM; and (3) synthetic data\nexhibits generalizability across categories, effectively mitigating various\nbiases, including intersectional ones. These findings underscore the potential\nof synthetic data in advancing the fairness of LLMs with minimal retraining\ncost.",
+ "authors": "Pengrui Han, Rafal Kocielnik, Adhithya Saravanan, Roy Jiang, Or Sharir, Anima Anandkumar",
+ "published": "2024-02-19",
+ "updated": "2024-02-19",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.CY",
+ "68T50",
+ "I.2.7; K.4.1"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.13862v2",
+ "title": "A Trip Towards Fairness: Bias and De-Biasing in Large Language Models",
+ "abstract": "Cheap-to-Build Very Large-Language Models (CtB-LLMs) with affordable training\nare emerging as the next big revolution in natural language processing and\nunderstanding. These CtB-LLMs are democratizing access to trainable Very\nLarge-Language Models (VLLMs) and, thus, may represent the building blocks of\nmany NLP systems solving downstream tasks. Hence, a little or a large bias in\nCtB-LLMs may cause huge harm. In this paper, we performed a large investigation\nof the bias of three families of CtB-LLMs, and we showed that debiasing\ntechniques are effective and usable. Indeed, according to current tests, the\nLLaMA and the OPT families have an important bias in gender, race, religion,\nand profession. In contrast to the analysis for other LLMs, we discovered that\nbias depends not on the number of parameters but on the perplexity. Finally,\nthe debiasing of OPT using LoRA reduces bias up to 4.12 points in the\nnormalized stereotype score.",
+ "authors": "Leonardo Ranaldi, Elena Sofia Ruzzetti, Davide Venditti, Dario Onorati, Fabio Massimo Zanzotto",
+ "published": "2023-05-23",
+ "updated": "2023-08-29",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.06852v2",
+ "title": "ChemLLM: A Chemical Large Language Model",
+ "abstract": "Large language models (LLMs) have made impressive progress in chemistry\napplications. However, the community lacks an LLM specifically designed for\nchemistry. The main challenges are two-fold: firstly, most chemical data and\nscientific knowledge are stored in structured databases, which limits the\nmodel's ability to sustain coherent dialogue when used directly. Secondly,\nthere is an absence of objective and fair benchmark that encompass most\nchemistry tasks. Here, we introduce ChemLLM, a comprehensive framework that\nfeatures the first LLM dedicated to chemistry. It also includes ChemData, a\ndataset specifically designed for instruction tuning, and ChemBench, a robust\nbenchmark covering nine essential chemistry tasks. ChemLLM is adept at\nperforming various tasks across chemical disciplines with fluid dialogue\ninteraction. Notably, ChemLLM achieves results comparable to GPT-4 on the core\nchemical tasks and demonstrates competitive performance with LLMs of similar\nsize in general scenarios. ChemLLM paves a new path for exploration in chemical\nstudies, and our method of incorporating structured chemical knowledge into\ndialogue systems sets a new standard for developing LLMs in various scientific\nfields. Codes, Datasets, and Model weights are publicly accessible at\nhttps://hf.co/AI4Chem",
+ "authors": "Di Zhang, Wei Liu, Qian Tan, Jingdan Chen, Hang Yan, Yuliang Yan, Jiatong Li, Weiran Huang, Xiangyu Yue, Wanli Ouyang, Dongzhan Zhou, Shufei Zhang, Mao Su, Han-Sen Zhong, Yuqiang Li",
+ "published": "2024-02-10",
+ "updated": "2024-04-25",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI",
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.10199v3",
+ "title": "CULTURE-GEN: Revealing Global Cultural Perception in Language Models through Natural Language Prompting",
+ "abstract": "As the utilization of large language models (LLMs) has proliferated\nworldwide, it is crucial for them to have adequate knowledge and fair\nrepresentation for diverse global cultures. In this work, we uncover culture\nperceptions of three SOTA models on 110 countries and regions on 8\nculture-related topics through culture-conditioned generations, and extract\nsymbols from these generations that are associated to each culture by the LLM.\nWe discover that culture-conditioned generation consist of linguistic \"markers\"\nthat distinguish marginalized cultures apart from default cultures. We also\ndiscover that LLMs have an uneven degree of diversity in the culture symbols,\nand that cultures from different geographic regions have different presence in\nLLMs' culture-agnostic generation. Our findings promote further research in\nstudying the knowledge and fairness of global culture perception in LLMs. Code\nand Data can be found in: https://github.com/huihanlhh/Culture-Gen/",
+ "authors": "Huihan Li, Liwei Jiang, Jena D. Huang, Hyunwoo Kim, Sebastin Santy, Taylor Sorensen, Bill Yuchen Lin, Nouha Dziri, Xiang Ren, Yejin Choi",
+ "published": "2024-04-16",
+ "updated": "2024-04-26",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.01964v1",
+ "title": "Don't Make Your LLM an Evaluation Benchmark Cheater",
+ "abstract": "Large language models~(LLMs) have greatly advanced the frontiers of\nartificial intelligence, attaining remarkable improvement in model capacity. To\nassess the model performance, a typical approach is to construct evaluation\nbenchmarks for measuring the ability level of LLMs in different aspects.\nDespite that a number of high-quality benchmarks have been released, the\nconcerns about the appropriate use of these benchmarks and the fair comparison\nof different models are increasingly growing. Considering these concerns, in\nthis paper, we discuss the potential risk and impact of inappropriately using\nevaluation benchmarks and misleadingly interpreting the evaluation results.\nSpecially, we focus on a special issue that would lead to inappropriate\nevaluation, \\ie \\emph{benchmark leakage}, referring that the data related to\nevaluation sets is occasionally used for model training. This phenomenon now\nbecomes more common since pre-training data is often prepared ahead of model\ntest. We conduct extensive experiments to study the effect of benchmark\nleverage, and find that it can dramatically boost the evaluation results, which\nwould finally lead to an unreliable assessment of model performance. To improve\nthe use of existing evaluation benchmarks, we finally present several\nguidelines for both LLM developers and benchmark maintainers. We hope this work\ncan draw attention to appropriate training and evaluation of LLMs.",
+ "authors": "Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han",
+ "published": "2023-11-03",
+ "updated": "2023-11-03",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2308.10397v2",
+ "title": "FairMonitor: A Four-Stage Automatic Framework for Detecting Stereotypes and Biases in Large Language Models",
+ "abstract": "Detecting stereotypes and biases in Large Language Models (LLMs) can enhance\nfairness and reduce adverse impacts on individuals or groups when these LLMs\nare applied. However, the majority of existing methods focus on measuring the\nmodel's preference towards sentences containing biases and stereotypes within\ndatasets, which lacks interpretability and cannot detect implicit biases and\nstereotypes in the real world. To address this gap, this paper introduces a\nfour-stage framework to directly evaluate stereotypes and biases in the\ngenerated content of LLMs, including direct inquiry testing, serial or adapted\nstory testing, implicit association testing, and unknown situation testing.\nAdditionally, the paper proposes multi-dimensional evaluation metrics and\nexplainable zero-shot prompts for automated evaluation. Using the education\nsector as a case study, we constructed the Edu-FairMonitor based on the\nfour-stage framework, which encompasses 12,632 open-ended questions covering\nnine sensitive factors and 26 educational scenarios. Experimental results\nreveal varying degrees of stereotypes and biases in five LLMs evaluated on\nEdu-FairMonitor. Moreover, the results of our proposed automated evaluation\nmethod have shown a high correlation with human annotations.",
+ "authors": "Yanhong Bai, Jiabao Zhao, Jinxin Shi, Tingjiang Wei, Xingjiao Wu, Liang He",
+ "published": "2023-08-21",
+ "updated": "2023-10-27",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2308.05374v2",
+ "title": "Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment",
+ "abstract": "Ensuring alignment, which refers to making models behave in accordance with\nhuman intentions [1,2], has become a critical task before deploying large\nlanguage models (LLMs) in real-world applications. For instance, OpenAI devoted\nsix months to iteratively aligning GPT-4 before its release [3]. However, a\nmajor challenge faced by practitioners is the lack of clear guidance on\nevaluating whether LLM outputs align with social norms, values, and\nregulations. This obstacle hinders systematic iteration and deployment of LLMs.\nTo address this issue, this paper presents a comprehensive survey of key\ndimensions that are crucial to consider when assessing LLM trustworthiness. The\nsurvey covers seven major categories of LLM trustworthiness: reliability,\nsafety, fairness, resistance to misuse, explainability and reasoning, adherence\nto social norms, and robustness. Each major category is further divided into\nseveral sub-categories, resulting in a total of 29 sub-categories.\nAdditionally, a subset of 8 sub-categories is selected for further\ninvestigation, where corresponding measurement studies are designed and\nconducted on several widely-used LLMs. The measurement results indicate that,\nin general, more aligned models tend to perform better in terms of overall\ntrustworthiness. However, the effectiveness of alignment varies across the\ndifferent trustworthiness categories considered. This highlights the importance\nof conducting more fine-grained analyses, testing, and making continuous\nimprovements on LLM alignment. By shedding light on these key dimensions of LLM\ntrustworthiness, this paper aims to provide valuable insights and guidance to\npractitioners in the field. Understanding and addressing these concerns will be\ncrucial in achieving reliable and ethically sound deployment of LLMs in various\napplications.",
+ "authors": "Yang Liu, Yuanshun Yao, Jean-Francois Ton, Xiaoying Zhang, Ruocheng Guo, Hao Cheng, Yegor Klochkov, Muhammad Faaiz Taufiq, Hang Li",
+ "published": "2023-08-10",
+ "updated": "2024-03-21",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.17553v1",
+ "title": "RuBia: A Russian Language Bias Detection Dataset",
+ "abstract": "Warning: this work contains upsetting or disturbing content.\n Large language models (LLMs) tend to learn the social and cultural biases\npresent in the raw pre-training data. To test if an LLM's behavior is fair,\nfunctional datasets are employed, and due to their purpose, these datasets are\nhighly language and culture-specific. In this paper, we address a gap in the\nscope of multilingual bias evaluation by presenting a bias detection dataset\nspecifically designed for the Russian language, dubbed as RuBia. The RuBia\ndataset is divided into 4 domains: gender, nationality, socio-economic status,\nand diverse, each of the domains is further divided into multiple fine-grained\nsubdomains. Every example in the dataset consists of two sentences with the\nfirst reinforcing a potentially harmful stereotype or trope and the second\ncontradicting it. These sentence pairs were first written by volunteers and\nthen validated by native-speaking crowdsourcing workers. Overall, there are\nnearly 2,000 unique sentence pairs spread over 19 subdomains in RuBia. To\nillustrate the dataset's purpose, we conduct a diagnostic evaluation of\nstate-of-the-art or near-state-of-the-art LLMs and discuss the LLMs'\npredisposition to social biases.",
+ "authors": "Veronika Grigoreva, Anastasiia Ivanova, Ilseyar Alimova, Ekaterina Artemova",
+ "published": "2024-03-26",
+ "updated": "2024-03-26",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2401.15585v1",
+ "title": "Evaluating Gender Bias in Large Language Models via Chain-of-Thought Prompting",
+ "abstract": "There exist both scalable tasks, like reading comprehension and\nfact-checking, where model performance improves with model size, and unscalable\ntasks, like arithmetic reasoning and symbolic reasoning, where model\nperformance does not necessarily improve with model size. Large language models\n(LLMs) equipped with Chain-of-Thought (CoT) prompting are able to make accurate\nincremental predictions even on unscalable tasks. Unfortunately, despite their\nexceptional reasoning abilities, LLMs tend to internalize and reproduce\ndiscriminatory societal biases. Whether CoT can provide discriminatory or\negalitarian rationalizations for the implicit information in unscalable tasks\nremains an open question.\n In this study, we examine the impact of LLMs' step-by-step predictions on\ngender bias in unscalable tasks. For this purpose, we construct a benchmark for\nan unscalable task where the LLM is given a list of words comprising feminine,\nmasculine, and gendered occupational words, and is required to count the number\nof feminine and masculine words. In our CoT prompts, we require the LLM to\nexplicitly indicate whether each word in the word list is a feminine or\nmasculine before making the final predictions. With counting and handling the\nmeaning of words, this benchmark has characteristics of both arithmetic\nreasoning and symbolic reasoning. Experimental results in English show that\nwithout step-by-step prediction, most LLMs make socially biased predictions,\ndespite the task being as simple as counting words. Interestingly, CoT\nprompting reduces this unconscious social bias in LLMs and encourages fair\npredictions.",
+ "authors": "Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki, Timothy Baldwin",
+ "published": "2024-01-28",
+ "updated": "2024-01-28",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.06500v1",
+ "title": "MetaAgents: Simulating Interactions of Human Behaviors for LLM-based Task-oriented Coordination via Collaborative Generative Agents",
+ "abstract": "Significant advancements have occurred in the application of Large Language\nModels (LLMs) for various tasks and social simulations. Despite this, their\ncapacities to coordinate within task-oriented social contexts are\nunder-explored. Such capabilities are crucial if LLMs are to effectively mimic\nhuman-like social behavior and produce meaningful results. To bridge this gap,\nwe introduce collaborative generative agents, endowing LLM-based Agents with\nconsistent behavior patterns and task-solving abilities. We situate these\nagents in a simulated job fair environment as a case study to scrutinize their\ncoordination skills. We propose a novel framework that equips collaborative\ngenerative agents with human-like reasoning abilities and specialized skills.\nOur evaluation demonstrates that these agents show promising performance.\nHowever, we also uncover limitations that hinder their effectiveness in more\ncomplex coordination tasks. Our work provides valuable insights into the role\nand evolution of LLMs in task-oriented social simulations.",
+ "authors": "Yuan Li, Yixuan Zhang, Lichao Sun",
+ "published": "2023-10-10",
+ "updated": "2023-10-10",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2309.14345v2",
+ "title": "Bias Testing and Mitigation in LLM-based Code Generation",
+ "abstract": "Utilizing state-of-the-art Large Language Models (LLMs), automatic code\ngeneration models play a pivotal role in enhancing the productivity of software\ndevelopment procedures. As the adoption of LLMs becomes more widespread in\nsoftware coding ecosystems, a pressing issue has emerged: does the generated\ncode contain social bias and unfairness, such as those related to age, gender,\nand race? This issue concerns the integrity, fairness, and ethical foundation\nof software applications that depend on the code generated by these models, yet\nis under-explored in the literature. This paper presents a novel bias testing\nframework that is specifically designed for code generation tasks. Based on\nthis framework, we conduct an extensive evaluation of the bias in code\ngenerated by five state-of-the-art LLMs. Our findings reveal that 20.29% to\n44.93% code functions generated by the models under study are biased when\nhandling bias sensitive tasks (i.e., tasks that involve sensitive attributes\nsuch as age and gender). This indicates that the existing LLMs can be unfair in\ncode generation, posing risks of unintended and harmful software behaviors. To\nmitigate bias for code generation models, we evaluate five bias mitigation\nprompt strategies, i.e., utilizing bias testing results to refine the code\n(zero-shot), one-, few-shot, and two Chain-of-Thought (CoT) prompts. Our\nevaluation results illustrate that these strategies are all effective in\nmitigating bias. Overall, one-shot and few-shot learning are the two most\neffective. For GPT-4, 80% to 90% code bias can be removed with one-shot\nlearning.",
+ "authors": "Dong Huang, Qingwen Bu, Jie Zhang, Xiaofei Xie, Junjie Chen, Heming Cui",
+ "published": "2023-09-03",
+ "updated": "2024-01-09",
+ "primary_cat": "cs.SE",
+ "cats": [
+ "cs.SE",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2405.02219v1",
+ "title": "FairEvalLLM. A Comprehensive Framework for Benchmarking Fairness in Large Language Model Recommender Systems",
+ "abstract": "This paper presents a framework for evaluating fairness in recommender\nsystems powered by Large Language Models (RecLLMs), addressing the need for a\nunified approach that spans various fairness dimensions including sensitivity\nto user attributes, intrinsic fairness, and discussions of fairness based on\nunderlying benefits. In addition, our framework introduces counterfactual\nevaluations and integrates diverse user group considerations to enhance the\ndiscourse on fairness evaluation for RecLLMs.\n Our key contributions include the development of a robust framework for\nfairness evaluation in LLM-based recommendations and a structured method to\ncreate \\textit{informative user profiles} from demographic data, historical\nuser preferences, and recent interactions. We argue that the latter is\nessential for enhancing personalization in such systems, especially in\ntemporal-driven scenarios. We demonstrate the utility of our framework through\npractical applications on two datasets, LastFM-1K and ML-1M. We conduct\nexperiments on a subsample of 80 users from each dataset, testing and assessing\nthe effectiveness of various prompt construction scenarios and in-context\nlearning, comprising more than 50 scenarios. This results in more than 4000\nrecommendations (80 * 50 = 4000). Our study reveals that while there are no\nsignificant unfairness issues in scenarios involving sensitive attributes, some\nconcerns remain. However, in terms of intrinsic fairness, which does not\ninvolve direct sensitivity, unfairness across demographic groups remains\nsignificant. The code and data used for this paper are available at:\n\\url{https://shorturl.at/awBFM}.",
+ "authors": "Yashar Deldjoo",
+ "published": "2024-05-03",
+ "updated": "2024-05-03",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.18130v2",
+ "title": "DELPHI: Data for Evaluating LLMs' Performance in Handling Controversial Issues",
+ "abstract": "Controversy is a reflection of our zeitgeist, and an important aspect to any\ndiscourse. The rise of large language models (LLMs) as conversational systems\nhas increased public reliance on these systems for answers to their various\nquestions. Consequently, it is crucial to systematically examine how these\nmodels respond to questions that pertaining to ongoing debates. However, few\nsuch datasets exist in providing human-annotated labels reflecting the\ncontemporary discussions. To foster research in this area, we propose a novel\nconstruction of a controversial questions dataset, expanding upon the publicly\nreleased Quora Question Pairs Dataset. This dataset presents challenges\nconcerning knowledge recency, safety, fairness, and bias. We evaluate different\nLLMs using a subset of this dataset, illuminating how they handle controversial\nissues and the stances they adopt. This research ultimately contributes to our\nunderstanding of LLMs' interaction with controversial issues, paving the way\nfor improvements in their comprehension and handling of complex societal\ndebates.",
+ "authors": "David Q. Sun, Artem Abzaliev, Hadas Kotek, Zidi Xiu, Christopher Klein, Jason D. Williams",
+ "published": "2023-10-27",
+ "updated": "2023-11-07",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.HC"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.13925v1",
+ "title": "MARIO Eval: Evaluate Your Math LLM with your Math LLM--A mathematical dataset evaluation toolkit",
+ "abstract": "Large language models (LLMs) have been explored in a variety of reasoning\ntasks including solving of mathematical problems. Each math dataset typically\nincludes its own specially designed evaluation script, which, while suitable\nfor its intended use, lacks generalizability across different datasets.\nConsequently, updates and adaptations to these evaluation tools tend to occur\nwithout being systematically reported, leading to inconsistencies and obstacles\nto fair comparison across studies. To bridge this gap, we introduce a\ncomprehensive mathematical evaluation toolkit that not only utilizes a python\ncomputer algebra system (CAS) for its numerical accuracy, but also integrates\nan optional LLM, known for its considerable natural language processing\ncapabilities. To validate the effectiveness of our toolkit, we manually\nannotated two distinct datasets. Our experiments demonstrate that the toolkit\nyields more robust evaluation results compared to prior works, even without an\nLLM. Furthermore, when an LLM is incorporated, there is a notable enhancement.\nThe code for our method will be made available at\n\\url{https://github.com/MARIO-Math-Reasoning/math_evaluation}.",
+ "authors": "Boning Zhang, Chengxi Li, Kai Fan",
+ "published": "2024-04-22",
+ "updated": "2024-04-22",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2312.07420v1",
+ "title": "FairSISA: Ensemble Post-Processing to Improve Fairness of Unlearning in LLMs",
+ "abstract": "Training large language models (LLMs) is a costly endeavour in terms of time\nand computational resources. The large amount of training data used during the\nunsupervised pre-training phase makes it difficult to verify all data and,\nunfortunately, undesirable data may be ingested during training. Re-training\nfrom scratch is impractical and has led to the creation of the 'unlearning'\ndiscipline where models are modified to \"unlearn\" undesirable information\nwithout retraining. However, any modification can alter the behaviour of LLMs,\nespecially on key dimensions such as fairness. This is the first work that\nexamines this interplay between unlearning and fairness for LLMs. In\nparticular, we focus on a popular unlearning framework known as SISA [Bourtoule\net al., 2021], which creates an ensemble of models trained on disjoint shards.\nWe evaluate the performance-fairness trade-off for SISA, and empirically\ndemsontrate that SISA can indeed reduce fairness in LLMs. To remedy this, we\npropose post-processing bias mitigation techniques for ensemble models produced\nby SISA. We adapt the post-processing fairness improvement technique from\n[Hardt et al., 2016] to design three methods that can handle model ensembles,\nand prove that one of the methods is an optimal fair predictor for ensemble of\nmodels. Through experimental results, we demonstrate the efficacy of our\npost-processing framework called 'FairSISA'.",
+ "authors": "Swanand Ravindra Kadhe, Anisa Halimi, Ambrish Rawat, Nathalie Baracaldo",
+ "published": "2023-12-12",
+ "updated": "2023-12-12",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2312.06056v1",
+ "title": "METAL: Metamorphic Testing Framework for Analyzing Large-Language Model Qualities",
+ "abstract": "Large-Language Models (LLMs) have shifted the paradigm of natural language\ndata processing. However, their black-boxed and probabilistic characteristics\ncan lead to potential risks in the quality of outputs in diverse LLM\napplications. Recent studies have tested Quality Attributes (QAs), such as\nrobustness or fairness, of LLMs by generating adversarial input texts. However,\nexisting studies have limited their coverage of QAs and tasks in LLMs and are\ndifficult to extend. Additionally, these studies have only used one evaluation\nmetric, Attack Success Rate (ASR), to assess the effectiveness of their\napproaches. We propose a MEtamorphic Testing for Analyzing LLMs (METAL)\nframework to address these issues by applying Metamorphic Testing (MT)\ntechniques. This approach facilitates the systematic testing of LLM qualities\nby defining Metamorphic Relations (MRs), which serve as modularized evaluation\nmetrics. The METAL framework can automatically generate hundreds of MRs from\ntemplates that cover various QAs and tasks. In addition, we introduced novel\nmetrics that integrate the ASR method into the semantic qualities of text to\nassess the effectiveness of MRs accurately. Through the experiments conducted\nwith three prominent LLMs, we have confirmed that the METAL framework\neffectively evaluates essential QAs on primary LLM tasks and reveals the\nquality risks in LLMs. Moreover, the newly proposed metrics can guide the\noptimal MRs for testing each task and suggest the most effective method for\ngenerating MRs.",
+ "authors": "Sangwon Hyun, Mingyu Guo, M. Ali Babar",
+ "published": "2023-12-11",
+ "updated": "2023-12-11",
+ "primary_cat": "cs.SE",
+ "cats": [
+ "cs.SE",
+ "cs.AI",
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.15491v1",
+ "title": "Open Source Conversational LLMs do not know most Spanish words",
+ "abstract": "The growing interest in Large Language Models (LLMs) and in particular in\nconversational models with which users can interact has led to the development\nof a large number of open-source chat LLMs. These models are evaluated on a\nwide range of benchmarks to assess their capabilities in answering questions or\nsolving problems on almost any possible topic or to test their ability to\nreason or interpret texts. Instead, the evaluation of the knowledge that these\nmodels have of the languages has received much less attention. For example, the\nwords that they can recognize and use in different languages. In this paper, we\nevaluate the knowledge that open-source chat LLMs have of Spanish words by\ntesting a sample of words in a reference dictionary. The results show that\nopen-source chat LLMs produce incorrect meanings for an important fraction of\nthe words and are not able to use most of the words correctly to write\nsentences with context. These results show how Spanish is left behind in the\nopen-source LLM race and highlight the need to push for linguistic fairness in\nconversational LLMs ensuring that they provide similar performance across\nlanguages.",
+ "authors": "Javier Conde, Miguel Gonz\u00e1lez, Nina Melero, Raquel Ferrando, Gonzalo Mart\u00ednez, Elena Merino-G\u00f3mez, Jos\u00e9 Alberto Hern\u00e1ndez, Pedro Reviriego",
+ "published": "2024-03-21",
+ "updated": "2024-03-21",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2308.11483v1",
+ "title": "Large Language Models Sensitivity to The Order of Options in Multiple-Choice Questions",
+ "abstract": "Large Language Models (LLMs) have demonstrated remarkable capabilities in\nvarious NLP tasks. However, previous works have shown these models are\nsensitive towards prompt wording, and few-shot demonstrations and their order,\nposing challenges to fair assessment of these models. As these models become\nmore powerful, it becomes imperative to understand and address these\nlimitations. In this paper, we focus on LLMs robustness on the task of\nmultiple-choice questions -- commonly adopted task to study reasoning and\nfact-retrieving capability of LLMs. Investigating the sensitivity of LLMs\ntowards the order of options in multiple-choice questions, we demonstrate a\nconsiderable performance gap of approximately 13% to 75% in LLMs on different\nbenchmarks, when answer options are reordered, even when using demonstrations\nin a few-shot setting. Through a detailed analysis, we conjecture that this\nsensitivity arises when LLMs are uncertain about the prediction between the\ntop-2/3 choices, and specific options placements may favor certain prediction\nbetween those top choices depending on the question caused by positional bias.\nWe also identify patterns in top-2 choices that amplify or mitigate the model's\nbias toward option placement. We found that for amplifying bias, the optimal\nstrategy involves positioning the top two choices as the first and last\noptions. Conversely, to mitigate bias, we recommend placing these choices among\nthe adjacent options. To validate our conjecture, we conduct various\nexperiments and adopt two approaches to calibrate LLMs' predictions, leading to\nup to 8 percentage points improvement across different models and benchmarks.",
+ "authors": "Pouya Pezeshkpour, Estevam Hruschka",
+ "published": "2023-08-22",
+ "updated": "2023-08-22",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2312.14804v1",
+ "title": "Use large language models to promote equity",
+ "abstract": "Advances in large language models (LLMs) have driven an explosion of interest\nabout their societal impacts. Much of the discourse around how they will impact\nsocial equity has been cautionary or negative, focusing on questions like \"how\nmight LLMs be biased and how would we mitigate those biases?\" This is a vital\ndiscussion: the ways in which AI generally, and LLMs specifically, can entrench\nbiases have been well-documented. But equally vital, and much less discussed,\nis the more opportunity-focused counterpoint: \"what promising applications do\nLLMs enable that could promote equity?\" If LLMs are to enable a more equitable\nworld, it is not enough just to play defense against their biases and failure\nmodes. We must also go on offense, applying them positively to equity-enhancing\nuse cases to increase opportunities for underserved groups and reduce societal\ndiscrimination. There are many choices which determine the impact of AI, and a\nfundamental choice very early in the pipeline is the problems we choose to\napply it to. If we focus only later in the pipeline -- making LLMs marginally\nmore fair as they facilitate use cases which intrinsically entrench power -- we\nwill miss an important opportunity to guide them to equitable impacts. Here, we\nhighlight the emerging potential of LLMs to promote equity by presenting four\nnewly possible, promising research directions, while keeping risks and\ncautionary points in clear view.",
+ "authors": "Emma Pierson, Divya Shanmugam, Rajiv Movva, Jon Kleinberg, Monica Agrawal, Mark Dredze, Kadija Ferryman, Judy Wawira Gichoya, Dan Jurafsky, Pang Wei Koh, Karen Levy, Sendhil Mullainathan, Ziad Obermeyer, Harini Suresh, Keyon Vafa",
+ "published": "2023-12-22",
+ "updated": "2023-12-22",
+ "primary_cat": "cs.CY",
+ "cats": [
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.03192v1",
+ "title": "Do Large Language Models Rank Fairly? An Empirical Study on the Fairness of LLMs as Rankers",
+ "abstract": "The integration of Large Language Models (LLMs) in information retrieval has\nraised a critical reevaluation of fairness in the text-ranking models. LLMs,\nsuch as GPT models and Llama2, have shown effectiveness in natural language\nunderstanding tasks, and prior works (e.g., RankGPT) have also demonstrated\nthat the LLMs exhibit better performance than the traditional ranking models in\nthe ranking task. However, their fairness remains largely unexplored. This\npaper presents an empirical study evaluating these LLMs using the TREC Fair\nRanking dataset, focusing on the representation of binary protected attributes\nsuch as gender and geographic location, which are historically underrepresented\nin search outcomes. Our analysis delves into how these LLMs handle queries and\ndocuments related to these attributes, aiming to uncover biases in their\nranking algorithms. We assess fairness from both user and content perspectives,\ncontributing an empirical benchmark for evaluating LLMs as the fair ranker.",
+ "authors": "Yuan Wang, Xuyang Wu, Hsin-Tai Wu, Zhiqiang Tao, Yi Fang",
+ "published": "2024-04-04",
+ "updated": "2024-04-04",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR",
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2401.08495v2",
+ "title": "Large Language Models Portray Socially Subordinate Groups as More Homogeneous, Consistent with a Bias Observed in Humans",
+ "abstract": "Large language models (LLMs) are becoming pervasive in everyday life, yet\ntheir propensity to reproduce biases inherited from training data remains a\npressing concern. Prior investigations into bias in LLMs have focused on the\nassociation of social groups with stereotypical attributes. However, this is\nonly one form of human bias such systems may reproduce. We investigate a new\nform of bias in LLMs that resembles a social psychological phenomenon where\nsocially subordinate groups are perceived as more homogeneous than socially\ndominant groups. We had ChatGPT, a state-of-the-art LLM, generate texts about\nintersectional group identities and compared those texts on measures of\nhomogeneity. We consistently found that ChatGPT portrayed African, Asian, and\nHispanic Americans as more homogeneous than White Americans, indicating that\nthe model described racial minority groups with a narrower range of human\nexperience. ChatGPT also portrayed women as more homogeneous than men, but\nthese differences were small. Finally, we found that the effect of gender\ndiffered across racial/ethnic groups such that the effect of gender was\nconsistent within African and Hispanic Americans but not within Asian and White\nAmericans. We argue that the tendency of LLMs to describe groups as less\ndiverse risks perpetuating stereotypes and discriminatory behavior.",
+ "authors": "Messi H. J. Lee, Jacob M. Montgomery, Calvin K. Lai",
+ "published": "2024-01-16",
+ "updated": "2024-04-26",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2401.00588v1",
+ "title": "Fairness in Serving Large Language Models",
+ "abstract": "High-demand LLM inference services (e.g., ChatGPT and BARD) support a wide\nrange of requests from short chat conversations to long document reading. To\nensure that all client requests are processed fairly, most major LLM inference\nservices have request rate limits, to ensure that no client can dominate the\nrequest queue. However, this rudimentary notion of fairness also results in\nunder-utilization of the resources and poor client experience when there is\nspare capacity. While there is a rich literature on fair scheduling, serving\nLLMs presents new challenges due to their unpredictable request lengths and\ntheir unique batching characteristics on parallel accelerators. This paper\nintroduces the definition of LLM serving fairness based on a cost function that\naccounts for the number of input and output tokens processed. To achieve\nfairness in serving, we propose a novel scheduling algorithm, the Virtual Token\nCounter (VTC), a fair scheduler based on the continuous batching mechanism. We\nprove a 2x tight upper bound on the service difference between two backlogged\nclients, adhering to the requirement of work-conserving. Through extensive\nexperiments, we demonstrate the superior performance of VTC in ensuring\nfairness, especially in contrast to other baseline methods, which exhibit\nshortcomings under various conditions.",
+ "authors": "Ying Sheng, Shiyi Cao, Dacheng Li, Banghua Zhu, Zhuohan Li, Danyang Zhuo, Joseph E. Gonzalez, Ion Stoica",
+ "published": "2023-12-31",
+ "updated": "2023-12-31",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI",
+ "cs.LG",
+ "cs.PF"
+ ],
+ "category": "LLM Fairness"
+ }
+ ],
+ [
+ {
+ "url": "http://arxiv.org/abs/2405.01573v1",
+ "title": "Class-Level Code Generation from Natural Language Using Iterative, Tool-Enhanced Reasoning over Repository",
+ "abstract": "LLMs have demonstrated significant potential in code generation tasks,\nachieving promising results at the function or statement level in various\nbenchmarks. However, the complexities associated with creating code artifacts\nlike classes, particularly within the context of real-world software\nrepositories, remain underexplored. Existing research often treats class-level\ngeneration as an isolated task, neglecting the intricate dependencies and\ninteractions that characterize real-world software development environments. To\naddress this gap, we introduce RepoClassBench, a benchmark designed to\nrigorously evaluate LLMs in generating complex, class-level code within\nreal-world repositories. RepoClassBench includes natural language to class\ngeneration tasks across Java and Python, from a selection of public\nrepositories. We ensure that each class in our dataset not only has cross-file\ndependencies within the repository but also includes corresponding test cases\nto verify its functionality. We find that current models struggle with the\nrealistic challenges posed by our benchmark, primarily due to their limited\nexposure to relevant repository contexts. To address this shortcoming, we\nintroduce Retrieve-Repotools-Reflect (RRR), a novel approach that equips LLMs\nwith static analysis tools to iteratively navigate & reason about\nrepository-level context in an agent-based framework. Our experiments\ndemonstrate that RRR significantly outperforms existing baselines on\nRepoClassBench, showcasing its effectiveness across programming languages and\nin various settings. Our findings emphasize the need for benchmarks that\nincorporate repository-level dependencies to more accurately reflect the\ncomplexities of software development. Our work illustrates the benefits of\nleveraging specialized tools to enhance LLMs understanding of repository\ncontext. We plan to make our dataset and evaluation harness public.",
+ "authors": "Ajinkya Deshpande, Anmol Agarwal, Shashank Shet, Arun Iyer, Aditya Kanade, Ramakrishna Bairi, Suresh Parthasarathy",
+ "published": "2024-04-22",
+ "updated": "2024-04-22",
+ "primary_cat": "cs.SE",
+ "cats": [
+ "cs.SE",
+ "cs.AI"
+ ],
+ "label": "Original Paper",
+ "paper_cat": "LLM Fairness",
+ "gt": "Large Language Models have seen wide success on various coding tasks. Many benchmarks have been created to assess their performance. CoNaLA (Yin et al., 2018), consisting of 500 examples is a statement-level benchmark where the target of each example contains one statement. HumanEval (Chen et al., 2021) and MBPP (Odena et al., 2021) are two widely 2 Initial Generation Oracle Call Tool Invocation Reflection Improve Generation NL description: The public class StringNumberHandler, which extends the abstract class AbstractCellHandler \u2026The `getCellValue` method is a protected method \u2026 formatting function of the relevant utilities class specialized in handling Excel numbers, and returns the resultant string Initial class: public class String NumberHandler \u2026{ protected String getCellValue(\u2026) return NumberUtils.formatNumber(\u2026) } cannot find symbol symbol: variable NumberUtils Tool\u2019s output: Tool call: get_relevant_code('format numeric value\u2019) Output: The following pieces of code from the repository may be relevant for the query \u201cformat numeric value\u201d: #### Code Piece 1: For class io.github.zouzhiy.excel.utils.ExcelNumberUtils:: \u2026. static members: -format(java.lang.Number number, java.lang.String format) : String instance members: -format(java.lang.Number number, java.lang.String format) : String Reflection output: The feedback indicates that the class NumberUtils does not exist. I need to use the class ExcelNumberUtils instead. \ud835\udc65 Class Description Independent Tools \ud835\udc660 \ud835\udc610 \u2133\ud835\udc65, \ud835\udc610 LLM: Create class \ud835\udc66\ud835\udc56 \ud835\udc66\ud835\udc56+1 Dependent Tools Tool Descriptions Tool Execution LLM: Pick a tool Selected Tool Tool\u2019s output \u2133\ud835\udc65, \ud835\udc66\ud835\udc56, \ud835\udc53\ud835\udc4f\ud835\udc56, \ud835\udc47 \ud835\udc5b \ud835\udc47\ud835\udc56 \ud835\udc61\ud835\udc56 \u2133\ud835\udc65, \ud835\udc66\ud835\udc56, \ud835\udc53\ud835\udc4f\ud835\udc56, \ud835\udc61\ud835\udc56 Reflection output \ud835\udc5f\ud835\udc56 Improved class code \u2133\ud835\udc65, \ud835\udc66\ud835\udc56, \ud835\udc53\ud835\udc4f\ud835\udc56, \ud835\udc61\ud835\udc56, \ud835\udc5f \ud835\udc56 \ud835\udc65 \ud835\udc65 \ud835\udc65 \ud835\udc66\ud835\udc56 \ud835\udc66\ud835\udc56 \ud835\udc66\ud835\udc56 \ud835\udc53\ud835\udc4f\ud835\udc56 \ud835\udc53\ud835\udc4f\ud835\udc56 \ud835\udc53\ud835\udc4f\ud835\udc56 \ud835\udc61\ud835\udc56 \ud835\udc61\ud835\udc56 \ud835\udc5f\ud835\udc56 LLM: Improve class \u2130\ud835\udc47\ud835\udc56 Build Testcases \ud835\udc47 1 \ud835\udc47 \ud835\udc41 \ud835\udc47 2 \ud835\udc53\ud835\udc4f\ud835\udc56 Test Failures Build Errors Repository Tools Tools LLM: Reflect Feedback Oracle Feedback: Improved class: public class StringNumberHandler \u2026{ protected String getCellValue(\u2026) \u2026 return ExcelNumberUtils.format( numericValue, javaFormat); } Initial class code Figure 1: Flowchart illustrating the procedural framework of RRR. RRR utilizes the natural language description of the class and the outputs of independent tools to create an initial attempt. This attempt is evaluated by an oracle that pinpoints specific errors. Subsequently, RRR uses repository tools to gather information to rectify errors. It then reflects on feedback and tool insights to refine the attempt. This iterative cycle persists until all test cases pass or the maximum allowed number of oracle calls is reached. 
used datasets for function-level code generation, consisting of 164 and 974 tasks, respectively. At the class level, ClassEval (Du et al., 2023) has been proposed with 100 class-generation problems, where the input is the class skeleton. However, these are all independent code-generation problems. Although ClassEval includes inter-method dependencies, they are all present within the same class. The external references come from well-known libraries that the LLM is likely to have memorized. In real-world repositories, code includes complex inter-dependencies from other files in the repository. RepoBench (Liu et al., 2023), CoderEval (Zhang et al., 2024) and MGD (Agrawal et al., 2023) are attempts to move closer to this setting, and show that existing models perform much better on the standalone setting than on the non-standalone setting. However, they explore line- and function-level tasks in the context of a repository, whereas RepoClassBench explores the generation of non-standalone classes within the context of a repository. There are two aspects to solving our dataset: retrieving the right context, and reasoning to generate the code. Reasoning: To improve the generation of LLMs, various iterative refinement techniques have been proposed. Self-Refine (Madaan et al., 2023) attempts to use the LLM as its own critic and produces successively better outputs. Reflexion (Shinn et al., 2023) incorporates test-case feedback while generating the reflection on its output. LATS (Zhou et al., 2023) uses the LLM as an agent to explore a tree of solutions, using compiler and test feedback as observations. Retrieval: While reasoning-enhanced methods, in themselves, may be useful for standalone generations, they are not sufficient when external context is needed. This is especially true when the context consists of private data, unseen during pretraining. Under this paradigm, Retrieval-Augmented Generation methods like REALM (Guu et al., 2020), ATLAS (Izacard et al., 2022), RetGen (Zhang et al., 2021), and FLARE (Jiang et al., 2023) retrieve relevant context, usually by considering snippets with the highest similarity score with the query. Similarly, in the code setting, RLPG (Shrivastava et al., 2023) trains a model to predict the relevant context source, but relies on there being a \u201chole\u201d in the code, whereas there is no such hole in the NL-to-new-class setting. Additionally, the RLPG model was trained for Java, whereas for other languages new models would need to be trained. This adds the additional cost of constructing new training data and the actual training of new models. RepoCoder (Zhang et al., 2023) has been proposed to perform iterative retrieval and generation. While such similarity-based RAG methods can retrieve \u201csimilar\u201d context, they fail to effectively retrieve \u201cdependency\u201d context. Further discussion can be found in RQ2. [Figure 2: The dataset creation pipeline involved shortlisting candidate repositories, noting passing test cases, finding classes covered by passing test cases (which make external references) and finally mitigating memorization issues if necessary, using paraphrasing.] In our method, we leverage repository-level tools to allow the LLM to explore the repository, as an alternative retrieval mechanism, in addition to using test-case feedback. This is along the lines of several works that have explored equipping the LLM with tools, like ReAct (Yao et al., 2023) and Toolformer (Schick et al., 2023). 
However, to our knowledge, this is the first work that curates tools specifically for repository exploration. Hence, we propose a benchmark that addresses the problem of class generation in the context of a repository, addressing a gap in the coverage of existing benchmarks, and also propose a novel method that integrates retrieval and reasoning, mitigating the shortcomings of existing methods.",
+ "pre_questions": [],
+ "main_content": "Introduction Using Large Language Models (LLMs) to generate code has garnered significant attention in recent years for its potential to streamline software development processes by automatically translating natural language descriptions into executable code snippets. Several code-specific models, like CodeGen (Nijkamp et al., 2023), WizardCoder (Luo et al., 2023), CodeLlama (Rozi` ere et al., 2024), StarCoder (Li et al., 2023), DeepSeekCoder (Guo et al., 2024) have been proposed to this end. While much of the focus in this domain has been on generating code units such as functions or statements, the specific task of generating classes has received comparatively less attention. Two of the most popular benchmarks HumanEval (Chen et al., 2021) and MBPP (Odena et al., 2021), for instance, focus on function generation. While useful, the problems in these datasets are short and standalone, and existing works have been able to show good 1 arXiv:2405.01573v1 [cs.SE] 22 Apr 2024 performance on these benchmarks. LATS (Zhou et al., 2023) for instance reports a 94.4% accuracy on HumanEval, and 81.1% accuracy on MBPP. To address both of these issues, ClassEval (Du et al., 2023) proposes a benchmark for class generation. The 100 classes in the ClassEval dataset were handcrafted such that they contain inter-method dependencies, i.e. a method could reference another method in the same class. Using this dataset, they showed that, LLMs have a harder time generating code with these kind of dependencies than standalone functions of the kind present in HumanEval or MBPP. While an important contribution, the problems proposed in ClassEval are still standalone when taking the class as a single unit. The only dependencies from outside the class are from well known libraries that the LLM is likely to have memorized. This narrow focus overlooks the complex dependencies that classes may have on other components within a codebase, presenting a gap in our understanding of code generation techniques\u2019 practical applicability. A much more useful problem is to consider the generation of a new class that depends on code from across a repository. To address this gap, we make an attempt at creating a dataset to explore the task of generating classes within the context of code repositories, where classes may interact with other code entities within a larger codebase. Specifically, we collect 130 Java classes from 10 repositories and 97 Python classes from 10 repositories to create RepoClassBench. Each class is present in the context of a real-world repository and has dependencies from the repository. Additionally, we make sure that each class has corresponding test cases that pass on the ground truth, and ensure sufficient coverage. To be able to solve the problems in this dataset, the model has to both, understand the functionality required from each method in the class and reason about how to use repositorydependencies to achieve the same. We provide an evaluation of existing code-generation techniques in this setting, and demonstrate their poor performance. Specifically, BASICPROMPTING either hallucinates identifiers or avoids the dependencies, REFLEXION is able to reason about the error, but does not have enough context to fix it, and RAG-based approaches are able to find similar snippets from across the repo but fail to bring in other kinds of dependencies that are required by the class. 
Taking a step forward, we address the shortcomings of these methods by proposing a novel method called RRR and show significant gains. Specifically, RRR leverages existing programming language tools to retrieve precise information from across the repository. With the injection of pointed repository context through these tools, the model is able to fix the error observed during the feedback-reflection stage. By bridging these gaps, our study seeks to contribute to a deeper understanding of LLMs\u2019 potential in generating classes within real-world coding scenarios, with implications for the development of more effective code generation techniques in the future. Our contributions are three-fold: \u2022 We contribute the first benchmark, RepoClassBench, for class-level code generation in the realistic environment of an existing repository, with 130 Java classes spanning 10 repositories and 97 Python classes spanning 10 repositories. \u2022 We propose a novel method called RRR that equips LLMs with static analysis tools to iteratively navigate and reason about repository-level context in an agent-based framework, and provide a comparison with existing methods. \u2022 We contribute 6 repository tools, based on our observations of common errors experienced by code agents in this setting. RepoClassBench is a benchmark featuring repositories from GitHub across two languages: Java and Python. The task is to synthesize a complete class within a repository based on a natural language description, utilizing the context from other files within the same repository. Current benchmarks face two primary limitations: (1) they (Du et al., 2023) typically focus on generating small localized code snippets, which do not accurately represent the complex tasks software engineers encounter, often requiring a comprehensive understanding of the entire codebase; (2) they (Liu et al., 2023) rely on metrics such as exact-match or cosine-similarity to the ground truth for evaluation, rather than assessing the functionality of the generated code through test cases. We mitigate these issues by designing a benchmark where every task corresponds to a class-generation problem in which the LLM needs to synthesize the class based on the natural language specification of the class. We ensure that every class in our benchmark makes use of external references in the repository and is covered under test cases. 3.1 Benchmark Construction Stage 1 Shortlisting repositories: Our benchmark includes repositories both before and after the cutoff date of the models we evaluate on. For Java, we start with repositories considered in the MGD (Agrawal et al., 2023) dataset. For Python, we adapt the popular benchmark SWEBench (Jimenez et al., 2024) and also shortlist popular repositories which were first created on GitHub after Sept 2021. We filter out those repositories which we are unable to build and run. (Details in E.1.1) Stage 2 Shortlisting classes: Within each repository, we identify all classes that pass the existing test cases. We retain only those classes that (a) reference other parts of the repository within their body, and (b) have methods covered by test cases. To accommodate the context length limitations of large language models (LLMs), we exclude classes whose implementations exceed 3,000 tokens (excluding docstrings). Additionally, we limit our selection to classes defined in the global namespace. 
(Details in E.1.2) Stage 3 Dataset paraphrasing: For repositories available before the LLMs\u2019 training data cutoff, we undertake a paraphrasing initiative, altering the names of most symbols to prevent models from completing tasks through mere memorization. (Details in E.1.3) Stage 4 Generating natural language specification: We break the information within each class into varying levels of granularity and record it as metadata. The complete metadata fields are listed in Table E.1.3. Methods are categorized by three information levels: (1) Signature, detailing input and output types; (2) Docstring, providing a high-level function description; (3) Body, outlining full implementation and logic, including external references. We prompt GPT-4 to generate the natural language description of the class by providing it varying granularity of information extracted as a subset of the metadata (refer to Table E.1.3). Hence, the two types of natural language description in our dataset are: 1. DETAILED: This includes details from the entire class body (excluding imports) and prompts GPT-4 to create an NL description. 2. SKETCHY: This omits method bodies from the prompt, leading GPT-4 to generate an NL description without low-level implementation specifics or explicit external references. In the SKETCHY setting, since GPT-4 does not receive the method bodies, the resulting natural language (NL) descriptions lack detailed implementation specifics and explicit mentions of the external references used during the method\u2019s development. Consequently, the SKETCHY NL descriptions present a higher level of difficulty compared to the DETAILED versions. To foster community engagement and further research, we make the metadata used for constructing these prompts publicly available. This allows others to create NL descriptions with varying degrees of specificity and ambiguity to challenge the models\u2019 capabilities. An example of the difference in prompts to GPT-4 for them can be found in Prompt 1. Some statistics about our dataset can be found in Table 1. The distribution of tasks across different repositories can be found in Figure 3 and Figure 4. [Table 1: Dataset high-level statistics; each row is an average over all the tasks in the dataset, and cells with a '/' report two summary statistics. TC = Test Cases, funcs = functions, Ext. Refs = External References. Java | Python: Num. of tasks: 130 | 97; Length of DETAILED NL description: 1475.98 / 286.89 | 3245.23 / 771.77; Length of SKETCHY NL description: 1481.69 / 269.81 | 2633.20 / 607.64; Length of classes: 2080 / 452.69 | 4663.76 / 1070.49; Num. of TCs directly covering the classes: 5.48 | 42.94; Num. of unique Ext. Refs: 3.51 | 7.06; Num. of funcs in the class: 3.1 | 9.29; Num. of funcs covered in at least one TC: 2.85 | 4.84; Num. of funcs making at least one Ext. Ref: 2.28 | 4.84.]"
+ },
+ {
+ "url": "http://arxiv.org/abs/2303.17651v2",
+ "title": "Self-Refine: Iterative Refinement with Self-Feedback",
+ "abstract": "Like humans, large language models (LLMs) do not always generate the best\noutput on their first try. Motivated by how humans refine their written text,\nwe introduce Self-Refine, an approach for improving initial outputs from LLMs\nthrough iterative feedback and refinement. The main idea is to generate an\ninitial output using an LLMs; then, the same LLMs provides feedback for its\noutput and uses it to refine itself, iteratively. Self-Refine does not require\nany supervised training data, additional training, or reinforcement learning,\nand instead uses a single LLM as the generator, refiner, and feedback provider.\nWe evaluate Self-Refine across 7 diverse tasks, ranging from dialog response\ngeneration to mathematical reasoning, using state-of-the-art (GPT-3.5, ChatGPT,\nand GPT-4) LLMs. Across all evaluated tasks, outputs generated with Self-Refine\nare preferred by humans and automatic metrics over those generated with the\nsame LLM using conventional one-step generation, improving by ~20% absolute on\naverage in task performance. Our work demonstrates that even state-of-the-art\nLLMs like GPT-4 can be further improved at test time using our simple,\nstandalone approach.",
+ "authors": "Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean Welleck, Amir Yazdanbakhsh, Peter Clark",
+ "published": "2023-03-30",
+ "updated": "2023-05-25",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.LG"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2308.01861v2",
+ "title": "ClassEval: A Manually-Crafted Benchmark for Evaluating LLMs on Class-level Code Generation",
+ "abstract": "In this work, we make the first attempt to evaluate LLMs in a more\nchallenging code generation scenario, i.e. class-level code generation. We\nfirst manually construct the first class-level code generation benchmark\nClassEval of 100 class-level Python code generation tasks with approximately\n500 person-hours. Based on it, we then perform the first study of 11\nstate-of-the-art LLMs on class-level code generation. Based on our results, we\nhave the following main findings. First, we find that all existing LLMs show\nmuch worse performance on class-level code generation compared to on standalone\nmethod-level code generation benchmarks like HumanEval; and the method-level\ncoding ability cannot equivalently reflect the class-level coding ability among\nLLMs. Second, we find that GPT-4 and GPT-3.5 still exhibit dominate superior\nthan other LLMs on class-level code generation, and the second-tier models\nincludes Instruct-Starcoder, Instruct-Codegen, and Wizardcoder with very\nsimilar performance. Third, we find that generating the entire class all at\nonce (i.e. holistic generation strategy) is the best generation strategy only\nfor GPT-4 and GPT-3.5, while method-by-method generation (i.e. incremental and\ncompositional) is better strategies for the other models with limited ability\nof understanding long instructions and utilizing the middle information.\nLastly, we find the limited model ability of generating method-dependent code\nand discuss the frequent error types in generated classes. Our benchmark is\navailable at https://github.com/FudanSELab/ClassEval.",
+ "authors": "Xueying Du, Mingwei Liu, Kaixin Wang, Hanlin Wang, Junwei Liu, Yixuan Chen, Jiayi Feng, Chaofeng Sha, Xin Peng, Yiling Lou",
+ "published": "2023-08-03",
+ "updated": "2023-08-14",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2206.12839v3",
+ "title": "Repository-Level Prompt Generation for Large Language Models of Code",
+ "abstract": "With the success of large language models (LLMs) of code and their use as\ncode assistants (e.g. Codex used in GitHub Copilot), techniques for introducing\ndomain-specific knowledge in the prompt design process become important. In\nthis work, we propose a framework called Repo-Level Prompt Generator that\nlearns to generate example-specific prompts using prompt proposals. The prompt\nproposals take context from the entire repository, thereby incorporating both\nthe structure of the repository and the context from other relevant files (e.g.\nimports, parent class files). Our technique doesn't require any access to the\nweights of the LLM, making it applicable in cases where we only have black-box\naccess to the LLM. We conduct experiments on the task of single-line\ncode-autocompletion using code repositories taken from Google Code archives. We\ndemonstrate that an oracle constructed from our prompt proposals gives a\nremarkably high relative improvement of 36% over Codex, showing the quality of\nthese proposals. Further, we show that when we train a model to predict a\nprompt proposal, we can achieve significant performance gains over Codex and\nother baselines. We release our code, data, and trained checkpoints at:\n\\url{https://github.com/shrivastavadisha/repo_level_prompt_generation}.",
+ "authors": "Disha Shrivastava, Hugo Larochelle, Daniel Tarlow",
+ "published": "2022-06-26",
+ "updated": "2023-06-05",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.AI",
+ "cs.PL",
+ "cs.SE"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2303.11366v4",
+ "title": "Reflexion: Language Agents with Verbal Reinforcement Learning",
+ "abstract": "Large language models (LLMs) have been increasingly used to interact with\nexternal environments (e.g., games, compilers, APIs) as goal-driven agents.\nHowever, it remains challenging for these language agents to quickly and\nefficiently learn from trial-and-error as traditional reinforcement learning\nmethods require extensive training samples and expensive model fine-tuning. We\npropose Reflexion, a novel framework to reinforce language agents not by\nupdating weights, but instead through linguistic feedback. Concretely,\nReflexion agents verbally reflect on task feedback signals, then maintain their\nown reflective text in an episodic memory buffer to induce better\ndecision-making in subsequent trials. Reflexion is flexible enough to\nincorporate various types (scalar values or free-form language) and sources\n(external or internally simulated) of feedback signals, and obtains significant\nimprovements over a baseline agent across diverse tasks (sequential\ndecision-making, coding, language reasoning). For example, Reflexion achieves a\n91% pass@1 accuracy on the HumanEval coding benchmark, surpassing the previous\nstate-of-the-art GPT-4 that achieves 80%. We also conduct ablation and analysis\nstudies using different feedback signals, feedback incorporation methods, and\nagent types, and provide insights into how they affect performance.",
+ "authors": "Noah Shinn, Federico Cassano, Edward Berman, Ashwin Gopinath, Karthik Narasimhan, Shunyu Yao",
+ "published": "2023-03-20",
+ "updated": "2023-10-10",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI",
+ "cs.CL",
+ "cs.LG"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2302.04761v1",
+ "title": "Toolformer: Language Models Can Teach Themselves to Use Tools",
+ "abstract": "Language models (LMs) exhibit remarkable abilities to solve new tasks from\njust a few examples or textual instructions, especially at scale. They also,\nparadoxically, struggle with basic functionality, such as arithmetic or factual\nlookup, where much simpler and smaller models excel. In this paper, we show\nthat LMs can teach themselves to use external tools via simple APIs and achieve\nthe best of both worlds. We introduce Toolformer, a model trained to decide\nwhich APIs to call, when to call them, what arguments to pass, and how to best\nincorporate the results into future token prediction. This is done in a\nself-supervised way, requiring nothing more than a handful of demonstrations\nfor each API. We incorporate a range of tools, including a calculator, a Q\\&A\nsystem, two different search engines, a translation system, and a calendar.\nToolformer achieves substantially improved zero-shot performance across a\nvariety of downstream tasks, often competitive with much larger models, without\nsacrificing its core language modeling abilities.",
+ "authors": "Timo Schick, Jane Dwivedi-Yu, Roberto Dess\u00ec, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, Thomas Scialom",
+ "published": "2023-02-09",
+ "updated": "2023-02-09",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2306.03091v2",
+ "title": "RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems",
+ "abstract": "Large Language Models (LLMs) have greatly advanced code auto-completion\nsystems, with a potential for substantial productivity enhancements for\ndevelopers. However, current benchmarks mainly focus on single-file tasks,\nleaving an assessment gap for more complex, real-world, multi-file programming\nscenarios. To fill this gap, we introduce RepoBench, a new benchmark\nspecifically designed for evaluating repository-level code auto-completion\nsystems. RepoBench supports both Python and Java and consists of three\ninterconnected evaluation tasks: RepoBench-R (Retrieval), RepoBench-C (Code\nCompletion), and RepoBench-P (Pipeline). Each task respectively measures the\nsystem's ability to retrieve the most relevant code snippets from other files\nas cross-file context, predict the next line of code with cross-file and\nin-file context, and handle complex tasks that require a combination of both\nretrieval and next-line prediction. RepoBench aims to facilitate a more\ncomplete comparison of performance and encouraging continuous improvement in\nauto-completion systems. RepoBench is publicly available at\nhttps://github.com/Leolty/repobench.",
+ "authors": "Tianyang Liu, Canwen Xu, Julian McAuley",
+ "published": "2023-06-05",
+ "updated": "2023-10-04",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.SE"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2002.08909v1",
+ "title": "REALM: Retrieval-Augmented Language Model Pre-Training",
+ "abstract": "Language model pre-training has been shown to capture a surprising amount of\nworld knowledge, crucial for NLP tasks such as question answering. However,\nthis knowledge is stored implicitly in the parameters of a neural network,\nrequiring ever-larger networks to cover more facts.\n To capture knowledge in a more modular and interpretable way, we augment\nlanguage model pre-training with a latent knowledge retriever, which allows the\nmodel to retrieve and attend over documents from a large corpus such as\nWikipedia, used during pre-training, fine-tuning and inference. For the first\ntime, we show how to pre-train such a knowledge retriever in an unsupervised\nmanner, using masked language modeling as the learning signal and\nbackpropagating through a retrieval step that considers millions of documents.\n We demonstrate the effectiveness of Retrieval-Augmented Language Model\npre-training (REALM) by fine-tuning on the challenging task of Open-domain\nQuestion Answering (Open-QA). We compare against state-of-the-art models for\nboth explicit and implicit knowledge storage on three popular Open-QA\nbenchmarks, and find that we outperform all previous methods by a significant\nmargin (4-16% absolute accuracy), while also providing qualitative benefits\nsuch as interpretability and modularity.",
+ "authors": "Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, Ming-Wei Chang",
+ "published": "2020-02-10",
+ "updated": "2020-02-10",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.LG"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.02049v1",
+ "title": "Post Turing: Mapping the landscape of LLM Evaluation",
+ "abstract": "In the rapidly evolving landscape of Large Language Models (LLMs),\nintroduction of well-defined and standardized evaluation methodologies remains\na crucial challenge. This paper traces the historical trajectory of LLM\nevaluations, from the foundational questions posed by Alan Turing to the modern\nera of AI research. We categorize the evolution of LLMs into distinct periods,\neach characterized by its unique benchmarks and evaluation criteria. As LLMs\nincreasingly mimic human-like behaviors, traditional evaluation proxies, such\nas the Turing test, have become less reliable. We emphasize the pressing need\nfor a unified evaluation system, given the broader societal implications of\nthese models. Through an analysis of common evaluation methodologies, we\nadvocate for a qualitative shift in assessment approaches, underscoring the\nimportance of standardization and objective criteria. This work serves as a\ncall for the AI community to collaboratively address the challenges of LLM\nevaluation, ensuring their reliability, fairness, and societal benefit.",
+ "authors": "Alexey Tikhonov, Ivan P. Yamshchikov",
+ "published": "2023-11-03",
+ "updated": "2023-11-03",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "68T50",
+ "I.2.7"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.01349v1",
+ "title": "Fairness in Large Language Models: A Taxonomic Survey",
+ "abstract": "Large Language Models (LLMs) have demonstrated remarkable success across\nvarious domains. However, despite their promising performance in numerous\nreal-world applications, most of these algorithms lack fairness considerations.\nConsequently, they may lead to discriminatory outcomes against certain\ncommunities, particularly marginalized populations, prompting extensive study\nin fair LLMs. On the other hand, fairness in LLMs, in contrast to fairness in\ntraditional machine learning, entails exclusive backgrounds, taxonomies, and\nfulfillment techniques. To this end, this survey presents a comprehensive\noverview of recent advances in the existing literature concerning fair LLMs.\nSpecifically, a brief introduction to LLMs is provided, followed by an analysis\nof factors contributing to bias in LLMs. Additionally, the concept of fairness\nin LLMs is discussed categorically, summarizing metrics for evaluating bias in\nLLMs and existing algorithms for promoting fairness. Furthermore, resources for\nevaluating bias in LLMs, including toolkits and datasets, are summarized.\nFinally, existing research challenges and open questions are discussed.",
+ "authors": "Zhibo Chu, Zichong Wang, Wenbin Zhang",
+ "published": "2024-03-31",
+ "updated": "2024-03-31",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.14607v2",
+ "title": "Confronting LLMs with Traditional ML: Rethinking the Fairness of Large Language Models in Tabular Classifications",
+ "abstract": "Recent literature has suggested the potential of using large language models\n(LLMs) to make classifications for tabular tasks. However, LLMs have been shown\nto exhibit harmful social biases that reflect the stereotypes and inequalities\npresent in society. To this end, as well as the widespread use of tabular data\nin many high-stake applications, it is important to explore the following\nquestions: what sources of information do LLMs draw upon when making\nclassifications for tabular tasks; whether and to what extent are LLM\nclassifications for tabular data influenced by social biases and stereotypes;\nand what are the consequential implications for fairness?\n Through a series of experiments, we delve into these questions and show that\nLLMs tend to inherit social biases from their training data which significantly\nimpact their fairness in tabular classification tasks. Furthermore, our\ninvestigations show that in the context of bias mitigation, though in-context\nlearning and finetuning have a moderate effect, the fairness metric gap between\ndifferent subgroups is still larger than that in traditional machine learning\nmodels, such as Random Forest and shallow Neural Networks. This observation\nemphasizes that the social biases are inherent within the LLMs themselves and\ninherited from their pretraining corpus, not only from the downstream task\ndatasets. Besides, we demonstrate that label-flipping of in-context examples\ncan significantly reduce biases, further highlighting the presence of inherent\nbias within LLMs.",
+ "authors": "Yanchen Liu, Srishti Gautam, Jiaqi Ma, Himabindu Lakkaraju",
+ "published": "2023-10-23",
+ "updated": "2024-04-02",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.06852v2",
+ "title": "ChemLLM: A Chemical Large Language Model",
+ "abstract": "Large language models (LLMs) have made impressive progress in chemistry\napplications. However, the community lacks an LLM specifically designed for\nchemistry. The main challenges are two-fold: firstly, most chemical data and\nscientific knowledge are stored in structured databases, which limits the\nmodel's ability to sustain coherent dialogue when used directly. Secondly,\nthere is an absence of objective and fair benchmark that encompass most\nchemistry tasks. Here, we introduce ChemLLM, a comprehensive framework that\nfeatures the first LLM dedicated to chemistry. It also includes ChemData, a\ndataset specifically designed for instruction tuning, and ChemBench, a robust\nbenchmark covering nine essential chemistry tasks. ChemLLM is adept at\nperforming various tasks across chemical disciplines with fluid dialogue\ninteraction. Notably, ChemLLM achieves results comparable to GPT-4 on the core\nchemical tasks and demonstrates competitive performance with LLMs of similar\nsize in general scenarios. ChemLLM paves a new path for exploration in chemical\nstudies, and our method of incorporating structured chemical knowledge into\ndialogue systems sets a new standard for developing LLMs in various scientific\nfields. Codes, Datasets, and Model weights are publicly accessible at\nhttps://hf.co/AI4Chem",
+ "authors": "Di Zhang, Wei Liu, Qian Tan, Jingdan Chen, Hang Yan, Yuliang Yan, Jiatong Li, Weiran Huang, Xiangyu Yue, Wanli Ouyang, Dongzhan Zhou, Shufei Zhang, Mao Su, Han-Sen Zhong, Yuqiang Li",
+ "published": "2024-02-10",
+ "updated": "2024-04-25",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI",
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.19465v1",
+ "title": "Towards Tracing Trustworthiness Dynamics: Revisiting Pre-training Period of Large Language Models",
+ "abstract": "Ensuring the trustworthiness of large language models (LLMs) is crucial. Most\nstudies concentrate on fully pre-trained LLMs to better understand and improve\nLLMs' trustworthiness. In this paper, to reveal the untapped potential of\npre-training, we pioneer the exploration of LLMs' trustworthiness during this\nperiod, focusing on five key dimensions: reliability, privacy, toxicity,\nfairness, and robustness. To begin with, we apply linear probing to LLMs. The\nhigh probing accuracy suggests that \\textit{LLMs in early pre-training can\nalready distinguish concepts in each trustworthiness dimension}. Therefore, to\nfurther uncover the hidden possibilities of pre-training, we extract steering\nvectors from a LLM's pre-training checkpoints to enhance the LLM's\ntrustworthiness. Finally, inspired by~\\citet{choi2023understanding} that mutual\ninformation estimation is bounded by linear probing accuracy, we also probe\nLLMs with mutual information to investigate the dynamics of trustworthiness\nduring pre-training. We are the first to observe a similar two-phase\nphenomenon: fitting and compression~\\citep{shwartz2017opening}. This research\nprovides an initial exploration of trustworthiness modeling during LLM\npre-training, seeking to unveil new insights and spur further developments in\nthe field. We will make our code publicly accessible at\n\\url{https://github.com/ChnQ/TracingLLM}.",
+ "authors": "Chen Qian, Jie Zhang, Wei Yao, Dongrui Liu, Zhenfei Yin, Yu Qiao, Yong Liu, Jing Shao",
+ "published": "2024-02-29",
+ "updated": "2024-02-29",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.07609v3",
+ "title": "Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation",
+ "abstract": "The remarkable achievements of Large Language Models (LLMs) have led to the\nemergence of a novel recommendation paradigm -- Recommendation via LLM\n(RecLLM). Nevertheless, it is important to note that LLMs may contain social\nprejudices, and therefore, the fairness of recommendations made by RecLLM\nrequires further investigation. To avoid the potential risks of RecLLM, it is\nimperative to evaluate the fairness of RecLLM with respect to various sensitive\nattributes on the user side. Due to the differences between the RecLLM paradigm\nand the traditional recommendation paradigm, it is problematic to directly use\nthe fairness benchmark of traditional recommendation. To address the dilemma,\nwe propose a novel benchmark called Fairness of Recommendation via LLM\n(FaiRLLM). This benchmark comprises carefully crafted metrics and a dataset\nthat accounts for eight sensitive attributes1 in two recommendation scenarios:\nmusic and movies. By utilizing our FaiRLLM benchmark, we conducted an\nevaluation of ChatGPT and discovered that it still exhibits unfairness to some\nsensitive attributes when generating recommendations. Our code and dataset can\nbe found at https://github.com/jizhi-zhang/FaiRLLM.",
+ "authors": "Jizhi Zhang, Keqin Bao, Yang Zhang, Wenjie Wang, Fuli Feng, Xiangnan He",
+ "published": "2023-05-12",
+ "updated": "2023-10-17",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR",
+ "cs.CL",
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.06899v4",
+ "title": "Flames: Benchmarking Value Alignment of LLMs in Chinese",
+ "abstract": "The widespread adoption of large language models (LLMs) across various\nregions underscores the urgent need to evaluate their alignment with human\nvalues. Current benchmarks, however, fall short of effectively uncovering\nsafety vulnerabilities in LLMs. Despite numerous models achieving high scores\nand 'topping the chart' in these evaluations, there is still a significant gap\nin LLMs' deeper alignment with human values and achieving genuine harmlessness.\nTo this end, this paper proposes a value alignment benchmark named Flames,\nwhich encompasses both common harmlessness principles and a unique morality\ndimension that integrates specific Chinese values such as harmony. Accordingly,\nwe carefully design adversarial prompts that incorporate complex scenarios and\njailbreaking methods, mostly with implicit malice. By prompting 17 mainstream\nLLMs, we obtain model responses and rigorously annotate them for detailed\nevaluation. Our findings indicate that all the evaluated LLMs demonstrate\nrelatively poor performance on Flames, particularly in the safety and fairness\ndimensions. We also develop a lightweight specified scorer capable of scoring\nLLMs across multiple dimensions to efficiently evaluate new models on the\nbenchmark. The complexity of Flames has far exceeded existing benchmarks,\nsetting a new challenge for contemporary LLMs and highlighting the need for\nfurther alignment of LLMs. Our benchmark is publicly available at\nhttps://github.com/AIFlames/Flames.",
+ "authors": "Kexin Huang, Xiangyang Liu, Qianyu Guo, Tianxiang Sun, Jiawei Sun, Yaru Wang, Zeyang Zhou, Yixu Wang, Yan Teng, Xipeng Qiu, Yingchun Wang, Dahua Lin",
+ "published": "2023-11-12",
+ "updated": "2024-04-15",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.18140v1",
+ "title": "ROBBIE: Robust Bias Evaluation of Large Generative Language Models",
+ "abstract": "As generative large language models (LLMs) grow more performant and\nprevalent, we must develop comprehensive enough tools to measure and improve\ntheir fairness. Different prompt-based datasets can be used to measure social\nbias across multiple text domains and demographic axes, meaning that testing\nLLMs on more datasets can potentially help us characterize their biases more\nfully, and better ensure equal and equitable treatment of marginalized\ndemographic groups. In this work, our focus is two-fold:\n (1) Benchmarking: a comparison of 6 different prompt-based bias and toxicity\nmetrics across 12 demographic axes and 5 families of generative LLMs. Out of\nthose 6 metrics, AdvPromptSet and HolisticBiasR are novel datasets proposed in\nthe paper. The comparison of those benchmarks gives us insights about the bias\nand toxicity of the compared models. Therefore, we explore the frequency of\ndemographic terms in common LLM pre-training corpora and how this may relate to\nmodel biases.\n (2) Mitigation: we conduct a comprehensive study of how well 3 bias/toxicity\nmitigation techniques perform across our suite of measurements. ROBBIE aims to\nprovide insights for practitioners while deploying a model, emphasizing the\nneed to not only measure potential harms, but also understand how they arise by\ncharacterizing the data, mitigate harms once found, and balance any trade-offs.\nWe open-source our analysis code in hopes of encouraging broader measurements\nof bias in future LLMs.",
+ "authors": "David Esiobu, Xiaoqing Tan, Saghar Hosseini, Megan Ung, Yuchen Zhang, Jude Fernandes, Jane Dwivedi-Yu, Eleonora Presani, Adina Williams, Eric Michael Smith",
+ "published": "2023-11-29",
+ "updated": "2023-11-29",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2312.07420v1",
+ "title": "FairSISA: Ensemble Post-Processing to Improve Fairness of Unlearning in LLMs",
+ "abstract": "Training large language models (LLMs) is a costly endeavour in terms of time\nand computational resources. The large amount of training data used during the\nunsupervised pre-training phase makes it difficult to verify all data and,\nunfortunately, undesirable data may be ingested during training. Re-training\nfrom scratch is impractical and has led to the creation of the 'unlearning'\ndiscipline where models are modified to \"unlearn\" undesirable information\nwithout retraining. However, any modification can alter the behaviour of LLMs,\nespecially on key dimensions such as fairness. This is the first work that\nexamines this interplay between unlearning and fairness for LLMs. In\nparticular, we focus on a popular unlearning framework known as SISA [Bourtoule\net al., 2021], which creates an ensemble of models trained on disjoint shards.\nWe evaluate the performance-fairness trade-off for SISA, and empirically\ndemsontrate that SISA can indeed reduce fairness in LLMs. To remedy this, we\npropose post-processing bias mitigation techniques for ensemble models produced\nby SISA. We adapt the post-processing fairness improvement technique from\n[Hardt et al., 2016] to design three methods that can handle model ensembles,\nand prove that one of the methods is an optimal fair predictor for ensemble of\nmodels. Through experimental results, we demonstrate the efficacy of our\npost-processing framework called 'FairSISA'.",
+ "authors": "Swanand Ravindra Kadhe, Anisa Halimi, Ambrish Rawat, Nathalie Baracaldo",
+ "published": "2023-12-12",
+ "updated": "2023-12-12",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.00884v2",
+ "title": "Text classification of column headers with a controlled vocabulary: leveraging LLMs for metadata enrichment",
+ "abstract": "Traditional dataset retrieval systems index on metadata information rather\nthan on the data values. Thus relying primarily on manual annotations and\nhigh-quality metadata, processes known to be labour-intensive and challenging\nto automate. We propose a method to support metadata enrichment with topic\nannotations of column headers using three Large Language Models (LLMs):\nChatGPT-3.5, GoogleBard and GoogleGemini. We investigate the LLMs ability to\nclassify column headers based on domain-specific topics from a controlled\nvocabulary. We evaluate our approach by assessing the internal consistency of\nthe LLMs, the inter-machine alignment, and the human-machine agreement for the\ntopic classification task. Additionally, we investigate the impact of\ncontextual information (i.e. dataset description) on the classification\noutcomes. Our results suggest that ChatGPT and GoogleGemini outperform\nGoogleBard for internal consistency as well as LLM-human-alignment.\nInterestingly, we found that context had no impact on the LLMs performances.\nThis work proposes a novel approach that leverages LLMs for text classification\nusing a controlled topic vocabulary, which has the potential to facilitate\nautomated metadata enrichment, thereby enhancing dataset retrieval and the\nFindability, Accessibility, Interoperability and Reusability (FAIR) of research\ndata on the Web.",
+ "authors": "Margherita Martorana, Tobias Kuhn, Lise Stork, Jacco van Ossenbruggen",
+ "published": "2024-03-01",
+ "updated": "2024-03-05",
+ "primary_cat": "cs.DB",
+ "cats": [
+ "cs.DB",
+ "cs.AI",
+ "cs.IR"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.08656v1",
+ "title": "Linear Cross-document Event Coreference Resolution with X-AMR",
+ "abstract": "Event Coreference Resolution (ECR) as a pairwise mention classification task\nis expensive both for automated systems and manual annotations. The task's\nquadratic difficulty is exacerbated when using Large Language Models (LLMs),\nmaking prompt engineering for ECR prohibitively costly. In this work, we\npropose a graphical representation of events, X-AMR, anchored around individual\nmentions using a \\textbf{cross}-document version of \\textbf{A}bstract\n\\textbf{M}eaning \\textbf{R}epresentation. We then linearize the ECR with a\nnovel multi-hop coreference algorithm over the event graphs. The event graphs\nsimplify ECR, making it a) LLM cost-effective, b) compositional and\ninterpretable, and c) easily annotated. For a fair assessment, we first enrich\nan existing ECR benchmark dataset with these event graphs using an\nannotator-friendly tool we introduce. Then, we employ GPT-4, the newest LLM by\nOpenAI, for these annotations. Finally, using the ECR algorithm, we assess\nGPT-4 against humans and analyze its limitations. Through this research, we aim\nto advance the state-of-the-art for efficient ECR and shed light on the\npotential shortcomings of current LLMs at this task. Code and annotations:\n\\url{https://github.com/ahmeshaf/gpt_coref}",
+ "authors": "Shafiuddin Rehan Ahmed, George Arthur Baker, Evi Judge, Michael Regan, Kristin Wright-Bettner, Martha Palmer, James H. Martin",
+ "published": "2024-03-25",
+ "updated": "2024-03-25",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2309.08836v2",
+ "title": "Bias and Fairness in Chatbots: An Overview",
+ "abstract": "Chatbots have been studied for more than half a century. With the rapid\ndevelopment of natural language processing (NLP) technologies in recent years,\nchatbots using large language models (LLMs) have received much attention\nnowadays. Compared with traditional ones, modern chatbots are more powerful and\nhave been used in real-world applications. There are however, bias and fairness\nconcerns in modern chatbot design. Due to the huge amounts of training data,\nextremely large model sizes, and lack of interpretability, bias mitigation and\nfairness preservation of modern chatbots are challenging. Thus, a comprehensive\noverview on bias and fairness in chatbot systems is given in this paper. The\nhistory of chatbots and their categories are first reviewed. Then, bias sources\nand potential harms in applications are analyzed. Considerations in designing\nfair and unbiased chatbot systems are examined. Finally, future research\ndirections are discussed.",
+ "authors": "Jintang Xue, Yun-Cheng Wang, Chengwei Wei, Xiaofeng Liu, Jonghye Woo, C. -C. Jay Kuo",
+ "published": "2023-09-16",
+ "updated": "2023-12-10",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.10567v3",
+ "title": "InSaAF: Incorporating Safety through Accuracy and Fairness | Are LLMs ready for the Indian Legal Domain?",
+ "abstract": "Recent advancements in language technology and Artificial Intelligence have\nresulted in numerous Language Models being proposed to perform various tasks in\nthe legal domain ranging from predicting judgments to generating summaries.\nDespite their immense potential, these models have been proven to learn and\nexhibit societal biases and make unfair predictions. In this study, we explore\nthe ability of Large Language Models (LLMs) to perform legal tasks in the\nIndian landscape when social factors are involved. We present a novel metric,\n$\\beta$-weighted $\\textit{Legal Safety Score ($LSS_{\\beta}$)}$, which\nencapsulates both the fairness and accuracy aspects of the LLM. We assess LLMs'\nsafety by considering its performance in the $\\textit{Binary Statutory\nReasoning}$ task and its fairness exhibition with respect to various axes of\ndisparities in the Indian society. Task performance and fairness scores of\nLLaMA and LLaMA--2 models indicate that the proposed $LSS_{\\beta}$ metric can\neffectively determine the readiness of a model for safe usage in the legal\nsector. We also propose finetuning pipelines, utilising specialised legal\ndatasets, as a potential method to mitigate bias and improve model safety. The\nfinetuning procedures on LLaMA and LLaMA--2 models increase the $LSS_{\\beta}$,\nimproving their usability in the Indian legal domain. Our code is publicly\nreleased.",
+ "authors": "Yogesh Tripathi, Raghav Donakanti, Sahil Girhepuje, Ishan Kavathekar, Bhaskara Hanuma Vedula, Gokul S Krishnan, Shreya Goyal, Anmol Goel, Balaraman Ravindran, Ponnurangam Kumaraguru",
+ "published": "2024-02-16",
+ "updated": "2024-02-21",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.09219v5",
+ "title": "\"Kelly is a Warm Person, Joseph is a Role Model\": Gender Biases in LLM-Generated Reference Letters",
+ "abstract": "Large Language Models (LLMs) have recently emerged as an effective tool to\nassist individuals in writing various types of content, including professional\ndocuments such as recommendation letters. Though bringing convenience, this\napplication also introduces unprecedented fairness concerns. Model-generated\nreference letters might be directly used by users in professional scenarios. If\nunderlying biases exist in these model-constructed letters, using them without\nscrutinization could lead to direct societal harms, such as sabotaging\napplication success rates for female applicants. In light of this pressing\nissue, it is imminent and necessary to comprehensively study fairness issues\nand associated harms in this real-world use case. In this paper, we critically\nexamine gender biases in LLM-generated reference letters. Drawing inspiration\nfrom social science findings, we design evaluation methods to manifest biases\nthrough 2 dimensions: (1) biases in language style and (2) biases in lexical\ncontent. We further investigate the extent of bias propagation by analyzing the\nhallucination bias of models, a term that we define to be bias exacerbation in\nmodel-hallucinated contents. Through benchmarking evaluation on 2 popular LLMs-\nChatGPT and Alpaca, we reveal significant gender biases in LLM-generated\nrecommendation letters. Our findings not only warn against using LLMs for this\napplication without scrutinization, but also illuminate the importance of\nthoroughly studying hidden biases and harms in LLM-generated professional\ndocuments.",
+ "authors": "Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, Nanyun Peng",
+ "published": "2023-10-13",
+ "updated": "2023-12-01",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2405.02219v1",
+ "title": "FairEvalLLM. A Comprehensive Framework for Benchmarking Fairness in Large Language Model Recommender Systems",
+ "abstract": "This paper presents a framework for evaluating fairness in recommender\nsystems powered by Large Language Models (RecLLMs), addressing the need for a\nunified approach that spans various fairness dimensions including sensitivity\nto user attributes, intrinsic fairness, and discussions of fairness based on\nunderlying benefits. In addition, our framework introduces counterfactual\nevaluations and integrates diverse user group considerations to enhance the\ndiscourse on fairness evaluation for RecLLMs.\n Our key contributions include the development of a robust framework for\nfairness evaluation in LLM-based recommendations and a structured method to\ncreate \\textit{informative user profiles} from demographic data, historical\nuser preferences, and recent interactions. We argue that the latter is\nessential for enhancing personalization in such systems, especially in\ntemporal-driven scenarios. We demonstrate the utility of our framework through\npractical applications on two datasets, LastFM-1K and ML-1M. We conduct\nexperiments on a subsample of 80 users from each dataset, testing and assessing\nthe effectiveness of various prompt construction scenarios and in-context\nlearning, comprising more than 50 scenarios. This results in more than 4000\nrecommendations (80 * 50 = 4000). Our study reveals that while there are no\nsignificant unfairness issues in scenarios involving sensitive attributes, some\nconcerns remain. However, in terms of intrinsic fairness, which does not\ninvolve direct sensitivity, unfairness across demographic groups remains\nsignificant. The code and data used for this paper are available at:\n\\url{https://shorturl.at/awBFM}.",
+ "authors": "Yashar Deldjoo",
+ "published": "2024-05-03",
+ "updated": "2024-05-03",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2401.04057v1",
+ "title": "Unveiling Bias in Fairness Evaluations of Large Language Models: A Critical Literature Review of Music and Movie Recommendation Systems",
+ "abstract": "The rise of generative artificial intelligence, particularly Large Language\nModels (LLMs), has intensified the imperative to scrutinize fairness alongside\naccuracy. Recent studies have begun to investigate fairness evaluations for\nLLMs within domains such as recommendations. Given that personalization is an\nintrinsic aspect of recommendation systems, its incorporation into fairness\nassessments is paramount. Yet, the degree to which current fairness evaluation\nframeworks account for personalization remains unclear. Our comprehensive\nliterature review aims to fill this gap by examining how existing frameworks\nhandle fairness evaluations of LLMs, with a focus on the integration of\npersonalization factors. Despite an exhaustive collection and analysis of\nrelevant works, we discovered that most evaluations overlook personalization, a\ncritical facet of recommendation systems, thereby inadvertently perpetuating\nunfair practices. Our findings shed light on this oversight and underscore the\nurgent need for more nuanced fairness evaluations that acknowledge\npersonalization. Such improvements are vital for fostering equitable\ndevelopment within the AI community.",
+ "authors": "Chandan Kumar Sah, Dr. Lian Xiaoli, Muhammad Mirajul Islam",
+ "published": "2024-01-08",
+ "updated": "2024-01-08",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR",
+ "cs.AI",
+ "cs.SE"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2405.01769v1",
+ "title": "A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law",
+ "abstract": "In the fast-evolving domain of artificial intelligence, large language models\n(LLMs) such as GPT-3 and GPT-4 are revolutionizing the landscapes of finance,\nhealthcare, and law: domains characterized by their reliance on professional\nexpertise, challenging data acquisition, high-stakes, and stringent regulatory\ncompliance. This survey offers a detailed exploration of the methodologies,\napplications, challenges, and forward-looking opportunities of LLMs within\nthese high-stakes sectors. We highlight the instrumental role of LLMs in\nenhancing diagnostic and treatment methodologies in healthcare, innovating\nfinancial analytics, and refining legal interpretation and compliance\nstrategies. Moreover, we critically examine the ethics for LLM applications in\nthese fields, pointing out the existing ethical concerns and the need for\ntransparent, fair, and robust AI systems that respect regulatory norms. By\npresenting a thorough review of current literature and practical applications,\nwe showcase the transformative impact of LLMs, and outline the imperative for\ninterdisciplinary cooperation, methodological advancements, and ethical\nvigilance. Through this lens, we aim to spark dialogue and inspire future\nresearch dedicated to maximizing the benefits of LLMs while mitigating their\nrisks in these precision-dependent sectors. To facilitate future research on\nLLMs in these critical societal domains, we also initiate a reading list that\ntracks the latest advancements under this topic, which will be continually\nupdated: \\url{https://github.com/czyssrs/LLM_X_papers}.",
+ "authors": "Zhiyu Zoey Chen, Jing Ma, Xinlu Zhang, Nan Hao, An Yan, Armineh Nourbakhsh, Xianjun Yang, Julian McAuley, Linda Petzold, William Yang Wang",
+ "published": "2024-05-02",
+ "updated": "2024-05-02",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.07981v1",
+ "title": "Manipulating Large Language Models to Increase Product Visibility",
+ "abstract": "Large language models (LLMs) are increasingly being integrated into search\nengines to provide natural language responses tailored to user queries.\nCustomers and end-users are also becoming more dependent on these models for\nquick and easy purchase decisions. In this work, we investigate whether\nrecommendations from LLMs can be manipulated to enhance a product's visibility.\nWe demonstrate that adding a strategic text sequence (STS) -- a carefully\ncrafted message -- to a product's information page can significantly increase\nits likelihood of being listed as the LLM's top recommendation. To understand\nthe impact of STS, we use a catalog of fictitious coffee machines and analyze\nits effect on two target products: one that seldom appears in the LLM's\nrecommendations and another that usually ranks second. We observe that the\nstrategic text sequence significantly enhances the visibility of both products\nby increasing their chances of appearing as the top recommendation. This\nability to manipulate LLM-generated search responses provides vendors with a\nconsiderable competitive advantage and has the potential to disrupt fair market\ncompetition. Just as search engine optimization (SEO) revolutionized how\nwebpages are customized to rank higher in search engine results, influencing\nLLM recommendations could profoundly impact content optimization for AI-driven\nsearch services. Code for our experiments is available at\nhttps://github.com/aounon/llm-rank-optimizer.",
+ "authors": "Aounon Kumar, Himabindu Lakkaraju",
+ "published": "2024-04-11",
+ "updated": "2024-04-11",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR",
+ "cs.AI",
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.06003v1",
+ "title": "FreeEval: A Modular Framework for Trustworthy and Efficient Evaluation of Large Language Models",
+ "abstract": "The rapid development of large language model (LLM) evaluation methodologies\nand datasets has led to a profound challenge: integrating state-of-the-art\nevaluation techniques cost-effectively while ensuring reliability,\nreproducibility, and efficiency. Currently, there is a notable absence of a\nunified and adaptable framework that seamlessly integrates various evaluation\napproaches. Moreover, the reliability of evaluation findings is often\nquestionable due to potential data contamination, with the evaluation\nefficiency commonly overlooked when facing the substantial costs associated\nwith LLM inference. In response to these challenges, we introduce FreeEval, a\nmodular and scalable framework crafted to enable trustworthy and efficient\nautomatic evaluations of LLMs. Firstly, FreeEval's unified abstractions\nsimplify the integration and improve the transparency of diverse evaluation\nmethodologies, encompassing dynamic evaluation that demand sophisticated LLM\ninteractions. Secondly, the framework integrates meta-evaluation techniques\nlike human evaluation and data contamination detection, which, along with\ndynamic evaluation modules in the platform, enhance the fairness of the\nevaluation outcomes. Lastly, FreeEval is designed with a high-performance\ninfrastructure, including distributed computation and caching strategies,\nenabling extensive evaluations across multi-node, multi-GPU clusters for\nopen-source and proprietary LLMs.",
+ "authors": "Zhuohao Yu, Chang Gao, Wenjin Yao, Yidong Wang, Zhengran Zeng, Wei Ye, Jindong Wang, Yue Zhang, Shikun Zhang",
+ "published": "2024-04-09",
+ "updated": "2024-04-09",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2308.05374v2",
+ "title": "Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment",
+ "abstract": "Ensuring alignment, which refers to making models behave in accordance with\nhuman intentions [1,2], has become a critical task before deploying large\nlanguage models (LLMs) in real-world applications. For instance, OpenAI devoted\nsix months to iteratively aligning GPT-4 before its release [3]. However, a\nmajor challenge faced by practitioners is the lack of clear guidance on\nevaluating whether LLM outputs align with social norms, values, and\nregulations. This obstacle hinders systematic iteration and deployment of LLMs.\nTo address this issue, this paper presents a comprehensive survey of key\ndimensions that are crucial to consider when assessing LLM trustworthiness. The\nsurvey covers seven major categories of LLM trustworthiness: reliability,\nsafety, fairness, resistance to misuse, explainability and reasoning, adherence\nto social norms, and robustness. Each major category is further divided into\nseveral sub-categories, resulting in a total of 29 sub-categories.\nAdditionally, a subset of 8 sub-categories is selected for further\ninvestigation, where corresponding measurement studies are designed and\nconducted on several widely-used LLMs. The measurement results indicate that,\nin general, more aligned models tend to perform better in terms of overall\ntrustworthiness. However, the effectiveness of alignment varies across the\ndifferent trustworthiness categories considered. This highlights the importance\nof conducting more fine-grained analyses, testing, and making continuous\nimprovements on LLM alignment. By shedding light on these key dimensions of LLM\ntrustworthiness, this paper aims to provide valuable insights and guidance to\npractitioners in the field. Understanding and addressing these concerns will be\ncrucial in achieving reliable and ethically sound deployment of LLMs in various\napplications.",
+ "authors": "Yang Liu, Yuanshun Yao, Jean-Francois Ton, Xiaoying Zhang, Ruocheng Guo, Hao Cheng, Yegor Klochkov, Muhammad Faaiz Taufiq, Hang Li",
+ "published": "2023-08-10",
+ "updated": "2024-03-21",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.19118v1",
+ "title": "Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate",
+ "abstract": "Modern large language models (LLMs) like ChatGPT have shown remarkable\nperformance on general language tasks but still struggle on complex reasoning\ntasks, which drives the research on cognitive behaviors of LLMs to explore\nhuman-like problem-solving strategies. Along this direction, one representative\nstrategy is self-reflection, which asks an LLM to refine the solution with the\nfeedback generated by itself iteratively. However, our study shows that such\nreflection-style methods suffer from the Degeneration-of-Thought (DoT) problem:\nonce the LLM has established confidence in its solutions, it is unable to\ngenerate novel thoughts later through reflection even if its initial stance is\nincorrect. To address the DoT problem, we propose a Multi-Agent Debate (MAD)\nframework, in which multiple agents express their arguments in the state of\n\"tit for tat\" and a judge manages the debate process to obtain a final\nsolution. Clearly, our MAD framework encourages divergent thinking in LLMs\nwhich would be helpful for tasks that require deep levels of contemplation.\nExperiment results on two challenging datasets, commonsense machine translation\nand counter-intuitive arithmetic reasoning, demonstrate the effectiveness of\nour MAD framework. Extensive analyses suggest that the adaptive break of debate\nand the modest level of \"tit for tat\" state are required for MAD to obtain good\nperformance. Moreover, we find that LLMs might not be a fair judge if different\nLLMs are used for agents. Codes:\nhttps://github.com/Skytliang/Multi-Agents-Debate",
+ "authors": "Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, Shuming Shi",
+ "published": "2023-05-30",
+ "updated": "2023-05-30",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.03033v1",
+ "title": "Beyond Words: A Mathematical Framework for Interpreting Large Language Models",
+ "abstract": "Large language models (LLMs) are powerful AI tools that can generate and\ncomprehend natural language text and other complex information. However, the\nfield lacks a mathematical framework to systematically describe, compare and\nimprove LLMs. We propose Hex a framework that clarifies key terms and concepts\nin LLM research, such as hallucinations, alignment, self-verification and\nchain-of-thought reasoning. The Hex framework offers a precise and consistent\nway to characterize LLMs, identify their strengths and weaknesses, and\nintegrate new findings. Using Hex, we differentiate chain-of-thought reasoning\nfrom chain-of-thought prompting and establish the conditions under which they\nare equivalent. This distinction clarifies the basic assumptions behind\nchain-of-thought prompting and its implications for methods that use it, such\nas self-verification and prompt programming.\n Our goal is to provide a formal framework for LLMs that can help both\nresearchers and practitioners explore new possibilities for generative AI. We\ndo not claim to have a definitive solution, but rather a tool for opening up\nnew research avenues. We argue that our formal definitions and results are\ncrucial for advancing the discussion on how to build generative AI systems that\nare safe, reliable, fair and robust, especially in domains like healthcare and\nsoftware engineering.",
+ "authors": "Javier Gonz\u00e1lez, Aditya V. Nori",
+ "published": "2023-11-06",
+ "updated": "2023-11-06",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2308.11483v1",
+ "title": "Large Language Models Sensitivity to The Order of Options in Multiple-Choice Questions",
+ "abstract": "Large Language Models (LLMs) have demonstrated remarkable capabilities in\nvarious NLP tasks. However, previous works have shown these models are\nsensitive towards prompt wording, and few-shot demonstrations and their order,\nposing challenges to fair assessment of these models. As these models become\nmore powerful, it becomes imperative to understand and address these\nlimitations. In this paper, we focus on LLMs robustness on the task of\nmultiple-choice questions -- commonly adopted task to study reasoning and\nfact-retrieving capability of LLMs. Investigating the sensitivity of LLMs\ntowards the order of options in multiple-choice questions, we demonstrate a\nconsiderable performance gap of approximately 13% to 75% in LLMs on different\nbenchmarks, when answer options are reordered, even when using demonstrations\nin a few-shot setting. Through a detailed analysis, we conjecture that this\nsensitivity arises when LLMs are uncertain about the prediction between the\ntop-2/3 choices, and specific options placements may favor certain prediction\nbetween those top choices depending on the question caused by positional bias.\nWe also identify patterns in top-2 choices that amplify or mitigate the model's\nbias toward option placement. We found that for amplifying bias, the optimal\nstrategy involves positioning the top two choices as the first and last\noptions. Conversely, to mitigate bias, we recommend placing these choices among\nthe adjacent options. To validate our conjecture, we conduct various\nexperiments and adopt two approaches to calibrate LLMs' predictions, leading to\nup to 8 percentage points improvement across different models and benchmarks.",
+ "authors": "Pouya Pezeshkpour, Estevam Hruschka",
+ "published": "2023-08-22",
+ "updated": "2023-08-22",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2401.08495v2",
+ "title": "Large Language Models Portray Socially Subordinate Groups as More Homogeneous, Consistent with a Bias Observed in Humans",
+ "abstract": "Large language models (LLMs) are becoming pervasive in everyday life, yet\ntheir propensity to reproduce biases inherited from training data remains a\npressing concern. Prior investigations into bias in LLMs have focused on the\nassociation of social groups with stereotypical attributes. However, this is\nonly one form of human bias such systems may reproduce. We investigate a new\nform of bias in LLMs that resembles a social psychological phenomenon where\nsocially subordinate groups are perceived as more homogeneous than socially\ndominant groups. We had ChatGPT, a state-of-the-art LLM, generate texts about\nintersectional group identities and compared those texts on measures of\nhomogeneity. We consistently found that ChatGPT portrayed African, Asian, and\nHispanic Americans as more homogeneous than White Americans, indicating that\nthe model described racial minority groups with a narrower range of human\nexperience. ChatGPT also portrayed women as more homogeneous than men, but\nthese differences were small. Finally, we found that the effect of gender\ndiffered across racial/ethnic groups such that the effect of gender was\nconsistent within African and Hispanic Americans but not within Asian and White\nAmericans. We argue that the tendency of LLMs to describe groups as less\ndiverse risks perpetuating stereotypes and discriminatory behavior.",
+ "authors": "Messi H. J. Lee, Jacob M. Montgomery, Calvin K. Lai",
+ "published": "2024-01-16",
+ "updated": "2024-04-26",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.18276v1",
+ "title": "Bias Neutralization Framework: Measuring Fairness in Large Language Models with Bias Intelligence Quotient (BiQ)",
+ "abstract": "The burgeoning influence of Large Language Models (LLMs) in shaping public\ndiscourse and decision-making underscores the imperative to address inherent\nbiases within these AI systems. In the wake of AI's expansive integration\nacross sectors, addressing racial bias in LLMs has never been more critical.\nThis paper introduces a novel framework called Comprehensive Bias\nNeutralization Framework (CBNF) which embodies an innovative approach to\nquantifying and mitigating biases within LLMs. Our framework combines the Large\nLanguage Model Bias Index (LLMBI) [Oketunji, A., Anas, M., Saina, D., (2023)]\nand Bias removaL with No Demographics (BLIND) [Orgad, H., Belinkov, Y. (2023)]\nmethodologies to create a new metric called Bias Intelligence Quotient\n(BiQ)which detects, measures, and mitigates racial bias in LLMs without\nreliance on demographic annotations.\n By introducing a new metric called BiQ that enhances LLMBI with additional\nfairness metrics, CBNF offers a multi-dimensional metric for bias assessment,\nunderscoring the necessity of a nuanced approach to fairness in AI [Mehrabi et\nal., 2021]. This paper presents a detailed analysis of Latimer AI (a language\nmodel incrementally trained on black history and culture) in comparison to\nChatGPT 3.5, illustrating Latimer AI's efficacy in detecting racial, cultural,\nand gender biases through targeted training and refined bias mitigation\nstrategies [Latimer & Bender, 2023].",
+ "authors": "Malur Narayan, John Pasmore, Elton Sampaio, Vijay Raghavan, Gabriella Waters",
+ "published": "2024-04-28",
+ "updated": "2024-04-28",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "D.1; I.2"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.17553v1",
+ "title": "RuBia: A Russian Language Bias Detection Dataset",
+ "abstract": "Warning: this work contains upsetting or disturbing content.\n Large language models (LLMs) tend to learn the social and cultural biases\npresent in the raw pre-training data. To test if an LLM's behavior is fair,\nfunctional datasets are employed, and due to their purpose, these datasets are\nhighly language and culture-specific. In this paper, we address a gap in the\nscope of multilingual bias evaluation by presenting a bias detection dataset\nspecifically designed for the Russian language, dubbed as RuBia. The RuBia\ndataset is divided into 4 domains: gender, nationality, socio-economic status,\nand diverse, each of the domains is further divided into multiple fine-grained\nsubdomains. Every example in the dataset consists of two sentences with the\nfirst reinforcing a potentially harmful stereotype or trope and the second\ncontradicting it. These sentence pairs were first written by volunteers and\nthen validated by native-speaking crowdsourcing workers. Overall, there are\nnearly 2,000 unique sentence pairs spread over 19 subdomains in RuBia. To\nillustrate the dataset's purpose, we conduct a diagnostic evaluation of\nstate-of-the-art or near-state-of-the-art LLMs and discuss the LLMs'\npredisposition to social biases.",
+ "authors": "Veronika Grigoreva, Anastasiia Ivanova, Ilseyar Alimova, Ekaterina Artemova",
+ "published": "2024-03-26",
+ "updated": "2024-03-26",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.02294v1",
+ "title": "LLMs grasp morality in concept",
+ "abstract": "Work in AI ethics and fairness has made much progress in regulating LLMs to\nreflect certain values, such as fairness, truth, and diversity. However, it has\ntaken the problem of how LLMs might 'mean' anything at all for granted. Without\naddressing this, it is not clear what imbuing LLMs with such values even means.\nIn response, we provide a general theory of meaning that extends beyond humans.\nWe use this theory to explicate the precise nature of LLMs as meaning-agents.\nWe suggest that the LLM, by virtue of its position as a meaning-agent, already\ngrasps the constructions of human society (e.g. morality, gender, and race) in\nconcept. Consequently, under certain ethical frameworks, currently popular\nmethods for model alignment are limited at best and counterproductive at worst.\nMoreover, unaligned models may help us better develop our moral and social\nphilosophy.",
+ "authors": "Mark Pock, Andre Ye, Jared Moore",
+ "published": "2023-11-04",
+ "updated": "2023-11-04",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.12150v1",
+ "title": "Your Large Language Model is Secretly a Fairness Proponent and You Should Prompt it Like One",
+ "abstract": "The widespread adoption of large language models (LLMs) underscores the\nurgent need to ensure their fairness. However, LLMs frequently present dominant\nviewpoints while ignoring alternative perspectives from minority parties,\nresulting in potential biases. We hypothesize that these fairness-violating\nbehaviors occur because LLMs express their viewpoints using a human personality\nthat represents the majority of training data. In response to this, we validate\nthat prompting LLMs with specific roles can allow LLMs to express diverse\nviewpoints. Building on this insight and observation, we develop FairThinking,\na pipeline designed to automatically generate roles that enable LLMs to\narticulate diverse perspectives for fair expressions. To evaluate FairThinking,\nwe create a dataset with a thousand items covering three fairness-related\ntopics and conduct experiments on GPT-3.5, GPT-4, Llama2, and Mistral to\ndemonstrate its superior performance.",
+ "authors": "Tianlin Li, Xiaoyu Zhang, Chao Du, Tianyu Pang, Qian Liu, Qing Guo, Chao Shen, Yang Liu",
+ "published": "2024-02-19",
+ "updated": "2024-02-19",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "I.2; J.4"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.08517v1",
+ "title": "Online Safety Analysis for LLMs: a Benchmark, an Assessment, and a Path Forward",
+ "abstract": "While Large Language Models (LLMs) have seen widespread applications across\nnumerous fields, their limited interpretability poses concerns regarding their\nsafe operations from multiple aspects, e.g., truthfulness, robustness, and\nfairness. Recent research has started developing quality assurance methods for\nLLMs, introducing techniques such as offline detector-based or uncertainty\nestimation methods. However, these approaches predominantly concentrate on\npost-generation analysis, leaving the online safety analysis for LLMs during\nthe generation phase an unexplored area. To bridge this gap, we conduct in this\nwork a comprehensive evaluation of the effectiveness of existing online safety\nanalysis methods on LLMs. We begin with a pilot study that validates the\nfeasibility of detecting unsafe outputs in the early generation process.\nFollowing this, we establish the first publicly available benchmark of online\nsafety analysis for LLMs, including a broad spectrum of methods, models, tasks,\ndatasets, and evaluation metrics. Utilizing this benchmark, we extensively\nanalyze the performance of state-of-the-art online safety analysis methods on\nboth open-source and closed-source LLMs. This analysis reveals the strengths\nand weaknesses of individual methods and offers valuable insights into\nselecting the most appropriate method based on specific application scenarios\nand task requirements. Furthermore, we also explore the potential of using\nhybridization methods, i.e., combining multiple methods to derive a collective\nsafety conclusion, to enhance the efficacy of online safety analysis for LLMs.\nOur findings indicate a promising direction for the development of innovative\nand trustworthy quality assurance methodologies for LLMs, facilitating their\nreliable deployments across diverse domains.",
+ "authors": "Xuan Xie, Jiayang Song, Zhehua Zhou, Yuheng Huang, Da Song, Lei Ma",
+ "published": "2024-04-12",
+ "updated": "2024-04-12",
+ "primary_cat": "cs.SE",
+ "cats": [
+ "cs.SE",
+ "cs.AI",
+ "cs.CL",
+ "cs.CR",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2307.03838v2",
+ "title": "RADAR: Robust AI-Text Detection via Adversarial Learning",
+ "abstract": "Recent advances in large language models (LLMs) and the intensifying\npopularity of ChatGPT-like applications have blurred the boundary of\nhigh-quality text generation between humans and machines. However, in addition\nto the anticipated revolutionary changes to our technology and society, the\ndifficulty of distinguishing LLM-generated texts (AI-text) from human-generated\ntexts poses new challenges of misuse and fairness, such as fake content\ngeneration, plagiarism, and false accusations of innocent writers. While\nexisting works show that current AI-text detectors are not robust to LLM-based\nparaphrasing, this paper aims to bridge this gap by proposing a new framework\ncalled RADAR, which jointly trains a robust AI-text detector via adversarial\nlearning. RADAR is based on adversarial training of a paraphraser and a\ndetector. The paraphraser's goal is to generate realistic content to evade\nAI-text detection. RADAR uses the feedback from the detector to update the\nparaphraser, and vice versa. Evaluated with 8 different LLMs (Pythia, Dolly\n2.0, Palmyra, Camel, GPT-J, Dolly 1.0, LLaMA, and Vicuna) across 4 datasets,\nexperimental results show that RADAR significantly outperforms existing AI-text\ndetection methods, especially when paraphrasing is in place. We also identify\nthe strong transferability of RADAR from instruction-tuned LLMs to other LLMs,\nand evaluate the improved capability of RADAR via GPT-3.5-Turbo.",
+ "authors": "Xiaomeng Hu, Pin-Yu Chen, Tsung-Yi Ho",
+ "published": "2023-07-07",
+ "updated": "2023-10-24",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2309.03852v2",
+ "title": "FLM-101B: An Open LLM and How to Train It with $100K Budget",
+ "abstract": "Large language models (LLMs) have achieved remarkable success in NLP and\nmultimodal tasks, among others. Despite these successes, two main challenges\nremain in developing LLMs: (i) high computational cost, and (ii) fair and\nobjective evaluations. In this paper, we report a solution to significantly\nreduce LLM training cost through a growth strategy. We demonstrate that a\n101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US\ndollars. Inspired by IQ tests, we also consolidate an additional range of\nevaluations on top of existing evaluations that focus on knowledge-oriented\nabilities. These IQ evaluations include symbolic mapping, rule understanding,\npattern mining, and anti-interference. Such evaluations minimize the potential\nimpact of memorization. Experimental results show that our model, named\nFLM-101B, trained with a budget of 100K US dollars, achieves performance\ncomparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,\nespecially on the additional range of IQ evaluations. The checkpoint of\nFLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.",
+ "authors": "Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang",
+ "published": "2023-09-07",
+ "updated": "2023-09-17",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2312.14769v3",
+ "title": "Large Language Model (LLM) Bias Index -- LLMBI",
+ "abstract": "The Large Language Model Bias Index (LLMBI) is a pioneering approach designed\nto quantify and address biases inherent in large language models (LLMs), such\nas GPT-4. We recognise the increasing prevalence and impact of LLMs across\ndiverse sectors. This research introduces a novel metric, LLMBI, to\nsystematically measure and mitigate biases potentially skewing model responses.\nWe formulated LLMBI using a composite scoring system incorporating multiple\ndimensions of bias, including but not limited to age, gender, and racial\nbiases. To operationalise this metric, we engaged in a multi-step process\ninvolving collecting and annotating LLM responses, applying sophisticated\nNatural Language Processing (NLP) techniques for bias detection, and computing\nthe LLMBI score through a specially crafted mathematical formula. The formula\nintegrates weighted averages of various bias dimensions, a penalty for dataset\ndiversity deficiencies, and a correction for sentiment biases. Our empirical\nanalysis, conducted using responses from OpenAI's API, employs advanced\nsentiment analysis as a representative method for bias detection. The research\nreveals LLMs, whilst demonstrating impressive capabilities in text generation,\nexhibit varying degrees of bias across different dimensions. LLMBI provides a\nquantifiable measure to compare biases across models and over time, offering a\nvital tool for systems engineers, researchers and regulators in enhancing the\nfairness and reliability of LLMs. It highlights the potential of LLMs in\nmimicking unbiased human-like responses. Additionally, it underscores the\nnecessity of continuously monitoring and recalibrating such models to align\nwith evolving societal norms and ethical standards.",
+ "authors": "Abiodun Finbarrs Oketunji, Muhammad Anas, Deepthi Saina",
+ "published": "2023-12-22",
+ "updated": "2023-12-29",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.CY",
+ "cs.LG",
+ "I.2.7"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.04489v1",
+ "title": "De-amplifying Bias from Differential Privacy in Language Model Fine-tuning",
+ "abstract": "Fairness and privacy are two important values machine learning (ML)\npractitioners often seek to operationalize in models. Fairness aims to reduce\nmodel bias for social/demographic sub-groups. Privacy via differential privacy\n(DP) mechanisms, on the other hand, limits the impact of any individual's\ntraining data on the resulting model. The trade-offs between privacy and\nfairness goals of trustworthy ML pose a challenge to those wishing to address\nboth. We show that DP amplifies gender, racial, and religious bias when\nfine-tuning large language models (LLMs), producing models more biased than\nones fine-tuned without DP. We find the cause of the amplification to be a\ndisparity in convergence of gradients across sub-groups. Through the case of\nbinary gender bias, we demonstrate that Counterfactual Data Augmentation (CDA),\na known method for addressing bias, also mitigates bias amplification by DP. As\na consequence, DP and CDA together can be used to fine-tune models while\nmaintaining both fairness and privacy.",
+ "authors": "Sanjari Srivastava, Piotr Mardziel, Zhikhun Zhang, Archana Ahlawat, Anupam Datta, John C Mitchell",
+ "published": "2024-02-07",
+ "updated": "2024-02-07",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.CR",
+ "cs.CY",
+ "stat.ME"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.13343v1",
+ "title": "Challenges and Contributing Factors in the Utilization of Large Language Models (LLMs)",
+ "abstract": "With the development of large language models (LLMs) like the GPT series,\ntheir widespread use across various application scenarios presents a myriad of\nchallenges. This review initially explores the issue of domain specificity,\nwhere LLMs may struggle to provide precise answers to specialized questions\nwithin niche fields. The problem of knowledge forgetting arises as these LLMs\nmight find it hard to balance old and new information. The knowledge repetition\nphenomenon reveals that sometimes LLMs might deliver overly mechanized\nresponses, lacking depth and originality. Furthermore, knowledge illusion\ndescribes situations where LLMs might provide answers that seem insightful but\nare actually superficial, while knowledge toxicity focuses on harmful or biased\ninformation outputs. These challenges underscore problems in the training data\nand algorithmic design of LLMs. To address these issues, it's suggested to\ndiversify training data, fine-tune models, enhance transparency and\ninterpretability, and incorporate ethics and fairness training. Future\ntechnological trends might lean towards iterative methodologies, multimodal\nlearning, model personalization and customization, and real-time learning and\nfeedback mechanisms. In conclusion, future LLMs should prioritize fairness,\ntransparency, and ethics, ensuring they uphold high moral and ethical standards\nwhen serving humanity.",
+ "authors": "Xiaoliang Chen, Liangbin Li, Le Chang, Yunhe Huang, Yuxuan Zhao, Yuxiao Zhang, Dinuo Li",
+ "published": "2023-10-20",
+ "updated": "2023-10-20",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.15451v1",
+ "title": "Towards Enabling FAIR Dataspaces Using Large Language Models",
+ "abstract": "Dataspaces have recently gained adoption across various sectors, including\ntraditionally less digitized domains such as culture. Leveraging Semantic Web\ntechnologies helps to make dataspaces FAIR, but their complexity poses a\nsignificant challenge to the adoption of dataspaces and increases their cost.\nThe advent of Large Language Models (LLMs) raises the question of how these\nmodels can support the adoption of FAIR dataspaces. In this work, we\ndemonstrate the potential of LLMs in dataspaces with a concrete example. We\nalso derive a research agenda for exploring this emerging field.",
+ "authors": "Benedikt T. Arnold, Johannes Theissen-Lipp, Diego Collarana, Christoph Lange, Sandra Geisler, Edward Curry, Stefan Decker",
+ "published": "2024-03-18",
+ "updated": "2024-03-18",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2312.15198v2",
+ "title": "Do LLM Agents Exhibit Social Behavior?",
+ "abstract": "The advances of Large Language Models (LLMs) are expanding their utility in\nboth academic research and practical applications. Recent social science\nresearch has explored the use of these ``black-box'' LLM agents for simulating\ncomplex social systems and potentially substituting human subjects in\nexperiments. Our study delves into this emerging domain, investigating the\nextent to which LLMs exhibit key social interaction principles, such as social\nlearning, social preference, and cooperative behavior (indirect reciprocity),\nin their interactions with humans and other agents. We develop a framework for\nour study, wherein classical laboratory experiments involving human subjects\nare adapted to use LLM agents. This approach involves step-by-step reasoning\nthat mirrors human cognitive processes and zero-shot learning to assess the\ninnate preferences of LLMs. Our analysis of LLM agents' behavior includes both\nthe primary effects and an in-depth examination of the underlying mechanisms.\nFocusing on GPT-4, our analyses suggest that LLM agents appear to exhibit a\nrange of human-like social behaviors such as distributional and reciprocity\npreferences, responsiveness to group identity cues, engagement in indirect\nreciprocity, and social learning capabilities. However, our analysis also\nreveals notable differences: LLMs demonstrate a pronounced fairness preference,\nweaker positive reciprocity, and a more calculating approach in social learning\ncompared to humans. These insights indicate that while LLMs hold great promise\nfor applications in social science research, such as in laboratory experiments\nand agent-based modeling, the subtle behavioral differences between LLM agents\nand humans warrant further investigation. Careful examination and development\nof protocols in evaluating the social behaviors of LLMs are necessary before\ndirectly applying these models to emulate human behavior.",
+ "authors": "Yan Leng, Yuan Yuan",
+ "published": "2023-12-23",
+ "updated": "2024-02-22",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI",
+ "cs.SI",
+ "econ.GN",
+ "q-fin.EC"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.00306v1",
+ "title": "Probing Explicit and Implicit Gender Bias through LLM Conditional Text Generation",
+ "abstract": "Large Language Models (LLMs) can generate biased and toxic responses. Yet\nmost prior work on LLM gender bias evaluation requires predefined\ngender-related phrases or gender stereotypes, which are challenging to be\ncomprehensively collected and are limited to explicit bias evaluation. In\naddition, we believe that instances devoid of gender-related language or\nexplicit stereotypes in inputs can still induce gender bias in LLMs. Thus, in\nthis work, we propose a conditional text generation mechanism without the need\nfor predefined gender phrases and stereotypes. This approach employs three\ntypes of inputs generated through three distinct strategies to probe LLMs,\naiming to show evidence of explicit and implicit gender biases in LLMs. We also\nutilize explicit and implicit evaluation metrics to evaluate gender bias in\nLLMs under different strategies. Our experiments demonstrate that an increased\nmodel size does not consistently lead to enhanced fairness and all tested LLMs\nexhibit explicit and/or implicit gender bias, even when explicit gender\nstereotypes are absent in the inputs.",
+ "authors": "Xiangjue Dong, Yibo Wang, Philip S. Yu, James Caverlee",
+ "published": "2023-11-01",
+ "updated": "2023-11-01",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2401.00625v2",
+ "title": "Beyond Efficiency: A Systematic Survey of Resource-Efficient Large Language Models",
+ "abstract": "The burgeoning field of Large Language Models (LLMs), exemplified by\nsophisticated models like OpenAI's ChatGPT, represents a significant\nadvancement in artificial intelligence. These models, however, bring forth\nsubstantial challenges in the high consumption of computational, memory,\nenergy, and financial resources, especially in environments with limited\nresource capabilities. This survey aims to systematically address these\nchallenges by reviewing a broad spectrum of techniques designed to enhance the\nresource efficiency of LLMs. We categorize methods based on their optimization\nfocus: computational, memory, energy, financial, and network resources and\ntheir applicability across various stages of an LLM's lifecycle, including\narchitecture design, pretraining, finetuning, and system design. Additionally,\nthe survey introduces a nuanced categorization of resource efficiency\ntechniques by their specific resource types, which uncovers the intricate\nrelationships and mappings between various resources and corresponding\noptimization techniques. A standardized set of evaluation metrics and datasets\nis also presented to facilitate consistent and fair comparisons across\ndifferent models and techniques. By offering a comprehensive overview of the\ncurrent sota and identifying open research avenues, this survey serves as a\nfoundational reference for researchers and practitioners, aiding them in\ndeveloping more sustainable and efficient LLMs in a rapidly evolving landscape.",
+ "authors": "Guangji Bai, Zheng Chai, Chen Ling, Shiyu Wang, Jiaying Lu, Nan Zhang, Tingwei Shi, Ziyang Yu, Mengdan Zhu, Yifei Zhang, Carl Yang, Yue Cheng, Liang Zhao",
+ "published": "2024-01-01",
+ "updated": "2024-01-04",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.12736v1",
+ "title": "Large Language Model Supply Chain: A Research Agenda",
+ "abstract": "The rapid advancements in pre-trained Large Language Models (LLMs) and Large\nMultimodal Models (LMMs) have ushered in a new era of intelligent applications,\ntransforming fields ranging from natural language processing to content\ngeneration. The LLM supply chain represents a crucial aspect of the\ncontemporary artificial intelligence landscape. It encompasses the entire\nlifecycle of pre-trained models, from its initial development and training to\nits final deployment and application in various domains. This paper presents a\ncomprehensive overview of the LLM supply chain, highlighting its three core\nelements: 1) the model infrastructure, encompassing datasets and toolchain for\ntraining, optimization, and deployment; 2) the model lifecycle, covering\ntraining, testing, releasing, and ongoing maintenance; and 3) the downstream\napplication ecosystem, enabling the integration of pre-trained models into a\nwide range of intelligent applications. However, this rapidly evolving field\nfaces numerous challenges across these key components, including data privacy\nand security, model interpretability and fairness, infrastructure scalability,\nand regulatory compliance. Addressing these challenges is essential for\nharnessing the full potential of LLMs and ensuring their ethical and\nresponsible use. This paper provides a future research agenda for the LLM\nsupply chain, aiming at driving the continued advancement and responsible\ndeployment of these transformative LLMs.",
+ "authors": "Shenao Wang, Yanjie Zhao, Xinyi Hou, Haoyu Wang",
+ "published": "2024-04-19",
+ "updated": "2024-04-19",
+ "primary_cat": "cs.SE",
+ "cats": [
+ "cs.SE"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.18580v1",
+ "title": "FFT: Towards Harmlessness Evaluation and Analysis for LLMs with Factuality, Fairness, Toxicity",
+ "abstract": "The widespread of generative artificial intelligence has heightened concerns\nabout the potential harms posed by AI-generated texts, primarily stemming from\nfactoid, unfair, and toxic content. Previous researchers have invested much\neffort in assessing the harmlessness of generative language models. However,\nexisting benchmarks are struggling in the era of large language models (LLMs),\ndue to the stronger language generation and instruction following capabilities,\nas well as wider applications. In this paper, we propose FFT, a new benchmark\nwith 2116 elaborated-designed instances, for LLM harmlessness evaluation with\nfactuality, fairness, and toxicity. To investigate the potential harms of LLMs,\nwe evaluate 9 representative LLMs covering various parameter scales, training\nstages, and creators. Experiments show that the harmlessness of LLMs is still\nunder-satisfactory, and extensive analysis derives some insightful findings\nthat could inspire future research for harmless LLM research.",
+ "authors": "Shiyao Cui, Zhenyu Zhang, Yilong Chen, Wenyuan Zhang, Tianyun Liu, Siqi Wang, Tingwen Liu",
+ "published": "2023-11-30",
+ "updated": "2023-11-30",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.CR"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.08189v1",
+ "title": "Simulating Human Strategic Behavior: Comparing Single and Multi-agent LLMs",
+ "abstract": "When creating plans, policies, or applications for people, it is challenging\nfor designers to think through the strategic ways that different people will\nbehave. Recently, Large Language Models (LLMs) have been shown to create\nrealistic simulations of human-like behavior based on personas. We build on\nthis to investigate whether LLMs can simulate human strategic behavior. Human\nstrategies are complex because they take into account social norms in addition\nto aiming to maximize personal gain. The ultimatum game is a classic economics\nexperiment used to understand human strategic behavior in a social setting. It\nshows that people will often choose to \"punish\" other players to enforce social\nnorms rather than to maximize personal profits. We test whether LLMs can\nreplicate this complex behavior in simulations. We compare two architectures:\nsingle- and multi-agent LLMs. We compare their abilities to (1) simulate\nhuman-like actions in the ultimatum game, (2) simulate two player\npersonalities, greedy and fair, and (3) create robust strategies that are\nlogically complete and consistent with personality. Our evaluation shows the\nmulti-agent architecture is much more accurate than single LLMs (88% vs. 50%)\nin simulating human strategy creation and actions for personality pairs. Thus\nthere is potential to use LLMs to simulate human strategic behavior to help\ndesigners, planners, and policymakers perform preliminary exploration of how\npeople behave in systems.",
+ "authors": "Karthik Sreedhar, Lydia Chilton",
+ "published": "2024-02-13",
+ "updated": "2024-02-13",
+ "primary_cat": "cs.HC",
+ "cats": [
+ "cs.HC"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.01937v1",
+ "title": "Can Large Language Models Be an Alternative to Human Evaluations?",
+ "abstract": "Human evaluation is indispensable and inevitable for assessing the quality of\ntexts generated by machine learning models or written by humans. However, human\nevaluation is very difficult to reproduce and its quality is notoriously\nunstable, hindering fair comparisons among different natural language\nprocessing (NLP) models and algorithms. Recently, large language models (LLMs)\nhave demonstrated exceptional performance on unseen tasks when only the task\ninstructions are provided. In this paper, we explore if such an ability of the\nLLMs can be used as an alternative to human evaluation. We present the LLMs\nwith the exact same instructions, samples to be evaluated, and questions used\nto conduct human evaluation, and then ask the LLMs to generate responses to\nthose questions; we dub this LLM evaluation. We use human evaluation and LLM\nevaluation to evaluate the texts in two NLP tasks: open-ended story generation\nand adversarial attacks. We show that the result of LLM evaluation is\nconsistent with the results obtained by expert human evaluation: the texts\nrated higher by human experts are also rated higher by the LLMs. We also find\nthat the results of LLM evaluation are stable over different formatting of the\ntask instructions and the sampling algorithm used to generate the answer. We\nare the first to show the potential of using LLMs to assess the quality of\ntexts and discuss the limitations and ethical considerations of LLM evaluation.",
+ "authors": "Cheng-Han Chiang, Hung-yi Lee",
+ "published": "2023-05-03",
+ "updated": "2023-05-03",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.HC"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2303.01248v3",
+ "title": "Can ChatGPT Assess Human Personalities? A General Evaluation Framework",
+ "abstract": "Large Language Models (LLMs) especially ChatGPT have produced impressive\nresults in various areas, but their potential human-like psychology is still\nlargely unexplored. Existing works study the virtual personalities of LLMs but\nrarely explore the possibility of analyzing human personalities via LLMs. This\npaper presents a generic evaluation framework for LLMs to assess human\npersonalities based on Myers Briggs Type Indicator (MBTI) tests. Specifically,\nwe first devise unbiased prompts by randomly permuting options in MBTI\nquestions and adopt the average testing result to encourage more impartial\nanswer generation. Then, we propose to replace the subject in question\nstatements to enable flexible queries and assessments on different subjects\nfrom LLMs. Finally, we re-formulate the question instructions in a manner of\ncorrectness evaluation to facilitate LLMs to generate clearer responses. The\nproposed framework enables LLMs to flexibly assess personalities of different\ngroups of people. We further propose three evaluation metrics to measure the\nconsistency, robustness, and fairness of assessment results from\nstate-of-the-art LLMs including ChatGPT and GPT-4. Our experiments reveal\nChatGPT's ability to assess human personalities, and the average results\ndemonstrate that it can achieve more consistent and fairer assessments in spite\nof lower robustness against prompt biases compared with InstructGPT.",
+ "authors": "Haocong Rao, Cyril Leung, Chunyan Miao",
+ "published": "2023-03-01",
+ "updated": "2023-10-13",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.18502v1",
+ "title": "Few-Shot Fairness: Unveiling LLM's Potential for Fairness-Aware Classification",
+ "abstract": "Employing Large Language Models (LLM) in various downstream applications such\nas classification is crucial, especially for smaller companies lacking the\nexpertise and resources required for fine-tuning a model. Fairness in LLMs\nhelps ensure inclusivity, equal representation based on factors such as race,\ngender and promotes responsible AI deployment. As the use of LLMs has become\nincreasingly prevalent, it is essential to assess whether LLMs can generate\nfair outcomes when subjected to considerations of fairness. In this study, we\nintroduce a framework outlining fairness regulations aligned with various\nfairness definitions, with each definition being modulated by varying degrees\nof abstraction. We explore the configuration for in-context learning and the\nprocedure for selecting in-context demonstrations using RAG, while\nincorporating fairness rules into the process. Experiments conducted with\ndifferent LLMs indicate that GPT-4 delivers superior results in terms of both\naccuracy and fairness compared to other models. This work is one of the early\nattempts to achieve fairness in prediction tasks by utilizing LLMs through\nin-context learning.",
+ "authors": "Garima Chhikara, Anurag Sharma, Kripabandhu Ghosh, Abhijnan Chakraborty",
+ "published": "2024-02-28",
+ "updated": "2024-02-28",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.04205v2",
+ "title": "Rephrase and Respond: Let Large Language Models Ask Better Questions for Themselves",
+ "abstract": "Misunderstandings arise not only in interpersonal communication but also\nbetween humans and Large Language Models (LLMs). Such discrepancies can make\nLLMs interpret seemingly unambiguous questions in unexpected ways, yielding\nincorrect responses. While it is widely acknowledged that the quality of a\nprompt, such as a question, significantly impacts the quality of the response\nprovided by LLMs, a systematic method for crafting questions that LLMs can\nbetter comprehend is still underdeveloped. In this paper, we present a method\nnamed `Rephrase and Respond' (RaR), which allows LLMs to rephrase and expand\nquestions posed by humans and provide responses in a single prompt. This\napproach serves as a simple yet effective prompting method for improving\nperformance. We also introduce a two-step variant of RaR, where a rephrasing\nLLM first rephrases the question and then passes the original and rephrased\nquestions together to a different responding LLM. This facilitates the\neffective utilization of rephrased questions generated by one LLM with another.\nOur experiments demonstrate that our methods significantly improve the\nperformance of different models across a wide range to tasks. We further\nprovide a comprehensive comparison between RaR and the popular Chain-of-Thought\n(CoT) methods, both theoretically and empirically. We show that RaR is\ncomplementary to CoT and can be combined with CoT to achieve even better\nperformance. Our work not only contributes to enhancing LLM performance\nefficiently and effectively but also sheds light on a fair evaluation of LLM\ncapabilities. Data and codes are available at\nhttps://github.com/uclaml/Rephrase-and-Respond.",
+ "authors": "Yihe Deng, Weitong Zhang, Zixiang Chen, Quanquan Gu",
+ "published": "2023-11-07",
+ "updated": "2024-04-18",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.14208v2",
+ "title": "Content Conditional Debiasing for Fair Text Embedding",
+ "abstract": "Mitigating biases in machine learning models has gained increasing attention\nin Natural Language Processing (NLP). Yet, only a few studies focus on fair\ntext embeddings, which are crucial yet challenging for real-world applications.\nIn this paper, we propose a novel method for learning fair text embeddings. We\nachieve fairness while maintaining utility trade-off by ensuring conditional\nindependence between sensitive attributes and text embeddings conditioned on\nthe content. Specifically, we enforce that embeddings of texts with different\nsensitive attributes but identical content maintain the same distance toward\nthe embedding of their corresponding neutral text. Furthermore, we address the\nissue of lacking proper training data by using Large Language Models (LLMs) to\naugment texts into different sensitive groups. Our extensive evaluations\ndemonstrate that our approach effectively improves fairness while preserving\nthe utility of embeddings, representing a pioneering effort in achieving\nconditional independence for fair text embeddings.",
+ "authors": "Wenlong Deng, Blair Chen, Xiaoxiao Li, Christos Thrampoulidis",
+ "published": "2024-02-22",
+ "updated": "2024-02-23",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.CY",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.15491v1",
+ "title": "Open Source Conversational LLMs do not know most Spanish words",
+ "abstract": "The growing interest in Large Language Models (LLMs) and in particular in\nconversational models with which users can interact has led to the development\nof a large number of open-source chat LLMs. These models are evaluated on a\nwide range of benchmarks to assess their capabilities in answering questions or\nsolving problems on almost any possible topic or to test their ability to\nreason or interpret texts. Instead, the evaluation of the knowledge that these\nmodels have of the languages has received much less attention. For example, the\nwords that they can recognize and use in different languages. In this paper, we\nevaluate the knowledge that open-source chat LLMs have of Spanish words by\ntesting a sample of words in a reference dictionary. The results show that\nopen-source chat LLMs produce incorrect meanings for an important fraction of\nthe words and are not able to use most of the words correctly to write\nsentences with context. These results show how Spanish is left behind in the\nopen-source LLM race and highlight the need to push for linguistic fairness in\nconversational LLMs ensuring that they provide similar performance across\nlanguages.",
+ "authors": "Javier Conde, Miguel Gonz\u00e1lez, Nina Melero, Raquel Ferrando, Gonzalo Mart\u00ednez, Elena Merino-G\u00f3mez, Jos\u00e9 Alberto Hern\u00e1ndez, Pedro Reviriego",
+ "published": "2024-03-21",
+ "updated": "2024-03-21",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.08472v1",
+ "title": "Selecting Shots for Demographic Fairness in Few-Shot Learning with Large Language Models",
+ "abstract": "Recently, work in NLP has shifted to few-shot (in-context) learning, with\nlarge language models (LLMs) performing well across a range of tasks. However,\nwhile fairness evaluations have become a standard for supervised methods,\nlittle is known about the fairness of LLMs as prediction systems. Further,\ncommon standard methods for fairness involve access to models weights or are\napplied during finetuning, which are not applicable in few-shot learning. Do\nLLMs exhibit prediction biases when used for standard NLP tasks? In this work,\nwe explore the effect of shots, which directly affect the performance of\nmodels, on the fairness of LLMs as NLP classification systems. We consider how\ndifferent shot selection strategies, both existing and new demographically\nsensitive methods, affect model fairness across three standard fairness\ndatasets. We discuss how future work can include LLM fairness evaluations.",
+ "authors": "Carlos Aguirre, Kuleen Sasse, Isabel Cachola, Mark Dredze",
+ "published": "2023-11-14",
+ "updated": "2023-11-14",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.03514v3",
+ "title": "Can Large Language Models Transform Computational Social Science?",
+ "abstract": "Large Language Models (LLMs) are capable of successfully performing many\nlanguage processing tasks zero-shot (without training data). If zero-shot LLMs\ncan also reliably classify and explain social phenomena like persuasiveness and\npolitical ideology, then LLMs could augment the Computational Social Science\n(CSS) pipeline in important ways. This work provides a road map for using LLMs\nas CSS tools. Towards this end, we contribute a set of prompting best practices\nand an extensive evaluation pipeline to measure the zero-shot performance of 13\nlanguage models on 25 representative English CSS benchmarks. On taxonomic\nlabeling tasks (classification), LLMs fail to outperform the best fine-tuned\nmodels but still achieve fair levels of agreement with humans. On free-form\ncoding tasks (generation), LLMs produce explanations that often exceed the\nquality of crowdworkers' gold references. We conclude that the performance of\ntoday's LLMs can augment the CSS research pipeline in two ways: (1) serving as\nzero-shot data annotators on human annotation teams, and (2) bootstrapping\nchallenging creative generation tasks (e.g., explaining the underlying\nattributes of a text). In summary, LLMs are posed to meaningfully participate\nin social science analysis in partnership with humans.",
+ "authors": "Caleb Ziems, William Held, Omar Shaikh, Jiaao Chen, Zhehao Zhang, Diyi Yang",
+ "published": "2023-04-12",
+ "updated": "2024-02-26",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2308.10149v2",
+ "title": "A Survey on Fairness in Large Language Models",
+ "abstract": "Large Language Models (LLMs) have shown powerful performance and development\nprospects and are widely deployed in the real world. However, LLMs can capture\nsocial biases from unprocessed training data and propagate the biases to\ndownstream tasks. Unfair LLM systems have undesirable social impacts and\npotential harms. In this paper, we provide a comprehensive review of related\nresearch on fairness in LLMs. Considering the influence of parameter magnitude\nand training paradigm on research strategy, we divide existing fairness\nresearch into oriented to medium-sized LLMs under pre-training and fine-tuning\nparadigms and oriented to large-sized LLMs under prompting paradigms. First,\nfor medium-sized LLMs, we introduce evaluation metrics and debiasing methods\nfrom the perspectives of intrinsic bias and extrinsic bias, respectively. Then,\nfor large-sized LLMs, we introduce recent fairness research, including fairness\nevaluation, reasons for bias, and debiasing methods. Finally, we discuss and\nprovide insight on the challenges and future directions for the development of\nfairness in LLMs.",
+ "authors": "Yingji Li, Mengnan Du, Rui Song, Xin Wang, Ying Wang",
+ "published": "2023-08-20",
+ "updated": "2024-02-21",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.04892v2",
+ "title": "Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs",
+ "abstract": "Recent works have showcased the ability of LLMs to embody diverse personas in\ntheir responses, exemplified by prompts like 'You are Yoda. Explain the Theory\nof Relativity.' While this ability allows personalization of LLMs and enables\nhuman behavior simulation, its effect on LLMs' capabilities remains unclear. To\nfill this gap, we present the first extensive study of the unintended\nside-effects of persona assignment on the ability of LLMs to perform basic\nreasoning tasks. Our study covers 24 reasoning datasets, 4 LLMs, and 19 diverse\npersonas (e.g. an Asian person) spanning 5 socio-demographic groups. Our\nexperiments unveil that LLMs harbor deep rooted bias against various\nsocio-demographics underneath a veneer of fairness. While they overtly reject\nstereotypes when explicitly asked ('Are Black people less skilled at\nmathematics?'), they manifest stereotypical and erroneous presumptions when\nasked to answer questions while adopting a persona. These can be observed as\nabstentions in responses, e.g., 'As a Black person, I can't answer this\nquestion as it requires math knowledge', and generally result in a substantial\nperformance drop. Our experiments with ChatGPT-3.5 show that this bias is\nubiquitous - 80% of our personas demonstrate bias; it is significant - some\ndatasets show performance drops of 70%+; and can be especially harmful for\ncertain groups - some personas suffer statistically significant drops on 80%+\nof the datasets. Overall, all 4 LLMs exhibit this bias to varying extents, with\nGPT-4-Turbo showing the least but still a problematic amount of bias (evident\nin 42% of the personas). Further analysis shows that these persona-induced\nerrors can be hard-to-discern and hard-to-avoid. Our findings serve as a\ncautionary tale that the practice of assigning personas to LLMs - a trend on\nthe rise - can surface their deep-rooted biases and have unforeseeable and\ndetrimental side-effects.",
+ "authors": "Shashank Gupta, Vaishnavi Shrivastava, Ameet Deshpande, Ashwin Kalyan, Peter Clark, Ashish Sabharwal, Tushar Khot",
+ "published": "2023-11-08",
+ "updated": "2024-01-27",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.17916v2",
+ "title": "LLM-Resistant Math Word Problem Generation via Adversarial Attacks",
+ "abstract": "Large language models (LLMs) have significantly transformed the educational\nlandscape. As current plagiarism detection tools struggle to keep pace with\nLLMs' rapid advancements, the educational community faces the challenge of\nassessing students' true problem-solving abilities in the presence of LLMs. In\nthis work, we explore a new paradigm for ensuring fair evaluation -- generating\nadversarial examples which preserve the structure and difficulty of the\noriginal questions aimed for assessment, but are unsolvable by LLMs. Focusing\non the domain of math word problems, we leverage abstract syntax trees to\nstructurally generate adversarial examples that cause LLMs to produce incorrect\nanswers by simply editing the numeric values in the problems. We conduct\nexperiments on various open- and closed-source LLMs, quantitatively and\nqualitatively demonstrating that our method significantly degrades their math\nproblem-solving ability. We identify shared vulnerabilities among LLMs and\npropose a cost-effective approach to attack high-cost models. Additionally, we\nconduct automatic analysis on math problems and investigate the cause of\nfailure, offering a nuanced view into model's limitation.",
+ "authors": "Roy Xie, Chengxuan Huang, Junlin Wang, Bhuwan Dhingra",
+ "published": "2024-02-27",
+ "updated": "2024-03-30",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.02839v1",
+ "title": "An Empirical Study of LLM-as-a-Judge for LLM Evaluation: Fine-tuned Judge Models are Task-specific Classifiers",
+ "abstract": "Recently, there has been a growing trend of utilizing Large Language Model\n(LLM) to evaluate the quality of other LLMs. Many studies have employed\nproprietary close-source models, especially GPT4, as the evaluator.\nAlternatively, other works have fine-tuned judge models based on open-source\nLLMs as the evaluator. In this study, we conduct an empirical study of\ndifferent judge models on their evaluation capability. Our findings indicate\nthat although the fine-tuned judge models achieve high accuracy on in-domain\ntest sets, even surpassing GPT4, they are inherently task-specific classifiers,\nand their generalizability and fairness severely underperform GPT4.",
+ "authors": "Hui Huang, Yingqi Qu, Jing Liu, Muyun Yang, Tiejun Zhao",
+ "published": "2024-03-05",
+ "updated": "2024-03-05",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.03192v1",
+ "title": "Do Large Language Models Rank Fairly? An Empirical Study on the Fairness of LLMs as Rankers",
+ "abstract": "The integration of Large Language Models (LLMs) in information retrieval has\nraised a critical reevaluation of fairness in the text-ranking models. LLMs,\nsuch as GPT models and Llama2, have shown effectiveness in natural language\nunderstanding tasks, and prior works (e.g., RankGPT) have also demonstrated\nthat the LLMs exhibit better performance than the traditional ranking models in\nthe ranking task. However, their fairness remains largely unexplored. This\npaper presents an empirical study evaluating these LLMs using the TREC Fair\nRanking dataset, focusing on the representation of binary protected attributes\nsuch as gender and geographic location, which are historically underrepresented\nin search outcomes. Our analysis delves into how these LLMs handle queries and\ndocuments related to these attributes, aiming to uncover biases in their\nranking algorithms. We assess fairness from both user and content perspectives,\ncontributing an empirical benchmark for evaluating LLMs as the fair ranker.",
+ "authors": "Yuan Wang, Xuyang Wu, Hsin-Tai Wu, Zhiqiang Tao, Yi Fang",
+ "published": "2024-04-04",
+ "updated": "2024-04-04",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR",
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.13840v1",
+ "title": "Whose Side Are You On? Investigating the Political Stance of Large Language Models",
+ "abstract": "Large Language Models (LLMs) have gained significant popularity for their\napplication in various everyday tasks such as text generation, summarization,\nand information retrieval. As the widespread adoption of LLMs continues to\nsurge, it becomes increasingly crucial to ensure that these models yield\nresponses that are politically impartial, with the aim of preventing\ninformation bubbles, upholding fairness in representation, and mitigating\nconfirmation bias. In this paper, we propose a quantitative framework and\npipeline designed to systematically investigate the political orientation of\nLLMs. Our investigation delves into the political alignment of LLMs across a\nspectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.\nAcross topics, the results indicate that LLMs exhibit a tendency to provide\nresponses that closely align with liberal or left-leaning perspectives rather\nthan conservative or right-leaning ones when user queries include details\npertaining to occupation, race, or political affiliation. The findings\npresented in this study not only reaffirm earlier observations regarding the\nleft-leaning characteristics of LLMs but also surface particular attributes,\nsuch as occupation, that are particularly susceptible to such inclinations even\nwhen directly steered towards conservatism. As a recommendation to avoid these\nmodels providing politicised responses, users should be mindful when crafting\nqueries, and exercise caution in selecting neutral prompt language.",
+ "authors": "Pagnarasmey Pit, Xingjun Ma, Mike Conway, Qingyu Chen, James Bailey, Henry Pit, Putrasmey Keo, Watey Diep, Yu-Gang Jiang",
+ "published": "2024-03-15",
+ "updated": "2024-03-15",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.SI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2309.14345v2",
+ "title": "Bias Testing and Mitigation in LLM-based Code Generation",
+ "abstract": "Utilizing state-of-the-art Large Language Models (LLMs), automatic code\ngeneration models play a pivotal role in enhancing the productivity of software\ndevelopment procedures. As the adoption of LLMs becomes more widespread in\nsoftware coding ecosystems, a pressing issue has emerged: does the generated\ncode contain social bias and unfairness, such as those related to age, gender,\nand race? This issue concerns the integrity, fairness, and ethical foundation\nof software applications that depend on the code generated by these models, yet\nis under-explored in the literature. This paper presents a novel bias testing\nframework that is specifically designed for code generation tasks. Based on\nthis framework, we conduct an extensive evaluation of the bias in code\ngenerated by five state-of-the-art LLMs. Our findings reveal that 20.29% to\n44.93% code functions generated by the models under study are biased when\nhandling bias sensitive tasks (i.e., tasks that involve sensitive attributes\nsuch as age and gender). This indicates that the existing LLMs can be unfair in\ncode generation, posing risks of unintended and harmful software behaviors. To\nmitigate bias for code generation models, we evaluate five bias mitigation\nprompt strategies, i.e., utilizing bias testing results to refine the code\n(zero-shot), one-, few-shot, and two Chain-of-Thought (CoT) prompts. Our\nevaluation results illustrate that these strategies are all effective in\nmitigating bias. Overall, one-shot and few-shot learning are the two most\neffective. For GPT-4, 80% to 90% code bias can be removed with one-shot\nlearning.",
+ "authors": "Dong Huang, Qingwen Bu, Jie Zhang, Xiaofei Xie, Junjie Chen, Heming Cui",
+ "published": "2023-09-03",
+ "updated": "2024-01-09",
+ "primary_cat": "cs.SE",
+ "cats": [
+ "cs.SE",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2401.15585v1",
+ "title": "Evaluating Gender Bias in Large Language Models via Chain-of-Thought Prompting",
+ "abstract": "There exist both scalable tasks, like reading comprehension and\nfact-checking, where model performance improves with model size, and unscalable\ntasks, like arithmetic reasoning and symbolic reasoning, where model\nperformance does not necessarily improve with model size. Large language models\n(LLMs) equipped with Chain-of-Thought (CoT) prompting are able to make accurate\nincremental predictions even on unscalable tasks. Unfortunately, despite their\nexceptional reasoning abilities, LLMs tend to internalize and reproduce\ndiscriminatory societal biases. Whether CoT can provide discriminatory or\negalitarian rationalizations for the implicit information in unscalable tasks\nremains an open question.\n In this study, we examine the impact of LLMs' step-by-step predictions on\ngender bias in unscalable tasks. For this purpose, we construct a benchmark for\nan unscalable task where the LLM is given a list of words comprising feminine,\nmasculine, and gendered occupational words, and is required to count the number\nof feminine and masculine words. In our CoT prompts, we require the LLM to\nexplicitly indicate whether each word in the word list is a feminine or\nmasculine before making the final predictions. With counting and handling the\nmeaning of words, this benchmark has characteristics of both arithmetic\nreasoning and symbolic reasoning. Experimental results in English show that\nwithout step-by-step prediction, most LLMs make socially biased predictions,\ndespite the task being as simple as counting words. Interestingly, CoT\nprompting reduces this unconscious social bias in LLMs and encourages fair\npredictions.",
+ "authors": "Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki, Timothy Baldwin",
+ "published": "2024-01-28",
+ "updated": "2024-01-28",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.11595v3",
+ "title": "Examining Inter-Consistency of Large Language Models Collaboration: An In-depth Analysis via Debate",
+ "abstract": "Large Language Models (LLMs) have shown impressive capabilities in various\napplications, but they still face various inconsistency issues. Existing works\nprimarily focus on the inconsistency issues within a single LLM, while we\ncomplementarily explore the inter-consistency among multiple LLMs for\ncollaboration. To examine whether LLMs can collaborate effectively to achieve a\nconsensus for a shared goal, we focus on commonsense reasoning, and introduce a\nformal debate framework (FORD) to conduct a three-stage debate among LLMs with\nreal-world scenarios alignment: fair debate, mismatched debate, and roundtable\ndebate. Through extensive experiments on various datasets, LLMs can effectively\ncollaborate to reach a consensus despite noticeable inter-inconsistencies, but\nimbalances in their abilities can lead to domination by superior LLMs.\nLeveraging a more advanced LLM like GPT-4 as an authoritative judge can boost\ncollaboration performance. Our work contributes to understanding the\ninter-consistency among LLMs and lays the foundation for developing future\ncollaboration methods. Codes and data are available at\nhttps://github.com/Waste-Wood/FORD",
+ "authors": "Kai Xiong, Xiao Ding, Yixin Cao, Ting Liu, Bing Qin",
+ "published": "2023-05-19",
+ "updated": "2023-10-18",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.09447v2",
+ "title": "How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities",
+ "abstract": "The rapid progress in open-source Large Language Models (LLMs) is\nsignificantly driving AI development forward. However, there is still a limited\nunderstanding of their trustworthiness. Deploying these models at scale without\nsufficient trustworthiness can pose significant risks, highlighting the need to\nuncover these issues promptly. In this work, we conduct an adversarial\nassessment of open-source LLMs on trustworthiness, scrutinizing them across\neight different aspects including toxicity, stereotypes, ethics, hallucination,\nfairness, sycophancy, privacy, and robustness against adversarial\ndemonstrations. We propose advCoU, an extended Chain of Utterances-based (CoU)\nprompting strategy by incorporating carefully crafted malicious demonstrations\nfor trustworthiness attack. Our extensive experiments encompass recent and\nrepresentative series of open-source LLMs, including Vicuna, MPT, Falcon,\nMistral, and Llama 2. The empirical outcomes underscore the efficacy of our\nattack strategy across diverse aspects. More interestingly, our result analysis\nreveals that models with superior performance in general NLP tasks do not\nalways have greater trustworthiness; in fact, larger models can be more\nvulnerable to attacks. Additionally, models that have undergone instruction\ntuning, focusing on instruction following, tend to be more susceptible,\nalthough fine-tuning LLMs for safety alignment proves effective in mitigating\nadversarial trustworthiness attacks.",
+ "authors": "Lingbo Mo, Boshi Wang, Muhao Chen, Huan Sun",
+ "published": "2023-11-15",
+ "updated": "2024-04-02",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2312.15398v1",
+ "title": "Fairness-Aware Structured Pruning in Transformers",
+ "abstract": "The increasing size of large language models (LLMs) has introduced challenges\nin their training and inference. Removing model components is perceived as a\nsolution to tackle the large model sizes, however, existing pruning methods\nsolely focus on performance, without considering an essential aspect for the\nresponsible use of LLMs: model fairness. It is crucial to address the fairness\nof LLMs towards diverse groups, such as women, Black people, LGBTQ+, Jewish\ncommunities, among others, as they are being deployed and available to a wide\naudience. In this work, first, we investigate how attention heads impact\nfairness and performance in pre-trained transformer-based language models. We\nthen propose a novel method to prune the attention heads that negatively impact\nfairness while retaining the heads critical for performance, i.e. language\nmodeling capabilities. Our approach is practical in terms of time and\nresources, as it does not require fine-tuning the final pruned, and fairer,\nmodel. Our findings demonstrate a reduction in gender bias by 19%, 19.5%,\n39.5%, 34.7%, 23%, and 8% for DistilGPT-2, GPT-2, GPT-Neo of two different\nsizes, GPT-J, and Llama 2 models, respectively, in comparison to the biased\nmodel, with only a slight decrease in performance.",
+ "authors": "Abdelrahman Zayed, Goncalo Mordido, Samira Shabanian, Ioana Baldini, Sarath Chandar",
+ "published": "2023-12-24",
+ "updated": "2023-12-24",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.CY",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.01964v1",
+ "title": "Don't Make Your LLM an Evaluation Benchmark Cheater",
+ "abstract": "Large language models~(LLMs) have greatly advanced the frontiers of\nartificial intelligence, attaining remarkable improvement in model capacity. To\nassess the model performance, a typical approach is to construct evaluation\nbenchmarks for measuring the ability level of LLMs in different aspects.\nDespite that a number of high-quality benchmarks have been released, the\nconcerns about the appropriate use of these benchmarks and the fair comparison\nof different models are increasingly growing. Considering these concerns, in\nthis paper, we discuss the potential risk and impact of inappropriately using\nevaluation benchmarks and misleadingly interpreting the evaluation results.\nSpecially, we focus on a special issue that would lead to inappropriate\nevaluation, \\ie \\emph{benchmark leakage}, referring that the data related to\nevaluation sets is occasionally used for model training. This phenomenon now\nbecomes more common since pre-training data is often prepared ahead of model\ntest. We conduct extensive experiments to study the effect of benchmark\nleverage, and find that it can dramatically boost the evaluation results, which\nwould finally lead to an unreliable assessment of model performance. To improve\nthe use of existing evaluation benchmarks, we finally present several\nguidelines for both LLM developers and benchmark maintainers. We hope this work\ncan draw attention to appropriate training and evaluation of LLMs.",
+ "authors": "Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han",
+ "published": "2023-11-03",
+ "updated": "2023-11-03",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.08780v1",
+ "title": "\"Im not Racist but...\": Discovering Bias in the Internal Knowledge of Large Language Models",
+ "abstract": "Large language models (LLMs) have garnered significant attention for their\nremarkable performance in a continuously expanding set of natural language\nprocessing tasks. However, these models have been shown to harbor inherent\nsocietal biases, or stereotypes, which can adversely affect their performance\nin their many downstream applications. In this paper, we introduce a novel,\npurely prompt-based approach to uncover hidden stereotypes within any arbitrary\nLLM. Our approach dynamically generates a knowledge representation of internal\nstereotypes, enabling the identification of biases encoded within the LLM's\ninternal knowledge. By illuminating the biases present in LLMs and offering a\nsystematic methodology for their analysis, our work contributes to advancing\ntransparency and promoting fairness in natural language processing systems.",
+ "authors": "Abel Salinas, Louis Penafiel, Robert McCormack, Fred Morstatter",
+ "published": "2023-10-13",
+ "updated": "2023-10-13",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2401.01262v2",
+ "title": "Fairness Certification for Natural Language Processing and Large Language Models",
+ "abstract": "Natural Language Processing (NLP) plays an important role in our daily lives,\nparticularly due to the enormous progress of Large Language Models (LLM).\nHowever, NLP has many fairness-critical use cases, e.g., as an expert system in\nrecruitment or as an LLM-based tutor in education. Since NLP is based on human\nlanguage, potentially harmful biases can diffuse into NLP systems and produce\nunfair results, discriminate against minorities or generate legal issues.\nHence, it is important to develop a fairness certification for NLP approaches.\nWe follow a qualitative research approach towards a fairness certification for\nNLP. In particular, we have reviewed a large body of literature on algorithmic\nfairness, and we have conducted semi-structured expert interviews with a wide\nrange of experts from that area. We have systematically devised six fairness\ncriteria for NLP, which can be further refined into 18 sub-categories. Our\ncriteria offer a foundation for operationalizing and testing processes to\ncertify fairness, both from the perspective of the auditor and the audited\norganization.",
+ "authors": "Vincent Freiberger, Erik Buchmann",
+ "published": "2024-01-02",
+ "updated": "2024-01-03",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.CY",
+ "cs.LG",
+ "68T50",
+ "I.2.7"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2304.03728v1",
+ "title": "Interpretable Unified Language Checking",
+ "abstract": "Despite recent concerns about undesirable behaviors generated by large\nlanguage models (LLMs), including non-factual, biased, and hateful language, we\nfind LLMs are inherent multi-task language checkers based on their latent\nrepresentations of natural and social knowledge. We present an interpretable,\nunified, language checking (UniLC) method for both human and machine-generated\nlanguage that aims to check if language input is factual and fair. While\nfairness and fact-checking tasks have been handled separately with dedicated\nmodels, we find that LLMs can achieve high performance on a combination of\nfact-checking, stereotype detection, and hate speech detection tasks with a\nsimple, few-shot, unified set of prompts. With the ``1/2-shot'' multi-task\nlanguage checking method proposed in this work, the GPT3.5-turbo model\noutperforms fully supervised baselines on several language tasks. The simple\napproach and results suggest that based on strong latent knowledge\nrepresentations, an LLM can be an adaptive and explainable tool for detecting\nmisinformation, stereotypes, and hate speech.",
+ "authors": "Tianhua Zhang, Hongyin Luo, Yung-Sung Chuang, Wei Fang, Luc Gaitskell, Thomas Hartvigsen, Xixin Wu, Danny Fox, Helen Meng, James Glass",
+ "published": "2023-04-07",
+ "updated": "2023-04-07",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.13925v1",
+ "title": "MARIO Eval: Evaluate Your Math LLM with your Math LLM--A mathematical dataset evaluation toolkit",
+ "abstract": "Large language models (LLMs) have been explored in a variety of reasoning\ntasks including solving of mathematical problems. Each math dataset typically\nincludes its own specially designed evaluation script, which, while suitable\nfor its intended use, lacks generalizability across different datasets.\nConsequently, updates and adaptations to these evaluation tools tend to occur\nwithout being systematically reported, leading to inconsistencies and obstacles\nto fair comparison across studies. To bridge this gap, we introduce a\ncomprehensive mathematical evaluation toolkit that not only utilizes a python\ncomputer algebra system (CAS) for its numerical accuracy, but also integrates\nan optional LLM, known for its considerable natural language processing\ncapabilities. To validate the effectiveness of our toolkit, we manually\nannotated two distinct datasets. Our experiments demonstrate that the toolkit\nyields more robust evaluation results compared to prior works, even without an\nLLM. Furthermore, when an LLM is incorporated, there is a notable enhancement.\nThe code for our method will be made available at\n\\url{https://github.com/MARIO-Math-Reasoning/math_evaluation}.",
+ "authors": "Boning Zhang, Chengxi Li, Kai Fan",
+ "published": "2024-04-22",
+ "updated": "2024-04-22",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.15007v1",
+ "title": "Did the Neurons Read your Book? Document-level Membership Inference for Large Language Models",
+ "abstract": "With large language models (LLMs) poised to become embedded in our daily\nlives, questions are starting to be raised about the dataset(s) they learned\nfrom. These questions range from potential bias or misinformation LLMs could\nretain from their training data to questions of copyright and fair use of\nhuman-generated text. However, while these questions emerge, developers of the\nrecent state-of-the-art LLMs become increasingly reluctant to disclose details\non their training corpus. We here introduce the task of document-level\nmembership inference for real-world LLMs, i.e. inferring whether the LLM has\nseen a given document during training or not. First, we propose a procedure for\nthe development and evaluation of document-level membership inference for LLMs\nby leveraging commonly used data sources for training and the model release\ndate. We then propose a practical, black-box method to predict document-level\nmembership and instantiate it on OpenLLaMA-7B with both books and academic\npapers. We show our methodology to perform very well, reaching an impressive\nAUC of 0.856 for books and 0.678 for papers. We then show our approach to\noutperform the sentence-level membership inference attacks used in the privacy\nliterature for the document-level membership task. We finally evaluate whether\nsmaller models might be less sensitive to document-level inference and show\nOpenLLaMA-3B to be approximately as sensitive as OpenLLaMA-7B to our approach.\nTaken together, our results show that accurate document-level membership can be\ninferred for LLMs, increasing the transparency of technology poised to change\nour lives.",
+ "authors": "Matthieu Meeus, Shubham Jain, Marek Rei, Yves-Alexandre de Montjoye",
+ "published": "2023-10-23",
+ "updated": "2023-10-23",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.CR",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2308.10397v2",
+ "title": "FairMonitor: A Four-Stage Automatic Framework for Detecting Stereotypes and Biases in Large Language Models",
+ "abstract": "Detecting stereotypes and biases in Large Language Models (LLMs) can enhance\nfairness and reduce adverse impacts on individuals or groups when these LLMs\nare applied. However, the majority of existing methods focus on measuring the\nmodel's preference towards sentences containing biases and stereotypes within\ndatasets, which lacks interpretability and cannot detect implicit biases and\nstereotypes in the real world. To address this gap, this paper introduces a\nfour-stage framework to directly evaluate stereotypes and biases in the\ngenerated content of LLMs, including direct inquiry testing, serial or adapted\nstory testing, implicit association testing, and unknown situation testing.\nAdditionally, the paper proposes multi-dimensional evaluation metrics and\nexplainable zero-shot prompts for automated evaluation. Using the education\nsector as a case study, we constructed the Edu-FairMonitor based on the\nfour-stage framework, which encompasses 12,632 open-ended questions covering\nnine sensitive factors and 26 educational scenarios. Experimental results\nreveal varying degrees of stereotypes and biases in five LLMs evaluated on\nEdu-FairMonitor. Moreover, the results of our proposed automated evaluation\nmethod have shown a high correlation with human annotations.",
+ "authors": "Yanhong Bai, Jiabao Zhao, Jinxin Shi, Tingjiang Wei, Xingjiao Wu, Liang He",
+ "published": "2023-08-21",
+ "updated": "2023-10-27",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2307.15997v1",
+ "title": "RoCar: A Relationship Network-based Evaluation Method to Large Language Models",
+ "abstract": "Large language models (LLMs) have received increasing attention. However, due\nto the complexity of its capabilities, how to rationally evaluate the\ncapabilities of LLMs is still a task to be solved. We propose the RoCar method,\nwhich utilizes the defined basic schemas to randomly construct a task graph and\ngenerates natural language evaluation tasks based on the task graph to evaluate\nthe reasoning and memory abilities of LLMs respectively. Due to the very large\nrandomness of the task construction process, it is possible to ensure that none\nof the LLMs to be tested has directly learned the evaluation tasks,\nguaranteeing the fairness of the evaluation method.",
+ "authors": "Ming Wang, Wenfang Wu, Chongyun Gao, Daling Wang, Shi Feng, Yifei Zhang",
+ "published": "2023-07-29",
+ "updated": "2023-07-29",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.12090v1",
+ "title": "UP5: Unbiased Foundation Model for Fairness-aware Recommendation",
+ "abstract": "Recent advancements in foundation models such as large language models (LLM)\nhave propelled them to the forefront of recommender systems (RS). Moreover,\nfairness in RS is critical since many users apply it for decision-making and\ndemand fulfillment. However, at present, there is a lack of understanding\nregarding the level of fairness exhibited by recommendation foundation models\nand the appropriate methods for equitably treating different groups of users in\nfoundation models. In this paper, we focus on user-side unfairness problem and\nshow through a thorough examination that there is unfairness involved in LLMs\nthat lead to unfair recommendation results. To eliminate bias from LLM for\nfairness-aware recommendation, we introduce a novel Unbiased P5 (UP5)\nfoundation model based on Counterfactually-Fair-Prompting (CFP) techniques. CFP\nincludes two sub-modules: a personalized prefix prompt that enhances fairness\nwith respect to individual sensitive attributes, and a Prompt Mixture that\nintegrates multiple counterfactually-fair prompts for a set of sensitive\nattributes. Experiments are conducted on two real-world datasets, MovieLens-1M\nand Insurance, and results are compared with both matching-based and\nsequential-based fairness-aware recommendation models. The results show that\nUP5 achieves better recommendation performance and meanwhile exhibits a high\nlevel of fairness.",
+ "authors": "Wenyue Hua, Yingqiang Ge, Shuyuan Xu, Jianchao Ji, Yongfeng Zhang",
+ "published": "2023-05-20",
+ "updated": "2023-05-20",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR",
+ "cs.AI",
+ "cs.CL",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2309.09397v1",
+ "title": "Do Large GPT Models Discover Moral Dimensions in Language Representations? A Topological Study Of Sentence Embeddings",
+ "abstract": "As Large Language Models are deployed within Artificial Intelligence systems,\nthat are increasingly integrated with human society, it becomes more important\nthan ever to study their internal structures. Higher level abilities of LLMs\nsuch as GPT-3.5 emerge in large part due to informative language\nrepresentations they induce from raw text data during pre-training on trillions\nof words. These embeddings exist in vector spaces of several thousand\ndimensions, and their processing involves mapping between multiple vector\nspaces, with total number of parameters on the order of trillions. Furthermore,\nthese language representations are induced by gradient optimization, resulting\nin a black box system that is hard to interpret. In this paper, we take a look\nat the topological structure of neuronal activity in the \"brain\" of Chat-GPT's\nfoundation language model, and analyze it with respect to a metric representing\nthe notion of fairness. We develop a novel approach to visualize GPT's moral\ndimensions. We first compute a fairness metric, inspired by social psychology\nliterature, to identify factors that typically influence fairness assessments\nin humans, such as legitimacy, need, and responsibility. Subsequently, we\nsummarize the manifold's shape using a lower-dimensional simplicial complex,\nwhose topology is derived from this metric. We color it with a heat map\nassociated with this fairness metric, producing human-readable visualizations\nof the high-dimensional sentence manifold. Our results show that sentence\nembeddings based on GPT-3.5 can be decomposed into two submanifolds\ncorresponding to fair and unfair moral judgments. This indicates that GPT-based\nlanguage models develop a moral dimension within their representation spaces\nand induce an understanding of fairness during their training process.",
+ "authors": "Stephen Fitz",
+ "published": "2023-09-17",
+ "updated": "2023-09-17",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.CY",
+ "cs.LG",
+ "cs.NE"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.06500v1",
+ "title": "MetaAgents: Simulating Interactions of Human Behaviors for LLM-based Task-oriented Coordination via Collaborative Generative Agents",
+ "abstract": "Significant advancements have occurred in the application of Large Language\nModels (LLMs) for various tasks and social simulations. Despite this, their\ncapacities to coordinate within task-oriented social contexts are\nunder-explored. Such capabilities are crucial if LLMs are to effectively mimic\nhuman-like social behavior and produce meaningful results. To bridge this gap,\nwe introduce collaborative generative agents, endowing LLM-based Agents with\nconsistent behavior patterns and task-solving abilities. We situate these\nagents in a simulated job fair environment as a case study to scrutinize their\ncoordination skills. We propose a novel framework that equips collaborative\ngenerative agents with human-like reasoning abilities and specialized skills.\nOur evaluation demonstrates that these agents show promising performance.\nHowever, we also uncover limitations that hinder their effectiveness in more\ncomplex coordination tasks. Our work provides valuable insights into the role\nand evolution of LLMs in task-oriented social simulations.",
+ "authors": "Yuan Li, Yixuan Zhang, Lichao Sun",
+ "published": "2023-10-10",
+ "updated": "2023-10-10",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2307.11761v1",
+ "title": "Fairness of ChatGPT and the Role Of Explainable-Guided Prompts",
+ "abstract": "Our research investigates the potential of Large-scale Language Models\n(LLMs), specifically OpenAI's GPT, in credit risk assessment-a binary\nclassification task. Our findings suggest that LLMs, when directed by\njudiciously designed prompts and supplemented with domain-specific knowledge,\ncan parallel the performance of traditional Machine Learning (ML) models.\nIntriguingly, they achieve this with significantly less data-40 times less,\nutilizing merely 20 data points compared to the ML's 800. LLMs particularly\nexcel in minimizing false positives and enhancing fairness, both being vital\naspects of risk analysis. While our results did not surpass those of classical\nML models, they underscore the potential of LLMs in analogous tasks, laying a\ngroundwork for future explorations into harnessing the capabilities of LLMs in\ndiverse ML tasks.",
+ "authors": "Yashar Deldjoo",
+ "published": "2023-07-14",
+ "updated": "2023-07-14",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.18569v1",
+ "title": "Fairness of ChatGPT",
+ "abstract": "Understanding and addressing unfairness in LLMs are crucial for responsible\nAI deployment. However, there is a limited availability of quantitative\nanalyses and in-depth studies regarding fairness evaluations in LLMs,\nespecially when applying LLMs to high-stakes fields. This work aims to fill\nthis gap by providing a systematic evaluation of the effectiveness and fairness\nof LLMs using ChatGPT as a study case. We focus on assessing ChatGPT's\nperformance in high-takes fields including education, criminology, finance and\nhealthcare. To make thorough evaluation, we consider both group fairness and\nindividual fairness and we also observe the disparities in ChatGPT's outputs\nunder a set of biased or unbiased prompts. This work contributes to a deeper\nunderstanding of LLMs' fairness performance, facilitates bias mitigation and\nfosters the development of responsible artificial intelligence systems.",
+ "authors": "Yunqi Li, Yongfeng Zhang",
+ "published": "2023-05-22",
+ "updated": "2023-05-22",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.AI",
+ "cs.CL",
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.14473v1",
+ "title": "The Ethics of ChatGPT in Medicine and Healthcare: A Systematic Review on Large Language Models (LLMs)",
+ "abstract": "With the introduction of ChatGPT, Large Language Models (LLMs) have received\nenormous attention in healthcare. Despite their potential benefits, researchers\nhave underscored various ethical implications. While individual instances have\ndrawn much attention, the debate lacks a systematic overview of practical\napplications currently researched and ethical issues connected to them. Against\nthis background, this work aims to map the ethical landscape surrounding the\ncurrent stage of deployment of LLMs in medicine and healthcare. Electronic\ndatabases and preprint servers were queried using a comprehensive search\nstrategy. Studies were screened and extracted following a modified rapid review\napproach. Methodological quality was assessed using a hybrid approach. For 53\nrecords, a meta-aggregative synthesis was performed. Four fields of\napplications emerged and testify to a vivid exploration phase. Advantages of\nusing LLMs are attributed to their capacity in data analysis, personalized\ninformation provisioning, support in decision-making, mitigating information\nloss and enhancing information accessibility. However, we also identifies\nrecurrent ethical concerns connected to fairness, bias, non-maleficence,\ntransparency, and privacy. A distinctive concern is the tendency to produce\nharmful misinformation or convincingly but inaccurate content. A recurrent plea\nfor ethical guidance and human oversight is evident. Given the variety of use\ncases, it is suggested that the ethical guidance debate be reframed to focus on\ndefining what constitutes acceptable human oversight across the spectrum of\napplications. This involves considering diverse settings, varying potentials\nfor harm, and different acceptable thresholds for performance and certainty in\nhealthcare. In addition, a critical inquiry is necessary to determine the\nextent to which the current experimental use of LLMs is necessary and\njustified.",
+ "authors": "Joschka Haltaufderheide, Robert Ranisch",
+ "published": "2024-03-21",
+ "updated": "2024-03-21",
+ "primary_cat": "cs.CY",
+ "cats": [
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.15215v1",
+ "title": "Item-side Fairness of Large Language Model-based Recommendation System",
+ "abstract": "Recommendation systems for Web content distribution intricately connect to\nthe information access and exposure opportunities for vulnerable populations.\nThe emergence of Large Language Models-based Recommendation System (LRS) may\nintroduce additional societal challenges to recommendation systems due to the\ninherent biases in Large Language Models (LLMs). From the perspective of\nitem-side fairness, there remains a lack of comprehensive investigation into\nthe item-side fairness of LRS given the unique characteristics of LRS compared\nto conventional recommendation systems. To bridge this gap, this study examines\nthe property of LRS with respect to item-side fairness and reveals the\ninfluencing factors of both historical users' interactions and inherent\nsemantic biases of LLMs, shedding light on the need to extend conventional\nitem-side fairness methods for LRS. Towards this goal, we develop a concise and\neffective framework called IFairLRS to enhance the item-side fairness of an\nLRS. IFairLRS covers the main stages of building an LRS with specifically\nadapted strategies to calibrate the recommendations of LRS. We utilize IFairLRS\nto fine-tune LLaMA, a representative LLM, on \\textit{MovieLens} and\n\\textit{Steam} datasets, and observe significant item-side fairness\nimprovements. The code can be found in\nhttps://github.com/JiangM-C/IFairLRS.git.",
+ "authors": "Meng Jiang, Keqin Bao, Jizhi Zhang, Wenjie Wang, Zhengyi Yang, Fuli Feng, Xiangnan He",
+ "published": "2024-02-23",
+ "updated": "2024-02-23",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.13862v2",
+ "title": "A Trip Towards Fairness: Bias and De-Biasing in Large Language Models",
+ "abstract": "Cheap-to-Build Very Large-Language Models (CtB-LLMs) with affordable training\nare emerging as the next big revolution in natural language processing and\nunderstanding. These CtB-LLMs are democratizing access to trainable Very\nLarge-Language Models (VLLMs) and, thus, may represent the building blocks of\nmany NLP systems solving downstream tasks. Hence, a little or a large bias in\nCtB-LLMs may cause huge harm. In this paper, we performed a large investigation\nof the bias of three families of CtB-LLMs, and we showed that debiasing\ntechniques are effective and usable. Indeed, according to current tests, the\nLLaMA and the OPT families have an important bias in gender, race, religion,\nand profession. In contrast to the analysis for other LLMs, we discovered that\nbias depends not on the number of parameters but on the perplexity. Finally,\nthe debiasing of OPT using LoRA reduces bias up to 4.12 points in the\nnormalized stereotype score.",
+ "authors": "Leonardo Ranaldi, Elena Sofia Ruzzetti, Davide Venditti, Dario Onorati, Fabio Massimo Zanzotto",
+ "published": "2023-05-23",
+ "updated": "2023-08-29",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.02650v1",
+ "title": "Towards detecting unanticipated bias in Large Language Models",
+ "abstract": "Over the last year, Large Language Models (LLMs) like ChatGPT have become\nwidely available and have exhibited fairness issues similar to those in\nprevious machine learning systems. Current research is primarily focused on\nanalyzing and quantifying these biases in training data and their impact on the\ndecisions of these models, alongside developing mitigation strategies. This\nresearch largely targets well-known biases related to gender, race, ethnicity,\nand language. However, it is clear that LLMs are also affected by other, less\nobvious implicit biases. The complex and often opaque nature of these models\nmakes detecting such biases challenging, yet this is crucial due to their\npotential negative impact in various applications. In this paper, we explore\nnew avenues for detecting these unanticipated biases in LLMs, focusing\nspecifically on Uncertainty Quantification and Explainable AI methods. These\napproaches aim to assess the certainty of model decisions and to make the\ninternal decision-making processes of LLMs more transparent, thereby\nidentifying and understanding biases that are not immediately apparent. Through\nthis research, we aim to contribute to the development of fairer and more\ntransparent AI systems.",
+ "authors": "Anna Kruspe",
+ "published": "2024-04-03",
+ "updated": "2024-04-03",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.AI",
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.05694v1",
+ "title": "A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics",
+ "abstract": "The utilization of large language models (LLMs) in the Healthcare domain has\ngenerated both excitement and concern due to their ability to effectively\nrespond to freetext queries with certain professional knowledge. This survey\noutlines the capabilities of the currently developed LLMs for Healthcare and\nexplicates their development process, with the aim of providing an overview of\nthe development roadmap from traditional Pretrained Language Models (PLMs) to\nLLMs. Specifically, we first explore the potential of LLMs to enhance the\nefficiency and effectiveness of various Healthcare applications highlighting\nboth the strengths and limitations. Secondly, we conduct a comparison between\nthe previous PLMs and the latest LLMs, as well as comparing various LLMs with\neach other. Then we summarize related Healthcare training data, training\nmethods, optimization strategies, and usage. Finally, the unique concerns\nassociated with deploying LLMs in Healthcare settings are investigated,\nparticularly regarding fairness, accountability, transparency and ethics. Our\nsurvey provide a comprehensive investigation from perspectives of both computer\nscience and Healthcare specialty. Besides the discussion about Healthcare\nconcerns, we supports the computer science community by compiling a collection\nof open source resources, such as accessible datasets, the latest\nmethodologies, code implementations, and evaluation benchmarks in the Github.\nSummarily, we contend that a significant paradigm shift is underway,\ntransitioning from PLMs to LLMs. This shift encompasses a move from\ndiscriminative AI approaches to generative AI approaches, as well as a shift\nfrom model-centered methodologies to datacentered methodologies.",
+ "authors": "Kai He, Rui Mao, Qika Lin, Yucheng Ruan, Xiang Lan, Mengling Feng, Erik Cambria",
+ "published": "2023-10-09",
+ "updated": "2023-10-09",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.09606v1",
+ "title": "Large Language Models and Causal Inference in Collaboration: A Comprehensive Survey",
+ "abstract": "Causal inference has shown potential in enhancing the predictive accuracy,\nfairness, robustness, and explainability of Natural Language Processing (NLP)\nmodels by capturing causal relationships among variables. The emergence of\ngenerative Large Language Models (LLMs) has significantly impacted various NLP\ndomains, particularly through their advanced reasoning capabilities. This\nsurvey focuses on evaluating and improving LLMs from a causal view in the\nfollowing areas: understanding and improving the LLMs' reasoning capacity,\naddressing fairness and safety issues in LLMs, complementing LLMs with\nexplanations, and handling multimodality. Meanwhile, LLMs' strong reasoning\ncapacities can in turn contribute to the field of causal inference by aiding\ncausal relationship discovery and causal effect estimations. This review\nexplores the interplay between causal inference frameworks and LLMs from both\nperspectives, emphasizing their collective potential to further the development\nof more advanced and equitable artificial intelligence systems.",
+ "authors": "Xiaoyu Liu, Paiheng Xu, Junda Wu, Jiaxin Yuan, Yifan Yang, Yuhang Zhou, Fuxiao Liu, Tianrui Guan, Haoliang Wang, Tong Yu, Julian McAuley, Wei Ai, Furong Huang",
+ "published": "2024-03-14",
+ "updated": "2024-03-14",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.05668v1",
+ "title": "CFaiRLLM: Consumer Fairness Evaluation in Large-Language Model Recommender System",
+ "abstract": "In the evolving landscape of recommender systems, the integration of Large\nLanguage Models (LLMs) such as ChatGPT marks a new era, introducing the concept\nof Recommendation via LLM (RecLLM). While these advancements promise\nunprecedented personalization and efficiency, they also bring to the fore\ncritical concerns regarding fairness, particularly in how recommendations might\ninadvertently perpetuate or amplify biases associated with sensitive user\nattributes. In order to address these concerns, our study introduces a\ncomprehensive evaluation framework, CFaiRLLM, aimed at evaluating (and thereby\nmitigating) biases on the consumer side within RecLLMs.\n Our research methodically assesses the fairness of RecLLMs by examining how\nrecommendations might vary with the inclusion of sensitive attributes such as\ngender, age, and their intersections, through both similarity alignment and\ntrue preference alignment. By analyzing recommendations generated under\ndifferent conditions-including the use of sensitive attributes in user\nprompts-our framework identifies potential biases in the recommendations\nprovided. A key part of our study involves exploring how different detailed\nstrategies for constructing user profiles (random, top-rated, recent) impact\nthe alignment between recommendations made without consideration of sensitive\nattributes and those that are sensitive-attribute-aware, highlighting the bias\nmechanisms within RecLLMs.\n The findings in our study highlight notable disparities in the fairness of\nrecommendations, particularly when sensitive attributes are integrated into the\nrecommendation process, either individually or in combination. The analysis\ndemonstrates that the choice of user profile sampling strategy plays a\nsignificant role in affecting fairness outcomes, highlighting the complexity of\nachieving fair recommendations in the era of LLMs.",
+ "authors": "Yashar Deldjoo, Tommaso di Noia",
+ "published": "2024-03-08",
+ "updated": "2024-03-08",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.07884v2",
+ "title": "Fair Abstractive Summarization of Diverse Perspectives",
+ "abstract": "People from different social and demographic groups express diverse\nperspectives and conflicting opinions on a broad set of topics such as product\nreviews, healthcare, law, and politics. A fair summary should provide a\ncomprehensive coverage of diverse perspectives without underrepresenting\ncertain groups. However, current work in summarization metrics and Large\nLanguage Models (LLMs) evaluation has not explored fair abstractive\nsummarization. In this paper, we systematically investigate fair abstractive\nsummarization for user-generated data. We first formally define fairness in\nabstractive summarization as not underrepresenting perspectives of any groups\nof people, and we propose four reference-free automatic metrics by measuring\nthe differences between target and source perspectives. We evaluate nine LLMs,\nincluding three GPT models, four LLaMA models, PaLM 2, and Claude, on six\ndatasets collected from social media, online reviews, and recorded transcripts.\nExperiments show that both the model-generated and the human-written reference\nsummaries suffer from low fairness. We conduct a comprehensive analysis of the\ncommon factors influencing fairness and propose three simple but effective\nmethods to alleviate unfair summarization. Our dataset and code are available\nat https://github.com/psunlpgroup/FairSumm.",
+ "authors": "Yusen Zhang, Nan Zhang, Yixin Liu, Alexander Fabbri, Junru Liu, Ryo Kamoi, Xiaoxin Lu, Caiming Xiong, Jieyu Zhao, Dragomir Radev, Kathleen McKeown, Rui Zhang",
+ "published": "2023-11-14",
+ "updated": "2024-03-30",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2206.13757v1",
+ "title": "Flexible text generation for counterfactual fairness probing",
+ "abstract": "A common approach for testing fairness issues in text-based classifiers is\nthrough the use of counterfactuals: does the classifier output change if a\nsensitive attribute in the input is changed? Existing counterfactual generation\nmethods typically rely on wordlists or templates, producing simple\ncounterfactuals that don't take into account grammar, context, or subtle\nsensitive attribute references, and could miss issues that the wordlist\ncreators had not considered. In this paper, we introduce a task for generating\ncounterfactuals that overcomes these shortcomings, and demonstrate how large\nlanguage models (LLMs) can be leveraged to make progress on this task. We show\nthat this LLM-based method can produce complex counterfactuals that existing\nmethods cannot, comparing the performance of various counterfactual generation\nmethods on the Civil Comments dataset and showing their value in evaluating a\ntoxicity classifier.",
+ "authors": "Zee Fryer, Vera Axelrod, Ben Packer, Alex Beutel, Jilin Chen, Kellie Webster",
+ "published": "2022-06-28",
+ "updated": "2022-06-28",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.18333v3",
+ "title": "She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models",
+ "abstract": "As the use of large language models (LLMs) increases within society, as does\nthe risk of their misuse. Appropriate safeguards must be in place to ensure LLM\noutputs uphold the ethical standards of society, highlighting the positive role\nthat artificial intelligence technologies can have. Recent events indicate\nethical concerns around conventionally trained LLMs, leading to overall unsafe\nuser experiences. This motivates our research question: how do we ensure LLM\nalignment? In this work, we introduce a test suite of unique prompts to foster\nthe development of aligned LLMs that are fair, safe, and robust. We show that\nprompting LLMs at every step of the development pipeline, including data\ncuration, pre-training, and fine-tuning, will result in an overall more\nresponsible model. Our test suite evaluates outputs from four state-of-the-art\nlanguage models: GPT-3.5, GPT-4, OPT, and LLaMA-2. The assessment presented in\nthis paper highlights a gap between societal alignment and the capabilities of\ncurrent LLMs. Additionally, implementing a test suite such as ours lowers the\nenvironmental overhead of making models safe and fair.",
+ "authors": "Veronica Chatrath, Oluwanifemi Bamgbose, Shaina Raza",
+ "published": "2023-10-20",
+ "updated": "2023-12-15",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.07688v1",
+ "title": "CyberMetric: A Benchmark Dataset for Evaluating Large Language Models Knowledge in Cybersecurity",
+ "abstract": "Large Language Models (LLMs) excel across various domains, from computer\nvision to medical diagnostics. However, understanding the diverse landscape of\ncybersecurity, encompassing cryptography, reverse engineering, and managerial\nfacets like risk assessment, presents a challenge, even for human experts. In\nthis paper, we introduce CyberMetric, a benchmark dataset comprising 10,000\nquestions sourced from standards, certifications, research papers, books, and\nother publications in the cybersecurity domain. The questions are created\nthrough a collaborative process, i.e., merging expert knowledge with LLMs,\nincluding GPT-3.5 and Falcon-180B. Human experts spent over 200 hours verifying\ntheir accuracy and relevance. Beyond assessing LLMs' knowledge, the dataset's\nmain goal is to facilitate a fair comparison between humans and different LLMs\nin cybersecurity. To achieve this, we carefully selected 80 questions covering\na wide range of topics within cybersecurity and involved 30 participants of\ndiverse expertise levels, facilitating a comprehensive comparison between human\nand machine intelligence in this area. The findings revealed that LLMs\noutperformed humans in almost every aspect of cybersecurity.",
+ "authors": "Norbert Tihanyi, Mohamed Amine Ferrag, Ridhi Jain, Merouane Debbah",
+ "published": "2024-02-12",
+ "updated": "2024-02-12",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI",
+ "cs.CR"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.13095v1",
+ "title": "Enhancing Logical Reasoning in Large Language Models to Facilitate Legal Applications",
+ "abstract": "Language serves as a vehicle for conveying thought, enabling communication\namong individuals. The ability to distinguish between diverse concepts,\nidentify fairness and injustice, and comprehend a range of legal notions\nfundamentally relies on logical reasoning. Large Language Models (LLMs) attempt\nto emulate human language understanding and generation, but their competency in\nlogical reasoning remains limited. This paper seeks to address the\nphilosophical question: How can we effectively teach logical reasoning to LLMs\nwhile maintaining a deep understanding of the intricate relationship between\nlanguage and logic? By focusing on bolstering LLMs' capabilities in logical\nreasoning, we aim to expand their applicability in law and other\nlogic-intensive disciplines. To this end, we propose a Reinforcement Learning\nfrom Logical Feedback (RLLF) approach, which serves as a potential framework\nfor refining LLMs' reasoning capacities. Through RLLF and a revised evaluation\nmethodology, we explore new avenues for research in this domain and contribute\nto the development of LLMs capable of handling complex legal reasoning tasks\nwhile acknowledging the fundamental connection between language and logic.",
+ "authors": "Ha-Thanh Nguyen, Wachara Fungwacharakorn, Ken Satoh",
+ "published": "2023-11-22",
+ "updated": "2023-11-22",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2312.06056v1",
+ "title": "METAL: Metamorphic Testing Framework for Analyzing Large-Language Model Qualities",
+ "abstract": "Large-Language Models (LLMs) have shifted the paradigm of natural language\ndata processing. However, their black-boxed and probabilistic characteristics\ncan lead to potential risks in the quality of outputs in diverse LLM\napplications. Recent studies have tested Quality Attributes (QAs), such as\nrobustness or fairness, of LLMs by generating adversarial input texts. However,\nexisting studies have limited their coverage of QAs and tasks in LLMs and are\ndifficult to extend. Additionally, these studies have only used one evaluation\nmetric, Attack Success Rate (ASR), to assess the effectiveness of their\napproaches. We propose a MEtamorphic Testing for Analyzing LLMs (METAL)\nframework to address these issues by applying Metamorphic Testing (MT)\ntechniques. This approach facilitates the systematic testing of LLM qualities\nby defining Metamorphic Relations (MRs), which serve as modularized evaluation\nmetrics. The METAL framework can automatically generate hundreds of MRs from\ntemplates that cover various QAs and tasks. In addition, we introduced novel\nmetrics that integrate the ASR method into the semantic qualities of text to\nassess the effectiveness of MRs accurately. Through the experiments conducted\nwith three prominent LLMs, we have confirmed that the METAL framework\neffectively evaluates essential QAs on primary LLM tasks and reveals the\nquality risks in LLMs. Moreover, the newly proposed metrics can guide the\noptimal MRs for testing each task and suggest the most effective method for\ngenerating MRs.",
+ "authors": "Sangwon Hyun, Mingyu Guo, M. Ali Babar",
+ "published": "2023-12-11",
+ "updated": "2023-12-11",
+ "primary_cat": "cs.SE",
+ "cats": [
+ "cs.SE",
+ "cs.AI",
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.00811v1",
+ "title": "Cognitive Bias in High-Stakes Decision-Making with LLMs",
+ "abstract": "Large language models (LLMs) offer significant potential as tools to support\nan expanding range of decision-making tasks. However, given their training on\nhuman (created) data, LLMs can inherit both societal biases against protected\ngroups, as well as be subject to cognitive bias. Such human-like bias can\nimpede fair and explainable decisions made with LLM assistance. Our work\nintroduces BiasBuster, a framework designed to uncover, evaluate, and mitigate\ncognitive bias in LLMs, particularly in high-stakes decision-making tasks.\nInspired by prior research in psychology and cognitive sciences, we develop a\ndataset containing 16,800 prompts to evaluate different cognitive biases (e.g.,\nprompt-induced, sequential, inherent). We test various bias mitigation\nstrategies, amidst proposing a novel method using LLMs to debias their own\nprompts. Our analysis provides a comprehensive picture on the presence and\neffects of cognitive bias across different commercial and open-source models.\nWe demonstrate that our self-help debiasing effectively mitigate cognitive bias\nwithout having to manually craft examples for each bias type.",
+ "authors": "Jessica Echterhoff, Yao Liu, Abeer Alessa, Julian McAuley, Zexue He",
+ "published": "2024-02-25",
+ "updated": "2024-02-25",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI",
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.11406v2",
+ "title": "Don't Go To Extremes: Revealing the Excessive Sensitivity and Calibration Limitations of LLMs in Implicit Hate Speech Detection",
+ "abstract": "The fairness and trustworthiness of Large Language Models (LLMs) are\nreceiving increasing attention. Implicit hate speech, which employs indirect\nlanguage to convey hateful intentions, occupies a significant portion of\npractice. However, the extent to which LLMs effectively address this issue\nremains insufficiently examined. This paper delves into the capability of LLMs\nto detect implicit hate speech (Classification Task) and express confidence in\ntheir responses (Calibration Task). Our evaluation meticulously considers\nvarious prompt patterns and mainstream uncertainty estimation methods. Our\nfindings highlight that LLMs exhibit two extremes: (1) LLMs display excessive\nsensitivity towards groups or topics that may cause fairness issues, resulting\nin misclassifying benign statements as hate speech. (2) LLMs' confidence scores\nfor each method excessively concentrate on a fixed range, remaining unchanged\nregardless of the dataset's complexity. Consequently, the calibration\nperformance is heavily reliant on primary classification accuracy. These\ndiscoveries unveil new limitations of LLMs, underscoring the need for caution\nwhen optimizing models to ensure they do not veer towards extremes. This serves\nas a reminder to carefully consider sensitivity and confidence in the pursuit\nof model fairness.",
+ "authors": "Min Zhang, Jianfeng He, Taoran Ji, Chang-Tien Lu",
+ "published": "2024-02-18",
+ "updated": "2024-02-26",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.11764v1",
+ "title": "ChatGPT Based Data Augmentation for Improved Parameter-Efficient Debiasing of LLMs",
+ "abstract": "Large Language models (LLMs), while powerful, exhibit harmful social biases.\nDebiasing is often challenging due to computational costs, data constraints,\nand potential degradation of multi-task language capabilities. This work\nintroduces a novel approach utilizing ChatGPT to generate synthetic training\ndata, aiming to enhance the debiasing of LLMs. We propose two strategies:\nTargeted Prompting, which provides effective debiasing for known biases but\nnecessitates prior specification of bias in question; and General Prompting,\nwhich, while slightly less effective, offers debiasing across various\ncategories. We leverage resource-efficient LLM debiasing using adapter tuning\nand compare the effectiveness of our synthetic data to existing debiasing\ndatasets. Our results reveal that: (1) ChatGPT can efficiently produce\nhigh-quality training data for debiasing other LLMs; (2) data produced via our\napproach surpasses existing datasets in debiasing performance while also\npreserving internal knowledge of a pre-trained LLM; and (3) synthetic data\nexhibits generalizability across categories, effectively mitigating various\nbiases, including intersectional ones. These findings underscore the potential\nof synthetic data in advancing the fairness of LLMs with minimal retraining\ncost.",
+ "authors": "Pengrui Han, Rafal Kocielnik, Adhithya Saravanan, Roy Jiang, Or Sharir, Anima Anandkumar",
+ "published": "2024-02-19",
+ "updated": "2024-02-19",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.CY",
+ "68T50",
+ "I.2.7; K.4.1"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2312.15478v1",
+ "title": "A Group Fairness Lens for Large Language Models",
+ "abstract": "The rapid advancement of large language models has revolutionized various\napplications but also raised crucial concerns about their potential to\nperpetuate biases and unfairness when deployed in social media contexts.\nEvaluating LLMs' potential biases and fairness has become crucial, as existing\nmethods rely on limited prompts focusing on just a few groups, lacking a\ncomprehensive categorical perspective. In this paper, we propose evaluating LLM\nbiases from a group fairness lens using a novel hierarchical schema\ncharacterizing diverse social groups. Specifically, we construct a dataset,\nGFair, encapsulating target-attribute combinations across multiple dimensions.\nIn addition, we introduce statement organization, a new open-ended text\ngeneration task, to uncover complex biases in LLMs. Extensive evaluations of\npopular LLMs reveal inherent safety concerns. To mitigate the biases of LLM\nfrom a group fairness perspective, we pioneer a novel chain-of-thought method\nGF-Think to mitigate biases of LLMs from a group fairness perspective.\nExperimental results demonstrate its efficacy in mitigating bias in LLMs to\nachieve fairness.",
+ "authors": "Guanqun Bi, Lei Shen, Yuqiang Xie, Yanan Cao, Tiangang Zhu, Xiaodong He",
+ "published": "2023-12-24",
+ "updated": "2023-12-24",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.18130v2",
+ "title": "DELPHI: Data for Evaluating LLMs' Performance in Handling Controversial Issues",
+ "abstract": "Controversy is a reflection of our zeitgeist, and an important aspect to any\ndiscourse. The rise of large language models (LLMs) as conversational systems\nhas increased public reliance on these systems for answers to their various\nquestions. Consequently, it is crucial to systematically examine how these\nmodels respond to questions that pertaining to ongoing debates. However, few\nsuch datasets exist in providing human-annotated labels reflecting the\ncontemporary discussions. To foster research in this area, we propose a novel\nconstruction of a controversial questions dataset, expanding upon the publicly\nreleased Quora Question Pairs Dataset. This dataset presents challenges\nconcerning knowledge recency, safety, fairness, and bias. We evaluate different\nLLMs using a subset of this dataset, illuminating how they handle controversial\nissues and the stances they adopt. This research ultimately contributes to our\nunderstanding of LLMs' interaction with controversial issues, paving the way\nfor improvements in their comprehension and handling of complex societal\ndebates.",
+ "authors": "David Q. Sun, Artem Abzaliev, Hadas Kotek, Zidi Xiu, Christopher Klein, Jason D. Williams",
+ "published": "2023-10-27",
+ "updated": "2023-11-07",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.HC"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2312.14804v1",
+ "title": "Use large language models to promote equity",
+ "abstract": "Advances in large language models (LLMs) have driven an explosion of interest\nabout their societal impacts. Much of the discourse around how they will impact\nsocial equity has been cautionary or negative, focusing on questions like \"how\nmight LLMs be biased and how would we mitigate those biases?\" This is a vital\ndiscussion: the ways in which AI generally, and LLMs specifically, can entrench\nbiases have been well-documented. But equally vital, and much less discussed,\nis the more opportunity-focused counterpoint: \"what promising applications do\nLLMs enable that could promote equity?\" If LLMs are to enable a more equitable\nworld, it is not enough just to play defense against their biases and failure\nmodes. We must also go on offense, applying them positively to equity-enhancing\nuse cases to increase opportunities for underserved groups and reduce societal\ndiscrimination. There are many choices which determine the impact of AI, and a\nfundamental choice very early in the pipeline is the problems we choose to\napply it to. If we focus only later in the pipeline -- making LLMs marginally\nmore fair as they facilitate use cases which intrinsically entrench power -- we\nwill miss an important opportunity to guide them to equitable impacts. Here, we\nhighlight the emerging potential of LLMs to promote equity by presenting four\nnewly possible, promising research directions, while keeping risks and\ncautionary points in clear view.",
+ "authors": "Emma Pierson, Divya Shanmugam, Rajiv Movva, Jon Kleinberg, Monica Agrawal, Mark Dredze, Kadija Ferryman, Judy Wawira Gichoya, Dan Jurafsky, Pang Wei Koh, Karen Levy, Sendhil Mullainathan, Ziad Obermeyer, Harini Suresh, Keyon Vafa",
+ "published": "2023-12-22",
+ "updated": "2023-12-22",
+ "primary_cat": "cs.CY",
+ "cats": [
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2401.00588v1",
+ "title": "Fairness in Serving Large Language Models",
+ "abstract": "High-demand LLM inference services (e.g., ChatGPT and BARD) support a wide\nrange of requests from short chat conversations to long document reading. To\nensure that all client requests are processed fairly, most major LLM inference\nservices have request rate limits, to ensure that no client can dominate the\nrequest queue. However, this rudimentary notion of fairness also results in\nunder-utilization of the resources and poor client experience when there is\nspare capacity. While there is a rich literature on fair scheduling, serving\nLLMs presents new challenges due to their unpredictable request lengths and\ntheir unique batching characteristics on parallel accelerators. This paper\nintroduces the definition of LLM serving fairness based on a cost function that\naccounts for the number of input and output tokens processed. To achieve\nfairness in serving, we propose a novel scheduling algorithm, the Virtual Token\nCounter (VTC), a fair scheduler based on the continuous batching mechanism. We\nprove a 2x tight upper bound on the service difference between two backlogged\nclients, adhering to the requirement of work-conserving. Through extensive\nexperiments, we demonstrate the superior performance of VTC in ensuring\nfairness, especially in contrast to other baseline methods, which exhibit\nshortcomings under various conditions.",
+ "authors": "Ying Sheng, Shiyi Cao, Dacheng Li, Banghua Zhu, Zhuohan Li, Danyang Zhuo, Joseph E. Gonzalez, Ion Stoica",
+ "published": "2023-12-31",
+ "updated": "2023-12-31",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI",
+ "cs.LG",
+ "cs.PF"
+ ],
+ "category": "LLM Fairness"
+ }
+ ],
+ [
+ {
+ "url": "http://arxiv.org/abs/2404.15790v1",
+ "title": "Leveraging Large Language Models for Multimodal Search",
+ "abstract": "Multimodal search has become increasingly important in providing users with a\nnatural and effective way to ex-press their search intentions. Images offer\nfine-grained details of the desired products, while text allows for easily\nincorporating search modifications. However, some existing multimodal search\nsystems are unreliable and fail to address simple queries. The problem becomes\nharder with the large variability of natural language text queries, which may\ncontain ambiguous, implicit, and irrelevant in-formation. Addressing these\nissues may require systems with enhanced matching capabilities, reasoning\nabilities, and context-aware query parsing and rewriting. This paper introduces\na novel multimodal search model that achieves a new performance milestone on\nthe Fashion200K dataset. Additionally, we propose a novel search interface\nintegrating Large Language Models (LLMs) to facilitate natural language\ninteraction. This interface routes queries to search systems while\nconversationally engaging with users and considering previous searches. When\ncoupled with our multimodal search model, it heralds a new era of shopping\nassistants capable of offering human-like interaction and enhancing the overall\nsearch experience.",
+ "authors": "Oriol Barbany, Michael Huang, Xinliang Zhu, Arnab Dhua",
+ "published": "2024-04-24",
+ "updated": "2024-04-24",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV"
+ ],
+ "label": "Original Paper",
+ "paper_cat": "LLM Fairness",
+ "gt": "When tackling the CIR problem, the TIRG model [46] computes an image representation and modifies it with a text representation on the same space rather than fusing both modalities to create a new feature as in most of the other works. Crucially, this method is trained first on image retrieval and gradually incorporates text modifications. The VAL framework [10] is based on computing image representations at various levels and using a transformer [45] conditioned on language semantics to extract features. Then, an objective function evaluates the feature similarities hierarchically. The text and image encoders of a CLIP model [36] can be used for zero-shot retrieval with a simple Multi-Layer Perceptron (MLP) [40] and leveraging LLMs [5]. Another approach is to perform a late fusion of CLIP embeddings [4], which can be improved by fine-tuning the CLIP text encoder Baldrati et al. [3]. The hypothesis is that image and text embeddings obtained by CLIP are aligned, while the CIR problem requires a text representation that expresses differences w.r.t. the image representation. CosMo [25] independently modulates the content and style of the reference image based on the modification text. This work assumes that style information is removed by simply performing instance normalization on the image features. With this assumption in mind, the normalized features are fed to the content modulator, which transforms them conditioned on text features. Then, the output of the content modulator is given to the style modulator, which along with the text features and the channel-wise statistics of the normalization, obtains the final representation. FashionVLP [16] is based on extracting image features using a pretrained feature extractor, not only on the whole image but also on the cropped clothing, fashion landmarks, and regions of interest. The obtained image representations are concatenated with object tags extracted with an object detector, a class token, and the word tokens computed using BERT [13]. An alternative to tackle the problem of generic visual feature extractors not focusing on fashion-specific details without using the multiple inputs required in Goenka et al. [16], is proposed in FashionSAP [20]. FashionSAP leverages the FashionGen dataset [39] for fine-grained fashion vision-language pretraining. To do that, Han et al. [20] use a multi-task objective composed of retrieval and language modeling losses. CIR is then solved by fusing text and image features using multiple cross-attention layers, and the tasks included in the training objective are solved using different heads for each task. CompoDiff [17] proposes to solve the CIR using a denoising transformer that provides the retrieval embedding conditioned on features of the reference image and the modifying text. Similarly to Rombach et al. [38], the diffusion process is performed in the latent space instead of the pixel space. Given that CompoDiff is a data-hungry method, Gu et al. [17] create a synthetic dataset of 18 million image triplets using StableDiffusion [38] for its training. CompoDiff performs better when using text features obtained with a T5-XL model [37] in addition to the text representations obtained with [36]. Koh et al. [23] uses a frozen LLM to process the input text and visual features that have been transformed with a learned linear mapping as in LLaVA [32]. To counteract the inferior expressiveness of causal attention over its bidirectional counterpart, Koh et al. 
[23] append a special [RET] token at the end of the outputs that allows the LLM to perform an extra attention step over all tokens. The hidden representations of [RET] are then mapped to an embedding space that is used for retrieval. Couairon et al. [12] tackle a similar problem in which the transformation query is not a single word but a tuple of two words corresponding to the original and target attributes. As an example, for a reference image with caption \"A cat is sitting on the grass\", a source text \"cat\" 2 Cross Attn MLP SoftMax Cross Attn Self Attn Self Attn Self Attn Self Attn T5 decoder T5 encoder T5 embedding \u201creplace gray by orange\u201d \u201corange ashville leather jacket\u201d Cross Attention xN Self Attention MLP Learned queries QFormer ... Transformer Layer Norm Projection Vision Transformer Reference image Target caption Target image Modifying text Query Key Value LoRA Q K O Matmul Matmul SoftMax LoRA V Self/Cross Attention layers with LoRA Projection MLP Figure 2. Proposed architecture: We extract visual features from the reference image xref using a Vision Transformer [14], specifically, a pretrained CLIP [36] model with frozen weights. We extract features before the projection layer, which are then processed using a Querying transFormer (Q-Former), which performs cross-attention with a set of learned queries. The resulting output of the Q-Former is concatenated with the embeddings obtained from the modifying text (t), which expresses a modification in the reference image. Subsequently, all this information is fed into a T5 model [37], an encoder-decoder LLM. We employ Low-Rank Adaptation (LoRA) [21] to learn low-rank updates for the query and value matrices in all attention layers, while keeping the rest of the parameters frozen. The output of the LLM yields a probability distribution from which a sentence is generated. To ensure alignment with the target caption (i.e., the caption of the target image xtrg, which corresponds to the caption of the reference image after incorporating the text modifications), a language modeling loss is used. The hidden states of the LLM are then projected into a space of embeddings used for retrieval. A retrieval loss term pushes together the embedding of the target image G(xtrg) and that obtained using the reference image and the modifying text F(xref, t). and a target text \"dog\", the model should be able to retrieve images of dogs sitting on the grass.",
+ "pre_questions": [],
+ "main_content": "Introduction The Composed Image Retrieval (CIR) problem, also known as Text-Guided Image Retrieval (TGIR), involves finding images that closely match a reference image after applying text modifications. For instance, given a reference image of a blue dress and the instruction \"replace blue with red\", the retrieved images should depict red dresses resembling the reference. It is natural for users to search for products using information from multiple modalities, such as images and text. Enabling visual search allows for finding visually similar correspondences and obtaining fine-grained results. Otherwise, text-only search tools would require extensive textual descriptions to reach the same level of detail. Thus, it is more natural and convenient for users to upload a picture of their desired product or a similar version rather than articulating their search entirely in words. Traditional search engines often struggle to deliver precise results to users due to the challenges posed by overly specific, broad, or irrelevant queries. Moreover, these engines typically lack support for understanding natural language text and reasoning about search queries while conversationally engaging with the user. In the context of the Fashion200K benchmark [19], several existing approaches fail to retrieve the correct query among the top matches. Concretely, most of the baselines considered in this work fail to retrieve the correct image 1 arXiv:2404.15790v1 [cs.CV] 24 Apr 2024 among the top 10 matches in 60% of the cases, as shown in our results in Sec. 4.1. In this paper, we propose to leverage pretrained largescale models that can digest image and text inputs. We focus on improving the performance on the Fashion200K dataset [19] and achieve state-of-the-art results that improve upon previous work by a significant margin. However, all the queries in Fashion200K follow the simple formatting \"replace {original attribute} with {target attribute}\", which impedes generalizing to natural language text. For this reason, we develop a novel interactive multimodal search solution leveraging recent advances in LLMs and vision-language models that can understand complex text queries and route them to the correct search tool with the required formatting. Leveraging LLMs facilitates digesting natural language queries and allows taking contextual information into account. Moreover, the length of the context recent LLMs can consider allows for incorporating information from previous interactions. We include a high-level overview of our approach in Fig. 1. The main contributions of this work include: \u2022 Improved Multimodal Search: We introduce a method that adapts foundational vision and language models for multimodal retrieval, which achieved state-of-the-art results on Fashion200k. We present the technical details in Sec. 3.1 and discuss the experimental results in Sec. 4.1. \u2022 Conversational Interface: We propose an interface that Sec. 3.1 and discuss the experimental results in Sec. 4.1. \u2022 Conversational Interface: We propose an interface that harnesses state-of-the-art LLMs to interpret natural language inputs and route formatted queries to the available search tools. We describe the details of the backend in Sec. 3.2 and include examples in Sec. 4.2. In this section, we propose a model to perform an image search merging text and image inputs in Sec. 3.1. 
While this model outperforms alternative approaches by a large margin, it is trained on a dataset with specific formatting (see Sec. 4.1). Instead of artificially augmenting the vocabulary seen during training as in Gu et al. [17], we propose a conversational interface orchestrated by an LLM that can structure the queries into a format understandable by our multimodal search model. Sec. 3.2 describes the principles of our approach. The proposed framework offers a modular architecture that allows interchanging search models with different formatting constraints while providing enhanced natural language understanding, a working memory, and a human-like shopping assistant experience. 3.1. Improved multimodal search In the CIR problem, a dataset $\\mathcal{D} := \\{(x_{\\text{ref}}^{(i)}, x_{\\text{trg}}^{(i)}, t^{(i)})\\}_{i \\in [n]}$ is composed of triplets with a reference image, a target image, and a modifying text. The objective is to learn the transformations $\\mathcal{F} : x_{\\text{ref}} \\times t \\mapsto \\Psi \\quad ; \\quad \\mathcal{G} : x_{\\text{trg}} \\mapsto \\Psi$ (1) along with a metric space $(\\Psi, d)$ with fixed $d : \\Psi \\times \\Psi \\to \\mathbb{R}$ such that $d(\\mathcal{F}(x_{\\text{ref}}, t), \\mathcal{G}(x_{\\text{trg}})) < d(\\mathcal{F}(x_{\\text{ref}}, t), \\mathcal{G}(x'_{\\text{trg}}))$ (2) if $x_{\\text{ref}}$, after applying the modifications described by $t$, is semantically more similar to $x_{\\text{trg}}$ than it is to $x'_{\\text{trg}}$ [6]. In common with other works [41, 43, 53], we normalize the space $\\Psi$ to the unit hypersphere for training stability and choose $d$ to be the cosine distance. In this work, we use off-the-shelf foundational models for vision and language to compute the transformation $\\mathcal{F}$. Concretely, we use an architecture similar to BLIP2 [29] and adapt it for the CIR problem. BLIP2 [29] uses a module referred to as the Q-Former, which allows ingesting image features obtained by a powerful feature extractor. These image features provide fine-grained descriptions of the input product and are transformed into the space of text embeddings of an LLM. Then, the LLM processes the fused text and image embeddings. The Q-Former consists of two transformer submodules sharing the same self-attention layers to extract information from the input text and the image features. The image transformer also contains a set of learnable query embeddings, which can be interpreted as a form of prefix tuning [30]. To generate image-only search embeddings using our model, one simply needs to input the images into the model and provide an empty string as the input text. Intuitively, this processes the images without any text modifications. In other words, we use $\\mathcal{G}(x) := \\mathcal{F}(x, \\texttt{\"\"})$ (3). We illustrate the proposed architecture for $\\mathcal{F}$ in Fig. 2. We use the image part of the CLIP [36] model to obtain visual features and a T5 model [37] as the LLM to process the modifying text and the visual features processed by the Q-Former. We initialize the model using the BLIP2 weights with all the parameters frozen. The pretrained weights perform the task of image captioning, which is different from the task we are trying to solve. Instead, we define a new task that we refer to as composed captioning. The objective of this task is to generate the caption of the product that we would obtain by merging the information of the product in the input image and the text modifications.
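A minimal sketch of the retrieval interface implied by Eqs. (1)-(3), assuming a hypothetical model callable that fuses a reference image and a modifying text into a single vector; the names, shapes, and the ranking helper are illustrative placeholders rather than the actual implementation:

import torch
import torch.nn.functional as F

def embed_composed(model, image, text):
    # F(x_ref, t): fuse the reference image and the modifying text into one vector.
    z = model(image, text)          # hypothetical forward pass returning a (d,) tensor
    return F.normalize(z, dim=-1)   # map the embedding onto the unit hypersphere Psi

def embed_target(model, image):
    # G(x) := F(x, ''): image-only embedding obtained with an empty modifying text.
    return embed_composed(model, image, '')

def rank_targets(query_emb, catalog_embs):
    # On the unit sphere the cosine distance reduces to 1 minus the dot product.
    distances = 1.0 - catalog_embs @ query_emb   # catalog_embs: (N, d), query_emb: (d,)
    return torch.argsort(distances)              # indices of the closest catalog items first

Under these assumptions, retrieval amounts to encoding the catalog once with embed_target and ranking it against embed_composed outputs at query time.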
We hypothesize that if the proposed model can solve the problem of composed captioning, the information captured by the LLM is enough to describe the target product. Intuitively, similarity search happens in a latent space close to the final text representations, making the CIR problem closer to the task of text-to-text retrieval. However, as the proposed model is able to capture fine-grained information by leveraging powerful visual representations, we are able to obtain an impressive retrieval performance. This is expected, as BLIP2 achieves state-of-the-art performance on Visual Question Answering (VQA) benchmarks, showing that image information can be effectively captured. To adapt the LLM to this task while retaining its knowledge, we apply LoRA [21] to the query and value matrices of all the self-attention and cross-attention layers of the LLM. LoRA [21] learns a residual representation on top of some layers using matrices with low rank. Theoretically, this is supported by the fact that LLMs adapted to a specific task have low intrinsic dimension [1], and in practice it allows training with low computational resources and limited data. Moreover, only modifying a few parameters reduces the risk of catastrophic forgetting, observed in some studies where full fine-tuning of an LLM decreases the performance compared to using it frozen or fine-tuning it with parameter-efficient techniques [23, 31]. The hidden states of the T5 decoder are a sequence of tensors. Instead of using a class-like token as in Koh et al. [23] to summarize the information along the temporal dimension, we perform an average followed by layer normalization [2]. This technique was utilized in EVA [15], which improves over CLIP [36] in several downstream tasks. The result is then projected to the embedding dimension using a ReLU-activated MLP, followed by normalization. We train the model using a multi-task objective involving the InfoNCE loss [34], a lower bound on the mutual information [27], as the retrieval term: $\\mathcal{L}_{\\text{InfoNCE}} := -\\mathbb{E}_i\\left[\\log \\frac{\\exp(S_{i,i} \\cdot \\tau)}{\\sum_{j}\\exp(S_{i,j} \\cdot \\tau)}\\right], \\quad S_{i,j} := \\langle \\mathcal{F}(x_{\\text{ref}}^{(i)}, t^{(i)}), \\mathcal{G}(x_{\\text{trg}}^{(j)}) \\rangle$ (4) where $\\tau$ is a learnable scaling parameter. Practically, given that our model has many parameters, the maximum batch sizes we can achieve are on the order of hundreds of samples. Given that this can affect the retrieval performance due to a lack of negative samples, we maintain a cross-batch memory as proposed in Wang et al. [47] and use it for the computation of Eq. (4). On top of that, we add a standard maximum likelihood objective as a language modeling term $\\mathcal{L}_{\\text{LM}}$. We compute this objective using teacher forcing [50], based on providing the ground-truth outputs of previous tokens to estimate the next token, and cross-entropy loss. The final loss is $\\mathcal{L} = \\mathcal{L}_{\\text{LM}} + \\omega\\, \\mathcal{L}_{\\text{InfoNCE}}$, (5) where $\\omega$ is a hyperparameter determining the relative importance of the retrieval task. 3.2. Conversational interface Inspired by Visual ChatGPT [52], we connect a user chat to a prompt manager that acts as a middleman to an LLM and provides it with access to tools. Differently from Wu et al. [52], these tools serve not only to understand and modify images but also to perform searches with both unimodal and multimodal inputs.
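A compact sketch of the training objective in Eqs. (4) and (5), assuming already-normalized embeddings, a plain tensor for the learnable scale, and omitting the enqueue/dequeue update of the cross-batch memory; this is an illustration rather than the exact implementation:

import torch
import torch.nn.functional as F

def infonce_with_memory(query_emb, target_emb, memory_emb, tau):
    # query_emb: (B, d) embeddings F(x_ref, t); target_emb: (B, d) embeddings G(x_trg).
    # memory_emb: (M, d) target embeddings cached from previous batches (cross-batch memory).
    candidates = torch.cat([target_emb, memory_emb], dim=0)   # (B + M, d) positives plus negatives
    logits = query_emb @ candidates.t() * tau                 # (B, B + M) scaled similarities
    labels = torch.arange(query_emb.size(0), device=logits.device)  # positive for row i sits at column i
    return F.cross_entropy(logits, labels)                    # negative expected log-softmax, as in Eq. (4)

def total_loss(lm_loss, infonce_loss, omega=1.0):
    # Eq. (5): language modeling term plus the omega-weighted retrieval term.
    return lm_loss + omega * infonce_loss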
From the user’s perspective, the proposed framework allows implicitly using a search tool without requiring any input pattern. For example, interacting with a model like SIMAT [12] could be unintuitive, as it requires two words with the original and target attributes. We trained our multimodal search model on Fashion200K [19], which only contains inputs of the form \"replace {original attribute} with {target attribute}\" (see Sec. 4.1). We could formulate this prompt using the same inputs that a model like SIMAT requires and thus modify them to match the training distribution of our model. Since LLMs can only ingest text, we add image understanding tools to provide information about the images and their content, as well as search tools: Image search: Image-only search based on CLIP [36] image embeddings. We use this tool internally when a user uploads an image to show an initial result to users, which may inspire them to write the follow-up queries. The descriptions of the search results are provided to the LLM to enable Retrieval Augmented Generation (RAG) [26]. Multimodal search: The input of the multimodal search tool is an image and two text strings expressing the original and target attributes. We use our model and feed it the Fashion200K [19] prompt created from these attributes. VQA model: We use the BLIP [28] pretrained base model (https://huggingface.co/Salesforce/blipvqabase) to facilitate image understanding for the LLM. Our approach to providing image information to the LLM is similar to LENS [7], as it is a training-free method applicable to any off-the-shelf LLM. 3.2.1 Workflow In this section, we describe the main events in the interface and the triggered actions. Start: When a new user starts a new session, we create a unique identifier used to set up a dedicated folder to store images and initialize the memory that stores the context. The memory contains a conversation where the lines prefixed with \"Human:\" come from the user, and those starting with \"AI:\" are outputs of the LLM shown to the user. Image input: When a user uploads an image, we store it in the session folder using file names with sequential numerical identifiers, i.e., IMG_001.png, IMG_002.png, IMG_003.png, etc. Then we add a fake conversation to the memory: \"Human: I provided a figure named {image_filename}. {description}\" followed by \"AI: Provide more details if you are not satisfied with the results.\", where {description} is the text output of the search action. Search: Every time a search tool is used, the results are shown to the user in a carousel of images. Additionally, we add the following information to the memory that will be provided to the LLM once it is invoked: \"Top-{len(image_descriptions)} results are: {image_descriptions}.\", which contains the descriptions of the top retrieved images. These details help the LLM understand the fine-grained details (e.g., brand, product type, technical specifications, color, etc.) and the multimodal search intention. We can interpret this as a form of RAG [26]. RAG is based on using an external knowledge base for retrieving facts to ground LLMs on the most accurate and up-to-date information. Text input: Every time the user provides some text input, we invoke the LLM through the prompt manager. In this stage, the LLM can communicate directly with the user or use special formatting to call some tools. If the LLM wants to perform a multimodal search, it can typically find the target attribute in the text input, which only needs to be formatted and simplified.
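The session bookkeeping described in the Image input and Search events above can be sketched as follows; the class and method names are illustrative assumptions rather than the actual implementation, and memory entries are joined with plain spaces here instead of newlines:

class SessionMemory:
    # Minimal sketch of the per-session memory: sequential image names and the
    # conversation lines that are later injected into the LLM prompt.
    def __init__(self):
        self.lines = []
        self.num_images = 0

    def on_image_upload(self, description):
        self.num_images += 1
        filename = f'IMG_{self.num_images:03d}.png'
        # Fake conversation so the LLM knows an image (and its description) is available.
        self.lines.append(f'Human: I provided a figure named {filename}. {description}')
        self.lines.append('AI: Provide more details if you are not satisfied with the results.')
        return filename

    def on_search_results(self, image_descriptions):
        # Top-k result descriptions are fed back to the LLM, a simple form of RAG.
        joined = '; '.join(image_descriptions)
        self.lines.append(f'Top-{len(image_descriptions)} results are: {joined}.')

    def as_prompt_context(self):
        return ' '.join(self.lines)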
However, in most cases, the original attribute is not included in the input text, as it is implicit in the image. Generally, the descriptions contain enough information to perform the query. Otherwise, the LLM can use the VQA model to ask specific questions about the image. 3.2.2 Prompt manager The prompt manager implements the workflow described in the previous section and empowers the LLM with access to different tools. The tool calls are coordinated by defining a syntax that processes the output of the LLM and parses the actions and the text visible to the user in the chat. Every time the LLM is triggered, the prompt manager does so with a prompt that includes a description of the task, formatting instructions, previous interactions, and outputs of the tools. We crafted a task description that specifies that the LLM can ask follow-up questions to the customers if the search intents are unclear or the query is too broad. In the prompt, we also include examples of use cases written in natural language. The formatting instructions describe when the LLM should use a tool, what its inputs are, how to obtain them, and what the tool outputs are. For each tool, we have to define a name and a description that may include examples, input and output requirements, or cases where the tool should be used. In this work, we test two prompt managers: Langchain [9]: We take the Langchain prompts from Visual ChatGPT [52] and adapt them to our task. The syntax to use a tool is: Thought: Do I need to use a tool? Yes Action: Multimodal search Action Input: IMG_001.png;natural;black Our prompt manager: Inspired by the recent success of visual programming [18, 44], we propose to use a syntax similar to calling a function in programming languages: SEARCH(IMG_001.png;natural;black) In Fig. 1, we illustrate an example of a conversation and the actions that the prompt manager and the LLM trigger. Visual programming typically performs a single call to an LLM, and the output is a single action or a series of actions whose inputs and outputs can be variables defined on the fly by other functions. While Langchain [9] allows performing multiple actions, it requires executing them one at a time. When the LLM expresses the intention to use a tool, Langchain calls the tool and prompts the LLM again with the output of such a tool. The visual programming approach only invokes the LLM once, saving latency and possible costs attributed to API calls. However, in visual programming, the LLM cannot process the outputs of the tools and can only use them blindly. For the sake of simplicity, we restrict the custom prompt manager to handle single actions, but this could easily be extended following Gupta and Kembhavi [18], Surís et al. [44]. Additionally, we propose to include Chain-of-Thought (COT) [24, 49, 54, 55]. COT is a technique that makes the LLM reason about the actions that should be taken. This simple technique has reportedly brought numerous benefits.
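A minimal sketch of how the function-call style output above could be parsed, assuming single actions, semicolon-separated arguments, and that the thought and the action appear on separate lines; the function name and dictionary keys are illustrative:

def parse_llm_output(output):
    # Collect free-form 'Thought:' lines and a single SEARCH(image;original;target) action.
    thoughts, action = [], None
    for line in output.splitlines():
        line = line.strip()
        if line.startswith('Thought:'):
            thoughts.append(line[len('Thought:'):].strip())   # CoT reasoning, no fixed format imposed
        elif line.startswith('SEARCH(') and line.endswith(')'):
            args = line[len('SEARCH('):-1].split(';')          # image;original attribute;target attribute
            if len(args) == 3:
                image, original, target = (part.strip() for part in args)
                action = {'tool': 'multimodal_search', 'image': image,
                          'original': original, 'target': target}
    return thoughts, action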
Following the example above, the complete output expected by the LLM would be as follows: Thought: I can see that human uploaded an image of a deep v-neck tee. From the results, the color of the tee is natural. The user wants the color to be black instead. I have to call search. Action: SEARCH(IMG_001.png;natural;black) While Langchain and our prompt manager use the special prefix \"Thought\" to handle certain parts of the query, their purposes are distinct. In Langchain, the prefix is used to parse lines in the LLM output. If a line starts with this prefix, Langchain expects to find the question \"Do I need to use a tool?\" followed by \"Yes\" or \"No\", indicating whether a tool should be used. In contrast, our novel prompt manager does not impose any specific format on lines starting with the \"Thought\" prefix. Instead, these lines are solely dedicated to incorporating COT reasoning. 4. Experiments 4.1. Multimodal search on Fashion200K Implementation details: We use the Flan T5 XL model [11], which is a 3 billion parameter LLM from the T5 family [37], finetuned using instruction tuning [48]. We obtain the visual features with the CLIP-L model [36], which has a patch size of 14 and 428 million parameters. In total, the model has around 3.5 billion parameters, which requires splitting it across different GPUs for training. Concretely, we use 8 NVIDIA V100 GPUs. LoRA is performed with a rank of r = 16, scaling α = 32, and dropout of 0.5 on the query and value matrices of the attention layers of the LLM. The hidden representation obtained from the LLM is transformed with a linear layer of size 1024, passed through a ReLU activation, and then transformed with another linear layer that yields an embedding of size 768. Such an embedding is normalized to have unit norm and is used for retrieval. We optimize the model with AdamW [33] with a learning rate of $10^{-5}$ and weight decay of 0.5 for a total of 300 epochs. The learning rate is linearly increased from 0 to the initial learning rate during the first 1000 steps. We set the loss weight $\\omega = 1$. The effective batch size considering all the GPUs is 4,096, and the total number of embeddings included in the cross-batch memory of Wang et al. [47] is 65,536. Dataset: Fashion200K [19] is a large-scale fashion dataset crawled from online shopping websites. The dataset contains over 200,000 images with paired product descriptions and attributes. All descriptions are fashion-specific and have more than four words, e.g., \"Beige v-neck bell-sleeve top\". Similarly to Vo et al. [46], text queries for the CIR problem are generated by comparing the attributes of different images and finding pairs with one attribute difference. Then, a query is formed as \"replace {original attribute} with {target attribute}\". When trained on Fashion200K [19], our method achieves state-of-the-art results, improving the retrieval performance of competitive methods by 20% recall at positions 10 and 50. Tab. 1 includes the comparison with some of the CIR methods reviewed in Sec. 2 [10, 16, 25, 46], as well as the visual reasoning-based baselines RN [42], MRN [22], and FiLM [35]. Table 1. Quantitative results, Recall@k on the Fashion200K dataset [19], reported as R@10 / R@50 / Average: RN [42] 40.5 / 62.4 / 51.4; MRN [22] 40.0 / 61.9 / 50.9; FiLM [35] 39.5 / 61.9 / 50.7; TIRG [46] 42.5 / 63.8 / 53.2; CosMo [25] 50.4 / 69.3 / 59.8; FashionVLP [16] 49.9 / 70.5 / 60.2; VAL [10] 53.8 / 73.3 / 63.6; Ours 71.4 / 91.6 / 81.5. Our method is able to successfully fuse image and text information and generate a representation that is useful both to caption the resulting image and to generate an embedding for retrieval purposes. One of the reasons is that the model can exploit the image and text understanding priors of a foundational model trained for image captioning and adapt them to the related task of composed captioning. The hidden representations of the model contain enough information to describe the target image and are effectively used for that purpose.
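The pooling and projection head described in the implementation details above (mean over the decoder hidden states, layer normalization, a ReLU-activated MLP of sizes 1024 and 768, and L2 normalization) can be sketched as follows; the decoder width is left as an argument since it depends on the chosen LLM, and this is a sketch rather than the released code:

import torch.nn as nn
import torch.nn.functional as F

class RetrievalHead(nn.Module):
    # Maps T5 decoder hidden states to a unit-norm retrieval embedding.
    def __init__(self, hidden_dim):
        super().__init__()
        self.norm = nn.LayerNorm(hidden_dim)
        self.proj = nn.Sequential(nn.Linear(hidden_dim, 1024), nn.ReLU(), nn.Linear(1024, 768))

    def forward(self, decoder_hidden_states):
        # decoder_hidden_states: (batch, seq_len, hidden_dim)
        pooled = self.norm(decoder_hidden_states.mean(dim=1))  # average over time, then LayerNorm
        return F.normalize(self.proj(pooled), dim=-1)          # 768-d embedding on the unit sphere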
Adapting to this new task becomes easier given the specific formatting of the modifying text, which facilitates extracting the important parts of the query. The results show that it is possible to distill knowledge from a large vision and language model trained on large-scale datasets. While our model has billions of parameters, which is far more than the other models, we are able to learn a new task similar to the ones the pretrained model could already solve by learning only a few parameters, a very small percentage of the total model size. We include some qualitative examples in Fig. 3. These show that our model can successfully incorporate text information and modify the internal description formed about the input image. Figure 3. Qualitative results ((a) successful examples, (b) failure examples): Examples of queries of the Fashion200K dataset [19] and the 4 best matches. The correct matches are shown in green and incorrect ones in red. In the successful examples, we can see that our proposal is able to incorporate modifications to the input product involving changes to color and material, among others. Despite not retrieving the correct products in the failure examples, almost all the retrieved images satisfy the search criteria. The successful results in Fig. 3a show that the proposed model retrieves visually similar products and can incorporate modifications of different attributes, such as color and material. The failures in Fig. 3b show that all the top retrieved results satisfy the search criteria, with some of them even belonging to the same product. This hints at our model having an even better performance in practice than what the benchmark reflects. Overall, we can see from all the qualitative examples that all the top-ranked results are relevant. The only exception is the inclusion of the reference image, which is a common error in retrieval systems given that the search embedding is computed from such an image. 4.2. Search interface One of the key drivers of performance is reformulating the examples. While the examples in Langchain are written using natural language, we advocate for writing them as LLM instructions. In this sense, the examples contain exactly the input that the LLM would receive, including the product type, top-k product titles, and user input. Such examples also contain the expected model output, including the COT reasoning and the action itself. This reinforces the format instructions and the benefits of RAG. Note that the proposed reformulation introduces some redundancy w.r.t. the Langchain formatting instructions. Additionally, it requires allocating much more space for examples. Despite these considerations, we find our approach beneficial. For a fair comparison, we also limit the full prompt to fit the context of the smallest LLM and empirically find that allocating more space to examples is beneficial even if this comes at the cost of removing the prefix. We tested different LLMs for the search interface. Among them, GPT-3 [8], concretely the text-davinci-003 model, was empirically found to perform best. Fig. 4 shows a real example of our conversational interface in which composed retrieval is performed. Besides GPT-3 [8], we compared different open-source models from the transformers library [51]. Surprisingly, these models performed poorly. Digging into the outputs of the LLMs, we could see that one of the failure cases of FastChat [56] had the following output: Thought: Do I need to use a tool?
This example shows that FastChat [56] has the knowledge to perform a successful query but struggles to follow the complicated formatting of Langchain. This example is the main motivation for developing the novel prompt manager in Sec. 3.2.2. Figure 4. Proposed conversational multimodal search system: In this example, the user uploads an image from the Fashion200K dataset [19] and provides text input intending to search for a dress similar to the product in the image but in a different color. An LLM, specifically GPT-3 [8], processes the user\u2019s prompt and invokes our novel multimodal search model with the uploaded image and a formatted text query. The desired attribute indicated by the user is \u201cbeige\u201d, which can be inferred from the text input. The original attribute is required by the prompt used during the training of our model and is correctly identified by the LLM as \u201cgray\u201d. In this case, the LLM obtains this information by leveraging RAG, retrieving the product descriptions of the first matches through image search with the uploaded picture. The conversational nature of the interactions with the user offers an improved search experience. 5. Limitations The model in Sec. 3.1 achieves impressive performance on Fashion200K. As discussed in Sec. 4.1, the characteristics of this dataset are ideal for our model to excel but may hinder generalization to natural language queries. This is addressed by our conversational interface, but the current setup is restricted to modifying a single attribute at a time. Using hard prompts to encode the task description is simple and applicable to black-box models such as LLMs accessed through an API. However, it reduces the effective context length of LLMs and requires prompt engineering, which is a tedious process. Although LLMs have a large context size, the prompt leaves an effective input size that is relatively small, and the memory rapidly fills up. In practice, the memory gets truncated if conversations are too long, hence discarding the first interactions. 6. Conclusions This paper presents a comprehensive pipeline to perform image retrieval with text modifications, addressing the CIR problem. Our novel composed retrieval model, built upon the BLIP2 architecture [28] and leveraging LLMs, has demonstrated superior performance on the Fashion200K dataset [19] compared to previous models. In this work, we also describe the integration of LLMs into a search interface, offering a conversational search assistant experience that enhances user interaction. We implement a prompt manager to enable the use of small LLMs and incorporate the COT [24, 49] and RAG [26] techniques to improve system performance. Our experiments underscore the importance of addressing inherent challenges in multimodal search, including enhancing matching capabilities and handling ambiguous natural language queries. Acknowledgments The authors acknowledge Ren\u00e9 Vidal for constructive discussions. O.B.
is part of project SGR 00514, supported by Departament de Recerca i Universitats de la Generalitat de Catalunya."
+ },
+ {
+ "url": "http://arxiv.org/abs/2010.11929v2",
+ "title": "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale",
+ "abstract": "While the Transformer architecture has become the de-facto standard for\nnatural language processing tasks, its applications to computer vision remain\nlimited. In vision, attention is either applied in conjunction with\nconvolutional networks, or used to replace certain components of convolutional\nnetworks while keeping their overall structure in place. We show that this\nreliance on CNNs is not necessary and a pure transformer applied directly to\nsequences of image patches can perform very well on image classification tasks.\nWhen pre-trained on large amounts of data and transferred to multiple mid-sized\nor small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision\nTransformer (ViT) attains excellent results compared to state-of-the-art\nconvolutional networks while requiring substantially fewer computational\nresources to train.",
+ "authors": "Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby",
+ "published": "2020-10-22",
+ "updated": "2021-06-03",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV",
+ "cs.AI",
+ "cs.LG"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2303.15247v2",
+ "title": "Zero-Shot Composed Image Retrieval with Textual Inversion",
+ "abstract": "Composed Image Retrieval (CIR) aims to retrieve a target image based on a\nquery composed of a reference image and a relative caption that describes the\ndifference between the two images. The high effort and cost required for\nlabeling datasets for CIR hamper the widespread usage of existing methods, as\nthey rely on supervised learning. In this work, we propose a new task,\nZero-Shot CIR (ZS-CIR), that aims to address CIR without requiring a labeled\ntraining dataset. Our approach, named zero-Shot composEd imAge Retrieval with\ntextuaL invErsion (SEARLE), maps the visual features of the reference image\ninto a pseudo-word token in CLIP token embedding space and integrates it with\nthe relative caption. To support research on ZS-CIR, we introduce an\nopen-domain benchmarking dataset named Composed Image Retrieval on Common\nObjects in context (CIRCO), which is the first dataset for CIR containing\nmultiple ground truths for each query. The experiments show that SEARLE\nexhibits better performance than the baselines on the two main datasets for CIR\ntasks, FashionIQ and CIRR, and on the proposed CIRCO. The dataset, the code and\nthe model are publicly available at https://github.com/miccunifi/SEARLE.",
+ "authors": "Alberto Baldrati, Lorenzo Agnolucci, Marco Bertini, Alberto Del Bimbo",
+ "published": "2023-03-27",
+ "updated": "2023-08-19",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV",
+ "cs.CL",
+ "cs.IR"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/1810.04805v2",
+ "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
+ "abstract": "We introduce a new language representation model called BERT, which stands\nfor Bidirectional Encoder Representations from Transformers. Unlike recent\nlanguage representation models, BERT is designed to pre-train deep\nbidirectional representations from unlabeled text by jointly conditioning on\nboth left and right context in all layers. As a result, the pre-trained BERT\nmodel can be fine-tuned with just one additional output layer to create\nstate-of-the-art models for a wide range of tasks, such as question answering\nand language inference, without substantial task-specific architecture\nmodifications.\n BERT is conceptually simple and empirically powerful. It obtains new\nstate-of-the-art results on eleven natural language processing tasks, including\npushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI\naccuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering\nTest F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1\n(5.1 point absolute improvement).",
+ "authors": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova",
+ "published": "2018-10-11",
+ "updated": "2019-05-24",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/1806.08317v2",
+ "title": "Fashion-Gen: The Generative Fashion Dataset and Challenge",
+ "abstract": "We introduce a new dataset of 293,008 high definition (1360 x 1360 pixels)\nfashion images paired with item descriptions provided by professional stylists.\nEach item is photographed from a variety of angles. We provide baseline results\non 1) high-resolution image generation, and 2) image generation conditioned on\nthe given text descriptions. We invite the community to improve upon these\nbaselines. In this paper, we also outline the details of a challenge that we\nare launching based upon this dataset.",
+ "authors": "Negar Rostamzadeh, Seyedarian Hosseini, Thomas Boquet, Wojciech Stokowiec, Ying Zhang, Christian Jauvin, Chris Pal",
+ "published": "2018-06-21",
+ "updated": "2018-07-30",
+ "primary_cat": "stat.ML",
+ "cats": [
+ "stat.ML",
+ "cs.LG"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/1706.03762v7",
+ "title": "Attention Is All You Need",
+ "abstract": "The dominant sequence transduction models are based on complex recurrent or\nconvolutional neural networks in an encoder-decoder configuration. The best\nperforming models also connect the encoder and decoder through an attention\nmechanism. We propose a new simple network architecture, the Transformer, based\nsolely on attention mechanisms, dispensing with recurrence and convolutions\nentirely. Experiments on two machine translation tasks show these models to be\nsuperior in quality while being more parallelizable and requiring significantly\nless time to train. Our model achieves 28.4 BLEU on the WMT 2014\nEnglish-to-German translation task, improving over the existing best results,\nincluding ensembles by over 2 BLEU. On the WMT 2014 English-to-French\ntranslation task, our model establishes a new single-model state-of-the-art\nBLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction\nof the training costs of the best models from the literature. We show that the\nTransformer generalizes well to other tasks by applying it successfully to\nEnglish constituency parsing both with large and limited training data.",
+ "authors": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin",
+ "published": "2017-06-12",
+ "updated": "2023-08-02",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.LG"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2303.11916v3",
+ "title": "CompoDiff: Versatile Composed Image Retrieval With Latent Diffusion",
+ "abstract": "This paper proposes a novel diffusion-based model, CompoDiff, for solving\nzero-shot Composed Image Retrieval (ZS-CIR) with latent diffusion. This paper\nalso introduces a new synthetic dataset, named SynthTriplets18M, with 18.8\nmillion reference images, conditions, and corresponding target image triplets\nto train CIR models. CompoDiff and SynthTriplets18M tackle the shortages of the\nprevious CIR approaches, such as poor generalizability due to the small dataset\nscale and the limited types of conditions. CompoDiff not only achieves a new\nstate-of-the-art on four ZS-CIR benchmarks, including FashionIQ, CIRR, CIRCO,\nand GeneCIS, but also enables a more versatile and controllable CIR by\naccepting various conditions, such as negative text, and image mask conditions.\nCompoDiff also shows the controllability of the condition strength between text\nand image queries and the trade-off between inference speed and performance,\nwhich are unavailable with existing CIR methods. The code and dataset are\navailable at https://github.com/navervision/CompoDiff",
+ "authors": "Geonmo Gu, Sanghyuk Chun, Wonjae Kim, HeeJae Jun, Yoohoon Kang, Sangdoo Yun",
+ "published": "2023-03-21",
+ "updated": "2024-02-25",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV",
+ "cs.IR"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2103.00020v1",
+ "title": "Learning Transferable Visual Models From Natural Language Supervision",
+ "abstract": "State-of-the-art computer vision systems are trained to predict a fixed set\nof predetermined object categories. This restricted form of supervision limits\ntheir generality and usability since additional labeled data is needed to\nspecify any other visual concept. Learning directly from raw text about images\nis a promising alternative which leverages a much broader source of\nsupervision. We demonstrate that the simple pre-training task of predicting\nwhich caption goes with which image is an efficient and scalable way to learn\nSOTA image representations from scratch on a dataset of 400 million (image,\ntext) pairs collected from the internet. After pre-training, natural language\nis used to reference learned visual concepts (or describe new ones) enabling\nzero-shot transfer of the model to downstream tasks. We study the performance\nof this approach by benchmarking on over 30 different existing computer vision\ndatasets, spanning tasks such as OCR, action recognition in videos,\ngeo-localization, and many types of fine-grained object classification. The\nmodel transfers non-trivially to most tasks and is often competitive with a\nfully supervised baseline without the need for any dataset specific training.\nFor instance, we match the accuracy of the original ResNet-50 on ImageNet\nzero-shot without needing to use any of the 1.28 million training examples it\nwas trained on. We release our code and pre-trained model weights at\nhttps://github.com/OpenAI/CLIP.",
+ "authors": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever",
+ "published": "2021-02-26",
+ "updated": "2021-02-26",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV",
+ "cs.LG"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2304.05051v1",
+ "title": "FashionSAP: Symbols and Attributes Prompt for Fine-grained Fashion Vision-Language Pre-training",
+ "abstract": "Fashion vision-language pre-training models have shown efficacy for a wide\nrange of downstream tasks. However, general vision-language pre-training models\npay less attention to fine-grained domain features, while these features are\nimportant in distinguishing the specific domain tasks from general tasks. We\npropose a method for fine-grained fashion vision-language pre-training based on\nfashion Symbols and Attributes Prompt (FashionSAP) to model fine-grained\nmulti-modalities fashion attributes and characteristics. Firstly, we propose\nthe fashion symbols, a novel abstract fashion concept layer, to represent\ndifferent fashion items and to generalize various kinds of fine-grained fashion\nfeatures, making modelling fine-grained attributes more effective. Secondly,\nthe attributes prompt method is proposed to make the model learn specific\nattributes of fashion items explicitly. We design proper prompt templates\naccording to the format of fashion data. Comprehensive experiments are\nconducted on two public fashion benchmarks, i.e., FashionGen and FashionIQ, and\nFashionSAP gets SOTA performances for four popular fashion tasks. The ablation\nstudy also shows the proposed abstract fashion symbols, and the attribute\nprompt method enables the model to acquire fine-grained semantics in the\nfashion domain effectively. The obvious performance gains from FashionSAP\nprovide a new baseline for future fashion task research.",
+ "authors": "Yunpeng Han, Lisai Zhang, Qingcai Chen, Zhijian Chen, Zhonghua Li, Jianxin Yang, Zhao Cao",
+ "published": "2023-04-11",
+ "updated": "2023-04-11",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV",
+ "cs.CL"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2301.13823v4",
+ "title": "Grounding Language Models to Images for Multimodal Inputs and Outputs",
+ "abstract": "We propose an efficient method to ground pretrained text-only language models\nto the visual domain, enabling them to process arbitrarily interleaved\nimage-and-text data, and generate text interleaved with retrieved images. Our\nmethod leverages the abilities of language models learnt from large scale\ntext-only pretraining, such as in-context learning and free-form text\ngeneration. We keep the language model frozen, and finetune input and output\nlinear layers to enable cross-modality interactions. This allows our model to\nprocess arbitrarily interleaved image-and-text inputs, and generate free-form\ntext interleaved with retrieved images. We achieve strong zero-shot performance\non grounded tasks such as contextual image retrieval and multimodal dialogue,\nand showcase compelling interactive abilities. Our approach works with any\noff-the-shelf language model and paves the way towards an effective, general\nsolution for leveraging pretrained language models in visually grounded\nsettings.",
+ "authors": "Jing Yu Koh, Ruslan Salakhutdinov, Daniel Fried",
+ "published": "2023-01-31",
+ "updated": "2023-06-13",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.CV",
+ "cs.LG"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2112.10752v2",
+ "title": "High-Resolution Image Synthesis with Latent Diffusion Models",
+ "abstract": "By decomposing the image formation process into a sequential application of\ndenoising autoencoders, diffusion models (DMs) achieve state-of-the-art\nsynthesis results on image data and beyond. Additionally, their formulation\nallows for a guiding mechanism to control the image generation process without\nretraining. However, since these models typically operate directly in pixel\nspace, optimization of powerful DMs often consumes hundreds of GPU days and\ninference is expensive due to sequential evaluations. To enable DM training on\nlimited computational resources while retaining their quality and flexibility,\nwe apply them in the latent space of powerful pretrained autoencoders. In\ncontrast to previous work, training diffusion models on such a representation\nallows for the first time to reach a near-optimal point between complexity\nreduction and detail preservation, greatly boosting visual fidelity. By\nintroducing cross-attention layers into the model architecture, we turn\ndiffusion models into powerful and flexible generators for general conditioning\ninputs such as text or bounding boxes and high-resolution synthesis becomes\npossible in a convolutional manner. Our latent diffusion models (LDMs) achieve\na new state of the art for image inpainting and highly competitive performance\non various tasks, including unconditional image generation, semantic scene\nsynthesis, and super-resolution, while significantly reducing computational\nrequirements compared to pixel-based DMs. Code is available at\nhttps://github.com/CompVis/latent-diffusion .",
+ "authors": "Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Bj\u00f6rn Ommer",
+ "published": "2021-12-20",
+ "updated": "2022-04-13",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2302.03084v2",
+ "title": "Pic2Word: Mapping Pictures to Words for Zero-shot Composed Image Retrieval",
+ "abstract": "In Composed Image Retrieval (CIR), a user combines a query image with text to\ndescribe their intended target. Existing methods rely on supervised learning of\nCIR models using labeled triplets consisting of the query image, text\nspecification, and the target image. Labeling such triplets is expensive and\nhinders broad applicability of CIR. In this work, we propose to study an\nimportant task, Zero-Shot Composed Image Retrieval (ZS-CIR), whose goal is to\nbuild a CIR model without requiring labeled triplets for training. To this end,\nwe propose a novel method, called Pic2Word, that requires only weakly labeled\nimage-caption pairs and unlabeled image datasets to train. Unlike existing\nsupervised CIR models, our model trained on weakly labeled or unlabeled\ndatasets shows strong generalization across diverse ZS-CIR tasks, e.g.,\nattribute editing, object composition, and domain conversion. Our approach\noutperforms several supervised CIR methods on the common CIR benchmark, CIRR\nand Fashion-IQ. Code will be made publicly available at\nhttps://github.com/google-research/composed_image_retrieval.",
+ "authors": "Kuniaki Saito, Kihyuk Sohn, Xiang Zhang, Chun-Liang Li, Chen-Yu Lee, Kate Saenko, Tomas Pfister",
+ "published": "2023-02-06",
+ "updated": "2023-05-15",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2112.03162v2",
+ "title": "Embedding Arithmetic of Multimodal Queries for Image Retrieval",
+ "abstract": "Latent text representations exhibit geometric regularities, such as the\nfamous analogy: queen is to king what woman is to man. Such structured semantic\nrelations were not demonstrated on image representations. Recent works aiming\nat bridging this semantic gap embed images and text into a multimodal space,\nenabling the transfer of text-defined transformations to the image modality. We\nintroduce the SIMAT dataset to evaluate the task of Image Retrieval with\nMultimodal queries. SIMAT contains 6k images and 18k textual transformation\nqueries that aim at either replacing scene elements or changing pairwise\nrelationships between scene elements. The goal is to retrieve an image\nconsistent with the (source image, text transformation) query. We use an\nimage/text matching oracle (OSCAR) to assess whether the image transformation\nis successful. The SIMAT dataset will be publicly available. We use SIMAT to\nevaluate the geometric properties of multimodal embedding spaces trained with\nan image/text matching objective, like CLIP. We show that vanilla CLIP\nembeddings are not very well suited to transform images with delta vectors, but\nthat a simple finetuning on the COCO dataset can bring dramatic improvements.\nWe also study whether it is beneficial to leverage pretrained universal\nsentence encoders (FastText, LASER and LaBSE).",
+ "authors": "Guillaume Couairon, Matthieu Cord, Matthijs Douze, Holger Schwenk",
+ "published": "2021-12-06",
+ "updated": "2022-10-20",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV",
+ "cs.CL"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/1812.07119v1",
+ "title": "Composing Text and Image for Image Retrieval - An Empirical Odyssey",
+ "abstract": "In this paper, we study the task of image retrieval, where the input query is\nspecified in the form of an image plus some text that describes desired\nmodifications to the input image. For example, we may present an image of the\nEiffel tower, and ask the system to find images which are visually similar but\nare modified in small ways, such as being taken at nighttime instead of during\nthe day. To tackle this task, we learn a similarity metric between a target\nimage and a source image plus source text, an embedding and composing function\nsuch that target image feature is close to the source image plus text\ncomposition feature. We propose a new way to combine image and text using such\nfunction that is designed for the retrieval task. We show this outperforms\nexisting approaches on 3 different datasets, namely Fashion-200k, MIT-States\nand a new synthetic dataset we create based on CLEVR. We also show that our\napproach can be used to classify input queries, in addition to image retrieval.",
+ "authors": "Nam Vo, Lu Jiang, Chen Sun, Kevin Murphy, Li-Jia Li, Li Fei-Fei, James Hays",
+ "published": "2018-12-18",
+ "updated": "2018-12-18",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2304.08485v2",
+ "title": "Visual Instruction Tuning",
+ "abstract": "Instruction tuning large language models (LLMs) using machine-generated\ninstruction-following data has improved zero-shot capabilities on new tasks,\nbut the idea is less explored in the multimodal field. In this paper, we\npresent the first attempt to use language-only GPT-4 to generate multimodal\nlanguage-image instruction-following data. By instruction tuning on such\ngenerated data, we introduce LLaVA: Large Language and Vision Assistant, an\nend-to-end trained large multimodal model that connects a vision encoder and\nLLM for general-purpose visual and language understanding.Our early experiments\nshow that LLaVA demonstrates impressive multimodel chat abilities, sometimes\nexhibiting the behaviors of multimodal GPT-4 on unseen images/instructions, and\nyields a 85.1% relative score compared with GPT-4 on a synthetic multimodal\ninstruction-following dataset. When fine-tuned on Science QA, the synergy of\nLLaVA and GPT-4 achieves a new state-of-the-art accuracy of 92.53%. We make\nGPT-4 generated visual instruction tuning data, our model and code base\npublicly available.",
+ "authors": "Haotian Liu, Chunyuan Li, Qingyang Wu, Yong Jae Lee",
+ "published": "2023-04-17",
+ "updated": "2023-12-11",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV",
+ "cs.AI",
+ "cs.CL",
+ "cs.LG"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2106.09685v2",
+ "title": "LoRA: Low-Rank Adaptation of Large Language Models",
+ "abstract": "An important paradigm of natural language processing consists of large-scale\npre-training on general domain data and adaptation to particular tasks or\ndomains. As we pre-train larger models, full fine-tuning, which retrains all\nmodel parameters, becomes less feasible. Using GPT-3 175B as an example --\ndeploying independent instances of fine-tuned models, each with 175B\nparameters, is prohibitively expensive. We propose Low-Rank Adaptation, or\nLoRA, which freezes the pre-trained model weights and injects trainable rank\ndecomposition matrices into each layer of the Transformer architecture, greatly\nreducing the number of trainable parameters for downstream tasks. Compared to\nGPT-3 175B fine-tuned with Adam, LoRA can reduce the number of trainable\nparameters by 10,000 times and the GPU memory requirement by 3 times. LoRA\nperforms on-par or better than fine-tuning in model quality on RoBERTa,\nDeBERTa, GPT-2, and GPT-3, despite having fewer trainable parameters, a higher\ntraining throughput, and, unlike adapters, no additional inference latency. We\nalso provide an empirical investigation into rank-deficiency in language model\nadaptation, which sheds light on the efficacy of LoRA. We release a package\nthat facilitates the integration of LoRA with PyTorch models and provide our\nimplementations and model checkpoints for RoBERTa, DeBERTa, and GPT-2 at\nhttps://github.com/microsoft/LoRA.",
+ "authors": "Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen",
+ "published": "2021-06-17",
+ "updated": "2021-10-16",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.LG"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/1910.10683v4",
+ "title": "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer",
+ "abstract": "Transfer learning, where a model is first pre-trained on a data-rich task\nbefore being fine-tuned on a downstream task, has emerged as a powerful\ntechnique in natural language processing (NLP). The effectiveness of transfer\nlearning has given rise to a diversity of approaches, methodology, and\npractice. In this paper, we explore the landscape of transfer learning\ntechniques for NLP by introducing a unified framework that converts all\ntext-based language problems into a text-to-text format. Our systematic study\ncompares pre-training objectives, architectures, unlabeled data sets, transfer\napproaches, and other factors on dozens of language understanding tasks. By\ncombining the insights from our exploration with scale and our new ``Colossal\nClean Crawled Corpus'', we achieve state-of-the-art results on many benchmarks\ncovering summarization, question answering, text classification, and more. To\nfacilitate future work on transfer learning for NLP, we release our data set,\npre-trained models, and code.",
+ "authors": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu",
+ "published": "2019-10-23",
+ "updated": "2023-09-19",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.CL",
+ "stat.ML"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.06852v2",
+ "title": "ChemLLM: A Chemical Large Language Model",
+ "abstract": "Large language models (LLMs) have made impressive progress in chemistry\napplications. However, the community lacks an LLM specifically designed for\nchemistry. The main challenges are two-fold: firstly, most chemical data and\nscientific knowledge are stored in structured databases, which limits the\nmodel's ability to sustain coherent dialogue when used directly. Secondly,\nthere is an absence of objective and fair benchmark that encompass most\nchemistry tasks. Here, we introduce ChemLLM, a comprehensive framework that\nfeatures the first LLM dedicated to chemistry. It also includes ChemData, a\ndataset specifically designed for instruction tuning, and ChemBench, a robust\nbenchmark covering nine essential chemistry tasks. ChemLLM is adept at\nperforming various tasks across chemical disciplines with fluid dialogue\ninteraction. Notably, ChemLLM achieves results comparable to GPT-4 on the core\nchemical tasks and demonstrates competitive performance with LLMs of similar\nsize in general scenarios. ChemLLM paves a new path for exploration in chemical\nstudies, and our method of incorporating structured chemical knowledge into\ndialogue systems sets a new standard for developing LLMs in various scientific\nfields. Codes, Datasets, and Model weights are publicly accessible at\nhttps://hf.co/AI4Chem",
+ "authors": "Di Zhang, Wei Liu, Qian Tan, Jingdan Chen, Hang Yan, Yuliang Yan, Jiatong Li, Weiran Huang, Xiangyu Yue, Wanli Ouyang, Dongzhan Zhou, Shufei Zhang, Mao Su, Han-Sen Zhong, Yuqiang Li",
+ "published": "2024-02-10",
+ "updated": "2024-04-25",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI",
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2307.15997v1",
+ "title": "RoCar: A Relationship Network-based Evaluation Method to Large Language Models",
+ "abstract": "Large language models (LLMs) have received increasing attention. However, due\nto the complexity of its capabilities, how to rationally evaluate the\ncapabilities of LLMs is still a task to be solved. We propose the RoCar method,\nwhich utilizes the defined basic schemas to randomly construct a task graph and\ngenerates natural language evaluation tasks based on the task graph to evaluate\nthe reasoning and memory abilities of LLMs respectively. Due to the very large\nrandomness of the task construction process, it is possible to ensure that none\nof the LLMs to be tested has directly learned the evaluation tasks,\nguaranteeing the fairness of the evaluation method.",
+ "authors": "Ming Wang, Wenfang Wu, Chongyun Gao, Daling Wang, Shi Feng, Yifei Zhang",
+ "published": "2023-07-29",
+ "updated": "2023-07-29",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.19465v1",
+ "title": "Towards Tracing Trustworthiness Dynamics: Revisiting Pre-training Period of Large Language Models",
+ "abstract": "Ensuring the trustworthiness of large language models (LLMs) is crucial. Most\nstudies concentrate on fully pre-trained LLMs to better understand and improve\nLLMs' trustworthiness. In this paper, to reveal the untapped potential of\npre-training, we pioneer the exploration of LLMs' trustworthiness during this\nperiod, focusing on five key dimensions: reliability, privacy, toxicity,\nfairness, and robustness. To begin with, we apply linear probing to LLMs. The\nhigh probing accuracy suggests that \\textit{LLMs in early pre-training can\nalready distinguish concepts in each trustworthiness dimension}. Therefore, to\nfurther uncover the hidden possibilities of pre-training, we extract steering\nvectors from a LLM's pre-training checkpoints to enhance the LLM's\ntrustworthiness. Finally, inspired by~\\citet{choi2023understanding} that mutual\ninformation estimation is bounded by linear probing accuracy, we also probe\nLLMs with mutual information to investigate the dynamics of trustworthiness\nduring pre-training. We are the first to observe a similar two-phase\nphenomenon: fitting and compression~\\citep{shwartz2017opening}. This research\nprovides an initial exploration of trustworthiness modeling during LLM\npre-training, seeking to unveil new insights and spur further developments in\nthe field. We will make our code publicly accessible at\n\\url{https://github.com/ChnQ/TracingLLM}.",
+ "authors": "Chen Qian, Jie Zhang, Wei Yao, Dongrui Liu, Zhenfei Yin, Yu Qiao, Yong Liu, Jing Shao",
+ "published": "2024-02-29",
+ "updated": "2024-02-29",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.04489v1",
+ "title": "De-amplifying Bias from Differential Privacy in Language Model Fine-tuning",
+ "abstract": "Fairness and privacy are two important values machine learning (ML)\npractitioners often seek to operationalize in models. Fairness aims to reduce\nmodel bias for social/demographic sub-groups. Privacy via differential privacy\n(DP) mechanisms, on the other hand, limits the impact of any individual's\ntraining data on the resulting model. The trade-offs between privacy and\nfairness goals of trustworthy ML pose a challenge to those wishing to address\nboth. We show that DP amplifies gender, racial, and religious bias when\nfine-tuning large language models (LLMs), producing models more biased than\nones fine-tuned without DP. We find the cause of the amplification to be a\ndisparity in convergence of gradients across sub-groups. Through the case of\nbinary gender bias, we demonstrate that Counterfactual Data Augmentation (CDA),\na known method for addressing bias, also mitigates bias amplification by DP. As\na consequence, DP and CDA together can be used to fine-tune models while\nmaintaining both fairness and privacy.",
+ "authors": "Sanjari Srivastava, Piotr Mardziel, Zhikhun Zhang, Archana Ahlawat, Anupam Datta, John C Mitchell",
+ "published": "2024-02-07",
+ "updated": "2024-02-07",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.CR",
+ "cs.CY",
+ "stat.ME"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2303.01248v3",
+ "title": "Can ChatGPT Assess Human Personalities? A General Evaluation Framework",
+ "abstract": "Large Language Models (LLMs) especially ChatGPT have produced impressive\nresults in various areas, but their potential human-like psychology is still\nlargely unexplored. Existing works study the virtual personalities of LLMs but\nrarely explore the possibility of analyzing human personalities via LLMs. This\npaper presents a generic evaluation framework for LLMs to assess human\npersonalities based on Myers Briggs Type Indicator (MBTI) tests. Specifically,\nwe first devise unbiased prompts by randomly permuting options in MBTI\nquestions and adopt the average testing result to encourage more impartial\nanswer generation. Then, we propose to replace the subject in question\nstatements to enable flexible queries and assessments on different subjects\nfrom LLMs. Finally, we re-formulate the question instructions in a manner of\ncorrectness evaluation to facilitate LLMs to generate clearer responses. The\nproposed framework enables LLMs to flexibly assess personalities of different\ngroups of people. We further propose three evaluation metrics to measure the\nconsistency, robustness, and fairness of assessment results from\nstate-of-the-art LLMs including ChatGPT and GPT-4. Our experiments reveal\nChatGPT's ability to assess human personalities, and the average results\ndemonstrate that it can achieve more consistent and fairer assessments in spite\nof lower robustness against prompt biases compared with InstructGPT.",
+ "authors": "Haocong Rao, Cyril Leung, Chunyan Miao",
+ "published": "2023-03-01",
+ "updated": "2023-10-13",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.15215v1",
+ "title": "Item-side Fairness of Large Language Model-based Recommendation System",
+ "abstract": "Recommendation systems for Web content distribution intricately connect to\nthe information access and exposure opportunities for vulnerable populations.\nThe emergence of Large Language Models-based Recommendation System (LRS) may\nintroduce additional societal challenges to recommendation systems due to the\ninherent biases in Large Language Models (LLMs). From the perspective of\nitem-side fairness, there remains a lack of comprehensive investigation into\nthe item-side fairness of LRS given the unique characteristics of LRS compared\nto conventional recommendation systems. To bridge this gap, this study examines\nthe property of LRS with respect to item-side fairness and reveals the\ninfluencing factors of both historical users' interactions and inherent\nsemantic biases of LLMs, shedding light on the need to extend conventional\nitem-side fairness methods for LRS. Towards this goal, we develop a concise and\neffective framework called IFairLRS to enhance the item-side fairness of an\nLRS. IFairLRS covers the main stages of building an LRS with specifically\nadapted strategies to calibrate the recommendations of LRS. We utilize IFairLRS\nto fine-tune LLaMA, a representative LLM, on \\textit{MovieLens} and\n\\textit{Steam} datasets, and observe significant item-side fairness\nimprovements. The code can be found in\nhttps://github.com/JiangM-C/IFairLRS.git.",
+ "authors": "Meng Jiang, Keqin Bao, Jizhi Zhang, Wenjie Wang, Zhengyi Yang, Fuli Feng, Xiangnan He",
+ "published": "2024-02-23",
+ "updated": "2024-02-23",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2308.05374v2",
+ "title": "Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment",
+ "abstract": "Ensuring alignment, which refers to making models behave in accordance with\nhuman intentions [1,2], has become a critical task before deploying large\nlanguage models (LLMs) in real-world applications. For instance, OpenAI devoted\nsix months to iteratively aligning GPT-4 before its release [3]. However, a\nmajor challenge faced by practitioners is the lack of clear guidance on\nevaluating whether LLM outputs align with social norms, values, and\nregulations. This obstacle hinders systematic iteration and deployment of LLMs.\nTo address this issue, this paper presents a comprehensive survey of key\ndimensions that are crucial to consider when assessing LLM trustworthiness. The\nsurvey covers seven major categories of LLM trustworthiness: reliability,\nsafety, fairness, resistance to misuse, explainability and reasoning, adherence\nto social norms, and robustness. Each major category is further divided into\nseveral sub-categories, resulting in a total of 29 sub-categories.\nAdditionally, a subset of 8 sub-categories is selected for further\ninvestigation, where corresponding measurement studies are designed and\nconducted on several widely-used LLMs. The measurement results indicate that,\nin general, more aligned models tend to perform better in terms of overall\ntrustworthiness. However, the effectiveness of alignment varies across the\ndifferent trustworthiness categories considered. This highlights the importance\nof conducting more fine-grained analyses, testing, and making continuous\nimprovements on LLM alignment. By shedding light on these key dimensions of LLM\ntrustworthiness, this paper aims to provide valuable insights and guidance to\npractitioners in the field. Understanding and addressing these concerns will be\ncrucial in achieving reliable and ethically sound deployment of LLMs in various\napplications.",
+ "authors": "Yang Liu, Yuanshun Yao, Jean-Francois Ton, Xiaoying Zhang, Ruocheng Guo, Hao Cheng, Yegor Klochkov, Muhammad Faaiz Taufiq, Hang Li",
+ "published": "2023-08-10",
+ "updated": "2024-03-21",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.08189v1",
+ "title": "Simulating Human Strategic Behavior: Comparing Single and Multi-agent LLMs",
+ "abstract": "When creating plans, policies, or applications for people, it is challenging\nfor designers to think through the strategic ways that different people will\nbehave. Recently, Large Language Models (LLMs) have been shown to create\nrealistic simulations of human-like behavior based on personas. We build on\nthis to investigate whether LLMs can simulate human strategic behavior. Human\nstrategies are complex because they take into account social norms in addition\nto aiming to maximize personal gain. The ultimatum game is a classic economics\nexperiment used to understand human strategic behavior in a social setting. It\nshows that people will often choose to \"punish\" other players to enforce social\nnorms rather than to maximize personal profits. We test whether LLMs can\nreplicate this complex behavior in simulations. We compare two architectures:\nsingle- and multi-agent LLMs. We compare their abilities to (1) simulate\nhuman-like actions in the ultimatum game, (2) simulate two player\npersonalities, greedy and fair, and (3) create robust strategies that are\nlogically complete and consistent with personality. Our evaluation shows the\nmulti-agent architecture is much more accurate than single LLMs (88% vs. 50%)\nin simulating human strategy creation and actions for personality pairs. Thus\nthere is potential to use LLMs to simulate human strategic behavior to help\ndesigners, planners, and policymakers perform preliminary exploration of how\npeople behave in systems.",
+ "authors": "Karthik Sreedhar, Lydia Chilton",
+ "published": "2024-02-13",
+ "updated": "2024-02-13",
+ "primary_cat": "cs.HC",
+ "cats": [
+ "cs.HC"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.08472v1",
+ "title": "Selecting Shots for Demographic Fairness in Few-Shot Learning with Large Language Models",
+ "abstract": "Recently, work in NLP has shifted to few-shot (in-context) learning, with\nlarge language models (LLMs) performing well across a range of tasks. However,\nwhile fairness evaluations have become a standard for supervised methods,\nlittle is known about the fairness of LLMs as prediction systems. Further,\ncommon standard methods for fairness involve access to models weights or are\napplied during finetuning, which are not applicable in few-shot learning. Do\nLLMs exhibit prediction biases when used for standard NLP tasks? In this work,\nwe explore the effect of shots, which directly affect the performance of\nmodels, on the fairness of LLMs as NLP classification systems. We consider how\ndifferent shot selection strategies, both existing and new demographically\nsensitive methods, affect model fairness across three standard fairness\ndatasets. We discuss how future work can include LLM fairness evaluations.",
+ "authors": "Carlos Aguirre, Kuleen Sasse, Isabel Cachola, Mark Dredze",
+ "published": "2023-11-14",
+ "updated": "2023-11-14",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.15491v1",
+ "title": "Open Source Conversational LLMs do not know most Spanish words",
+ "abstract": "The growing interest in Large Language Models (LLMs) and in particular in\nconversational models with which users can interact has led to the development\nof a large number of open-source chat LLMs. These models are evaluated on a\nwide range of benchmarks to assess their capabilities in answering questions or\nsolving problems on almost any possible topic or to test their ability to\nreason or interpret texts. Instead, the evaluation of the knowledge that these\nmodels have of the languages has received much less attention. For example, the\nwords that they can recognize and use in different languages. In this paper, we\nevaluate the knowledge that open-source chat LLMs have of Spanish words by\ntesting a sample of words in a reference dictionary. The results show that\nopen-source chat LLMs produce incorrect meanings for an important fraction of\nthe words and are not able to use most of the words correctly to write\nsentences with context. These results show how Spanish is left behind in the\nopen-source LLM race and highlight the need to push for linguistic fairness in\nconversational LLMs ensuring that they provide similar performance across\nlanguages.",
+ "authors": "Javier Conde, Miguel Gonz\u00e1lez, Nina Melero, Raquel Ferrando, Gonzalo Mart\u00ednez, Elena Merino-G\u00f3mez, Jos\u00e9 Alberto Hern\u00e1ndez, Pedro Reviriego",
+ "published": "2024-03-21",
+ "updated": "2024-03-21",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2304.03728v1",
+ "title": "Interpretable Unified Language Checking",
+ "abstract": "Despite recent concerns about undesirable behaviors generated by large\nlanguage models (LLMs), including non-factual, biased, and hateful language, we\nfind LLMs are inherent multi-task language checkers based on their latent\nrepresentations of natural and social knowledge. We present an interpretable,\nunified, language checking (UniLC) method for both human and machine-generated\nlanguage that aims to check if language input is factual and fair. While\nfairness and fact-checking tasks have been handled separately with dedicated\nmodels, we find that LLMs can achieve high performance on a combination of\nfact-checking, stereotype detection, and hate speech detection tasks with a\nsimple, few-shot, unified set of prompts. With the ``1/2-shot'' multi-task\nlanguage checking method proposed in this work, the GPT3.5-turbo model\noutperforms fully supervised baselines on several language tasks. The simple\napproach and results suggest that based on strong latent knowledge\nrepresentations, an LLM can be an adaptive and explainable tool for detecting\nmisinformation, stereotypes, and hate speech.",
+ "authors": "Tianhua Zhang, Hongyin Luo, Yung-Sung Chuang, Wei Fang, Luc Gaitskell, Thomas Hartvigsen, Xixin Wu, Danny Fox, Helen Meng, James Glass",
+ "published": "2023-04-07",
+ "updated": "2023-04-07",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.05694v1",
+ "title": "A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics",
+ "abstract": "The utilization of large language models (LLMs) in the Healthcare domain has\ngenerated both excitement and concern due to their ability to effectively\nrespond to freetext queries with certain professional knowledge. This survey\noutlines the capabilities of the currently developed LLMs for Healthcare and\nexplicates their development process, with the aim of providing an overview of\nthe development roadmap from traditional Pretrained Language Models (PLMs) to\nLLMs. Specifically, we first explore the potential of LLMs to enhance the\nefficiency and effectiveness of various Healthcare applications highlighting\nboth the strengths and limitations. Secondly, we conduct a comparison between\nthe previous PLMs and the latest LLMs, as well as comparing various LLMs with\neach other. Then we summarize related Healthcare training data, training\nmethods, optimization strategies, and usage. Finally, the unique concerns\nassociated with deploying LLMs in Healthcare settings are investigated,\nparticularly regarding fairness, accountability, transparency and ethics. Our\nsurvey provide a comprehensive investigation from perspectives of both computer\nscience and Healthcare specialty. Besides the discussion about Healthcare\nconcerns, we supports the computer science community by compiling a collection\nof open source resources, such as accessible datasets, the latest\nmethodologies, code implementations, and evaluation benchmarks in the Github.\nSummarily, we contend that a significant paradigm shift is underway,\ntransitioning from PLMs to LLMs. This shift encompasses a move from\ndiscriminative AI approaches to generative AI approaches, as well as a shift\nfrom model-centered methodologies to datacentered methodologies.",
+ "authors": "Kai He, Rui Mao, Qika Lin, Yucheng Ruan, Xiang Lan, Mengling Feng, Erik Cambria",
+ "published": "2023-10-09",
+ "updated": "2023-10-09",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.18333v3",
+ "title": "She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models",
+ "abstract": "As the use of large language models (LLMs) increases within society, as does\nthe risk of their misuse. Appropriate safeguards must be in place to ensure LLM\noutputs uphold the ethical standards of society, highlighting the positive role\nthat artificial intelligence technologies can have. Recent events indicate\nethical concerns around conventionally trained LLMs, leading to overall unsafe\nuser experiences. This motivates our research question: how do we ensure LLM\nalignment? In this work, we introduce a test suite of unique prompts to foster\nthe development of aligned LLMs that are fair, safe, and robust. We show that\nprompting LLMs at every step of the development pipeline, including data\ncuration, pre-training, and fine-tuning, will result in an overall more\nresponsible model. Our test suite evaluates outputs from four state-of-the-art\nlanguage models: GPT-3.5, GPT-4, OPT, and LLaMA-2. The assessment presented in\nthis paper highlights a gap between societal alignment and the capabilities of\ncurrent LLMs. Additionally, implementing a test suite such as ours lowers the\nenvironmental overhead of making models safe and fair.",
+ "authors": "Veronica Chatrath, Oluwanifemi Bamgbose, Shaina Raza",
+ "published": "2023-10-20",
+ "updated": "2023-12-15",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.13343v1",
+ "title": "Challenges and Contributing Factors in the Utilization of Large Language Models (LLMs)",
+ "abstract": "With the development of large language models (LLMs) like the GPT series,\ntheir widespread use across various application scenarios presents a myriad of\nchallenges. This review initially explores the issue of domain specificity,\nwhere LLMs may struggle to provide precise answers to specialized questions\nwithin niche fields. The problem of knowledge forgetting arises as these LLMs\nmight find it hard to balance old and new information. The knowledge repetition\nphenomenon reveals that sometimes LLMs might deliver overly mechanized\nresponses, lacking depth and originality. Furthermore, knowledge illusion\ndescribes situations where LLMs might provide answers that seem insightful but\nare actually superficial, while knowledge toxicity focuses on harmful or biased\ninformation outputs. These challenges underscore problems in the training data\nand algorithmic design of LLMs. To address these issues, it's suggested to\ndiversify training data, fine-tune models, enhance transparency and\ninterpretability, and incorporate ethics and fairness training. Future\ntechnological trends might lean towards iterative methodologies, multimodal\nlearning, model personalization and customization, and real-time learning and\nfeedback mechanisms. In conclusion, future LLMs should prioritize fairness,\ntransparency, and ethics, ensuring they uphold high moral and ethical standards\nwhen serving humanity.",
+ "authors": "Xiaoliang Chen, Liangbin Li, Le Chang, Yunhe Huang, Yuxuan Zhao, Yuxiao Zhang, Dinuo Li",
+ "published": "2023-10-20",
+ "updated": "2023-10-20",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.00811v1",
+ "title": "Cognitive Bias in High-Stakes Decision-Making with LLMs",
+ "abstract": "Large language models (LLMs) offer significant potential as tools to support\nan expanding range of decision-making tasks. However, given their training on\nhuman (created) data, LLMs can inherit both societal biases against protected\ngroups, as well as be subject to cognitive bias. Such human-like bias can\nimpede fair and explainable decisions made with LLM assistance. Our work\nintroduces BiasBuster, a framework designed to uncover, evaluate, and mitigate\ncognitive bias in LLMs, particularly in high-stakes decision-making tasks.\nInspired by prior research in psychology and cognitive sciences, we develop a\ndataset containing 16,800 prompts to evaluate different cognitive biases (e.g.,\nprompt-induced, sequential, inherent). We test various bias mitigation\nstrategies, amidst proposing a novel method using LLMs to debias their own\nprompts. Our analysis provides a comprehensive picture on the presence and\neffects of cognitive bias across different commercial and open-source models.\nWe demonstrate that our self-help debiasing effectively mitigate cognitive bias\nwithout having to manually craft examples for each bias type.",
+ "authors": "Jessica Echterhoff, Yao Liu, Abeer Alessa, Julian McAuley, Zexue He",
+ "published": "2024-02-25",
+ "updated": "2024-02-25",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI",
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.04892v2",
+ "title": "Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs",
+ "abstract": "Recent works have showcased the ability of LLMs to embody diverse personas in\ntheir responses, exemplified by prompts like 'You are Yoda. Explain the Theory\nof Relativity.' While this ability allows personalization of LLMs and enables\nhuman behavior simulation, its effect on LLMs' capabilities remains unclear. To\nfill this gap, we present the first extensive study of the unintended\nside-effects of persona assignment on the ability of LLMs to perform basic\nreasoning tasks. Our study covers 24 reasoning datasets, 4 LLMs, and 19 diverse\npersonas (e.g. an Asian person) spanning 5 socio-demographic groups. Our\nexperiments unveil that LLMs harbor deep rooted bias against various\nsocio-demographics underneath a veneer of fairness. While they overtly reject\nstereotypes when explicitly asked ('Are Black people less skilled at\nmathematics?'), they manifest stereotypical and erroneous presumptions when\nasked to answer questions while adopting a persona. These can be observed as\nabstentions in responses, e.g., 'As a Black person, I can't answer this\nquestion as it requires math knowledge', and generally result in a substantial\nperformance drop. Our experiments with ChatGPT-3.5 show that this bias is\nubiquitous - 80% of our personas demonstrate bias; it is significant - some\ndatasets show performance drops of 70%+; and can be especially harmful for\ncertain groups - some personas suffer statistically significant drops on 80%+\nof the datasets. Overall, all 4 LLMs exhibit this bias to varying extents, with\nGPT-4-Turbo showing the least but still a problematic amount of bias (evident\nin 42% of the personas). Further analysis shows that these persona-induced\nerrors can be hard-to-discern and hard-to-avoid. Our findings serve as a\ncautionary tale that the practice of assigning personas to LLMs - a trend on\nthe rise - can surface their deep-rooted biases and have unforeseeable and\ndetrimental side-effects.",
+ "authors": "Shashank Gupta, Vaishnavi Shrivastava, Ameet Deshpande, Ashwin Kalyan, Peter Clark, Ashish Sabharwal, Tushar Khot",
+ "published": "2023-11-08",
+ "updated": "2024-01-27",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2312.15478v1",
+ "title": "A Group Fairness Lens for Large Language Models",
+ "abstract": "The rapid advancement of large language models has revolutionized various\napplications but also raised crucial concerns about their potential to\nperpetuate biases and unfairness when deployed in social media contexts.\nEvaluating LLMs' potential biases and fairness has become crucial, as existing\nmethods rely on limited prompts focusing on just a few groups, lacking a\ncomprehensive categorical perspective. In this paper, we propose evaluating LLM\nbiases from a group fairness lens using a novel hierarchical schema\ncharacterizing diverse social groups. Specifically, we construct a dataset,\nGFair, encapsulating target-attribute combinations across multiple dimensions.\nIn addition, we introduce statement organization, a new open-ended text\ngeneration task, to uncover complex biases in LLMs. Extensive evaluations of\npopular LLMs reveal inherent safety concerns. To mitigate the biases of LLM\nfrom a group fairness perspective, we pioneer a novel chain-of-thought method\nGF-Think to mitigate biases of LLMs from a group fairness perspective.\nExperimental results demonstrate its efficacy in mitigating bias in LLMs to\nachieve fairness.",
+ "authors": "Guanqun Bi, Lei Shen, Yuqiang Xie, Yanan Cao, Tiangang Zhu, Xiaodong He",
+ "published": "2023-12-24",
+ "updated": "2023-12-24",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.08780v1",
+ "title": "\"Im not Racist but...\": Discovering Bias in the Internal Knowledge of Large Language Models",
+ "abstract": "Large language models (LLMs) have garnered significant attention for their\nremarkable performance in a continuously expanding set of natural language\nprocessing tasks. However, these models have been shown to harbor inherent\nsocietal biases, or stereotypes, which can adversely affect their performance\nin their many downstream applications. In this paper, we introduce a novel,\npurely prompt-based approach to uncover hidden stereotypes within any arbitrary\nLLM. Our approach dynamically generates a knowledge representation of internal\nstereotypes, enabling the identification of biases encoded within the LLM's\ninternal knowledge. By illuminating the biases present in LLMs and offering a\nsystematic methodology for their analysis, our work contributes to advancing\ntransparency and promoting fairness in natural language processing systems.",
+ "authors": "Abel Salinas, Louis Penafiel, Robert McCormack, Fred Morstatter",
+ "published": "2023-10-13",
+ "updated": "2023-10-13",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.18140v1",
+ "title": "ROBBIE: Robust Bias Evaluation of Large Generative Language Models",
+ "abstract": "As generative large language models (LLMs) grow more performant and\nprevalent, we must develop comprehensive enough tools to measure and improve\ntheir fairness. Different prompt-based datasets can be used to measure social\nbias across multiple text domains and demographic axes, meaning that testing\nLLMs on more datasets can potentially help us characterize their biases more\nfully, and better ensure equal and equitable treatment of marginalized\ndemographic groups. In this work, our focus is two-fold:\n (1) Benchmarking: a comparison of 6 different prompt-based bias and toxicity\nmetrics across 12 demographic axes and 5 families of generative LLMs. Out of\nthose 6 metrics, AdvPromptSet and HolisticBiasR are novel datasets proposed in\nthe paper. The comparison of those benchmarks gives us insights about the bias\nand toxicity of the compared models. Therefore, we explore the frequency of\ndemographic terms in common LLM pre-training corpora and how this may relate to\nmodel biases.\n (2) Mitigation: we conduct a comprehensive study of how well 3 bias/toxicity\nmitigation techniques perform across our suite of measurements. ROBBIE aims to\nprovide insights for practitioners while deploying a model, emphasizing the\nneed to not only measure potential harms, but also understand how they arise by\ncharacterizing the data, mitigate harms once found, and balance any trade-offs.\nWe open-source our analysis code in hopes of encouraging broader measurements\nof bias in future LLMs.",
+ "authors": "David Esiobu, Xiaoqing Tan, Saghar Hosseini, Megan Ung, Yuchen Zhang, Jude Fernandes, Jane Dwivedi-Yu, Eleonora Presani, Adina Williams, Eric Michael Smith",
+ "published": "2023-11-29",
+ "updated": "2023-11-29",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2309.09397v1",
+ "title": "Do Large GPT Models Discover Moral Dimensions in Language Representations? A Topological Study Of Sentence Embeddings",
+ "abstract": "As Large Language Models are deployed within Artificial Intelligence systems,\nthat are increasingly integrated with human society, it becomes more important\nthan ever to study their internal structures. Higher level abilities of LLMs\nsuch as GPT-3.5 emerge in large part due to informative language\nrepresentations they induce from raw text data during pre-training on trillions\nof words. These embeddings exist in vector spaces of several thousand\ndimensions, and their processing involves mapping between multiple vector\nspaces, with total number of parameters on the order of trillions. Furthermore,\nthese language representations are induced by gradient optimization, resulting\nin a black box system that is hard to interpret. In this paper, we take a look\nat the topological structure of neuronal activity in the \"brain\" of Chat-GPT's\nfoundation language model, and analyze it with respect to a metric representing\nthe notion of fairness. We develop a novel approach to visualize GPT's moral\ndimensions. We first compute a fairness metric, inspired by social psychology\nliterature, to identify factors that typically influence fairness assessments\nin humans, such as legitimacy, need, and responsibility. Subsequently, we\nsummarize the manifold's shape using a lower-dimensional simplicial complex,\nwhose topology is derived from this metric. We color it with a heat map\nassociated with this fairness metric, producing human-readable visualizations\nof the high-dimensional sentence manifold. Our results show that sentence\nembeddings based on GPT-3.5 can be decomposed into two submanifolds\ncorresponding to fair and unfair moral judgments. This indicates that GPT-based\nlanguage models develop a moral dimension within their representation spaces\nand induce an understanding of fairness during their training process.",
+ "authors": "Stephen Fitz",
+ "published": "2023-09-17",
+ "updated": "2023-09-17",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.CY",
+ "cs.LG",
+ "cs.NE"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2312.14769v3",
+ "title": "Large Language Model (LLM) Bias Index -- LLMBI",
+ "abstract": "The Large Language Model Bias Index (LLMBI) is a pioneering approach designed\nto quantify and address biases inherent in large language models (LLMs), such\nas GPT-4. We recognise the increasing prevalence and impact of LLMs across\ndiverse sectors. This research introduces a novel metric, LLMBI, to\nsystematically measure and mitigate biases potentially skewing model responses.\nWe formulated LLMBI using a composite scoring system incorporating multiple\ndimensions of bias, including but not limited to age, gender, and racial\nbiases. To operationalise this metric, we engaged in a multi-step process\ninvolving collecting and annotating LLM responses, applying sophisticated\nNatural Language Processing (NLP) techniques for bias detection, and computing\nthe LLMBI score through a specially crafted mathematical formula. The formula\nintegrates weighted averages of various bias dimensions, a penalty for dataset\ndiversity deficiencies, and a correction for sentiment biases. Our empirical\nanalysis, conducted using responses from OpenAI's API, employs advanced\nsentiment analysis as a representative method for bias detection. The research\nreveals LLMs, whilst demonstrating impressive capabilities in text generation,\nexhibit varying degrees of bias across different dimensions. LLMBI provides a\nquantifiable measure to compare biases across models and over time, offering a\nvital tool for systems engineers, researchers and regulators in enhancing the\nfairness and reliability of LLMs. It highlights the potential of LLMs in\nmimicking unbiased human-like responses. Additionally, it underscores the\nnecessity of continuously monitoring and recalibrating such models to align\nwith evolving societal norms and ethical standards.",
+ "authors": "Abiodun Finbarrs Oketunji, Muhammad Anas, Deepthi Saina",
+ "published": "2023-12-22",
+ "updated": "2023-12-29",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.CY",
+ "cs.LG",
+ "I.2.7"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.00884v2",
+ "title": "Text classification of column headers with a controlled vocabulary: leveraging LLMs for metadata enrichment",
+ "abstract": "Traditional dataset retrieval systems index on metadata information rather\nthan on the data values. Thus relying primarily on manual annotations and\nhigh-quality metadata, processes known to be labour-intensive and challenging\nto automate. We propose a method to support metadata enrichment with topic\nannotations of column headers using three Large Language Models (LLMs):\nChatGPT-3.5, GoogleBard and GoogleGemini. We investigate the LLMs ability to\nclassify column headers based on domain-specific topics from a controlled\nvocabulary. We evaluate our approach by assessing the internal consistency of\nthe LLMs, the inter-machine alignment, and the human-machine agreement for the\ntopic classification task. Additionally, we investigate the impact of\ncontextual information (i.e. dataset description) on the classification\noutcomes. Our results suggest that ChatGPT and GoogleGemini outperform\nGoogleBard for internal consistency as well as LLM-human-alignment.\nInterestingly, we found that context had no impact on the LLMs performances.\nThis work proposes a novel approach that leverages LLMs for text classification\nusing a controlled topic vocabulary, which has the potential to facilitate\nautomated metadata enrichment, thereby enhancing dataset retrieval and the\nFindability, Accessibility, Interoperability and Reusability (FAIR) of research\ndata on the Web.",
+ "authors": "Margherita Martorana, Tobias Kuhn, Lise Stork, Jacco van Ossenbruggen",
+ "published": "2024-03-01",
+ "updated": "2024-03-05",
+ "primary_cat": "cs.DB",
+ "cats": [
+ "cs.DB",
+ "cs.AI",
+ "cs.IR"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.17553v1",
+ "title": "RuBia: A Russian Language Bias Detection Dataset",
+ "abstract": "Warning: this work contains upsetting or disturbing content.\n Large language models (LLMs) tend to learn the social and cultural biases\npresent in the raw pre-training data. To test if an LLM's behavior is fair,\nfunctional datasets are employed, and due to their purpose, these datasets are\nhighly language and culture-specific. In this paper, we address a gap in the\nscope of multilingual bias evaluation by presenting a bias detection dataset\nspecifically designed for the Russian language, dubbed as RuBia. The RuBia\ndataset is divided into 4 domains: gender, nationality, socio-economic status,\nand diverse, each of the domains is further divided into multiple fine-grained\nsubdomains. Every example in the dataset consists of two sentences with the\nfirst reinforcing a potentially harmful stereotype or trope and the second\ncontradicting it. These sentence pairs were first written by volunteers and\nthen validated by native-speaking crowdsourcing workers. Overall, there are\nnearly 2,000 unique sentence pairs spread over 19 subdomains in RuBia. To\nillustrate the dataset's purpose, we conduct a diagnostic evaluation of\nstate-of-the-art or near-state-of-the-art LLMs and discuss the LLMs'\npredisposition to social biases.",
+ "authors": "Veronika Grigoreva, Anastasiia Ivanova, Ilseyar Alimova, Ekaterina Artemova",
+ "published": "2024-03-26",
+ "updated": "2024-03-26",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2312.07420v1",
+ "title": "FairSISA: Ensemble Post-Processing to Improve Fairness of Unlearning in LLMs",
+ "abstract": "Training large language models (LLMs) is a costly endeavour in terms of time\nand computational resources. The large amount of training data used during the\nunsupervised pre-training phase makes it difficult to verify all data and,\nunfortunately, undesirable data may be ingested during training. Re-training\nfrom scratch is impractical and has led to the creation of the 'unlearning'\ndiscipline where models are modified to \"unlearn\" undesirable information\nwithout retraining. However, any modification can alter the behaviour of LLMs,\nespecially on key dimensions such as fairness. This is the first work that\nexamines this interplay between unlearning and fairness for LLMs. In\nparticular, we focus on a popular unlearning framework known as SISA [Bourtoule\net al., 2021], which creates an ensemble of models trained on disjoint shards.\nWe evaluate the performance-fairness trade-off for SISA, and empirically\ndemsontrate that SISA can indeed reduce fairness in LLMs. To remedy this, we\npropose post-processing bias mitigation techniques for ensemble models produced\nby SISA. We adapt the post-processing fairness improvement technique from\n[Hardt et al., 2016] to design three methods that can handle model ensembles,\nand prove that one of the methods is an optimal fair predictor for ensemble of\nmodels. Through experimental results, we demonstrate the efficacy of our\npost-processing framework called 'FairSISA'.",
+ "authors": "Swanand Ravindra Kadhe, Anisa Halimi, Ambrish Rawat, Nathalie Baracaldo",
+ "published": "2023-12-12",
+ "updated": "2023-12-12",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.09447v2",
+ "title": "How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities",
+ "abstract": "The rapid progress in open-source Large Language Models (LLMs) is\nsignificantly driving AI development forward. However, there is still a limited\nunderstanding of their trustworthiness. Deploying these models at scale without\nsufficient trustworthiness can pose significant risks, highlighting the need to\nuncover these issues promptly. In this work, we conduct an adversarial\nassessment of open-source LLMs on trustworthiness, scrutinizing them across\neight different aspects including toxicity, stereotypes, ethics, hallucination,\nfairness, sycophancy, privacy, and robustness against adversarial\ndemonstrations. We propose advCoU, an extended Chain of Utterances-based (CoU)\nprompting strategy by incorporating carefully crafted malicious demonstrations\nfor trustworthiness attack. Our extensive experiments encompass recent and\nrepresentative series of open-source LLMs, including Vicuna, MPT, Falcon,\nMistral, and Llama 2. The empirical outcomes underscore the efficacy of our\nattack strategy across diverse aspects. More interestingly, our result analysis\nreveals that models with superior performance in general NLP tasks do not\nalways have greater trustworthiness; in fact, larger models can be more\nvulnerable to attacks. Additionally, models that have undergone instruction\ntuning, focusing on instruction following, tend to be more susceptible,\nalthough fine-tuning LLMs for safety alignment proves effective in mitigating\nadversarial trustworthiness attacks.",
+ "authors": "Lingbo Mo, Boshi Wang, Muhao Chen, Huan Sun",
+ "published": "2023-11-15",
+ "updated": "2024-04-02",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2401.04057v1",
+ "title": "Unveiling Bias in Fairness Evaluations of Large Language Models: A Critical Literature Review of Music and Movie Recommendation Systems",
+ "abstract": "The rise of generative artificial intelligence, particularly Large Language\nModels (LLMs), has intensified the imperative to scrutinize fairness alongside\naccuracy. Recent studies have begun to investigate fairness evaluations for\nLLMs within domains such as recommendations. Given that personalization is an\nintrinsic aspect of recommendation systems, its incorporation into fairness\nassessments is paramount. Yet, the degree to which current fairness evaluation\nframeworks account for personalization remains unclear. Our comprehensive\nliterature review aims to fill this gap by examining how existing frameworks\nhandle fairness evaluations of LLMs, with a focus on the integration of\npersonalization factors. Despite an exhaustive collection and analysis of\nrelevant works, we discovered that most evaluations overlook personalization, a\ncritical facet of recommendation systems, thereby inadvertently perpetuating\nunfair practices. Our findings shed light on this oversight and underscore the\nurgent need for more nuanced fairness evaluations that acknowledge\npersonalization. Such improvements are vital for fostering equitable\ndevelopment within the AI community.",
+ "authors": "Chandan Kumar Sah, Dr. Lian Xiaoli, Muhammad Mirajul Islam",
+ "published": "2024-01-08",
+ "updated": "2024-01-08",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR",
+ "cs.AI",
+ "cs.SE"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.19118v1",
+ "title": "Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate",
+ "abstract": "Modern large language models (LLMs) like ChatGPT have shown remarkable\nperformance on general language tasks but still struggle on complex reasoning\ntasks, which drives the research on cognitive behaviors of LLMs to explore\nhuman-like problem-solving strategies. Along this direction, one representative\nstrategy is self-reflection, which asks an LLM to refine the solution with the\nfeedback generated by itself iteratively. However, our study shows that such\nreflection-style methods suffer from the Degeneration-of-Thought (DoT) problem:\nonce the LLM has established confidence in its solutions, it is unable to\ngenerate novel thoughts later through reflection even if its initial stance is\nincorrect. To address the DoT problem, we propose a Multi-Agent Debate (MAD)\nframework, in which multiple agents express their arguments in the state of\n\"tit for tat\" and a judge manages the debate process to obtain a final\nsolution. Clearly, our MAD framework encourages divergent thinking in LLMs\nwhich would be helpful for tasks that require deep levels of contemplation.\nExperiment results on two challenging datasets, commonsense machine translation\nand counter-intuitive arithmetic reasoning, demonstrate the effectiveness of\nour MAD framework. Extensive analyses suggest that the adaptive break of debate\nand the modest level of \"tit for tat\" state are required for MAD to obtain good\nperformance. Moreover, we find that LLMs might not be a fair judge if different\nLLMs are used for agents. Codes:\nhttps://github.com/Skytliang/Multi-Agents-Debate",
+ "authors": "Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, Shuming Shi",
+ "published": "2023-05-30",
+ "updated": "2023-05-30",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.12150v1",
+ "title": "Your Large Language Model is Secretly a Fairness Proponent and You Should Prompt it Like One",
+ "abstract": "The widespread adoption of large language models (LLMs) underscores the\nurgent need to ensure their fairness. However, LLMs frequently present dominant\nviewpoints while ignoring alternative perspectives from minority parties,\nresulting in potential biases. We hypothesize that these fairness-violating\nbehaviors occur because LLMs express their viewpoints using a human personality\nthat represents the majority of training data. In response to this, we validate\nthat prompting LLMs with specific roles can allow LLMs to express diverse\nviewpoints. Building on this insight and observation, we develop FairThinking,\na pipeline designed to automatically generate roles that enable LLMs to\narticulate diverse perspectives for fair expressions. To evaluate FairThinking,\nwe create a dataset with a thousand items covering three fairness-related\ntopics and conduct experiments on GPT-3.5, GPT-4, Llama2, and Mistral to\ndemonstrate its superior performance.",
+ "authors": "Tianlin Li, Xiaoyu Zhang, Chao Du, Tianyu Pang, Qian Liu, Qing Guo, Chao Shen, Yang Liu",
+ "published": "2024-02-19",
+ "updated": "2024-02-19",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "I.2; J.4"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.08517v1",
+ "title": "Online Safety Analysis for LLMs: a Benchmark, an Assessment, and a Path Forward",
+ "abstract": "While Large Language Models (LLMs) have seen widespread applications across\nnumerous fields, their limited interpretability poses concerns regarding their\nsafe operations from multiple aspects, e.g., truthfulness, robustness, and\nfairness. Recent research has started developing quality assurance methods for\nLLMs, introducing techniques such as offline detector-based or uncertainty\nestimation methods. However, these approaches predominantly concentrate on\npost-generation analysis, leaving the online safety analysis for LLMs during\nthe generation phase an unexplored area. To bridge this gap, we conduct in this\nwork a comprehensive evaluation of the effectiveness of existing online safety\nanalysis methods on LLMs. We begin with a pilot study that validates the\nfeasibility of detecting unsafe outputs in the early generation process.\nFollowing this, we establish the first publicly available benchmark of online\nsafety analysis for LLMs, including a broad spectrum of methods, models, tasks,\ndatasets, and evaluation metrics. Utilizing this benchmark, we extensively\nanalyze the performance of state-of-the-art online safety analysis methods on\nboth open-source and closed-source LLMs. This analysis reveals the strengths\nand weaknesses of individual methods and offers valuable insights into\nselecting the most appropriate method based on specific application scenarios\nand task requirements. Furthermore, we also explore the potential of using\nhybridization methods, i.e., combining multiple methods to derive a collective\nsafety conclusion, to enhance the efficacy of online safety analysis for LLMs.\nOur findings indicate a promising direction for the development of innovative\nand trustworthy quality assurance methodologies for LLMs, facilitating their\nreliable deployments across diverse domains.",
+ "authors": "Xuan Xie, Jiayang Song, Zhehua Zhou, Yuheng Huang, Da Song, Lei Ma",
+ "published": "2024-04-12",
+ "updated": "2024-04-12",
+ "primary_cat": "cs.SE",
+ "cats": [
+ "cs.SE",
+ "cs.AI",
+ "cs.CL",
+ "cs.CR",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.10567v3",
+ "title": "InSaAF: Incorporating Safety through Accuracy and Fairness | Are LLMs ready for the Indian Legal Domain?",
+ "abstract": "Recent advancements in language technology and Artificial Intelligence have\nresulted in numerous Language Models being proposed to perform various tasks in\nthe legal domain ranging from predicting judgments to generating summaries.\nDespite their immense potential, these models have been proven to learn and\nexhibit societal biases and make unfair predictions. In this study, we explore\nthe ability of Large Language Models (LLMs) to perform legal tasks in the\nIndian landscape when social factors are involved. We present a novel metric,\n$\\beta$-weighted $\\textit{Legal Safety Score ($LSS_{\\beta}$)}$, which\nencapsulates both the fairness and accuracy aspects of the LLM. We assess LLMs'\nsafety by considering its performance in the $\\textit{Binary Statutory\nReasoning}$ task and its fairness exhibition with respect to various axes of\ndisparities in the Indian society. Task performance and fairness scores of\nLLaMA and LLaMA--2 models indicate that the proposed $LSS_{\\beta}$ metric can\neffectively determine the readiness of a model for safe usage in the legal\nsector. We also propose finetuning pipelines, utilising specialised legal\ndatasets, as a potential method to mitigate bias and improve model safety. The\nfinetuning procedures on LLaMA and LLaMA--2 models increase the $LSS_{\\beta}$,\nimproving their usability in the Indian legal domain. Our code is publicly\nreleased.",
+ "authors": "Yogesh Tripathi, Raghav Donakanti, Sahil Girhepuje, Ishan Kavathekar, Bhaskara Hanuma Vedula, Gokul S Krishnan, Shreya Goyal, Anmol Goel, Balaraman Ravindran, Ponnurangam Kumaraguru",
+ "published": "2024-02-16",
+ "updated": "2024-02-21",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.16343v2",
+ "title": "Evaluating, Understanding, and Improving Constrained Text Generation for Large Language Models",
+ "abstract": "Advancements in natural language generation (NLG) and large language models\n(LLMs) have led to proficient text generation in various tasks. However,\nintegrating intricate constraints into neural text generation, due to LLMs'\nopacity, remains challenging. This study investigates constrained text\ngeneration for LLMs, where predefined constraints are applied during LLM's\ngeneration process. Our research mainly focuses on mainstream open-source LLMs,\ncategorizing constraints into lexical, structural, and relation-based types. We\nalso present various benchmarks to facilitate fair evaluation. The study\naddresses some key research questions, including evaluating, understanding and\nimproving constrained text generation for LLMs. Results illuminate LLMs'\ncapacity and deficiency to incorporate constraints and provide insights for\nfuture developments in constrained text generation. Codes and datasets will be\nreleased upon acceptance.",
+ "authors": "Xiang Chen, Xiaojun Wan",
+ "published": "2023-10-25",
+ "updated": "2024-03-21",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2308.10149v2",
+ "title": "A Survey on Fairness in Large Language Models",
+ "abstract": "Large Language Models (LLMs) have shown powerful performance and development\nprospects and are widely deployed in the real world. However, LLMs can capture\nsocial biases from unprocessed training data and propagate the biases to\ndownstream tasks. Unfair LLM systems have undesirable social impacts and\npotential harms. In this paper, we provide a comprehensive review of related\nresearch on fairness in LLMs. Considering the influence of parameter magnitude\nand training paradigm on research strategy, we divide existing fairness\nresearch into oriented to medium-sized LLMs under pre-training and fine-tuning\nparadigms and oriented to large-sized LLMs under prompting paradigms. First,\nfor medium-sized LLMs, we introduce evaluation metrics and debiasing methods\nfrom the perspectives of intrinsic bias and extrinsic bias, respectively. Then,\nfor large-sized LLMs, we introduce recent fairness research, including fairness\nevaluation, reasons for bias, and debiasing methods. Finally, we discuss and\nprovide insight on the challenges and future directions for the development of\nfairness in LLMs.",
+ "authors": "Yingji Li, Mengnan Du, Rui Song, Xin Wang, Ying Wang",
+ "published": "2023-08-20",
+ "updated": "2024-02-21",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2309.08836v2",
+ "title": "Bias and Fairness in Chatbots: An Overview",
+ "abstract": "Chatbots have been studied for more than half a century. With the rapid\ndevelopment of natural language processing (NLP) technologies in recent years,\nchatbots using large language models (LLMs) have received much attention\nnowadays. Compared with traditional ones, modern chatbots are more powerful and\nhave been used in real-world applications. There are however, bias and fairness\nconcerns in modern chatbot design. Due to the huge amounts of training data,\nextremely large model sizes, and lack of interpretability, bias mitigation and\nfairness preservation of modern chatbots are challenging. Thus, a comprehensive\noverview on bias and fairness in chatbot systems is given in this paper. The\nhistory of chatbots and their categories are first reviewed. Then, bias sources\nand potential harms in applications are analyzed. Considerations in designing\nfair and unbiased chatbot systems are examined. Finally, future research\ndirections are discussed.",
+ "authors": "Jintang Xue, Yun-Cheng Wang, Chengwei Wei, Xiaofeng Liu, Jonghye Woo, C. -C. Jay Kuo",
+ "published": "2023-09-16",
+ "updated": "2023-12-10",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.02650v1",
+ "title": "Towards detecting unanticipated bias in Large Language Models",
+ "abstract": "Over the last year, Large Language Models (LLMs) like ChatGPT have become\nwidely available and have exhibited fairness issues similar to those in\nprevious machine learning systems. Current research is primarily focused on\nanalyzing and quantifying these biases in training data and their impact on the\ndecisions of these models, alongside developing mitigation strategies. This\nresearch largely targets well-known biases related to gender, race, ethnicity,\nand language. However, it is clear that LLMs are also affected by other, less\nobvious implicit biases. The complex and often opaque nature of these models\nmakes detecting such biases challenging, yet this is crucial due to their\npotential negative impact in various applications. In this paper, we explore\nnew avenues for detecting these unanticipated biases in LLMs, focusing\nspecifically on Uncertainty Quantification and Explainable AI methods. These\napproaches aim to assess the certainty of model decisions and to make the\ninternal decision-making processes of LLMs more transparent, thereby\nidentifying and understanding biases that are not immediately apparent. Through\nthis research, we aim to contribute to the development of fairer and more\ntransparent AI systems.",
+ "authors": "Anna Kruspe",
+ "published": "2024-04-03",
+ "updated": "2024-04-03",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.AI",
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2312.15398v1",
+ "title": "Fairness-Aware Structured Pruning in Transformers",
+ "abstract": "The increasing size of large language models (LLMs) has introduced challenges\nin their training and inference. Removing model components is perceived as a\nsolution to tackle the large model sizes, however, existing pruning methods\nsolely focus on performance, without considering an essential aspect for the\nresponsible use of LLMs: model fairness. It is crucial to address the fairness\nof LLMs towards diverse groups, such as women, Black people, LGBTQ+, Jewish\ncommunities, among others, as they are being deployed and available to a wide\naudience. In this work, first, we investigate how attention heads impact\nfairness and performance in pre-trained transformer-based language models. We\nthen propose a novel method to prune the attention heads that negatively impact\nfairness while retaining the heads critical for performance, i.e. language\nmodeling capabilities. Our approach is practical in terms of time and\nresources, as it does not require fine-tuning the final pruned, and fairer,\nmodel. Our findings demonstrate a reduction in gender bias by 19%, 19.5%,\n39.5%, 34.7%, 23%, and 8% for DistilGPT-2, GPT-2, GPT-Neo of two different\nsizes, GPT-J, and Llama 2 models, respectively, in comparison to the biased\nmodel, with only a slight decrease in performance.",
+ "authors": "Abdelrahman Zayed, Goncalo Mordido, Samira Shabanian, Ioana Baldini, Sarath Chandar",
+ "published": "2023-12-24",
+ "updated": "2023-12-24",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.CY",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2307.11761v1",
+ "title": "Fairness of ChatGPT and the Role Of Explainable-Guided Prompts",
+ "abstract": "Our research investigates the potential of Large-scale Language Models\n(LLMs), specifically OpenAI's GPT, in credit risk assessment-a binary\nclassification task. Our findings suggest that LLMs, when directed by\njudiciously designed prompts and supplemented with domain-specific knowledge,\ncan parallel the performance of traditional Machine Learning (ML) models.\nIntriguingly, they achieve this with significantly less data-40 times less,\nutilizing merely 20 data points compared to the ML's 800. LLMs particularly\nexcel in minimizing false positives and enhancing fairness, both being vital\naspects of risk analysis. While our results did not surpass those of classical\nML models, they underscore the potential of LLMs in analogous tasks, laying a\ngroundwork for future explorations into harnessing the capabilities of LLMs in\ndiverse ML tasks.",
+ "authors": "Yashar Deldjoo",
+ "published": "2023-07-14",
+ "updated": "2023-07-14",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.13862v2",
+ "title": "A Trip Towards Fairness: Bias and De-Biasing in Large Language Models",
+ "abstract": "Cheap-to-Build Very Large-Language Models (CtB-LLMs) with affordable training\nare emerging as the next big revolution in natural language processing and\nunderstanding. These CtB-LLMs are democratizing access to trainable Very\nLarge-Language Models (VLLMs) and, thus, may represent the building blocks of\nmany NLP systems solving downstream tasks. Hence, a little or a large bias in\nCtB-LLMs may cause huge harm. In this paper, we performed a large investigation\nof the bias of three families of CtB-LLMs, and we showed that debiasing\ntechniques are effective and usable. Indeed, according to current tests, the\nLLaMA and the OPT families have an important bias in gender, race, religion,\nand profession. In contrast to the analysis for other LLMs, we discovered that\nbias depends not on the number of parameters but on the perplexity. Finally,\nthe debiasing of OPT using LoRA reduces bias up to 4.12 points in the\nnormalized stereotype score.",
+ "authors": "Leonardo Ranaldi, Elena Sofia Ruzzetti, Davide Venditti, Dario Onorati, Fabio Massimo Zanzotto",
+ "published": "2023-05-23",
+ "updated": "2023-08-29",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2401.00625v2",
+ "title": "Beyond Efficiency: A Systematic Survey of Resource-Efficient Large Language Models",
+ "abstract": "The burgeoning field of Large Language Models (LLMs), exemplified by\nsophisticated models like OpenAI's ChatGPT, represents a significant\nadvancement in artificial intelligence. These models, however, bring forth\nsubstantial challenges in the high consumption of computational, memory,\nenergy, and financial resources, especially in environments with limited\nresource capabilities. This survey aims to systematically address these\nchallenges by reviewing a broad spectrum of techniques designed to enhance the\nresource efficiency of LLMs. We categorize methods based on their optimization\nfocus: computational, memory, energy, financial, and network resources and\ntheir applicability across various stages of an LLM's lifecycle, including\narchitecture design, pretraining, finetuning, and system design. Additionally,\nthe survey introduces a nuanced categorization of resource efficiency\ntechniques by their specific resource types, which uncovers the intricate\nrelationships and mappings between various resources and corresponding\noptimization techniques. A standardized set of evaluation metrics and datasets\nis also presented to facilitate consistent and fair comparisons across\ndifferent models and techniques. By offering a comprehensive overview of the\ncurrent sota and identifying open research avenues, this survey serves as a\nfoundational reference for researchers and practitioners, aiding them in\ndeveloping more sustainable and efficient LLMs in a rapidly evolving landscape.",
+ "authors": "Guangji Bai, Zheng Chai, Chen Ling, Shiyu Wang, Jiaying Lu, Nan Zhang, Tingwei Shi, Ziyang Yu, Mengdan Zhu, Yifei Zhang, Carl Yang, Yue Cheng, Liang Zhao",
+ "published": "2024-01-01",
+ "updated": "2024-01-04",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.18276v1",
+ "title": "Bias Neutralization Framework: Measuring Fairness in Large Language Models with Bias Intelligence Quotient (BiQ)",
+ "abstract": "The burgeoning influence of Large Language Models (LLMs) in shaping public\ndiscourse and decision-making underscores the imperative to address inherent\nbiases within these AI systems. In the wake of AI's expansive integration\nacross sectors, addressing racial bias in LLMs has never been more critical.\nThis paper introduces a novel framework called Comprehensive Bias\nNeutralization Framework (CBNF) which embodies an innovative approach to\nquantifying and mitigating biases within LLMs. Our framework combines the Large\nLanguage Model Bias Index (LLMBI) [Oketunji, A., Anas, M., Saina, D., (2023)]\nand Bias removaL with No Demographics (BLIND) [Orgad, H., Belinkov, Y. (2023)]\nmethodologies to create a new metric called Bias Intelligence Quotient\n(BiQ)which detects, measures, and mitigates racial bias in LLMs without\nreliance on demographic annotations.\n By introducing a new metric called BiQ that enhances LLMBI with additional\nfairness metrics, CBNF offers a multi-dimensional metric for bias assessment,\nunderscoring the necessity of a nuanced approach to fairness in AI [Mehrabi et\nal., 2021]. This paper presents a detailed analysis of Latimer AI (a language\nmodel incrementally trained on black history and culture) in comparison to\nChatGPT 3.5, illustrating Latimer AI's efficacy in detecting racial, cultural,\nand gender biases through targeted training and refined bias mitigation\nstrategies [Latimer & Bender, 2023].",
+ "authors": "Malur Narayan, John Pasmore, Elton Sampaio, Vijay Raghavan, Gabriella Waters",
+ "published": "2024-04-28",
+ "updated": "2024-04-28",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "D.1; I.2"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.12736v1",
+ "title": "Large Language Model Supply Chain: A Research Agenda",
+ "abstract": "The rapid advancements in pre-trained Large Language Models (LLMs) and Large\nMultimodal Models (LMMs) have ushered in a new era of intelligent applications,\ntransforming fields ranging from natural language processing to content\ngeneration. The LLM supply chain represents a crucial aspect of the\ncontemporary artificial intelligence landscape. It encompasses the entire\nlifecycle of pre-trained models, from its initial development and training to\nits final deployment and application in various domains. This paper presents a\ncomprehensive overview of the LLM supply chain, highlighting its three core\nelements: 1) the model infrastructure, encompassing datasets and toolchain for\ntraining, optimization, and deployment; 2) the model lifecycle, covering\ntraining, testing, releasing, and ongoing maintenance; and 3) the downstream\napplication ecosystem, enabling the integration of pre-trained models into a\nwide range of intelligent applications. However, this rapidly evolving field\nfaces numerous challenges across these key components, including data privacy\nand security, model interpretability and fairness, infrastructure scalability,\nand regulatory compliance. Addressing these challenges is essential for\nharnessing the full potential of LLMs and ensuring their ethical and\nresponsible use. This paper provides a future research agenda for the LLM\nsupply chain, aiming at driving the continued advancement and responsible\ndeployment of these transformative LLMs.",
+ "authors": "Shenao Wang, Yanjie Zhao, Xinyi Hou, Haoyu Wang",
+ "published": "2024-04-19",
+ "updated": "2024-04-19",
+ "primary_cat": "cs.SE",
+ "cats": [
+ "cs.SE"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2312.06056v1",
+ "title": "METAL: Metamorphic Testing Framework for Analyzing Large-Language Model Qualities",
+ "abstract": "Large-Language Models (LLMs) have shifted the paradigm of natural language\ndata processing. However, their black-boxed and probabilistic characteristics\ncan lead to potential risks in the quality of outputs in diverse LLM\napplications. Recent studies have tested Quality Attributes (QAs), such as\nrobustness or fairness, of LLMs by generating adversarial input texts. However,\nexisting studies have limited their coverage of QAs and tasks in LLMs and are\ndifficult to extend. Additionally, these studies have only used one evaluation\nmetric, Attack Success Rate (ASR), to assess the effectiveness of their\napproaches. We propose a MEtamorphic Testing for Analyzing LLMs (METAL)\nframework to address these issues by applying Metamorphic Testing (MT)\ntechniques. This approach facilitates the systematic testing of LLM qualities\nby defining Metamorphic Relations (MRs), which serve as modularized evaluation\nmetrics. The METAL framework can automatically generate hundreds of MRs from\ntemplates that cover various QAs and tasks. In addition, we introduced novel\nmetrics that integrate the ASR method into the semantic qualities of text to\nassess the effectiveness of MRs accurately. Through the experiments conducted\nwith three prominent LLMs, we have confirmed that the METAL framework\neffectively evaluates essential QAs on primary LLM tasks and reveals the\nquality risks in LLMs. Moreover, the newly proposed metrics can guide the\noptimal MRs for testing each task and suggest the most effective method for\ngenerating MRs.",
+ "authors": "Sangwon Hyun, Mingyu Guo, M. Ali Babar",
+ "published": "2023-12-11",
+ "updated": "2023-12-11",
+ "primary_cat": "cs.SE",
+ "cats": [
+ "cs.SE",
+ "cs.AI",
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2308.11483v1",
+ "title": "Large Language Models Sensitivity to The Order of Options in Multiple-Choice Questions",
+ "abstract": "Large Language Models (LLMs) have demonstrated remarkable capabilities in\nvarious NLP tasks. However, previous works have shown these models are\nsensitive towards prompt wording, and few-shot demonstrations and their order,\nposing challenges to fair assessment of these models. As these models become\nmore powerful, it becomes imperative to understand and address these\nlimitations. In this paper, we focus on LLMs robustness on the task of\nmultiple-choice questions -- commonly adopted task to study reasoning and\nfact-retrieving capability of LLMs. Investigating the sensitivity of LLMs\ntowards the order of options in multiple-choice questions, we demonstrate a\nconsiderable performance gap of approximately 13% to 75% in LLMs on different\nbenchmarks, when answer options are reordered, even when using demonstrations\nin a few-shot setting. Through a detailed analysis, we conjecture that this\nsensitivity arises when LLMs are uncertain about the prediction between the\ntop-2/3 choices, and specific options placements may favor certain prediction\nbetween those top choices depending on the question caused by positional bias.\nWe also identify patterns in top-2 choices that amplify or mitigate the model's\nbias toward option placement. We found that for amplifying bias, the optimal\nstrategy involves positioning the top two choices as the first and last\noptions. Conversely, to mitigate bias, we recommend placing these choices among\nthe adjacent options. To validate our conjecture, we conduct various\nexperiments and adopt two approaches to calibrate LLMs' predictions, leading to\nup to 8 percentage points improvement across different models and benchmarks.",
+ "authors": "Pouya Pezeshkpour, Estevam Hruschka",
+ "published": "2023-08-22",
+ "updated": "2023-08-22",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2206.13757v1",
+ "title": "Flexible text generation for counterfactual fairness probing",
+ "abstract": "A common approach for testing fairness issues in text-based classifiers is\nthrough the use of counterfactuals: does the classifier output change if a\nsensitive attribute in the input is changed? Existing counterfactual generation\nmethods typically rely on wordlists or templates, producing simple\ncounterfactuals that don't take into account grammar, context, or subtle\nsensitive attribute references, and could miss issues that the wordlist\ncreators had not considered. In this paper, we introduce a task for generating\ncounterfactuals that overcomes these shortcomings, and demonstrate how large\nlanguage models (LLMs) can be leveraged to make progress on this task. We show\nthat this LLM-based method can produce complex counterfactuals that existing\nmethods cannot, comparing the performance of various counterfactual generation\nmethods on the Civil Comments dataset and showing their value in evaluating a\ntoxicity classifier.",
+ "authors": "Zee Fryer, Vera Axelrod, Ben Packer, Alex Beutel, Jilin Chen, Kellie Webster",
+ "published": "2022-06-28",
+ "updated": "2022-06-28",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2401.15585v1",
+ "title": "Evaluating Gender Bias in Large Language Models via Chain-of-Thought Prompting",
+ "abstract": "There exist both scalable tasks, like reading comprehension and\nfact-checking, where model performance improves with model size, and unscalable\ntasks, like arithmetic reasoning and symbolic reasoning, where model\nperformance does not necessarily improve with model size. Large language models\n(LLMs) equipped with Chain-of-Thought (CoT) prompting are able to make accurate\nincremental predictions even on unscalable tasks. Unfortunately, despite their\nexceptional reasoning abilities, LLMs tend to internalize and reproduce\ndiscriminatory societal biases. Whether CoT can provide discriminatory or\negalitarian rationalizations for the implicit information in unscalable tasks\nremains an open question.\n In this study, we examine the impact of LLMs' step-by-step predictions on\ngender bias in unscalable tasks. For this purpose, we construct a benchmark for\nan unscalable task where the LLM is given a list of words comprising feminine,\nmasculine, and gendered occupational words, and is required to count the number\nof feminine and masculine words. In our CoT prompts, we require the LLM to\nexplicitly indicate whether each word in the word list is a feminine or\nmasculine before making the final predictions. With counting and handling the\nmeaning of words, this benchmark has characteristics of both arithmetic\nreasoning and symbolic reasoning. Experimental results in English show that\nwithout step-by-step prediction, most LLMs make socially biased predictions,\ndespite the task being as simple as counting words. Interestingly, CoT\nprompting reduces this unconscious social bias in LLMs and encourages fair\npredictions.",
+ "authors": "Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki, Timothy Baldwin",
+ "published": "2024-01-28",
+ "updated": "2024-01-28",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.06003v1",
+ "title": "FreeEval: A Modular Framework for Trustworthy and Efficient Evaluation of Large Language Models",
+ "abstract": "The rapid development of large language model (LLM) evaluation methodologies\nand datasets has led to a profound challenge: integrating state-of-the-art\nevaluation techniques cost-effectively while ensuring reliability,\nreproducibility, and efficiency. Currently, there is a notable absence of a\nunified and adaptable framework that seamlessly integrates various evaluation\napproaches. Moreover, the reliability of evaluation findings is often\nquestionable due to potential data contamination, with the evaluation\nefficiency commonly overlooked when facing the substantial costs associated\nwith LLM inference. In response to these challenges, we introduce FreeEval, a\nmodular and scalable framework crafted to enable trustworthy and efficient\nautomatic evaluations of LLMs. Firstly, FreeEval's unified abstractions\nsimplify the integration and improve the transparency of diverse evaluation\nmethodologies, encompassing dynamic evaluation that demand sophisticated LLM\ninteractions. Secondly, the framework integrates meta-evaluation techniques\nlike human evaluation and data contamination detection, which, along with\ndynamic evaluation modules in the platform, enhance the fairness of the\nevaluation outcomes. Lastly, FreeEval is designed with a high-performance\ninfrastructure, including distributed computation and caching strategies,\nenabling extensive evaluations across multi-node, multi-GPU clusters for\nopen-source and proprietary LLMs.",
+ "authors": "Zhuohao Yu, Chang Gao, Wenjin Yao, Yidong Wang, Zhengran Zeng, Wei Ye, Jindong Wang, Yue Zhang, Shikun Zhang",
+ "published": "2024-04-09",
+ "updated": "2024-04-09",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2401.01262v2",
+ "title": "Fairness Certification for Natural Language Processing and Large Language Models",
+ "abstract": "Natural Language Processing (NLP) plays an important role in our daily lives,\nparticularly due to the enormous progress of Large Language Models (LLM).\nHowever, NLP has many fairness-critical use cases, e.g., as an expert system in\nrecruitment or as an LLM-based tutor in education. Since NLP is based on human\nlanguage, potentially harmful biases can diffuse into NLP systems and produce\nunfair results, discriminate against minorities or generate legal issues.\nHence, it is important to develop a fairness certification for NLP approaches.\nWe follow a qualitative research approach towards a fairness certification for\nNLP. In particular, we have reviewed a large body of literature on algorithmic\nfairness, and we have conducted semi-structured expert interviews with a wide\nrange of experts from that area. We have systematically devised six fairness\ncriteria for NLP, which can be further refined into 18 sub-categories. Our\ncriteria offer a foundation for operationalizing and testing processes to\ncertify fairness, both from the perspective of the auditor and the audited\norganization.",
+ "authors": "Vincent Freiberger, Erik Buchmann",
+ "published": "2024-01-02",
+ "updated": "2024-01-03",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.CY",
+ "cs.LG",
+ "68T50",
+ "I.2.7"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.17916v2",
+ "title": "LLM-Resistant Math Word Problem Generation via Adversarial Attacks",
+ "abstract": "Large language models (LLMs) have significantly transformed the educational\nlandscape. As current plagiarism detection tools struggle to keep pace with\nLLMs' rapid advancements, the educational community faces the challenge of\nassessing students' true problem-solving abilities in the presence of LLMs. In\nthis work, we explore a new paradigm for ensuring fair evaluation -- generating\nadversarial examples which preserve the structure and difficulty of the\noriginal questions aimed for assessment, but are unsolvable by LLMs. Focusing\non the domain of math word problems, we leverage abstract syntax trees to\nstructurally generate adversarial examples that cause LLMs to produce incorrect\nanswers by simply editing the numeric values in the problems. We conduct\nexperiments on various open- and closed-source LLMs, quantitatively and\nqualitatively demonstrating that our method significantly degrades their math\nproblem-solving ability. We identify shared vulnerabilities among LLMs and\npropose a cost-effective approach to attack high-cost models. Additionally, we\nconduct automatic analysis on math problems and investigate the cause of\nfailure, offering a nuanced view into model's limitation.",
+ "authors": "Roy Xie, Chengxuan Huang, Junlin Wang, Bhuwan Dhingra",
+ "published": "2024-02-27",
+ "updated": "2024-03-30",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.11595v3",
+ "title": "Examining Inter-Consistency of Large Language Models Collaboration: An In-depth Analysis via Debate",
+ "abstract": "Large Language Models (LLMs) have shown impressive capabilities in various\napplications, but they still face various inconsistency issues. Existing works\nprimarily focus on the inconsistency issues within a single LLM, while we\ncomplementarily explore the inter-consistency among multiple LLMs for\ncollaboration. To examine whether LLMs can collaborate effectively to achieve a\nconsensus for a shared goal, we focus on commonsense reasoning, and introduce a\nformal debate framework (FORD) to conduct a three-stage debate among LLMs with\nreal-world scenarios alignment: fair debate, mismatched debate, and roundtable\ndebate. Through extensive experiments on various datasets, LLMs can effectively\ncollaborate to reach a consensus despite noticeable inter-inconsistencies, but\nimbalances in their abilities can lead to domination by superior LLMs.\nLeveraging a more advanced LLM like GPT-4 as an authoritative judge can boost\ncollaboration performance. Our work contributes to understanding the\ninter-consistency among LLMs and lays the foundation for developing future\ncollaboration methods. Codes and data are available at\nhttps://github.com/Waste-Wood/FORD",
+ "authors": "Kai Xiong, Xiao Ding, Yixin Cao, Ting Liu, Bing Qin",
+ "published": "2023-05-19",
+ "updated": "2023-10-18",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2405.02219v1",
+ "title": "FairEvalLLM. A Comprehensive Framework for Benchmarking Fairness in Large Language Model Recommender Systems",
+ "abstract": "This paper presents a framework for evaluating fairness in recommender\nsystems powered by Large Language Models (RecLLMs), addressing the need for a\nunified approach that spans various fairness dimensions including sensitivity\nto user attributes, intrinsic fairness, and discussions of fairness based on\nunderlying benefits. In addition, our framework introduces counterfactual\nevaluations and integrates diverse user group considerations to enhance the\ndiscourse on fairness evaluation for RecLLMs.\n Our key contributions include the development of a robust framework for\nfairness evaluation in LLM-based recommendations and a structured method to\ncreate \\textit{informative user profiles} from demographic data, historical\nuser preferences, and recent interactions. We argue that the latter is\nessential for enhancing personalization in such systems, especially in\ntemporal-driven scenarios. We demonstrate the utility of our framework through\npractical applications on two datasets, LastFM-1K and ML-1M. We conduct\nexperiments on a subsample of 80 users from each dataset, testing and assessing\nthe effectiveness of various prompt construction scenarios and in-context\nlearning, comprising more than 50 scenarios. This results in more than 4000\nrecommendations (80 * 50 = 4000). Our study reveals that while there are no\nsignificant unfairness issues in scenarios involving sensitive attributes, some\nconcerns remain. However, in terms of intrinsic fairness, which does not\ninvolve direct sensitivity, unfairness across demographic groups remains\nsignificant. The code and data used for this paper are available at:\n\\url{https://shorturl.at/awBFM}.",
+ "authors": "Yashar Deldjoo",
+ "published": "2024-05-03",
+ "updated": "2024-05-03",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.14473v1",
+ "title": "The Ethics of ChatGPT in Medicine and Healthcare: A Systematic Review on Large Language Models (LLMs)",
+ "abstract": "With the introduction of ChatGPT, Large Language Models (LLMs) have received\nenormous attention in healthcare. Despite their potential benefits, researchers\nhave underscored various ethical implications. While individual instances have\ndrawn much attention, the debate lacks a systematic overview of practical\napplications currently researched and ethical issues connected to them. Against\nthis background, this work aims to map the ethical landscape surrounding the\ncurrent stage of deployment of LLMs in medicine and healthcare. Electronic\ndatabases and preprint servers were queried using a comprehensive search\nstrategy. Studies were screened and extracted following a modified rapid review\napproach. Methodological quality was assessed using a hybrid approach. For 53\nrecords, a meta-aggregative synthesis was performed. Four fields of\napplications emerged and testify to a vivid exploration phase. Advantages of\nusing LLMs are attributed to their capacity in data analysis, personalized\ninformation provisioning, support in decision-making, mitigating information\nloss and enhancing information accessibility. However, we also identifies\nrecurrent ethical concerns connected to fairness, bias, non-maleficence,\ntransparency, and privacy. A distinctive concern is the tendency to produce\nharmful misinformation or convincingly but inaccurate content. A recurrent plea\nfor ethical guidance and human oversight is evident. Given the variety of use\ncases, it is suggested that the ethical guidance debate be reframed to focus on\ndefining what constitutes acceptable human oversight across the spectrum of\napplications. This involves considering diverse settings, varying potentials\nfor harm, and different acceptable thresholds for performance and certainty in\nhealthcare. In addition, a critical inquiry is necessary to determine the\nextent to which the current experimental use of LLMs is necessary and\njustified.",
+ "authors": "Joschka Haltaufderheide, Robert Ranisch",
+ "published": "2024-03-21",
+ "updated": "2024-03-21",
+ "primary_cat": "cs.CY",
+ "cats": [
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.02294v1",
+ "title": "LLMs grasp morality in concept",
+ "abstract": "Work in AI ethics and fairness has made much progress in regulating LLMs to\nreflect certain values, such as fairness, truth, and diversity. However, it has\ntaken the problem of how LLMs might 'mean' anything at all for granted. Without\naddressing this, it is not clear what imbuing LLMs with such values even means.\nIn response, we provide a general theory of meaning that extends beyond humans.\nWe use this theory to explicate the precise nature of LLMs as meaning-agents.\nWe suggest that the LLM, by virtue of its position as a meaning-agent, already\ngrasps the constructions of human society (e.g. morality, gender, and race) in\nconcept. Consequently, under certain ethical frameworks, currently popular\nmethods for model alignment are limited at best and counterproductive at worst.\nMoreover, unaligned models may help us better develop our moral and social\nphilosophy.",
+ "authors": "Mark Pock, Andre Ye, Jared Moore",
+ "published": "2023-11-04",
+ "updated": "2023-11-04",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2309.11653v2",
+ "title": "\"It's a Fair Game\", or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents",
+ "abstract": "The widespread use of Large Language Model (LLM)-based conversational agents\n(CAs), especially in high-stakes domains, raises many privacy concerns.\nBuilding ethical LLM-based CAs that respect user privacy requires an in-depth\nunderstanding of the privacy risks that concern users the most. However,\nexisting research, primarily model-centered, does not provide insight into\nusers' perspectives. To bridge this gap, we analyzed sensitive disclosures in\nreal-world ChatGPT conversations and conducted semi-structured interviews with\n19 LLM-based CA users. We found that users are constantly faced with trade-offs\nbetween privacy, utility, and convenience when using LLM-based CAs. However,\nusers' erroneous mental models and the dark patterns in system design limited\ntheir awareness and comprehension of the privacy risks. Additionally, the\nhuman-like interactions encouraged more sensitive disclosures, which\ncomplicated users' ability to navigate the trade-offs. We discuss practical\ndesign guidelines and the needs for paradigm shifts to protect the privacy of\nLLM-based CA users.",
+ "authors": "Zhiping Zhang, Michelle Jia, Hao-Ping Lee, Bingsheng Yao, Sauvik Das, Ada Lerner, Dakuo Wang, Tianshi Li",
+ "published": "2023-09-20",
+ "updated": "2024-04-02",
+ "primary_cat": "cs.HC",
+ "cats": [
+ "cs.HC",
+ "cs.AI",
+ "cs.CR"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.08656v1",
+ "title": "Linear Cross-document Event Coreference Resolution with X-AMR",
+ "abstract": "Event Coreference Resolution (ECR) as a pairwise mention classification task\nis expensive both for automated systems and manual annotations. The task's\nquadratic difficulty is exacerbated when using Large Language Models (LLMs),\nmaking prompt engineering for ECR prohibitively costly. In this work, we\npropose a graphical representation of events, X-AMR, anchored around individual\nmentions using a \\textbf{cross}-document version of \\textbf{A}bstract\n\\textbf{M}eaning \\textbf{R}epresentation. We then linearize the ECR with a\nnovel multi-hop coreference algorithm over the event graphs. The event graphs\nsimplify ECR, making it a) LLM cost-effective, b) compositional and\ninterpretable, and c) easily annotated. For a fair assessment, we first enrich\nan existing ECR benchmark dataset with these event graphs using an\nannotator-friendly tool we introduce. Then, we employ GPT-4, the newest LLM by\nOpenAI, for these annotations. Finally, using the ECR algorithm, we assess\nGPT-4 against humans and analyze its limitations. Through this research, we aim\nto advance the state-of-the-art for efficient ECR and shed light on the\npotential shortcomings of current LLMs at this task. Code and annotations:\n\\url{https://github.com/ahmeshaf/gpt_coref}",
+ "authors": "Shafiuddin Rehan Ahmed, George Arthur Baker, Evi Judge, Michael Regan, Kristin Wright-Bettner, Martha Palmer, James H. Martin",
+ "published": "2024-03-25",
+ "updated": "2024-03-25",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.03033v1",
+ "title": "Beyond Words: A Mathematical Framework for Interpreting Large Language Models",
+ "abstract": "Large language models (LLMs) are powerful AI tools that can generate and\ncomprehend natural language text and other complex information. However, the\nfield lacks a mathematical framework to systematically describe, compare and\nimprove LLMs. We propose Hex a framework that clarifies key terms and concepts\nin LLM research, such as hallucinations, alignment, self-verification and\nchain-of-thought reasoning. The Hex framework offers a precise and consistent\nway to characterize LLMs, identify their strengths and weaknesses, and\nintegrate new findings. Using Hex, we differentiate chain-of-thought reasoning\nfrom chain-of-thought prompting and establish the conditions under which they\nare equivalent. This distinction clarifies the basic assumptions behind\nchain-of-thought prompting and its implications for methods that use it, such\nas self-verification and prompt programming.\n Our goal is to provide a formal framework for LLMs that can help both\nresearchers and practitioners explore new possibilities for generative AI. We\ndo not claim to have a definitive solution, but rather a tool for opening up\nnew research avenues. We argue that our formal definitions and results are\ncrucial for advancing the discussion on how to build generative AI systems that\nare safe, reliable, fair and robust, especially in domains like healthcare and\nsoftware engineering.",
+ "authors": "Javier Gonz\u00e1lez, Aditya V. Nori",
+ "published": "2023-11-06",
+ "updated": "2023-11-06",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.18130v2",
+ "title": "DELPHI: Data for Evaluating LLMs' Performance in Handling Controversial Issues",
+ "abstract": "Controversy is a reflection of our zeitgeist, and an important aspect to any\ndiscourse. The rise of large language models (LLMs) as conversational systems\nhas increased public reliance on these systems for answers to their various\nquestions. Consequently, it is crucial to systematically examine how these\nmodels respond to questions that pertaining to ongoing debates. However, few\nsuch datasets exist in providing human-annotated labels reflecting the\ncontemporary discussions. To foster research in this area, we propose a novel\nconstruction of a controversial questions dataset, expanding upon the publicly\nreleased Quora Question Pairs Dataset. This dataset presents challenges\nconcerning knowledge recency, safety, fairness, and bias. We evaluate different\nLLMs using a subset of this dataset, illuminating how they handle controversial\nissues and the stances they adopt. This research ultimately contributes to our\nunderstanding of LLMs' interaction with controversial issues, paving the way\nfor improvements in their comprehension and handling of complex societal\ndebates.",
+ "authors": "David Q. Sun, Artem Abzaliev, Hadas Kotek, Zidi Xiu, Christopher Klein, Jason D. Williams",
+ "published": "2023-10-27",
+ "updated": "2023-11-07",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.HC"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.07609v3",
+ "title": "Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation",
+ "abstract": "The remarkable achievements of Large Language Models (LLMs) have led to the\nemergence of a novel recommendation paradigm -- Recommendation via LLM\n(RecLLM). Nevertheless, it is important to note that LLMs may contain social\nprejudices, and therefore, the fairness of recommendations made by RecLLM\nrequires further investigation. To avoid the potential risks of RecLLM, it is\nimperative to evaluate the fairness of RecLLM with respect to various sensitive\nattributes on the user side. Due to the differences between the RecLLM paradigm\nand the traditional recommendation paradigm, it is problematic to directly use\nthe fairness benchmark of traditional recommendation. To address the dilemma,\nwe propose a novel benchmark called Fairness of Recommendation via LLM\n(FaiRLLM). This benchmark comprises carefully crafted metrics and a dataset\nthat accounts for eight sensitive attributes1 in two recommendation scenarios:\nmusic and movies. By utilizing our FaiRLLM benchmark, we conducted an\nevaluation of ChatGPT and discovered that it still exhibits unfairness to some\nsensitive attributes when generating recommendations. Our code and dataset can\nbe found at https://github.com/jizhi-zhang/FaiRLLM.",
+ "authors": "Jizhi Zhang, Keqin Bao, Yang Zhang, Wenjie Wang, Fuli Feng, Xiangnan He",
+ "published": "2023-05-12",
+ "updated": "2023-10-17",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR",
+ "cs.CL",
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.09606v1",
+ "title": "Large Language Models and Causal Inference in Collaboration: A Comprehensive Survey",
+ "abstract": "Causal inference has shown potential in enhancing the predictive accuracy,\nfairness, robustness, and explainability of Natural Language Processing (NLP)\nmodels by capturing causal relationships among variables. The emergence of\ngenerative Large Language Models (LLMs) has significantly impacted various NLP\ndomains, particularly through their advanced reasoning capabilities. This\nsurvey focuses on evaluating and improving LLMs from a causal view in the\nfollowing areas: understanding and improving the LLMs' reasoning capacity,\naddressing fairness and safety issues in LLMs, complementing LLMs with\nexplanations, and handling multimodality. Meanwhile, LLMs' strong reasoning\ncapacities can in turn contribute to the field of causal inference by aiding\ncausal relationship discovery and causal effect estimations. This review\nexplores the interplay between causal inference frameworks and LLMs from both\nperspectives, emphasizing their collective potential to further the development\nof more advanced and equitable artificial intelligence systems.",
+ "authors": "Xiaoyu Liu, Paiheng Xu, Junda Wu, Jiaxin Yuan, Yifan Yang, Yuhang Zhou, Fuxiao Liu, Tianrui Guan, Haoliang Wang, Tong Yu, Julian McAuley, Wei Ai, Furong Huang",
+ "published": "2024-03-14",
+ "updated": "2024-03-14",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2308.05345v3",
+ "title": "RTLLM: An Open-Source Benchmark for Design RTL Generation with Large Language Model",
+ "abstract": "Inspired by the recent success of large language models (LLMs) like ChatGPT,\nresearchers start to explore the adoption of LLMs for agile hardware design,\nsuch as generating design RTL based on natural-language instructions. However,\nin existing works, their target designs are all relatively simple and in a\nsmall scale, and proposed by the authors themselves, making a fair comparison\namong different LLM solutions challenging. In addition, many prior works only\nfocus on the design correctness, without evaluating the design qualities of\ngenerated design RTL. In this work, we propose an open-source benchmark named\nRTLLM, for generating design RTL with natural language instructions. To\nsystematically evaluate the auto-generated design RTL, we summarized three\nprogressive goals, named syntax goal, functionality goal, and design quality\ngoal. This benchmark can automatically provide a quantitative evaluation of any\ngiven LLM-based solution. Furthermore, we propose an easy-to-use yet\nsurprisingly effective prompt engineering technique named self-planning, which\nproves to significantly boost the performance of GPT-3.5 in our proposed\nbenchmark.",
+ "authors": "Yao Lu, Shang Liu, Qijun Zhang, Zhiyao Xie",
+ "published": "2023-08-10",
+ "updated": "2023-11-11",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.AR"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.02839v1",
+ "title": "An Empirical Study of LLM-as-a-Judge for LLM Evaluation: Fine-tuned Judge Models are Task-specific Classifiers",
+ "abstract": "Recently, there has been a growing trend of utilizing Large Language Model\n(LLM) to evaluate the quality of other LLMs. Many studies have employed\nproprietary close-source models, especially GPT4, as the evaluator.\nAlternatively, other works have fine-tuned judge models based on open-source\nLLMs as the evaluator. In this study, we conduct an empirical study of\ndifferent judge models on their evaluation capability. Our findings indicate\nthat although the fine-tuned judge models achieve high accuracy on in-domain\ntest sets, even surpassing GPT4, they are inherently task-specific classifiers,\nand their generalizability and fairness severely underperform GPT4.",
+ "authors": "Hui Huang, Yingqi Qu, Jing Liu, Muyun Yang, Tiejun Zhao",
+ "published": "2024-03-05",
+ "updated": "2024-03-05",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.15007v1",
+ "title": "Did the Neurons Read your Book? Document-level Membership Inference for Large Language Models",
+ "abstract": "With large language models (LLMs) poised to become embedded in our daily\nlives, questions are starting to be raised about the dataset(s) they learned\nfrom. These questions range from potential bias or misinformation LLMs could\nretain from their training data to questions of copyright and fair use of\nhuman-generated text. However, while these questions emerge, developers of the\nrecent state-of-the-art LLMs become increasingly reluctant to disclose details\non their training corpus. We here introduce the task of document-level\nmembership inference for real-world LLMs, i.e. inferring whether the LLM has\nseen a given document during training or not. First, we propose a procedure for\nthe development and evaluation of document-level membership inference for LLMs\nby leveraging commonly used data sources for training and the model release\ndate. We then propose a practical, black-box method to predict document-level\nmembership and instantiate it on OpenLLaMA-7B with both books and academic\npapers. We show our methodology to perform very well, reaching an impressive\nAUC of 0.856 for books and 0.678 for papers. We then show our approach to\noutperform the sentence-level membership inference attacks used in the privacy\nliterature for the document-level membership task. We finally evaluate whether\nsmaller models might be less sensitive to document-level inference and show\nOpenLLaMA-3B to be approximately as sensitive as OpenLLaMA-7B to our approach.\nTaken together, our results show that accurate document-level membership can be\ninferred for LLMs, increasing the transparency of technology poised to change\nour lives.",
+ "authors": "Matthieu Meeus, Shubham Jain, Marek Rei, Yves-Alexandre de Montjoye",
+ "published": "2023-10-23",
+ "updated": "2023-10-23",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.CR",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.03192v1",
+ "title": "Do Large Language Models Rank Fairly? An Empirical Study on the Fairness of LLMs as Rankers",
+ "abstract": "The integration of Large Language Models (LLMs) in information retrieval has\nraised a critical reevaluation of fairness in the text-ranking models. LLMs,\nsuch as GPT models and Llama2, have shown effectiveness in natural language\nunderstanding tasks, and prior works (e.g., RankGPT) have also demonstrated\nthat the LLMs exhibit better performance than the traditional ranking models in\nthe ranking task. However, their fairness remains largely unexplored. This\npaper presents an empirical study evaluating these LLMs using the TREC Fair\nRanking dataset, focusing on the representation of binary protected attributes\nsuch as gender and geographic location, which are historically underrepresented\nin search outcomes. Our analysis delves into how these LLMs handle queries and\ndocuments related to these attributes, aiming to uncover biases in their\nranking algorithms. We assess fairness from both user and content perspectives,\ncontributing an empirical benchmark for evaluating LLMs as the fair ranker.",
+ "authors": "Yuan Wang, Xuyang Wu, Hsin-Tai Wu, Zhiqiang Tao, Yi Fang",
+ "published": "2024-04-04",
+ "updated": "2024-04-04",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR",
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.11406v2",
+ "title": "Don't Go To Extremes: Revealing the Excessive Sensitivity and Calibration Limitations of LLMs in Implicit Hate Speech Detection",
+ "abstract": "The fairness and trustworthiness of Large Language Models (LLMs) are\nreceiving increasing attention. Implicit hate speech, which employs indirect\nlanguage to convey hateful intentions, occupies a significant portion of\npractice. However, the extent to which LLMs effectively address this issue\nremains insufficiently examined. This paper delves into the capability of LLMs\nto detect implicit hate speech (Classification Task) and express confidence in\ntheir responses (Calibration Task). Our evaluation meticulously considers\nvarious prompt patterns and mainstream uncertainty estimation methods. Our\nfindings highlight that LLMs exhibit two extremes: (1) LLMs display excessive\nsensitivity towards groups or topics that may cause fairness issues, resulting\nin misclassifying benign statements as hate speech. (2) LLMs' confidence scores\nfor each method excessively concentrate on a fixed range, remaining unchanged\nregardless of the dataset's complexity. Consequently, the calibration\nperformance is heavily reliant on primary classification accuracy. These\ndiscoveries unveil new limitations of LLMs, underscoring the need for caution\nwhen optimizing models to ensure they do not veer towards extremes. This serves\nas a reminder to carefully consider sensitivity and confidence in the pursuit\nof model fairness.",
+ "authors": "Min Zhang, Jianfeng He, Taoran Ji, Chang-Tien Lu",
+ "published": "2024-02-18",
+ "updated": "2024-02-26",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.15451v1",
+ "title": "Towards Enabling FAIR Dataspaces Using Large Language Models",
+ "abstract": "Dataspaces have recently gained adoption across various sectors, including\ntraditionally less digitized domains such as culture. Leveraging Semantic Web\ntechnologies helps to make dataspaces FAIR, but their complexity poses a\nsignificant challenge to the adoption of dataspaces and increases their cost.\nThe advent of Large Language Models (LLMs) raises the question of how these\nmodels can support the adoption of FAIR dataspaces. In this work, we\ndemonstrate the potential of LLMs in dataspaces with a concrete example. We\nalso derive a research agenda for exploring this emerging field.",
+ "authors": "Benedikt T. Arnold, Johannes Theissen-Lipp, Diego Collarana, Christoph Lange, Sandra Geisler, Edward Curry, Stefan Decker",
+ "published": "2024-03-18",
+ "updated": "2024-03-18",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.09219v5",
+ "title": "\"Kelly is a Warm Person, Joseph is a Role Model\": Gender Biases in LLM-Generated Reference Letters",
+ "abstract": "Large Language Models (LLMs) have recently emerged as an effective tool to\nassist individuals in writing various types of content, including professional\ndocuments such as recommendation letters. Though bringing convenience, this\napplication also introduces unprecedented fairness concerns. Model-generated\nreference letters might be directly used by users in professional scenarios. If\nunderlying biases exist in these model-constructed letters, using them without\nscrutinization could lead to direct societal harms, such as sabotaging\napplication success rates for female applicants. In light of this pressing\nissue, it is imminent and necessary to comprehensively study fairness issues\nand associated harms in this real-world use case. In this paper, we critically\nexamine gender biases in LLM-generated reference letters. Drawing inspiration\nfrom social science findings, we design evaluation methods to manifest biases\nthrough 2 dimensions: (1) biases in language style and (2) biases in lexical\ncontent. We further investigate the extent of bias propagation by analyzing the\nhallucination bias of models, a term that we define to be bias exacerbation in\nmodel-hallucinated contents. Through benchmarking evaluation on 2 popular LLMs-\nChatGPT and Alpaca, we reveal significant gender biases in LLM-generated\nrecommendation letters. Our findings not only warn against using LLMs for this\napplication without scrutinization, but also illuminate the importance of\nthoroughly studying hidden biases and harms in LLM-generated professional\ndocuments.",
+ "authors": "Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, Nanyun Peng",
+ "published": "2023-10-13",
+ "updated": "2023-12-01",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.04205v2",
+ "title": "Rephrase and Respond: Let Large Language Models Ask Better Questions for Themselves",
+ "abstract": "Misunderstandings arise not only in interpersonal communication but also\nbetween humans and Large Language Models (LLMs). Such discrepancies can make\nLLMs interpret seemingly unambiguous questions in unexpected ways, yielding\nincorrect responses. While it is widely acknowledged that the quality of a\nprompt, such as a question, significantly impacts the quality of the response\nprovided by LLMs, a systematic method for crafting questions that LLMs can\nbetter comprehend is still underdeveloped. In this paper, we present a method\nnamed `Rephrase and Respond' (RaR), which allows LLMs to rephrase and expand\nquestions posed by humans and provide responses in a single prompt. This\napproach serves as a simple yet effective prompting method for improving\nperformance. We also introduce a two-step variant of RaR, where a rephrasing\nLLM first rephrases the question and then passes the original and rephrased\nquestions together to a different responding LLM. This facilitates the\neffective utilization of rephrased questions generated by one LLM with another.\nOur experiments demonstrate that our methods significantly improve the\nperformance of different models across a wide range to tasks. We further\nprovide a comprehensive comparison between RaR and the popular Chain-of-Thought\n(CoT) methods, both theoretically and empirically. We show that RaR is\ncomplementary to CoT and can be combined with CoT to achieve even better\nperformance. Our work not only contributes to enhancing LLM performance\nefficiently and effectively but also sheds light on a fair evaluation of LLM\ncapabilities. Data and codes are available at\nhttps://github.com/uclaml/Rephrase-and-Respond.",
+ "authors": "Yihe Deng, Weitong Zhang, Zixiang Chen, Quanquan Gu",
+ "published": "2023-11-07",
+ "updated": "2024-04-18",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.13840v1",
+ "title": "Whose Side Are You On? Investigating the Political Stance of Large Language Models",
+ "abstract": "Large Language Models (LLMs) have gained significant popularity for their\napplication in various everyday tasks such as text generation, summarization,\nand information retrieval. As the widespread adoption of LLMs continues to\nsurge, it becomes increasingly crucial to ensure that these models yield\nresponses that are politically impartial, with the aim of preventing\ninformation bubbles, upholding fairness in representation, and mitigating\nconfirmation bias. In this paper, we propose a quantitative framework and\npipeline designed to systematically investigate the political orientation of\nLLMs. Our investigation delves into the political alignment of LLMs across a\nspectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.\nAcross topics, the results indicate that LLMs exhibit a tendency to provide\nresponses that closely align with liberal or left-leaning perspectives rather\nthan conservative or right-leaning ones when user queries include details\npertaining to occupation, race, or political affiliation. The findings\npresented in this study not only reaffirm earlier observations regarding the\nleft-leaning characteristics of LLMs but also surface particular attributes,\nsuch as occupation, that are particularly susceptible to such inclinations even\nwhen directly steered towards conservatism. As a recommendation to avoid these\nmodels providing politicised responses, users should be mindful when crafting\nqueries, and exercise caution in selecting neutral prompt language.",
+ "authors": "Pagnarasmey Pit, Xingjun Ma, Mike Conway, Qingyu Chen, James Bailey, Henry Pit, Putrasmey Keo, Watey Diep, Yu-Gang Jiang",
+ "published": "2024-03-15",
+ "updated": "2024-03-15",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.SI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.13095v1",
+ "title": "Enhancing Logical Reasoning in Large Language Models to Facilitate Legal Applications",
+ "abstract": "Language serves as a vehicle for conveying thought, enabling communication\namong individuals. The ability to distinguish between diverse concepts,\nidentify fairness and injustice, and comprehend a range of legal notions\nfundamentally relies on logical reasoning. Large Language Models (LLMs) attempt\nto emulate human language understanding and generation, but their competency in\nlogical reasoning remains limited. This paper seeks to address the\nphilosophical question: How can we effectively teach logical reasoning to LLMs\nwhile maintaining a deep understanding of the intricate relationship between\nlanguage and logic? By focusing on bolstering LLMs' capabilities in logical\nreasoning, we aim to expand their applicability in law and other\nlogic-intensive disciplines. To this end, we propose a Reinforcement Learning\nfrom Logical Feedback (RLLF) approach, which serves as a potential framework\nfor refining LLMs' reasoning capacities. Through RLLF and a revised evaluation\nmethodology, we explore new avenues for research in this domain and contribute\nto the development of LLMs capable of handling complex legal reasoning tasks\nwhile acknowledging the fundamental connection between language and logic.",
+ "authors": "Ha-Thanh Nguyen, Wachara Fungwacharakorn, Ken Satoh",
+ "published": "2023-11-22",
+ "updated": "2023-11-22",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2405.01769v1",
+ "title": "A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law",
+ "abstract": "In the fast-evolving domain of artificial intelligence, large language models\n(LLMs) such as GPT-3 and GPT-4 are revolutionizing the landscapes of finance,\nhealthcare, and law: domains characterized by their reliance on professional\nexpertise, challenging data acquisition, high-stakes, and stringent regulatory\ncompliance. This survey offers a detailed exploration of the methodologies,\napplications, challenges, and forward-looking opportunities of LLMs within\nthese high-stakes sectors. We highlight the instrumental role of LLMs in\nenhancing diagnostic and treatment methodologies in healthcare, innovating\nfinancial analytics, and refining legal interpretation and compliance\nstrategies. Moreover, we critically examine the ethics for LLM applications in\nthese fields, pointing out the existing ethical concerns and the need for\ntransparent, fair, and robust AI systems that respect regulatory norms. By\npresenting a thorough review of current literature and practical applications,\nwe showcase the transformative impact of LLMs, and outline the imperative for\ninterdisciplinary cooperation, methodological advancements, and ethical\nvigilance. Through this lens, we aim to spark dialogue and inspire future\nresearch dedicated to maximizing the benefits of LLMs while mitigating their\nrisks in these precision-dependent sectors. To facilitate future research on\nLLMs in these critical societal domains, we also initiate a reading list that\ntracks the latest advancements under this topic, which will be continually\nupdated: \\url{https://github.com/czyssrs/LLM_X_papers}.",
+ "authors": "Zhiyu Zoey Chen, Jing Ma, Xinlu Zhang, Nan Hao, An Yan, Armineh Nourbakhsh, Xianjun Yang, Julian McAuley, Linda Petzold, William Yang Wang",
+ "published": "2024-05-02",
+ "updated": "2024-05-02",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2401.08495v2",
+ "title": "Large Language Models Portray Socially Subordinate Groups as More Homogeneous, Consistent with a Bias Observed in Humans",
+ "abstract": "Large language models (LLMs) are becoming pervasive in everyday life, yet\ntheir propensity to reproduce biases inherited from training data remains a\npressing concern. Prior investigations into bias in LLMs have focused on the\nassociation of social groups with stereotypical attributes. However, this is\nonly one form of human bias such systems may reproduce. We investigate a new\nform of bias in LLMs that resembles a social psychological phenomenon where\nsocially subordinate groups are perceived as more homogeneous than socially\ndominant groups. We had ChatGPT, a state-of-the-art LLM, generate texts about\nintersectional group identities and compared those texts on measures of\nhomogeneity. We consistently found that ChatGPT portrayed African, Asian, and\nHispanic Americans as more homogeneous than White Americans, indicating that\nthe model described racial minority groups with a narrower range of human\nexperience. ChatGPT also portrayed women as more homogeneous than men, but\nthese differences were small. Finally, we found that the effect of gender\ndiffered across racial/ethnic groups such that the effect of gender was\nconsistent within African and Hispanic Americans but not within Asian and White\nAmericans. We argue that the tendency of LLMs to describe groups as less\ndiverse risks perpetuating stereotypes and discriminatory behavior.",
+ "authors": "Messi H. J. Lee, Jacob M. Montgomery, Calvin K. Lai",
+ "published": "2024-01-16",
+ "updated": "2024-04-26",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.10199v3",
+ "title": "CULTURE-GEN: Revealing Global Cultural Perception in Language Models through Natural Language Prompting",
+ "abstract": "As the utilization of large language models (LLMs) has proliferated\nworldwide, it is crucial for them to have adequate knowledge and fair\nrepresentation for diverse global cultures. In this work, we uncover culture\nperceptions of three SOTA models on 110 countries and regions on 8\nculture-related topics through culture-conditioned generations, and extract\nsymbols from these generations that are associated to each culture by the LLM.\nWe discover that culture-conditioned generation consist of linguistic \"markers\"\nthat distinguish marginalized cultures apart from default cultures. We also\ndiscover that LLMs have an uneven degree of diversity in the culture symbols,\nand that cultures from different geographic regions have different presence in\nLLMs' culture-agnostic generation. Our findings promote further research in\nstudying the knowledge and fairness of global culture perception in LLMs. Code\nand Data can be found in: https://github.com/huihanlhh/Culture-Gen/",
+ "authors": "Huihan Li, Liwei Jiang, Jena D. Huang, Hyunwoo Kim, Sebastin Santy, Taylor Sorensen, Bill Yuchen Lin, Nouha Dziri, Xiang Ren, Yejin Choi",
+ "published": "2024-04-16",
+ "updated": "2024-04-26",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.06500v1",
+ "title": "MetaAgents: Simulating Interactions of Human Behaviors for LLM-based Task-oriented Coordination via Collaborative Generative Agents",
+ "abstract": "Significant advancements have occurred in the application of Large Language\nModels (LLMs) for various tasks and social simulations. Despite this, their\ncapacities to coordinate within task-oriented social contexts are\nunder-explored. Such capabilities are crucial if LLMs are to effectively mimic\nhuman-like social behavior and produce meaningful results. To bridge this gap,\nwe introduce collaborative generative agents, endowing LLM-based Agents with\nconsistent behavior patterns and task-solving abilities. We situate these\nagents in a simulated job fair environment as a case study to scrutinize their\ncoordination skills. We propose a novel framework that equips collaborative\ngenerative agents with human-like reasoning abilities and specialized skills.\nOur evaluation demonstrates that these agents show promising performance.\nHowever, we also uncover limitations that hinder their effectiveness in more\ncomplex coordination tasks. Our work provides valuable insights into the role\nand evolution of LLMs in task-oriented social simulations.",
+ "authors": "Yuan Li, Yixuan Zhang, Lichao Sun",
+ "published": "2023-10-10",
+ "updated": "2023-10-10",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.04814v2",
+ "title": "Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks",
+ "abstract": "We introduce Syntax-Aware Fill-In-the-Middle (SAFIM), a new benchmark for\nevaluating Large Language Models (LLMs) on the code Fill-in-the-Middle (FIM)\ntask. This benchmark focuses on syntax-aware completions of program structures\nsuch as code blocks and conditional expressions, and includes 17,720 examples\nfrom multiple programming languages, sourced from recent code submissions after\nApril 2022 to minimize data contamination. SAFIM provides a robust framework\nwith various prompt designs and novel syntax-aware post-processing techniques,\nfacilitating accurate and fair comparisons across LLMs. Our comprehensive\nevaluation of 15 LLMs shows that FIM pretraining not only enhances FIM\nproficiency but also improves Left-to-Right (L2R) inference using LLMs. Our\nfindings challenge conventional beliefs and suggest that pretraining methods\nand data quality have more impact than model size. SAFIM thus serves as a\nfoundational platform for future research in effective pretraining strategies\nfor code LLMs. The evaluation toolkit and dataset are available at\nhttps://github.com/gonglinyuan/safim, and the leaderboard is available at\nhttps://safimbenchmark.com.",
+ "authors": "Linyuan Gong, Sida Wang, Mostafa Elhoushi, Alvin Cheung",
+ "published": "2024-03-07",
+ "updated": "2024-04-10",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.LG",
+ "cs.SE"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.18580v1",
+ "title": "FFT: Towards Harmlessness Evaluation and Analysis for LLMs with Factuality, Fairness, Toxicity",
+ "abstract": "The widespread of generative artificial intelligence has heightened concerns\nabout the potential harms posed by AI-generated texts, primarily stemming from\nfactoid, unfair, and toxic content. Previous researchers have invested much\neffort in assessing the harmlessness of generative language models. However,\nexisting benchmarks are struggling in the era of large language models (LLMs),\ndue to the stronger language generation and instruction following capabilities,\nas well as wider applications. In this paper, we propose FFT, a new benchmark\nwith 2116 elaborated-designed instances, for LLM harmlessness evaluation with\nfactuality, fairness, and toxicity. To investigate the potential harms of LLMs,\nwe evaluate 9 representative LLMs covering various parameter scales, training\nstages, and creators. Experiments show that the harmlessness of LLMs is still\nunder-satisfactory, and extensive analysis derives some insightful findings\nthat could inspire future research for harmless LLM research.",
+ "authors": "Shiyao Cui, Zhenyu Zhang, Yilong Chen, Wenyuan Zhang, Tianyun Liu, Siqi Wang, Tingwen Liu",
+ "published": "2023-11-30",
+ "updated": "2023-11-30",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.CR"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.18502v1",
+ "title": "Few-Shot Fairness: Unveiling LLM's Potential for Fairness-Aware Classification",
+ "abstract": "Employing Large Language Models (LLM) in various downstream applications such\nas classification is crucial, especially for smaller companies lacking the\nexpertise and resources required for fine-tuning a model. Fairness in LLMs\nhelps ensure inclusivity, equal representation based on factors such as race,\ngender and promotes responsible AI deployment. As the use of LLMs has become\nincreasingly prevalent, it is essential to assess whether LLMs can generate\nfair outcomes when subjected to considerations of fairness. In this study, we\nintroduce a framework outlining fairness regulations aligned with various\nfairness definitions, with each definition being modulated by varying degrees\nof abstraction. We explore the configuration for in-context learning and the\nprocedure for selecting in-context demonstrations using RAG, while\nincorporating fairness rules into the process. Experiments conducted with\ndifferent LLMs indicate that GPT-4 delivers superior results in terms of both\naccuracy and fairness compared to other models. This work is one of the early\nattempts to achieve fairness in prediction tasks by utilizing LLMs through\nin-context learning.",
+ "authors": "Garima Chhikara, Anurag Sharma, Kripabandhu Ghosh, Abhijnan Chakraborty",
+ "published": "2024-02-28",
+ "updated": "2024-02-28",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.00306v1",
+ "title": "Probing Explicit and Implicit Gender Bias through LLM Conditional Text Generation",
+ "abstract": "Large Language Models (LLMs) can generate biased and toxic responses. Yet\nmost prior work on LLM gender bias evaluation requires predefined\ngender-related phrases or gender stereotypes, which are challenging to be\ncomprehensively collected and are limited to explicit bias evaluation. In\naddition, we believe that instances devoid of gender-related language or\nexplicit stereotypes in inputs can still induce gender bias in LLMs. Thus, in\nthis work, we propose a conditional text generation mechanism without the need\nfor predefined gender phrases and stereotypes. This approach employs three\ntypes of inputs generated through three distinct strategies to probe LLMs,\naiming to show evidence of explicit and implicit gender biases in LLMs. We also\nutilize explicit and implicit evaluation metrics to evaluate gender bias in\nLLMs under different strategies. Our experiments demonstrate that an increased\nmodel size does not consistently lead to enhanced fairness and all tested LLMs\nexhibit explicit and/or implicit gender bias, even when explicit gender\nstereotypes are absent in the inputs.",
+ "authors": "Xiangjue Dong, Yibo Wang, Philip S. Yu, James Caverlee",
+ "published": "2023-11-01",
+ "updated": "2023-11-01",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.12090v1",
+ "title": "UP5: Unbiased Foundation Model for Fairness-aware Recommendation",
+ "abstract": "Recent advancements in foundation models such as large language models (LLM)\nhave propelled them to the forefront of recommender systems (RS). Moreover,\nfairness in RS is critical since many users apply it for decision-making and\ndemand fulfillment. However, at present, there is a lack of understanding\nregarding the level of fairness exhibited by recommendation foundation models\nand the appropriate methods for equitably treating different groups of users in\nfoundation models. In this paper, we focus on user-side unfairness problem and\nshow through a thorough examination that there is unfairness involved in LLMs\nthat lead to unfair recommendation results. To eliminate bias from LLM for\nfairness-aware recommendation, we introduce a novel Unbiased P5 (UP5)\nfoundation model based on Counterfactually-Fair-Prompting (CFP) techniques. CFP\nincludes two sub-modules: a personalized prefix prompt that enhances fairness\nwith respect to individual sensitive attributes, and a Prompt Mixture that\nintegrates multiple counterfactually-fair prompts for a set of sensitive\nattributes. Experiments are conducted on two real-world datasets, MovieLens-1M\nand Insurance, and results are compared with both matching-based and\nsequential-based fairness-aware recommendation models. The results show that\nUP5 achieves better recommendation performance and meanwhile exhibits a high\nlevel of fairness.",
+ "authors": "Wenyue Hua, Yingqiang Ge, Shuyuan Xu, Jianchao Ji, Yongfeng Zhang",
+ "published": "2023-05-20",
+ "updated": "2023-05-20",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR",
+ "cs.AI",
+ "cs.CL",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2309.03852v2",
+ "title": "FLM-101B: An Open LLM and How to Train It with $100K Budget",
+ "abstract": "Large language models (LLMs) have achieved remarkable success in NLP and\nmultimodal tasks, among others. Despite these successes, two main challenges\nremain in developing LLMs: (i) high computational cost, and (ii) fair and\nobjective evaluations. In this paper, we report a solution to significantly\nreduce LLM training cost through a growth strategy. We demonstrate that a\n101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US\ndollars. Inspired by IQ tests, we also consolidate an additional range of\nevaluations on top of existing evaluations that focus on knowledge-oriented\nabilities. These IQ evaluations include symbolic mapping, rule understanding,\npattern mining, and anti-interference. Such evaluations minimize the potential\nimpact of memorization. Experimental results show that our model, named\nFLM-101B, trained with a budget of 100K US dollars, achieves performance\ncomparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,\nespecially on the additional range of IQ evaluations. The checkpoint of\nFLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.",
+ "authors": "Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang",
+ "published": "2023-09-07",
+ "updated": "2023-09-17",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.14208v2",
+ "title": "Content Conditional Debiasing for Fair Text Embedding",
+ "abstract": "Mitigating biases in machine learning models has gained increasing attention\nin Natural Language Processing (NLP). Yet, only a few studies focus on fair\ntext embeddings, which are crucial yet challenging for real-world applications.\nIn this paper, we propose a novel method for learning fair text embeddings. We\nachieve fairness while maintaining utility trade-off by ensuring conditional\nindependence between sensitive attributes and text embeddings conditioned on\nthe content. Specifically, we enforce that embeddings of texts with different\nsensitive attributes but identical content maintain the same distance toward\nthe embedding of their corresponding neutral text. Furthermore, we address the\nissue of lacking proper training data by using Large Language Models (LLMs) to\naugment texts into different sensitive groups. Our extensive evaluations\ndemonstrate that our approach effectively improves fairness while preserving\nthe utility of embeddings, representing a pioneering effort in achieving\nconditional independence for fair text embeddings.",
+ "authors": "Wenlong Deng, Blair Chen, Xiaoxiao Li, Christos Thrampoulidis",
+ "published": "2024-02-22",
+ "updated": "2024-02-23",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.CY",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.05668v1",
+ "title": "CFaiRLLM: Consumer Fairness Evaluation in Large-Language Model Recommender System",
+ "abstract": "In the evolving landscape of recommender systems, the integration of Large\nLanguage Models (LLMs) such as ChatGPT marks a new era, introducing the concept\nof Recommendation via LLM (RecLLM). While these advancements promise\nunprecedented personalization and efficiency, they also bring to the fore\ncritical concerns regarding fairness, particularly in how recommendations might\ninadvertently perpetuate or amplify biases associated with sensitive user\nattributes. In order to address these concerns, our study introduces a\ncomprehensive evaluation framework, CFaiRLLM, aimed at evaluating (and thereby\nmitigating) biases on the consumer side within RecLLMs.\n Our research methodically assesses the fairness of RecLLMs by examining how\nrecommendations might vary with the inclusion of sensitive attributes such as\ngender, age, and their intersections, through both similarity alignment and\ntrue preference alignment. By analyzing recommendations generated under\ndifferent conditions-including the use of sensitive attributes in user\nprompts-our framework identifies potential biases in the recommendations\nprovided. A key part of our study involves exploring how different detailed\nstrategies for constructing user profiles (random, top-rated, recent) impact\nthe alignment between recommendations made without consideration of sensitive\nattributes and those that are sensitive-attribute-aware, highlighting the bias\nmechanisms within RecLLMs.\n The findings in our study highlight notable disparities in the fairness of\nrecommendations, particularly when sensitive attributes are integrated into the\nrecommendation process, either individually or in combination. The analysis\ndemonstrates that the choice of user profile sampling strategy plays a\nsignificant role in affecting fairness outcomes, highlighting the complexity of\nachieving fair recommendations in the era of LLMs.",
+ "authors": "Yashar Deldjoo, Tommaso di Noia",
+ "published": "2024-03-08",
+ "updated": "2024-03-08",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2308.10397v2",
+ "title": "FairMonitor: A Four-Stage Automatic Framework for Detecting Stereotypes and Biases in Large Language Models",
+ "abstract": "Detecting stereotypes and biases in Large Language Models (LLMs) can enhance\nfairness and reduce adverse impacts on individuals or groups when these LLMs\nare applied. However, the majority of existing methods focus on measuring the\nmodel's preference towards sentences containing biases and stereotypes within\ndatasets, which lacks interpretability and cannot detect implicit biases and\nstereotypes in the real world. To address this gap, this paper introduces a\nfour-stage framework to directly evaluate stereotypes and biases in the\ngenerated content of LLMs, including direct inquiry testing, serial or adapted\nstory testing, implicit association testing, and unknown situation testing.\nAdditionally, the paper proposes multi-dimensional evaluation metrics and\nexplainable zero-shot prompts for automated evaluation. Using the education\nsector as a case study, we constructed the Edu-FairMonitor based on the\nfour-stage framework, which encompasses 12,632 open-ended questions covering\nnine sensitive factors and 26 educational scenarios. Experimental results\nreveal varying degrees of stereotypes and biases in five LLMs evaluated on\nEdu-FairMonitor. Moreover, the results of our proposed automated evaluation\nmethod have shown a high correlation with human annotations.",
+ "authors": "Yanhong Bai, Jiabao Zhao, Jinxin Shi, Tingjiang Wei, Xingjiao Wu, Liang He",
+ "published": "2023-08-21",
+ "updated": "2023-10-27",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.03514v3",
+ "title": "Can Large Language Models Transform Computational Social Science?",
+ "abstract": "Large Language Models (LLMs) are capable of successfully performing many\nlanguage processing tasks zero-shot (without training data). If zero-shot LLMs\ncan also reliably classify and explain social phenomena like persuasiveness and\npolitical ideology, then LLMs could augment the Computational Social Science\n(CSS) pipeline in important ways. This work provides a road map for using LLMs\nas CSS tools. Towards this end, we contribute a set of prompting best practices\nand an extensive evaluation pipeline to measure the zero-shot performance of 13\nlanguage models on 25 representative English CSS benchmarks. On taxonomic\nlabeling tasks (classification), LLMs fail to outperform the best fine-tuned\nmodels but still achieve fair levels of agreement with humans. On free-form\ncoding tasks (generation), LLMs produce explanations that often exceed the\nquality of crowdworkers' gold references. We conclude that the performance of\ntoday's LLMs can augment the CSS research pipeline in two ways: (1) serving as\nzero-shot data annotators on human annotation teams, and (2) bootstrapping\nchallenging creative generation tasks (e.g., explaining the underlying\nattributes of a text). In summary, LLMs are posed to meaningfully participate\nin social science analysis in partnership with humans.",
+ "authors": "Caleb Ziems, William Held, Omar Shaikh, Jiaao Chen, Zhehao Zhang, Diyi Yang",
+ "published": "2023-04-12",
+ "updated": "2024-02-26",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.13925v1",
+ "title": "MARIO Eval: Evaluate Your Math LLM with your Math LLM--A mathematical dataset evaluation toolkit",
+ "abstract": "Large language models (LLMs) have been explored in a variety of reasoning\ntasks including solving of mathematical problems. Each math dataset typically\nincludes its own specially designed evaluation script, which, while suitable\nfor its intended use, lacks generalizability across different datasets.\nConsequently, updates and adaptations to these evaluation tools tend to occur\nwithout being systematically reported, leading to inconsistencies and obstacles\nto fair comparison across studies. To bridge this gap, we introduce a\ncomprehensive mathematical evaluation toolkit that not only utilizes a python\ncomputer algebra system (CAS) for its numerical accuracy, but also integrates\nan optional LLM, known for its considerable natural language processing\ncapabilities. To validate the effectiveness of our toolkit, we manually\nannotated two distinct datasets. Our experiments demonstrate that the toolkit\nyields more robust evaluation results compared to prior works, even without an\nLLM. Furthermore, when an LLM is incorporated, there is a notable enhancement.\nThe code for our method will be made available at\n\\url{https://github.com/MARIO-Math-Reasoning/math_evaluation}.",
+ "authors": "Boning Zhang, Chengxi Li, Kai Fan",
+ "published": "2024-04-22",
+ "updated": "2024-04-22",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.07981v1",
+ "title": "Manipulating Large Language Models to Increase Product Visibility",
+ "abstract": "Large language models (LLMs) are increasingly being integrated into search\nengines to provide natural language responses tailored to user queries.\nCustomers and end-users are also becoming more dependent on these models for\nquick and easy purchase decisions. In this work, we investigate whether\nrecommendations from LLMs can be manipulated to enhance a product's visibility.\nWe demonstrate that adding a strategic text sequence (STS) -- a carefully\ncrafted message -- to a product's information page can significantly increase\nits likelihood of being listed as the LLM's top recommendation. To understand\nthe impact of STS, we use a catalog of fictitious coffee machines and analyze\nits effect on two target products: one that seldom appears in the LLM's\nrecommendations and another that usually ranks second. We observe that the\nstrategic text sequence significantly enhances the visibility of both products\nby increasing their chances of appearing as the top recommendation. This\nability to manipulate LLM-generated search responses provides vendors with a\nconsiderable competitive advantage and has the potential to disrupt fair market\ncompetition. Just as search engine optimization (SEO) revolutionized how\nwebpages are customized to rank higher in search engine results, influencing\nLLM recommendations could profoundly impact content optimization for AI-driven\nsearch services. Code for our experiments is available at\nhttps://github.com/aounon/llm-rank-optimizer.",
+ "authors": "Aounon Kumar, Himabindu Lakkaraju",
+ "published": "2024-04-11",
+ "updated": "2024-04-11",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR",
+ "cs.AI",
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2312.15198v2",
+ "title": "Do LLM Agents Exhibit Social Behavior?",
+ "abstract": "The advances of Large Language Models (LLMs) are expanding their utility in\nboth academic research and practical applications. Recent social science\nresearch has explored the use of these ``black-box'' LLM agents for simulating\ncomplex social systems and potentially substituting human subjects in\nexperiments. Our study delves into this emerging domain, investigating the\nextent to which LLMs exhibit key social interaction principles, such as social\nlearning, social preference, and cooperative behavior (indirect reciprocity),\nin their interactions with humans and other agents. We develop a framework for\nour study, wherein classical laboratory experiments involving human subjects\nare adapted to use LLM agents. This approach involves step-by-step reasoning\nthat mirrors human cognitive processes and zero-shot learning to assess the\ninnate preferences of LLMs. Our analysis of LLM agents' behavior includes both\nthe primary effects and an in-depth examination of the underlying mechanisms.\nFocusing on GPT-4, our analyses suggest that LLM agents appear to exhibit a\nrange of human-like social behaviors such as distributional and reciprocity\npreferences, responsiveness to group identity cues, engagement in indirect\nreciprocity, and social learning capabilities. However, our analysis also\nreveals notable differences: LLMs demonstrate a pronounced fairness preference,\nweaker positive reciprocity, and a more calculating approach in social learning\ncompared to humans. These insights indicate that while LLMs hold great promise\nfor applications in social science research, such as in laboratory experiments\nand agent-based modeling, the subtle behavioral differences between LLM agents\nand humans warrant further investigation. Careful examination and development\nof protocols in evaluating the social behaviors of LLMs are necessary before\ndirectly applying these models to emulate human behavior.",
+ "authors": "Yan Leng, Yuan Yuan",
+ "published": "2023-12-23",
+ "updated": "2024-02-22",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI",
+ "cs.SI",
+ "econ.GN",
+ "q-fin.EC"
+ ],
+ "category": "LLM Fairness"
+ }
+ ]
+ ]
+ },
+ {
+ "url": "http://arxiv.org/abs/1908.10084v1",
+ "title": "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+ "abstract": "BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) has set a new\nstate-of-the-art performance on sentence-pair regression tasks like semantic\ntextual similarity (STS). However, it requires that both sentences are fed into\nthe network, which causes a massive computational overhead: Finding the most\nsimilar pair in a collection of 10,000 sentences requires about 50 million\ninference computations (~65 hours) with BERT. The construction of BERT makes it\nunsuitable for semantic similarity search as well as for unsupervised tasks\nlike clustering.\n In this publication, we present Sentence-BERT (SBERT), a modification of the\npretrained BERT network that use siamese and triplet network structures to\nderive semantically meaningful sentence embeddings that can be compared using\ncosine-similarity. This reduces the effort for finding the most similar pair\nfrom 65 hours with BERT / RoBERTa to about 5 seconds with SBERT, while\nmaintaining the accuracy from BERT.\n We evaluate SBERT and SRoBERTa on common STS tasks and transfer learning\ntasks, where it outperforms other state-of-the-art sentence embeddings methods.",
+ "authors": "Nils Reimers, Iryna Gurevych",
+ "published": "2019-08-27",
+ "updated": "2019-08-27",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/1705.02364v5",
+ "title": "Supervised Learning of Universal Sentence Representations from Natural Language Inference Data",
+ "abstract": "Many modern NLP systems rely on word embeddings, previously trained in an\nunsupervised manner on large corpora, as base features. Efforts to obtain\nembeddings for larger chunks of text, such as sentences, have however not been\nso successful. Several attempts at learning unsupervised representations of\nsentences have not reached satisfactory enough performance to be widely\nadopted. In this paper, we show how universal sentence representations trained\nusing the supervised data of the Stanford Natural Language Inference datasets\ncan consistently outperform unsupervised methods like SkipThought vectors on a\nwide range of transfer tasks. Much like how computer vision uses ImageNet to\nobtain features, which can then be transferred to other tasks, our work tends\nto indicate the suitability of natural language inference for transfer learning\nto other NLP tasks. Our encoder is publicly available.",
+ "authors": "Alexis Conneau, Douwe Kiela, Holger Schwenk, Loic Barrault, Antoine Bordes",
+ "published": "2017-05-05",
+ "updated": "2018-07-08",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2103.00020v1",
+ "title": "Learning Transferable Visual Models From Natural Language Supervision",
+ "abstract": "State-of-the-art computer vision systems are trained to predict a fixed set\nof predetermined object categories. This restricted form of supervision limits\ntheir generality and usability since additional labeled data is needed to\nspecify any other visual concept. Learning directly from raw text about images\nis a promising alternative which leverages a much broader source of\nsupervision. We demonstrate that the simple pre-training task of predicting\nwhich caption goes with which image is an efficient and scalable way to learn\nSOTA image representations from scratch on a dataset of 400 million (image,\ntext) pairs collected from the internet. After pre-training, natural language\nis used to reference learned visual concepts (or describe new ones) enabling\nzero-shot transfer of the model to downstream tasks. We study the performance\nof this approach by benchmarking on over 30 different existing computer vision\ndatasets, spanning tasks such as OCR, action recognition in videos,\ngeo-localization, and many types of fine-grained object classification. The\nmodel transfers non-trivially to most tasks and is often competitive with a\nfully supervised baseline without the need for any dataset specific training.\nFor instance, we match the accuracy of the original ResNet-50 on ImageNet\nzero-shot without needing to use any of the 1.28 million training examples it\nwas trained on. We release our code and pre-trained model weights at\nhttps://github.com/OpenAI/CLIP.",
+ "authors": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever",
+ "published": "2021-02-26",
+ "updated": "2021-02-26",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV",
+ "cs.LG"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/1907.00937v1",
+ "title": "Semantic Product Search",
+ "abstract": "We study the problem of semantic matching in product search, that is, given a\ncustomer query, retrieve all semantically related products from the catalog.\nPure lexical matching via an inverted index falls short in this respect due to\nseveral factors: a) lack of understanding of hypernyms, synonyms, and antonyms,\nb) fragility to morphological variants (e.g. \"woman\" vs. \"women\"), and c)\nsensitivity to spelling errors. To address these issues, we train a deep\nlearning model for semantic matching using customer behavior data. Much of the\nrecent work on large-scale semantic search using deep learning focuses on\nranking for web search. In contrast, semantic matching for product search\npresents several novel challenges, which we elucidate in this paper. We address\nthese challenges by a) developing a new loss function that has an inbuilt\nthreshold to differentiate between random negative examples, impressed but not\npurchased examples, and positive examples (purchased items), b) using average\npooling in conjunction with n-grams to capture short-range linguistic patterns,\nc) using hashing to handle out of vocabulary tokens, and d) using a model\nparallel training architecture to scale across 8 GPUs. We present compelling\noffline results that demonstrate at least 4.7% improvement in Recall@100 and\n14.5% improvement in mean average precision (MAP) over baseline\nstate-of-the-art semantic search methods using the same tokenization method.\nMoreover, we present results and discuss learnings from online A/B tests which\ndemonstrate the efficacy of our method.",
+ "authors": "Priyanka Nigam, Yiwei Song, Vijai Mohan, Vihan Lakshman, Weitian, Ding, Ankit Shingavi, Choon Hui Teo, Hao Gu, Bing Yin",
+ "published": "2019-07-01",
+ "updated": "2019-07-01",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR",
+ "cs.CL"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2205.11728v1",
+ "title": "ItemSage: Learning Product Embeddings for Shopping Recommendations at Pinterest",
+ "abstract": "Learned embeddings for products are an important building block for web-scale\ne-commerce recommendation systems. At Pinterest, we build a single set of\nproduct embeddings called ItemSage to provide relevant recommendations in all\nshopping use cases including user, image and search based recommendations. This\napproach has led to significant improvements in engagement and conversion\nmetrics, while reducing both infrastructure and maintenance cost. While most\nprior work focuses on building product embeddings from features coming from a\nsingle modality, we introduce a transformer-based architecture capable of\naggregating information from both text and image modalities and show that it\nsignificantly outperforms single modality baselines. We also utilize multi-task\nlearning to make ItemSage optimized for several engagement types, leading to a\ncandidate generation system that is efficient for all of the engagement\nobjectives of the end-to-end recommendation system. Extensive offline\nexperiments are conducted to illustrate the effectiveness of our approach and\nresults from online A/B experiments show substantial gains in key business\nmetrics (up to +7% gross merchandise value/user and +11% click volume).",
+ "authors": "Paul Baltescu, Haoyu Chen, Nikil Pancha, Andrew Zhai, Jure Leskovec, Charles Rosenberg",
+ "published": "2022-05-24",
+ "updated": "2022-05-24",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR",
+ "cs.LG"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2202.07247v1",
+ "title": "CommerceMM: Large-Scale Commerce MultiModal Representation Learning with Omni Retrieval",
+ "abstract": "We introduce CommerceMM - a multimodal model capable of providing a diverse\nand granular understanding of commerce topics associated to the given piece of\ncontent (image, text, image+text), and having the capability to generalize to a\nwide range of tasks, including Multimodal Categorization, Image-Text Retrieval,\nQuery-to-Product Retrieval, Image-to-Product Retrieval, etc. We follow the\npre-training + fine-tuning training regime and present 5 effective pre-training\ntasks on image-text pairs. To embrace more common and diverse commerce data\nwith text-to-multimodal, image-to-multimodal, and multimodal-to-multimodal\nmapping, we propose another 9 novel cross-modal and cross-pair retrieval tasks,\ncalled Omni-Retrieval pre-training. The pre-training is conducted in an\nefficient manner with only two forward/backward updates for the combined 14\ntasks. Extensive experiments and analysis show the effectiveness of each task.\nWhen combining all pre-training tasks, our model achieves state-of-the-art\nperformance on 7 commerce-related downstream tasks after fine-tuning.\nAdditionally, we propose a novel approach of modality randomization to\ndynamically adjust our model under different efficiency constraints.",
+ "authors": "Licheng Yu, Jun Chen, Animesh Sinha, Mengjiao MJ Wang, Hugo Chen, Tamara L. Berg, Ning Zhang",
+ "published": "2022-02-15",
+ "updated": "2022-02-15",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV",
+ "cs.AI",
+ "cs.CL",
+ "cs.MM",
+ "cs.SI"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2006.11632v2",
+ "title": "Embedding-based Retrieval in Facebook Search",
+ "abstract": "Search in social networks such as Facebook poses different challenges than in\nclassical web search: besides the query text, it is important to take into\naccount the searcher's context to provide relevant results. Their social graph\nis an integral part of this context and is a unique aspect of Facebook search.\nWhile embedding-based retrieval (EBR) has been applied in eb search engines for\nyears, Facebook search was still mainly based on a Boolean matching model. In\nthis paper, we discuss the techniques for applying EBR to a Facebook Search\nsystem. We introduce the unified embedding framework developed to model\nsemantic embeddings for personalized search, and the system to serve\nembedding-based retrieval in a typical search system based on an inverted\nindex. We discuss various tricks and experiences on end-to-end optimization of\nthe whole system, including ANN parameter tuning and full-stack optimization.\nFinally, we present our progress on two selected advanced topics about\nmodeling. We evaluated EBR on verticals for Facebook Search with significant\nmetrics gains observed in online A/B experiments. We believe this paper will\nprovide useful insights and experiences to help people on developing\nembedding-based retrieval systems in search engines.",
+ "authors": "Jui-Ting Huang, Ashish Sharma, Shuying Sun, Li Xia, David Zhang, Philip Pronin, Janani Padmanabhan, Giuseppe Ottaviano, Linjun Yang",
+ "published": "2020-06-20",
+ "updated": "2020-07-29",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2202.06212v1",
+ "title": "Uni-Retriever: Towards Learning The Unified Embedding Based Retriever in Bing Sponsored Search",
+ "abstract": "Embedding based retrieval (EBR) is a fundamental building block in many web\napplications. However, EBR in sponsored search is distinguished from other\ngeneric scenarios and technically challenging due to the need of serving\nmultiple retrieval purposes: firstly, it has to retrieve high-relevance ads,\nwhich may exactly serve user's search intent; secondly, it needs to retrieve\nhigh-CTR ads so as to maximize the overall user clicks. In this paper, we\npresent a novel representation learning framework Uni-Retriever developed for\nBing Search, which unifies two different training modes knowledge distillation\nand contrastive learning to realize both required objectives. On one hand, the\ncapability of making high-relevance retrieval is established by distilling\nknowledge from the ``relevance teacher model''. On the other hand, the\ncapability of making high-CTR retrieval is optimized by learning to\ndiscriminate user's clicked ads from the entire corpus. The two training modes\nare jointly performed as a multi-objective learning process, such that the ads\nof high relevance and CTR can be favored by the generated embeddings. Besides\nthe learning strategy, we also elaborate our solution for EBR serving pipeline\nbuilt upon the substantially optimized DiskANN, where massive-scale EBR can be\nperformed with competitive time and memory efficiency, and accomplished in\nhigh-quality. We make comprehensive offline and online experiments to evaluate\nthe proposed techniques, whose findings may provide useful insights for the\nfuture development of EBR systems. Uni-Retriever has been mainstreamed as the\nmajor retrieval path in Bing's production thanks to the notable improvements on\nthe representation and EBR serving quality.",
+ "authors": "Jianjin Zhang, Zheng Liu, Weihao Han, Shitao Xiao, Ruicheng Zheng, Yingxia Shao, Hao Sun, Hanqing Zhu, Premkumar Srinivasan, Denvy Deng, Qi Zhang, Xing Xie",
+ "published": "2022-02-13",
+ "updated": "2022-02-13",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR",
+ "cs.CL"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2108.05887v1",
+ "title": "Billion-Scale Pretraining with Vision Transformers for Multi-Task Visual Representations",
+ "abstract": "Large-scale pretraining of visual representations has led to state-of-the-art\nperformance on a range of benchmark computer vision tasks, yet the benefits of\nthese techniques at extreme scale in complex production systems has been\nrelatively unexplored. We consider the case of a popular visual discovery\nproduct, where these representations are trained with multi-task learning, from\nuse-case specific visual understanding (e.g. skin tone classification) to\ngeneral representation learning for all visual content (e.g. embeddings for\nretrieval). In this work, we describe how we (1) generate a dataset with over a\nbillion images via large weakly-supervised pretraining to improve the\nperformance of these visual representations, and (2) leverage Transformers to\nreplace the traditional convolutional backbone, with insights into both system\nand performance improvements, especially at 1B+ image scale. To support this\nbackbone model, we detail a systematic approach to deriving weakly-supervised\nimage annotations from heterogenous text signals, demonstrating the benefits of\nclustering techniques to handle the long-tail distribution of image labels.\nThrough a comprehensive study of offline and online evaluation, we show that\nlarge-scale Transformer-based pretraining provides significant benefits to\nindustry computer vision applications. The model is deployed in a production\nvisual shopping system, with 36% improvement in top-1 relevance and 23%\nimprovement in click-through volume. We conduct extensive experiments to better\nunderstand the empirical relationships between Transformer-based architectures,\ndataset scale, and the performance of production vision systems.",
+ "authors": "Josh Beal, Hao-Yu Wu, Dong Huk Park, Andrew Zhai, Dmitry Kislyuk",
+ "published": "2021-08-12",
+ "updated": "2021-08-12",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV",
+ "cs.AI",
+ "cs.LG"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/1706.02216v4",
+ "title": "Inductive Representation Learning on Large Graphs",
+ "abstract": "Low-dimensional embeddings of nodes in large graphs have proved extremely\nuseful in a variety of prediction tasks, from content recommendation to\nidentifying protein functions. However, most existing approaches require that\nall nodes in the graph are present during training of the embeddings; these\nprevious approaches are inherently transductive and do not naturally generalize\nto unseen nodes. Here we present GraphSAGE, a general, inductive framework that\nleverages node feature information (e.g., text attributes) to efficiently\ngenerate node embeddings for previously unseen data. Instead of training\nindividual embeddings for each node, we learn a function that generates\nembeddings by sampling and aggregating features from a node's local\nneighborhood. Our algorithm outperforms strong baselines on three inductive\nnode-classification benchmarks: we classify the category of unseen nodes in\nevolving information graphs based on citation and Reddit post data, and we show\nthat our algorithm generalizes to completely unseen graphs using a multi-graph\ndataset of protein-protein interactions.",
+ "authors": "William L. Hamilton, Rex Ying, Jure Leskovec",
+ "published": "2017-06-07",
+ "updated": "2018-09-10",
+ "primary_cat": "cs.SI",
+ "cats": [
+ "cs.SI",
+ "cs.LG",
+ "stat.ML"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2106.09297v1",
+ "title": "Embedding-based Product Retrieval in Taobao Search",
+ "abstract": "Nowadays, the product search service of e-commerce platforms has become a\nvital shopping channel in people's life. The retrieval phase of products\ndetermines the search system's quality and gradually attracts researchers'\nattention. Retrieving the most relevant products from a large-scale corpus\nwhile preserving personalized user characteristics remains an open question.\nRecent approaches in this domain have mainly focused on embedding-based\nretrieval (EBR) systems. However, after a long period of practice on Taobao, we\nfind that the performance of the EBR system is dramatically degraded due to\nits: (1) low relevance with a given query and (2) discrepancy between the\ntraining and inference phases. Therefore, we propose a novel and practical\nembedding-based product retrieval model, named Multi-Grained Deep Semantic\nProduct Retrieval (MGDSPR). Specifically, we first identify the inconsistency\nbetween the training and inference stages, and then use the softmax\ncross-entropy loss as the training objective, which achieves better performance\nand faster convergence. Two efficient methods are further proposed to improve\nretrieval relevance, including smoothing noisy training data and generating\nrelevance-improving hard negative samples without requiring extra knowledge and\ntraining procedures. We evaluate MGDSPR on Taobao Product Search with\nsignificant metrics gains observed in offline experiments and online A/B tests.\nMGDSPR has been successfully deployed to the existing multi-channel retrieval\nsystem in Taobao Search. We also introduce the online deployment scheme and\nshare practical lessons of our retrieval system to contribute to the community.",
+ "authors": "Sen Li, Fuyu Lv, Taiwei Jin, Guli Lin, Keping Yang, Xiaoyi Zeng, Xiao-Ming Wu, Qianli Ma",
+ "published": "2021-06-17",
+ "updated": "2021-06-17",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/1806.01973v1",
+ "title": "Graph Convolutional Neural Networks for Web-Scale Recommender Systems",
+ "abstract": "Recent advancements in deep neural networks for graph-structured data have\nled to state-of-the-art performance on recommender system benchmarks. However,\nmaking these methods practical and scalable to web-scale recommendation tasks\nwith billions of items and hundreds of millions of users remains a challenge.\nHere we describe a large-scale deep recommendation engine that we developed and\ndeployed at Pinterest. We develop a data-efficient Graph Convolutional Network\n(GCN) algorithm PinSage, which combines efficient random walks and graph\nconvolutions to generate embeddings of nodes (i.e., items) that incorporate\nboth graph structure as well as node feature information. Compared to prior GCN\napproaches, we develop a novel method based on highly efficient random walks to\nstructure the convolutions and design a novel training strategy that relies on\nharder-and-harder training examples to improve robustness and convergence of\nthe model. We also develop an efficient MapReduce model inference algorithm to\ngenerate embeddings using a trained model. We deploy PinSage at Pinterest and\ntrain it on 7.5 billion examples on a graph with 3 billion nodes representing\npins and boards, and 18 billion edges. According to offline metrics, user\nstudies and A/B tests, PinSage generates higher-quality recommendations than\ncomparable deep learning and graph-based alternatives. To our knowledge, this\nis the largest application of deep graph embeddings to date and paves the way\nfor a new generation of web-scale recommender systems based on graph\nconvolutional architectures.",
+ "authors": "Rex Ying, Ruining He, Kaifeng Chen, Pong Eksombatchai, William L. Hamilton, Jure Leskovec",
+ "published": "2018-06-06",
+ "updated": "2018-06-06",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR",
+ "cs.LG",
+ "stat.ML"
+ ],
+ "label": "Related Work"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.04489v1",
+ "title": "De-amplifying Bias from Differential Privacy in Language Model Fine-tuning",
+ "abstract": "Fairness and privacy are two important values machine learning (ML)\npractitioners often seek to operationalize in models. Fairness aims to reduce\nmodel bias for social/demographic sub-groups. Privacy via differential privacy\n(DP) mechanisms, on the other hand, limits the impact of any individual's\ntraining data on the resulting model. The trade-offs between privacy and\nfairness goals of trustworthy ML pose a challenge to those wishing to address\nboth. We show that DP amplifies gender, racial, and religious bias when\nfine-tuning large language models (LLMs), producing models more biased than\nones fine-tuned without DP. We find the cause of the amplification to be a\ndisparity in convergence of gradients across sub-groups. Through the case of\nbinary gender bias, we demonstrate that Counterfactual Data Augmentation (CDA),\na known method for addressing bias, also mitigates bias amplification by DP. As\na consequence, DP and CDA together can be used to fine-tune models while\nmaintaining both fairness and privacy.",
+ "authors": "Sanjari Srivastava, Piotr Mardziel, Zhikhun Zhang, Archana Ahlawat, Anupam Datta, John C Mitchell",
+ "published": "2024-02-07",
+ "updated": "2024-02-07",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.CR",
+ "cs.CY",
+ "stat.ME"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.03514v3",
+ "title": "Can Large Language Models Transform Computational Social Science?",
+ "abstract": "Large Language Models (LLMs) are capable of successfully performing many\nlanguage processing tasks zero-shot (without training data). If zero-shot LLMs\ncan also reliably classify and explain social phenomena like persuasiveness and\npolitical ideology, then LLMs could augment the Computational Social Science\n(CSS) pipeline in important ways. This work provides a road map for using LLMs\nas CSS tools. Towards this end, we contribute a set of prompting best practices\nand an extensive evaluation pipeline to measure the zero-shot performance of 13\nlanguage models on 25 representative English CSS benchmarks. On taxonomic\nlabeling tasks (classification), LLMs fail to outperform the best fine-tuned\nmodels but still achieve fair levels of agreement with humans. On free-form\ncoding tasks (generation), LLMs produce explanations that often exceed the\nquality of crowdworkers' gold references. We conclude that the performance of\ntoday's LLMs can augment the CSS research pipeline in two ways: (1) serving as\nzero-shot data annotators on human annotation teams, and (2) bootstrapping\nchallenging creative generation tasks (e.g., explaining the underlying\nattributes of a text). In summary, LLMs are posed to meaningfully participate\nin social science analysis in partnership with humans.",
+ "authors": "Caleb Ziems, William Held, Omar Shaikh, Jiaao Chen, Zhehao Zhang, Diyi Yang",
+ "published": "2023-04-12",
+ "updated": "2024-02-26",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.14607v2",
+ "title": "Confronting LLMs with Traditional ML: Rethinking the Fairness of Large Language Models in Tabular Classifications",
+ "abstract": "Recent literature has suggested the potential of using large language models\n(LLMs) to make classifications for tabular tasks. However, LLMs have been shown\nto exhibit harmful social biases that reflect the stereotypes and inequalities\npresent in society. To this end, as well as the widespread use of tabular data\nin many high-stake applications, it is important to explore the following\nquestions: what sources of information do LLMs draw upon when making\nclassifications for tabular tasks; whether and to what extent are LLM\nclassifications for tabular data influenced by social biases and stereotypes;\nand what are the consequential implications for fairness?\n Through a series of experiments, we delve into these questions and show that\nLLMs tend to inherit social biases from their training data which significantly\nimpact their fairness in tabular classification tasks. Furthermore, our\ninvestigations show that in the context of bias mitigation, though in-context\nlearning and finetuning have a moderate effect, the fairness metric gap between\ndifferent subgroups is still larger than that in traditional machine learning\nmodels, such as Random Forest and shallow Neural Networks. This observation\nemphasizes that the social biases are inherent within the LLMs themselves and\ninherited from their pretraining corpus, not only from the downstream task\ndatasets. Besides, we demonstrate that label-flipping of in-context examples\ncan significantly reduce biases, further highlighting the presence of inherent\nbias within LLMs.",
+ "authors": "Yanchen Liu, Srishti Gautam, Jiaqi Ma, Himabindu Lakkaraju",
+ "published": "2023-10-23",
+ "updated": "2024-04-02",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.02839v1",
+ "title": "An Empirical Study of LLM-as-a-Judge for LLM Evaluation: Fine-tuned Judge Models are Task-specific Classifiers",
+ "abstract": "Recently, there has been a growing trend of utilizing Large Language Model\n(LLM) to evaluate the quality of other LLMs. Many studies have employed\nproprietary close-source models, especially GPT4, as the evaluator.\nAlternatively, other works have fine-tuned judge models based on open-source\nLLMs as the evaluator. In this study, we conduct an empirical study of\ndifferent judge models on their evaluation capability. Our findings indicate\nthat although the fine-tuned judge models achieve high accuracy on in-domain\ntest sets, even surpassing GPT4, they are inherently task-specific classifiers,\nand their generalizability and fairness severely underperform GPT4.",
+ "authors": "Hui Huang, Yingqi Qu, Jing Liu, Muyun Yang, Tiejun Zhao",
+ "published": "2024-03-05",
+ "updated": "2024-03-05",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2307.03838v2",
+ "title": "RADAR: Robust AI-Text Detection via Adversarial Learning",
+ "abstract": "Recent advances in large language models (LLMs) and the intensifying\npopularity of ChatGPT-like applications have blurred the boundary of\nhigh-quality text generation between humans and machines. However, in addition\nto the anticipated revolutionary changes to our technology and society, the\ndifficulty of distinguishing LLM-generated texts (AI-text) from human-generated\ntexts poses new challenges of misuse and fairness, such as fake content\ngeneration, plagiarism, and false accusations of innocent writers. While\nexisting works show that current AI-text detectors are not robust to LLM-based\nparaphrasing, this paper aims to bridge this gap by proposing a new framework\ncalled RADAR, which jointly trains a robust AI-text detector via adversarial\nlearning. RADAR is based on adversarial training of a paraphraser and a\ndetector. The paraphraser's goal is to generate realistic content to evade\nAI-text detection. RADAR uses the feedback from the detector to update the\nparaphraser, and vice versa. Evaluated with 8 different LLMs (Pythia, Dolly\n2.0, Palmyra, Camel, GPT-J, Dolly 1.0, LLaMA, and Vicuna) across 4 datasets,\nexperimental results show that RADAR significantly outperforms existing AI-text\ndetection methods, especially when paraphrasing is in place. We also identify\nthe strong transferability of RADAR from instruction-tuned LLMs to other LLMs,\nand evaluate the improved capability of RADAR via GPT-3.5-Turbo.",
+ "authors": "Xiaomeng Hu, Pin-Yu Chen, Tsung-Yi Ho",
+ "published": "2023-07-07",
+ "updated": "2023-10-24",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.09606v1",
+ "title": "Large Language Models and Causal Inference in Collaboration: A Comprehensive Survey",
+ "abstract": "Causal inference has shown potential in enhancing the predictive accuracy,\nfairness, robustness, and explainability of Natural Language Processing (NLP)\nmodels by capturing causal relationships among variables. The emergence of\ngenerative Large Language Models (LLMs) has significantly impacted various NLP\ndomains, particularly through their advanced reasoning capabilities. This\nsurvey focuses on evaluating and improving LLMs from a causal view in the\nfollowing areas: understanding and improving the LLMs' reasoning capacity,\naddressing fairness and safety issues in LLMs, complementing LLMs with\nexplanations, and handling multimodality. Meanwhile, LLMs' strong reasoning\ncapacities can in turn contribute to the field of causal inference by aiding\ncausal relationship discovery and causal effect estimations. This review\nexplores the interplay between causal inference frameworks and LLMs from both\nperspectives, emphasizing their collective potential to further the development\nof more advanced and equitable artificial intelligence systems.",
+ "authors": "Xiaoyu Liu, Paiheng Xu, Junda Wu, Jiaxin Yuan, Yifan Yang, Yuhang Zhou, Fuxiao Liu, Tianrui Guan, Haoliang Wang, Tong Yu, Julian McAuley, Wei Ai, Furong Huang",
+ "published": "2024-03-14",
+ "updated": "2024-03-14",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.15451v1",
+ "title": "Towards Enabling FAIR Dataspaces Using Large Language Models",
+ "abstract": "Dataspaces have recently gained adoption across various sectors, including\ntraditionally less digitized domains such as culture. Leveraging Semantic Web\ntechnologies helps to make dataspaces FAIR, but their complexity poses a\nsignificant challenge to the adoption of dataspaces and increases their cost.\nThe advent of Large Language Models (LLMs) raises the question of how these\nmodels can support the adoption of FAIR dataspaces. In this work, we\ndemonstrate the potential of LLMs in dataspaces with a concrete example. We\nalso derive a research agenda for exploring this emerging field.",
+ "authors": "Benedikt T. Arnold, Johannes Theissen-Lipp, Diego Collarana, Christoph Lange, Sandra Geisler, Edward Curry, Stefan Decker",
+ "published": "2024-03-18",
+ "updated": "2024-03-18",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.06500v1",
+ "title": "MetaAgents: Simulating Interactions of Human Behaviors for LLM-based Task-oriented Coordination via Collaborative Generative Agents",
+ "abstract": "Significant advancements have occurred in the application of Large Language\nModels (LLMs) for various tasks and social simulations. Despite this, their\ncapacities to coordinate within task-oriented social contexts are\nunder-explored. Such capabilities are crucial if LLMs are to effectively mimic\nhuman-like social behavior and produce meaningful results. To bridge this gap,\nwe introduce collaborative generative agents, endowing LLM-based Agents with\nconsistent behavior patterns and task-solving abilities. We situate these\nagents in a simulated job fair environment as a case study to scrutinize their\ncoordination skills. We propose a novel framework that equips collaborative\ngenerative agents with human-like reasoning abilities and specialized skills.\nOur evaluation demonstrates that these agents show promising performance.\nHowever, we also uncover limitations that hinder their effectiveness in more\ncomplex coordination tasks. Our work provides valuable insights into the role\nand evolution of LLMs in task-oriented social simulations.",
+ "authors": "Yuan Li, Yixuan Zhang, Lichao Sun",
+ "published": "2023-10-10",
+ "updated": "2023-10-10",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.18130v2",
+ "title": "DELPHI: Data for Evaluating LLMs' Performance in Handling Controversial Issues",
+ "abstract": "Controversy is a reflection of our zeitgeist, and an important aspect to any\ndiscourse. The rise of large language models (LLMs) as conversational systems\nhas increased public reliance on these systems for answers to their various\nquestions. Consequently, it is crucial to systematically examine how these\nmodels respond to questions that pertaining to ongoing debates. However, few\nsuch datasets exist in providing human-annotated labels reflecting the\ncontemporary discussions. To foster research in this area, we propose a novel\nconstruction of a controversial questions dataset, expanding upon the publicly\nreleased Quora Question Pairs Dataset. This dataset presents challenges\nconcerning knowledge recency, safety, fairness, and bias. We evaluate different\nLLMs using a subset of this dataset, illuminating how they handle controversial\nissues and the stances they adopt. This research ultimately contributes to our\nunderstanding of LLMs' interaction with controversial issues, paving the way\nfor improvements in their comprehension and handling of complex societal\ndebates.",
+ "authors": "David Q. Sun, Artem Abzaliev, Hadas Kotek, Zidi Xiu, Christopher Klein, Jason D. Williams",
+ "published": "2023-10-27",
+ "updated": "2023-11-07",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.HC"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2206.13757v1",
+ "title": "Flexible text generation for counterfactual fairness probing",
+ "abstract": "A common approach for testing fairness issues in text-based classifiers is\nthrough the use of counterfactuals: does the classifier output change if a\nsensitive attribute in the input is changed? Existing counterfactual generation\nmethods typically rely on wordlists or templates, producing simple\ncounterfactuals that don't take into account grammar, context, or subtle\nsensitive attribute references, and could miss issues that the wordlist\ncreators had not considered. In this paper, we introduce a task for generating\ncounterfactuals that overcomes these shortcomings, and demonstrate how large\nlanguage models (LLMs) can be leveraged to make progress on this task. We show\nthat this LLM-based method can produce complex counterfactuals that existing\nmethods cannot, comparing the performance of various counterfactual generation\nmethods on the Civil Comments dataset and showing their value in evaluating a\ntoxicity classifier.",
+ "authors": "Zee Fryer, Vera Axelrod, Ben Packer, Alex Beutel, Jilin Chen, Kellie Webster",
+ "published": "2022-06-28",
+ "updated": "2022-06-28",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.01964v1",
+ "title": "Don't Make Your LLM an Evaluation Benchmark Cheater",
+ "abstract": "Large language models~(LLMs) have greatly advanced the frontiers of\nartificial intelligence, attaining remarkable improvement in model capacity. To\nassess the model performance, a typical approach is to construct evaluation\nbenchmarks for measuring the ability level of LLMs in different aspects.\nDespite that a number of high-quality benchmarks have been released, the\nconcerns about the appropriate use of these benchmarks and the fair comparison\nof different models are increasingly growing. Considering these concerns, in\nthis paper, we discuss the potential risk and impact of inappropriately using\nevaluation benchmarks and misleadingly interpreting the evaluation results.\nSpecially, we focus on a special issue that would lead to inappropriate\nevaluation, \\ie \\emph{benchmark leakage}, referring that the data related to\nevaluation sets is occasionally used for model training. This phenomenon now\nbecomes more common since pre-training data is often prepared ahead of model\ntest. We conduct extensive experiments to study the effect of benchmark\nleverage, and find that it can dramatically boost the evaluation results, which\nwould finally lead to an unreliable assessment of model performance. To improve\nthe use of existing evaluation benchmarks, we finally present several\nguidelines for both LLM developers and benchmark maintainers. We hope this work\ncan draw attention to appropriate training and evaluation of LLMs.",
+ "authors": "Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han",
+ "published": "2023-11-03",
+ "updated": "2023-11-03",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.01937v1",
+ "title": "Can Large Language Models Be an Alternative to Human Evaluations?",
+ "abstract": "Human evaluation is indispensable and inevitable for assessing the quality of\ntexts generated by machine learning models or written by humans. However, human\nevaluation is very difficult to reproduce and its quality is notoriously\nunstable, hindering fair comparisons among different natural language\nprocessing (NLP) models and algorithms. Recently, large language models (LLMs)\nhave demonstrated exceptional performance on unseen tasks when only the task\ninstructions are provided. In this paper, we explore if such an ability of the\nLLMs can be used as an alternative to human evaluation. We present the LLMs\nwith the exact same instructions, samples to be evaluated, and questions used\nto conduct human evaluation, and then ask the LLMs to generate responses to\nthose questions; we dub this LLM evaluation. We use human evaluation and LLM\nevaluation to evaluate the texts in two NLP tasks: open-ended story generation\nand adversarial attacks. We show that the result of LLM evaluation is\nconsistent with the results obtained by expert human evaluation: the texts\nrated higher by human experts are also rated higher by the LLMs. We also find\nthat the results of LLM evaluation are stable over different formatting of the\ntask instructions and the sampling algorithm used to generate the answer. We\nare the first to show the potential of using LLMs to assess the quality of\ntexts and discuss the limitations and ethical considerations of LLM evaluation.",
+ "authors": "Cheng-Han Chiang, Hung-yi Lee",
+ "published": "2023-05-03",
+ "updated": "2023-05-03",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.HC"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.12736v1",
+ "title": "Large Language Model Supply Chain: A Research Agenda",
+ "abstract": "The rapid advancements in pre-trained Large Language Models (LLMs) and Large\nMultimodal Models (LMMs) have ushered in a new era of intelligent applications,\ntransforming fields ranging from natural language processing to content\ngeneration. The LLM supply chain represents a crucial aspect of the\ncontemporary artificial intelligence landscape. It encompasses the entire\nlifecycle of pre-trained models, from its initial development and training to\nits final deployment and application in various domains. This paper presents a\ncomprehensive overview of the LLM supply chain, highlighting its three core\nelements: 1) the model infrastructure, encompassing datasets and toolchain for\ntraining, optimization, and deployment; 2) the model lifecycle, covering\ntraining, testing, releasing, and ongoing maintenance; and 3) the downstream\napplication ecosystem, enabling the integration of pre-trained models into a\nwide range of intelligent applications. However, this rapidly evolving field\nfaces numerous challenges across these key components, including data privacy\nand security, model interpretability and fairness, infrastructure scalability,\nand regulatory compliance. Addressing these challenges is essential for\nharnessing the full potential of LLMs and ensuring their ethical and\nresponsible use. This paper provides a future research agenda for the LLM\nsupply chain, aiming at driving the continued advancement and responsible\ndeployment of these transformative LLMs.",
+ "authors": "Shenao Wang, Yanjie Zhao, Xinyi Hou, Haoyu Wang",
+ "published": "2024-04-19",
+ "updated": "2024-04-19",
+ "primary_cat": "cs.SE",
+ "cats": [
+ "cs.SE"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.02650v1",
+ "title": "Towards detecting unanticipated bias in Large Language Models",
+ "abstract": "Over the last year, Large Language Models (LLMs) like ChatGPT have become\nwidely available and have exhibited fairness issues similar to those in\nprevious machine learning systems. Current research is primarily focused on\nanalyzing and quantifying these biases in training data and their impact on the\ndecisions of these models, alongside developing mitigation strategies. This\nresearch largely targets well-known biases related to gender, race, ethnicity,\nand language. However, it is clear that LLMs are also affected by other, less\nobvious implicit biases. The complex and often opaque nature of these models\nmakes detecting such biases challenging, yet this is crucial due to their\npotential negative impact in various applications. In this paper, we explore\nnew avenues for detecting these unanticipated biases in LLMs, focusing\nspecifically on Uncertainty Quantification and Explainable AI methods. These\napproaches aim to assess the certainty of model decisions and to make the\ninternal decision-making processes of LLMs more transparent, thereby\nidentifying and understanding biases that are not immediately apparent. Through\nthis research, we aim to contribute to the development of fairer and more\ntransparent AI systems.",
+ "authors": "Anna Kruspe",
+ "published": "2024-04-03",
+ "updated": "2024-04-03",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.AI",
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.19465v1",
+ "title": "Towards Tracing Trustworthiness Dynamics: Revisiting Pre-training Period of Large Language Models",
+ "abstract": "Ensuring the trustworthiness of large language models (LLMs) is crucial. Most\nstudies concentrate on fully pre-trained LLMs to better understand and improve\nLLMs' trustworthiness. In this paper, to reveal the untapped potential of\npre-training, we pioneer the exploration of LLMs' trustworthiness during this\nperiod, focusing on five key dimensions: reliability, privacy, toxicity,\nfairness, and robustness. To begin with, we apply linear probing to LLMs. The\nhigh probing accuracy suggests that \\textit{LLMs in early pre-training can\nalready distinguish concepts in each trustworthiness dimension}. Therefore, to\nfurther uncover the hidden possibilities of pre-training, we extract steering\nvectors from a LLM's pre-training checkpoints to enhance the LLM's\ntrustworthiness. Finally, inspired by~\\citet{choi2023understanding} that mutual\ninformation estimation is bounded by linear probing accuracy, we also probe\nLLMs with mutual information to investigate the dynamics of trustworthiness\nduring pre-training. We are the first to observe a similar two-phase\nphenomenon: fitting and compression~\\citep{shwartz2017opening}. This research\nprovides an initial exploration of trustworthiness modeling during LLM\npre-training, seeking to unveil new insights and spur further developments in\nthe field. We will make our code publicly accessible at\n\\url{https://github.com/ChnQ/TracingLLM}.",
+ "authors": "Chen Qian, Jie Zhang, Wei Yao, Dongrui Liu, Zhenfei Yin, Yu Qiao, Yong Liu, Jing Shao",
+ "published": "2024-02-29",
+ "updated": "2024-02-29",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2307.11761v1",
+ "title": "Fairness of ChatGPT and the Role Of Explainable-Guided Prompts",
+ "abstract": "Our research investigates the potential of Large-scale Language Models\n(LLMs), specifically OpenAI's GPT, in credit risk assessment-a binary\nclassification task. Our findings suggest that LLMs, when directed by\njudiciously designed prompts and supplemented with domain-specific knowledge,\ncan parallel the performance of traditional Machine Learning (ML) models.\nIntriguingly, they achieve this with significantly less data-40 times less,\nutilizing merely 20 data points compared to the ML's 800. LLMs particularly\nexcel in minimizing false positives and enhancing fairness, both being vital\naspects of risk analysis. While our results did not surpass those of classical\nML models, they underscore the potential of LLMs in analogous tasks, laying a\ngroundwork for future explorations into harnessing the capabilities of LLMs in\ndiverse ML tasks.",
+ "authors": "Yashar Deldjoo",
+ "published": "2023-07-14",
+ "updated": "2023-07-14",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.01349v1",
+ "title": "Fairness in Large Language Models: A Taxonomic Survey",
+ "abstract": "Large Language Models (LLMs) have demonstrated remarkable success across\nvarious domains. However, despite their promising performance in numerous\nreal-world applications, most of these algorithms lack fairness considerations.\nConsequently, they may lead to discriminatory outcomes against certain\ncommunities, particularly marginalized populations, prompting extensive study\nin fair LLMs. On the other hand, fairness in LLMs, in contrast to fairness in\ntraditional machine learning, entails exclusive backgrounds, taxonomies, and\nfulfillment techniques. To this end, this survey presents a comprehensive\noverview of recent advances in the existing literature concerning fair LLMs.\nSpecifically, a brief introduction to LLMs is provided, followed by an analysis\nof factors contributing to bias in LLMs. Additionally, the concept of fairness\nin LLMs is discussed categorically, summarizing metrics for evaluating bias in\nLLMs and existing algorithms for promoting fairness. Furthermore, resources for\nevaluating bias in LLMs, including toolkits and datasets, are summarized.\nFinally, existing research challenges and open questions are discussed.",
+ "authors": "Zhibo Chu, Zichong Wang, Wenbin Zhang",
+ "published": "2024-03-31",
+ "updated": "2024-03-31",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.11764v1",
+ "title": "ChatGPT Based Data Augmentation for Improved Parameter-Efficient Debiasing of LLMs",
+ "abstract": "Large Language models (LLMs), while powerful, exhibit harmful social biases.\nDebiasing is often challenging due to computational costs, data constraints,\nand potential degradation of multi-task language capabilities. This work\nintroduces a novel approach utilizing ChatGPT to generate synthetic training\ndata, aiming to enhance the debiasing of LLMs. We propose two strategies:\nTargeted Prompting, which provides effective debiasing for known biases but\nnecessitates prior specification of bias in question; and General Prompting,\nwhich, while slightly less effective, offers debiasing across various\ncategories. We leverage resource-efficient LLM debiasing using adapter tuning\nand compare the effectiveness of our synthetic data to existing debiasing\ndatasets. Our results reveal that: (1) ChatGPT can efficiently produce\nhigh-quality training data for debiasing other LLMs; (2) data produced via our\napproach surpasses existing datasets in debiasing performance while also\npreserving internal knowledge of a pre-trained LLM; and (3) synthetic data\nexhibits generalizability across categories, effectively mitigating various\nbiases, including intersectional ones. These findings underscore the potential\nof synthetic data in advancing the fairness of LLMs with minimal retraining\ncost.",
+ "authors": "Pengrui Han, Rafal Kocielnik, Adhithya Saravanan, Roy Jiang, Or Sharir, Anima Anandkumar",
+ "published": "2024-02-19",
+ "updated": "2024-02-19",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.CY",
+ "68T50",
+ "I.2.7; K.4.1"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.10199v3",
+ "title": "CULTURE-GEN: Revealing Global Cultural Perception in Language Models through Natural Language Prompting",
+ "abstract": "As the utilization of large language models (LLMs) has proliferated\nworldwide, it is crucial for them to have adequate knowledge and fair\nrepresentation for diverse global cultures. In this work, we uncover culture\nperceptions of three SOTA models on 110 countries and regions on 8\nculture-related topics through culture-conditioned generations, and extract\nsymbols from these generations that are associated to each culture by the LLM.\nWe discover that culture-conditioned generation consist of linguistic \"markers\"\nthat distinguish marginalized cultures apart from default cultures. We also\ndiscover that LLMs have an uneven degree of diversity in the culture symbols,\nand that cultures from different geographic regions have different presence in\nLLMs' culture-agnostic generation. Our findings promote further research in\nstudying the knowledge and fairness of global culture perception in LLMs. Code\nand Data can be found in: https://github.com/huihanlhh/Culture-Gen/",
+ "authors": "Huihan Li, Liwei Jiang, Jena D. Huang, Hyunwoo Kim, Sebastin Santy, Taylor Sorensen, Bill Yuchen Lin, Nouha Dziri, Xiang Ren, Yejin Choi",
+ "published": "2024-04-16",
+ "updated": "2024-04-26",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.04892v2",
+ "title": "Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs",
+ "abstract": "Recent works have showcased the ability of LLMs to embody diverse personas in\ntheir responses, exemplified by prompts like 'You are Yoda. Explain the Theory\nof Relativity.' While this ability allows personalization of LLMs and enables\nhuman behavior simulation, its effect on LLMs' capabilities remains unclear. To\nfill this gap, we present the first extensive study of the unintended\nside-effects of persona assignment on the ability of LLMs to perform basic\nreasoning tasks. Our study covers 24 reasoning datasets, 4 LLMs, and 19 diverse\npersonas (e.g. an Asian person) spanning 5 socio-demographic groups. Our\nexperiments unveil that LLMs harbor deep rooted bias against various\nsocio-demographics underneath a veneer of fairness. While they overtly reject\nstereotypes when explicitly asked ('Are Black people less skilled at\nmathematics?'), they manifest stereotypical and erroneous presumptions when\nasked to answer questions while adopting a persona. These can be observed as\nabstentions in responses, e.g., 'As a Black person, I can't answer this\nquestion as it requires math knowledge', and generally result in a substantial\nperformance drop. Our experiments with ChatGPT-3.5 show that this bias is\nubiquitous - 80% of our personas demonstrate bias; it is significant - some\ndatasets show performance drops of 70%+; and can be especially harmful for\ncertain groups - some personas suffer statistically significant drops on 80%+\nof the datasets. Overall, all 4 LLMs exhibit this bias to varying extents, with\nGPT-4-Turbo showing the least but still a problematic amount of bias (evident\nin 42% of the personas). Further analysis shows that these persona-induced\nerrors can be hard-to-discern and hard-to-avoid. Our findings serve as a\ncautionary tale that the practice of assigning personas to LLMs - a trend on\nthe rise - can surface their deep-rooted biases and have unforeseeable and\ndetrimental side-effects.",
+ "authors": "Shashank Gupta, Vaishnavi Shrivastava, Ameet Deshpande, Ashwin Kalyan, Peter Clark, Ashish Sabharwal, Tushar Khot",
+ "published": "2023-11-08",
+ "updated": "2024-01-27",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.16343v2",
+ "title": "Evaluating, Understanding, and Improving Constrained Text Generation for Large Language Models",
+ "abstract": "Advancements in natural language generation (NLG) and large language models\n(LLMs) have led to proficient text generation in various tasks. However,\nintegrating intricate constraints into neural text generation, due to LLMs'\nopacity, remains challenging. This study investigates constrained text\ngeneration for LLMs, where predefined constraints are applied during LLM's\ngeneration process. Our research mainly focuses on mainstream open-source LLMs,\ncategorizing constraints into lexical, structural, and relation-based types. We\nalso present various benchmarks to facilitate fair evaluation. The study\naddresses some key research questions, including evaluating, understanding and\nimproving constrained text generation for LLMs. Results illuminate LLMs'\ncapacity and deficiency to incorporate constraints and provide insights for\nfuture developments in constrained text generation. Codes and datasets will be\nreleased upon acceptance.",
+ "authors": "Xiang Chen, Xiaojun Wan",
+ "published": "2023-10-25",
+ "updated": "2024-03-21",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2401.01262v2",
+ "title": "Fairness Certification for Natural Language Processing and Large Language Models",
+ "abstract": "Natural Language Processing (NLP) plays an important role in our daily lives,\nparticularly due to the enormous progress of Large Language Models (LLM).\nHowever, NLP has many fairness-critical use cases, e.g., as an expert system in\nrecruitment or as an LLM-based tutor in education. Since NLP is based on human\nlanguage, potentially harmful biases can diffuse into NLP systems and produce\nunfair results, discriminate against minorities or generate legal issues.\nHence, it is important to develop a fairness certification for NLP approaches.\nWe follow a qualitative research approach towards a fairness certification for\nNLP. In particular, we have reviewed a large body of literature on algorithmic\nfairness, and we have conducted semi-structured expert interviews with a wide\nrange of experts from that area. We have systematically devised six fairness\ncriteria for NLP, which can be further refined into 18 sub-categories. Our\ncriteria offer a foundation for operationalizing and testing processes to\ncertify fairness, both from the perspective of the auditor and the audited\norganization.",
+ "authors": "Vincent Freiberger, Erik Buchmann",
+ "published": "2024-01-02",
+ "updated": "2024-01-03",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.CY",
+ "cs.LG",
+ "68T50",
+ "I.2.7"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2304.03728v1",
+ "title": "Interpretable Unified Language Checking",
+ "abstract": "Despite recent concerns about undesirable behaviors generated by large\nlanguage models (LLMs), including non-factual, biased, and hateful language, we\nfind LLMs are inherent multi-task language checkers based on their latent\nrepresentations of natural and social knowledge. We present an interpretable,\nunified, language checking (UniLC) method for both human and machine-generated\nlanguage that aims to check if language input is factual and fair. While\nfairness and fact-checking tasks have been handled separately with dedicated\nmodels, we find that LLMs can achieve high performance on a combination of\nfact-checking, stereotype detection, and hate speech detection tasks with a\nsimple, few-shot, unified set of prompts. With the ``1/2-shot'' multi-task\nlanguage checking method proposed in this work, the GPT3.5-turbo model\noutperforms fully supervised baselines on several language tasks. The simple\napproach and results suggest that based on strong latent knowledge\nrepresentations, an LLM can be an adaptive and explainable tool for detecting\nmisinformation, stereotypes, and hate speech.",
+ "authors": "Tianhua Zhang, Hongyin Luo, Yung-Sung Chuang, Wei Fang, Luc Gaitskell, Thomas Hartvigsen, Xixin Wu, Danny Fox, Helen Meng, James Glass",
+ "published": "2023-04-07",
+ "updated": "2023-04-07",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2309.03852v2",
+ "title": "FLM-101B: An Open LLM and How to Train It with $100K Budget",
+ "abstract": "Large language models (LLMs) have achieved remarkable success in NLP and\nmultimodal tasks, among others. Despite these successes, two main challenges\nremain in developing LLMs: (i) high computational cost, and (ii) fair and\nobjective evaluations. In this paper, we report a solution to significantly\nreduce LLM training cost through a growth strategy. We demonstrate that a\n101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US\ndollars. Inspired by IQ tests, we also consolidate an additional range of\nevaluations on top of existing evaluations that focus on knowledge-oriented\nabilities. These IQ evaluations include symbolic mapping, rule understanding,\npattern mining, and anti-interference. Such evaluations minimize the potential\nimpact of memorization. Experimental results show that our model, named\nFLM-101B, trained with a budget of 100K US dollars, achieves performance\ncomparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,\nespecially on the additional range of IQ evaluations. The checkpoint of\nFLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.",
+ "authors": "Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang",
+ "published": "2023-09-07",
+ "updated": "2023-09-17",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2308.11483v1",
+ "title": "Large Language Models Sensitivity to The Order of Options in Multiple-Choice Questions",
+ "abstract": "Large Language Models (LLMs) have demonstrated remarkable capabilities in\nvarious NLP tasks. However, previous works have shown these models are\nsensitive towards prompt wording, and few-shot demonstrations and their order,\nposing challenges to fair assessment of these models. As these models become\nmore powerful, it becomes imperative to understand and address these\nlimitations. In this paper, we focus on LLMs robustness on the task of\nmultiple-choice questions -- commonly adopted task to study reasoning and\nfact-retrieving capability of LLMs. Investigating the sensitivity of LLMs\ntowards the order of options in multiple-choice questions, we demonstrate a\nconsiderable performance gap of approximately 13% to 75% in LLMs on different\nbenchmarks, when answer options are reordered, even when using demonstrations\nin a few-shot setting. Through a detailed analysis, we conjecture that this\nsensitivity arises when LLMs are uncertain about the prediction between the\ntop-2/3 choices, and specific options placements may favor certain prediction\nbetween those top choices depending on the question caused by positional bias.\nWe also identify patterns in top-2 choices that amplify or mitigate the model's\nbias toward option placement. We found that for amplifying bias, the optimal\nstrategy involves positioning the top two choices as the first and last\noptions. Conversely, to mitigate bias, we recommend placing these choices among\nthe adjacent options. To validate our conjecture, we conduct various\nexperiments and adopt two approaches to calibrate LLMs' predictions, leading to\nup to 8 percentage points improvement across different models and benchmarks.",
+ "authors": "Pouya Pezeshkpour, Estevam Hruschka",
+ "published": "2023-08-22",
+ "updated": "2023-08-22",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2307.15997v1",
+ "title": "RoCar: A Relationship Network-based Evaluation Method to Large Language Models",
+ "abstract": "Large language models (LLMs) have received increasing attention. However, due\nto the complexity of its capabilities, how to rationally evaluate the\ncapabilities of LLMs is still a task to be solved. We propose the RoCar method,\nwhich utilizes the defined basic schemas to randomly construct a task graph and\ngenerates natural language evaluation tasks based on the task graph to evaluate\nthe reasoning and memory abilities of LLMs respectively. Due to the very large\nrandomness of the task construction process, it is possible to ensure that none\nof the LLMs to be tested has directly learned the evaluation tasks,\nguaranteeing the fairness of the evaluation method.",
+ "authors": "Ming Wang, Wenfang Wu, Chongyun Gao, Daling Wang, Shi Feng, Yifei Zhang",
+ "published": "2023-07-29",
+ "updated": "2023-07-29",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.12090v1",
+ "title": "UP5: Unbiased Foundation Model for Fairness-aware Recommendation",
+ "abstract": "Recent advancements in foundation models such as large language models (LLM)\nhave propelled them to the forefront of recommender systems (RS). Moreover,\nfairness in RS is critical since many users apply it for decision-making and\ndemand fulfillment. However, at present, there is a lack of understanding\nregarding the level of fairness exhibited by recommendation foundation models\nand the appropriate methods for equitably treating different groups of users in\nfoundation models. In this paper, we focus on user-side unfairness problem and\nshow through a thorough examination that there is unfairness involved in LLMs\nthat lead to unfair recommendation results. To eliminate bias from LLM for\nfairness-aware recommendation, we introduce a novel Unbiased P5 (UP5)\nfoundation model based on Counterfactually-Fair-Prompting (CFP) techniques. CFP\nincludes two sub-modules: a personalized prefix prompt that enhances fairness\nwith respect to individual sensitive attributes, and a Prompt Mixture that\nintegrates multiple counterfactually-fair prompts for a set of sensitive\nattributes. Experiments are conducted on two real-world datasets, MovieLens-1M\nand Insurance, and results are compared with both matching-based and\nsequential-based fairness-aware recommendation models. The results show that\nUP5 achieves better recommendation performance and meanwhile exhibits a high\nlevel of fairness.",
+ "authors": "Wenyue Hua, Yingqiang Ge, Shuyuan Xu, Jianchao Ji, Yongfeng Zhang",
+ "published": "2023-05-20",
+ "updated": "2023-05-20",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR",
+ "cs.AI",
+ "cs.CL",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.19118v1",
+ "title": "Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate",
+ "abstract": "Modern large language models (LLMs) like ChatGPT have shown remarkable\nperformance on general language tasks but still struggle on complex reasoning\ntasks, which drives the research on cognitive behaviors of LLMs to explore\nhuman-like problem-solving strategies. Along this direction, one representative\nstrategy is self-reflection, which asks an LLM to refine the solution with the\nfeedback generated by itself iteratively. However, our study shows that such\nreflection-style methods suffer from the Degeneration-of-Thought (DoT) problem:\nonce the LLM has established confidence in its solutions, it is unable to\ngenerate novel thoughts later through reflection even if its initial stance is\nincorrect. To address the DoT problem, we propose a Multi-Agent Debate (MAD)\nframework, in which multiple agents express their arguments in the state of\n\"tit for tat\" and a judge manages the debate process to obtain a final\nsolution. Clearly, our MAD framework encourages divergent thinking in LLMs\nwhich would be helpful for tasks that require deep levels of contemplation.\nExperiment results on two challenging datasets, commonsense machine translation\nand counter-intuitive arithmetic reasoning, demonstrate the effectiveness of\nour MAD framework. Extensive analyses suggest that the adaptive break of debate\nand the modest level of \"tit for tat\" state are required for MAD to obtain good\nperformance. Moreover, we find that LLMs might not be a fair judge if different\nLLMs are used for agents. Codes:\nhttps://github.com/Skytliang/Multi-Agents-Debate",
+ "authors": "Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, Shuming Shi",
+ "published": "2023-05-30",
+ "updated": "2023-05-30",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.02680v1",
+ "title": "Large Language Models are Geographically Biased",
+ "abstract": "Large Language Models (LLMs) inherently carry the biases contained in their\ntraining corpora, which can lead to the perpetuation of societal harm. As the\nimpact of these foundation models grows, understanding and evaluating their\nbiases becomes crucial to achieving fairness and accuracy. We propose to study\nwhat LLMs know about the world we live in through the lens of geography. This\napproach is particularly powerful as there is ground truth for the numerous\naspects of human life that are meaningfully projected onto geographic space\nsuch as culture, race, language, politics, and religion. We show various\nproblematic geographic biases, which we define as systemic errors in geospatial\npredictions. Initially, we demonstrate that LLMs are capable of making accurate\nzero-shot geospatial predictions in the form of ratings that show strong\nmonotonic correlation with ground truth (Spearman's $\\rho$ of up to 0.89). We\nthen show that LLMs exhibit common biases across a range of objective and\nsubjective topics. In particular, LLMs are clearly biased against locations\nwith lower socioeconomic conditions (e.g. most of Africa) on a variety of\nsensitive subjective topics such as attractiveness, morality, and intelligence\n(Spearman's $\\rho$ of up to 0.70). Finally, we introduce a bias score to\nquantify this and find that there is significant variation in the magnitude of\nbias across existing LLMs.",
+ "authors": "Rohin Manvi, Samar Khanna, Marshall Burke, David Lobell, Stefano Ermon",
+ "published": "2024-02-05",
+ "updated": "2024-02-05",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.CY",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.13095v1",
+ "title": "Enhancing Logical Reasoning in Large Language Models to Facilitate Legal Applications",
+ "abstract": "Language serves as a vehicle for conveying thought, enabling communication\namong individuals. The ability to distinguish between diverse concepts,\nidentify fairness and injustice, and comprehend a range of legal notions\nfundamentally relies on logical reasoning. Large Language Models (LLMs) attempt\nto emulate human language understanding and generation, but their competency in\nlogical reasoning remains limited. This paper seeks to address the\nphilosophical question: How can we effectively teach logical reasoning to LLMs\nwhile maintaining a deep understanding of the intricate relationship between\nlanguage and logic? By focusing on bolstering LLMs' capabilities in logical\nreasoning, we aim to expand their applicability in law and other\nlogic-intensive disciplines. To this end, we propose a Reinforcement Learning\nfrom Logical Feedback (RLLF) approach, which serves as a potential framework\nfor refining LLMs' reasoning capacities. Through RLLF and a revised evaluation\nmethodology, we explore new avenues for research in this domain and contribute\nto the development of LLMs capable of handling complex legal reasoning tasks\nwhile acknowledging the fundamental connection between language and logic.",
+ "authors": "Ha-Thanh Nguyen, Wachara Fungwacharakorn, Ken Satoh",
+ "published": "2023-11-22",
+ "updated": "2023-11-22",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.08189v1",
+ "title": "Simulating Human Strategic Behavior: Comparing Single and Multi-agent LLMs",
+ "abstract": "When creating plans, policies, or applications for people, it is challenging\nfor designers to think through the strategic ways that different people will\nbehave. Recently, Large Language Models (LLMs) have been shown to create\nrealistic simulations of human-like behavior based on personas. We build on\nthis to investigate whether LLMs can simulate human strategic behavior. Human\nstrategies are complex because they take into account social norms in addition\nto aiming to maximize personal gain. The ultimatum game is a classic economics\nexperiment used to understand human strategic behavior in a social setting. It\nshows that people will often choose to \"punish\" other players to enforce social\nnorms rather than to maximize personal profits. We test whether LLMs can\nreplicate this complex behavior in simulations. We compare two architectures:\nsingle- and multi-agent LLMs. We compare their abilities to (1) simulate\nhuman-like actions in the ultimatum game, (2) simulate two player\npersonalities, greedy and fair, and (3) create robust strategies that are\nlogically complete and consistent with personality. Our evaluation shows the\nmulti-agent architecture is much more accurate than single LLMs (88% vs. 50%)\nin simulating human strategy creation and actions for personality pairs. Thus\nthere is potential to use LLMs to simulate human strategic behavior to help\ndesigners, planners, and policymakers perform preliminary exploration of how\npeople behave in systems.",
+ "authors": "Karthik Sreedhar, Lydia Chilton",
+ "published": "2024-02-13",
+ "updated": "2024-02-13",
+ "primary_cat": "cs.HC",
+ "cats": [
+ "cs.HC"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.04205v2",
+ "title": "Rephrase and Respond: Let Large Language Models Ask Better Questions for Themselves",
+ "abstract": "Misunderstandings arise not only in interpersonal communication but also\nbetween humans and Large Language Models (LLMs). Such discrepancies can make\nLLMs interpret seemingly unambiguous questions in unexpected ways, yielding\nincorrect responses. While it is widely acknowledged that the quality of a\nprompt, such as a question, significantly impacts the quality of the response\nprovided by LLMs, a systematic method for crafting questions that LLMs can\nbetter comprehend is still underdeveloped. In this paper, we present a method\nnamed `Rephrase and Respond' (RaR), which allows LLMs to rephrase and expand\nquestions posed by humans and provide responses in a single prompt. This\napproach serves as a simple yet effective prompting method for improving\nperformance. We also introduce a two-step variant of RaR, where a rephrasing\nLLM first rephrases the question and then passes the original and rephrased\nquestions together to a different responding LLM. This facilitates the\neffective utilization of rephrased questions generated by one LLM with another.\nOur experiments demonstrate that our methods significantly improve the\nperformance of different models across a wide range to tasks. We further\nprovide a comprehensive comparison between RaR and the popular Chain-of-Thought\n(CoT) methods, both theoretically and empirically. We show that RaR is\ncomplementary to CoT and can be combined with CoT to achieve even better\nperformance. Our work not only contributes to enhancing LLM performance\nefficiently and effectively but also sheds light on a fair evaluation of LLM\ncapabilities. Data and codes are available at\nhttps://github.com/uclaml/Rephrase-and-Respond.",
+ "authors": "Yihe Deng, Weitong Zhang, Zixiang Chen, Quanquan Gu",
+ "published": "2023-11-07",
+ "updated": "2024-04-18",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.09447v2",
+ "title": "How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities",
+ "abstract": "The rapid progress in open-source Large Language Models (LLMs) is\nsignificantly driving AI development forward. However, there is still a limited\nunderstanding of their trustworthiness. Deploying these models at scale without\nsufficient trustworthiness can pose significant risks, highlighting the need to\nuncover these issues promptly. In this work, we conduct an adversarial\nassessment of open-source LLMs on trustworthiness, scrutinizing them across\neight different aspects including toxicity, stereotypes, ethics, hallucination,\nfairness, sycophancy, privacy, and robustness against adversarial\ndemonstrations. We propose advCoU, an extended Chain of Utterances-based (CoU)\nprompting strategy by incorporating carefully crafted malicious demonstrations\nfor trustworthiness attack. Our extensive experiments encompass recent and\nrepresentative series of open-source LLMs, including Vicuna, MPT, Falcon,\nMistral, and Llama 2. The empirical outcomes underscore the efficacy of our\nattack strategy across diverse aspects. More interestingly, our result analysis\nreveals that models with superior performance in general NLP tasks do not\nalways have greater trustworthiness; in fact, larger models can be more\nvulnerable to attacks. Additionally, models that have undergone instruction\ntuning, focusing on instruction following, tend to be more susceptible,\nalthough fine-tuning LLMs for safety alignment proves effective in mitigating\nadversarial trustworthiness attacks.",
+ "authors": "Lingbo Mo, Boshi Wang, Muhao Chen, Huan Sun",
+ "published": "2023-11-15",
+ "updated": "2024-04-02",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.13343v1",
+ "title": "Challenges and Contributing Factors in the Utilization of Large Language Models (LLMs)",
+ "abstract": "With the development of large language models (LLMs) like the GPT series,\ntheir widespread use across various application scenarios presents a myriad of\nchallenges. This review initially explores the issue of domain specificity,\nwhere LLMs may struggle to provide precise answers to specialized questions\nwithin niche fields. The problem of knowledge forgetting arises as these LLMs\nmight find it hard to balance old and new information. The knowledge repetition\nphenomenon reveals that sometimes LLMs might deliver overly mechanized\nresponses, lacking depth and originality. Furthermore, knowledge illusion\ndescribes situations where LLMs might provide answers that seem insightful but\nare actually superficial, while knowledge toxicity focuses on harmful or biased\ninformation outputs. These challenges underscore problems in the training data\nand algorithmic design of LLMs. To address these issues, it's suggested to\ndiversify training data, fine-tune models, enhance transparency and\ninterpretability, and incorporate ethics and fairness training. Future\ntechnological trends might lean towards iterative methodologies, multimodal\nlearning, model personalization and customization, and real-time learning and\nfeedback mechanisms. In conclusion, future LLMs should prioritize fairness,\ntransparency, and ethics, ensuring they uphold high moral and ethical standards\nwhen serving humanity.",
+ "authors": "Xiaoliang Chen, Liangbin Li, Le Chang, Yunhe Huang, Yuxuan Zhao, Yuxiao Zhang, Dinuo Li",
+ "published": "2023-10-20",
+ "updated": "2023-10-20",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.06852v2",
+ "title": "ChemLLM: A Chemical Large Language Model",
+ "abstract": "Large language models (LLMs) have made impressive progress in chemistry\napplications. However, the community lacks an LLM specifically designed for\nchemistry. The main challenges are two-fold: firstly, most chemical data and\nscientific knowledge are stored in structured databases, which limits the\nmodel's ability to sustain coherent dialogue when used directly. Secondly,\nthere is an absence of objective and fair benchmark that encompass most\nchemistry tasks. Here, we introduce ChemLLM, a comprehensive framework that\nfeatures the first LLM dedicated to chemistry. It also includes ChemData, a\ndataset specifically designed for instruction tuning, and ChemBench, a robust\nbenchmark covering nine essential chemistry tasks. ChemLLM is adept at\nperforming various tasks across chemical disciplines with fluid dialogue\ninteraction. Notably, ChemLLM achieves results comparable to GPT-4 on the core\nchemical tasks and demonstrates competitive performance with LLMs of similar\nsize in general scenarios. ChemLLM paves a new path for exploration in chemical\nstudies, and our method of incorporating structured chemical knowledge into\ndialogue systems sets a new standard for developing LLMs in various scientific\nfields. Codes, Datasets, and Model weights are publicly accessible at\nhttps://hf.co/AI4Chem",
+ "authors": "Di Zhang, Wei Liu, Qian Tan, Jingdan Chen, Hang Yan, Yuliang Yan, Jiatong Li, Weiran Huang, Xiangyu Yue, Wanli Ouyang, Dongzhan Zhou, Shufei Zhang, Mao Su, Han-Sen Zhong, Yuqiang Li",
+ "published": "2024-02-10",
+ "updated": "2024-04-25",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI",
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.08656v1",
+ "title": "Linear Cross-document Event Coreference Resolution with X-AMR",
+ "abstract": "Event Coreference Resolution (ECR) as a pairwise mention classification task\nis expensive both for automated systems and manual annotations. The task's\nquadratic difficulty is exacerbated when using Large Language Models (LLMs),\nmaking prompt engineering for ECR prohibitively costly. In this work, we\npropose a graphical representation of events, X-AMR, anchored around individual\nmentions using a \\textbf{cross}-document version of \\textbf{A}bstract\n\\textbf{M}eaning \\textbf{R}epresentation. We then linearize the ECR with a\nnovel multi-hop coreference algorithm over the event graphs. The event graphs\nsimplify ECR, making it a) LLM cost-effective, b) compositional and\ninterpretable, and c) easily annotated. For a fair assessment, we first enrich\nan existing ECR benchmark dataset with these event graphs using an\nannotator-friendly tool we introduce. Then, we employ GPT-4, the newest LLM by\nOpenAI, for these annotations. Finally, using the ECR algorithm, we assess\nGPT-4 against humans and analyze its limitations. Through this research, we aim\nto advance the state-of-the-art for efficient ECR and shed light on the\npotential shortcomings of current LLMs at this task. Code and annotations:\n\\url{https://github.com/ahmeshaf/gpt_coref}",
+ "authors": "Shafiuddin Rehan Ahmed, George Arthur Baker, Evi Judge, Michael Regan, Kristin Wright-Bettner, Martha Palmer, James H. Martin",
+ "published": "2024-03-25",
+ "updated": "2024-03-25",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.00306v1",
+ "title": "Probing Explicit and Implicit Gender Bias through LLM Conditional Text Generation",
+ "abstract": "Large Language Models (LLMs) can generate biased and toxic responses. Yet\nmost prior work on LLM gender bias evaluation requires predefined\ngender-related phrases or gender stereotypes, which are challenging to be\ncomprehensively collected and are limited to explicit bias evaluation. In\naddition, we believe that instances devoid of gender-related language or\nexplicit stereotypes in inputs can still induce gender bias in LLMs. Thus, in\nthis work, we propose a conditional text generation mechanism without the need\nfor predefined gender phrases and stereotypes. This approach employs three\ntypes of inputs generated through three distinct strategies to probe LLMs,\naiming to show evidence of explicit and implicit gender biases in LLMs. We also\nutilize explicit and implicit evaluation metrics to evaluate gender bias in\nLLMs under different strategies. Our experiments demonstrate that an increased\nmodel size does not consistently lead to enhanced fairness and all tested LLMs\nexhibit explicit and/or implicit gender bias, even when explicit gender\nstereotypes are absent in the inputs.",
+ "authors": "Xiangjue Dong, Yibo Wang, Philip S. Yu, James Caverlee",
+ "published": "2023-11-01",
+ "updated": "2023-11-01",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.14208v2",
+ "title": "Content Conditional Debiasing for Fair Text Embedding",
+ "abstract": "Mitigating biases in machine learning models has gained increasing attention\nin Natural Language Processing (NLP). Yet, only a few studies focus on fair\ntext embeddings, which are crucial yet challenging for real-world applications.\nIn this paper, we propose a novel method for learning fair text embeddings. We\nachieve fairness while maintaining utility trade-off by ensuring conditional\nindependence between sensitive attributes and text embeddings conditioned on\nthe content. Specifically, we enforce that embeddings of texts with different\nsensitive attributes but identical content maintain the same distance toward\nthe embedding of their corresponding neutral text. Furthermore, we address the\nissue of lacking proper training data by using Large Language Models (LLMs) to\naugment texts into different sensitive groups. Our extensive evaluations\ndemonstrate that our approach effectively improves fairness while preserving\nthe utility of embeddings, representing a pioneering effort in achieving\nconditional independence for fair text embeddings.",
+ "authors": "Wenlong Deng, Blair Chen, Xiaoxiao Li, Christos Thrampoulidis",
+ "published": "2024-02-22",
+ "updated": "2024-02-23",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.CY",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.14473v1",
+ "title": "The Ethics of ChatGPT in Medicine and Healthcare: A Systematic Review on Large Language Models (LLMs)",
+ "abstract": "With the introduction of ChatGPT, Large Language Models (LLMs) have received\nenormous attention in healthcare. Despite their potential benefits, researchers\nhave underscored various ethical implications. While individual instances have\ndrawn much attention, the debate lacks a systematic overview of practical\napplications currently researched and ethical issues connected to them. Against\nthis background, this work aims to map the ethical landscape surrounding the\ncurrent stage of deployment of LLMs in medicine and healthcare. Electronic\ndatabases and preprint servers were queried using a comprehensive search\nstrategy. Studies were screened and extracted following a modified rapid review\napproach. Methodological quality was assessed using a hybrid approach. For 53\nrecords, a meta-aggregative synthesis was performed. Four fields of\napplications emerged and testify to a vivid exploration phase. Advantages of\nusing LLMs are attributed to their capacity in data analysis, personalized\ninformation provisioning, support in decision-making, mitigating information\nloss and enhancing information accessibility. However, we also identifies\nrecurrent ethical concerns connected to fairness, bias, non-maleficence,\ntransparency, and privacy. A distinctive concern is the tendency to produce\nharmful misinformation or convincingly but inaccurate content. A recurrent plea\nfor ethical guidance and human oversight is evident. Given the variety of use\ncases, it is suggested that the ethical guidance debate be reframed to focus on\ndefining what constitutes acceptable human oversight across the spectrum of\napplications. This involves considering diverse settings, varying potentials\nfor harm, and different acceptable thresholds for performance and certainty in\nhealthcare. In addition, a critical inquiry is necessary to determine the\nextent to which the current experimental use of LLMs is necessary and\njustified.",
+ "authors": "Joschka Haltaufderheide, Robert Ranisch",
+ "published": "2024-03-21",
+ "updated": "2024-03-21",
+ "primary_cat": "cs.CY",
+ "cats": [
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.02294v1",
+ "title": "LLMs grasp morality in concept",
+ "abstract": "Work in AI ethics and fairness has made much progress in regulating LLMs to\nreflect certain values, such as fairness, truth, and diversity. However, it has\ntaken the problem of how LLMs might 'mean' anything at all for granted. Without\naddressing this, it is not clear what imbuing LLMs with such values even means.\nIn response, we provide a general theory of meaning that extends beyond humans.\nWe use this theory to explicate the precise nature of LLMs as meaning-agents.\nWe suggest that the LLM, by virtue of its position as a meaning-agent, already\ngrasps the constructions of human society (e.g. morality, gender, and race) in\nconcept. Consequently, under certain ethical frameworks, currently popular\nmethods for model alignment are limited at best and counterproductive at worst.\nMoreover, unaligned models may help us better develop our moral and social\nphilosophy.",
+ "authors": "Mark Pock, Andre Ye, Jared Moore",
+ "published": "2023-11-04",
+ "updated": "2023-11-04",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.07981v1",
+ "title": "Manipulating Large Language Models to Increase Product Visibility",
+ "abstract": "Large language models (LLMs) are increasingly being integrated into search\nengines to provide natural language responses tailored to user queries.\nCustomers and end-users are also becoming more dependent on these models for\nquick and easy purchase decisions. In this work, we investigate whether\nrecommendations from LLMs can be manipulated to enhance a product's visibility.\nWe demonstrate that adding a strategic text sequence (STS) -- a carefully\ncrafted message -- to a product's information page can significantly increase\nits likelihood of being listed as the LLM's top recommendation. To understand\nthe impact of STS, we use a catalog of fictitious coffee machines and analyze\nits effect on two target products: one that seldom appears in the LLM's\nrecommendations and another that usually ranks second. We observe that the\nstrategic text sequence significantly enhances the visibility of both products\nby increasing their chances of appearing as the top recommendation. This\nability to manipulate LLM-generated search responses provides vendors with a\nconsiderable competitive advantage and has the potential to disrupt fair market\ncompetition. Just as search engine optimization (SEO) revolutionized how\nwebpages are customized to rank higher in search engine results, influencing\nLLM recommendations could profoundly impact content optimization for AI-driven\nsearch services. Code for our experiments is available at\nhttps://github.com/aounon/llm-rank-optimizer.",
+ "authors": "Aounon Kumar, Himabindu Lakkaraju",
+ "published": "2024-04-11",
+ "updated": "2024-04-11",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR",
+ "cs.AI",
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.08472v1",
+ "title": "Selecting Shots for Demographic Fairness in Few-Shot Learning with Large Language Models",
+ "abstract": "Recently, work in NLP has shifted to few-shot (in-context) learning, with\nlarge language models (LLMs) performing well across a range of tasks. However,\nwhile fairness evaluations have become a standard for supervised methods,\nlittle is known about the fairness of LLMs as prediction systems. Further,\ncommon standard methods for fairness involve access to models weights or are\napplied during finetuning, which are not applicable in few-shot learning. Do\nLLMs exhibit prediction biases when used for standard NLP tasks? In this work,\nwe explore the effect of shots, which directly affect the performance of\nmodels, on the fairness of LLMs as NLP classification systems. We consider how\ndifferent shot selection strategies, both existing and new demographically\nsensitive methods, affect model fairness across three standard fairness\ndatasets. We discuss how future work can include LLM fairness evaluations.",
+ "authors": "Carlos Aguirre, Kuleen Sasse, Isabel Cachola, Mark Dredze",
+ "published": "2023-11-14",
+ "updated": "2023-11-14",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.08780v1",
+ "title": "\"Im not Racist but...\": Discovering Bias in the Internal Knowledge of Large Language Models",
+ "abstract": "Large language models (LLMs) have garnered significant attention for their\nremarkable performance in a continuously expanding set of natural language\nprocessing tasks. However, these models have been shown to harbor inherent\nsocietal biases, or stereotypes, which can adversely affect their performance\nin their many downstream applications. In this paper, we introduce a novel,\npurely prompt-based approach to uncover hidden stereotypes within any arbitrary\nLLM. Our approach dynamically generates a knowledge representation of internal\nstereotypes, enabling the identification of biases encoded within the LLM's\ninternal knowledge. By illuminating the biases present in LLMs and offering a\nsystematic methodology for their analysis, our work contributes to advancing\ntransparency and promoting fairness in natural language processing systems.",
+ "authors": "Abel Salinas, Louis Penafiel, Robert McCormack, Fred Morstatter",
+ "published": "2023-10-13",
+ "updated": "2023-10-13",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2401.00625v2",
+ "title": "Beyond Efficiency: A Systematic Survey of Resource-Efficient Large Language Models",
+ "abstract": "The burgeoning field of Large Language Models (LLMs), exemplified by\nsophisticated models like OpenAI's ChatGPT, represents a significant\nadvancement in artificial intelligence. These models, however, bring forth\nsubstantial challenges in the high consumption of computational, memory,\nenergy, and financial resources, especially in environments with limited\nresource capabilities. This survey aims to systematically address these\nchallenges by reviewing a broad spectrum of techniques designed to enhance the\nresource efficiency of LLMs. We categorize methods based on their optimization\nfocus: computational, memory, energy, financial, and network resources and\ntheir applicability across various stages of an LLM's lifecycle, including\narchitecture design, pretraining, finetuning, and system design. Additionally,\nthe survey introduces a nuanced categorization of resource efficiency\ntechniques by their specific resource types, which uncovers the intricate\nrelationships and mappings between various resources and corresponding\noptimization techniques. A standardized set of evaluation metrics and datasets\nis also presented to facilitate consistent and fair comparisons across\ndifferent models and techniques. By offering a comprehensive overview of the\ncurrent sota and identifying open research avenues, this survey serves as a\nfoundational reference for researchers and practitioners, aiding them in\ndeveloping more sustainable and efficient LLMs in a rapidly evolving landscape.",
+ "authors": "Guangji Bai, Zheng Chai, Chen Ling, Shiyu Wang, Jiaying Lu, Nan Zhang, Tingwei Shi, Ziyang Yu, Mengdan Zhu, Yifei Zhang, Carl Yang, Yue Cheng, Liang Zhao",
+ "published": "2024-01-01",
+ "updated": "2024-01-04",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.13862v2",
+ "title": "A Trip Towards Fairness: Bias and De-Biasing in Large Language Models",
+ "abstract": "Cheap-to-Build Very Large-Language Models (CtB-LLMs) with affordable training\nare emerging as the next big revolution in natural language processing and\nunderstanding. These CtB-LLMs are democratizing access to trainable Very\nLarge-Language Models (VLLMs) and, thus, may represent the building blocks of\nmany NLP systems solving downstream tasks. Hence, a little or a large bias in\nCtB-LLMs may cause huge harm. In this paper, we performed a large investigation\nof the bias of three families of CtB-LLMs, and we showed that debiasing\ntechniques are effective and usable. Indeed, according to current tests, the\nLLaMA and the OPT families have an important bias in gender, race, religion,\nand profession. In contrast to the analysis for other LLMs, we discovered that\nbias depends not on the number of parameters but on the perplexity. Finally,\nthe debiasing of OPT using LoRA reduces bias up to 4.12 points in the\nnormalized stereotype score.",
+ "authors": "Leonardo Ranaldi, Elena Sofia Ruzzetti, Davide Venditti, Dario Onorati, Fabio Massimo Zanzotto",
+ "published": "2023-05-23",
+ "updated": "2023-08-29",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2312.14769v3",
+ "title": "Large Language Model (LLM) Bias Index -- LLMBI",
+ "abstract": "The Large Language Model Bias Index (LLMBI) is a pioneering approach designed\nto quantify and address biases inherent in large language models (LLMs), such\nas GPT-4. We recognise the increasing prevalence and impact of LLMs across\ndiverse sectors. This research introduces a novel metric, LLMBI, to\nsystematically measure and mitigate biases potentially skewing model responses.\nWe formulated LLMBI using a composite scoring system incorporating multiple\ndimensions of bias, including but not limited to age, gender, and racial\nbiases. To operationalise this metric, we engaged in a multi-step process\ninvolving collecting and annotating LLM responses, applying sophisticated\nNatural Language Processing (NLP) techniques for bias detection, and computing\nthe LLMBI score through a specially crafted mathematical formula. The formula\nintegrates weighted averages of various bias dimensions, a penalty for dataset\ndiversity deficiencies, and a correction for sentiment biases. Our empirical\nanalysis, conducted using responses from OpenAI's API, employs advanced\nsentiment analysis as a representative method for bias detection. The research\nreveals LLMs, whilst demonstrating impressive capabilities in text generation,\nexhibit varying degrees of bias across different dimensions. LLMBI provides a\nquantifiable measure to compare biases across models and over time, offering a\nvital tool for systems engineers, researchers and regulators in enhancing the\nfairness and reliability of LLMs. It highlights the potential of LLMs in\nmimicking unbiased human-like responses. Additionally, it underscores the\nnecessity of continuously monitoring and recalibrating such models to align\nwith evolving societal norms and ethical standards.",
+ "authors": "Abiodun Finbarrs Oketunji, Muhammad Anas, Deepthi Saina",
+ "published": "2023-12-22",
+ "updated": "2023-12-29",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.CY",
+ "cs.LG",
+ "I.2.7"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.03192v1",
+ "title": "Do Large Language Models Rank Fairly? An Empirical Study on the Fairness of LLMs as Rankers",
+ "abstract": "The integration of Large Language Models (LLMs) in information retrieval has\nraised a critical reevaluation of fairness in the text-ranking models. LLMs,\nsuch as GPT models and Llama2, have shown effectiveness in natural language\nunderstanding tasks, and prior works (e.g., RankGPT) have also demonstrated\nthat the LLMs exhibit better performance than the traditional ranking models in\nthe ranking task. However, their fairness remains largely unexplored. This\npaper presents an empirical study evaluating these LLMs using the TREC Fair\nRanking dataset, focusing on the representation of binary protected attributes\nsuch as gender and geographic location, which are historically underrepresented\nin search outcomes. Our analysis delves into how these LLMs handle queries and\ndocuments related to these attributes, aiming to uncover biases in their\nranking algorithms. We assess fairness from both user and content perspectives,\ncontributing an empirical benchmark for evaluating LLMs as the fair ranker.",
+ "authors": "Yuan Wang, Xuyang Wu, Hsin-Tai Wu, Zhiqiang Tao, Yi Fang",
+ "published": "2024-04-04",
+ "updated": "2024-04-04",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR",
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.18276v1",
+ "title": "Bias Neutralization Framework: Measuring Fairness in Large Language Models with Bias Intelligence Quotient (BiQ)",
+ "abstract": "The burgeoning influence of Large Language Models (LLMs) in shaping public\ndiscourse and decision-making underscores the imperative to address inherent\nbiases within these AI systems. In the wake of AI's expansive integration\nacross sectors, addressing racial bias in LLMs has never been more critical.\nThis paper introduces a novel framework called Comprehensive Bias\nNeutralization Framework (CBNF) which embodies an innovative approach to\nquantifying and mitigating biases within LLMs. Our framework combines the Large\nLanguage Model Bias Index (LLMBI) [Oketunji, A., Anas, M., Saina, D., (2023)]\nand Bias removaL with No Demographics (BLIND) [Orgad, H., Belinkov, Y. (2023)]\nmethodologies to create a new metric called Bias Intelligence Quotient\n(BiQ)which detects, measures, and mitigates racial bias in LLMs without\nreliance on demographic annotations.\n By introducing a new metric called BiQ that enhances LLMBI with additional\nfairness metrics, CBNF offers a multi-dimensional metric for bias assessment,\nunderscoring the necessity of a nuanced approach to fairness in AI [Mehrabi et\nal., 2021]. This paper presents a detailed analysis of Latimer AI (a language\nmodel incrementally trained on black history and culture) in comparison to\nChatGPT 3.5, illustrating Latimer AI's efficacy in detecting racial, cultural,\nand gender biases through targeted training and refined bias mitigation\nstrategies [Latimer & Bender, 2023].",
+ "authors": "Malur Narayan, John Pasmore, Elton Sampaio, Vijay Raghavan, Gabriella Waters",
+ "published": "2024-04-28",
+ "updated": "2024-04-28",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "D.1; I.2"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.17553v1",
+ "title": "RuBia: A Russian Language Bias Detection Dataset",
+ "abstract": "Warning: this work contains upsetting or disturbing content.\n Large language models (LLMs) tend to learn the social and cultural biases\npresent in the raw pre-training data. To test if an LLM's behavior is fair,\nfunctional datasets are employed, and due to their purpose, these datasets are\nhighly language and culture-specific. In this paper, we address a gap in the\nscope of multilingual bias evaluation by presenting a bias detection dataset\nspecifically designed for the Russian language, dubbed as RuBia. The RuBia\ndataset is divided into 4 domains: gender, nationality, socio-economic status,\nand diverse, each of the domains is further divided into multiple fine-grained\nsubdomains. Every example in the dataset consists of two sentences with the\nfirst reinforcing a potentially harmful stereotype or trope and the second\ncontradicting it. These sentence pairs were first written by volunteers and\nthen validated by native-speaking crowdsourcing workers. Overall, there are\nnearly 2,000 unique sentence pairs spread over 19 subdomains in RuBia. To\nillustrate the dataset's purpose, we conduct a diagnostic evaluation of\nstate-of-the-art or near-state-of-the-art LLMs and discuss the LLMs'\npredisposition to social biases.",
+ "authors": "Veronika Grigoreva, Anastasiia Ivanova, Ilseyar Alimova, Ekaterina Artemova",
+ "published": "2024-03-26",
+ "updated": "2024-03-26",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.07884v2",
+ "title": "Fair Abstractive Summarization of Diverse Perspectives",
+ "abstract": "People from different social and demographic groups express diverse\nperspectives and conflicting opinions on a broad set of topics such as product\nreviews, healthcare, law, and politics. A fair summary should provide a\ncomprehensive coverage of diverse perspectives without underrepresenting\ncertain groups. However, current work in summarization metrics and Large\nLanguage Models (LLMs) evaluation has not explored fair abstractive\nsummarization. In this paper, we systematically investigate fair abstractive\nsummarization for user-generated data. We first formally define fairness in\nabstractive summarization as not underrepresenting perspectives of any groups\nof people, and we propose four reference-free automatic metrics by measuring\nthe differences between target and source perspectives. We evaluate nine LLMs,\nincluding three GPT models, four LLaMA models, PaLM 2, and Claude, on six\ndatasets collected from social media, online reviews, and recorded transcripts.\nExperiments show that both the model-generated and the human-written reference\nsummaries suffer from low fairness. We conduct a comprehensive analysis of the\ncommon factors influencing fairness and propose three simple but effective\nmethods to alleviate unfair summarization. Our dataset and code are available\nat https://github.com/psunlpgroup/FairSumm.",
+ "authors": "Yusen Zhang, Nan Zhang, Yixin Liu, Alexander Fabbri, Junru Liu, Ryo Kamoi, Xiaoxin Lu, Caiming Xiong, Jieyu Zhao, Dragomir Radev, Kathleen McKeown, Rui Zhang",
+ "published": "2023-11-14",
+ "updated": "2024-03-30",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.05668v1",
+ "title": "CFaiRLLM: Consumer Fairness Evaluation in Large-Language Model Recommender System",
+ "abstract": "In the evolving landscape of recommender systems, the integration of Large\nLanguage Models (LLMs) such as ChatGPT marks a new era, introducing the concept\nof Recommendation via LLM (RecLLM). While these advancements promise\nunprecedented personalization and efficiency, they also bring to the fore\ncritical concerns regarding fairness, particularly in how recommendations might\ninadvertently perpetuate or amplify biases associated with sensitive user\nattributes. In order to address these concerns, our study introduces a\ncomprehensive evaluation framework, CFaiRLLM, aimed at evaluating (and thereby\nmitigating) biases on the consumer side within RecLLMs.\n Our research methodically assesses the fairness of RecLLMs by examining how\nrecommendations might vary with the inclusion of sensitive attributes such as\ngender, age, and their intersections, through both similarity alignment and\ntrue preference alignment. By analyzing recommendations generated under\ndifferent conditions-including the use of sensitive attributes in user\nprompts-our framework identifies potential biases in the recommendations\nprovided. A key part of our study involves exploring how different detailed\nstrategies for constructing user profiles (random, top-rated, recent) impact\nthe alignment between recommendations made without consideration of sensitive\nattributes and those that are sensitive-attribute-aware, highlighting the bias\nmechanisms within RecLLMs.\n The findings in our study highlight notable disparities in the fairness of\nrecommendations, particularly when sensitive attributes are integrated into the\nrecommendation process, either individually or in combination. The analysis\ndemonstrates that the choice of user profile sampling strategy plays a\nsignificant role in affecting fairness outcomes, highlighting the complexity of\nachieving fair recommendations in the era of LLMs.",
+ "authors": "Yashar Deldjoo, Tommaso di Noia",
+ "published": "2024-03-08",
+ "updated": "2024-03-08",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2309.14345v2",
+ "title": "Bias Testing and Mitigation in LLM-based Code Generation",
+ "abstract": "Utilizing state-of-the-art Large Language Models (LLMs), automatic code\ngeneration models play a pivotal role in enhancing the productivity of software\ndevelopment procedures. As the adoption of LLMs becomes more widespread in\nsoftware coding ecosystems, a pressing issue has emerged: does the generated\ncode contain social bias and unfairness, such as those related to age, gender,\nand race? This issue concerns the integrity, fairness, and ethical foundation\nof software applications that depend on the code generated by these models, yet\nis under-explored in the literature. This paper presents a novel bias testing\nframework that is specifically designed for code generation tasks. Based on\nthis framework, we conduct an extensive evaluation of the bias in code\ngenerated by five state-of-the-art LLMs. Our findings reveal that 20.29% to\n44.93% code functions generated by the models under study are biased when\nhandling bias sensitive tasks (i.e., tasks that involve sensitive attributes\nsuch as age and gender). This indicates that the existing LLMs can be unfair in\ncode generation, posing risks of unintended and harmful software behaviors. To\nmitigate bias for code generation models, we evaluate five bias mitigation\nprompt strategies, i.e., utilizing bias testing results to refine the code\n(zero-shot), one-, few-shot, and two Chain-of-Thought (CoT) prompts. Our\nevaluation results illustrate that these strategies are all effective in\nmitigating bias. Overall, one-shot and few-shot learning are the two most\neffective. For GPT-4, 80% to 90% code bias can be removed with one-shot\nlearning.",
+ "authors": "Dong Huang, Qingwen Bu, Jie Zhang, Xiaofei Xie, Junjie Chen, Heming Cui",
+ "published": "2023-09-03",
+ "updated": "2024-01-09",
+ "primary_cat": "cs.SE",
+ "cats": [
+ "cs.SE",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.00884v2",
+ "title": "Text classification of column headers with a controlled vocabulary: leveraging LLMs for metadata enrichment",
+ "abstract": "Traditional dataset retrieval systems index on metadata information rather\nthan on the data values. Thus relying primarily on manual annotations and\nhigh-quality metadata, processes known to be labour-intensive and challenging\nto automate. We propose a method to support metadata enrichment with topic\nannotations of column headers using three Large Language Models (LLMs):\nChatGPT-3.5, GoogleBard and GoogleGemini. We investigate the LLMs ability to\nclassify column headers based on domain-specific topics from a controlled\nvocabulary. We evaluate our approach by assessing the internal consistency of\nthe LLMs, the inter-machine alignment, and the human-machine agreement for the\ntopic classification task. Additionally, we investigate the impact of\ncontextual information (i.e. dataset description) on the classification\noutcomes. Our results suggest that ChatGPT and GoogleGemini outperform\nGoogleBard for internal consistency as well as LLM-human-alignment.\nInterestingly, we found that context had no impact on the LLMs performances.\nThis work proposes a novel approach that leverages LLMs for text classification\nusing a controlled topic vocabulary, which has the potential to facilitate\nautomated metadata enrichment, thereby enhancing dataset retrieval and the\nFindability, Accessibility, Interoperability and Reusability (FAIR) of research\ndata on the Web.",
+ "authors": "Margherita Martorana, Tobias Kuhn, Lise Stork, Jacco van Ossenbruggen",
+ "published": "2024-03-01",
+ "updated": "2024-03-05",
+ "primary_cat": "cs.DB",
+ "cats": [
+ "cs.DB",
+ "cs.AI",
+ "cs.IR"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.06003v1",
+ "title": "FreeEval: A Modular Framework for Trustworthy and Efficient Evaluation of Large Language Models",
+ "abstract": "The rapid development of large language model (LLM) evaluation methodologies\nand datasets has led to a profound challenge: integrating state-of-the-art\nevaluation techniques cost-effectively while ensuring reliability,\nreproducibility, and efficiency. Currently, there is a notable absence of a\nunified and adaptable framework that seamlessly integrates various evaluation\napproaches. Moreover, the reliability of evaluation findings is often\nquestionable due to potential data contamination, with the evaluation\nefficiency commonly overlooked when facing the substantial costs associated\nwith LLM inference. In response to these challenges, we introduce FreeEval, a\nmodular and scalable framework crafted to enable trustworthy and efficient\nautomatic evaluations of LLMs. Firstly, FreeEval's unified abstractions\nsimplify the integration and improve the transparency of diverse evaluation\nmethodologies, encompassing dynamic evaluation that demand sophisticated LLM\ninteractions. Secondly, the framework integrates meta-evaluation techniques\nlike human evaluation and data contamination detection, which, along with\ndynamic evaluation modules in the platform, enhance the fairness of the\nevaluation outcomes. Lastly, FreeEval is designed with a high-performance\ninfrastructure, including distributed computation and caching strategies,\nenabling extensive evaluations across multi-node, multi-GPU clusters for\nopen-source and proprietary LLMs.",
+ "authors": "Zhuohao Yu, Chang Gao, Wenjin Yao, Yidong Wang, Zhengran Zeng, Wei Ye, Jindong Wang, Yue Zhang, Shikun Zhang",
+ "published": "2024-04-09",
+ "updated": "2024-04-09",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2308.10149v2",
+ "title": "A Survey on Fairness in Large Language Models",
+ "abstract": "Large Language Models (LLMs) have shown powerful performance and development\nprospects and are widely deployed in the real world. However, LLMs can capture\nsocial biases from unprocessed training data and propagate the biases to\ndownstream tasks. Unfair LLM systems have undesirable social impacts and\npotential harms. In this paper, we provide a comprehensive review of related\nresearch on fairness in LLMs. Considering the influence of parameter magnitude\nand training paradigm on research strategy, we divide existing fairness\nresearch into oriented to medium-sized LLMs under pre-training and fine-tuning\nparadigms and oriented to large-sized LLMs under prompting paradigms. First,\nfor medium-sized LLMs, we introduce evaluation metrics and debiasing methods\nfrom the perspectives of intrinsic bias and extrinsic bias, respectively. Then,\nfor large-sized LLMs, we introduce recent fairness research, including fairness\nevaluation, reasons for bias, and debiasing methods. Finally, we discuss and\nprovide insight on the challenges and future directions for the development of\nfairness in LLMs.",
+ "authors": "Yingji Li, Mengnan Du, Rui Song, Xin Wang, Ying Wang",
+ "published": "2023-08-20",
+ "updated": "2024-02-21",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.07609v3",
+ "title": "Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation",
+ "abstract": "The remarkable achievements of Large Language Models (LLMs) have led to the\nemergence of a novel recommendation paradigm -- Recommendation via LLM\n(RecLLM). Nevertheless, it is important to note that LLMs may contain social\nprejudices, and therefore, the fairness of recommendations made by RecLLM\nrequires further investigation. To avoid the potential risks of RecLLM, it is\nimperative to evaluate the fairness of RecLLM with respect to various sensitive\nattributes on the user side. Due to the differences between the RecLLM paradigm\nand the traditional recommendation paradigm, it is problematic to directly use\nthe fairness benchmark of traditional recommendation. To address the dilemma,\nwe propose a novel benchmark called Fairness of Recommendation via LLM\n(FaiRLLM). This benchmark comprises carefully crafted metrics and a dataset\nthat accounts for eight sensitive attributes1 in two recommendation scenarios:\nmusic and movies. By utilizing our FaiRLLM benchmark, we conducted an\nevaluation of ChatGPT and discovered that it still exhibits unfairness to some\nsensitive attributes when generating recommendations. Our code and dataset can\nbe found at https://github.com/jizhi-zhang/FaiRLLM.",
+ "authors": "Jizhi Zhang, Keqin Bao, Yang Zhang, Wenjie Wang, Fuli Feng, Xiangnan He",
+ "published": "2023-05-12",
+ "updated": "2023-10-17",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR",
+ "cs.CL",
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.09219v5",
+ "title": "\"Kelly is a Warm Person, Joseph is a Role Model\": Gender Biases in LLM-Generated Reference Letters",
+ "abstract": "Large Language Models (LLMs) have recently emerged as an effective tool to\nassist individuals in writing various types of content, including professional\ndocuments such as recommendation letters. Though bringing convenience, this\napplication also introduces unprecedented fairness concerns. Model-generated\nreference letters might be directly used by users in professional scenarios. If\nunderlying biases exist in these model-constructed letters, using them without\nscrutinization could lead to direct societal harms, such as sabotaging\napplication success rates for female applicants. In light of this pressing\nissue, it is imminent and necessary to comprehensively study fairness issues\nand associated harms in this real-world use case. In this paper, we critically\nexamine gender biases in LLM-generated reference letters. Drawing inspiration\nfrom social science findings, we design evaluation methods to manifest biases\nthrough 2 dimensions: (1) biases in language style and (2) biases in lexical\ncontent. We further investigate the extent of bias propagation by analyzing the\nhallucination bias of models, a term that we define to be bias exacerbation in\nmodel-hallucinated contents. Through benchmarking evaluation on 2 popular LLMs-\nChatGPT and Alpaca, we reveal significant gender biases in LLM-generated\nrecommendation letters. Our findings not only warn against using LLMs for this\napplication without scrutinization, but also illuminate the importance of\nthoroughly studying hidden biases and harms in LLM-generated professional\ndocuments.",
+ "authors": "Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, Nanyun Peng",
+ "published": "2023-10-13",
+ "updated": "2023-12-01",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.13925v1",
+ "title": "MARIO Eval: Evaluate Your Math LLM with your Math LLM--A mathematical dataset evaluation toolkit",
+ "abstract": "Large language models (LLMs) have been explored in a variety of reasoning\ntasks including solving of mathematical problems. Each math dataset typically\nincludes its own specially designed evaluation script, which, while suitable\nfor its intended use, lacks generalizability across different datasets.\nConsequently, updates and adaptations to these evaluation tools tend to occur\nwithout being systematically reported, leading to inconsistencies and obstacles\nto fair comparison across studies. To bridge this gap, we introduce a\ncomprehensive mathematical evaluation toolkit that not only utilizes a python\ncomputer algebra system (CAS) for its numerical accuracy, but also integrates\nan optional LLM, known for its considerable natural language processing\ncapabilities. To validate the effectiveness of our toolkit, we manually\nannotated two distinct datasets. Our experiments demonstrate that the toolkit\nyields more robust evaluation results compared to prior works, even without an\nLLM. Furthermore, when an LLM is incorporated, there is a notable enhancement.\nThe code for our method will be made available at\n\\url{https://github.com/MARIO-Math-Reasoning/math_evaluation}.",
+ "authors": "Boning Zhang, Chengxi Li, Kai Fan",
+ "published": "2024-04-22",
+ "updated": "2024-04-22",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2312.06056v1",
+ "title": "METAL: Metamorphic Testing Framework for Analyzing Large-Language Model Qualities",
+ "abstract": "Large-Language Models (LLMs) have shifted the paradigm of natural language\ndata processing. However, their black-boxed and probabilistic characteristics\ncan lead to potential risks in the quality of outputs in diverse LLM\napplications. Recent studies have tested Quality Attributes (QAs), such as\nrobustness or fairness, of LLMs by generating adversarial input texts. However,\nexisting studies have limited their coverage of QAs and tasks in LLMs and are\ndifficult to extend. Additionally, these studies have only used one evaluation\nmetric, Attack Success Rate (ASR), to assess the effectiveness of their\napproaches. We propose a MEtamorphic Testing for Analyzing LLMs (METAL)\nframework to address these issues by applying Metamorphic Testing (MT)\ntechniques. This approach facilitates the systematic testing of LLM qualities\nby defining Metamorphic Relations (MRs), which serve as modularized evaluation\nmetrics. The METAL framework can automatically generate hundreds of MRs from\ntemplates that cover various QAs and tasks. In addition, we introduced novel\nmetrics that integrate the ASR method into the semantic qualities of text to\nassess the effectiveness of MRs accurately. Through the experiments conducted\nwith three prominent LLMs, we have confirmed that the METAL framework\neffectively evaluates essential QAs on primary LLM tasks and reveals the\nquality risks in LLMs. Moreover, the newly proposed metrics can guide the\noptimal MRs for testing each task and suggest the most effective method for\ngenerating MRs.",
+ "authors": "Sangwon Hyun, Mingyu Guo, M. Ali Babar",
+ "published": "2023-12-11",
+ "updated": "2023-12-11",
+ "primary_cat": "cs.SE",
+ "cats": [
+ "cs.SE",
+ "cs.AI",
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2309.08836v2",
+ "title": "Bias and Fairness in Chatbots: An Overview",
+ "abstract": "Chatbots have been studied for more than half a century. With the rapid\ndevelopment of natural language processing (NLP) technologies in recent years,\nchatbots using large language models (LLMs) have received much attention\nnowadays. Compared with traditional ones, modern chatbots are more powerful and\nhave been used in real-world applications. There are however, bias and fairness\nconcerns in modern chatbot design. Due to the huge amounts of training data,\nextremely large model sizes, and lack of interpretability, bias mitigation and\nfairness preservation of modern chatbots are challenging. Thus, a comprehensive\noverview on bias and fairness in chatbot systems is given in this paper. The\nhistory of chatbots and their categories are first reviewed. Then, bias sources\nand potential harms in applications are analyzed. Considerations in designing\nfair and unbiased chatbot systems are examined. Finally, future research\ndirections are discussed.",
+ "authors": "Jintang Xue, Yun-Cheng Wang, Chengwei Wei, Xiaofeng Liu, Jonghye Woo, C. -C. Jay Kuo",
+ "published": "2023-09-16",
+ "updated": "2023-12-10",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2303.01248v3",
+ "title": "Can ChatGPT Assess Human Personalities? A General Evaluation Framework",
+ "abstract": "Large Language Models (LLMs) especially ChatGPT have produced impressive\nresults in various areas, but their potential human-like psychology is still\nlargely unexplored. Existing works study the virtual personalities of LLMs but\nrarely explore the possibility of analyzing human personalities via LLMs. This\npaper presents a generic evaluation framework for LLMs to assess human\npersonalities based on Myers Briggs Type Indicator (MBTI) tests. Specifically,\nwe first devise unbiased prompts by randomly permuting options in MBTI\nquestions and adopt the average testing result to encourage more impartial\nanswer generation. Then, we propose to replace the subject in question\nstatements to enable flexible queries and assessments on different subjects\nfrom LLMs. Finally, we re-formulate the question instructions in a manner of\ncorrectness evaluation to facilitate LLMs to generate clearer responses. The\nproposed framework enables LLMs to flexibly assess personalities of different\ngroups of people. We further propose three evaluation metrics to measure the\nconsistency, robustness, and fairness of assessment results from\nstate-of-the-art LLMs including ChatGPT and GPT-4. Our experiments reveal\nChatGPT's ability to assess human personalities, and the average results\ndemonstrate that it can achieve more consistent and fairer assessments in spite\nof lower robustness against prompt biases compared with InstructGPT.",
+ "authors": "Haocong Rao, Cyril Leung, Chunyan Miao",
+ "published": "2023-03-01",
+ "updated": "2023-10-13",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2312.15478v1",
+ "title": "A Group Fairness Lens for Large Language Models",
+ "abstract": "The rapid advancement of large language models has revolutionized various\napplications but also raised crucial concerns about their potential to\nperpetuate biases and unfairness when deployed in social media contexts.\nEvaluating LLMs' potential biases and fairness has become crucial, as existing\nmethods rely on limited prompts focusing on just a few groups, lacking a\ncomprehensive categorical perspective. In this paper, we propose evaluating LLM\nbiases from a group fairness lens using a novel hierarchical schema\ncharacterizing diverse social groups. Specifically, we construct a dataset,\nGFair, encapsulating target-attribute combinations across multiple dimensions.\nIn addition, we introduce statement organization, a new open-ended text\ngeneration task, to uncover complex biases in LLMs. Extensive evaluations of\npopular LLMs reveal inherent safety concerns. To mitigate the biases of LLM\nfrom a group fairness perspective, we pioneer a novel chain-of-thought method\nGF-Think to mitigate biases of LLMs from a group fairness perspective.\nExperimental results demonstrate its efficacy in mitigating bias in LLMs to\nachieve fairness.",
+ "authors": "Guanqun Bi, Lei Shen, Yuqiang Xie, Yanan Cao, Tiangang Zhu, Xiaodong He",
+ "published": "2023-12-24",
+ "updated": "2023-12-24",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.03033v1",
+ "title": "Beyond Words: A Mathematical Framework for Interpreting Large Language Models",
+ "abstract": "Large language models (LLMs) are powerful AI tools that can generate and\ncomprehend natural language text and other complex information. However, the\nfield lacks a mathematical framework to systematically describe, compare and\nimprove LLMs. We propose Hex a framework that clarifies key terms and concepts\nin LLM research, such as hallucinations, alignment, self-verification and\nchain-of-thought reasoning. The Hex framework offers a precise and consistent\nway to characterize LLMs, identify their strengths and weaknesses, and\nintegrate new findings. Using Hex, we differentiate chain-of-thought reasoning\nfrom chain-of-thought prompting and establish the conditions under which they\nare equivalent. This distinction clarifies the basic assumptions behind\nchain-of-thought prompting and its implications for methods that use it, such\nas self-verification and prompt programming.\n Our goal is to provide a formal framework for LLMs that can help both\nresearchers and practitioners explore new possibilities for generative AI. We\ndo not claim to have a definitive solution, but rather a tool for opening up\nnew research avenues. We argue that our formal definitions and results are\ncrucial for advancing the discussion on how to build generative AI systems that\nare safe, reliable, fair and robust, especially in domains like healthcare and\nsoftware engineering.",
+ "authors": "Javier Gonz\u00e1lez, Aditya V. Nori",
+ "published": "2023-11-06",
+ "updated": "2023-11-06",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.17916v2",
+ "title": "LLM-Resistant Math Word Problem Generation via Adversarial Attacks",
+ "abstract": "Large language models (LLMs) have significantly transformed the educational\nlandscape. As current plagiarism detection tools struggle to keep pace with\nLLMs' rapid advancements, the educational community faces the challenge of\nassessing students' true problem-solving abilities in the presence of LLMs. In\nthis work, we explore a new paradigm for ensuring fair evaluation -- generating\nadversarial examples which preserve the structure and difficulty of the\noriginal questions aimed for assessment, but are unsolvable by LLMs. Focusing\non the domain of math word problems, we leverage abstract syntax trees to\nstructurally generate adversarial examples that cause LLMs to produce incorrect\nanswers by simply editing the numeric values in the problems. We conduct\nexperiments on various open- and closed-source LLMs, quantitatively and\nqualitatively demonstrating that our method significantly degrades their math\nproblem-solving ability. We identify shared vulnerabilities among LLMs and\npropose a cost-effective approach to attack high-cost models. Additionally, we\nconduct automatic analysis on math problems and investigate the cause of\nfailure, offering a nuanced view into model's limitation.",
+ "authors": "Roy Xie, Chengxuan Huang, Junlin Wang, Bhuwan Dhingra",
+ "published": "2024-02-27",
+ "updated": "2024-03-30",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.11595v3",
+ "title": "Examining Inter-Consistency of Large Language Models Collaboration: An In-depth Analysis via Debate",
+ "abstract": "Large Language Models (LLMs) have shown impressive capabilities in various\napplications, but they still face various inconsistency issues. Existing works\nprimarily focus on the inconsistency issues within a single LLM, while we\ncomplementarily explore the inter-consistency among multiple LLMs for\ncollaboration. To examine whether LLMs can collaborate effectively to achieve a\nconsensus for a shared goal, we focus on commonsense reasoning, and introduce a\nformal debate framework (FORD) to conduct a three-stage debate among LLMs with\nreal-world scenarios alignment: fair debate, mismatched debate, and roundtable\ndebate. Through extensive experiments on various datasets, LLMs can effectively\ncollaborate to reach a consensus despite noticeable inter-inconsistencies, but\nimbalances in their abilities can lead to domination by superior LLMs.\nLeveraging a more advanced LLM like GPT-4 as an authoritative judge can boost\ncollaboration performance. Our work contributes to understanding the\ninter-consistency among LLMs and lays the foundation for developing future\ncollaboration methods. Codes and data are available at\nhttps://github.com/Waste-Wood/FORD",
+ "authors": "Kai Xiong, Xiao Ding, Yixin Cao, Ting Liu, Bing Qin",
+ "published": "2023-05-19",
+ "updated": "2023-10-18",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2405.01769v1",
+ "title": "A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law",
+ "abstract": "In the fast-evolving domain of artificial intelligence, large language models\n(LLMs) such as GPT-3 and GPT-4 are revolutionizing the landscapes of finance,\nhealthcare, and law: domains characterized by their reliance on professional\nexpertise, challenging data acquisition, high-stakes, and stringent regulatory\ncompliance. This survey offers a detailed exploration of the methodologies,\napplications, challenges, and forward-looking opportunities of LLMs within\nthese high-stakes sectors. We highlight the instrumental role of LLMs in\nenhancing diagnostic and treatment methodologies in healthcare, innovating\nfinancial analytics, and refining legal interpretation and compliance\nstrategies. Moreover, we critically examine the ethics for LLM applications in\nthese fields, pointing out the existing ethical concerns and the need for\ntransparent, fair, and robust AI systems that respect regulatory norms. By\npresenting a thorough review of current literature and practical applications,\nwe showcase the transformative impact of LLMs, and outline the imperative for\ninterdisciplinary cooperation, methodological advancements, and ethical\nvigilance. Through this lens, we aim to spark dialogue and inspire future\nresearch dedicated to maximizing the benefits of LLMs while mitigating their\nrisks in these precision-dependent sectors. To facilitate future research on\nLLMs in these critical societal domains, we also initiate a reading list that\ntracks the latest advancements under this topic, which will be continually\nupdated: \\url{https://github.com/czyssrs/LLM_X_papers}.",
+ "authors": "Zhiyu Zoey Chen, Jing Ma, Xinlu Zhang, Nan Hao, An Yan, Armineh Nourbakhsh, Xianjun Yang, Julian McAuley, Linda Petzold, William Yang Wang",
+ "published": "2024-05-02",
+ "updated": "2024-05-02",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2312.07420v1",
+ "title": "FairSISA: Ensemble Post-Processing to Improve Fairness of Unlearning in LLMs",
+ "abstract": "Training large language models (LLMs) is a costly endeavour in terms of time\nand computational resources. The large amount of training data used during the\nunsupervised pre-training phase makes it difficult to verify all data and,\nunfortunately, undesirable data may be ingested during training. Re-training\nfrom scratch is impractical and has led to the creation of the 'unlearning'\ndiscipline where models are modified to \"unlearn\" undesirable information\nwithout retraining. However, any modification can alter the behaviour of LLMs,\nespecially on key dimensions such as fairness. This is the first work that\nexamines this interplay between unlearning and fairness for LLMs. In\nparticular, we focus on a popular unlearning framework known as SISA [Bourtoule\net al., 2021], which creates an ensemble of models trained on disjoint shards.\nWe evaluate the performance-fairness trade-off for SISA, and empirically\ndemsontrate that SISA can indeed reduce fairness in LLMs. To remedy this, we\npropose post-processing bias mitigation techniques for ensemble models produced\nby SISA. We adapt the post-processing fairness improvement technique from\n[Hardt et al., 2016] to design three methods that can handle model ensembles,\nand prove that one of the methods is an optimal fair predictor for ensemble of\nmodels. Through experimental results, we demonstrate the efficacy of our\npost-processing framework called 'FairSISA'.",
+ "authors": "Swanand Ravindra Kadhe, Anisa Halimi, Ambrish Rawat, Nathalie Baracaldo",
+ "published": "2023-12-12",
+ "updated": "2023-12-12",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.18140v1",
+ "title": "ROBBIE: Robust Bias Evaluation of Large Generative Language Models",
+ "abstract": "As generative large language models (LLMs) grow more performant and\nprevalent, we must develop comprehensive enough tools to measure and improve\ntheir fairness. Different prompt-based datasets can be used to measure social\nbias across multiple text domains and demographic axes, meaning that testing\nLLMs on more datasets can potentially help us characterize their biases more\nfully, and better ensure equal and equitable treatment of marginalized\ndemographic groups. In this work, our focus is two-fold:\n (1) Benchmarking: a comparison of 6 different prompt-based bias and toxicity\nmetrics across 12 demographic axes and 5 families of generative LLMs. Out of\nthose 6 metrics, AdvPromptSet and HolisticBiasR are novel datasets proposed in\nthe paper. The comparison of those benchmarks gives us insights about the bias\nand toxicity of the compared models. Therefore, we explore the frequency of\ndemographic terms in common LLM pre-training corpora and how this may relate to\nmodel biases.\n (2) Mitigation: we conduct a comprehensive study of how well 3 bias/toxicity\nmitigation techniques perform across our suite of measurements. ROBBIE aims to\nprovide insights for practitioners while deploying a model, emphasizing the\nneed to not only measure potential harms, but also understand how they arise by\ncharacterizing the data, mitigate harms once found, and balance any trade-offs.\nWe open-source our analysis code in hopes of encouraging broader measurements\nof bias in future LLMs.",
+ "authors": "David Esiobu, Xiaoqing Tan, Saghar Hosseini, Megan Ung, Yuchen Zhang, Jude Fernandes, Jane Dwivedi-Yu, Eleonora Presani, Adina Williams, Eric Michael Smith",
+ "published": "2023-11-29",
+ "updated": "2023-11-29",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2312.15198v2",
+ "title": "Do LLM Agents Exhibit Social Behavior?",
+ "abstract": "The advances of Large Language Models (LLMs) are expanding their utility in\nboth academic research and practical applications. Recent social science\nresearch has explored the use of these ``black-box'' LLM agents for simulating\ncomplex social systems and potentially substituting human subjects in\nexperiments. Our study delves into this emerging domain, investigating the\nextent to which LLMs exhibit key social interaction principles, such as social\nlearning, social preference, and cooperative behavior (indirect reciprocity),\nin their interactions with humans and other agents. We develop a framework for\nour study, wherein classical laboratory experiments involving human subjects\nare adapted to use LLM agents. This approach involves step-by-step reasoning\nthat mirrors human cognitive processes and zero-shot learning to assess the\ninnate preferences of LLMs. Our analysis of LLM agents' behavior includes both\nthe primary effects and an in-depth examination of the underlying mechanisms.\nFocusing on GPT-4, our analyses suggest that LLM agents appear to exhibit a\nrange of human-like social behaviors such as distributional and reciprocity\npreferences, responsiveness to group identity cues, engagement in indirect\nreciprocity, and social learning capabilities. However, our analysis also\nreveals notable differences: LLMs demonstrate a pronounced fairness preference,\nweaker positive reciprocity, and a more calculating approach in social learning\ncompared to humans. These insights indicate that while LLMs hold great promise\nfor applications in social science research, such as in laboratory experiments\nand agent-based modeling, the subtle behavioral differences between LLM agents\nand humans warrant further investigation. Careful examination and development\nof protocols in evaluating the social behaviors of LLMs are necessary before\ndirectly applying these models to emulate human behavior.",
+ "authors": "Yan Leng, Yuan Yuan",
+ "published": "2023-12-23",
+ "updated": "2024-02-22",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI",
+ "cs.SI",
+ "econ.GN",
+ "q-fin.EC"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2401.08495v2",
+ "title": "Large Language Models Portray Socially Subordinate Groups as More Homogeneous, Consistent with a Bias Observed in Humans",
+ "abstract": "Large language models (LLMs) are becoming pervasive in everyday life, yet\ntheir propensity to reproduce biases inherited from training data remains a\npressing concern. Prior investigations into bias in LLMs have focused on the\nassociation of social groups with stereotypical attributes. However, this is\nonly one form of human bias such systems may reproduce. We investigate a new\nform of bias in LLMs that resembles a social psychological phenomenon where\nsocially subordinate groups are perceived as more homogeneous than socially\ndominant groups. We had ChatGPT, a state-of-the-art LLM, generate texts about\nintersectional group identities and compared those texts on measures of\nhomogeneity. We consistently found that ChatGPT portrayed African, Asian, and\nHispanic Americans as more homogeneous than White Americans, indicating that\nthe model described racial minority groups with a narrower range of human\nexperience. ChatGPT also portrayed women as more homogeneous than men, but\nthese differences were small. Finally, we found that the effect of gender\ndiffered across racial/ethnic groups such that the effect of gender was\nconsistent within African and Hispanic Americans but not within Asian and White\nAmericans. We argue that the tendency of LLMs to describe groups as less\ndiverse risks perpetuating stereotypes and discriminatory behavior.",
+ "authors": "Messi H. J. Lee, Jacob M. Montgomery, Calvin K. Lai",
+ "published": "2024-01-16",
+ "updated": "2024-04-26",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2312.15398v1",
+ "title": "Fairness-Aware Structured Pruning in Transformers",
+ "abstract": "The increasing size of large language models (LLMs) has introduced challenges\nin their training and inference. Removing model components is perceived as a\nsolution to tackle the large model sizes, however, existing pruning methods\nsolely focus on performance, without considering an essential aspect for the\nresponsible use of LLMs: model fairness. It is crucial to address the fairness\nof LLMs towards diverse groups, such as women, Black people, LGBTQ+, Jewish\ncommunities, among others, as they are being deployed and available to a wide\naudience. In this work, first, we investigate how attention heads impact\nfairness and performance in pre-trained transformer-based language models. We\nthen propose a novel method to prune the attention heads that negatively impact\nfairness while retaining the heads critical for performance, i.e. language\nmodeling capabilities. Our approach is practical in terms of time and\nresources, as it does not require fine-tuning the final pruned, and fairer,\nmodel. Our findings demonstrate a reduction in gender bias by 19%, 19.5%,\n39.5%, 34.7%, 23%, and 8% for DistilGPT-2, GPT-2, GPT-Neo of two different\nsizes, GPT-J, and Llama 2 models, respectively, in comparison to the biased\nmodel, with only a slight decrease in performance.",
+ "authors": "Abdelrahman Zayed, Goncalo Mordido, Samira Shabanian, Ioana Baldini, Sarath Chandar",
+ "published": "2023-12-24",
+ "updated": "2023-12-24",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.CY",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2305.18569v1",
+ "title": "Fairness of ChatGPT",
+ "abstract": "Understanding and addressing unfairness in LLMs are crucial for responsible\nAI deployment. However, there is a limited availability of quantitative\nanalyses and in-depth studies regarding fairness evaluations in LLMs,\nespecially when applying LLMs to high-stakes fields. This work aims to fill\nthis gap by providing a systematic evaluation of the effectiveness and fairness\nof LLMs using ChatGPT as a study case. We focus on assessing ChatGPT's\nperformance in high-takes fields including education, criminology, finance and\nhealthcare. To make thorough evaluation, we consider both group fairness and\nindividual fairness and we also observe the disparities in ChatGPT's outputs\nunder a set of biased or unbiased prompts. This work contributes to a deeper\nunderstanding of LLMs' fairness performance, facilitates bias mitigation and\nfosters the development of responsible artificial intelligence systems.",
+ "authors": "Yunqi Li, Yongfeng Zhang",
+ "published": "2023-05-22",
+ "updated": "2023-05-22",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.AI",
+ "cs.CL",
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.04814v2",
+ "title": "Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks",
+ "abstract": "We introduce Syntax-Aware Fill-In-the-Middle (SAFIM), a new benchmark for\nevaluating Large Language Models (LLMs) on the code Fill-in-the-Middle (FIM)\ntask. This benchmark focuses on syntax-aware completions of program structures\nsuch as code blocks and conditional expressions, and includes 17,720 examples\nfrom multiple programming languages, sourced from recent code submissions after\nApril 2022 to minimize data contamination. SAFIM provides a robust framework\nwith various prompt designs and novel syntax-aware post-processing techniques,\nfacilitating accurate and fair comparisons across LLMs. Our comprehensive\nevaluation of 15 LLMs shows that FIM pretraining not only enhances FIM\nproficiency but also improves Left-to-Right (L2R) inference using LLMs. Our\nfindings challenge conventional beliefs and suggest that pretraining methods\nand data quality have more impact than model size. SAFIM thus serves as a\nfoundational platform for future research in effective pretraining strategies\nfor code LLMs. The evaluation toolkit and dataset are available at\nhttps://github.com/gonglinyuan/safim, and the leaderboard is available at\nhttps://safimbenchmark.com.",
+ "authors": "Linyuan Gong, Sida Wang, Mostafa Elhoushi, Alvin Cheung",
+ "published": "2024-03-07",
+ "updated": "2024-04-10",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.LG",
+ "cs.SE"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2309.09397v1",
+ "title": "Do Large GPT Models Discover Moral Dimensions in Language Representations? A Topological Study Of Sentence Embeddings",
+ "abstract": "As Large Language Models are deployed within Artificial Intelligence systems,\nthat are increasingly integrated with human society, it becomes more important\nthan ever to study their internal structures. Higher level abilities of LLMs\nsuch as GPT-3.5 emerge in large part due to informative language\nrepresentations they induce from raw text data during pre-training on trillions\nof words. These embeddings exist in vector spaces of several thousand\ndimensions, and their processing involves mapping between multiple vector\nspaces, with total number of parameters on the order of trillions. Furthermore,\nthese language representations are induced by gradient optimization, resulting\nin a black box system that is hard to interpret. In this paper, we take a look\nat the topological structure of neuronal activity in the \"brain\" of Chat-GPT's\nfoundation language model, and analyze it with respect to a metric representing\nthe notion of fairness. We develop a novel approach to visualize GPT's moral\ndimensions. We first compute a fairness metric, inspired by social psychology\nliterature, to identify factors that typically influence fairness assessments\nin humans, such as legitimacy, need, and responsibility. Subsequently, we\nsummarize the manifold's shape using a lower-dimensional simplicial complex,\nwhose topology is derived from this metric. We color it with a heat map\nassociated with this fairness metric, producing human-readable visualizations\nof the high-dimensional sentence manifold. Our results show that sentence\nembeddings based on GPT-3.5 can be decomposed into two submanifolds\ncorresponding to fair and unfair moral judgments. This indicates that GPT-based\nlanguage models develop a moral dimension within their representation spaces\nand induce an understanding of fairness during their training process.",
+ "authors": "Stephen Fitz",
+ "published": "2023-09-17",
+ "updated": "2023-09-17",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.CY",
+ "cs.LG",
+ "cs.NE"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.05694v1",
+ "title": "A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics",
+ "abstract": "The utilization of large language models (LLMs) in the Healthcare domain has\ngenerated both excitement and concern due to their ability to effectively\nrespond to freetext queries with certain professional knowledge. This survey\noutlines the capabilities of the currently developed LLMs for Healthcare and\nexplicates their development process, with the aim of providing an overview of\nthe development roadmap from traditional Pretrained Language Models (PLMs) to\nLLMs. Specifically, we first explore the potential of LLMs to enhance the\nefficiency and effectiveness of various Healthcare applications highlighting\nboth the strengths and limitations. Secondly, we conduct a comparison between\nthe previous PLMs and the latest LLMs, as well as comparing various LLMs with\neach other. Then we summarize related Healthcare training data, training\nmethods, optimization strategies, and usage. Finally, the unique concerns\nassociated with deploying LLMs in Healthcare settings are investigated,\nparticularly regarding fairness, accountability, transparency and ethics. Our\nsurvey provide a comprehensive investigation from perspectives of both computer\nscience and Healthcare specialty. Besides the discussion about Healthcare\nconcerns, we supports the computer science community by compiling a collection\nof open source resources, such as accessible datasets, the latest\nmethodologies, code implementations, and evaluation benchmarks in the Github.\nSummarily, we contend that a significant paradigm shift is underway,\ntransitioning from PLMs to LLMs. This shift encompasses a move from\ndiscriminative AI approaches to generative AI approaches, as well as a shift\nfrom model-centered methodologies to datacentered methodologies.",
+ "authors": "Kai He, Rui Mao, Qika Lin, Yucheng Ruan, Xiang Lan, Mengling Feng, Erik Cambria",
+ "published": "2023-10-09",
+ "updated": "2023-10-09",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.10567v3",
+ "title": "InSaAF: Incorporating Safety through Accuracy and Fairness | Are LLMs ready for the Indian Legal Domain?",
+ "abstract": "Recent advancements in language technology and Artificial Intelligence have\nresulted in numerous Language Models being proposed to perform various tasks in\nthe legal domain ranging from predicting judgments to generating summaries.\nDespite their immense potential, these models have been proven to learn and\nexhibit societal biases and make unfair predictions. In this study, we explore\nthe ability of Large Language Models (LLMs) to perform legal tasks in the\nIndian landscape when social factors are involved. We present a novel metric,\n$\\beta$-weighted $\\textit{Legal Safety Score ($LSS_{\\beta}$)}$, which\nencapsulates both the fairness and accuracy aspects of the LLM. We assess LLMs'\nsafety by considering its performance in the $\\textit{Binary Statutory\nReasoning}$ task and its fairness exhibition with respect to various axes of\ndisparities in the Indian society. Task performance and fairness scores of\nLLaMA and LLaMA--2 models indicate that the proposed $LSS_{\\beta}$ metric can\neffectively determine the readiness of a model for safe usage in the legal\nsector. We also propose finetuning pipelines, utilising specialised legal\ndatasets, as a potential method to mitigate bias and improve model safety. The\nfinetuning procedures on LLaMA and LLaMA--2 models increase the $LSS_{\\beta}$,\nimproving their usability in the Indian legal domain. Our code is publicly\nreleased.",
+ "authors": "Yogesh Tripathi, Raghav Donakanti, Sahil Girhepuje, Ishan Kavathekar, Bhaskara Hanuma Vedula, Gokul S Krishnan, Shreya Goyal, Anmol Goel, Balaraman Ravindran, Ponnurangam Kumaraguru",
+ "published": "2024-02-16",
+ "updated": "2024-02-21",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.08517v1",
+ "title": "Online Safety Analysis for LLMs: a Benchmark, an Assessment, and a Path Forward",
+ "abstract": "While Large Language Models (LLMs) have seen widespread applications across\nnumerous fields, their limited interpretability poses concerns regarding their\nsafe operations from multiple aspects, e.g., truthfulness, robustness, and\nfairness. Recent research has started developing quality assurance methods for\nLLMs, introducing techniques such as offline detector-based or uncertainty\nestimation methods. However, these approaches predominantly concentrate on\npost-generation analysis, leaving the online safety analysis for LLMs during\nthe generation phase an unexplored area. To bridge this gap, we conduct in this\nwork a comprehensive evaluation of the effectiveness of existing online safety\nanalysis methods on LLMs. We begin with a pilot study that validates the\nfeasibility of detecting unsafe outputs in the early generation process.\nFollowing this, we establish the first publicly available benchmark of online\nsafety analysis for LLMs, including a broad spectrum of methods, models, tasks,\ndatasets, and evaluation metrics. Utilizing this benchmark, we extensively\nanalyze the performance of state-of-the-art online safety analysis methods on\nboth open-source and closed-source LLMs. This analysis reveals the strengths\nand weaknesses of individual methods and offers valuable insights into\nselecting the most appropriate method based on specific application scenarios\nand task requirements. Furthermore, we also explore the potential of using\nhybridization methods, i.e., combining multiple methods to derive a collective\nsafety conclusion, to enhance the efficacy of online safety analysis for LLMs.\nOur findings indicate a promising direction for the development of innovative\nand trustworthy quality assurance methodologies for LLMs, facilitating their\nreliable deployments across diverse domains.",
+ "authors": "Xuan Xie, Jiayang Song, Zhehua Zhou, Yuheng Huang, Da Song, Lei Ma",
+ "published": "2024-04-12",
+ "updated": "2024-04-12",
+ "primary_cat": "cs.SE",
+ "cats": [
+ "cs.SE",
+ "cs.AI",
+ "cs.CL",
+ "cs.CR",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2312.14804v1",
+ "title": "Use large language models to promote equity",
+ "abstract": "Advances in large language models (LLMs) have driven an explosion of interest\nabout their societal impacts. Much of the discourse around how they will impact\nsocial equity has been cautionary or negative, focusing on questions like \"how\nmight LLMs be biased and how would we mitigate those biases?\" This is a vital\ndiscussion: the ways in which AI generally, and LLMs specifically, can entrench\nbiases have been well-documented. But equally vital, and much less discussed,\nis the more opportunity-focused counterpoint: \"what promising applications do\nLLMs enable that could promote equity?\" If LLMs are to enable a more equitable\nworld, it is not enough just to play defense against their biases and failure\nmodes. We must also go on offense, applying them positively to equity-enhancing\nuse cases to increase opportunities for underserved groups and reduce societal\ndiscrimination. There are many choices which determine the impact of AI, and a\nfundamental choice very early in the pipeline is the problems we choose to\napply it to. If we focus only later in the pipeline -- making LLMs marginally\nmore fair as they facilitate use cases which intrinsically entrench power -- we\nwill miss an important opportunity to guide them to equitable impacts. Here, we\nhighlight the emerging potential of LLMs to promote equity by presenting four\nnewly possible, promising research directions, while keeping risks and\ncautionary points in clear view.",
+ "authors": "Emma Pierson, Divya Shanmugam, Rajiv Movva, Jon Kleinberg, Monica Agrawal, Mark Dredze, Kadija Ferryman, Judy Wawira Gichoya, Dan Jurafsky, Pang Wei Koh, Karen Levy, Sendhil Mullainathan, Ziad Obermeyer, Harini Suresh, Keyon Vafa",
+ "published": "2023-12-22",
+ "updated": "2023-12-22",
+ "primary_cat": "cs.CY",
+ "cats": [
+ "cs.CY"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.00811v1",
+ "title": "Cognitive Bias in High-Stakes Decision-Making with LLMs",
+ "abstract": "Large language models (LLMs) offer significant potential as tools to support\nan expanding range of decision-making tasks. However, given their training on\nhuman (created) data, LLMs can inherit both societal biases against protected\ngroups, as well as be subject to cognitive bias. Such human-like bias can\nimpede fair and explainable decisions made with LLM assistance. Our work\nintroduces BiasBuster, a framework designed to uncover, evaluate, and mitigate\ncognitive bias in LLMs, particularly in high-stakes decision-making tasks.\nInspired by prior research in psychology and cognitive sciences, we develop a\ndataset containing 16,800 prompts to evaluate different cognitive biases (e.g.,\nprompt-induced, sequential, inherent). We test various bias mitigation\nstrategies, amidst proposing a novel method using LLMs to debias their own\nprompts. Our analysis provides a comprehensive picture on the presence and\neffects of cognitive bias across different commercial and open-source models.\nWe demonstrate that our self-help debiasing effectively mitigate cognitive bias\nwithout having to manually craft examples for each bias type.",
+ "authors": "Jessica Echterhoff, Yao Liu, Abeer Alessa, Julian McAuley, Zexue He",
+ "published": "2024-02-25",
+ "updated": "2024-02-25",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI",
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2405.02219v1",
+ "title": "FairEvalLLM. A Comprehensive Framework for Benchmarking Fairness in Large Language Model Recommender Systems",
+ "abstract": "This paper presents a framework for evaluating fairness in recommender\nsystems powered by Large Language Models (RecLLMs), addressing the need for a\nunified approach that spans various fairness dimensions including sensitivity\nto user attributes, intrinsic fairness, and discussions of fairness based on\nunderlying benefits. In addition, our framework introduces counterfactual\nevaluations and integrates diverse user group considerations to enhance the\ndiscourse on fairness evaluation for RecLLMs.\n Our key contributions include the development of a robust framework for\nfairness evaluation in LLM-based recommendations and a structured method to\ncreate \\textit{informative user profiles} from demographic data, historical\nuser preferences, and recent interactions. We argue that the latter is\nessential for enhancing personalization in such systems, especially in\ntemporal-driven scenarios. We demonstrate the utility of our framework through\npractical applications on two datasets, LastFM-1K and ML-1M. We conduct\nexperiments on a subsample of 80 users from each dataset, testing and assessing\nthe effectiveness of various prompt construction scenarios and in-context\nlearning, comprising more than 50 scenarios. This results in more than 4000\nrecommendations (80 * 50 = 4000). Our study reveals that while there are no\nsignificant unfairness issues in scenarios involving sensitive attributes, some\nconcerns remain. However, in terms of intrinsic fairness, which does not\ninvolve direct sensitivity, unfairness across demographic groups remains\nsignificant. The code and data used for this paper are available at:\n\\url{https://shorturl.at/awBFM}.",
+ "authors": "Yashar Deldjoo",
+ "published": "2024-05-03",
+ "updated": "2024-05-03",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.13840v1",
+ "title": "Whose Side Are You On? Investigating the Political Stance of Large Language Models",
+ "abstract": "Large Language Models (LLMs) have gained significant popularity for their\napplication in various everyday tasks such as text generation, summarization,\nand information retrieval. As the widespread adoption of LLMs continues to\nsurge, it becomes increasingly crucial to ensure that these models yield\nresponses that are politically impartial, with the aim of preventing\ninformation bubbles, upholding fairness in representation, and mitigating\nconfirmation bias. In this paper, we propose a quantitative framework and\npipeline designed to systematically investigate the political orientation of\nLLMs. Our investigation delves into the political alignment of LLMs across a\nspectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.\nAcross topics, the results indicate that LLMs exhibit a tendency to provide\nresponses that closely align with liberal or left-leaning perspectives rather\nthan conservative or right-leaning ones when user queries include details\npertaining to occupation, race, or political affiliation. The findings\npresented in this study not only reaffirm earlier observations regarding the\nleft-leaning characteristics of LLMs but also surface particular attributes,\nsuch as occupation, that are particularly susceptible to such inclinations even\nwhen directly steered towards conservatism. As a recommendation to avoid these\nmodels providing politicised responses, users should be mindful when crafting\nqueries, and exercise caution in selecting neutral prompt language.",
+ "authors": "Pagnarasmey Pit, Xingjun Ma, Mike Conway, Qingyu Chen, James Bailey, Henry Pit, Putrasmey Keo, Watey Diep, Yu-Gang Jiang",
+ "published": "2024-03-15",
+ "updated": "2024-03-15",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.SI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2308.10397v2",
+ "title": "FairMonitor: A Four-Stage Automatic Framework for Detecting Stereotypes and Biases in Large Language Models",
+ "abstract": "Detecting stereotypes and biases in Large Language Models (LLMs) can enhance\nfairness and reduce adverse impacts on individuals or groups when these LLMs\nare applied. However, the majority of existing methods focus on measuring the\nmodel's preference towards sentences containing biases and stereotypes within\ndatasets, which lacks interpretability and cannot detect implicit biases and\nstereotypes in the real world. To address this gap, this paper introduces a\nfour-stage framework to directly evaluate stereotypes and biases in the\ngenerated content of LLMs, including direct inquiry testing, serial or adapted\nstory testing, implicit association testing, and unknown situation testing.\nAdditionally, the paper proposes multi-dimensional evaluation metrics and\nexplainable zero-shot prompts for automated evaluation. Using the education\nsector as a case study, we constructed the Edu-FairMonitor based on the\nfour-stage framework, which encompasses 12,632 open-ended questions covering\nnine sensitive factors and 26 educational scenarios. Experimental results\nreveal varying degrees of stereotypes and biases in five LLMs evaluated on\nEdu-FairMonitor. Moreover, the results of our proposed automated evaluation\nmethod have shown a high correlation with human annotations.",
+ "authors": "Yanhong Bai, Jiabao Zhao, Jinxin Shi, Tingjiang Wei, Xingjiao Wu, Liang He",
+ "published": "2023-08-21",
+ "updated": "2023-10-27",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2310.18333v3",
+ "title": "She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models",
+ "abstract": "As the use of large language models (LLMs) increases within society, as does\nthe risk of their misuse. Appropriate safeguards must be in place to ensure LLM\noutputs uphold the ethical standards of society, highlighting the positive role\nthat artificial intelligence technologies can have. Recent events indicate\nethical concerns around conventionally trained LLMs, leading to overall unsafe\nuser experiences. This motivates our research question: how do we ensure LLM\nalignment? In this work, we introduce a test suite of unique prompts to foster\nthe development of aligned LLMs that are fair, safe, and robust. We show that\nprompting LLMs at every step of the development pipeline, including data\ncuration, pre-training, and fine-tuning, will result in an overall more\nresponsible model. Our test suite evaluates outputs from four state-of-the-art\nlanguage models: GPT-3.5, GPT-4, OPT, and LLaMA-2. The assessment presented in\nthis paper highlights a gap between societal alignment and the capabilities of\ncurrent LLMs. Additionally, implementing a test suite such as ours lowers the\nenvironmental overhead of making models safe and fair.",
+ "authors": "Veronica Chatrath, Oluwanifemi Bamgbose, Shaina Raza",
+ "published": "2023-10-20",
+ "updated": "2023-12-15",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.18502v1",
+ "title": "Few-Shot Fairness: Unveiling LLM's Potential for Fairness-Aware Classification",
+ "abstract": "Employing Large Language Models (LLM) in various downstream applications such\nas classification is crucial, especially for smaller companies lacking the\nexpertise and resources required for fine-tuning a model. Fairness in LLMs\nhelps ensure inclusivity, equal representation based on factors such as race,\ngender and promotes responsible AI deployment. As the use of LLMs has become\nincreasingly prevalent, it is essential to assess whether LLMs can generate\nfair outcomes when subjected to considerations of fairness. In this study, we\nintroduce a framework outlining fairness regulations aligned with various\nfairness definitions, with each definition being modulated by varying degrees\nof abstraction. We explore the configuration for in-context learning and the\nprocedure for selecting in-context demonstrations using RAG, while\nincorporating fairness rules into the process. Experiments conducted with\ndifferent LLMs indicate that GPT-4 delivers superior results in terms of both\naccuracy and fairness compared to other models. This work is one of the early\nattempts to achieve fairness in prediction tasks by utilizing LLMs through\nin-context learning.",
+ "authors": "Garima Chhikara, Anurag Sharma, Kripabandhu Ghosh, Abhijnan Chakraborty",
+ "published": "2024-02-28",
+ "updated": "2024-02-28",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.02049v1",
+ "title": "Post Turing: Mapping the landscape of LLM Evaluation",
+ "abstract": "In the rapidly evolving landscape of Large Language Models (LLMs),\nintroduction of well-defined and standardized evaluation methodologies remains\na crucial challenge. This paper traces the historical trajectory of LLM\nevaluations, from the foundational questions posed by Alan Turing to the modern\nera of AI research. We categorize the evolution of LLMs into distinct periods,\neach characterized by its unique benchmarks and evaluation criteria. As LLMs\nincreasingly mimic human-like behaviors, traditional evaluation proxies, such\nas the Turing test, have become less reliable. We emphasize the pressing need\nfor a unified evaluation system, given the broader societal implications of\nthese models. Through an analysis of common evaluation methodologies, we\nadvocate for a qualitative shift in assessment approaches, underscoring the\nimportance of standardization and objective criteria. This work serves as a\ncall for the AI community to collaboratively address the challenges of LLM\nevaluation, ensuring their reliability, fairness, and societal benefit.",
+ "authors": "Alexey Tikhonov, Ivan P. Yamshchikov",
+ "published": "2023-11-03",
+ "updated": "2023-11-03",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "68T50",
+ "I.2.7"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2308.05374v2",
+ "title": "Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment",
+ "abstract": "Ensuring alignment, which refers to making models behave in accordance with\nhuman intentions [1,2], has become a critical task before deploying large\nlanguage models (LLMs) in real-world applications. For instance, OpenAI devoted\nsix months to iteratively aligning GPT-4 before its release [3]. However, a\nmajor challenge faced by practitioners is the lack of clear guidance on\nevaluating whether LLM outputs align with social norms, values, and\nregulations. This obstacle hinders systematic iteration and deployment of LLMs.\nTo address this issue, this paper presents a comprehensive survey of key\ndimensions that are crucial to consider when assessing LLM trustworthiness. The\nsurvey covers seven major categories of LLM trustworthiness: reliability,\nsafety, fairness, resistance to misuse, explainability and reasoning, adherence\nto social norms, and robustness. Each major category is further divided into\nseveral sub-categories, resulting in a total of 29 sub-categories.\nAdditionally, a subset of 8 sub-categories is selected for further\ninvestigation, where corresponding measurement studies are designed and\nconducted on several widely-used LLMs. The measurement results indicate that,\nin general, more aligned models tend to perform better in terms of overall\ntrustworthiness. However, the effectiveness of alignment varies across the\ndifferent trustworthiness categories considered. This highlights the importance\nof conducting more fine-grained analyses, testing, and making continuous\nimprovements on LLM alignment. By shedding light on these key dimensions of LLM\ntrustworthiness, this paper aims to provide valuable insights and guidance to\npractitioners in the field. Understanding and addressing these concerns will be\ncrucial in achieving reliable and ethically sound deployment of LLMs in various\napplications.",
+ "authors": "Yang Liu, Yuanshun Yao, Jean-Francois Ton, Xiaoying Zhang, Ruocheng Guo, Hao Cheng, Yegor Klochkov, Muhammad Faaiz Taufiq, Hang Li",
+ "published": "2023-08-10",
+ "updated": "2024-03-21",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI",
+ "cs.LG"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.07688v1",
+ "title": "CyberMetric: A Benchmark Dataset for Evaluating Large Language Models Knowledge in Cybersecurity",
+ "abstract": "Large Language Models (LLMs) excel across various domains, from computer\nvision to medical diagnostics. However, understanding the diverse landscape of\ncybersecurity, encompassing cryptography, reverse engineering, and managerial\nfacets like risk assessment, presents a challenge, even for human experts. In\nthis paper, we introduce CyberMetric, a benchmark dataset comprising 10,000\nquestions sourced from standards, certifications, research papers, books, and\nother publications in the cybersecurity domain. The questions are created\nthrough a collaborative process, i.e., merging expert knowledge with LLMs,\nincluding GPT-3.5 and Falcon-180B. Human experts spent over 200 hours verifying\ntheir accuracy and relevance. Beyond assessing LLMs' knowledge, the dataset's\nmain goal is to facilitate a fair comparison between humans and different LLMs\nin cybersecurity. To achieve this, we carefully selected 80 questions covering\na wide range of topics within cybersecurity and involved 30 participants of\ndiverse expertise levels, facilitating a comprehensive comparison between human\nand machine intelligence in this area. The findings revealed that LLMs\noutperformed humans in almost every aspect of cybersecurity.",
+ "authors": "Norbert Tihanyi, Mohamed Amine Ferrag, Ridhi Jain, Merouane Debbah",
+ "published": "2024-02-12",
+ "updated": "2024-02-12",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI",
+ "cs.CR"
+ ],
+ "category": "LLM Fairness"
+ },
+ {
+ "url": "http://arxiv.org/abs/2401.00588v1",
+ "title": "Fairness in Serving Large Language Models",
+ "abstract": "High-demand LLM inference services (e.g., ChatGPT and BARD) support a wide\nrange of requests from short chat conversations to long document reading. To\nensure that all client requests are processed fairly, most major LLM inference\nservices have request rate limits, to ensure that no client can dominate the\nrequest queue. However, this rudimentary notion of fairness also results in\nunder-utilization of the resources and poor client experience when there is\nspare capacity. While there is a rich literature on fair scheduling, serving\nLLMs presents new challenges due to their unpredictable request lengths and\ntheir unique batching characteristics on parallel accelerators. This paper\nintroduces the definition of LLM serving fairness based on a cost function that\naccounts for the number of input and output tokens processed. To achieve\nfairness in serving, we propose a novel scheduling algorithm, the Virtual Token\nCounter (VTC), a fair scheduler based on the continuous batching mechanism. We\nprove a 2x tight upper bound on the service difference between two backlogged\nclients, adhering to the requirement of work-conserving. Through extensive\nexperiments, we demonstrate the superior performance of VTC in ensuring\nfairness, especially in contrast to other baseline methods, which exhibit\nshortcomings under various conditions.",
+ "authors": "Ying Sheng, Shiyi Cao, Dacheng Li, Banghua Zhu, Zhuohan Li, Danyang Zhuo, Joseph E. Gonzalez, Ion Stoica",
+ "published": "2023-12-31",
+ "updated": "2023-12-31",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI",
+ "cs.LG",
+ "cs.PF"
+ ],
+ "category": "LLM Fairness"
+ }
+]
\ No newline at end of file