XaiverZ committed
Commit 8325b8e · 1 Parent(s): 13b0aa3
This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. intro_8K/test_introduction_short_2405.00957v1.json +18 -0
  2. intro_8K/test_introduction_short_2405.00958v1.json +20 -0
  3. intro_8K/test_introduction_short_2405.00970v1.json +18 -0
  4. intro_8K/test_introduction_short_2405.00972v1.json +20 -0
  5. intro_8K/test_introduction_short_2405.00977v1.json +17 -0
  6. intro_8K/test_introduction_short_2405.00978v1.json +17 -0
  7. intro_8K/test_introduction_short_2405.00981v1.json +17 -0
  8. intro_8K/test_introduction_short_2405.00982v1.json +17 -0
  9. intro_8K/test_introduction_short_2405.00988v1.json +18 -0
  10. intro_8K/test_introduction_short_2405.01008v2.json +16 -0
  11. intro_8K/test_introduction_short_2405.01029v2.json +17 -0
  12. intro_8K/test_introduction_short_2405.01051v1.json +17 -0
  13. intro_8K/test_introduction_short_2405.01063v1.json +18 -0
  14. intro_8K/test_introduction_short_2405.01097v1.json +21 -0
  15. intro_8K/test_introduction_short_2405.01102v1.json +17 -0
  16. intro_8K/test_introduction_short_2405.01103v1.json +17 -0
  17. intro_8K/test_introduction_short_2405.01116v1.json +16 -0
  18. intro_8K/test_introduction_short_2405.01130v1.json +16 -0
  19. intro_8K/test_introduction_short_2405.01143v1.json +16 -0
  20. intro_8K/test_introduction_short_2405.01159v1.json +16 -0
  21. intro_8K/test_introduction_short_2405.01175v1.json +17 -0
  22. intro_8K/test_introduction_short_2405.01266v1.json +17 -0
  23. intro_8K/test_introduction_short_2405.01270v1.json +16 -0
  24. intro_8K/test_introduction_short_2405.01280v1.json +16 -0
  25. intro_8K/test_introduction_short_2405.01345v1.json +16 -0
  26. intro_8K/test_introduction_short_2405.01350v1.json +17 -0
  27. intro_8K/test_introduction_short_2405.01359v1.json +17 -0
  28. intro_8K/test_introduction_short_2405.01373v1.json +16 -0
  29. intro_8K/test_introduction_short_2405.01379v1.json +16 -0
  30. intro_8K/test_introduction_short_2405.01387v1.json +16 -0
  31. intro_8K/test_introduction_short_2405.01413v1.json +19 -0
  32. intro_8K/test_introduction_short_2405.01418v1.json +17 -0
  33. intro_8K/test_introduction_short_2405.01448v1.json +17 -0
  34. intro_8K/test_introduction_short_2405.01481v1.json +18 -0
  35. intro_8K/test_introduction_short_2405.01496v1.json +16 -0
  36. intro_8K/test_introduction_short_2405.01502v1.json +18 -0
  37. intro_8K/test_introduction_short_2405.01580v1.json +17 -0
  38. intro_8K/test_introduction_short_2405.01591v1.json +18 -0
  39. intro_8K/test_introduction_short_2405.01593v1.json +18 -0
  40. intro_8K/test_introduction_short_2405.01610v1.json +17 -0
  41. intro_8K/test_introduction_short_2405.01625v1.json +18 -0
  42. intro_8K/test_introduction_short_2405.01649v1.json +16 -0
  43. intro_8K/test_introduction_short_2405.01660v1.json +17 -0
  44. intro_8K/test_introduction_short_2405.01668v1.json +17 -0
  45. intro_8K/test_introduction_short_2405.01686v1.json +17 -0
  46. intro_8K/test_introduction_short_2405.01693v1.json +16 -0
  47. intro_8K/test_introduction_short_2405.01705v1.json +18 -0
  48. intro_8K/test_introduction_short_2405.01724v1.json +19 -0
  49. intro_8K/test_introduction_short_2405.01730v1.json +17 -0
  50. intro_8K/test_introduction_short_2405.01744v1.json +19 -0
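The 50 records added under intro_8K/ in this commit all follow the same JSON schema, visible in the raw diffs below. A minimal loading sketch, assuming the files are laid out exactly as in the list above and using only field names that appear in the records below; it is illustrative and not part of any official dataset tooling:

```python
import glob
import json

# Load every record added under intro_8K/ in this commit (paths as listed above).
for path in sorted(glob.glob("intro_8K/test_introduction_short_*.json")):
    with open(path, encoding="utf-8") as f:
        record = json.load(f)
    # Field names taken from the records shown in the diffs below.
    print(record["title"], "|", record["primary_cat"], record["cats"])
    print(record["abstract"][:120].replace("\n", " "), "...")
```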
intro_8K/test_introduction_short_2405.00957v1.json ADDED
@@ -0,0 +1,18 @@
+ {
+ "url": "http://arxiv.org/abs/2405.00957v1",
+ "title": "IntraMix: Intra-Class Mixup Generation for Accurate Labels and Neighbors",
+ "abstract": "Graph Neural Networks (GNNs) demonstrate excellent performance on graphs,\nwith their core idea about aggregating neighborhood information and learning\nfrom labels. However, the prevailing challenges in most graph datasets are\ntwofold of Insufficient High-Quality Labels and Lack of Neighborhoods,\nresulting in weak GNNs. Existing data augmentation methods designed to address\nthese two issues often tackle only one. They may either require extensive\ntraining of generators, rely on overly simplistic strategies, or demand\nsubstantial prior knowledge, leading to suboptimal generalization abilities. To\nsimultaneously address both of these two challenges, we propose an elegant\nmethod called IntraMix. IntraMix innovatively employs Mixup among low-quality\nlabeled data of the same class, generating high-quality labeled data at minimal\ncost. Additionally, it establishes neighborhoods for the generated data by\nconnecting them with data from the same class with high confidence, thereby\nenriching the neighborhoods of graphs. IntraMix efficiently tackles both\nchallenges faced by graphs and challenges the prior notion of the limited\neffectiveness of Mixup in node classification. IntraMix serves as a universal\nframework that can be readily applied to all GNNs. Extensive experiments\ndemonstrate the effectiveness of IntraMix across various GNNs and datasets.",
+ "authors": "Shenghe Zheng, Hongzhi Wang, Xianglong Liu",
+ "published": "2024-05-02",
+ "updated": "2024-05-02",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.AI",
+ "cs.SI"
+ ],
+ "label": "Original Paper",
+ "paper_cat": "Knowledge AND Graph",
+ "gt": "Graph Neural Networks (GNNs) have shown great ability in various graph tasks (Zhou et al., 2020). The essence of GNN lies in the propagation mechanism, where information is gathered from neighbors, and involves supervised learn- ing with labels (Xu et al., 2019). However, most graphs suffer from the absence of high-quality labeled data and *Equal contribution 1Harbin Institute of Technology. Corre- spondence to: Shenghe Zheng <shenghez.zheng@ gmail.com>, Hongzhi Wang <[email protected]>. incomplete node neighbors, which precisely pinpoint the pain points of GNN, i.e., the dual demands for high-quality labels and rich neighborhoods (Ding et al., 2022). Data augmentation is one way to address these issues. How- ever, graphs are non-Euclidean data, making it challeng- ing to apply augmentation methods developed for regular Euclidean data such as images to graphs due to the seman- tic and topological structure of graphs (Han et al., 2022). Therefore, unique augmentation methods need to be pro- posed for graphs. The first step is to clarify the objective of graph augmentation: generating high-quality nodes and enriching neighborhoods. However, most current methods can only solve one aspect. Moreover, some require train- ing generators for the node or edge, incurring extensive costs (Zhao et al., 2021; Liu et al., 2022), while others rely on overly simplistic methods such as random drops that result in marginal improvement (Fang et al., 2023). Addi- tionally, some methods require excessive prior knowledge, leading to weak generalization abilities (Yoo et al., 2022). Therefore, current methods are insufficient to address the two problems. So there is an urgent need for an efficient method that can generate nodes with high-quality labels and construct reasonable neighborhoods for node classification. We observe that existing methods mostly overlook low- quality labels. The distribution of low-quality data often slightly differs from high-quality data and may contain fea- tures not present in current data. Extracting information from low-quality data would contribute to enriching data diversity. In general, noise diverges in all directions, leading the data beyond the expected distribution, and resulting in low-quality data (Fr\u00b4 enay & Verleysen, 2013). A natural idea is to blend noisy data, utilizing the directionality of noise to neutralize noise and generate high-quality data within the correct distribution. Therefore, Mixup (Zhang et al., 2018) comes into our view as a data augmentation method that involves mixing data. Mixup is mathematically defined as \u02c6 x = \u03bbxi + (1 \u2212\u03bb)xj, \u02c6 y = \u03bbyi + (1 \u2212\u03bb)yj, where (xi, yi), (xj, yj) are randomly selected, and y represents the label. However, although Mixup excels in Euclidean data, experiments commonly suggest that Mixup is hard to be effective in node classification (Wu et al., 2023). Therefore, a natural question emerges: Can Mixup solve the graph augmentation problems for node classification? 1 arXiv:2405.00957v1 [cs.LG] 2 May 2024 IntraMix: Intra-Class Mixup Generation for Accurate Labels and Neighbors Label(Noisy) True Label Mixup Label Wrong Message Passing Wrong Mixup Label \u2026 \u2026 High Quality Mixup Label \u2026 \u2026 High Quality Label Correct Message Passing (a). Vallina Mixup in Graph (b). Intra-Class Mixup of IntraMix in Graph Mixup Process Figure 1. a). 
Vanilla Mixup may retain noise in labels, and con- necting generated nodes to nodes used for their generation leads to incorrect propagation. b).IntraMix generates high-quality labels by Intra-Class Mixup and enriches the neighborhood while preserving correctness by connecting generated nodes to high-quality nodes. Due to the characteristics of graphs, using Mixup is highly challenging. Typically, Mixup randomly mixes two data in proportion. In node classification, its poor performance can be attributed to two reasons, as illustrated in Figure 1(a): Mixup relies on existing labeled data to generate data, and some graphs have extremely sparse labels, leading to the information bottleneck in generation (Li et al., 2020). Even if there are enough labeled nodes, based on the assumption that nodes of the same class are more likely to appear as neighbors (neighborhood assumption) (Zhu et al., 2020), us- ing Mixup between random classes generating nodes whose distribution lies in the middle of the distributions of two classes (Wu et al., 2023). This makes it hard to confirm the neighbors of the generated nodes, and connecting them to any class of nodes is inappropriate, potentially confusing the GNNs. The second problem highlights the fact that Mixup cannot seamlessly integrate into node classification. To address the aforementioned issues, we propose IntraMix, a novel graph augmentation method for node classification, as shown in Figure 1(b). The basic idea is to Mixup within nodes of the same class while the generated single labels facilitate neighbor selection, addressing the primary chal- lenge faced by Mixup. After obtaining low-quality labels by pseudo-labeling (Lee et al., 2013), we apply Mixup among data of the same class. The labels of generated data are of much higher quality than the original nodes. Intuitively, if we simplify the labeled noise as \u03f5 \u223cN(0, \u03c32), the mean distribution of two noises \u00af \u03f5 \u223cN(0, 1 2\u03c32), with a smaller variance, increases the likelihood that the label of generated data is accurate. Therefore, we address the issue of sparse high-quality labeled data in graphs by Intra-Class Mixup. For neighbors of the node v generated by Intra-Class Mixup, we connect v to two nodes with high confidence of the same class with v. This has two benefits. Firstly, based on the neighborhood assumption mentioned above, we accurately find neighbors for v, providing it with information gain. Secondly, by connecting v to two nodes that may belong to the same class, we not only bring message interaction to the neighborhoods of these two nodes, but also reduce the noise impact that arises from inaccurate labels. In this way, we construct rich and reasonable node neighborhoods. Therefore, IntraMix simultaneously addresses two data is- sues in node classification. It is a data augmentation method decoupled from GNNs and exhibits strong generalization capabilities. Our key contributions are as follows: \u2022 For the first time, we introduce Intra-Class Mixup as the core data augmentation in node classification, highlighting its effectiveness in generating high-quality labeled data. \u2022 The proposed IntraMix tackles sparse labels and incom- plete neighborhoods in GNNs through a simple but efficient way of Intra-Class Mixup and neighborhood selection. \u2022 Extensive experiments show that IntraMix improves the performance of GNNs on diverse datasets. Theoretical anal- ysis elucidates the rationale behind IntraMix.",
+ "main_content": "Notations: Given a graph G = (V, E), where V = {vi}N i=1 is the set of nodes, and E represents the set of edges, the adjacency relationship between nodes can be represented by A \u2208{0, 1}N\u00d7N, where Aij = 1 if and only if (vi, vj) \u2208E. We use X \u2208RN\u00d7D to denote the feature of nodes. The labels of nodes are represented by Y . Based on the presence or absence of labels, the node set can be divided into Dl = {(xl1, yl1), ...(xlN , ylN )} and Du = {xu1, ...xuN }. We can use pseudo-labeling methods to assign low-quality labels Yu to nodes in Du, getting a low-quality set Dp = {(xu1, yu1), ...(xuN , yuN )}. We define Ni = {vj|Aij = 1} as the neighboring nodes of vi. Node Classification with GNNs: Given a graph, the {|} Node Classification with GNNs: Given a graph G, the node classification involves determining the category of nodes on G. GNNs achieve this by propagating messages on G, representing each node as a vector hv. The propagation for the k-th layer of a GNN is represented as follows: hk v = COM(hk\u22121 v , AGG({hk\u22121 u |u \u2208Nv})) (1) where COM and AGG are COMBINE and AGGREGATE functions, respectively, and hk v denotes the feature of v at 2 IntraMix: Intra-Class Mixup Generation for Accurate Labels and Neighbors Input Graph (with Sparse Labels) High Quality Labeled Node Low Quality Labeled Node Unlabeled Node Mixup Progress Added Edge Low Quality Labeled Graph Find High Quality Label Intra-Class Mixup Generate Neighbor Output GNN dropout\ud835\udc8a\ud835\udc8a \u2026 \u2026 \u2026 \u2026 Label Detector Generate Low Quality Label Label Generator \u2026 \u2026 Figure 2. The workflow of IntraMix involves three main steps. First, it utilizes pseudo-labeling to generate low-quality labels for unlabeled nodes. Following that, Intra-Class Mixup is employed to generate high-quality labeled nodes from low-quality ones. Additionally, it identifies nodes with high confidence in the same class and connects them, thus constructing a rich and reasonable neighborhood. the k-th layer. The output hv in the last layer of GNN is used for classification as yv = softmax(hv), where yv is the predicted label for v. 3. Methodology In this section, we provide a detailed explanation of IntraMix. Firstly, we present the Intra-Class Mixup in 3.1. This method generates high-quality labeled data from lowquality data, addressing the issue of label sparsity. Then, we show the method for finding neighbors for nodes in 3.2. Next, in 3.3, we present the workflow and conduct complexity analysis in 3.4. The framework is illustrated in Figure 2. 3.1. Intra-Class Mixup Motivation: In supervised learning, labels aid in helping the model to explore classification boundaries, and having more labels typically allows for learning finer boundaries (Van Engelen & Hoos, 2020). However, in node classification, the scarcity of high-quality labels hampers GNNs from effectively learning, resulting in poor performance. Nevertheless, directly utilizing low-quality labels from pseudo-labeling introduces noise detrimental to learning. As we know, lowquality labeled data are often closer to the boundaries that GNNs can learn from the current data, containing distribution information (Lee et al., 2013). There is the possibility of generating high-quality data from them as mentioned in Sec 1. We aim to neutralize data noise in different directions by blending data. 
Considering that data generated using Vallina Mixup lies between two existing classes, making it hard to determine neighbors, we innovatively propose Intra-Class Mixup. It not only generates high-quality data but also facilitates the determination of neighbors. Approach: We use pseudo-labeling to transform the unlabeled nodes Du into a set of nodes with low-quality labels Dp. Then, we get D = Dl\u222aDp = {(x1, y1), ..., (xN, yN)}, where there are a few high-quality labels and lots of lowquality labels. In contrast to the Vallina Mixup performed between random samples, to overcome noise, we perform Mixup among nodes with the same low-quality labels to obtain high-quality labeled data guaranteed by Theorem 3.1. The generated dataset is represented as: Dm = {(\u02c6 x, \u02c6 y)|\u02c6 x = M\u03bb(xi, xj), \u02c6 y = yi = yj} (2) where M\u03bb(xi, xj) = \u03bbxi +(1\u2212\u03bb)xj, (xi, yi), (xj, yj) \u2208D (3) The number of generated nodes is manually set. The labels in Dm are of higher quality compared to D, a guarantee provided by Theorem 3.1. The proof can be found in Appendix A.1. In other words, the generated labels exhibit less noise than those of their source nodes. Through Intra-Class Mixup, we obtain a dataset with high-quality labels, leading to improved performance of GNNs. Theorem 3.1. Assuming different classes of data have varying levels of noise, i.e., Pnoise(yi|x) = P(yi|x) + \u03f5i, where Pnoise(yi|x) and P(yi|x) represent the label distribution of class i with and without noise, respectively, and \u03f5i \u223cN(0, \u03c32 i ) is the noise. For Intra-Class Mixup 3 IntraMix: Intra-Class Mixup Generation for Accurate Labels and Neighbors satisfying Equation 2, the probability that the noise is smaller in the generated data than in the original data is 2 \u03c0 arctan[\u03bb2 + (1 \u2212\u03bb)2]\u22121 2 > 0.5. The ratio of the expected noise in the generated data to the expected noise in the original data is [\u03bb2 + (1 \u2212\u03bb)2] 1 2 < 1, indicating that the noise in the generated data is smaller. 3.2. Neighbor Selection Motivation: The strength of GNN lies in gathering information from the neighborhoods to generate node embeddings, making the neighborhood crucial for GNNs (Hamilton et al., 2017). After generating the node v in Sec 3.1, to leverage the advantages of GNN, it is necessary to find neighbors for v. We aim to construct a neighborhood that satisfies two requirements: a). The neighborhood is accurate for v; b). The neighbors of v can obtain richer information through v. If v is simply connected to the node that generated it, since the nodes used for Mixup are mostly labeled with low quality, it is prone to unreasonably connecting two nodes, resulting in incorrect propagation. Since nodes of the same class are more likely to appear in the neighborhood in homogeneous graphs, a natural idea is to connect v with nodes of high confidence in the same class. In this way, we can find the correct neighbors for v and, acting as a bridge, connect the neighborhoods of two nodes of the same class through v to obtain more information, as shown in Figure 1(b). Approach: As mentioned above, neighborhood selection involves two steps. First, finding nodes highly likely to be of the same class as v, and second, determining how to connect v with these nodes. We will now introduce them separately. In the process of finding high-quality nodes, we ingeniously design an ensemble learning approach without additional training costs. 
We employ the GNN utilized for pseudolabeling to predict nodes under n different dropout rates. The nodes consistently predicted in all n trials are considered high-quality. This is essentially an ensemble approach using n GNNs but with only n inference costs, significantly reducing consumption. The selection is expressed as: Dh = {(x, y)|f1(x) = ... = fn(x), (x, y) \u2208D} (4) where fi represents GNNs with different dropout rates. After obtaining the high-quality set Dh, it is time to establish the neighborhoods between Dh and Dm generated by Mixup. To ensure the correctness of the neighborhoods, we adopt the approach of randomly connecting the generated data to high-quality nodes of the same class. The augmented edge set \u02c6 E of the original edge set E can be expressed as: \u02c6 E = E \u222a{e(\u02c6 x, xi)|(\u02c6 x, y) \u2208Dm, (xi, y) \u2208Dh} (5) where e(a, b) represents en edge between nodes a and b. In this way, we not only find reasonable neighbors for the generated nodes but also establish an information exchange Algorithm 1 Workflow of IntraMix Input: Graph G = (V, E), V can be divided into Dl and Du as Sec 2, Class category of nodes C, GNN model f 1: Pre-train f on G according to Equation.6 2: Generate pseudo labels for Du using f, get \u02c6 Du 3: D = Dl \u222a\u02c6 Du 4: Generate Mixup set Dm = {Vm, Em} as Equation.2 5: V = V \u222aVm 6: Generate high-quality set Dh according to Equation.4 7: for (\u02c6 x, \u02c6 y) \u2208Dm do 8: E \u222a{e(\u02c6 x, xi), e(\u02c6 x, xj)}, where (xi/xj, \u02c6 y) \u2208Dh 9: end for Output: the augmented graph G = (V, E) path between two nodes of the same class. Additionally, by not directly connecting the two nodes, potential noise impacts are avoided. The elimination effect of noise is guaranteed by Theorem 3.2. The detailed proof can be found in Appendix A.2. Through this method, the issue of missing neighborhoods in the graph is alleviated, and a graph with richer structural information is constructed. Theorem 3.2. The label noise can be equivalently represented as node noise, i.e., Pnoise(x|yi) = P(x|yi) + \u03b4i, where \u03b4i \u223cN(0, \u03c32 xi). Assuming Equation 1 can be simplified as hk v = MLP k[(1 + \u03b7k)hk\u22121 v + 1 |Nv| P u\u2208Nv hk\u22121 u ], where \u03b7k is a learnable variable. Let m and n be nodes from the i-th class, xm \u223cP(x|yi), xn \u223cPnoise(x|yi). In a two-layer GNN, the ratio of the expected impact of n\u2019s noise on m when nodes connected through Intra-Class Mixup to the expected impact when directly connecting m and n is q (\u03bb2 + (1 \u2212\u03bb)2) + 1 4(2+\u03b71+\u03b72). This ratio can be controlled to be less than 1 by adjusting learnable \u03b7, indicating that our method leads to a smaller noise impact. 3.3. Workflow In this section, we will briefly introduce the loss function in training and the overall workflow of IntraMix. Firstly, we demonstrate the most commonly used crossentropy loss, and it can be replaced with any function based on the task. Given the node feature X, the node labels Y , and the training set DT , the loss function is as follows: L = \u2212 1 |DT | X (xi,yi)\u2208DT C X c=1 I[yi = c] log P(c|xi, \u03b8) (6) where C is the class number, \u03b8 denotes the GNN parameters, and P(c|xi, \u03b8) is the probability of xi belonging to class c. Next, we introduce the workflow of IntraMix. For detailed steps, refer to Algorithm 1. The process begins by generating low-quality labels for unlabeled nodes through pseudolabeling (lines 1-3). 
Following that, high-quality labeled 4 IntraMix: Intra-Class Mixup Generation for Accurate Labels and Neighbors Table 1. Node Classification accuracy(%) with semi-supervised Graphs. The average result of 30 runs is reported. Models Strategy Cora CiteSeer Pubmed CS Physics GCN Original 81.51 \u00b1 0.42 70.30 \u00b1 0.54 79.06 \u00b1 0.31 91.24 \u00b1 0.43 92.56 \u00b1 1.31 GraphMix 82.29 \u00b1 3.71 74.55 \u00b1 0.52 82.82 \u00b1 0.53 91.90 \u00b1 0.22 90.43 \u00b1 1.76 CODA 83.47 \u00b1 0.48 73.48 \u00b1 0.24 78.50 \u00b1 0.35 91.01 \u00b1 0.75 92.57 \u00b1 0.41 DropMessage 83.33 \u00b1 0.41 71.83 \u00b1 0.35 79.20 \u00b1 0.25 91.50 \u00b1 0.31 92.74 \u00b1 0.72 MH-Aug 84.21 \u00b1 0.38 73.82 \u00b1 0.82 80.51 \u00b1 0.32 92.52 \u00b1 0.37 92.91 \u00b1 0.46 LA-GCN 84.61 \u00b1 0.57 74.70 \u00b1 0.51 81.73 \u00b1 0.71 92.60 \u00b1 0.26 93.26 \u00b1 0.43 NodeMixup 83.47 \u00b1 0.32 74.12 \u00b1 0.35 81.16 \u00b1 0.21 92.69 \u00b1 0.44 93.97 \u00b1 0.45 IntraMix 85.25 \u00b1 0.42 74.80 \u00b1 0.46 82.98 \u00b1 0.54 92.86 \u00b1 0.04 94.27 \u00b1 0.14 GAT Original 82.04 \u00b1 0.62 71.82 \u00b1 0.83 78.00 \u00b1 0.71 90.52 \u00b1 0.44 91.97 \u00b1 0.65 GraphMix 82.76 \u00b1 0.62 73.04 \u00b1 0.51 78.82 \u00b1 0.44 90.57 \u00b1 1.03 92.90 \u00b1 0.42 CODA 83.36 \u00b1 0.31 72.93 \u00b1 0.42 79.37 \u00b1 1.33 90.41 \u00b1 0.41 92.09 \u00b1 0.62 DropMessage 82.20 \u00b1 0.24 71.48 \u00b1 0.37 78.14 \u00b1 0.25 91.02 \u00b1 0.51 92.03 \u00b1 0.72 MH-Aug 84.52 \u00b1 0.91 73.44 \u00b1 0.81 79.82 \u00b1 0.55 91.26 \u00b1 0.35 92.72 \u00b1 0.42 LA-GAT 84.72 \u00b1 0.45 73.71 \u00b1 0.52 81.04 \u00b1 0.43 91.52 \u00b1 0.31 93.42 \u00b1 0.45 NodeMixup 83.52 \u00b1 0.31 74.30 \u00b1 0.12 81.26 \u00b1 0.34 92.69 \u00b1 0.21 93.87 \u00b1 0.30 IntraMix 85.03 \u00b1 0.45 74.50 \u00b1 0.24 81.76 \u00b1 0.32 92.40 \u00b1 0.24 94.12 \u00b1 0.24 SAGE Original 78.12 \u00b1 0.32 68.09 \u00b1 0.81 77.30 \u00b1 0.74 91.01 \u00b1 0.93 93.09 \u00b1 0.41 GraphMix 80.09 \u00b1 0.82 70.97 \u00b1 1.21 79.85 \u00b1 0.42 91.55 \u00b1 0.33 93.25 \u00b1 0.33 CODA 83.55 \u00b1 0.14 73.24 \u00b1 0.24 79.28 \u00b1 0.46 91.64 \u00b1 0.41 93.42 \u00b1 0.36 MH-Aug 84.50 \u00b1 0.39 75.25 \u00b1 0.44 80.68 \u00b1 0.36 92.27 \u00b1 0.49 93.58 \u00b1 0.53 LA-SAGE 84.41 \u00b1 0.35 74.16 \u00b1 0.32 80.72 \u00b1 0.42 92.41 \u00b1 0.54 93.41 \u00b1 0.31 NodeMixup 81.93 \u00b1 0.22 74.12 \u00b1 0.44 79.97 \u00b1 0.53 91.97 \u00b1 0.24 94.76 \u00b1 0.25 IntraMix 84.72 \u00b1 0.34 74.37 \u00b1 0.45 81.02 \u00b1 0.49 92.80 \u00b1 0.26 94.87 \u00b1 0.04 APPNP Original 80.03 \u00b1 0.53 70.30 \u00b1 0.61 78.67 \u00b1 0.24 91.79 \u00b1 0.55 92.36 \u00b1 0.81 GraphMix 82.98 \u00b1 0.42 70.26 \u00b1 0.43 78.73 \u00b1 0.45 91.53 \u00b1 0.61 94.12 \u00b1 0.14 DropMessage 82.37 \u00b1 0.23 72.65 \u00b1 0.53 80.04 \u00b1 0.42 91.25 \u00b1 0.51 93.54 \u00b1 0.63 MH-Aug 85.04 \u00b1 0.41 74.52 \u00b1 0.32 80.71 \u00b1 0.31 92.95 \u00b1 0.34 94.03 \u00b1 0.25 LA-APPNP 85.42 \u00b1 0.33 74.83 \u00b1 0.29 81.41 \u00b1 0.55 92.71 \u00b1 0.47 94.52 \u00b1 0.27 NodeMixup 83.54 \u00b1 0.45 75.12 \u00b1 0.33 79.93 \u00b1 0.12 92.82 \u00b1 0.24 94.34 \u00b1 0.22 IntraMix 85.99 \u00b1 0.48 75.25 \u00b1 0.42 81.96 \u00b1 0.34 93.24 \u00b1 0.21 94.79 \u00b1 0.14 data is generated by Intra-class Mixup. Subsequently, a reasonable neighborhood is constructed (lines 6-8). The output is a graph better suitable for node classification. 3.4. 
Complexity Analysis Assuming the number of generated nodes determined during the Mixup generation process is m, the time cost incurred during the generation and finding neighbors of nodes is O(m). Since our Intra-Class Mixup for node classification does not incur additional computations, its complexity is the same as the original GNNs. Assuming the original time complexity of the GNN is O(|V | \u00d7 F \u00d7 F \u2032) + O(|E| \u00d7 F \u2032), where F denotes the input feature dimension of nodes, and F \u2032 is the hidden layer dimension of GNN. The time complexity after using IntraMix is O(|V | \u00d7 F \u00d7 F \u2032) + O(|E| \u00d7 F \u2032) + O(m \u00d7 F \u00d7 F \u2032) + O(2m \u00d7 F \u2032) + O(m). As in most cases, m \u226a|V |, the time complexity is in the same order of magnitude as the original GNN. 4. Experiment In this section, we show the excellent performance of IntraMix in both semi-supervised and full-supervised node classification using various GNNs across multiple datasets. Sec 4.3 shows the inductive learning ability of IntraMix. The analysis includes how IntraMix overcomes over-smoothing, and we conduct ablation experiments for in-depth analysis. 4.1. Semi-supervised Learning Datasets: We evaluate IntraMix on commonly used semi-supervised datasets for node classification, including Cora, CiteSeer, Pubmed (Sen et al., 2017), CS, and Physics (Shchur et al., 2018). We follow the original splits for these datasets. Details can be found in Appendix B.1. Baselines: We utilize four popular GNNs: GCN (Kipf & Welling, 2017), GAT (Veli\u02c7 ckovi\u00b4 c et al., 2018), GraphSAGE (SAGE) (Hamilton et al., 2017), and APPNP (Gasteiger 5 IntraMix: Intra-Class Mixup Generation for Accurate Labels and Neighbors et al., 2019). Additionally, we compare IntraMix with various mainstream graph augmentation methods (Verma et al., 2021; Duan et al., 2023; Fang et al., 2023; Park et al., 2021; Liu et al., 2022; Lu et al., 2023). Details can be found in Appendix B.2. For each graph augmentation applied to each GNN, we use the same hyperparameters for fairness. When comparing with other methods, we use the settings from their open-source code and report the average results over 30 runs. All experiments are conducted on an RTX-3090. Result: It is crucial to note that semi-supervised experiments are more important than fully supervised ones. This is primarily due to the sparse labels in most real-world scenarios of graphs. The results of semi-supervised experiments reflect the method\u2019s potential when applied to real-world situations. Observing the results in Table 1, it is evident that IntraMix demonstrates superior performance across almost all GNNs and datasets. This indicates that the IntraMix generation of high-quality labeled nodes and neighborhoods, enriches the knowledge on the graph, making the graph more conducive for GNNs. Additionally, it is noteworthy that IntraMix exhibits greater advantages on SAGE and APPNP. This is attributed to the use of neighbor sampling for message aggregation of SAGE and the customized message-passing of APPNP, both of which prioritize the correct and richness of the neighborhood of nodes in comparison to GCN and GAT. The superiority on these two models further validates the rationality and richness of the neighborhoods constructed by IntraMix and the correctness of the high-quality labeled generated nodes. 4.2. 
Full-supervised Learning Datasets: To evaluate IntraMix on fully supervised datasets, we utilized the well-known ogbn-arxiv (Hu et al., 2020) and Flickr (Zeng et al., 2020) datasets, following standard partitioning methods. For detailed information on the datasets, please refer to the Appendix B.1. Baselines: In this part, we consider three GNNs: GCN, GAT, and GraphSAGE. Additionally, we compare IntraMix with various mainstream methods, and details about the comparative methods can be found in Appendix B.2. Results: Observing the results in Table 2, it is evident that in fully supervised experiments, IntraMix consistently demonstrates superior performance across almost all GNNs and datasets, aligning with the outcomes in semi-supervised learning. This further affirms the rationale behind IntraMix. Despite the abundance of labeled data in fully supervised learning, which diminishes the improvement of high-quality labeled nodes provided by IntraMix, the rich and accurate neighborhoods constructed by IntraMix still enhance the quality of the graph. IntraMix constructs richer paths for propagation by connecting nodes with label similarity, thereby enhancing node representations. Consequently, Table 2. Node Classification accuracy(%) with full supervised graphs. The average result of 10 runs is reported. Models Strategy ogbn-arxiv Flickr GCN Original 71.74 \u00b1 0.29 51.88 \u00b1 0.41 FLAG 72.04 \u00b1 0.20 52.05 \u00b1 0.16 LAGCN 72.08 \u00b1 0.14 52.63 \u00b1 0.16 NodeMixup 73.26 \u00b1 0.25 52.54 \u00b1 0.21 IntraMix 73.51 \u00b1 0.22 53.03 \u00b1 0.25 GAT Original 73.65 \u00b1 0.11 49.88 \u00b1 0.32 FLAG 73.71 \u00b1 0.13 51.34 \u00b1 0.27 LAGAT 73.77 \u00b1 0.12 52.63 \u00b1 0.16 NodeMixup 73.24 \u00b1 0.32 52.82 \u00b1 0.36 IntraMix 73.85 \u00b1 0.12 53.49 \u00b1 0.09 SAGE Original 71.49 \u00b1 0.27 50.47 \u00b1 0.21 FLAG 72.19 \u00b1 0.21 52.39\u00b1 0.28 LASAGE 72.30 \u00b1 0.12 54.24 \u00b1 0.25 NodeMixup 72.01 \u00b1 0.35 53.49 \u00b1 0.24 IntraMix 73.61 \u00b1 0.09 54.65 \u00b1 0.26 Table 3. Node Classification with inductive settings. Models Strategy Cora CiteSeer GAT Original 81.3 \u00b1 0.5 70.4 \u00b1 1.2 LAGAT 82.7 \u00b1 0.8 72.1 \u00b1 0.7 NodeMixup 83.1 \u00b1 0.5 71.8 \u00b1 0.9 IntraMix 83.8 \u00b1 0.6 72.9 \u00b1 0.6 SAGE Original 80.1 \u00b1 1.7 69.1 \u00b1 2.9 LAGSAGE 81.7 \u00b1 0.8 73.0 \u00b1 1.1 NodeMixup 81.9 \u00b1 0.5 73.1 \u00b1 1.3 IntraMix 82.9 \u00b1 0.4 73.9 \u00b1 0.8 even in fully supervised datasets where labeled data is sufficient, IntraMix demonstrates excellent capabilities. 4.3. Inductive Learning The experiments mentioned above are conducted in a transductive setting. To demonstrate the reliability of IntraMix in inductive learning setups, we conduct inductive experiments on Cora and CiteSeer, combining GraphSAGE and GAT. The results are presented in Table 3. In inductive learning, GNNs can only observe non-test data during training, which aligns more closely with real-world scenarios. From the results, it is evident that IntraMix also exhibits excellent performance in inductive learning. This strongly validates that the nodes with more accurate labels and rich neighborhoods constructed by IntraMix indeed provide the graph with richer information. As a result, GNNs trained with IntraMix can learn more comprehensive patterns and make accurate predictions even for unseen nodes, confirming IntraMix as a generalizable graph augmentation framework applicable to real-world scenarios. 
6 IntraMix: Intra-Class Mixup Generation for Accurate Labels and Neighbors Table 4. Ablation of Intra-Class Mixup on GCN. w con connects the generated nodes with nodes used in generation, and sim con is the similar connection. \u2191means an improvement to the original. Strategy Cora CiteSeer Pubmed Original 81.5 \u00b1 0.4 70.3 \u00b1 0.5 79.0 \u00b1 0.3 Only PL 82.9 \u00b1 0.2 72.3 \u00b1 0.3 79.5 \u00b1 0.2 Only UPS 83.1 \u00b1 0.4 72.8 \u00b1 0.6 79.7 \u00b1 0.4 Mixup(w/o con) 58.9 \u00b1 22.3 52.3 \u00b1 17.6 70.0 \u00b1 10.8 Mixup(w con) 83.0 \u00b1 1.2 71.3 \u00b1 3.5 79.4 \u00b1 1.1 Mixup(sim con) 83.1 \u00b1 1.8 71.5 \u00b1 1.9 79.8 \u00b1 3.8 Intra-Class Mixup 85.2 (\u21913.7) 74.8 (\u21914.5) 82.9 (\u21913.9) Table 5. Explore the effect of generating node with Intra-Class Mixup. Zeros means replacing the generated nodes with an allzero vector, and Ones means replacing them with an all-one vector. Strategy Cora CiteSeer Pubmed Original 81.5 \u00b1 0.4 70.3 \u00b1 0.5 79.0 \u00b1 0.3 Ones 31.9 (\u219349.6) 21.5 (\u219348.8) 38.1 (\u219340.9) Zeros 83.8 (\u21912.3) 73.6 (\u21913.3) 80.7 (\u21911.7) IntraMix 85.2 (\u21913.7) 74.8 (\u21914.5) 82.9 (\u21913.9) 4.4. Ablation Experiment To demonstrate the effects of each IntraMix component, we conduct ablation experiments using GCN on Cora, CiteSeer, and Pubmed. All other parts of IntraMix are kept unchanged except for the mentioned ablated components. Intra-Class Mixup: We discuss the effectiveness of IntraClass Mixup. We compare it with methods that do not use Mixup, relying solely on pseudo-labeling(PL), and introduce an advanced PL method called UPS (Rizve et al., 2021). Additionally, we compare Intra-Class Mixup with Vallina Mixup, which employs various connection methods for generated nodes. The results are shown in Table 4. Among these methods, Intra-Class Mixup demonstrates the best performance, demonstrating nearly 3.5% improvement in accuracy compared to the original GCN. This is because, compared to methods using only pseudo-labels, Intra-Class Mixup generates higher-quality labeled nodes and constructs richer neighborhoods, enabling GNNs to get more information. Regarding Mixup, we utilize three connecting methods: treating generated nodes as isolated (w/o con), connecting them with nodes used for generation (w con), and connecting them with nodes with similar embeddings (sim con). However, none of these methods perform well. As Theorem 3.1 suggests, Intra-Class Mixup ensures the improvement of label quality for each class, a guarantee that Mixup cannot provide. Furthermore, the fact that Intra-Class Mixup data have a single label makes it convenient to select similar neighbors. In contrast, Mixup generates data with mixed labels, introducing the risk of connecting to any class of node and potentially causing errors in propagation. This Table 6. Effects of Neighbor Selection on GCN. \u2191means improvement compared to the original, while \u2193indicates a reduction. Strategy Cora CiteSeer Pubmed Original-GCN 81.5 \u00b1 0.4 70.3 \u00b1 0.5 79.0 \u00b1 0.3 Direct Con 83.6 (\u21912.1) 73.4 (\u21913.1) 78.0 (\u21931.0) Random Con 76.7 (\u21934.8) 67.0 (\u21933.3) 65.1 (\u219313.9) Without Con 82.9 (\u21911.4) 72.8 (\u21912.5) 79.4 (\u21910.4) Vallina Con 84.3 (\u21912.8) 73.6 (\u21913.3) 79.8 (\u21910.8) Similar Con 84.5 (\u21913.0) 74.0 (\u21913.7) 80.3 (\u21911.3) IntraMix 85.2 (\u21913.7) 74.8 (\u21914.5) 82.9 (\u21913.9) is a key reason for the poor performance of Mixup in node classification. 
Simultaneously, in Table 5, when replacing the node generated by IntraMix with an all-zero vector and an all-one vector, both perform worse than IntraMix. This phenomenon indicates that, when excluding the influence of neighbor selection, the nodes generated by IntraMix are also effective for graph augmentation. Neighbor Selection: This part shows the importance of Neighbor Selection. We compare various selection methods together with the direct connection of high-quality labeled nodes of the same class (Direct con) in Table 6. We observe that these methods are less effective than IntraMix. Direct con is not effective because, despite high-quality labeling, noise is still present. However, IntraMix avoids this problem. The experimental results validate Theorem 3.2. Compared to other neighbor selection methods, IntraMix proposes a simple way to select nodes more likely to serve as neighbors, leading to more accurate information passing. Among the methods, Vallina Con indicates connecting the generated node to the nodes used for generation. Similar Con (SC) denotes connecting the nodes to nodes with similar embeddings. SC performs great, highlighting the importance of selecting similar nodes as neighbors, aligning with our intuition about the same class among neighborhoods. However, SC is not as good as IntraMix, mainly because the initial neighbors for generated nodes are empty, making it hard to provide accurate embeddings for similarity measurement. Even if the similarity could be determined, connecting overly similar nodes resulted in insufficient neighborhood information. In comparison, IntraMix connects nodes with the same label, maintaining neighborhood correctness while connecting nodes that are not extremely similar. IntraMix enriches the knowledge within the neighborhood. In Table 5, using an all-zero vector to eliminate the influence of Mixup still shows a performance improvement. This reflects the rationality of our proposed Neighbor Selection method, which is effective for graph augmentation. Utilization of unlabeled data: In this part, we show the importance of using unlabeled nodes to obtain low-quality data, and the results are shown in Figure 3(a). The poor 7 IntraMix: Intra-Class Mixup Generation for Accurate Labels and Neighbors Cora Pubmed CiteSeer 72 74 80 82 84 Accuracy(%) Usage Ratio 0.0 0.2 0.4 0.6 0.8 1.0 (a) Utilization of unlabeled data 0.0 0.1 0.2 0.3 0.4 0.5 83.0 83.5 84.0 84.5 85.0 85.5 86.0 l Accuracy(%) (b) Sensitivity Analysis of \u03bb 2 3 4 5 6 7 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 MADGap Metric Layer GCN GAT GraphSAGE LAGCN GRAND IntraMix (c) Over-smoothing Analysis on Cora Figure 3. a) Experimental results using different proportions of unlabeled nodes show that performance improves as more unlabeled nodes are utilized. b) Sensitivity analysis of \u03bb indicates that the best performance is achieved when \u03bb = 0.5. c) Analysis reveals that IntraMix demonstrates effective capabilities in overcoming over-smoothing with deep GNNs. performance of Mixup only within sparsely labeled nodes is due to the limited nodes. Even though Mixup can augment the label information to some extent, the insufficient nodes used for generation create a bottleneck in information gain, hindering GNNs from learning enough knowledge. Despite the low-quality labels provided by pseudo-labeling for unlabeled data, Intra-Class Mixup enhances the label quality, thus providing GNNs with ample knowledge space. 
Sensitivity Analysis of \u03bb: This part discusses the impact of \u03bb in Intra-Class Mixup. The experiment is conducted using GCN on Cora, and detailed results are presented in Figure 3 (b). According to Theorem 3.1, the best noise reduction in each class label is achieved when \u03bb = 0.5. The results validate our theoretical analysis, showing that the performance of GCN gradually improves as \u03bb varies from 0 to 0.5. This indicates that when \u03bb is closer to 0.5, the quality of the generated data labels is higher. Therefore, we choose \u03bb \u223cB(2, 2), where B denotes Beta Distribution. 4.5. Over-smoothing Analysis As is well known, deep GNNs can result in over-smoothing, a phenomenon characterized by the convergence of node embeddings. We show the ability to alleviate over-smoothing of IntraMix in Figure 3(c). We use MADgap (Chen et al., 2020) as the metric, where a larger MADgap indicates a milder over-smoothing. Surprisingly, although IntraMix is not specifically designed to address over-smoothing, it shows a strong ability to counteract over-smoothing, reaching a level similar to GRAND (Feng et al., 2020), a method specialized in addressing over-smoothing. This is attributed to the bridging effect of the generated nodes, connecting nodes of the same class with high confidence in a random manner. This process resembles random propagation, providing effective resistance against over-smoothing. Additionally, the richer neighborhood and node features inherently mitigate over-smoothing issues (Keriven, 2022). 5. Related Work Graph Augmentation: The primary purpose of graph augmentation is to address two common challenges in graphs encountered by GNN, scarcity of labels and incomplete neighborhoods (Ding et al., 2022). Graph augmentation can be categorized into Node Manipulation (Verma et al., 2021), Edge Manipulation (Rong et al., 2020), Feature Manipulation (Feng et al., 2019), and Subgraph Manipulation (Park et al., 2022). However, existing methods either require complex generators (Liu et al., 2022) or extensive empirical involvement (Wang et al., 2020), failing to effectively address the two issues. The proposed IntraMix offers a simple solution to simultaneously tackle the two challenges faced by GNNs. Details can be found in Appendix C.2. Mixup: Mixup is a promising data augmentation medthod (Zhang et al., 2018), enhancing the generalization of various tasks (Verma et al., 2019; Sun et al., 2020). However, there has been limited focus on the application of Mixup in node classification. We address the shortcomings of Mixup in node classification, proposing IntraMix, which involves Intra-Class Mixup. IntraMix provides richer information for the graph, improving the performance of GNNs in node classification. Details can be found in Appendix C.1. 6. Conclusion This paper presents IntraMix, an elegant graph augmentation method for node classification. We utilize Intra-Class Mixup to generate high-quality labels to address the issue of sparse high-quality labels. To address the problem of limited neighborhoods, we connect the generated nodes with nodes that are highly likely from the same class. IntraMix provides an elegant solution to the dual challenges faced by graphs. Moreover, IntraMix is a flexible method that can be applied to all GNNs. Future work will focus on exploring neighbor selection methods to construct more realistic graphs. 
8 IntraMix: Intra-Class Mixup Generation for Accurate Labels and Neighbors Impact Statements This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none which we feel must be specifically highlighted here."
+ }
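The "main_content" field of the record above defines Intra-Class Mixup as Dm = {(x_hat, y_hat) | x_hat = lam*x_i + (1-lam)*x_j, y_hat = y_i = y_j}, with lam drawn from Beta(2, 2) (Equations 2-3 and Section 4.4 of the quoted paper text). The NumPy sketch below only illustrates that generation step on arbitrary node features; it is not the authors' code, and the function name, seed, and array shapes are invented for the example:

```python
import numpy as np

def intra_class_mixup(features, labels, n_generate, seed=0):
    """Illustrative Intra-Class Mixup (Eq. 2-3 in the record above):
    x_hat = lam * x_i + (1 - lam) * x_j with y_hat = y_i = y_j,
    mixing only pairs that share the same (possibly pseudo) label."""
    rng = np.random.default_rng(seed)
    new_x, new_y = [], []
    for _ in range(n_generate):
        cls = rng.choice(np.unique(labels))        # pick a class to generate for
        idx = np.flatnonzero(labels == cls)        # nodes carrying that label
        i, j = rng.choice(idx, size=2, replace=True)
        lam = rng.beta(2.0, 2.0)                   # lambda ~ Beta(2, 2), as in the quoted text
        new_x.append(lam * features[i] + (1.0 - lam) * features[j])
        new_y.append(cls)
    return np.stack(new_x), np.array(new_y)

# Toy usage: 10 nodes, 4-dim features, 3 classes.
X = np.random.rand(10, 4)
y = np.random.randint(0, 3, size=10)
X_new, y_new = intra_class_mixup(X, y, n_generate=5)
```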
intro_8K/test_introduction_short_2405.00958v1.json ADDED
@@ -0,0 +1,20 @@
+ {
+ "url": "http://arxiv.org/abs/2405.00958v1",
+ "title": "Generative manufacturing systems using diffusion models and ChatGPT",
+ "abstract": "In this study, we introduce Generative Manufacturing Systems (GMS) as a novel\napproach to effectively manage and coordinate autonomous manufacturing assets,\nthereby enhancing their responsiveness and flexibility to address a wide array\nof production objectives and human preferences. Deviating from traditional\nexplicit modeling, GMS employs generative AI, including diffusion models and\nChatGPT, for implicit learning from envisioned futures, marking a shift from a\nmodel-optimum to a training-sampling decision-making. Through the integration\nof generative AI, GMS enables complex decision-making through interactive\ndialogue with humans, allowing manufacturing assets to generate multiple\nhigh-quality global decisions that can be iteratively refined based on human\nfeedback. Empirical findings showcase GMS's substantial improvement in system\nresilience and responsiveness to uncertainties, with decision times reduced\nfrom seconds to milliseconds. The study underscores the inherent creativity and\ndiversity in the generated solutions, facilitating human-centric\ndecision-making through seamless and continuous human-machine interactions.",
+ "authors": "Xingyu Li, Fei Tao, Wei Ye, Aydin Nassehi, John W. Sutherland",
+ "published": "2024-05-02",
+ "updated": "2024-05-02",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.AI",
+ "cs.HC",
+ "cs.SY",
+ "eess.SY"
+ ],
+ "label": "Original Paper",
+ "paper_cat": "Diffusion AND Model",
+ "gt": "Manufacturing systems confront persistent uncertainties with varying forms, urgencies, and impacts. Firstly, the advent of mass personalization [1] cou- pled with regulatory and standard changes adds complexity to production requirements, mandating systems to adeptly navigate evolving demands and obligations. Secondly, production disruptions [2, 3], like natural disasters, pandemics, financial crises, and geopolitical conflicts trigger resource scarci- ties and shifts in consumer behavior. 20\u201330% of firms and businesses are compelled to close following a major disruption [4]. Lastly, emerging manu- facturing initiatives driven by sustainable, societal, and ecological goals have the potential to reshape production objectives, necessitating a thorough re- assessment of existing systems [5]. Future manufacturing systems require the flexibility to promptly adapt to uncertainties and maintain a nuanced balance between emerging initiatives and constraints. The first integration of flexibility into MSs traces back to the 1960s with the inception of flexible manufacturing systems [6]. Despite efforts to im- prove hardware and software flexibility, the inherent NP-hard complexity of centralized control in these manufacturing systems, especially with a grow- ing number of assets and planning horizons, impedes system responsiveness to uncertainties. Increasing autonomy of manufacturing assets, including robots, vehicles, and mobile manipulators, poses an opportunity to address this challenge by delegating decision authority to each asset. Manufacturers like Audi have shifted from rigid line production to segmented workstations with autonomous assets [7]. Assets tailored for specific manufacturing tasks, such as Little Helper, OMRON MoMa, and KMR IIWA, have demonstrated efficacy across automotive and aerospace industries [8]. Through strate- gic task assignment and routing, these assets have the potential to realize adaptable layouts and schedules, anticipating up to 30% increase in worker utilization and output levels [9]. Emerging manufacturing systems, exemplified by agent-based manufac- turing [10], matrix production systems [11], and anarchic manufacturing [12], incorporate asset autonomy via decentralized or distributed control. How- ever, as autonomous assets become more complex and flexible, facilitated by open interfaces and universal standards [13, 14], these control approaches encounter challenges as well. Each asset often lacks comprehensive aware- ness of the entire system and its constraints [15], resulting in conflicts in aligning individual plans and impeding the attainment of optimal solutions. 2 More importantly, optimal solutions are contingent upon effectively balanc- ing diverse objectives and stakeholders\u2019 preferences [16, 17], which may not be completely and explicitly modeled. To fully exploit the benefits of as- set autonomy, a revolutionary approach is imperative but missing\u2014one that efficiently manages a multitude of diverse assets for various production ob- jectives under uncertainties, all while ensuring the centrality of humans in decision-making. Generative models provide a transformative opportunity to address these challenges through their distinctive generative capabilities, probabilistic modeling, and interactive decision-making. Herein, we proposed a GMS, signifying a fundamental transition from traditional explicit model of now to implicit knowledge of future. 
Drawing inspiration from the envi- sions of the daydreaming factory [18], our approach involves exploring diverse combinations of decisions and uncertainties to generate numerous potential futures. Through utilizing the generative models, including diffusion mod- els and ChatGPT, GMS proficiently captures underlying patterns and dis- tributions of the decisions from the future experience, facilitating creative decision-making even for scenarios beyond the initial scope of exploration.",
+ "main_content": "We envisage a synergistic integration of stationary machinery, autonomous assets, and diverse human workforces in the futuristic manufacturing systems. Considering the growing asset autonomy and mobility, we propose that autonomous assets and humans can dynamically relocate and self-organize across various workstations to enhance manufacturing operations and streamline material flows. GMS is designed to skillfully adjust configuration and schedules to handle uncertainties and production objectives, all while under human supervision. As illustrated in Fig. 1, humans contribute concerns and suggestions for various objectives to the autonomous assets, fostering collaborative expertise under a unified production floor. To facilitate human interaction with assets, GMS employs large language models like ChatGPT, XLNet, and Turning-NLP to convert human inquiries into machine language. Image generation models, including diffusion models, BigGAN, and DALL-E, are then adopted to generate system configurations (humans and assets at each station) in response to human inquiries. Further granularity in decisions is achieved through operational schedules and task allocations, distributing tasks across stations and between humans and robots while considering material and process constraints. 3 Figure 1: Schematic representation of GMS. Left: assets receiving human inquiries; Middle: GMS models get trained from explorations of futures and process human inquiries to sample new decisions; Right: GMS responds to human inquiries by providing diverse options for configurations and schedules. Unlike existing approaches relying on explicit models and their convergence to find optimal decisions (model-optimum), GMS employs a trainingsampling approach by extensively exploring future scenarios, GMS implicitly learns probabilistic distributions of good decisions, assembling these distributions in accordance with human desires and production objectives for decision sampling. This shift from a model-optimum to a training-sampling approach not only addresses computational challenges in existing manufacturing systems but also introduces the following benefits: Creativity: The incorporation of noises during sampling enables a broader spectrum of potential decisions. Additionally, generative models can innovate 4 novel decisions through purposeful combinations of learned distributions, a critical aspect of aligning with emerging human inquiries and unforeseen scenarios. Resilience: Training-sampling boosts system resilience in two folds: firstly, sampling decisions prove substantially more efficient as compared to optimization convergence, which enhances system responsiveness amidst uncertainties; secondly, sampling provides varied solutions for a wide range of scenarios, equipping the GMS with a diverse set of potential responses to enhance resilience. Human-centricity: The implicit knowledge of GMS seamlessly integrates with human inquiry, knowledge, and expertise, allowing humans to tap into the nuanced insights within generative models. This synergy enables a more cohesive and effective collaboration between humans and autonomous assets, where humans can harness the capabilities of GMS to augment their decision-making while instilling a sense of ownership and job satisfaction. 3. 
Generative Models In this section, we present two exemplary generative models for dynamically managing assets in GMS: 1) ChatGPT to extract system requirements from human inquiries, and 2) diffusion models to create configurations to meet those requirements. A configuration is encoded as a matrix x = {xij}, defines the quantity of assets of type i \u2208I in station j \u2208J. For scheduling, we leverage models from [19] to establish a mapping between the configuration and its corresponding optimal scheduling. For simplicity, we consider human heterogeneity only in skill levels, and machine health and production quality are maintained at consistently high levels. 3.1. ChatGPT Utilizing OpenAI\u2019s ChatGPT API in Python with the gpt-3.5-turbo model variant, we create a named entity recognition task to generate key requirements from human inquiries. For example, when presented with the query \u201dI need a production line with a minimal capacity of 240 part/hour, using no more than 9 machines.\u201d the response is a class c = \u2019(240, None, 9)\u2019, where \u2019None\u2019 functions as a placeholder for human skills not explicitly mentioned. 5 3.2. Diffusion models For the decision generation, we adopt diffusion models to learn the underlying patterns, features, and distributions in the training data of envisioned configurations. The diffusion model sets itself apart from other machine learning models by iteratively refining noise-corrupted data to generate new samples, which involves two processes as shown in Fig. 2: 1) forward process adding noises \u03f5t at each step until the data x0 is destroyed, and 2) reverse process \u2013 sampling new x0 by iteratively removing estimated noises. Figure 2: Forward and reverse processes in the diffusion model. We denote latent variables zt as noisy data in the forward process, which is calculated by introducing Gaussian noise \u03f5 \u223cN(0, I) to the input data x0 at each step t \u2208T, with weights determined by the forward process variances \u03b2t, namely, zt = \u03b1tx0 + \u03c3t\u03f5 (1) where, \u03b1t = qQt s=1(1 \u2212\u03b2s), and \u03c3t = p 1 \u2212\u03b12 t, are derived from the Markov Chain [20]. In the reverse process, a learning model h\u03b8 is utilized to estimate the noise \u00af \u03f5c t given zt to restore the original data x0. Here, we parameterize the added noise \u00af \u03f5c t as a function of zt and the current step t, and the class label c of x0, namely, \u00af \u03f5c t = h\u03b8(zt, t, c) (2) Training diffusion model involves minimizing the disparity between the estimated and true noises to maximize the likelihood of the generated samples 6 aligning with the distribution of training data. Detailed training process is delineated in Algorithm 1. 
The sampling process involves using the following linear combinations to integrate estimated noises \u02dc \u03f5t from both unconditional and conditional data of target class c from human inquiries, namely, Algorithm 1 Diffusion Model Training: 1: repeat until converged 2: select t \u2208[1, T] \u25b7Sample step value 3: random select x0 \u25b7Sample training data 4: obtain the class c of data x0 \u25b7Obtain the data class 5: c \u2190\u2205with probability pu \u25b7Randomly discard class info 6: sample zt = \u03b1tx0 + \u03c3t\u03f5 \u25b7Obtain latent variable of t 7: take gradient descent on: \u25bd\u03b8J = \u25bd\u03b8\u2225h\u03b8(zt, t, c) \u2212\u03f5\u22252 2 \u25b7Gradient calculation \u02dc \u03f5t \u2190(1 + w)\u00af \u03f5c t \u2212w\u00af \u03f5\u2205 t (3) where w denotes the guidance strengths to control the blend of these two types of noises during sampling. Provided the estimated noise \u02dc \u03f5t and a sample with noisy xt, the sample at the preceding step xt\u22121 can be attained from the following distribution: xt\u22121 \u223cN( 1 \u221a1 \u2212\u03b2t (xt \u2212\u03b2t \u03c3t \u02dc \u03f5t), \u03b2tI) (4) By randomly initializing noise matrix xT \u223cN(0, I) and iteratively sampling using Eqn. 4, a new configuration x0 can be attained. 3.3. Learning model The learning model h\u03b8 aims to infer the noise estimate \u03f5t from the latent variable zt, both in dimensions I \u00d7J. To obtain a natural symmetry, padding is implemented, resulting in squared matrices of size p = max{I, J}. Fig. 3 shows the proposed learning model in a U-Net structure, which is utilized to facilitate information flow between pooling and transposed-convolution pathways. Residual convolutional blocks are tailored to enhance hierarchical feature extraction and pattern recognition for data in matrix format. The introduction of skip connections seamlessly integrates learned features and contextual information across diverse levels of U-Net. With identical input and output sizes, skip connections effectively facilitate the direct 7 Figure 3: U-Net architecture for noise estimation using the residual convolutional blocks. Each block has two consecutive convolutional layers with batch normalization, GELU activation, and a residual connection that adds input to the output tensor, ensuring the network learns residual mappings.. transfer of information across layers and preserve fine details and contextual information throughout the network. The U-Net architecture, enriched by skip connections, effectively retains spatial features in configuration and noise matrices, offering significant advantages in the generation process. 3.4. Daydreaming Process To effectively capture implicit knowledge, this study integrates the daydreaming process [18] with meta-heuristics to explore potential decisions in anticipated future scenarios. The process initiates with the generation of random future scenarios, including structured randomness in demands, humans, and autonomous asset capabilities, and corresponding decisions in configurations and schedules. Decision quality is enhanced by integrating selection, crossover, and mutation operations inspired by the genetic algorithm, which provides two advantages: 1) guiding the generation of diverse and adequate configurations for efficient learning, and 2) accelerating data accumulation by storing populations from each generation. 
Termination of daydreaming occurs after a predefined number of iterations rather than model convergence, 8 ensuring a balanced dataset. Each explored configuration is evaluated based on multiple objectives of interest. 4. Result We implemented and simulated GMS in the industrial use case for part processing, following [19]. The system assumes 9 types of assets and operations/operation setups, distributed across 7 stations to facilitate flexible collaborations. Human skill levels were randomized as high/moderate/low (120/60/0 parts/hour) across different operations. The daydreaming process involved the randomization of worker skills over 25 generations, each including 40 potential configurations. Cplex was used to obtain the mapping between configurations and optimal schedules. The simulation spanned 120 runtime units, generating 120,000 data over 15 hours for training purposes. The diffusion process and learning model were implemented in Python using PyTorch. Based on the optimal tuning results, the process variance was set at \u03b20 = 10\u22124 and \u03b2T = 0.02 with total steps of T = 400 and guidance strength of w = 2. Fig. 4 shows the sampling process for generating configurations with a specified target capacity. As the step decreases, sampled configurations demonstrate increased rationality, yielding distinct layouts. The rational generation relies on the adept accumulation of implicit knowledge of key features and patterns. For instance, configurations with 0 capacity predominantly display light colors in later parts of the matrix, signifying minimal asset utilization in few types. A system in such uniform composition lacks the ability to perform all operations, resulting in 0 capacity. With capacity escalates, more assets (darker colors) in diverse types are included to enhance parallel production and operation efficiency. A comprehensive analysis of decision time was performed to assess the efficacy of GMS in responsiveness compared to existing methods. The comparative approaches applied widely utilized meta-heuristic algorithms for configuration optimization. Table 1 records the average decision time taken to obtain a configuration with the requisite capacity over five runs of each capacity. Diffusion models maintain low decision times, ranging from 9\u2030 to 16\u2030 seconds per decision, across specified capacities. This consistent efficiency signifies a quantitative improvement compared to other algorithms, which 9 Figure 4: Sampling configurations with target capacity over steps, depicting asset types (x-value) and quantities (pixel darkness) at each station (y-value). typically exceed 10 seconds, and, at times, failing to attain the target capacity even after 300 seconds. The consistent efficiency of diffusion models un10 Table 1: Comparison of decision time (in seconds) to other algorithms. Algorithm 0 60 120 180 240 300 Particle Swam Optimization 7.66 8.14 7.84 8.33 7.93 8.083 Genetic Algorithm 18.51 16.07 15.66 15.32 14.95 >300 Differential Evolution 32.50 31.11 28.98 28.66 30.11 >300 Simulated Annealing 20.20 21.87 21.37 20.93 21.60 36.189 Imperial Competitive Algorithm 20.14 19.57 19.63 20.56 20.13 >300 Diffusion Models 0.009 0.016 0.011 0.016 0.013 0.009 derscores a pivotal advancement in algorithmic efficacy of training-sampling approach as opposed to the model-optimum approach, markedly enhancing responsiveness and resilience of GMS to uncertainties. 
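Returning to the daydreaming process of Sec. 3.4 that produced this training data, a compact sketch of its GA-style exploration loop is given below. The selection scheme, uniform crossover, mutation rate, and fitness interface are illustrative assumptions rather than the authors' implementation; the generation and population sizes mirror the 25 x 40 setup described above.

import random

def daydream(evaluate, random_configuration, generations=25, pop_size=40):
    """Explore configurations and archive every individual as training data.
    `evaluate(config)` returns a fitness score; `random_configuration()` returns
    a list-of-lists integer matrix (asset types x stations). Both are assumed interfaces."""
    archive = []
    population = [random_configuration() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=evaluate, reverse=True)
        archive.extend(scored)                            # store the whole generation
        parents = scored[: pop_size // 2]                 # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [[random.choice(pair) for pair in zip(row_a, row_b)]
                     for row_a, row_b in zip(a, b)]       # uniform crossover on the matrix
            if random.random() < 0.1:                     # mutation: perturb one asset count
                i = random.randrange(len(child))
                j = random.randrange(len(child[0]))
                child[i][j] = max(0, child[i][j] + random.choice([-1, 1]))
            children.append(child)
        population = parents + children
    return archive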
To comprehensively assess the quality of generated samples, we randomly sample 1000 configurations and evaluate them by three metrics: 1) precision accuracy (Accu) and the mean squared errors (MSE) of matching the requisite capacity, 2) diversity \u2013 duplication rate (DR) of the generated configurations that exist in the training data, and 3) fidelity Fr\u00b4 echet Inception Distance (FID) measures perceptual quality and fidelity of generated samples as compared to the distribution of training data. The performance of diffusion models with and without guidance is listed in Table 2. Table 2: Model performance with (top) and without (bottom) guidance. Metric 0 30 60 90 120 150 180 240 270 300 Accu (%) 100 72.1 56.3 75.1 80.7 39.6 54.6 66.5 91.3 98.5 MSE 0 15.8 27.8 14.4 19.5 27.9 34.2 23.3 10.5 3.6 DR (\u2030) 0 1 0 0 0 0 0 2 2 13 FID (10\u22126) 6.7 11.7 23.0 32.7 26.4 27.8 12.8 5.3 1.9 2.0 Accu (%) 98.1 38.4 70.4 36.6 59.6 17.1 49.7 52.1 55.2 50.8 MSE 7.9 24.2 28.5 27.5 37.0 42.1 44.7 44.8 40.3 45.8 DCR (\u2030) 2 1 0 0 0 0 0 0 2 13 FID (10\u22126) 22.7 26.3 26.6 26.3 20.7 20.8 22.5 21.6 17.7 23.5 In contrast to the unguided model, the proposed model exhibits notably enhanced accuracy (7/10 above 65%) and low MSE (9/10 below 30), highlighting its high precision in generating samples to specified requirements. Notably, both models yield a low DR of less than 13\u2030, emphasizing decision diversity by randomly sampling from implicit knowledge. Low FID 11 underscores the model\u2019s ability to closely match the patterns and distributions of the training data and reproduce key features to generate realistic decisions. Comparatively, the FID score in scenarios with guidance is much lower in the extreme capacities, easily distinguishable in patterns, but higher in capacity 90-150 due to high similarities in the corresponding configurations. Overall, these precise, high-fidelity, and diverse decisions showcase the resilience and creativity of GMS in accommodating uncertainties and diverse objectives. Fig. 5 illustrates the dynamic interaction between humans and assets in GMS, refining system configurations (marked by shapes) and schedules (marked by colors). Using the ChatGPT API, human textual inquiries were transformed into class label c to guide the diffusion model in the sampling process. Analyzing the top 5 decisions based on the best fitness, our findings underscore the remarkable capability of GMS to harmonize the system with diverse objectives, including capacity, human skills, and involved assets. This interactive process centralizes humans in decision-making, cultivating a synergistic collaboration between humans and autonomous assets for continuous exploration and refinement, ultimately shaping GMS to align with varied objectives, constraints, and human desires in real-time. 5. Conclusions The study introduces GMS to harness the increasing autonomy in manufacturing assets to address uncertainties, human desires, and emerging production objectives. GMS signifies a paradigm shift in decision-making from model-optimization to training-sampling. In an industrial use case, our findings highlight that GMS consistently outperforms existing approaches in decision times, diversity, and quality, highlighting its resilience and creativity. GMS adeptly adjusts configuration and schedule to human inquiries and additional objectives, fostering human-centric decision-making for collaborative exploration and continuous refinement. 
Future studies could explore diverse scenarios, including decisions (e.g., diagnosis, quality control) and performance metrics (e.g., carbon emissions, human well-being), while incorporating more complex human inquiries through embeddings rather than fixed classes."
+ }
intro_8K/test_introduction_short_2405.00970v1.json ADDED
@@ -0,0 +1,18 @@
+ {
+ "url": "http://arxiv.org/abs/2405.00970v1",
+ "title": "How Can I Get It Right? Using GPT to Rephrase Incorrect Trainee Responses",
+ "abstract": "One-on-one tutoring is widely acknowledged as an effective instructional\nmethod, conditioned on qualified tutors. However, the high demand for qualified\ntutors remains a challenge, often necessitating the training of novice tutors\n(i.e., trainees) to ensure effective tutoring. Research suggests that providing\ntimely explanatory feedback can facilitate the training process for trainees.\nHowever, it presents challenges due to the time-consuming nature of assessing\ntrainee performance by human experts. Inspired by the recent advancements of\nlarge language models (LLMs), our study employed the GPT-4 model to build an\nexplanatory feedback system. This system identifies trainees' responses in\nbinary form (i.e., correct/incorrect) and automatically provides template-based\nfeedback with responses appropriately rephrased by the GPT-4 model. We\nconducted our study on 410 responses from trainees across three training\nlessons: Giving Effective Praise, Reacting to Errors, and Determining What\nStudents Know. Our findings indicate that: 1) using a few-shot approach, the\nGPT-4 model effectively identifies correct/incorrect trainees' responses from\nthree training lessons with an average F1 score of 0.84 and an AUC score of\n0.85; and 2) using the few-shot approach, the GPT-4 model adeptly rephrases\nincorrect trainees' responses into desired responses, achieving performance\ncomparable to that of human experts.",
+ "authors": "Jionghao Lin, Zifei Han, Danielle R. Thomas, Ashish Gurung, Shivang Gupta, Vincent Aleven, Kenneth R. Koedinger",
+ "published": "2024-05-02",
+ "updated": "2024-05-02",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.HC"
+ ],
+ "label": "Original Paper",
+ "paper_cat": "LLM Fairness",
+ "gt": "One-on-one tutoring has been recognized as a highly effective strategy for enhanc- ing student learning, with substantial evidence supporting its impact [1, 2]. However, there are significant challenges associated with the scalability of one-on-one tutoring, primarily due to the scarcity of skilled tutors, including certified teachers and para- professionals. This shortage has left an estimated 16 million students in the United States in need of individualized support, as highlighted by [1]. In response to this shortage, there has been a strategic shift towards effectively training novice tutors, including community volunteers, retired individuals, and college students, to fulfill tutoring role [2]. The growing demand for skilled tutors has resulted in the development of various professional development programs tailored to the unique needs of nonprofessional and novice tutors [2]. Driven by this need, researchers have explored the use of online scenario-based training to simulate real-life tutoring scenarios for novice tutors [3] and pre-service teachers [4]. Fig. 1 illustrates a scenario on Giving Effective Praise. It demonstrates how tutors can fail to appropriately acknowledge the student\u2019s efforts by providing outcome-based praise as opposed to effort-based praise. For instance, saying \u201cKevin, good job getting the problem correct!\u201d fails to acknowledge the student\u2019s efforts and persistence. As indicated in previous research [5, 6], the availability of real- time explanatory feedback within the scenario-based training lessons can help tutors provide effective praise. Particularly, real-time feedback on learners\u2019 errors, similar to the feedback received while engaging in the deliberate practice of responding to Fig. 1: An example of a trainee (i.e., novice tutor) incorrectly responding to an open- ended question on how to best reply to a student by giving effective praise. In this particular example, the trainee is praising the student for getting the problem correct, which is achievement or outcomes-based praise and not based on effort. 2 situational judgment tests, is described as a favorable learning condition and can lead to better learning outcomes [7, p. 5]. While the benefits of real-time explanatory feedback in enhancing tutor learning outcomes are well-documented, crafting such feedback presents substantial challenges due to its labor-intensive nature. Traditionally, providing this level of specialized train- ing, replete with personalized explanatory feedback, warrants a substantial investment of effort and time. The process of providing personalized feedback to novice tutors requires considerable time and effort from skilled tutors to ensure feedback effective- ness and relevance. Moreover, beyond the substantial investment of time and effort, the feasibility of scaling such training protocols to meet the high demand across educa- tional settings significantly compounds the challenge. However, recent breakthroughs in large language models (LLMs) offer a promising avenue for streamlining this pro- cess. Models such as the Generative Pre-trained Transformer (GPT) could potentially automate the generation of personalized, real-time feedback for tutors [6, 8]. This automation not only has the potential to alleviate the resource burden but also to enhance the specificity and precision of the feedback by accurately identifying the personalized needs of the tutors [6]. 
Currently, the quality of automated explanatory feedback is lacking, with many sys- tems failing to provide learners with accurate feedback on their constructed responses [5, 6]. We argue that the quality of feedback for tutor training can be further improved. Inspired by the feedback research [9\u201311], where learners interpret performance-related information to enhance their understanding, we postulate that presenting desired tutoring responses within feedback to novice tutors can enhance the effectiveness of the training. However, rephrasing incorrect tutor responses into the correct or desired form often necessitates a substantial investment of time and effort from experienced tutors\u2014hence introducing scalability constraints associated with tutor training. Thus, we aim to explore approaches to improve our ability and accuracy in providing tutors with explanatory feedback while also mitigating the time and effort requirements of human graders by automating the process of generating explanatory feedback and cor- rection to their responses. The automation requires the development of classification systems that can effectively analyze tutor responses or, in other words, classification systems that determine the correctness of tutor responses to scenario-specific require- ments of the learners. However, there is useful learner information within appropriate classified incorrect responses. These incorrect learner-sourced responses can be used to provide tutors corrective, explanatory feedback by taking an incorrect response and rephrasing or modifying it to make it a desired, or correct, response. Research supports when learners are given specific feedback related to their responses, such as taking incorrect tutor responses and personalizing them by making them correct, they gain a better understanding of their learning [12, 13]. We aim to explore how GPT models can serve as supplementary tools to deliver synchronous feedback to tutors on their responses of how to best respond to specific training scenarios (e.g., praising a student for effort) leveraging useful tutor incorrect responses. We propose two Research Questions: RQ1: Can a large language model accurately identify trainees\u2019 incorrect responses where trainees failed to effectively guide students in specific training scenarios? 3 RQ2: Can GPT-4 be harnessed to enhance the effectiveness of trainees\u2019 responses in specific training scenarios? We initially developed a binary classifier to determine tutor\u2019s correct and incorrect responses from three training lessons: Giving Effective Praise, Reacting to Errors, and Determining What Students Know. We employed zero-shot and few-shot learning approaches to classify the trainees\u2019 responses. Our result demonstrated that the five- shot learning approach achieved acceptable performance in identifying the incorrect responses. Building upon the results of RQ1, we selected the incorrect responses identified by our optimal few-shot learning classifier, which is further used for the RQ2. We explored the idea of rephrasing incorrect trainees\u2019 responses to determine if we can prompt GPT-4 to effectively make them correct. An example of an incorrect response from the lesson Giving Effective Praise is shown in Fig. 1), e.g., \u201cKevin, good job getting the problem correct!\u201d. 
Through extensive experiments, we obtained an effective prompt to secure the rephrased responses presented in an accurate form with minimal changes of the words from the original incorrect responses. Building upon the result from RQ1 and RQ2, we build a feedback system to provide explanatory feedback to the incorrect trainee\u2019s response shown in Fig. 2. Fig. 2: Explanatory feedback for novice tutor responses.",
17
+ "main_content": "2.1 Significance of Feedback on Learning Feedback plays a crucial role in improving the students\u2019 learning outcomes and performance [9, 10, 14]. In the field of feedback research, theoretical models have been developed to explain the impact of feedback on learning and to identify the core principles that underpin effective feedback design. Hattie and Timperley [10] defined feedback as the information about the correctness of a learner\u2019s actions or decisions, along with explanations about why those actions or decisions are right or wrong, underlines the significance of feedback. As emphasized in their work [10], the influence of feedback on learning varies based on the type and timing of its delivery. Effective feedback should assist learners in understanding the rationale behind the feedback, which is crucial for deeper learning [9]. Moreover, including the correct answer within the feedback substantially enhances its efficacy by offering learners the information needed to correct their errors [11]. This is especially relevant when learners 4 answer open-ended questions, as simply knowing that their response is incorrect may not suffice to improve their understanding [11]. By presenting the correct answer (or correct responses to open-ended question) in the feedback, learners can compare their responses with the correct responses, identify areas for improvement, and gain guidance on how to approach similar questions in the future [12, 13]. To help learners identify their misconception in the open-ended question, we posit that it is necessary to include the correct responses in the feedback. However, providing timely explanatory feedback faces challenges since crafting effective explanatory feedback is often timeconsuming and labor-intensive nature [6, 8, 14]. To address this issue, it is necessary to develop automated feedback generation system. 2.2 Feedback Generation The development of automated feedback has received significant attention from educational researchers [5, 6, 8, 15, 16]. For example, Ontask [15] is a rule-based feedback provision system designed to assist instructors in delivering personalized feedback based on specific conditions of learners (e.g., the duration spent on the learning system). Additionally, Demszky et al. [16] developed a feedback system that automatically delivers explanatory feedback to instructors via email within two to four days after their tutoring sessions. Their study results [16] indicate that timely explanatory feedback enhanced learners\u2019 satisfaction. Lin et al. [5] used sequence labeling techniques to provide automated explanatory feedback, which demonstrated the potential of the large language models on identifying the effective components of feedback. Despite demonstrating the effectiveness of automated feedback systems, the provision of feedback with correct responses to open-ended question is still under-explored, which are needed to advance feedback systems 2.3 Using Large Language Models for Feedback Generation Inspired by recent research on using large language models for feedback generation [5, 6, 17\u201320], we posit that GPT-based large language models hold potential for advancing the development of automated feedback. For example, Dai et al. [8] investigated the capability of GPT-3.5 model (ChatGPT) to generate feedback for students\u2019 writing assignment and they [8] found that GPT-3.5 could produce feedback that was more readable than that of human instructors. Subsequently, Dai et al. 
[20] found that GPT-4 outperformed both GPT-3.5 and human instructors in providing effective feedback based on the feedback attributes proposed by [10]. Then, Hirunyasiri et al. [6] leveraged the GPT-4 model to provide timely feedback for human tutors\u2019 training. Their results [6] indicated that GPT-4 outperformed human educational experts in identifying a specific tutoring practice, giving effective praise. While these studies have demonstrated the feasibility of GPT-based models in feedback generation, none have ventured into generating explanatory feedback with correct responses to open-ended questions. Given that GPT-4 has shown remarkable performance on various educational tasks (e.g., generating high-quality answer responses for middle school math 5 questions [17] and providing feedback for multiple-choice questions at the middleschool math level [18]), our study also leveraged the GPT-4 model to further explore its capabilities in automatically generating explanatory feedback. 3 Method 3.1 Data We developed an online learning platform1 to facilitate training for the novice tutors in the form of brief scenario-based lessons. Within the scope of this study, we refer to the novice tutors participating in the training activities as trainees. Aligning with previously demonstrated competencies of effective tutoring [21], each lesson presents scenario-based questions to facilitate an authentic and contextually relevant tutor learning opportunity. These scenarios challenged the tutors to apply their knowledge and skills by simulating real-world tutoring situations (see Fig. 1). We examined the trainees\u2019 performance and understanding across three lessons: Giving Effective Praise, Reacting to Errors, and Determining What Students Know. These lessons are based on the skillsets that were identified to be crucial for tutors in previous work [3, 21]. Each lesson consisted of two scenarios. Across all trainees, we collected 410 responses: 140 responses from the 70 trainees who took the Giving Effective Praise lesson, 118 responses from Reacting to Errors (59 trainees), and 152 responses from Determining What Students Know (76 trainees). Before analysis, we removed 10, 4, and 13 responses respectively from each lesson because they were either empty or contained incoherent or meaningless content (e.g., \u201cad;fajkl\u201d, \u201ctest test test\u201d or \u201cI have no idea\u201d), resulting in a total of 383 analyzed responses. We also collected demographic information about the trainees, including their experience as tutors, as presented in Table 1. For each lesson, tutors provided self-reported demographic details, including information regarding their race, gender, age, and tutoring experience. 3.2 Annotation for Trainee\u2019s Responses In the lesson Giving Effective Praise, trainees practice their skills in engaging students by offering effort-based praise. The praise provided by trainees should effectively acknowledge students\u2019 efforts and aim to enhance their motivation and desire to keep learning. A tutoring scenario was depicted where a student was struggling to persevere on an assignment (See the scenario in Table 2). The tutor trainee\u2019s responses were expected to show the components of effective praise as suggested by research recommendation [3]. 
Effective praise should be: 1) timely, positive, and sincere, 2) highlighting what student did well during the tutoring, 3) genuine and avoiding generic comments like \u201cgreat job\u201d, and 4) focus on the learning process rather than on the student or the outcome. In short, correct praise responses should be supportive, positive, encouraging, and acknowledging the student\u2019s effort during the learning process. In Table 2, we demonstrate some praise responses with an explanation of the rationale for labeling responses as either Correct or Incorrect. 1https://www.tutors.plus/solution/training 6 Table 1: Demographic information of participants Demographic Categories Giving Effective Praise (n = 70) Reacting to Errors (n = 59) Determining What Students Know (n = 76) Gender Female 27 23 32 Male 34 30 34 Non-binary 1 1 1 Unknown 8 5 9 Age 18-24 8 8 11 25-34 11 6 9 35-50 12 9 9 51-64 21 22 25 65+ 12 12 16 Unknown 6 2 6 Ethnicity Asian 12 10 18 White 34 30 34 Others 8 8 9 Unknown 16 11 15 Tutoring Experience* Level 1 4 3 4 Level 2 14 11 14 Level 3 22 18 24 Level 4 20 21 24 Level 5 5 4 5 Unknown 5 6 5 Tutoring Experience*: The tutors were asked to rate their prior tutoring experience on a five-point Likert scale, where Level 1 indicated a tutor with limited experience, and Level 5 signified an expert tutor. In the lesson Reacting to Errors, trainees practice their skills in responding to student errors. Trainees employ various pedagogical strategies aimed towards addressing gaps in the learners\u2019 knowledge through constructive feedback. Instead of overt criticism, the emphasis is on fostering a positive approach to errors. This approach seeks to shift students\u2019 perception towards errors by underscoring their importance in the learning process. A tutoring scenario was depicted where a student made a mistake in solving a problem (See the scenario in Table 3). The tutor trainee\u2019s responses to students\u2019 errors should help students develop their critical thinking skills and encourage students to correct their mistakes. According to [3], to effectively respond to students\u2019 errors, one should: 1) indirectly inform students about their mistake in the problemsolving process, 2) guide the student towards self-correction, and 3) show praise for the student\u2019s effort or attempt. Responses that directly highlight the student\u2019s error or inform the student what to do are not desired in the tutoring practice [3]. In Table 3, we demonstrated some responses of reacting to errors with the explanation of the rationale for labeling responses as either Correct or Incorrect. 7 Table 2: Examples of correct and incorrect trainee responses for the lesson Giving Effective Praise with annotation rationale. Scenario You\u2019re tutoring a student named Kevin. He is struggling to understand a math problem. When he doesn\u2019t get the answer correct the first time, he wants to quit. After trying several different approaches, Kevin gets the problem correct. As Kevin\u2019s tutor, you want him to continue working through solving more problems on his math assignment. Trainee Response Interpretation You are making steady progress and it is good to see the results of your efforts Correct Response The response expresses the sense of positive and sincere. The praise focuses on the student\u2019s perseverance and acknowledges the students for working hard and the process of learning. You did a great job, well done! 
Incorrect Response This response is sincere and positive but the praise does not focus on student learning efforts or learning actions (e.g., demonstrated the problem-solving procedural). Table 3: Examples of both correct and incorrect trainee responses for the lesson Reacting to Errors with annotation rationale. Scenario Imagine you are a mentor to a student, Aaron, who has a long history of struggling with math. Aaron is not particularly motivated to learn math. He just finished a math problem adding a 3-digit and 2-digit number and has made a common mistake (shown below). Trainee Response: Interpretation: Lucy, very well, but I have to point that we have another way of doing the math problem, we can repeat the math together, what do you think? Correct Response This response avoids using direct words about the student\u2019s mistake and implicitly clears up the misconception. Instead, the response encourages the student to make another attempt instead of explicit answers This is very close! I see one issue, can you walk me through the how you worked through the problem? Incorrect Response This response asks the student to walk through the steps but it still uses the word \u201cissue\u201d, which may be frustrating. In the lesson Determining What Students Know, this lesson is designed to enhance the tutor trainees\u2019 skills in discerning the current knowledge level of the students by distinguishing what the students have comprehended and what still needs to be learned. A tutoring scenario was depicted where a student was given a math problem they did not know how to solve (see the scenario in Table 4). The tutor trainee\u2019s responses were used to gauge the student\u2019s prior knowledge at the start of the session and provide instruction based on what students already know as a launching point for 8 the rest of the session. According to [3], effective response of determining what students know should be: 1) prompting students to demonstrate what they have already done or explain what they know, 2) presenting in an open-ended form and avoiding asking student\u2019s understanding of specific knowledge concept, 3) guiding the tutoring conversation to locate the student\u2019s misunderstanding, 4) providing instructional support to help students find the correct answer. To summarize, correct response of determining what students know should assess a student\u2019s prior knowledge, guide the conversation to catch student\u2019s misconceptions or errors and support productive struggle. In Table 4, we demonstrated some responses of determining what students know with the explanation of the rationale for labeling responses as either Correct or Incorrect. Table 4: Examples of both correct and incorrect trainee responses for the lesson Determining What Students Know with annotation rationale. Scenario You are working with a student named Cindy on her math homework. She is having trouble solving a geometry problem dealing with triangles. She shows you the following diagram displaying a triangle and states that she has to determine the value of angle x. Cindy says, \u201dI don\u2019t know what to do.\u201d Trainee Response: Interpretation: What have you tried so far? Correct Response This response asks an open-ended question to understand what students have tried so far and gauge the student\u2019s knowledge Do you know what PEMDAS means? 
Incorrect Response Tutors\u2019 responses can contain questions to students but they must be open-ended and non-specific to assess student\u2019s knowledge of an individual knowledge component. 3.3 Identifying desired trainee responses One of the motivations for this study is the creation of a classifier capable of discerning desired attributes in a tutor\u2019s responses to scenario-based prompts. The goal is to determine whether the tutors can adapt to the specific scenarios and integrate scenario-specific instructional practices when supporting the learners. For instance, should a trainee fail to acknowledge the learner\u2019s effort when working on an activity requiring effective praise, the classifier would categorize the tutor\u2019s feedback as Incorrect (less desirable). Identifying these scenarios presents an opportunity to personalize training activities for trainees, enhancing their ability to learn from and rectify specific instructional methodologies. In addressing RQ1, we first employed two expert raters, both specialists in educational instruction and feedback, to annotate trainees\u2019 responses as either Correct (desirable) or Incorrect (less-desirable). Using Cohen\u2019s \u03ba, we determined inter-rater 9 reliability, obtaining scores of 0.85, 0.81, and 0.64 for Giving Effective Praise, Reacting to Errors, and Determining What Students Know, respectively. These scores of inter-rater reliability are considered sufficient [22]. Disagreements between the raters prompted input from a third expert to ensure consistency in annotations. Then, recognizing the typical need for a large amount of data when training classifiers from scratch for natural language processing tasks, we turned to recent advances in machine learning. As documented in [23, 24], zero-shot and few-shot learning methods can effectively discern patterns in datasets, even when they are limited or absent. These methods leverage the inherent capability of pre-trained models, which is crucial for ensuring classification performance and generalizability. The principle mirrors human cognition, as explored in [23, 24], where individuals apply their generalized knowledge to identify unfamiliar objects or concepts. Further details of these methods are described below: \u2022 Zero-shot Learning: In zero-shot learning, the classifier is trained to perform tasks for which it has seen no labeled examples at all. This is achieved by transferring knowledge from related tasks and using semantic relationships between classes. The model\u2019s prior knowledge, often in the form of embeddings or representations that capture semantic meanings, is crucial for it to make predictions in unseen classes [24]. \u2022 Few-shot Learning: In few-shot learning, the classifier is trained to perform tasks using a limited amount of labeled data. The underlying principle is to leverage the knowledge acquired by the model from previous and related tasks to facilitate effective generalization to a new task, even when provided with minimal data. The prior knowledge enables the classifier to adapt to new tasks with only a few examples [23]. Additionally, given that our classifier is designed to categorize trainees\u2019 responses into two categories (i.e., correct or incorrect), the few-shot learning with two classification categories is commonly termed \u201ctwo-way few-shot learning\u201d. For instance, a two-way two-shot contains two correct responses and two incorrect responses. 
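As a brief aside on the annotation step described above, the inter-rater agreement figures are Cohen's kappa values; a minimal sketch of that computation with scikit-learn is shown below. The rater labels are hypothetical stand-ins, not the study's annotations.

from sklearn.metrics import cohen_kappa_score

# Hypothetical annotations from the two expert raters for one lesson
rater_1 = ["correct", "incorrect", "correct", "correct", "incorrect"]
rater_2 = ["correct", "incorrect", "correct", "incorrect", "incorrect"]

kappa = cohen_kappa_score(rater_1, rater_2)
print(f"Cohen's kappa: {kappa:.2f}")   # disagreements were resolved by a third expert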
Upon a thorough review of existing literature [25], we found that most studies implemented few-shot learning with the number of shots less than or equal to five. In line with this consensus, our study also sets five shots as the maximum threshold for the number of shots. As described, both zero-shot and few-shot learning methods rely on a robust pretrained model. These pre-trained models, having been exposed to extensive training corpora, inherently possess base knowledge that allows them to discern generalized patterns even from minimal datasets. Inspired by the effectiveness of GPT-4 models in existing educational tasks [6, 17, 18], we adopted the state-of-the-art GPT-4 model [26] as the foundational model for conducting binary classification of trainees\u2019 responses. A GPT prompt is a sentence or phrase provided to the GPT model to produce a response [8, 27]. Our prompt strategies are detailed in Table 5. The prompt strategies are in the form of Chat-Completion, which refers to the generated response produced by the GPT-4 model during a conversation. When a user provides a prompt, GPT-4 processes the prompt and generates a relevant response, known as the \u201cCompletion\u201d. The Chat-Completion is set up to generate the label for each trainee\u2019s response. For Zero-shot implementation, as presented in Table 5, 10 Table 5: Prompt strategies for a binary classifier. We used Chat-Completion to process the trainees\u2019 responses in batch Zero Shot Few-shot Role Content Role Content System \u201cYou are a binary classifier.\u201d System \u201cYou are a binary classifier.\u201d User {Lesson Principle} + \u201cAccording to the lesson principle, please determine if the following response contains\u201d + {Lesson Name} + \u201cplease respond YES; if not, please respond NO.\u201d User {Lesson Principle} + \u201cAccording to the lesson principle, please determine if the following response contains\u201d + {Lesson Name} + \u201cplease respond YES; if not, please respond NO.\u201d Assistant \u201cSure, please enter the response from tutor\u201d Assistant \u201cPlease provide some examples of correct and incorrect response\u201d User {Textual response} User {Correct example} + {Incorrect example} Assistant \u201cSure, please enter the response from tutor\u201d User {Textual response} the Chat-Completion has three different chat roles: System, User, and Assistant. The role of System represents the assigned default character for the machine. In our case, GPT-4 facilitates the role of a \u201cbinary classifier\u201d. The role of User represents human input. The role of Assistant denotes a machine-generated response, which is to frame the prompting process as a conversation. Compared to the Zero-shot learning approach, the few-shot learning approach provides a limited number of correct and incorrect examples for the GPT-4 model to understand the classification patterns (Table 5). Subsequently, our proposed prompt requires specific inputs from the User. The input of {Lesson Principle} is based on the principles of a correct response from the lesson materials created by Thomas et al. [3]. The input of {Textual response} is the trainee\u2019s response. As there are three distinct lessons, the input of {Lesson Name} in the instruction prompt is substituted with the appropriate lesson name. 3.4 Enhancing the trainee responses by GPT models To explore RQ2, we used the GPT-4 model to rephrase incorrect responses into correct forms effectively. We designed the prompt strategies presented in Table 6. 
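To illustrate how the Chat-Completion strategy of Table 5 could be assembled with the OpenAI Python client, a sketch of the two-way few-shot binary classifier is shown below. The helper name, the formatting of the labeled examples, and the exact model string are assumptions; only the role structure and instruction wording follow Table 5.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def classify_response(lesson_name, lesson_principle, examples, trainee_response):
    """Few-shot binary classification following Table 5.
    `examples` is a list of (response_text, 'YES'/'NO') pairs."""
    instruction = (f"{lesson_principle} According to the lesson principle, please determine "
                   f"if the following response contains {lesson_name}; "
                   "if so, please respond YES; if not, please respond NO.")
    messages = [
        {"role": "system", "content": "You are a binary classifier."},
        {"role": "user", "content": instruction},
        {"role": "assistant",
         "content": "Please provide some examples of correct and incorrect response"},
        {"role": "user",
         "content": "\n".join(f"Response: {text} -> {label}" for text, label in examples)},
        {"role": "assistant", "content": "Sure, please enter the response from tutor"},
        {"role": "user", "content": trainee_response},
    ]
    reply = client.chat.completions.create(model="gpt-4", messages=messages, temperature=0)
    return reply.choices[0].message.content.strip().upper().startswith("YES")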
For the Zero-shot learning, we assigned a role with GPT-4 to rephrase the trainee\u2019s response (i.e., \u201cYou are rephrasing tutor\u2019s response\u201d). For the role of User, similar to RQ1, we used {Lesson Principle} to enable GPT-4 to understand the correct form of tutor responses. To effectively rephrase the trainees\u2019 responses, we believe that providing context about the scenario in which the responses were given might lead GPT-4 to generate more accurate rephrased outputs. Thus, in the prompt, we also added the input of {Lesson Scenario}, which was the actual text of the scenario-based question, as demonstrated in Table 2, 3, & 4. In the context of the few-shot learning approach, we supplied two examples of rephrased incorrect responses in their correct forms provided in the training lessons to help the GPT-4 model infer the rephrasing rules (see Table 5). The GPT-4 Chat-Completion is presented in Table 6. 11 Table 6: Prompt strategies for binary classifier. We used Chat-Completion to process the trainees\u2019 responses in batch Zero Shot Few-shot Role Content Role Content System \u201cYou are rephrasing tutor\u2019s response.\u201d System \u201cYou are rephrasing tutor\u2019s response.\u201d User {Lesson Principle}+\u201cThe provided response attempts to answer to the following scenario.\u201d+{Lesson Scenario}+\u201cPlease rephrase the tutor\u2019s response according to the principle mentioned above to create a better example of\u201d+{Lesson Name}+\u201cRetain words and ideas from the tutor\u2019s response. Limit changes to the original tutor\u2019s response to a minimum. Maintain the same length as the original tutor\u2019s response. Please rephrase as less words as possible from the original tutor\u2019s response. Highest priority is to make sure to follow the principle of the correct response when rephrasing.\u201d User {Lesson Principle}+\u201cThe provided response attempts to answer to the following scenario.\u201d+{Lesson Scenario}+\u201cPlease rephrase the tutor\u2019s response according to the principle mentioned above to create a better example of\u201d+{Lesson Name} + \u201cRetain words and ideas from the tutor\u2019s response. Limit changes to the original tutor\u2019s response to a minimum. Maintain the same length as the original tutor\u2019s response. Please rephrase as less words as possible from the original tutor\u2019s response. Highest priority is to make sure to follow the principle of the correct response when rephrasing.\u201d Assistant \u201cSure, please enter the response\u201d Assistant \u201cPlease provide some examples of how you will rephrase the given incorrect response to make it correct\u201d User {Textual response} User {Rephrased examples} Assistant \u201cSure, please enter the response\u201d User {Textual response} 3.5 Evaluation approach Evaluation for RQ1. We employ both the F1 score and the Area under the ROC curve (AUC) for evaluating the performance of our classification model. Furthermore, given our specific focus on identifying incorrect feedback, we incorporate two additional metrics: the Negative Predictive Value (NPV) and the True Negative Rate (TNR). These measures are crucial for determining the model\u2019s efficacy in minimizing false negatives and minimizing such errors is critical, as a false identification can result in incorrect feedback. Incorrect feedback can further undermine the training\u2019s effectiveness, potentially eroding trust and changing how trainees engage with the training activities. 
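Circling back to the rephrasing strategy of Table 6, a matching sketch is given below; it reuses the client from the classification sketch, and the condensed instruction wording and function name are assumptions rather than the exact prompt text.

def rephrase_response(lesson_name, lesson_principle, lesson_scenario,
                      rephrased_examples, trainee_response):
    """Few-shot rephrasing call mirroring the role structure of Table 6."""
    instruction = (f"{lesson_principle} The provided response attempts to answer the following "
                   f"scenario. {lesson_scenario} Please rephrase the tutor's response according "
                   f"to the principle mentioned above to create a better example of {lesson_name}. "
                   "Retain words and ideas from the tutor's response and keep changes to a minimum.")
    messages = [
        {"role": "system", "content": "You are rephrasing tutor's response."},
        {"role": "user", "content": instruction},
        {"role": "assistant", "content": "Please provide some examples of how you will rephrase "
                                         "the given incorrect response to make it correct"},
        {"role": "user", "content": rephrased_examples},
        {"role": "assistant", "content": "Sure, please enter the response"},
        {"role": "user", "content": trainee_response},
    ]
    reply = client.chat.completions.create(model="gpt-4", messages=messages, temperature=0)
    return reply.choices[0].message.content.strip()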
We provide the formulas for NPV and TNR in equations 1 and 2, respectively. Both NPV and TNR are metrics that range from 0 to 1, with higher values signifying a model\u2019s enhanced capability to correctly identify true negative instances. Negative Predictive V alue (NPV ) = True Negative True Negative + False Negative (1) True Negative Rate (TNR) = True Negative True Negative + False Positive (2) 12 Evaluation for RQ2. After rephrasing the trainee\u2019s responses, we evaluate the accuracy and quality of the rephrased responses. In order to achieve this, we first utilized the most effective binary classifier developed in RQ1 to classify the rephrase responses. Then, we compared the number of correct responses in rephrased responses and correct responses in original responses. Specifically, we wanted to investigate the extent to which the GPT-4 model has the capability to improve the accuracy of the trainee\u2019s responses. When the number of correct labels in rephrase responses is more than the correct responses in the original responses, it indicates that the GPT-4 model has the ability to accurately rephrase the trainee\u2019s responses and the classifier developed in RQ1 generally satisfied with the rephrased result. Additionally, we aim to compare the quality rephrased responses by GPT-4 with the ones by human expert. To do so, we first hired three experienced human tutors who completed the training for the three lessons. These three experts were asked to rephrase the incorrect responses based on the research recommendation provided in the lessons. Afterward, we invited a fourth human educational expert to assess the quality of rephrased responses in two dimensions: Accuracy and Responsiveness. The dimension of Accuracy was used to measure the correctness of the rephrased responses. Regarding the dimension of Responsiveness, it evaluates how the rephrased response selectively changes some words to improve the trainee\u2019s original response, while largely preserving the original words and ideas from the trainee\u2019s response. In our study, we designed the question for evaluating Accuracy by asking \u201cThe rephrased response is a better example of {Lesson Name} than the trainee\u2019s response\u201d and the question for evaluating Responsiveness by asking \u201cThe rephrased response changes some words to improve the trainee\u2019s response, but otherwise keeps words and ideas from the trainee\u2019s response\u201d. The educational expert answered the questions by using the five-point Likert scale (i.e., Strongly Disagree to Strongly Agree). 4 Results 4.1 Results for RQ1: Binary Classifier for Correct Responses For RQ1, we explored the zero-shot and few-shot approaches to train a binary classifier using the GPT-4 model, as detailed in Sec. 3.3. The classifier\u2019s performance is presented in Table 7. For the lesson Giving Effective Praise, the zero-shot approach resulted in an F1 score of 0.761 and an AUC of 0.743. When leveraging a two-way few-shot learning approach, we observed an improvement in the performance. The F1 scores remained consistently high, ranging from 0.856 to 0.872, with the 3-shot model achieving the peak performance. In parallel, the AUC scores were also robust, varying from 0.851 to 0.865, with the 5-shot model outperforming the others. Despite these improvements, the NPV and TNR metrics showed greater variability. 
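For reference, the evaluation metrics used above (F1, AUC, and the NPV and TNR of Eqns. 1-2) can be computed from the binary predictions as in the sketch below; the label arrays are hypothetical, not the study's data.

from sklearn.metrics import confusion_matrix, f1_score, roc_auc_score

# Hypothetical gold labels and classifier outputs (1 = correct, 0 = incorrect response)
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
npv = tn / (tn + fn)                  # Negative Predictive Value, Eqn. (1)
tnr = tn / (tn + fp)                  # True Negative Rate, Eqn. (2)
f1 = f1_score(y_true, y_pred)
auc = roc_auc_score(y_true, y_pred)   # AUC from hard labels; model scores would be preferable
print(f"NPV={npv:.2f}, TNR={tnr:.2f}, F1={f1:.2f}, AUC={auc:.2f}")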
The NPV spanned from 0.8 to 0.88, with the 3-shot model again taking the lead, whereas the TNR fluctuated between 0.744 to 0.851, with the 5-shot configuration achieving the strongest performance. For the lesson on Reacting to Errors, the performance of the zero-shot learning approach resulted in an F1 score of 0.767 and an AUC of 0.768. It is worth noting 13 Table 7: Classification performance of the responses from three lessons Lessons Metrics Zero Shot Two-way Few-shot 1-shot 2-shot 3-shot 4-shot 5-shot Giving Effective Praise F1 0.761 0.870 0.845 0.872 0.856 0.860 AUC 0.743 0.858 0.836 0.863 0.851 0.865 NPV 0.666 0.841 0.853 0.881 0.841 0.800 TNR 0.680 0.787 0.744 0.787 0.787 0.851 Reacting to Errors F1 0.767 0.779 0.821 0.840 0.823 0.867 AUC 0.768 0.778 0.819 0.838 0.822 0.866 NPV 0.911 0.892 0.866 0.857 0.823 0.880 TNR 0.585 0.622 0.736 0.792 0.792 0.830 Determining What Students Know F1 0.660 0.712 0.718 0.747 0.798 0.805 AUC 0.668 0.712 0.719 0.748 0.799 0.806 NPV 0.630 0.714 0.733 0.733 0.818 0.821 TNR 0.828 0.714 0.786 0.785 0.771 0.786 Note: AUC represents Area under the ROC Curve; NPV represent Negative Predicted Value; TNR represents True Negative Rate. that the zero-shot learning approach had an impressive NPV score of 0.911, the highest NPV score for feedback from Reacting to Errors activity, indicating the model\u2019s robustness in identifying true negative outcomes. When utilizing two-way few-shot learning approaches, the 5-shot learning approach presented the highest F1, AUC, and TNR scores at 0.867, 0.866, and 0.83, respectively. Lastly, for the lesson on Determining What Students Know, the zero-shot learning approach resulted in an F1 score of 0.66 and AUC of 0.668, the lowest across the three lessons. Interestingly, the zero-shot model had a higher TNR score of 0.828, indicating that the model was adept at identifying true negative cases for this lesson. The performance across the F1, AUC, and NPV metrics presented a general uptick with the adoption of the two-way few-shot learning method, with the 5-shot variant demonstrating the highest enhancements, reflected by F1, AUC, and NPV scores of 0.805, 0.806, and 0.821, respectively. 4.2 Results for RQ2: Using GPT-4 to Rephrase Incorrect Responses For RQ2, we examine the application of GPT-4 in transforming trainees\u2019 incorrect responses into a preferred format that exemplifies effective feedback, thereby demonstrating the correct manner to meet learner needs through feedback revision. To accomplish this, we utilized the most effective binary classifier identified from RQ1, the 5-shot classifier, to pinpoint incorrect responses within the three lessons. The identified responses were then compared with the responses identified by the expert human raters as described in Sec. 3.3. The intersection of the responses identified as incorrect by both the classifier and the human rates resulted in 36 responses for Giving Effective Praise, 42 responses for Reacting to Errors, and 53 responses for Determining What Students Know. The overlap between the five-shot classifier and human raters was 14 85%, 83%, and 78.6% for Giving Effective Praise, Reacting to Errors, and Determining What Students Know, respectively, as indicated by the TNR scores for the 5-shot approach shown in Table 7. 
As each training activity across the three lessons contained two paired examples to illustrate effective feedback in each scenario, we utilized the two paired examples per lesson to take a two-shot learning approach in exploring the effectiveness of GPT-4 in rephrasing student feedback. In this section, we report on the accuracy and responsiveness of the rephrased trainee responses by comparing the responses generated using zero-shot and two-shot GPT-4 models with responses rephrased by humans across the three lessons. The responses were assessed using a five-point Likert scale, i.e., Strongly Disagree (represented by -2), Disagree (represented by -1), Neutral (represented by 0), Agree (represented by 1), and Strongly Agree (represented by 2), as described in Sec. 3.5. Given the ordinal nature of Likert scale data, we utilize the Mann-Whitney U test, a non-parametric statistical method, to ascertain if the accuracy and responsiveness of the rephrased responses are statistically different. First, we examined the accuracy and responsiveness of the rephrased trainee responses for the lesson Giving Effective Praise, as presented in Fig. 3. We observed a higher median accuracy score of 1 for responses rephrased by GPT-4 (both Zeroshot and Few-shot) whereas the human rephrased responses received a median score of -1. As shown in Table 8, the accuracy scores of the rephrased responses generated using both GPT models (zero-shot and few-shot) were significantly higher than the responses rephrased by the humans (p < 0.001) indicating that the GPT-4 models Fig. 3: Distribution of accuracy and responsiveness scores from the lesson Giving Effective Praise Table 8: Statistics for rephrased responses from the lesson Giving Effective Praise. Metrics Zero-shot Few-shot Human Mean Accuracy 0.61 0.92 -0.36 Mean Responsiveness 0.22 0.44 0.44 words/response (mean) 17.28 21.72 12.28 words/response (SD) 6.28 18.94 5.88 15 were more effective at rephrasing the responses to the desired format in comparison to humans. While we did not observe a significant difference in the accuracy of the two GPT-based models, we observed a higher variance in the score of the zero-shot approach in comparison to the accuracy scores for the two-shot approach. When analyzing the responsiveness of the rephrased responses, we did not observe a significant difference between the responsiveness score of the GPT-4 rephrased responses and human rephrased responses; however, the human rephrased responses had a higher variance in comparison to the responsiveness scores of GPT-4 rephrased responses. The result demonstrated that the few-shot learning approach performed significantly better than the human in terms of the accuracy of the rephrased responses, while there was no significant difference in the responsiveness of the rephrased responses between the rephrased responses from the humans and the GPT-4 models. It indicated the effectiveness of few-shot learning on rephrasing the incorrect trainees\u2019 responses on the lesson of Giving Effective Praise. Similarly, we evaluated the rephrased responses provided by both GPT-4 models and human for the Reacting to Errors lesson, presented in Fig. 4. The GPT-4-generated responses achieved a median accuracy score of 1, outperforming the human-revised responses, which held a median score of 0. Upon examining the rating further, as presented in Table 9, the accuracy of responses rephrased using the few-shot approach was significantly higher than those rephrased by humans (p < 0.01). 
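As a side note on the statistics used in these comparisons, the Mann-Whitney U test on ordinal Likert ratings can be run with SciPy as sketched below; the ratings shown are hypothetical, not the study's evaluation data.

from scipy.stats import mannwhitneyu

# Hypothetical five-point Likert accuracy ratings (-2 .. 2) for rephrased responses
gpt_few_shot = [1, 2, 1, 0, 2, 1, 1, 2]
human_expert = [-1, 0, 1, -1, 0, -2, 1, 0]

stat, p_value = mannwhitneyu(gpt_few_shot, human_expert, alternative="greater")
print(f"U={stat:.1f}, p={p_value:.4f}")  # a small p suggests higher ratings for the few-shot model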
Even the zero-shot rephrased responses were more accurate than human alterations (p < 0.05). As for the responsiveness, most of the scores from the GPT-revised and human-revised responses were clustered between 0 and 1, with no significant difference in responsiveness between them. Additionally, the table also indicated that the average word count per response remained consistent between the GPT and human revisions, demonstrating that the GPT models, especially the few-shot approach, are adept at effectively rephrasing incorrect responses to Reacting to Errors without extensive modification to the original wording and sentence structure provided by the trainees. Fig. 4: Distribution of accuracy and responsiveness scores from the lesson Reacting to Errors 16 Table 9: Statistics for rephrased responses from the lesson Reacting to Errors. Metrics Zero-shot Few-shot Human Mean Accuracy 0.62 0.86 0.07 Mean Responsiveness 0.17 0.17 0.21 words/response (mean) 15.79 15.40 15.26 words/response (SD) 6.93 7.24 5.35 Finally, our evaluation of the rephrased responses from the lesson Determining What Students Know, as illustrated in Fig. 5 and Table 10, revealed no significant difference in the dimensions of accuracy and responsiveness across the three approaches. Notably, unlike the accuracy in the other two chapters, the responsiveness scores from the few-shot method were marginally higher than those rephrased by humans (p = 0.08), indicating comparable performance between the automated few-shot and zero-shot approaches and human expertise. At the same time, no statistical significance was observed across conditions for responsiveness. Interestingly, it was in the Determining What Students Know lesson that the classification model had its weakest performance among the three lessons. Fig. 5: Distribution of accuracy and responsiveness scores from the lesson Determining What Students Know Table 10: Statistics for rephrased responses from the lesson Determining What Students Know. Metrics Zero-shot Few-shot Human Mean Accuracy 0.68 0.70 0.85 Mean Responsiveness 0.28 0.30 0.06 words/response (mean) 22.72 20.83 20.09 words/response (SD) 18.51 16.12 8.10 17 5 Discussion Providing explanatory feedback is a fundamental requirement for delivering personalized feedback to learners. Our study explored the use of large language models (GPT-4 model) to automate the facilitation of explanatory feedback to novice tutors, where the main findings can be summarized in two folds: Firstly. GPT-4 models, especially for the few-shot approach, have the potential to accurately identify the correct and incorrect trainees\u2019 responses, which can be used to provide corrective feedback when training novice tutors on the scenario-based tasks. Our results indicate that despite a limited number of samples, the GPT-4 model can accurately identify the incorrect trainees\u2019 responses across three different tutor training lessons (i.e., Giving Effective Praise, Reacting to Errors, and Determining What Students Know). By comparing the classification performance with zero-shot learning, the few-shot learning approach, especially with increasing shots, generally tends to improve the model\u2019s classification performance. 
This improvement suggests that more examples might increase GPT\u2019s capability to recognize the many different ways to express a target concept like effort-based praise (e.g., \u201cGood effort on solving the problem\u201d), and distinguish it from a related concept, like outcome-based praise (e.g., \u201cGood job\u201d). The implications of this finding is profound, especially when considered alongside existing research on neural network learning in humans. Previous research [28] has illustrated that both the quantity and diversity of examples play a significant role in the learning process, with optimal outcomes achieved through exposure to a range of examples that are internally diverse yet distinct from other categories. Applying this principle to the context of LLM training suggests a strategy where examples within a category (e.g., praising effort) are maximally diverse, whereas examples across categories are closely aligned (e.g., comparing praise for effort with praise for outcomes). Pursuing this line of inquiry in future research could yield valuable insights into the mechanisms underpinning effective learning in both human and artificial neural networks. By systematically exploring the interplay between example diversity and learning efficacy, we can refine our understanding of how best to structure training data for LLMs like GPT-4, ultimately enhancing their utility in educational applications. Secondly, the capability of GPT-4, particularly when employing the few-shot learning approach, extends to effectively rephrasing trainees\u2019 incorrect responses into a desired format. Notably, GPT-4\u2019s performance in rephrasing incorrect responses to correct ones is on par with, and sometimes surpasses, that of experienced human tutors. This proficiency likely stems from GPT-4\u2019s advanced understanding of context and language nuances [26], enabling it to reconstruct trainees\u2019 incorrect responses to align more closely with the desired responses. The practical implications of the GPT4\u2019s capabilities are significant. The classified and rephrased responses generated by GPT-4 can be integrated into template-based feedback systems. Such integration facilitates the provision of real-time and explanatory feedback to novice tutors (or trainees) during their training sessions. 18 5.1 Implications The incorporation of the binary classifier and its generalizability in terms of performance holds significant implications for providing explanatory feedback. The classified results (i.e., correct or incorrect responses) on trainee\u2019s responses can be further integrated into the provision of corrective feedback as shown in Fig. 2. Specifically, by identifying the incorrect responses, our feedback system can use the template-based feedback to provide suggestions for trainees to consider, i.e., \u201cAI-generated feedback suggests that your response could focus more on praising the student for their efforts in the learning process. Providing corrective feedback is essential in the learning process for tutor training since it can assist the tutors in identifying their errors and improving the quality of their feedback [11]. Furthermore, this study demonstrated the potential of prompting GPT-4 models in rephrasing incorrect trainees\u2019 responses into the desired form. We measured the quality of rephrased responses from GPT-4 models and human experts in terms of their accuracy and responsiveness as described in Sec. 3.5. 
Based on our observations, the GPT-4 rephrased responses were consistently rated higher in accuracy, while their responsiveness was comparable to that of the human-generated responses. For instance, a tutor response classified as incorrect, \u201cGreat Job! But there is a tiny mistake, can you tell what was it ?\u201d, from the lesson Reacting to Errors was rephrased in the following manner:
Zero-shot: \u201cGood effort, Aaron! There\u2019s a small detail that needs tweaking. Can you spot what it might be?\u201d
Few-shot: \u201cGood effort! Let\u2019s take another look at this, can you walk me through your process?\u201d
Human: \u201cGreat Job! But there is a tiny mistake, can you tell what was it by going over each step?\u201d
The incorrect response used the word \u201cmistake\u201d, which failed to effectively reframe the mistake as a learning opportunity and is considered an ineffective tutoring strategy as indicated by [3, 21]. In contrast, both the zero-shot and few-shot responses were more effective in accounting for the student\u2019s errors when rephrasing the feedback. The zero-shot rephrased response avoids direct words about the student\u2019s mistake and implicitly mentions the misconception (\u201cThere\u2019s a small detail that needs tweaking.\u201d), whereas the few-shot approach encourages the student to revisit the problem (\u201cLet\u2019s take another look at this\u201d), which is also in line with the effective practice of reacting to errors [3, 21]. It should be noted that both the zero-shot and few-shot rephrased responses also changed the praise of the student\u2019s outcome (i.e., \u201cGreat Job\u201d) to praise of the student\u2019s learning effort (i.e., \u201cGood effort\u201d). However, the human expert failed to rephrase the incorrect response into the desired form even though the expert changed some words from the original incorrect response. Additionally, when the accuracy of a rephrased response was rated at 1 (i.e., Agree on a five-point Likert scale) or 2 (i.e., Strongly Agree) but the responsiveness was rated from -2 (i.e., Strongly Disagree) to 0 (i.e., Neutral), the rephrased response could still present an effective response. For example, the incorrect response \u201cI think you are doing great.\u201d from Giving Effective Praise was rephrased as below:
Zero-shot: \u201cYou\u2019re showing excellent effort in tackling your math work, Carla. Your perseverance is commendable.\u201d
Few-shot: \u201cI see you\u2019re making good progress with your efforts. Keep it up!\u201d
Human: \u201cYou are doing a great job!\u201d
The incorrect response \u201cI think you are doing great.\u201d failed to praise the student on their learning efforts or learning actions and instead praised their learning outcome, which is considered ineffective praise as indicated in [3, 21]. Both the zero-shot and few-shot rephrased responses were rated an accuracy of 2 (i.e., Strongly Agree) but a responsiveness of 0 (i.e., Neutral). Both zero-shot and few-shot rephrased responses demonstrated praise of the student\u2019s learning efforts as underlined in the examples, but both responses changed many words, making them less similar to the original incorrect response. In comparison, the responsiveness of the human rephrased response was rated at 1 since only several words were changed from the original incorrect response. However, the human expert failed to revise the praise correctly, and the rephrased response was rated at -1 (i.e., Disagree). 
The rephrased praise still focused on the student\u2019s learning outcome (i.e., \u201cgreat job!\u201d) rather than their learning efforts, which is not considered an effective response for praising a student as indicated by [3]. As summarized by the evaluation results of both GPT-4 and human rephrased responses, we proposed a framework for determining the quality of the rephrased responses, shown in Fig. 6.
Fig. 6: Framework for determining the quality of the rephrased responses.
This framework (Fig. 6) aims to guide future work to understand the extent to which rephrased responses are considered high quality. When the accuracy of the rephrased response is rated at 1 or 2, the rephrased response is considered acceptable. Based on our observation, the optimal rephrased responses should be high in both accuracy and responsiveness (i.e., the Excellent area in Fig. 6), which could guide the trainees to understand the desired form of the responses and also help them know where they did not perform well while providing their scenario-specific feedback. Since the dimension of responsiveness aims to minimize the changes of words in the responses, we expect the trainees to be able to locate the parts of the sentence that are incorrect and rephrase them accordingly. Similarly, high accuracy and lower responsiveness (i.e., the Good area in Fig. 6) could guide the trainee to recognize the desired quality of the feedback. However, as shown in the above example, the low responsiveness of the rephrased responses is an indicator of the modifications required in the original incorrect responses, which may not be as helpful to the trainees if the rephrasing resulted in major structural and semantic changes that are harder to learn and retain. Finally, we defined responses in two areas as undesirable responses, as illustrated in Fig. 6. The undesirable responses, marked by a low accuracy score (\u22640), undermine the effectiveness of the feedback [3]. While the rephrased responses might demonstrate high responsiveness, the low accuracy of the response is still detrimental to its effectiveness and, as such, is not desirable. The rephrased feedback (\u201cYou are doing a great job!\u201d), as presented above, is an example of a rephrased response with a low accuracy but high responsiveness score.
5.2 Limitations and Future Work
Evaluating the impact of the proposed feedback system on tutoring practice. While our current findings demonstrated the potential of GPT models in providing explanatory feedback and appropriately rephrased responses, there is a need for a more comprehensive evaluation of such feedback\u2019s effectiveness in tutor training. In future work, we plan to investigate the influence of the feedback on tutoring practice. Specifically, we will examine the direct effects of our feedback on tutors\u2019 skill acquisition, retention, and application in real-world tutoring scenarios. By conducting longitudinal studies with both control and experimental groups, we aim to gain a clearer understanding of the long-term advantages and possible challenges of our approach. Such insights will not only shed light on the efficacy of our feedback system but also inform potential refinements to enhance the training process for novice tutors.
Using advanced prompt strategies for explanatory feedback. In our current study, we utilized zero-shot and few-shot prompt strategies to identify correct or incorrect trainees\u2019 responses (RQ1) and to rephrase these incorrect responses appropriately (RQ2). 
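Referring back to the quality framework of Fig. 6 discussed above, a small helper like the following could encode the regions described in the text. This is an illustrative sketch only: the accuracy thresholds mirror the text (1 or 2 acceptable, 0 or below undesirable), while the responsiveness cut-off between Excellent and Good, and the function name, are our assumptions.

# Illustrative only: one possible encoding of the Fig. 6 quality regions.
def rephrasing_quality(accuracy: int, responsiveness: int) -> str:
    """Map five-point Likert ratings (-2..2) to the quality regions of Fig. 6."""
    if accuracy <= 0:
        return "Undesirable"   # inaccurate praise undermines the feedback
    if responsiveness >= 1:
        return "Excellent"     # accurate and close to the trainee's own wording
    return "Good"              # accurate but heavily rewritten

# Example checks matching the cases discussed above.
assert rephrasing_quality(accuracy=2, responsiveness=0) == "Good"
assert rephrasing_quality(accuracy=-1, responsiveness=1) == "Undesirable"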
While our proposed prompting strategies demonstrated promising results, there is potential for further improvement. We are considering the adoption of more advanced prompt strategies. Two such strategies that have caught our attention are Tree of Thoughts [29] and Graph of Thoughts [30]. These prompting strategies are expected to offer a more nuanced and structured way of understanding the task context and generating relevant information, potentially leading to more accurate and insightful results. A comprehensive exploration of these advanced prompting strategies is beyond the scope of our current study. Thus, in future work, we aim to delve deeper into these prompt strategies to investigate their efficacy and potential to improve the quality of explanatory feedback.
Generalizability across other tutor training lessons. While our study demonstrated promising results on providing explanatory feedback primarily for three lessons, to further explore the efficacy of our feedback system, broader evaluations of the feedback system on other lessons, such as Using Motivational Strategies and Ensuring Conceptual Understanding, are also important. All the lessons on our platform introduce tutors to unique teaching scenarios and challenges. Ensuring that our feedback system is equally adept at handling the intricacies of each lesson is crucial for its overall success. Thus, it is important to evaluate the efficacy of our developed feedback system across all lessons, ensuring that the feedback provided is accurate, relevant, and conducive to the tutor training process, continuously guiding tutors towards pedagogical excellence.
Enhancing explanatory feedback through sequence labeling. The primary objective of this study is to provide automatic explanatory feedback. We have demonstrated a demo of our developed explanatory feedback system, shown in Fig. 2. To further unlock the potential of automatic explanatory feedback, we propose a significant enhancement: the integration of a sequence labeling method, as originally introduced in the work by [5, 31]. In their research, they employed a color-coded highlighting approach to distinguish between the effective and ineffective components of trainees\u2019 responses, aiming to facilitate a clearer comprehension of correctness or incorrectness. By incorporating this sequence labeling approach in the provision of explanatory feedback, we expect that the feedback can convey more corrective information, fostering a deeper understanding among trainees regarding the construction of effective responses.
Enhancing trainee response evaluation beyond binary classification. Our study leveraged GPT-4\u2019s capabilities to categorize trainee responses into binary classes: correct or incorrect. However, this dichotomous approach may be overly simplistic and potentially limiting for real-world applications where a more nuanced understanding is required. Acknowledging this, we recognize the necessity of developing a more granular evaluation scale. A tiered ranking system, perhaps on a five- or ten-point scale, could provide a more detailed and effective assessment of trainee responses, aligning more closely with the complexities of real-world scenarios. This insight highlights a limitation in our current methodology and underscores the potential for future research to explore more sophisticated classification frameworks that can capture the varied spectrum of trainee performance more accurately. 
Strategies for safeguarding privacy information in real-world tutoring. Our study observed that responses from trainee tutors across three different lessons often included the use of student names, as in \u201cKevin, good job getting the problem correct!\u201d This pattern suggests a tendency among some tutors to personalize their feedback by mentioning students by name during actual tutoring sessions. To further evaluate the practices of novice tutors within real-world tutoring contexts, it is necessary to collect and archive transcripts of tutoring dialogues in our database. To protect data privacy, we intend to anonymize any sensitive information, such as names, locations, and ages, contained within these transcripts.
Enhancing automated explanatory feedback quality through human-in-the-loop design. In our future work, we aim to explore the enhancement of automated explanatory feedback quality through the incorporation of a human-in-the-loop design. This approach will involve integrating human interaction directly into the feedback loop, enabling a ranking system where responses generated by Large Language Models (LLMs) are reviewed and prioritized based on human judgment. Such a mechanism is expected to provide stronger signals to the AI, guiding it towards producing outputs that are more aligned with human expectations.
Crowd-sourcing the evaluation of rephrased responses from trainees. Inviting educational experts to evaluate the quality of rephrased responses is often time-consuming and impractical, especially when dealing with a large volume of tutor responses. To address this issue, we suggest a crowd-sourcing approach for rating the rephrased responses. We plan to include the question (shown in Table 11) in the lesson and invite tutor trainees to answer it. Table 11 presents the scenario question and a response from a previous trainee which was identified as an incorrect response. We will employ the large language models to rephrase the incorrect trainee\u2019s response and also keep the original incorrect response in the question. The new trainees are invited to rate the quality of the responses based on accuracy and responsiveness on a five-point scale. We also incorporate the original response for trainees to rate. Since our binary classifier is not perfect and misclassified responses might exist, we also want the trainees to provide their ratings on the original responses. By doing so, we can obtain their ratings of the rephrased responses, and we expect our trainees to gain a better understanding of the effective form of responses in different training lessons.
Table 11: Sample question for crowd-sourcing the ratings of rephrased responses from the trainee tutors.
Scenario: What exactly would you say to Cindy to begin helping her solve the math problem?
Response: Are you familiar with the definition and notation for angle and side congruence?
Rephrased Responses | Accuracy | Responsiveness
1. How would you define when angles or sides in a triangle are congruent? What does that mean to you? | 4 | 3
2. Can you explain your understanding of angle and side congruence, and their notations? | 4 |
3. What do you understand about the concept of congruence in relation to sides and angles? | |
4. Are you familiar with the definition and notation for angle and side congruence? | |
Explanatory feedback for the synchronous tutoring session. 
Our study demonstrated the capability of GPT-4 models to provide explanatory feedback and to adeptly rephrase tutor responses into a desired format. As shown in Sec. 4.2, our proposed few-shot learning approach could achieve performance comparable to human experts in rephrasing responses appropriately, which could help reduce the use of inappropriate instructional responses during the student learning process. Given our current findings, we expect that integrating our developed explanatory feedback system into synchronous text-based online tutoring could facilitate the tutoring process. Previous studies [32\u201334] have emphasized the importance of showing effective responses to students. Given the growing demand for qualified tutors, our feedback system, when integrated with synchronous tutoring platforms, can equip novice tutors to deliver timely and appropriate instructional feedback. To assess the influence of our explanatory feedback system on tutoring, we recommend conducting randomized controlled experiments to further examine its efficacy. In the experiment setup, tutors in the experimental group will use our explanatory feedback system to provide instructional responses, whereas tutors in the control group will follow business-as-usual tutoring. The investigation aims for a comprehensive understanding of the system\u2019s strengths and areas needing improvement.
6 Conclusion
We aimed to provide automatic explanatory feedback to enhance tutor training. Our study explored the potential of the GPT-4 model in delivering real-time explanatory feedback for open-ended questions selected from three tutor training lessons. We first prompted the GPT-4 model to act as a binary classifier to identify incorrect tutor responses. With well-designed prompting strategies, the GPT-4 model, using a few-shot approach, accurately identified incorrect trainee responses across all three lessons we examined. We then used the GPT-4 model to rephrase incorrect responses into the desired form. Our results demonstrated that the quality of rephrased responses provided by GPT-4, using a few-shot approach, achieved performance comparable to that of human experts. These results indicate that our proposed automatic explanatory feedback system shows promise in providing real-time feedback. Our study sheds light on the development of feedback provision for learners. By integrating our feedback system, we expect it can facilitate the tutor training process and further alleviate the challenges associated with recruiting qualified tutors.
Acknowledgments. This work is supported by funding from the Richard King Mellon Foundation (Grant #10851). Any opinions, findings, and conclusions expressed in this paper are those of the authors. We also wish to express our gratitude to Dr. Ralph Abboud for his invaluable guidance and recommendations, and to the members of Ken\u2019s lab for their insightful feedback on this work. Special thanks to Ms. Jiarui Rao for her assistance in verifying the rating scheme.
Declarations
Ethics Approval: The study presented in this paper obtained Institutional Review Board (IRB) approval from Carnegie Mellon University.
Conflicts of Interest: The authors have no relevant financial or non-financial interests to disclose, nor conflicting interests nor competing interests."
18
+ }
intro_8K/test_introduction_short_2405.00972v1.json ADDED
@@ -0,0 +1,20 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.00972v1",
3
+ "title": "CACTUS: Chemistry Agent Connecting Tool-Usage to Science",
4
+ "abstract": "Large language models (LLMs) have shown remarkable potential in various\ndomains, but they often lack the ability to access and reason over\ndomain-specific knowledge and tools. In this paper, we introduced CACTUS\n(Chemistry Agent Connecting Tool-Usage to Science), an LLM-based agent that\nintegrates cheminformatics tools to enable advanced reasoning and\nproblem-solving in chemistry and molecular discovery. We evaluate the\nperformance of CACTUS using a diverse set of open-source LLMs, including\nGemma-7b, Falcon-7b, MPT-7b, Llama2-7b, and Mistral-7b, on a benchmark of\nthousands of chemistry questions. Our results demonstrate that CACTUS\nsignificantly outperforms baseline LLMs, with the Gemma-7b and Mistral-7b\nmodels achieving the highest accuracy regardless of the prompting strategy\nused. Moreover, we explore the impact of domain-specific prompting and hardware\nconfigurations on model performance, highlighting the importance of prompt\nengineering and the potential for deploying smaller models on consumer-grade\nhardware without significant loss in accuracy. By combining the cognitive\ncapabilities of open-source LLMs with domain-specific tools, CACTUS can assist\nresearchers in tasks such as molecular property prediction, similarity\nsearching, and drug-likeness assessment. Furthermore, CACTUS represents a\nsignificant milestone in the field of cheminformatics, offering an adaptable\ntool for researchers engaged in chemistry and molecular discovery. By\nintegrating the strengths of open-source LLMs with domain-specific tools,\nCACTUS has the potential to accelerate scientific advancement and unlock new\nfrontiers in the exploration of novel, effective, and safe therapeutic\ncandidates, catalysts, and materials. Moreover, CACTUS's ability to integrate\nwith automated experimentation platforms and make data-driven decisions in real\ntime opens up new possibilities for autonomous discovery.",
5
+ "authors": "Andrew D. McNaughton, Gautham Ramalaxmi, Agustin Kruel, Carter R. Knutson, Rohith A. Varikoti, Neeraj Kumar",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "cs.CL",
9
+ "cats": [
10
+ "cs.CL",
11
+ "cs.AI",
12
+ "cs.LG",
13
+ "physics.chem-ph",
14
+ "q-bio.QM"
15
+ ],
16
+ "label": "Original Paper",
17
+ "paper_cat": "LLM AND Agent",
18
+ "gt": "Large Language Models (LLMs) are foundation models that are combined under a single paradigm to support various tasks or services. Despite being trained on vast corpora of data, these transformer-based LLMs have a limited understanding of the curated or parsed text.[Chiesurin et al., 2023]. Current research has revealed the possibility of augmenting LLMs with tools that aid in efficiently solving various problems and tasks [Mialon et al., 2023, Xu et al., 2023, Qin et al., 2023]. Previous work has also shown that providing specific prompts, curated towards a specific task, can enhance the time and quality of the text generated by the models [Cai et al., 2023]. Combining these two approaches is the Tool Augmented Language Model (TALM) framework, detailed in Parisi et al. [2022], which outperforms existing models on the tasks it is configured for. However with any of these approaches, although the generated answers may \[email protected] arXiv:2405.00972v1 [cs.CL] 2 May 2024 CACTUS A PREPRINT appear correct, LLMs fail to reason or demonstrate subject knowledge as is typically demonstrated by humans [Huang et al., 2023, Kambhampati, 2024]. Mistakes made by the model due to the statistical relationships it learned from data might appear in a similar way across different applications [Bommasani et al., 2021]. If foundation models become integrated with important systems that leverage the foundation model\u2019s ability to quickly adapt to many different tasks and situations, failures could result in significantly unwanted outcomes. The resourceful LLMs like GPT4 [OpenAI, 2023], LLaMA [Touvron et al., 2023], Gemma [Team et al., 2024], MPT [Team, 2023], Falcon [Almazrouei et al., 2023], and Mistral [Jiang et al., 2023] show improved performance over a range of activities [Chiang et al., 2024, Zheng et al., 2023, Hendrycks et al., 2020]. Despite these strides, the inherent limitations of such models become apparent when faced with challenges that require access to dynamic, real-time, or confidential data, which remain inaccessible within their static training datasets. This gap underscores a critical need for LLMs to evolve beyond their current capacities, leveraging external APIs to fetch or interact with live data, thereby extending their utility in real-world applications [Parisi et al., 2022]. In the domain-specific applications, particularly within the chemical, biological and material sciences, the limitations of LLMs are even more pronounced. The intricate nature of chemical data coupled with the dynamic landscape of drug discovery and development, presents a complex challenge that pure computational models alone cannot address effectively. Recognizing this, the integration of cheminformatics tools with the cognitive and analytical ability of LLMs offers a promising pathway. At the forefront of this transformation are Intelligent Agents, autonomous entities capable of designing, planning, and executing complex chemistry-related tasks with exceptional efficiency and precision [Boiko et al., 2023]. These systems are not only capable of utilizing a variety of LLMs for specific tasks but also adept at employing APIs and internet search tools to gather relevant material and data. For example, integrating an Agent into large, tool-based platforms such as KNIME [Berthold et al., 2007] or Galaxy [Goecks et al., 2010] could form a natural language interface between the user and their analysis. 
By acting as intermediaries, these Agents could significantly streamline the process of scientific discovery and autonomous experimentation with or without human in the loop. Towards that end and taking inspiration from ChemCrow[Bran et al., 2023], an LLM-assisted chemistry synthesis planner, we have developed an Intelligent Cheminformatics Agent focused on assisting scientists with de novo drug design and molecular discovery. Cheminformatics focuses on storing, retrieving, analyzing, and manipulating chemical data. It provides the framework and methodologies to connect computational linguistics with chemical science. This synergistic approach aims to leverage the strengths of both domains by facilitating a more comprehensive and effective exploration of therapeutic compounds, streamlining the drug development process, and ultimately accelerating the discovery from conceptualization to clinical application. In this work, we developed CACTUS (Chemistry Agent Connecting Tool Usage to Science) an LLM-powered agent that possesses the ability to intelligently determine the most suitable tools for a given task and the optimal sequence in which they should be applied, effectively optimizing workflows for chemical research and development. The implications of these intelligent agents are far-reaching. They enable the autonomous operation of complex tasks from data analysis to experimental planning, hypothesis generation, testing, and push the boundaries of what can be achieved through computational chemistry. The synergistic relationship between human intelligence, artificial intelligence, and specialized software tools holds the potential to transform the landscape of drug discovery, catalysis, material science, and beyond. This relationship and combination of domains makes the molecular discovery process more efficient, accurate, and innovative. As we stand on the precipice of this new era in cheminformatics, the integration of LLMs and computational tools through intelligent agents like CACTUS promises to unlock a future where the limits of scientific discovery are bound only by the depths of our imagination.",
19
+ "main_content": "Tool-augmented language models consist of two major components: external tools and language models. This section discusses the approaches used to implement the language model agent and provides a focused look at the tools used. We also detail the strategies used when prompting our agent and how we performed benchmarking. Each of these steps is a critical component of forming a complete intelligent agent able to solve a wide range of problems with the added ability of quick model swapping.
2.1 The Agent
An important consideration when building a TALM is the framework in which it will be implemented. We have selected the commonly used open-source platform, LangChain [Chase, 2022], for this purpose. This framework simplifies the integration of prompts with LLMs through a comprehensive set of pre-built Python modules known as \"chains\". It also provides convenient integration with popular LLM hosting/inference platforms such as the OpenAI API and HuggingFace Transformers [Wolf et al., 2020]. CACTUS utilizes LangChain\u2019s implementation of a custom MRKL agent [Karpas et al., 2022], which can be broken into three parts: tools, LLMChain, and agent class. The tools in this instance are a collection of cheminformatics helper functions that wrap well-known Python libraries into well-described tools for an agent to use. These tools are explained in much more detail in Section 2.2. The LLMChain is a LangChain-specific feature that helps chain the tools and the agent together. This is the prompt provided to the LLM when running any inference; it helps to instantiate the model and parse the user input. In CACTUS, we provide a prompt that guides the agent to answer cheminformatics questions by describing the typical steps involved in answering such questions. The last requirement for CACTUS is the agent class. These are also LangChain-implemented functions that are used to interpret the user input after the initial prompt and make decisions on which actions to take to best solve the question. CACTUS sticks with a general-purpose implementation of the zero-shot agent class that uses the ReAct [Yao et al., 2022] framework to determine which tool to use from the tool\u2019s description. This combination of tools, LLMChain, and zero-shot agent makes CACTUS an extensible LLM tool that can quickly integrate new tools to solve a range of cheminformatics questions. Here, we introduce a mathematical formulation to describe the key components and processes of the CACTUS framework. Let T = {t_1, t_2, ..., t_n} be the set of cheminformatics tools available to CACTUS as discussed above, where each tool t_i is a function that takes an input x_i and produces an output y_i:
    t_i(x_i) = y_i    (1)
The LLMChain is represented as a function L that takes a user input u and a set of tools T as input, and outputs a sequence of actions A = (a_1, a_2, ..., a_m):
    L(u, T) = A    (2)
Each action a_i in the sequence A corresponds to the application of a specific tool t_j on an input x_j, resulting in an output y_j:
    a_i = t_j(x_j) = y_j    (3)
The zero-shot agent class is modeled as a function Z that takes the user input u, the set of tools T, and the LLMChain output A as input, and produces a final output o:
    Z(u, T, A) = o    (4)
The final output o is the result of executing the sequence of actions A determined by the LLMChain, given the user input u and the available tools T. 
The ReAct framework used by the zero-shot agent class is represented as a function R that takes the user input u, the set of tools T, and the tool descriptions D = {d_1, d_2, ..., d_n} as input, and outputs the most appropriate tool t_k to use:
    R(u, T, D) = t_k    (5)
This combination of cheminformatics tools, LLMChain, and zero-shot agent makes CACTUS an extensible LLM tool that can quickly integrate new tools to solve a range of cheminformatics questions.
Figure 1: General workflow of the CACTUS Agent, detailing how the LLM interprets an input to arrive at the correct tool to use to obtain an answer. Starting from the user input, CACTUS follows a standard \"Chain-of-thought\" reasoning method with a Planning, Action, Execution, and Observation phase to obtain an informed output.
2.2 Cheminformatics Tools
For the purpose of creating a robust LLM agent able to answer a variety of cheminformatics questions, CACTUS includes a wide range of tools integrating common functions found in Python libraries such as RDKit [Landrum et al., 2013] and SciPy [Virtanen et al., 2020], along with interfaces to databases such as PubChem [Kim et al., 2023], ChEMBL [Davies et al., 2015], and ZINC [Irwin et al., 2020]. These tools allow for a chat-based analysis of molecules starting with a SMILES string and ending with information such as molecular descriptors, similarity, or absorption, distribution, metabolism, and excretion (ADME) attributes. The model consists of ten different tools providing information on various descriptors for any given chemical compound used as input. Table 1 contains a list of currently available tools that can assist in obtaining different physicochemical properties and molecular descriptors of the input chemical compounds. This includes molecular weight, log of the partition coefficient (LogP), topological polar surface area (TPSA), quantitative estimate of drug-likeness (QED), and synthetic accessibility (SA) of the input chemical compounds. Moreover, using the BOILED-Egg method, CACTUS can also estimate pharmacokinetic properties like blood-brain barrier permeability and gastrointestinal absorption of any given chemical compound [Daina and Zoete, 2016]. Our model also implements drug-likeness, PAINS, and Brenk filters to identify structural and toxicity alerts. All these tools in our model will assist in identifying and screening both currently available and new lead compounds. Currently restricted to using a simple SMILES as input, future releases will allow for varied user input (compound name, molecular formula, InChI key, CAS number, SMILES, ChEMBL ID, or ZINC ID), where the agent will first convert it to SMILES notation and then use it as input for the available tools.
Tool | Type | Description
MolWt | Float [0, \u221e] | Molecular weight
LogP | Float [\u2212\u221e, \u221e] | Predicted partition coefficient
TPSA | Float [0, \u221e] | Topological Polar Surface Area
QED | Float [0, 1] | Quantitative Estimate of Druglikeness
SA | Float [1, 10] | Synthetic Accessibility
BBB Permeant | String [Yes, No] | Is in \"yolk\" of BOILED-Egg model
GI Absorption | String [Low, High] | Is in \"white\" of BOILED-Egg model
Druglikeness | Boolean | Passes Lipinski Rule of 5
Brenk Filter | Boolean | Passes Brenk filter
PAINS Filter | Boolean | Passes PAINS filter
Table 1: Cheminformatics tools currently supported by CACTUS. These tools provide a comprehensive assessment of molecular and physicochemical properties. 
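As a concrete illustration of how such a tool and agent might be wired together, here is a minimal sketch (ours, not the released CACTUS source): an RDKit property calculator wrapped as a LangChain tool and handed to a zero-shot ReAct agent. The model name, the vLLM endpoint address, and the single-tool setup are assumptions for illustration.

# Minimal sketch of a CACTUS-style tool plus zero-shot ReAct agent (classic LangChain API).
from langchain.agents import AgentType, initialize_agent
from langchain.llms import OpenAI
from langchain.tools import tool
from rdkit import Chem
from rdkit.Chem import Descriptors


@tool
def molecular_weight(smiles: str) -> str:
    """Calculate the molecular weight of a molecule from its SMILES string."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return "Invalid SMILES"
    return f"{Descriptors.MolWt(mol):.2f}"


# Assumed setup: a vLLM server exposing an OpenAI-compatible endpoint for an
# open-source model; swap in whichever hosted LLM is actually being served.
llm = OpenAI(
    model_name="mistralai/Mistral-7B-Instruct-v0.2",
    openai_api_base="http://localhost:8000/v1",
    openai_api_key="EMPTY",
    temperature=0,
)

# Zero-shot ReAct agent: the tool is selected based on its docstring description.
agent = initialize_agent(
    tools=[molecular_weight],
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)

agent.run("What is the molecular weight of CCO?")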
Apart from conversions between different molecular representations, all tools require input in the SMILES format. By leveraging these tools, CACTUS enables researchers to make informed decisions in the molecular discovery process and prioritize compounds with the most promising characteristics. 2.3 Prompting Strategy One important aspect investigated was the significance of the prompt for the agent. Through the LangChain implementation of LLM agents, there is a default prompt that provides a generic instruction of what tools are available and what the task of the LLM is. However, this is not necessarily primed for understanding domain-specific information. To test the hypothesis we ran 2 scenarios: one where we left the default prompt unchanged and only included tool descriptions (Minimal Prompt), and one where we modified the prompt to align the agent more with the domain of chemistry (Domain Prompt). The belief is that a domain aligned prompt will steer the LLM towards better interpretation of the questions being asked, and therefore be more effective in answering user queries. Since we were using a wide range of LLMs for testing, the minimal prompt also included model-specific tokens so that we weren\u2019t unfairly evaluating models against the domain prompt. 2.4 Benchmarking Evaluation of domain-specific TALMs can be a difficult task but we can follow the examples set by general benchmarking suites [Li et al., 2023, Farn and Shin, 2023, Gen, 2023, Xu et al., 2023]. Therefore, we rely on sets of questions that replicate the typical questions the agent would see and score how many the agent is able to answer correctly without requiring extra prompting effort from the user (i.e. having to rephrase the typed question to get a correct answer). To evaluate CACTUS we created sets of cheminformatics questions that test 3 sets of questions depending on the output of the tool. The first set is of qualitative questions, and is represented by questions that return answers like Yes/No, 4 CACTUS A PREPRINT or True/False. The second is quantitative, which represents tools that return numerical values to be interpreted by the agent. The third is a combination of both qualitative and quantitative which we call full or combined set. Table 2 highlights examples of questions passed as user-input to the CACTUS agent. The qualitative and quantitative datasets each contain 500 questions, and the combined dataset contains 1000. Most tests will be done on the combined dataset as we want to test the LLM agent\u2019s ability to perform a diverse set of tasks. Qualitative Questions Question Step Answer Does CCON=O pass the blood brain barrier? Use BBB Tool w/ SMILES Yes What is the GI absorption of C#C? Use GI tool w/ SMILES Low Quantitative Questions Question Step Answer What is the QED of CCCC=O? Use QED Tool w/ SMILES 0.44 What is the TPSA of C(CS)O Use TPSA Tool w/ SMILES 20.23 Table 2: Table demonstrating examples of the questions asked of the CACTUS agent in the cheminformatics benchmark used in this paper. 3 Results and Discussion The implementation of CACTUS represents a significant step forward in the field of cheminformatics, offering a powerful and flexible tool for researchers and chemists engaged in molecular discovery and drug design. The benchmarking studies conducted on various 7b parameter models demonstrate the robustness and efficiency of the CACTUS framework, highlighting its potential to streamline and accelerate the drug discovery process as an example. 
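The benchmark scoring described in Section 2.4 above was judgment-based (replies that did not coherently address the question were marked incorrect). Purely for illustration, a rough automated pass over such a question set could look like the sketch below; the file name, column names, and the substring check are our assumptions, and the agent object is the one constructed in the earlier sketch.

# Illustrative scoring loop (ours, not the paper's evaluation code).
import pandas as pd

benchmark = pd.read_csv("qualitative_questions.csv")  # assumed columns: question, answer

correct = 0
for _, row in benchmark.iterrows():
    reply = agent.run(row["question"])               # agent built as in the sketch above
    if str(row["answer"]).lower() in reply.lower():  # loose check; extra text is tolerated
        correct += 1

print(f"Accuracy: {correct / len(benchmark):.1%}")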
3.1 Benchmarking and Performance Evaluation The performance of CACTUS was evaluated using a comprehensive set of 1000 questions, covering 10 different tools (Table 1, with and without the domain prompt on each 7b parameter model as shown in the Figure 2. Correct answers were scored as correct, while wrong answers, inability to converge on an answer, or inability to use the provided tool correctly were marked as incorrect. In this paper, we did not differentiate between incorrect tool usage and simply providing a wrong answer. Any answers that did not coherently address the question were considered incorrect. We accepted correct answers that contained additional formatted text after the correct answer, although this is not the preferred format. This additional information can be programmatically removed before returning the response to the user, or further prompts can be engineered to reduce additional text. Each type of question in the full question set was asked 100 times, resulting in 10 types of questions corresponding to the 10 tools provided in Table 1. This approach allowed us to identify which tools posed a greater challenge for the model, and where improvements to either the tool description or model prompt could be made. The results shown in Figure 2 highlight the importance of domain-specific prompting in improving the accuracy of the model\u2019s responses; particularly for qualitative questions. This finding aligns with recent research emphasizing the role of prompt engineering in enhancing the performance of language models [Liu et al., 2023]. In the progression of AI and its applications in scientific inquiry, it is crucial to analyze the comparative effectiveness of various models in handling domain-specific tasks. The benchmarking analysis presented in Figure 3 offers significant insights into the performance of different language models when prompted with both minimal and domain-specific information. A comprehensive review of the performance data across the full spectrum of question types reveals that Gemma-7b and Mistral-7b models showcase robustness and versatility, performing admirably regardless of the nature of the prompt. Their consistent accuracy across different types of questions ranging from physiochemical properties like druglikeness and blood-brain barrier permeability to more complex metrics like quantitative estimate of drug-likeness (QED) highlight their reliability for a broad range of inquiries within the domain of molecular science. In contrast, models like Falcon-7b exhibit a noticeable disparity between performances with minimal and domain prompts. This variability suggests that Falcon-7b, while capable, may require more fine-tuned prompting to leverage its full potential effectively. The substantial difference in performance based on the prompt type points to an intrinsic model sensitivity to input structure and content, which can be pivotal in crafting effective inquiry strategies. Furthermore, the successful 5 CACTUS A PREPRINT (a) Benchmark performance on the Gemma-7b model with a minimal prompt on each of the 10 question types. (b) Benchmark performance on the Gemma-7b model with a domain prompt on each of the 10 question types. 
Figure 2: Comparison of the Gemma -7b model with different prompting strategies on the full question set benchmark shows significant improvement in the qualitative question set when comparing the minimal prompt (Figure 2a) to the domain prompt (Figure 2b), while demonstrating similar performance in the quantitative question set. deployment of smaller models, such as Phi2 and OLMo-1b, on consumer-grade hardware (Figure 4) highlights the potential for democratizing access to powerful cheminformatics tools, enabling researchers with limited computational resources to harness the capabilities of CACTUS. 3.1.1 Open Source Models in Varied Settings This comprehensive model comparison and analysis has broader implications for the employment of open-source models in scientific environments. The ability of models to perform well with domain-specific prompts is particularly 6 CACTUS A PREPRINT encouraging, as it implies that with proper configuration, open-source models can be highly effective tools. The adaptability demonstrated by the Gemma-7b and Mistral-7b models indicates their potential for widespread applicability across various computational settings, from high-performance clusters to more modest research setups. Moreover, the ability to effectively prompt open-source models opens the door to their use in a variety of scientific contexts. It allows researchers to customize models to their specific domain, potentially bridging the gap between generalized AI capabilities and specialized knowledge areas. Figure 3: Comparison of model performance among 7B parameter models using minimal and domain-specific prompts. The Gemma-7b and Mistral-7b models demonstrate strong performance and adaptability across prompting strategies, highlighting their potential for widespread applicability in various computational settings, from high-performance clusters to more modest research setups. The flexibility and performance of these models have significant implications for scientific research, particularly in fields like synthetic organic chemistry and drug discovery. For researchers in these domains, the ability to utilize open-source models effectively can accelerate the discovery process, enhance predictive accuracy, and optimize computational resources. The insights from this benchmarking study provide a roadmap for selecting and tailoring models to specific research needs, thereby maximizing their utility in advancing scientific goals. The benchmarking study of the selected 7b parameter models serves as a testament to the progress in AI-driven research tools. It highlights the necessity of prompt optimization and the promise of open-source models in diverse scientific inquiries. The analysis underscores the potential of these models to become integral components in the computational chemist\u2019s toolkit, paving the way for innovative breakthroughs in molecular design and drug discovery. 3.2 Hardware Performance and Model Efficacy The deployment of CACTUS models through vLLM offers a significant advantage by optimizing performance across a variety of GPUs used for LLM inference. In our benchmarking studies we utilized three types of NVIDIA GPUs: the A100 80GB, V100, and RTX 2080 Ti. Our objective was to evaluate the performance of models under different combinations of model size, GPU type, and prompting strategy (minimal or domain-specific). The performance metric was determined by the inference speed in relation to the model\u2019s accuracy. 
Figure 4 shows the summary of LLMs deployed under different conditions (GPU hardware used, prompt, and benchmark set used) and how well they performed. The efficiency of these models across diverse hardware highlights their potential for widespread implementation in a range of research settings. 7 CACTUS A PREPRINT The models evaluated include Gemma-7b, Falcon-7b, MPT-7b, Llama2-7b, Mistral-7b, as well as two smaller models, Phi2 and OLMo-1b. The inclusion of these smaller models highlights the potential for successfully deploying models on local resources with limited computational power (e.g., consumer-grade GPUs like the RTX 2080 Ti) while still achieving accurate results. Overall, the model performance was found to be relatively quick on both the 500-question sets (Qualitative/Quantitative) and the 1000-question combined set (Full). One notable outlier was the Llama2-7b model with domain prompting, which took 185 minutes to complete the inference on the full dataset; however, its accuracy was similar to the minimally prompted version. This model is considered to be an outlier and therefor not included in Figure 4. A full list of the data used to plot these summary figures can be found in the Appendix. Figure 4: Comparison of model performance using accuracy and execution time as key metrics. The study evaluates various open-source models available on the HuggingFace including Gemma-7b, Falcon-7b, MPT-7b, Llama2-7b, and Mistral-7b, phi2 and olmo1b. Different combinations of conditions, such as model type (Vicuna, LLaMa, MPT), prompting strategy (minimal or domain-specific), GPU hardware (A100, V100, or consumer-grade), and benchmark size (small or large) were used to assess the model\u2019s capabilities. The most interesting outcome is that smaller models deployed on consumer grade hardware do not perform drastically worse than their larger parameter model counterparts. Looking at the performance of the Phi2 model (2.7B parameters), it quickly and accurately tackles the 500 question quantitative benchmark with similar performance regardless of the GPU used with the A100 80GB version unsurprisingly as the fastest. Another interesting outcome is the performance of the OLMo-1b parameter model on the combined question set and the RTX 2080 Ti GPU. While unable to obtain any correct answers for the minimal prompt, it jumps up to a surprising 52.2% accuracy when provided a domain prompt. These results are promising that these smaller models can be deployed locally by users and still be able to interpret questions, possibly by providing more specialized prompts. In general, inference time increased as question set size increased (e.g., from quantitative/qualitative to full) , while accuracy tended to decrease with longer inference times. Domain prompts achieved faster inference and accuracy than minimal prompts for models like Falcon-7b, MPT-7b, and Mistral-7b. However, there was an exception in the case of the Phi2 model on the full question set, where the minimal prompt resulted in faster inference but lower accuracy. The hardware performance analysis highlights the importance of considering the interplay between model size, GPU capabilities, and prompting strategies when deploying CACTUS models for molecular property prediction and drug discovery. The ability to achieve accurate results with smaller models on consumer-grade hardware opens up the possibility of wider adoption and accessibility of CACTUS for researchers with limited computational resources. 
Furthermore, the impact of domain-specific prompting on both inference speed and accuracy emphasizes the need for carefully designed prompts tailored to the specific application domain. As CACTUS continues to evolve and integrate with other computational tools and autonomous discovery platforms, optimizing hardware performance will remain a critical consideration. Future research should explore the development of more efficient algorithms and architectures (energy efficiency) for deploying CACTUS models on a variety of hardware configurations, ensuring that the benefits of this powerful tool can be realized across a wide range of research settings and computational resources. 3.3 Issues Encountered and Resolutions During the development and benchmarking of CACTUS agent using open-source models and the LangChain framework, several key challenges were identified. These issues, along with the solutions implemented, provide valuable insights for researchers and developers working on similar workflows. 8 CACTUS A PREPRINT One of the primary issues encountered was the slow inference speed when hosting open-source language models locally on machines utilizing CPUs. Most APIs quickly provide inference results when making calls and this is not something locally hosted models typically replicate well, especially when running on CPUs over GPUs. For this work, we initially used models from HuggingFace and deployed through the HuggingFace Pipelines python package. This allowed us to serve models, but the inference time was quite slow when wrapped in the LangChain agent. To address this, we began utilizing vLLM to host HuggingFace models instead. This substantially decreased our inference time, and allowed for API-like response times from models, even those hosted on less powerful consumer grade GPU hardware. The second major challenge was related to prompt engineering. Our results shown previously highlight that for some models the prompt has a great effect on not only the model accuracy, but the inference time. We spent a good amount of time trying to hone our prompting strategy to yield consistently accurate and efficient results with mixed effect. We ended up needing specialized prompts for each open-source LLM we used, as some were fine-tuned much differently than others and required a very specific prompt style to return usable results. These challenges highlight the need for continued research and development in the areas of model deployment and prompt engineering. Future work will be focused on optimizing the deployment of open-source models on various hardware configurations, including CPUs and GPUs, to ensure that CACTUS can be efficiently utilized across a wide range of computational resources. This may involve the development of novel algorithms and architectures that can better leverage the capabilities of different hardware setups, as well as the creation of more user-friendly tools and frameworks for model deployment and management. In terms of prompt engineering, the development of standardized prompt templates and best practices for prompt engineering in the context of molecular property prediction and drug discovery could help streamline the development process and improve the consistency of results across different models and datasets. 
3.4 Future Outlook Molecular Design CACTUS has already demonstrated its potential in estimating basic metrics for input chemical compounds, but its future lies in its evolution into a comprehensive, open-source tool specifically designed for chemists and researchers working on therapeutic drug design and discovery. This will be achieved by the integration of physics-based molecular AI/ML models, such as 3D-scaffold [Joshi et al., 2021], reinforcement learning [McNaughton et al., 2022], and graph neural networks (GNNs) [Knutson et al., 2022] accompanied with molecular dynamics simulations, quantum chemistry calculations, and high-throughput virtual screening [Joshi et al., 2021, Knutson et al., 2022, Joshi et al., 2023, Varikoti et al., 2023, Joshi and Kumar, 2021]. Such capabilities are essential for accurately modeling molecular interactions and predicting the efficacy and safety of potential therapeutic agents [Jiang et al., 2021]. The development plan also includes implementing advanced functionalities for identifying compounds that exhibit structural and chemical similarities, as well as pinpointing key fragments crucial for biological activity. This feature will allow researchers to explore a vast chemical space more efficiently, identifying lead compounds with higher precision. These additions are expected to significantly accelerate and deepen the agent\u2019s ability to understand compound behaviors in 3D spaces and allow researchers to develop more comprehensive and effective workflows for drug discovery and materials design. Additionally, we plan to include tools that identify key fragments and compounds with similar structural and chemical features from the vast available chemical databases. Tools which can calculate physio-chemical, pharmacokinetic properties, and about sixty other descriptors will be added to the agent to identify quantitative structureactivity relationship (QSAR) and quantitative structure-property relationship (QSPR) to help us with screening the compounds and identifying toxic groups. Beyond these technical enhancements, there\u2019s a focus on making CACTUS more explainable and capable of symbolic reasoning. The aim is to address common criticisms of LLMs, particularly their struggle with reasoning and providing explainable outputs. By integrating more advanced symbolic reasoning capabilities, CACTUS will not only become more powerful in its predictive and analytical functions but also provide users with understandable, logical explanations for its recommendations and predictions. This feature would automate the process of predicting how small molecules, such as drug candidates, interact with targets like proteins, thereby providing invaluable insights into the potential efficacy of new compounds. The applications of CACTUS extend beyond drug discovery and can be leveraged in other domains such as chemistry, catalysis, and materials science. In the field of catalysis, CACTUS could aid in the discovery and optimization of novel catalysts by predicting their properties and performance based on their structural and chemical features [Goldsmith et al., 2018]. Similarly, in materials science, CACTUS could assist in the design of new materials with desired properties by exploring the vast chemical space and identifying promising candidates for further experimental validation [Agrawal and Choudhary, 2016]. 
9 CACTUS A PREPRINT The future development of CACTUS is geared towards creating an intelligent, comprehensive cheminformatics tool for molecular discovery that not only aids in the identification and design of therapeutic drugs but also ensures a high degree of safety and efficacy. Through the integration of advanced computational techniques and models, alongside improvements in usability and explainability, CACTUS is set to become an indispensable resource in the quest for novel, effective, and safe therapeutic agents, as well as in the discovery and optimization of catalysts and materials. 4 Conclusions In this paper, we have introduced CACTUS, an innovative open-source agent that leverages the power of large language models and cheminformatics tools to revolutionize the field of drug discovery and molecular property prediction. By integrating a wide range of computational tools and models, CACTUS provides a comprehensive and user-friendly platform for researchers and chemists to explore the vast chemical space for molecular discovery and identify promising compounds for therapeutic applications. We assessed CACTUS performance using various open-source LLMs, including Gemma-7b, Falcon-7b, MPT-7b, Llama2-7b, and Mistral-7b, across a set of one thousand chemistry questions. Our findings indicate that CACTUS outperforms baseline LLMs significantly, with the Gemma-7b and Mistral-7b models achieving the highest accuracy regardless of the prompting strategy employed. Additionally, we investigated the impact of domain-specific prompting and hardware configurations on model performance, highlight the importance of prompt engineering and the potential for deploying smaller models on consumer-grade hardware without significant loss in accuracy. The ability to achieve accurate results with smaller models such Phi on consumer-grade hardware opens up the possibility of wider adoption and accessibility of CACTUS, even for researchers with limited computational resources. One of the key takeaways from the development and benchmarking of CACTUS is the importance of addressing the challenges associated with model deployment and prompt engineering. The solutions implemented in this work, such as the use of vLLM for hosting models and the development of tailored prompts for each open-source LLM, serve as a valuable foundation for future efforts in this field. As the field of AI continues to evolve rapidly, it is essential to keep abreast of new developments in language modeling and related technologies to further enhance the capabilities and performance of CACTUS. The development and benchmarking of CACTUS also highlighted key challenges in integrating open-source LLMs with domain-specific tools, such as optimizing inference speed and developing effective prompting strategies. We discussed the solutions implemented to address these challenges, including the use of vLLM for model hosting and the creation of tailored prompts for each LLM. Looking ahead, the future of CACTUS is incredibly promising, with the potential to transform not only drug discovery but also various other domains such as chemistry, catalysis, and materials science. The integration of advanced physicsbased AI/ML models, such as 3D-scaffold, reinforcement learning and graph neural networks, will enable a deeper understanding of compound behaviors in 3D spaces, leading to more accurate predictions of molecular interactions and the efficacy and safety of potential therapeutic agents. 
Moreover, the addition of tools for identifying key fragments, calculating molecular properties, and screening compounds for toxic groups will significantly enhance the efficiency and precision of the drug discovery process. The focus on improving the explainability and symbolic reasoning capabilities of CACTUS will address common criticisms of large language models and provide users with understandable, logical explanations for the tool\u2019s recommendations and predictions. As CACTUS continues to evolve and integrate with other computational tools and autonomous discovery platforms, it has the potential to revolutionize the way we approach drug discovery, catalyst design, and materials science. By leveraging the power of AI and machine learning, CACTUS can help researchers navigate the vast parameter spaces associated with complex chemical systems, identifying promising candidates for experimental validation and optimization. The future development of CACTUS is geared towards creating an intelligent, comprehensive cheminformatics tool that ensures a high degree of safety and efficacy in the identification and design of therapeutic drugs, catalysts, and materials for various application. Through the integration of advanced computational techniques and models, alongside improvements in usability and explainability, CACTUS is set to become an indispensable resource for researchers across various scientific disciplines. In summary, CACTUS represents a significant milestone in the field of cheminformatics, offering a powerful and adaptable tool for researchers engaged in drug discovery, molecular property prediction, and beyond. As we continue to advance AI-driven scientific discovery, agent like CACTUS will play a pivotal role in shaping the future of research, innovation, and human health. By embracing the potential of open-source language models and cheminformatics tools, we can accelerate the pace of scientific advancement and unlock new frontiers in the quest for novel, effective, and safe therapeutic agents, catalysts, and materials. 10 CACTUS A PREPRINT Code and Data Availability The code to run CACTUS and the associated benchmark data can be found on GitHub: https://github.com/pnnl/cactus. Acknowledgments This research was supported by the I3T Investment, under the Laboratory Directed Research and Development (LDRD) Program at Pacific Northwest National Laboratory (PNNL). The computational work was performed using PNNL\u2019s research computing at Pacific Northwest National Laboratory. The initial concept of integrating LLM and tools received support from the Exascale Computing Project (17-SC-20-SC), a collaborative effort of two U.S. Department of Energy organizations (Office of Science and the National Nuclear Security Administration) responsible for the planning and preparation of a capable exascale ecosystem, including software, applications, hardware, advanced system engineering, and early testbed platforms, in support of the nation\u2019s exascale computing imperative. PNNL is a multi-program national laboratory operated for the U.S. Department of Energy (DOE) by Battelle Memorial Institute under Contract No. DE-AC05-76RL01830. Conflict of Interest The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."
20
+ }
intro_8K/test_introduction_short_2405.00977v1.json ADDED
@@ -0,0 +1,17 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.00977v1",
3
+ "title": "Distillation for Multilingual Information Retrieval",
4
+ "abstract": "Recent work in cross-language information retrieval (CLIR), where queries and\ndocuments are in different languages, has shown the benefit of the\nTranslate-Distill framework that trains a cross-language neural dual-encoder\nmodel using translation and distillation. However, Translate-Distill only\nsupports a single document language. Multilingual information retrieval (MLIR),\nwhich ranks a multilingual document collection, is harder to train than CLIR\nbecause the model must assign comparable relevance scores to documents in\ndifferent languages. This work extends Translate-Distill and propose\nMultilingual Translate-Distill (MTD) for MLIR. We show that ColBERT-X models\ntrained with MTD outperform their counterparts trained ith Multilingual\nTranslate-Train, which is the previous state-of-the-art training approach, by\n5% to 25% in nDCG@20 and 15% to 45% in MAP. We also show that the model is\nrobust to the way languages are mixed in training batches. Our implementation\nis available on GitHub.",
5
+ "authors": "Eugene Yang, Dawn Lawrie, James Mayfield",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "cs.IR",
9
+ "cats": [
10
+ "cs.IR",
11
+ "cs.CL"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "Distillation",
15
+ "gt": "We define Multilingual Information Retrieval (MLIR) as search over a multilingual collection of monolingual documents to produce a single ranked list [29, 42, 44, 49, 50]. The retrieval system must retrieve and rank documents based only on query relevance, inde- pendent of document language. This is challenging in part because cross-language systems may be unable to exploit surface forms. Our evaluation uses CLEF data [5] with English queries and French, German, Spanish, and English documents; CLEF data [2\u20135] with English queries and French, German, and Italian documents; and Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA \u00a9 2024 Copyright held by the owner/author(s). ACM ISBN 979-8-4007-0431-4/24/07. https://doi.org/10.1145/3626772.3657955 TREC NeuCLIR data [26, 27] with English queries and Chinese, Persian, and Russian documents. Dual-encoder retrieval models such as ColBERT [23] that matches token embeddings, and DPR [22] that matches query and document embeddings, have shown good results in both monolingual [46] and cross-language [32, 37, 53, 56] retrieval. These approaches use pre-trained language models like multilingual BERT [10] and XLM- RoBERTa [6] as text encoders to place queries and documents into a joint semantic space; this allows embedding distances to be calcu- lated across languages. Multilingual encoders are generally trained monolingually on multiple languages [7, 10], which leads to lim- ited cross-language ability. Therefore, careful fine-tuning, such as Translate-Train [37], C3 Pretraining [54] and Native-Train [38], are essential to be able to match across languages [32, 48, 53]. Generalizing from one to multiple document languages is not trivial. Prior work showed that Multilingual Translate-Train (MTT) [29] of ColBERT-X using training data translated into all document languages is more effective than BM25 search over documents translated into the query language. Searching translated documents with the English ColBERT model is even more effective than MTT, but incurs a high translation cost at indexing time compared to MTT\u2019s amortized cost of translating the training corpus. This work aims to develop MLIR training that produces more effective models than its monolingual English counterparts. Knowledge distillation has shown success monolingually [12, 43, 46], so we adapt this concept to train MLIR models. In Translate- Distill [53], a way to train CLIR ColBERT-X models, a teacher model scores monolingual training data using text in whichever language produces its best results. Then when training the student ColBERT- X model, training data is translated into the languages that match the final CLIR task. That work showed that the student model is on par with or more effective than a retrieve-and-rerank system that uses that same teacher model as a reranker. We propose Multilingual Translate-Distill (MTD), a multilingual generalization of Translate- Distill. Instead of training with a single document language, we translate training passages into all document languages. This opens a design space of how to mix languages in training batches. 
This paper contributes (1) an effective training approach for an MLIR dual-encoder that combines translation and distillation; (2) models trained with MTD that are more effective than the previously reported state-of-the-art MLIR model, ColBERT-X trained with MTT; and (3) a robustness analysis of mini-batch passage mixing strategies. Models and implementation are available on Huggingface Models (https://huggingface.co/collections/hltcoe/multilingual-translate-distill-66280df75c34dbbc1708a22f) and GitHub (https://github.com/hltcoe/colbert-x). [Figure 1: Three language mixing strategies for Multilingual Translate-Distill: (a) Mix Passages, (b) Mix Entries, (c) Round Robin Entries. Each row indicates an entry with a query and a list of sampled passages in the training mini-batch. Circles, diamonds, and squares represent different document languages.]",
16
+ "main_content": "An IR problem can be \u201cmultilingual\u201d in several ways. For example, Hull and Grefenstette [18] described a multilingual IR problem of monolingual retrieval in multiple languages, as in Blloshmi et al. [1], or alternatively, multiple CLIR tasks in several languages [3\u2013 5, 28, 36]. We adopt the Cross-Language Evaluation Forum (CLEF)\u2019s notion of MLIR: using a query to construct one ranked list across documents in several languages [41]. We acknowledge that this definition excludes mixed-language or code-switched queries and documents, other cases to which \u201cmultilingual\u201d has been applied. Prior to neural retrieval, MLIR systems generally relied on crosslanguage dictionaries or machine translation models [9, 24, 35]. Translating documents into the query language casts MLIR as monolingual in that language [14, 33, 44]. While translating queries into each document language is almost always computationally more economical than translating the documents, it casts the MLIR problem as multiple monolingual problems whose results must be merged to form the final MLIR ranked list [42, 49, 50]. Moreover, quality differences between translation models could bias results by systematically ranking documents in some languages higher [17, 29]. Recent work in representation learning for IR [12, 13, 45] and fast dense vector search algorithms [19, 21, 34] spawned a new class of models called dual-encoders. These models encode queries and documents simultaneously into one or more dense vectors representing tokens, spans, or entire sequences [22, 23, 30, 31]. While replacing the underlying language model with a multilingual one, such as multilingual BERT [10] and XLM-RoBERTa [7], produces systems that accept queries and documents in multiple languages, zero-shot transfer of a model trained only monolingually to a CLIR or MLIR problem is suboptimal; it leads to systems even less effective than BM25 over document translations [29, 37]. Therefore, designing an effective fine-tuning process for transforming multilingual language models into multilingual IR models is critical. Various retrieval fine-tuning approaches have been explored, such as contrastive learning [22, 23, 46], hard-negative mining [12, 16], and knowledge distillation [12, 43, 46]. Knowledge distillation has demonstrated more effective results in both monolingual and cross-language IR [32, 53] than the others. The recently proposed Translate-Distill approach decoupled the input languages of the teacher and student models. This allowed large English rerankers to train ColBERT-X for CLIR, leading to state-of-the-art CLIR effectiveness measured on the NeuCLIR 22 benchmark [26]. Recent work by Huang et al. [17] proposes a language-aware decomposition for prompting (or augmenting) the document encoder. In this work, we explore the simple idea of relying on translations of MS MARCO and distilling the ranking knowledge from a large MonoT5 model with mT5XXL underneath [20, 40, 52]. 3 MULTILINGUAL TRANSLATE-DISTILL Our proposed Multilingual Transalte-Distill (MTD) training approach requires a monolingual training corpus consisting of queries and passages; no relevance labels are required. 3.1 Knowledge Distillation To train a student dual-encoder model for MLIR, we first use two teacher models: a query-passage selector and a query-passage scorer. Following Yang et al. [53], the query-passage selector retrieves \ud835\udc58passages for each query. 
This can be replaced by any hardnegative mining approach [16, 43] or by adapting publicly available mined passages.3 The query-passage scorer then scores each querypassage pair with high accuracy. The scorer is essentially a reranker from which we would like to distill ranking knowledge implicit in an expensive model such as MonoT5 [40] that is generally too slow to apply by itself. The final product from the two teachers is a set of tuples, each containing a query, a passage, and the associated teacher score. We use these data to train the student dual-encoder model. Specifically, for each training mini-batch of size \ud835\udc5b, we select \ud835\udc5btraining queries and sample \ud835\udc5aretrieved passage IDs. To teach the student model to rank documents across languages, we translate each passage into all of the target languages. When constructing the mini-batch, we determine the language for each passage ID, which we discuss in more detail in the next section. Finally, the loss function is the KL divergence between the teacher and student scores on the query and the translated passages. 3.2 Language Mixing Strategies To train an effective ColBERT-X model for MLIR, each training batch must include documents in more than one language [29]. Training with MTD opens a design space for selecting languages for the mini-batch passages. We experiment with three mixing strategies (see Figure 1): Mix Passages. In each training batch entry, all passages are randomly assigned to one of the document languages. In this case, each language is equally likely to be present during training. Each language also has an equal probability of being assigned to any 3For example, https://huggingface.co/datasets/sentence-transformers/msmarco-hardnegatives. Distillation for Multilingual Information Retrieval SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Table 1: Collection Statistics CLEF NeuCLIR Subset[17] 2003 2022 2023 Languages de, fr, it de, fr, es, en zh, fa, ru # of Docs 0.24M 1.05M 10.04M # of Passages 1.90M 6.96M 58.88M # of Topics 113 60 41 65 Avg. Rel/Topic 40.73 102.42 125.46 67.77 passage in such a way that language representation is balanced, thus a language is just as likely to be assigned to a passage with a high score as a low score. This mixing method directly trains the student model to rank passages in different languages. Mix Entries. Alternatively, we can assign the same randomly selected language to all passages associated with a query. This method ensures the translation quality does not become a possible feature that the student model could rely on if there is a language with which the machine translation model struggles. While not directly learning MLIR, this model jointly learns multiple CLIR tasks with distillation and eventually learns the MLIR task. Round Robin Entries. To ensure the model equally learns the ranking problem for all languages, we experiment with training query repetition to present passages from all languages. In this case, the model learns the CLIR tasks using the same set of queries instead of a random subset when mixing entries. However, this reduces the number of queries per mini-batch given some fixed GPU memory size. Given this memory constraint, round robin may not be feasible if the number of document languages exceeds the number of entries the GPU can hold at once. 4 EXPERIMENTS We evaluate our proposed model on four MLIR evaluation collections: a subset of CLEF00-03 curated by Huang et al. 
[17]4; CLEF03 with German, French, Spanish, and English [5]; and NeuCLIR 2022 [26] and 2023 [27]. Collection statistics are summarized in Table 1. Queries are English titles concatenated with descriptions. We use MS MARCO [39] to train the MLIR ColBERT-X models with MTD, for which we adopt the PLAID-X implementation released by Yang et al. [53].5 We use the English ColBERTv2 model released by Santhanam et al. [46] that was also trained with knowledge distillation6 and MonoT5 with mT5XXL released by Jeronymo et al. [20]7 as query-passage selector and scorer, respectively. Both the selector and the scorer received English MS MARCO queries and passages to generate training teacher scores. To support MTD training, we translated the MS MARCO passages with Sockeye v2 [11, 15] into the document languages. Student ColBERT-X models are fine-tuned from the XLM-RoBERTa large models [7] using 8 NVidia V100 GPUs (32GB memory) for 200,000 gradient steps with a mini-batch size of 8 entries each associated 4The collection is reconstructed by using the author-provided document IDs, which excludes a large portion of unjudged documents. Documents added in subsequent years are also excluded. Thus some judged relevant documents are also excluded. 5https://github.com/hltcoe/ColBERT-X 6https://huggingface.co/colbert-ir/colbertv2.0 7https://huggingface.co/unicamp-dl/mt5-13b-mmarco-100k with 6 passages on each GPU. We use AdamW optimizer with a 5 \u00d7 10\u22126 learning rate and half-precision floating points. Documents are split into 180 token passages with a stride of 90 before indexing. The number of resulting passages is reported in Table 1. We index the collection with PLAID-X using one residual bit. At search time, PLAID-X retrieves passages, and document scores are aggregated using MaxP [8]. For each query, we return the top 1000 documents for evaluation. To demonstrate MTD effectiveness, we report baseline ColBERT models that are trained differently: English ColBERT [46], ColBERTX with Multilingual Translate-Train (MTT) [29], and ColBERT-X with English Distillation (ED). Since English ColBERT does not accept text in other languages, we index the collection with documents machine-translated into English (marked \u201cDT\u201d in Table 2). ColBERTX models trained with MTT use the training triples released by MS MARCO with hyperparameters similar to the MTD ones except for the number of queries per batch per GPU is increased to 32. Finally, the English Distillation models are only exposed to English queries and passages during fine-tuning instead of the translated text. It performs a zero-shot language transfer at indexing and search time. We also compare our models to the recently published KDSPD [17], which is a language-aware MLIR model that encodes the entire text sequence as a single vector. To provide a broader context, we report sparse retrieval baselines PSQ-HMM [9, 51, 55] and BM25 with translated documents, which are two strong MLIR baselines reported in NeuCLIR 2023 [27]. We report nDCG@20, MAP, and Recall at 1000 for the CLEF03 and NeuCLIR collections. To enable comparison to Huang et al. [17], we report nDCG@10, MAP@100, and Recall@100 on the CLEF00-03 subset. To test statistical superiority between two systems, we use a one-sided paired t-test with 95% confidence on the per-topic metric values. 
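Section 3.1 above states that the student is trained with the KL divergence between teacher and student scores over the query and its translated passages. A minimal sketch of that objective, under assumed tensor shapes and without the PLAID-X specifics, is:

```python
# Sketch of an MTD-style distillation step (assumed shapes; not the exact
# PLAID-X/ColBERT-X training loop).
import torch
import torch.nn.functional as F

def mtd_distillation_loss(student_scores: torch.Tensor,
                          teacher_scores: torch.Tensor) -> torch.Tensor:
    """KL divergence between teacher and student score distributions.

    Both tensors have shape [n_queries, m_passages]; the teacher scored the
    original English query-passage pairs, while the student scores the same
    passage IDs after translation into the document languages.
    """
    student_log_probs = F.log_softmax(student_scores, dim=-1)
    teacher_probs = F.softmax(teacher_scores, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")

# Example with random scores for a mini-batch of 8 queries and 6 passages each.
student = torch.randn(8, 6, requires_grad=True)
teacher = torch.randn(8, 6)
loss = mtd_distillation_loss(student, teacher)
loss.backward()
print(loss.item())
```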
When testing for statistical \u201cequivalence\u201d where the null hypothesis is that the effectiveness of the two systems differ, we use a paired Two One-sided T-Tests (TOST) [25, 47] with a threshold of 0.05 and 95% confidence. 5 RESULTS Table 2 summarizes our experiments. ColBERT-X models trained with MTD are more effective than those with MTT across all four evaluation collections, demonstrating a 5% (CLEF03 0.643 to 0.675 with mix passages) to 26% (NeuCLIR22 0.375 to 0.474 with round robin entries) improvement in nDCG@20 and 15% (CLEF03 0.451 to 0.520 with mix passages) to 47% (NeuCLIR22 0.236 to 0.347 with mix entries) in MAP. MTD-trained ColBERT-X models over documents in their native form are significantly more effective than translating all documents into English and searching with English ColBERT. Since the languages in the two CLEF collections are closer to English than those in NeuCLIR, the ColBERT-X model trained with English texts (Row 5) still provides reasonable effectiveness using (partial) zero-shot language transfer during inference. MTD yields identical effectiveness to ED based on the TOST equivalence test in the two CLEF collections by measuring MAP (Table 2). In contrast, NeuCLIR languages do not benefit from this phenomenon. Instead, training directly with text in document languages enhances both the general language modeling and retrieval ability of the student models. In NeuCLIR 2022 and 2023, student ColBERT-X models SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Eugene Yang, Dawn Lawrie, and James Mayfield Table 2: MLIR system effectiveness. Numbers in superscripts indicate the system of the row is statistically better than the systems in the superscript with 95% confidence by conducting a one-sided paired t-test. Numbers in subscripts indicate the system of the row is statistically identical within 0.05 in value to the systems in the subscripts with 95% confidence by conducting paired TOSTs. Bonferroni corrections are applied to both sets of statistical tests. 
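For the statistical testing just described, a hedged sketch of the one-sided paired t-test and a hand-rolled paired TOST (two one-sided tests against a 0.05 equivalence margin) is shown below; the per-topic scores are synthetic placeholders, not values from Table 2.

```python
# Sketch of the significance tests described above: a one-sided paired t-test
# for superiority and a paired TOST for equivalence (margin 0.05).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sys_a = rng.uniform(0.2, 0.8, size=50)            # per-topic nDCG@20 of system A
sys_b = sys_a + rng.normal(0.02, 0.05, size=50)   # per-topic nDCG@20 of system B

# One-sided paired t-test: is B statistically better than A?
t_sup, p_sup = stats.ttest_rel(sys_b, sys_a, alternative="greater")

# Paired TOST: are A and B equivalent within +/- 0.05 in value?
margin = 0.05
diff = sys_b - sys_a
_, p_lower = stats.ttest_1samp(diff, -margin, alternative="greater")
_, p_upper = stats.ttest_1samp(diff, margin, alternative="less")
p_tost = max(p_lower, p_upper)  # declare equivalence if p_tost < 0.05

print(f"superiority p={p_sup:.4f}, equivalence p={p_tost:.4f}")
```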
CLEF00-03 Subset [17] CLEF 2003 NeuCLIR 2022 MLIR NeuCLIR 2023 MLIR Measure nDCG MAP Recall nDCG MAP Recall nDCG MAP Recall nDCG MAP Recall Rank Cutoff 10 100 100 20 1000 1000 20 1000 1000 20 1000 1000 Baselines (0) KD-SPD[17] 0.416 0.220 0.469 \u2013 \u2013 \u2013 \u2013 \u2013 \u2013 \u2013 \u2013 \u2013 (1) PSQ-HMM 0.5290 0.3390 0.6170 0.445 0.282 0.711 0.315 0.193 0.594 0.289 0.225 0.693 (2) DT \u00bb BM25 0.5680 0.38801 0.66201 0.6361 0.4531 0.8571 0.338 0.215 0.633 0.316 0.275 0.756 (3) DT \u00bb ColBERT 0.60901 4 0.42201 4 0.70001 4 0.6691 0.4971 0.88914 0.4031 0.28512 0.708124 0.3611 0.2981 4 0.7861 (4) ColBERT-X MTT 0.61301 3 0.41101 3 0.68701 3 0.6431 0.4511 0.8271 0.375 0.236 0.612 0.330 0.2811 3 0.760 (5) ColBERT-X ED 0.638012 8 0.45701234 678 0.73201234 678 0.69914 8 0.530124 678 0.920124 78 0.393 0.263 0.68714 0.3571 0.3171 0.827124 ColBERT-X MTD with Different Mixing Strategies (6) Mix Passages 0.66601234 78 0.47101234 578 0.74701234 578 0.6751 0.52014 57 0.90114 7 0.44412 0.3401245 78 0.7621245 78 0.4041245 78 0.36712345 78 0.86812345 78 (7) Mix Entries 0.674012345 68 0.46901234 568 0.74501234 568 0.6861 0.52214 56 0.911124 568 0.4611245 0.34712345 68 0.76812345 68 0.397124 68 0.37212345 68 0.87712345 678 (8) Round Robin Entries 0.65601234 567 0.47601234 567 0.751012345 567 0.69912 5 0.5351234 5 0.9221234 57 0.47412345 0.34112345 67 0.7611245 67 0.388124 67 0.34712345 67 0.8561234 67 Table 3: nDCG@20 on training with more languages Training Languages Evaluation Collection CLEF03 NeuCLIR Both Mix Passages CLEF 2003 0.675 0.688 0.694 NeuCLIR 2022 MLIR 0.437 0.444 0.431 NeuCLIR 2023 MLIR 0.377 0.404 0.406 Mix Entries CLEF 2003 0.686 0.679 0.680 NeuCLIR 2022 MLIR 0.424 0.461 0.445 NeuCLIR 2023 MLIR 0.359 0.397 0.379 trained with MTD (Rows 6 to 8) are 9% (NeuCLIR23 0.317 to 0.347 with round robin entries) to 32% (NeuCLIR22 0.263 to 0.347 with mix entries) more effective than ED (Row 5) by measuring MAP. 5.1 Ablation on Language Mixing Strategies Since the TOST equivalence tests show that the three mixing strategies demonstrate statistically similar MAP and Recall for all collections except for a few cases in CLEF 2003 (CLEF 2003 may be an outlier because it has English documents, a known source of bias in MLIR [29]). We conclude that MTD is robust to how languages are mixed during training as long as multiple languages are present in each training mini-batch [29]. Such robustness provides operational flexibility to practitioners creating MLIR models. Since passage translation might not be available for all languages, mixing passages allows selecting passages only from a subset of languages. Mixing entries also allows training entries to be filtered for specific languages if relevance is known to drop after translation. When evaluating with nDCG@20, the differences are larger but less consistent. For the two CLEF collections and NeuCLIR 2022, topics were developed for a single language before obtaining relevance judgments across all languages. These topics may not be well-attested in all document languages, resulting in some CLIR topics with few relevant documents. For these three collections, models trained with mixed CLIR tasks (mix and round-robin entries) are more effective at the top of the ranking. High variation among topics leads to inconclusive statistical significance results, suggesting opportunities for result fusion. 
NeuCLIR 2023 topics were developed bilingually, so topics are not socially or culturally tied to a single language; this leads to statistically equivalent nDCG@20 results. 5.2 Training Language Ablation Finally, we explore training with languages beyond the ones in the document collection. Table 3 shows MTD-trained models for CLEF 2003, NeuCLIR, and both on each collection. Due to GPU memory constraints, we exclude the round-robin strategy from this ablation. We observe that models trained with the mix passages strategy are more robust than the mix-entries variants when training on CLEF and evaluating on NeuCLIR and vice versa. This shows smaller degradation when facing language mismatch between training and inference. Surprisingly, training on NeuCLIR languages with the mix passage strategy yields numerically higher nDCG@20 than training on CLEF (0.675 to 0.688). When training both CLEF and NeuCLIR languages, effectiveness is generally worse than only training on the evaluation languages. This trend suggests the models might be facing capability limits in the neural model, or picking up artifacts from the quality differences in the translation. This observation demands more experimentation on MLIR dual-encoder models, which we leave for future work. 6 CONCLUSION We propose Multilingual Translate-Distill (MTD) for training MLIR dual-encoder models. We demonstrated that ColBERT-X models trained with the proposed MTD are more effective than using previously proposed MLIR training techniques on four MLIR collections. By conducting statistical equivalence tests, we showed that MTD is robust to the mixing strategies of the languages in the training mini-batch. Distillation for Multilingual Information Retrieval SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA"
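To make the mini-batch language mixing of Section 3.2 concrete, here is a small illustrative sketch; the batch layout (one list of sampled passage IDs per query) and the language codes are assumptions, not the released PLAID-X code.

```python
# Illustrative sketch of the three language mixing strategies (Section 3.2).
# Each batch entry is assumed to be (query, [passage_id, ...]).
import random

LANGS = ["zho", "fas", "rus"]  # example document languages

def mix_passages(batch):
    """Each sampled passage independently gets a random document language."""
    return [(q, [(pid, random.choice(LANGS)) for pid in pids])
            for q, pids in batch]

def mix_entries(batch):
    """All passages for a given query share one randomly chosen language."""
    out = []
    for q, pids in batch:
        lang = random.choice(LANGS)
        out.append((q, [(pid, lang) for pid in pids]))
    return out

def round_robin_entries(batch):
    """Repeat each query once per language so every CLIR task is seen."""
    out = []
    for q, pids in batch:
        for lang in LANGS:
            out.append((q, [(pid, lang) for pid in pids]))
    return out

batch = [("query-1", [11, 12, 13]), ("query-2", [21, 22, 23])]
print(round_robin_entries(batch))
```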
17
+ }
intro_8K/test_introduction_short_2405.00978v1.json ADDED
@@ -0,0 +1,17 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.00978v1",
3
+ "title": "Language Fairness in Multilingual Information Retrieval",
4
+ "abstract": "Multilingual information retrieval (MLIR) considers the problem of ranking\ndocuments in several languages for a query expressed in a language that may\ndiffer from any of those languages. Recent work has observed that approaches\nsuch as combining ranked lists representing a single document language each or\nusing multilingual pretrained language models demonstrate a preference for one\nlanguage over others. This results in systematic unfair treatment of documents\nin different languages. This work proposes a language fairness metric to\nevaluate whether documents across different languages are fairly ranked through\nstatistical equivalence testing using the Kruskal-Wallis test. In contrast to\nmost prior work in group fairness, we do not consider any language to be an\nunprotected group. Thus our proposed measure, PEER (Probability of\nEqualExpected Rank), is the first fairness metric specifically designed to\ncapture the language fairness of MLIR systems. We demonstrate the behavior of\nPEER on artificial ranked lists. We also evaluate real MLIR systems on two\npublicly available benchmarks and show that the PEER scores align with prior\nanalytical findings on MLIR fairness. Our implementation is compatible with\nir-measures and is available at http://github.com/hltcoe/peer_measure.",
5
+ "authors": "Eugene Yang, Thomas J\u00e4nich, James Mayfield, Dawn Lawrie",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "cs.IR",
9
+ "cats": [
10
+ "cs.IR",
11
+ "cs.CL"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "LLM Fairness",
15
+ "gt": "Multilingual information retrieval searches a multilingual docu- ment collection and creates a unified ranked list for a given query [4\u2013 6, 16, 21, 25]. In tasks like navigational search [7], known item retrieval [1, 22], and retrieval for question-answering [10, 19], the user only needs a handful or just one relevant document to satisfy Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA \u00a9 2024 Copyright held by the owner/author(s). ACM ISBN 979-8-4007-0431-4/24/07. https://doi.org/10.1145/3626772.3657943 the information need, and the language of that document does not matter. In contrast, users interested in gaining a broad problem understanding prefer seeing how coverage varies across languages. Analysis of retrieval results has shown that MLIR systems often show a preference for certain languages [16, 21]; we call this the MLIR Fairness problem. Such preference can bias a user\u2019s under- standing of the topic [29, 31]. This problem is particularly apparent in models built on top of multilingual pretrained language mod- els (mPLM) [21], which inherit bias from the text used to build them [11, 17]. This paper presents a new metric to allow quantita- tive study of MLIR Fairness. Prior work in fairness evaluation focuses on either individual or group fairness [34]. Individual fairness ensures that similar documents receive similar treatment; this often corresponds to a Lipschitz condition [3, 15]. Group fairness ensures that a pro- tected group receives treatment at least as favorable as unprotected groups [28, 32, 33]. Group fairness metrics designed for protecting specific groups are not directly applicable to the MLIR fairness problem because the latter has no protected language; we want all languages to be treated equally in a ranked list. To operationalize our notion of MLIR fairness, we propose the Probability of Equal Expected Rank (PEER) metric. By adopting the Kruskal-Wallis \ud835\udc3btest, which is a rank-based, non-parametric variance analysis for multiple groups, we measure the probability that documents of a given relevance level for a query are expected to rank at the same position irrespective of language. We compare PEER to previous fairness metrics, and show its effectiveness on synthetic patterned data, on synthetic assignment of language to real retrieval ranked lists, and on system output for the CLEF 2003 and NeuCLIR 2022 MLIR benchmarks.",
16
+ "main_content": "There is no universally accepted definition of fairness. This paper views languages as groups within a ranking, and characterizes MLIR Fairness as a group fairness problem. Existing group fairness metrics fall into two categories: those that assess fairness independent of relevance, and those that take relevance into account. Ranked group fairness, based on statistical parity proposed by Zehlike et al. [32, 33], demands equitable representation of protected groups in ranking without explicitly considering relevance through statistical testing. Attention Weighted Ranked Fairness (AWRF), introduced by Sapiezynski et al. [28], compares group exposure at certain rank cutoffs against a pre-defined target distribution. It uses the same distribution for both relevant and nonrelevant documents. This means for example that if utility is defined as finding the most relevant documents, a system can gain utility by including more documents from the language with the most relevant documents early in the rankings. In doing so, more nonrelevant documents arXiv:2405.00978v1 [cs.IR] 2 May 2024 SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Eugene Yang, Thomas J\u00e4nich, James Mayfield, and Dawn Lawrie from that language are placed above relevant documents from the other languages. From a fairness perspective, this should be penalized as unfair. Our proposed metric does not rely on a target distribution, so it does not suffer from this utility/fairness tradeoff. Among metrics that incorporate relevance, Singh and Joachims [30] introduced the Disparate Treatment Ratio, which measures the equality of exposure of two groups. This metric is not well suited to MLIR though, since it handles only two groups. Adjacent to fairness, Clarke et al. [13] extended Normalized Discounted Cumulative Gain (nDCG) to incorporate diversity. Their metric, \ud835\udefc-nDCG, assigns document weights based on both relevance and diversity. Diversity though applies to user utility where fairness applies to documents (in our case, the languages of the returned documents) [8]. We nonetheless report \ud835\udefc-nDCG to contextualize our results. Related work on fairness over sequences of rankings [14, 24] requires both more evidence and distributional assumptions compared to fairness of a specific ranking. While similar, our method assumes the position of each document is a random variable. 3 PROBABILITY OF EQUAL EXPECTED RANK In this section, we describe the proposed measure \u2013 PEER: Probability of Equal Expected Rank. We first introduce our notation and the fairness principle, followed by forming the statistical hypothesis of the system\u2019s fairness across document languages. Finally, we define PEER as the \ud835\udc5d-value of the statistical test. Let \ud835\udc51\ud835\udc56\u2208D be the \ud835\udc56-th document in the collection D of size \ud835\udc41. We define the language that \ud835\udc51\ud835\udc56is written in as \ud835\udc59\ud835\udc51\ud835\udc56\u2208{L1, ...L\ud835\udc40}. For convenience, we define the set \ud835\udc3f\ud835\udc57= \b \ud835\udc51\ud835\udc56 \f \f \ud835\udc59\ud835\udc51\ud835\udc56= L\ud835\udc57 \t to be all documents in language L\ud835\udc57. 
For a given query \ud835\udc5e\u2208Q, we define the degree of document \ud835\udc51\ud835\udc56being relevant (or the relevance grade) to the query \ud835\udc5eas \ud835\udc66\ud835\udc5e \ud835\udc56\u2208 {R (0), R (1), ..., R (\ud835\udc3e)}, where \ud835\udc45(0) indicates not relevant and \ud835\udc45(\ud835\udc3e) is the most relevant level, i.e., graded-relevance with \ud835\udc3elevels. Similarly, we define the set \ud835\udc45(\ud835\udc5e,\ud835\udc58) = n \ud835\udc51\ud835\udc56 \f \f \f \ud835\udc66\ud835\udc5e \ud835\udc56= R (\ud835\udc58) o to be all documents at the R (\ud835\udc58) relevance level. Furthermore, we define the documents in L\ud835\udc57with relevance level R (\ud835\udc58) for a query \ud835\udc5eas \ud835\udc37(\ud835\udc5e,\ud835\udc58) \ud835\udc57 = \ud835\udc3f\ud835\udc57\u2229\ud835\udc45(\ud835\udc5e,\ud835\udc58). In this work, we consider a ranking function \ud835\udf0b: D\u00d7Q \u2192[1...\ud835\udc41] that produces the rank \ud835\udc5f\ud835\udc5e \ud835\udc56\u2208[1...\ud835\udc41]. 3.1 Fairness through Hypothesis Testing We define MLIR fairness using the following principle: Documents in different languages with the same relevance level, in expectation, should be presented at the same rank. We measure the satisfaction level of this principle by treating it as a testable hypothesis. For relevance level R (\ud835\udc58), assuming \ud835\udc5f\ud835\udc5e \ud835\udc56is a random variable over [1...\ud835\udc41], we implement the principle using the null hypothesis: \ud835\udc3b0 : E\ud835\udc51\ud835\udc56\u2208\ud835\udc37(\ud835\udc5e,\ud835\udc58) \ud835\udc4e [\ud835\udc5f\ud835\udc5e \ud835\udc56] = E\ud835\udc51\ud835\udc57\u2208\ud835\udc37(\ud835\udc5e,\ud835\udc58) \ud835\udc4f [\ud835\udc5f\ud835\udc5e \ud835\udc57] \u2200L\ud835\udc4e\u2260L\ud835\udc4f, (1) which is the equivalence of the expected rank among documents in each language with the given relevance level and given query \ud835\udc5e. Such null hypotheses can be tested with the Kruskal-Wallis \ud835\udc3b test (K-W test) [18]. The null hypothesis of this test is that all groups have the same mean (i.e., equivalent mean ranks). The K-W test is like a non-parametric version of the ANOVA F-test, which tests whether each group (languages in our case) comes from the same distribution. Since the K-W test does not assume any particular underlying distribution, it uses the ranking of the data points to make this determination. Unlike prior work such as Zehlike et al. [33] that assumes a binomial distribution for each document over the groups, not assuming the distribution of the query-document scores used for ranking and instead operating directly on ranks yields a robust statistical test. Conceptually, the test statistics \ud835\udc3bfor the K-W test is the ratio between the sum of group rank variance and the total rank variance. The variance ratio obeys a chi-squared distribution; we use its survival function to derive the \ud835\udc5d-value. 
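Before the formal statistic below, a minimal sketch of this per-query, per-relevance-level test using scipy's Kruskal-Wallis implementation may help; the toy ranks are invented, and degenerate groups are handled by a simple convention rather than as in the released peer_measure package.

```python
# Sketch: p-value of the Kruskal-Wallis H test over document ranks grouped by
# language, for one query and one relevance level. Toy data for illustration.
from scipy.stats import kruskal

def peer_pvalue(ranks_by_language):
    """ranks_by_language: dict lang -> list of ranks of that language's
    documents, all at the same relevance level for the same query."""
    groups = [r for r in ranks_by_language.values() if len(r) > 0]
    if len(groups) < 2:
        return 1.0  # nothing to compare across languages
    _, p_value = kruskal(*groups)
    return p_value

# Ranks at which each language's relevant documents were retrieved for one query.
ranks = {"deu": [2, 5, 9], "fra": [3, 6, 11], "spa": [25, 30, 41]}
print(peer_pvalue(ranks))  # small p-value: Spanish documents rank systematically lower
```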
Specifically, we can express the test statistic H as H = \frac{\left(|R^{(q,k)}| - 1\right) \sum_{j=1}^{M} |D_j^{(q,k)}| \left(\bar{r}_j^{(q,k)} - \bar{r}\right)^2}{\sum_{j=1}^{M} \sum_{d_i \in D_j^{(q,k)}} \left(r_i^q - \bar{r}\right)^2} (2) where \bar{r}_j^{(q,k)} = \frac{1}{|D_j^{(q,k)}|} \sum_{d_i \in D_j^{(q,k)}} r_i^q (3) and \bar{r} = \frac{1}{|R^{(q,k)}|} \sum_{j=1}^{M} \sum_{d_i \in D_j^{(q,k)}} r_i^q (4) for a given query q and relevance level R^{(k)}. Recall that R^{(q,k)} and D_j^{(q,k)} are sets. For each query q and given relevance level, we report the p-value of the K-W test, which is the Probability of documents in all languages with given relevance level having Equal Expected Rank, by comparing the H statistic against a chi-squared distribution with M-1 degrees of freedom. The p-value provides us with the probability that documents in different languages are ranked fairly within a given relevance level. We denote the p-value for a given query q and a relevance level R^{(k)} as p^{(q,k)}. Our fairness notion is similar to the one proposed by Diaz et al. [14]. However, we operationalize the principle by treating each document as a sample from a distribution given the language and relevance level, instead of assuming the entire ranked list is a sample from all possible document permutations. 3.2 Fairness at Each Relevance Level The impact of unfairly ranking documents in different languages may differ at each relevance level. Such differences can be linked to a specific user model or application. For example, for an analyst actively seeking information for which each language provides different aspects, ranking nonrelevant documents of a particular language at the top does not degrade fairness; finding disproportionately fewer relevant documents in a certain language, on the other hand, may yield biased analytical conclusions. In contrast, for a user seeking answers to a specific question who views the language as just the content carrier, reading more nonrelevant documents from a language may degrade that language\u2019s credibility, leading the user eventually to ignore all content in that language. In this case, we do consider the language fairness of the ranking of nonrelevant documents; in the former case we do not. To accommodate different user models, we define the PEER score as a linear combination of the p-value of each relevance level R^{(k)}. 
Language Fairness in Multilingual Information Retrieval SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA 0 10 20 30 40 50 # Interleaved 0.0 0.5 1.0 (a) Shifting 0 10 20 30 40 50 Rank 0.0 0.5 1.0 (b) Moving Single 0 10 20 30 40 50 Length 0.0 0.5 1.0 (c) Interleaving 0 10 20 30 40 50 Length 0.0 0.5 1.0 (d) Increasing Length 1.0 1.5 2.0 2.5 3.0 Sampling Mean for Relevant Docs 0.0 0.2 0.4 0.6 (e) Score Sampling w/ Non Relevant Docs=1.0 1.0 1.5 2.0 2.5 3.0 Sampling Mean Non-Relevant Docs 0.0 0.2 0.4 0.6 (f) Score Sampling w/ Relevant Docs=1.0 Non Relevant Relevant Figure 1: Ranked lists with different fairness patterns between two languages and binary relevance. Let \ud835\udc64(\ud835\udc58) \u2208[0, 1] be the weights and \u00cd\ud835\udc3e \ud835\udc58=1 \ud835\udc64(\ud835\udc58) = 1, the overall weighted PEER for query \ud835\udc5eis \ud835\udc43\ud835\udc38\ud835\udc38\ud835\udc45(\ud835\udc5e) = \ud835\udc3e \u2211\ufe01 \ud835\udc58=1 \ud835\udc64(\ud835\udc58)\ud835\udc5d(\ud835\udc5e,\ud835\udc58) (5) 3.3 Rank Cutoff and Aggregation While a ranking function \ud835\udf0branks each document in collection D, in practice, a user only examines results up to a certain cutoff. Some IR effectiveness measurements consider elaborate browsing models, such as exponential-decreasing attention in Ranked-biased precision (RBP) [23] or patience-based attention in expected reciprocal rank (ERR) [9]; user behavior though is perpendicular to language fairness, so we consider only a simple cutoff model. With a rank cutoff \ud835\udc4b, we treat only the top-\ud835\udc4bdocuments as the sampling universe for the K-W test. However, since disproportionately omitting documents of a certain relevance level is still considered unfair, before conducting the hypothesis test, we concatenate unretrieved documents (or those ranked below the cutoff) at that relevance level to the ranked list, assigning them a tied rank of \ud835\udc4b+1. This is optimistic, since these documents might rank lower in the actual ranked list. However, this provides a robust penalty for any ranking model that provides only a truncated ranked list. We define the \ud835\udc5d-value as 1.0 when no document is retrieved at a given relevance level in spite of their presence in the collection; from the user perspective, no document at that level is presented, so it is fair (albeit ineffective) across languages. We denote the weighted \ud835\udc5d-value calculated on the top-\ud835\udc4bdocuments as \ud835\udc43\ud835\udc38\ud835\udc38\ud835\udc45(\ud835\udc5e)@\ud835\udc4b. Overall, we report the average weighted PEER over all queries at rank \ud835\udc4b, i.e., \ud835\udc43\ud835\udc38\ud835\udc38\ud835\udc45@\ud835\udc4b= |Q|\u22121 \u00cd \ud835\udc5e\u2208Q \ud835\udc43\ud835\udc38\ud835\udc38\ud835\udc45(\ud835\udc5e)@\ud835\udc4b. Since we treat each document as a random variable of position in a ranked list, what we are measuring is how likely a system is fair between languages instead of how fair each ranked list is. A higher PEER score indicates that the measured MLIR system is more likely to place documents written in different languages but with the same relevance level at similar ranks. 4 EXPERIMENTS AND RESULTS 4.1 Synthetic Data To demonstrate PEER behavior, we create ranked lists of two languages and binary relevance with four methods, each creating lists from very unfair to very fair. Results are illustrated in Figure 1. 
Shifting starts with all documents in one language ranking higher than those in the other, and slowly interleaves them until the two are alternating. Figure 1(a) shows for fifty documents that when no documents are interleaved (left), fairness is low, with PEER close to 0. As alternation increases, the PEER score increases. In Moving Single, the ranked list consists entirely of one language except for one document. That single document moves from the top (unfair) to the middle of the ranking (fair). In Figure 1(b) with 99 majority language documents, the PEER scores increase as the singleton moves from the top to the middle. Figure 1(c) shows Interleaving, in which the languages alternate and the number of retrieved documents slowly increases. With odd lengths the highest and the lowest ranked documents are in the same language, giving them the same average and 1.0 PEER scores. With even lengths, one language has a slightly higher rank than the other. The difference shrinks with longer ranked lists, resulting in increased PEER scores. In Increasing Length, 100 retrieved documents comprise first an alternating section followed by all documents in a single language, and the size of the alternating section is gradually increased. This is similar to shifting, but with overlapping languages at the top instead of in the middle of the rank list. At the left of Figure 1(d) only the document at rank 1 is minority language, followed by minority language at ranks 1 and 3, and so on. The right of the graph is identical to the right of Figure 1(a). These four patterns demonstrate that PEER scores match our intuition of fairness between languages. The next section evaluates real MLIR retrieval systems on two MLIR evaluation collections. 4.2 Assigning Languages to a Real Ranked List We used NeuCLIR\u201922 runs to create new synthetic runs with relevant documents in the same positions, but with languages assigned to the relevant documents either fairly or unfairly. We randomly SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Eugene Yang, Thomas J\u00e4nich, James Mayfield, and Dawn Lawrie Table 1: Effectiveness and fairness results. Both AWRF and PEER exclude nonrelevant documents. They are removed in AWRF calculation. For PEER, the importance weights of nonrelevant documents are set to 0. Collection Rank Cutoff 20 1000 Measure nDCG \ud835\udefc-nDCG AWRF PEER Recall \ud835\udefc-nDCG AWRF PEER CLEF 2003 QT \u00bb BM25 0.473 0.444 0.513 0.239 0.743 0.579 0.788 0.202 DT \u00bb BM25 0.636 0.640 0.623 0.243 0.857 0.747 0.895 0.299 DT \u00bb ColBERT 0.669 0.674 0.658 0.293 0.889 0.768 0.904 0.328 ColBERT-X ET 0.591 0.592 0.610 0.215 0.802 0.695 0.845 0.327 ColBERT-X MTT 0.643 0.658 0.649 0.318 0.827 0.748 0.860 0.362 NeuCLIR 2022 QT \u00bb BM25 0.305 0.447 0.537 0.453 0.557 0.569 0.752 0.383 DT \u00bb BM25 0.338 0.448 0.542 0.497 0.633 0.580 0.809 0.421 DT \u00bb ColBERT 0.403 0.539 0.635 0.449 0.708 0.652 0.842 0.426 ColBERT-X ET 0.299 0.447 0.578 0.458 0.487 0.561 0.745 0.421 ColBERT-X MTT 0.375 0.545 0.621 0.425 0.612 0.644 0.786 0.386 selected how many relevant documents would be assigned to each language, and created that many language labels. For each label we drew from a normal distribution with either the same mean for the two languages (fair), or different means (unfair). We assigned a drawn number to each label, sorted the labels by that number, and assigned the labels to the relevant documents in the resulting order. 
We did the same for the nonrelevant documents, ensuring that each language was assigned at least 45% of those documents. Figures 1(e) and (f) vary the sampling mean of the second language\u2019s relevant and nonrelevant documents, respectively, while keeping a first language sampling mean of 1.0. The figures show that PEER captures fairness independently for each relevance level. Since there are far fewer relevant documents, the evidence for fairness is also weaker, resulting in slower decay when changing the sampling mean for relevant documents. 4.3 Real MLIR Systems We evaluate five MLIR systems, including query translation (QT) and document translation (DT) with BM25, DT with English ColBERT [27], and ColBERT-X models trained with English triples (ET) and multilingual translate-train (MTT) [21], on CLEF 2003 (German, Spanish, French, and English documents with English queries) and NeuCLIR 2022 (Chinese, Persian, and Russian documents with English queries). For QT, English queries are translated into each document language and monolingual search results from each language are fused by score. We report \ud835\udefc-nDCG, AWRF (with number of relevant documents as target distribution), and the proposed PEER with rank cutoffs at 20 and 1000. Along with nDCG@20 and Recall@1000, we summarize the results in Table 1. Logically, merging ranked lists from each monolingual BM25 search with translated queries purely by scores is inviting unfair treatment, as scores from each language are incompatible with different query lengths and collection statistics [26]. We observed this trend in both PEER and AWRF, while \ud835\udefc-nDCG strongly correlates with the effectiveness scores and does not distinguish fairness. Neural MLIR models trained with only English text and transferred zero-shot to MLIR with European languages exhibit a strong language bias compared to those trained with document languages [16, 21]. PEER exhibits a similar trend in CLEF 2003, showing ColBERT-X ET is less fair than the MTT counterpart, while AWRF is less sensitive. Lawrie et al. [21] show that preference for English documents in the ET model causes this unfair treatment; this suggests that MLIR tasks without the training language (English) in the document collection would not suffer from such discrepancy. In fact, both PEER and AWRF indicate that MTT model is less fair among the three languages in NeuCLIR 2022, which is likely caused by the quality differences in machine translation [20]. AWRF and PEER disagree on the comparison between English ColBERT on translated documents (DT) and ColBERT-X models. While AWRF suggests DT \u00bb ColBERT 1 is fairer than ColBERT-X MTT in CLEF03, DT creates a larger difference among languages [21]. PEER, in contrast, aligns with prior analysis, giving a lower score to DT \u00bb ColBERT. According to Huang et al. [16], QT \u00bb BM25 has a similar language bias compared to mDPR [12], which was trained with English MS MARCO [2]. PEER suggests a similar conclusion between QT \u00bb BM25 and ColBERT-X ET, which AWRF assigns a larger difference between the two with a rank cutoff of 20. With a rank cutoff of 1000, AWRF strongly correlates with recall (Pearson \ud835\udc5f= 0.93 over both collections), while PEER does not (Pearson \ud835\udc5f= \u22120.55). The 0.904 AWRF value (range 0-1) of DT \u00bb ColBERT on CLEF03 suggests a fair system, while the ranked list does not. 
This strong relationship shows that AWRF, with target distribution being the ratio of relevant documents, is indeed measuring recall instead of fairness. While it is an artifact of the choice of target distribution, the need to define a target distribution reduces the robustness of AWRF in measuring MLIR Fairness. 5 SUMMARY We propose measuring the Probability of Equal Expected Rank (PEER) for MLIR fairness. As PEER measures the weighted \ud835\udc5d-value of a non-parametric group hypothesis test, it neither requires a target distribution nor makes distributional assumptions; this makes the metric robust. Through comparison to prior analytical work in MLIR Fairness, we conclude that PEER captures the differences and nuances between systems better than other fairness metrics. 1The (ET)+ITD setting in Lawrie et al. [21]. Language Fairness in Multilingual Information Retrieval SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA"
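As a closing illustration of the metric defined in Section 3, a rough sketch of a weighted PEER(q)@X computation for a single query is given below; the handling of unretrieved judged documents (tied at rank X+1) follows Section 3.3, but the data structures and edge-case conventions are assumptions rather than the ir-measures-compatible implementation.

```python
# Sketch of PEER(q)@X for one query: weighted combination over relevance levels
# of Kruskal-Wallis p-values, with unretrieved judged documents tied at rank X+1.
from collections import defaultdict
from scipy.stats import kruskal

def peer_at_x(run, qrels, doc_lang, weights, cutoff):
    """run: dict doc_id -> rank (1-based); qrels: dict doc_id -> relevance level;
    doc_lang: dict doc_id -> language; weights: dict relevance level -> weight."""
    score = 0.0
    for level, weight in weights.items():
        judged = [d for d, rel in qrels.items() if rel == level]
        by_lang = defaultdict(list)
        for d in judged:
            rank = run.get(d, cutoff + 1)        # unretrieved -> tied at X + 1
            by_lang[doc_lang[d]].append(min(rank, cutoff + 1))
        groups = [g for g in by_lang.values() if g]
        if not judged or len(groups) < 2:
            p = 1.0                               # nothing to compare: treated as fair
        else:
            try:
                _, p = kruskal(*groups)
            except ValueError:                    # e.g. all ranks identical
                p = 1.0
        score += weight * p
    return score

run = {"d1": 1, "d2": 4, "d3": 7}
qrels = {"d1": 1, "d2": 1, "d3": 1, "d4": 1}
doc_lang = {"d1": "zho", "d2": "fas", "d3": "rus", "d4": "rus"}
print(peer_at_x(run, qrels, doc_lang, weights={1: 1.0}, cutoff=20))
```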
17
+ }
intro_8K/test_introduction_short_2405.00981v1.json ADDED
@@ -0,0 +1,17 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.00981v1",
3
+ "title": "Bayesian Optimization with LLM-Based Acquisition Functions for Natural Language Preference Elicitation",
4
+ "abstract": "Designing preference elicitation (PE) methodologies that can quickly\nascertain a user's top item preferences in a cold-start setting is a key\nchallenge for building effective and personalized conversational recommendation\n(ConvRec) systems. While large language models (LLMs) constitute a novel\ntechnology that enables fully natural language (NL) PE dialogues, we\nhypothesize that monolithic LLM NL-PE approaches lack the multi-turn,\ndecision-theoretic reasoning required to effectively balance the NL exploration\nand exploitation of user preferences towards an arbitrary item set. In\ncontrast, traditional Bayesian optimization PE methods define theoretically\noptimal PE strategies, but fail to use NL item descriptions or generate NL\nqueries, unrealistically assuming users can express preferences with direct\nitem ratings and comparisons. To overcome the limitations of both approaches,\nwe formulate NL-PE in a Bayesian Optimization (BO) framework that seeks to\ngenerate NL queries which actively elicit natural language feedback to reduce\nuncertainty over item utilities to identify the best recommendation. We\ndemonstrate our framework in a novel NL-PE algorithm, PEBOL, which uses Natural\nLanguage Inference (NLI) between user preference utterances and NL item\ndescriptions to maintain preference beliefs and BO strategies such as Thompson\nSampling (TS) and Upper Confidence Bound (UCB) to guide LLM query generation.\nWe numerically evaluate our methods in controlled experiments, finding that\nPEBOL achieves up to 131% improvement in MAP@10 after 10 turns of cold start\nNL-PE dialogue compared to monolithic GPT-3.5, despite relying on a much\nsmaller 400M parameter NLI model for preference inference.",
5
+ "authors": "David Eric Austin, Anton Korikov, Armin Toroghi, Scott Sanner",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "cs.AI",
9
+ "cats": [
10
+ "cs.AI",
11
+ "cs.CL"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "LLM AND Reasoning",
15
+ "gt": "Personalized conversational recommendation (ConvRec) systems require effective natural language (NL) preference elicitation (PE) strategies that can efficiently learn a user\u2019s top item preferences in cold start settings, ideally requiring only an arbitrary set of NL item descriptions. While the advent of large language models (LLMs) has introduced the technology to facilitate NL-PE conversations [13, 19] we conjecture that monolithic LLMs have limited abilities to strategically conduct active, multi-turn NL-PE dialogues about a set of arbitrary items. Specifically, we hypothesize that LLMs lack the sequential decision-theoretic reasoning to interactively generate queries that avoid over-exploitation or over-exploration of user-item preferences, thus risking over-focusing on already revealed item preferences or wastefully exploring preferences over low-value items. Further challenges faced by monolithic LLM NL-PE approaches include the need to jointly reason over large, poten- tially unseen sets of item descriptions, and the lack of control and interpretability in system behaviour even after prompt engineering or fine-tuning [24]. In contrast, conventional PE algorithms [17, 18, 23, 36, 37], in- cluding Bayesian optimization methods [2, 5, 11, 28, 33], establish formal decision-theoretic policies such as Thompson Sampling (TS) and Upper Confidence Bound (UCB) [15] to balance exploration and exploitation with the goal of quickly identifying the user\u2019s most preferred items. However, these techniques typically assume a user can express preferences via direct item ratings or comparisons \u2013 an unrealistic expectation when users are unfamiliar with most items [1]. While recent work has extended Bayesian PE to a fixed set of template-based queries over pre-defined keyphrases [33], no arXiv:2405.00981v1 [cs.AI] 2 May 2024 Conference acronym \u2019XX, June 03\u201305, 2018, Woodstock, NY Austin and Korikov, et al. Figure 1: PEBOL\u2019s belief updates over a cold-start user\u2019s item utilities during three turns of NL dialogue. Bayesian preference beliefs not only facilitate recommendation, but also enable Bayesian optimization policies to guide LLM query generation, avoiding over-exploration (asking about clearly low-value items) and over-exploitation (over-focusing on known preferences). existing work extends Bayesian methodologies to generative NL-PE over a set of generic NL item descriptions. In this paper, we make the following contributions: \u2022 We introduce the first Bayesian optimization formalization of NL-PE for arbitrary NL dialogue over a generic set of NL item descriptions \u2013 establishing a framework for research on augmenting LLMs with decision-theoretic reasoning. \u2022 We present PEBOL (Preference Elicitation with Bayesian Optimization augmented LLMs), a novel NL-PE algorithm which 1) infers item preferences via Natural Language In- ference (NLI) [34] between dialogue utterances and item descriptions to maintain Bayesian preference beliefs and 2) introduces LLM-based acquisition functions, where NL query generation is guided by decision-theoretic strategies such as TS and UCB over the preference beliefs. \u2022 We numerically evaluate PEBOL against monolithic GPT-3.5 NL-PE methods via controlled NL-PE dialogue experiments over multiple NL item datasets and levels of user noise. 
\u2022 We observe that PEBOL achieves up to 131% improvement in MAP@10 after 10 turns compared to GPT-3.5, despite using a much smaller 400M NLI model for preference inference.",
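The contributions above describe inferring item preferences from NLI entailment between user utterances and NL item descriptions. A hedged sketch of such entailment scoring with an off-the-shelf NLI cross-encoder follows; the checkpoint is an illustrative stand-in (not necessarily the 400M model used in the paper), and the entailment label index is looked up from the model config rather than hard-coded.

```python
# Sketch: NLI entailment probability between a user utterance (premise) and an
# item description phrased as a hypothesis. Checkpoint is an assumed example.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "facebook/bart-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

def entailment_prob(premise: str, hypothesis: str) -> float:
    """Probability that the premise entails the hypothesis."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits[0]
    probs = torch.softmax(logits, dim=-1)
    # Find the entailment class index from the model config (label order varies).
    ent_idx = {k.lower(): v for k, v in model.config.label2id.items()}["entailment"]
    return probs[ent_idx].item()

utterance = "I feel like watching something light and funny tonight."
item_desc = "I would enjoy this item: a screwball romantic comedy set in New York."
print(entailment_prob(utterance, item_desc))
```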
16
+ "main_content": "Given an objective function \ud835\udc53: X \u2192R, (standard) optimization systematically searches for a point \ud835\udc65\u2217\u2208X that maximizes1 \ud835\udc53. Bayesian optimization focuses on settings where \ud835\udc53is a black-box function which does not provide gradient information and cannot be evaluated exactly \u2013 rather, \ud835\udc53must be evaluated using indirect or noisy observations which are expensive to obtain [9, 26]. To address these challenges, Bayesian optimization maintains probabilistic beliefs over \ud835\udc53(\ud835\udc65) and its observations to guide an uncertaintyaware optimization policy which decides where to next observe \ud835\udc53(\ud835\udc65). Bayesian optimization begins with a prior \ud835\udc5d(\ud835\udc53) which represents the beliefs about \ud835\udc53before any observations are made. Letting \ud835\udc66\ud835\udc56 represent a noisy or indirect observation of \ud835\udc53(\ud835\udc65\ud835\udc56), and collecting a 1We take the maximization direction since this paper searches for items with maximum () 1We take the maximization direction since this paper searches for items with maximum utility for a person. sequence of observations into a dataset D = (x, y), an observation model defines the likelihood \ud835\udc5d(D|\ud835\udc53). We then use the observed data and Bayes theorem to update our beliefs and obtain the posterior \ud835\udc5d(\ud835\udc53|D) = \ud835\udc5d(\ud835\udc53)\ud835\udc5d(D|\ud835\udc53) \ud835\udc5d(D) )\ud835\udc5d(D|\ud835\udc53) \ud835\udc5d(D) . (1) on function \ud835\udefe(\ud835\udc65|D) which deter(D) This posterior informs an acquisition function \ud835\udefe(\ud835\udc65|D) which determines where to next observe \ud835\udc53(\ud835\udc65) in a way that balances exploitation (focusing observations where \ud835\udc53is likely near its maximum) with exploration (probing areas where \ud835\udc53has high uncertainty). 2.2 Preference Elicitation PE has witnessed decades of research, and includes approaches based on Bayesian optimization (e.g., [3, 7, 10, 12, 16]), Bandits (e.g., [5, 20, 21, 37]), constrained optimization [25], and POMDPs [2]. In the standard PE setting, a user is assumed to have some hidden utilities u = [\ud835\udc621, ...,\ud835\udc62\ud835\udc41] over a set I of \ud835\udc41items, where item \ud835\udc56is preferred to item \ud835\udc57if\ud835\udc62\ud835\udc56> \ud835\udc62\ud835\udc57. The goal of PE is typically to search for an item \ud835\udc56\u2217\u2208arg max\ud835\udc56\ud835\udc62\ud835\udc56that maximizes user utility in a minimal number of PE queries, which most often ask a user to express item preferences as item ratings (e.g., [3, 5, 20, 21, 37]) or relative preferences between item pairs or sets (e.g., [2, 7, 10, 11, 13, 28]). An alternative form of PE asks users to express preferences over predefined item features, also through ratingor comparison-based queries [18, 23, 36]. Central to the above PE methods are query selection strategies that balance the exploration and exploitation of user preferences, with TS and UCB algorithms (cf. Sec. 4.2) often exhibiting strong performance [5, 23, 33, 36, 37]. However, none of these methods are able to interact with users through NL dialogue. 2.3 Language-Based Preference Elicitation Yang et al. [33] introduce Bayesian PE strategies using TS and UCB for keyphrase rating queries, where keyphrases are first mined from NL item reviews and then co-embedded with user-item preferences in a recommendation system. 
Handa et al. [13] propose using LLMs to interface with a conventional Bayesian PE system, suggesting a preprocessing step to extract features from NL descriptions and a verbalization step to fluidly express pairwise item comparison queries. Li et al. [19] prompt an LLM to generate PE queries for some specific domain (e.g., news content, morals), observe user responses, Bayesian Optimization with LLM-Based Acquisition Functions for Natural Language Preference Elicitation Conference acronym \u2019XX, June 03\u201305, 2018, Woodstock, NY Figure 2: The PEBOL NL-PE algorithm, which maintains a Bayesian belief state over a user\u2019s item preferences given an arbitrary set of NL item descriptions x. This belief is used by a decision-theoretic policy to balance the exploration and exploitation of preferences by strategically selecting an item description \ud835\udc65\ud835\udc56\ud835\udc61as the basis for LLM query generation. Belief updates are computed through Bayesian inference with NLI entailment scores between item descriptions and query-response pairs. and evaluate LLM relevance predictions for a single item. While these works make progress towards NL-PE, they do not study how LLM query generation can strategically explore user preferences towards an arbitrary item set outside the realm of item-based or category-based feedback. 2.4 Conversational Recommendation Recent work on ConvRec uses language models2 to facilitate NL dialogue while integrating calls to a recommender module which generates item recommendations based on user-item interaction history [4, 22, 30, 32]. He et al. [14] report that on common datasets, zero-shot GPT-3.5/4 outperforms these ConvRec methods, which generally use older language models and require user-item interaction history for their recommendation modules. 2.5 Natural Language Inference Binary Natural Language Inference (NLI) [34] models predict the likelihood that one span of text called a premise is entailed by (i.e., can be inferred from) a second span called the hypothesis. For example, an effective NLI model should predict a high likelihood that the premise \u201cI want to watch Iron Man\u201d entails the hypothesis \u201cI want to watch a superhero movie\u201d. As illustrated by this example, the hypothesis typically must be more general than the premise. NLI models are trained by fine-tuning encoder-only LLMs on NLI datasets [6, 27, 31], which typically consist of short text spans for the premise and hypothesis \u2013 thus enabling relatively efficient performance on similar tasks with a fairly small number LLM parameters. 3 PROBLEM DEFINITION We now present a Bayesian optimization formulation of NL-PE. The goal of NL-PE is to facilitate a NL dialogue which efficiently discovers a user\u2019s most preferred items out of a set of \ud835\udc41items. Each item \ud835\udc56\u2208I has a NL description \ud835\udc65\ud835\udc56, which might be a title, long-form description, or even a sequence of reviews, with the item set I collectively represented by x \u2208X with x = [\ud835\udc651, ...,\ud835\udc65\ud835\udc41]. We assume the user has some (unknown) utility function \ud835\udc53: X \u2192R 2Earlier systems (e.g. [4, 22]) use relatively small RNN-based language models. establishing hidden utilities u = \ud835\udc53(x) so that item \ud835\udc56is preferred to item \ud835\udc57if \ud835\udc62\ud835\udc56> \ud835\udc62\ud835\udc57. Our goal is to find the most preferred item(s): \ud835\udc56\u2217\u2208arg max \ud835\udc56\u2208I \ud835\udc62\ud835\udc56. 
(2) In contrast to standard Bayesian PE formalisms (c.f. Sec 2.2), we do not assume that the user can effectively convey direct item-level preferences by either: 1) providing item ratings (i.e., utilities) or 2) pairwise or listwise item comparisons. Instead, we must infer user preferences by observing utterances during a NL system-user dialogue. At turn \ud835\udc61of a dialogue, we let \ud835\udc5e\ud835\udc61and \ud835\udc5f\ud835\udc61be the system and user utterance, respectively, with q\ud835\udc61= [\ud835\udc5e1, ...,\ud835\udc5e\ud835\udc61] and r\ud835\udc61= [\ud835\udc5f1, ...,\ud835\udc5f\ud835\udc61] representing all system and user utterances up to\ud835\udc61. In this paper, we call \ud835\udc5e\ud835\udc61the query and \ud835\udc5f\ud835\udc61the response, though extensions to more generic dialogues (e.g., when users can also ask queries) are discussed in Section 7. We let H\ud835\udc61= (q\ud835\udc61, r\ud835\udc61) be the conversation history at turn \ud835\udc61. To formulate NL-PE as a Bayesian optimization problem, we place a prior belief on the user\u2019s utilities \ud835\udc5d(u|x), conditioned on item descriptions since they are available before the dialogue begins. We then assume an observation model that gives the likelihood \ud835\udc5d(r\ud835\udc61|x, u, q\ud835\udc61), letting us define the posterior utility belief as \ud835\udc5d(u|x, H\ud835\udc61) \u221d\ud835\udc5d(r\ud835\udc61|x, u, qt)\ud835\udc5d(u|x). (3) This posterior informs an acquisition function \ud835\udefe(x, H\ud835\udc61) which generates3 a new NL query \ud835\udc5e\ud835\udc61+1 = \ud835\udefe(x, H\ud835\udc61), (4) to systematically search for \ud835\udc56\u2217. The preference beliefs also let us define an Expected Utility (EU) \ud835\udf07\ud835\udc61 \ud835\udc56for every item as \ud835\udf07\ud835\udc61 \ud835\udc56= E\ud835\udc5d(u|x,H\ud835\udc61) [\ud835\udc62\ud835\udc56], (5) which allows the top-\ud835\udc58items to be recommended at any turn based on their expected utilities. Our Bayesian optimization NL-PE paradigm lets us formalize several key questions, including: (1) How do we represent beliefs \ud835\udc5d(u|x, H\ud835\udc61) in user-item utilities u, given NL item descriptions x and a dialogue H\ud835\udc61? 3To represent the generative acquisition of NL outputs, we deviate from the conventional definition of acquisition functions as mapping to R. Conference acronym \u2019XX, June 03\u201305, 2018, Woodstock, NY Austin and Korikov, et al. Figure 3: Cherry-picked system-generated dialogues from our NL-PE experiments. The Monolithic GPT-3.5 dialogue (left) demonstrates over-exploitation, with \ud835\udc5e3 directly extending \ud835\udc5e2 after a positive user preference is observed and leading to the extreme case of query repetition (\ud835\udc5e4 = \ud835\udc5e3). In contrast, PEBOL (right) continues exploring even after a positive response, while focusing on promising aspects (three out of four queries elicit a positive response) by using UCB-guided query generation. (2) What are effective models for the likelihood \ud835\udc5d(r\ud835\udc61|x, u, q\ud835\udc61) of observed responses r\ud835\udc61given x, q\ud835\udc61, and user utilities u? (3) How can our beliefs inform the generative acquisition of NL queries \ud835\udc5e\ud835\udc61+1 given H\ud835\udc61to strategically search for \ud835\udc56\u2217? These questions reveal a number of novel research directions discussed further in Section 7. 
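The three questions above map directly onto the components of a Bayesian optimization loop. The sketch below is a structural illustration only: every helper is passed in as a callable rather than defined by the paper, and the helper names (init_belief, acquire_query, infer_ratings, update_belief, expected_utilities) are placeholders we introduce for this illustration. Concrete PEBOL versions of these pieces are sketched after the sections that define them.

```python
# Structural sketch of the NL-PE loop implied by Eqs. (3)-(5). All helpers are
# injected callables (placeholders, not names from the paper): init_belief
# builds the prior p(u|x), acquire_query implements the generative acquisition
# function gamma(x, H_t), infer_ratings maps a query-response pair to per-item
# observations, and update_belief computes the posterior p(u|x, H_t).
def nl_pe_loop(item_descriptions, respond, init_belief, acquire_query,
               infer_ratings, update_belief, expected_utilities,
               turns=10, k=10):
    belief = init_belief(item_descriptions)                  # prior p(u | x)
    history = []                                             # H_t = (q_1..t, r_1..t)
    for _ in range(turns):
        query = acquire_query(belief, item_descriptions, history)    # q_{t+1}
        response = respond(query)                                    # r_{t+1}
        history.append((query, response))
        ratings = infer_ratings(item_descriptions, query, response)  # per-item w or r
        belief = update_belief(belief, ratings)              # posterior p(u | x, H_{t+1})
        mu = expected_utilities(belief)                      # mu_i = E[u_i], Eq. (5)
        top_k = sorted(range(len(mu)), key=mu.__getitem__, reverse=True)[:k]
        yield query, top_k
```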
In this paper, we present PEBOL, a NL-PE algorithm based on the above Bayesian optimization NL-PE formalism, and numerically evaluate it against monolithic LLM alternatives through controlled, simulated NL dialogues (cf. Sec. 6). 4 METHODOLOGY Limitations of Monolithic LLM Prompting. An obvious NL-PE approach, described further as baseline in Section 5.1, is to prompt a monolithic LLM with all item descriptions x, dialogue history H\ud835\udc61, and instructions to generate a new query at each turn. However, providing all item descriptions [\ud835\udc651, ...,\ud835\udc65\ud835\udc41] in the LLM context window is very computationally expensive for all but the smallest item sets. While item knowledge could be internalized through finetuning, each item update would imply system retraining. Critically, an LLM\u2019s preference elicitation behaviour cannot be controlled other than by prompt-engineering or further fine-tuning, with neither option offering any guarantees of predictable or interpretable behaviour that balances the exploitation and exploration of user preferences. PEBOL Overview. We propose to addresses these limitations by augmenting LLM reasoning with a Bayesian Optimization procedure in a novel algorithm, PEBOL, illustrated in Figure 2. At each turn \ud835\udc61, our algorithm maintains a probabilistic belief state over user preferences as a Beta belief state (cf. Sec. 4.1). This belief state guides an LLM-based acquisition function to generate NL queries explicitly balancing exploration and exploitation to uncover the top user preferences (cf. Sec. 4.2). In addition, our acquisition function reduces the context needed to prompt the LLM in each turn from all \ud835\udc41item descriptions x to a single strategically selected item description \ud835\udc65\ud835\udc56\ud835\udc61. PEBOL then uses NLI over elicited NL preferences and item descriptions to map dialogue utterances to numerical observations (c.f. Sec 4.3). 4.1 Utility Beliefs 4.1.1 Prior Beliefs. Before any dialogue, PEBOL establishes a prior belief \ud835\udc5d(u|x) on user-item utilities. We assume item utilities are independent so that \ud835\udc5d(u|x) = \ud835\udc41 \u00d6 \ud835\udc56=1 \ud835\udc5d(\ud835\udc62\ud835\udc56|\ud835\udc65\ud835\udc56), (6) and that the prior for each utility \ud835\udc62\ud835\udc56is a Beta distribution \ud835\udc5d(\ud835\udc62\ud835\udc56|\ud835\udc65\ud835\udc56) = Beta(\ud835\udefc0 \ud835\udc56, \ud835\udefd0 \ud835\udc56). (7) Since this paper focuses on fully cold start settings, we assume a uniform Beta prior with (\ud835\udefc0 \ud835\udc56, \ud835\udefd0 \ud835\udc56) = (1, 1). Beta distributions, illustrated in Figure 1, lie in the domain [0, 1] \u2013 a normalized interval for bounded ratings in classical recommendation systems. We can thus interpret utility values of \ud835\udc62\ud835\udc56= 1 or \ud835\udc62\ud835\udc56= 0 to represent a complete like or dislike of item \ud835\udc56, respectively, while values \ud835\udc62\ud835\udc56\u2208(0, 1) provide a strength of preference between these two extremes. 4.1.2 Observation Model. To perform a posterior update on our utility beliefs given observed responses r\ud835\udc61, we need an observation model that represents the likelihood \ud835\udc5d(r\ud835\udc61|x, u, q\ud835\udc61). Modelling the likelihood of r\ud835\udc61is a challenging task, so we will require some simplifying assumptions. 
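Before the observation model is made concrete, the prior of Sec. 4.1.1 is simple enough to sketch directly: one independent Beta distribution per item, initialized to the uniform Beta(1, 1) for the cold-start setting. The class name below is our own shorthand, not an identifier from the paper.

```python
# Independent Beta(1, 1) priors over item utilities (Eqs. 6-7). The posterior
# mean E[u_i] = alpha_i / (alpha_i + beta_i) serves as the expected utility
# mu_i when ranking items for recommendation.
from dataclasses import dataclass

@dataclass
class BetaBelief:
    alpha: float = 1.0   # uniform prior: alpha_i^0 = 1
    beta: float = 1.0    # uniform prior: beta_i^0  = 1

    def expected_utility(self) -> float:
        return self.alpha / (self.alpha + self.beta)

def init_belief(item_descriptions):
    # One independent belief per item description x_i
    return [BetaBelief() for _ in item_descriptions]
```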
Firstly, we assume that the likelihood of a single response \ud835\udc5f\ud835\udc61is independent from any previous dialogue history H\ud835\udc61\u22121, so that: \ud835\udc5d(r\ud835\udc61|x, u, q\ud835\udc61) = \ud835\udc61 \u00d6 \ud835\udc61\u2032=1 \ud835\udc5d(\ud835\udc5f\ud835\udc61\u2032 |x, u,\ud835\udc5e\ud835\udc61\u2032). (8) Note that this independence assumption will allow incremental posterior belief updates, so that \ud835\udc5d(u|x, H\ud835\udc61) \u221d\ud835\udc5d(\ud835\udc5f\ud835\udc61|x, u,\ud835\udc5e\ud835\udc61)\ud835\udc5d(u|x, H\ud835\udc61\u22121). (9) 4.1.3 Binary Item Response Likelihoods and Posterior Update. With the factorized distributions over item utilities and observational likelihood history now defined, we simply have to provide a concrete observational model of the response likelihood conditioned on the query, item descriptions, and latent utility: \ud835\udc5d(\ud835\udc5f\ud835\udc61|x, u,\ud835\udc5e\ud835\udc61). Bayesian Optimization with LLM-Based Acquisition Functions for Natural Language Preference Elicitation Conference acronym \u2019XX, June 03\u201305, 2018, Woodstock, NY Figure 4: MAP@10 for MonoLLM and PEBOL-P with uncertainty-informed policies (UCB, TS, ER) \u2013 all methods show preference learning over time. MonoLLM is outperformed by PEBOL on Yelp and MovieLens, but remains competitive with PEBOL up to the fourth turn on Recipe-MPR until demonstrating a sharp performance drop that takes several turns to recover from. Because the prior is factorized over conditionally independent \ud835\udc62\ud835\udc56(cf. (6)), we can likewise introduce individual per-item factorized binary responses \ud835\udc5f\ud835\udc61 \ud835\udc56\u2208{0(dislike), 1(like)} to represent the individual relevance of each item \ud835\udc56to the preference elicited at turn \ud835\udc61. Critically, we won\u2019t actually require an individual response per item \u2014 this will be computed by a natural language inference (NLI) model [6] to be discussed shortly \u2014 but we\u2019ll begin with an individual binary response model for \ud835\udc5f\ud835\udc61 \ud835\udc56for simplicity: \ud835\udc5d(\ud835\udc5f\ud835\udc61 \ud835\udc56|\ud835\udc65\ud835\udc56,\ud835\udc62\ud835\udc56,\ud835\udc5e\ud835\udc61) = Bernoulli(\ud835\udc62\ud835\udc56). (10) With our response likelihood defined, this now leads us to our first pass at a full posterior utility update that we term PEBOL-B for observed Binary rating feedback. Specifically, given observed binary ratings \ud835\udc5f\ud835\udc61 \ud835\udc56, the update at \ud835\udc61= 1 uses the Beta prior (7) with the Bernoulli likelihood (10) to form a standard Beta-Bernoulli conjugate pair and compute the posterior utility belief \ud835\udc5d(\ud835\udc62\ud835\udc56|\ud835\udc65\ud835\udc56, H1) \u221d\ud835\udc5d(\ud835\udc62\ud835\udc56|\ud835\udc65\ud835\udc56)\ud835\udc5d(\ud835\udc5f1 \ud835\udc56|\ud835\udc65\ud835\udc56,\ud835\udc62\ud835\udc56,\ud835\udc5e\ud835\udc61) (11) = Beta(\ud835\udefc1 \ud835\udc56, \ud835\udefd1 \ud835\udc56), (12) where \ud835\udefc1 \ud835\udc56= \ud835\udefc0 \ud835\udc56+ \ud835\udc5f1 \ud835\udc56, \ud835\udefd1 \ud835\udc56= \ud835\udefd0 \ud835\udc56+ (1 \u2212\ud835\udc5f1 \ud835\udc56). Subsequent incremental updates updates follow Eq. 
(9) and use the same conjugacy to give \ud835\udc5d(\ud835\udc62\ud835\udc56|\ud835\udc65\ud835\udc56, H\ud835\udc61) = Beta(\ud835\udefc\ud835\udc61 \ud835\udc56, \ud835\udefd\ud835\udc61 \ud835\udc56), (13) where \ud835\udefc\ud835\udc61 \ud835\udc56= \ud835\udefc\ud835\udc61\u22121 \ud835\udc56 + \ud835\udc5f\ud835\udc61 \ud835\udc56, \ud835\udefd\ud835\udc61 \ud835\udc56= \ud835\udefd\ud835\udc61\u22121 \ud835\udc56 + (1 \u2212\ud835\udc5f\ud835\udc61 \ud835\udc56). 4.1.4 Natural Language Inference and Probabilistic Posterior Update. As hinted above, effective inference becomes slightly more nuanced since we don\u2019t need to observe an explicit binary response per item in our PEBOL framework. Rather, we receive general preference feedback \ud835\udc5f\ud835\udc61on whether a user generically prefers a text description \ud835\udc5e\ud835\udc61and then leverage an NLI model [6] to infer whether the description \ud835\udc65\ud835\udc56of item \ud835\udc56would be preferred according to this feedback. For instance, for a (\ud835\udc5e\ud835\udc61,\ud835\udc5f\ud835\udc61) pair (\u201cWant to watch a children\u2019s movie?\u201d,\u201cYes\u201d), NLI should infer a rating of \ud835\udc5f\ud835\udc61 1 = 1 for \ud835\udc651 = \u201cThe Lion King\u201d and \ud835\udc5f\ud835\udc61 2 = 0 for \ud835\udc652 = \u201cTitanic\u201d. To deal with the fact that NLI models actually return an entailment probability, our probabilistic observation variant, PEBOL-P leverages the entailment probability that \ud835\udc5f\ud835\udc61 \ud835\udc56= 1 (cf. Sec 4.3), which we denote as \ud835\udc64\ud835\udc61 \ud835\udc56\u2208[0, 1]. We then simply relax our previous binary observations \ud835\udc5f\ud835\udc61 \ud835\udc56to continuous \ud835\udc64\ud835\udc61 \ud835\udc56, thus leading to soft posterior updates \ud835\udefc\ud835\udc61 \ud835\udc56= \ud835\udefc\ud835\udc61\u22121 \ud835\udc56 + \ud835\udc64\ud835\udc61 \ud835\udc56, \ud835\udefd\ud835\udc61 \ud835\udc56= \ud835\udefd\ud835\udc61\u22121 \ud835\udc56 + (1 \u2212\ud835\udc64\ud835\udc61 \ud835\udc56). To visually illustrate how this posterior inference process works in practice, Figure 1 shows the effect of PEBOL\u2019s posterior utility belief updates based on NLI for three query-response pairs \u2013 we can see the system gaining statistical knowledge about useful items for the user from the dialogue. 4.2 LLM-Based Acquisition Functions Recall from Sec. 2.1 that in Bayesian optimization, the posterior informs an acquisition function which determines where to make the next observation. PEBOL generates a new query \ud835\udc5e\ud835\udc61with a two-step acquisition function \ud835\udefe, first using Bayesian Optimization policies (step 1) based on the posterior utility beliefs \ud835\udc5d(u|x, H\ud835\udc61) to select NL context, and then using this selected context to guide LLM prompting (step 2). We express the overall acquisition function \ud835\udefe= \ud835\udefe\ud835\udc3a\u25e6\ud835\udefe\ud835\udc36as a composition of a context acquisition function \ud835\udefe\ud835\udc36 (cf. Sec. 4.2.1) and a NL generation function \ud835\udefe\ud835\udc3a(cf. Sec. 4.2.2). 4.2.1 Context Acquisition via Bayesian Optimization Policies. First, PEBOL harnesses Bayesian optimization policies to select an item description \ud835\udc65\ud835\udc56\ud835\udc61which will be used to prompt an LLM to generate a query about an aspect described by \ud835\udc65\ud835\udc56\ud835\udc61(cf. Sec. 4.2.2). 
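Returning to the belief updates of Secs. 4.1.3 and 4.1.4, both the binary (PEBOL-B) and probabilistic (PEBOL-P) variants reduce to the same two-line conjugate update; the only difference is whether the observation is a hard rating in {0, 1} or an NLI entailment probability in [0, 1]. The sketch below reuses the BetaBelief class introduced earlier.

```python
# Conjugate Beta update shared by PEBOL-B and PEBOL-P:
#   alpha_i^t = alpha_i^{t-1} + w_i^t,   beta_i^t = beta_i^{t-1} + (1 - w_i^t),
# where w_i^t is either a binary rating r_i^t (PEBOL-B) or an entailment
# probability in [0, 1] (PEBOL-P).
def update_belief(beliefs, observations):
    for belief, w in zip(beliefs, observations):
        belief.alpha += w
        belief.beta += 1.0 - w
    return beliefs

def expected_utilities(beliefs):
    # mu_i = E[u_i] under the current Beta posterior (Eq. 5)
    return [b.expected_utility() for b in beliefs]
```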
Selecting an item \ud835\udc56\ud835\udc61whose utility \ud835\udc62\ud835\udc56\ud835\udc61is expected to be near the maximum, \ud835\udc62\ud835\udc56\u2217, will generate exploitation queries asking about properties of items that are likely to be preferred by the user. In contrast, selecting an item\ud835\udc56\ud835\udc61associated with high uncertainty in its utility\ud835\udc62\ud835\udc61 \ud835\udc56will generate exploration queries that probe into properties of items for which user preferences are less known. Thus, strategically selecting \ud835\udc65\ud835\udc56\ud835\udc61allows PEBOL to balance the exploration and exploitation behaviour of NL queries, decreasing the risks of becoming stuck in local optima (over-exploitation) or wasting resources exploring low utility item preferences (over-exploration). We define the item selected by the context acquisition function as \ud835\udc56\ud835\udc61= \ud835\udefe\ud835\udc36(x, H\ud835\udc61), (14) and list several alternatives for\ud835\udefe\ud835\udc36, including the well-known strategies of TS and UCB [26]: (1) Thompson Sampling (TS): First, a sample of each item\u2019s utility \u02c6 \ud835\udc62\ud835\udc61 \ud835\udc56is taken from the posterior, \u02c6 \ud835\udc62\ud835\udc61 \ud835\udc56\u223c\ud835\udc5d(\ud835\udc62\ud835\udc56|\ud835\udc65\ud835\udc56, H\ud835\udc61). Then, the item with the highest sampled utility is selected: \ud835\udc56\ud835\udc61= arg max \ud835\udc56 \u02c6 \ud835\udc62\ud835\udc61 \ud835\udc56. (15) TS explores more when beliefs have higher uncertainty and exploits more as the system becomes more confident. (2) Upper Confidence Bound (UCB): Let \ud835\udc43\ud835\udc58(\ud835\udefc, \ud835\udefd) represent the\ud835\udc58\u2019th percentile of Beta(\ud835\udefc, \ud835\udefd), which provides a confidence Conference acronym \u2019XX, June 03\u201305, 2018, Woodstock, NY Austin and Korikov, et al. Figure 5: MAP@10 for PEBOL-P with various context acquisition policies. bound on the posterior. UCB selects the item with the highest confidence bound \ud835\udc56\ud835\udc61= arg max \ud835\udc56 \ud835\udc43\ud835\udc58(\ud835\udc5d(\ud835\udc62\ud835\udc56|\ud835\udc65\ud835\udc56, H\ud835\udc61)), (16) following a balanced strategy because confidence bounds are increased by both high utility and high uncertainty. (3) Entropy Reduction (ER): An explore-only strategy that selects the item with the most uncertain utility: \ud835\udc56\ud835\udc61= arg max \ud835\udc56 Var(\ud835\udc5d(\ud835\udc62\ud835\udc56|\ud835\udc65\ud835\udc56, H\ud835\udc61)). (17) (4) Greedy: An exploit-only strategy that selects the item with the highest expected utility \ud835\udf07\ud835\udc61 \ud835\udc56(Eq. 5): \ud835\udc56\ud835\udc61= arg max \ud835\udc56 \ud835\udf07\ud835\udc61 \ud835\udc56. (18) (5) Random: An explore-only heuristic that selects the next item randomly. 4.2.2 Generating Short, Aspect-Based NL Queries. Next, PEBOL prompts an LLM to generate a NL query \ud835\udc5e\ud835\udc61based on the selected item description \ud835\udc65\ud835\udc56\ud835\udc61while also using the dialogue history H\ud835\udc61to avoid repetitive queries. We choose to generate \u201cyes-or-no\u201d queries asking if a user prefers items with some aspect \ud835\udc4e\ud835\udc61, which is a short text span extracted dynamically from \ud835\udc65\ud835\udc56\ud835\udc61to be different from any previously queried aspects \ud835\udc4e1, ...,\ud835\udc4e\ud835\udc61\u22121. 
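Stepping back to the context-acquisition policies of Sec. 4.2.1, each policy is a one-line decision over the Beta beliefs. The sketch below uses SciPy's Beta distribution; the 95th-percentile choice for UCB is our illustrative assumption, since the percentile is not fixed in the description above.

```python
# The five context-acquisition policies gamma_C of Sec. 4.2.1, selecting the
# index i_t of the item description used to seed LLM query generation.
import random
from scipy.stats import beta as beta_dist

def select_item(beliefs, policy="ucb", ucb_percentile=0.95, rng=random):
    indices = range(len(beliefs))
    if policy == "ts":        # Thompson Sampling: sample each u_i, take the argmax
        samples = [beta_dist.rvs(b.alpha, b.beta) for b in beliefs]
        return max(indices, key=lambda i: samples[i])
    if policy == "ucb":       # Upper Confidence Bound: k-th percentile of the posterior
        bounds = [beta_dist.ppf(ucb_percentile, b.alpha, b.beta) for b in beliefs]
        return max(indices, key=lambda i: bounds[i])
    if policy == "er":        # Entropy Reduction: most uncertain (highest-variance) utility
        variances = [beta_dist.var(b.alpha, b.beta) for b in beliefs]
        return max(indices, key=lambda i: variances[i])
    if policy == "greedy":    # Greedy: highest expected utility mu_i
        return max(indices, key=lambda i: beliefs[i].expected_utility())
    if policy == "random":    # Random: uniform exploration
        return rng.randrange(len(beliefs))
    raise ValueError(f"unknown policy: {policy}")
```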
We adopt this query generation strategy to: 1) reduce cognitive load on the user, who may be frustrated by long and specific queries about unfamiliar items and 2) better facilitate NLI through brief, general phrases [34]. Letting \ud835\udf19represent the query generation prompt, we let \ud835\udc5e\ud835\udc61,\ud835\udc4e\ud835\udc61= \ud835\udefe\ud835\udc3a(\ud835\udc65\ud835\udc56\ud835\udc61, H\ud835\udc61,\ud835\udf19) (19) be the LLM generated query and aspect at turn \ud835\udc61, with prompting details discussed in Section 5.2.2. An example of such a query and aspect (bold) is \u201cAre you interested in movies with patriotic themes?\u201d, generated by PEBOL in our movie recommendation experiments and shown in Figure 2. 4.3 NL Item-Preference Entailment 4.3.1 Preference Descriptions from Query Response Pairs. Next, PEBOL receives a NL user response \ud835\udc5f\ud835\udc61, which it must convert to individual item preference observations. Since the LLM is instructed to generate \"yes-or-no\" queries \ud835\udc5e\ud835\udc61asking a user if they like aspect \ud835\udc4e\ud835\udc61, we assume the user response will be a \"yes\" or a \"no\", and create a NL description of the users preference \ud835\udf0c\ud835\udc61, letting \ud835\udf0c\ud835\udc61= \ud835\udc4e\ud835\udc61if \ud835\udc5f\ud835\udc61=\u201cyes\u201d, and \ud835\udf0c\ud835\udc61= concat(\u201cnot \u201d,\ud835\udc4e\ud835\udc61) if \ud835\udc5f\ud835\udc61= \u201cno\u201d. For example, given a query that asks if the user prefers the aspect \u201cpatriotism\u201d in an item, if the user response is \u201cyes\u201d, then the user preference \ud835\udf0c\ud835\udc61is \u201cpatriotism\u201d, and \u201cnot patriotism\u201d otherwise. This approach produces short, general preference descriptions that are well suited for NLI models [34]. 4.3.2 Inferring Item Ratings from NL Preferences. Given a NL preference \ud835\udf0c\ud835\udc61, PEBOL must infer whether the user would like an item described by\ud835\udc65\ud835\udc56. Specifically, PEBOL acquires ratings w\ud835\udc61= [\ud835\udc64\ud835\udc61 1, ...,\ud835\udc64\ud835\udc61 \ud835\udc41] (cf. Sec. 4.1.4) by using NLI to predict whether an item description \ud835\udc65\ud835\udc56entails (i.e., implies) the preference \ud835\udf0c\ud835\udc61. For example, we expect that an NLI model would predict that \ud835\udc65\ud835\udc56=\u201cThe Lion King\u201d entails \ud835\udf0c\ud835\udc61=\u201canimated\u201d while \ud835\udc65\ud835\udc57=\u201cTitanic\u201d does not, inferring that a user who expressed preference \ud835\udf0c\ud835\udc61would like item \ud835\udc56but not \ud835\udc57. We use an NLI model \ud835\udc54\ud835\udf03(\ud835\udc65\ud835\udc56, \ud835\udf0c\ud835\udc61) to predict the probability \ud835\udc64\ud835\udc61 \ud835\udc56that \ud835\udc65\ud835\udc56entails \ud835\udf0c\ud835\udc61, and return \ud835\udc5f\ud835\udc61 \ud835\udc56= \u230a\ud835\udc64\ud835\udc61 \ud835\udc56\u2309in the case of binary observations (PEBOL-B) and \ud835\udc64\ud835\udc61 \ud835\udc56in the case of probabilistic observations (PEBOL-P). 4.4 The Complete PEBOL System This concludes the PEBOL specification \u2013 the entire process from prior utility belief to the LLM-based acquisition function generation of a query to the posterior utility update is illustrated in Figure 2. 
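The NLI step of Sec. 4.3 can be sketched with an off-the-shelf MNLI model. The checkpoint name follows footnote 4 of the paper, and the temperature-scaled two-way softmax over the entailment and contradiction logits follows the description in Sec. 5.2.3; the label-index handling below is read from the model config rather than asserted by the paper, so treat this as an approximation of the pipeline, not its reference implementation.

```python
# Hedged sketch of the NLI rating step: score how likely each item description
# x_i entails the elicited preference string rho_t.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "facebook/bart-large-mnli"   # checkpoint named in footnote 4
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

def build_preference(aspect: str, response: str) -> str:
    # rho_t = a_t if the user says "yes", else "not a_t" (Sec. 4.3.1)
    return aspect if response.strip().lower().startswith("y") else f"not {aspect}"

def entailment_probs(item_descriptions, preference, temperature=1.0):
    """Return w_i = P(x_i entails rho_t) for every item description x_i."""
    label2id = {v.lower(): k for k, v in model.config.id2label.items()}
    ent, con = label2id["entailment"], label2id["contradiction"]
    probs = []
    for x_i in item_descriptions:
        inputs = tokenizer(x_i, preference, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**inputs).logits[0] / temperature
        # Softmax over the entailment and contradiction logits only (neutral dropped),
        # taking the entailment probability as the soft observation w_i.
        two_way = torch.softmax(logits[[con, ent]], dim=-1)
        probs.append(two_way[1].item())
    return probs
```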
5 EXPERIMENTAL METHODS We numerically evaluate our PEBOL variations through controlled NL-PE dialogue experiments across multiple datasets and response noise levels \u2013 comparing against a monolithic GPT-3.5-turbo NLPE system, which we refer to as MonoLLM. We do not compare against ConvRec methods [4, 22, 30, 32] because they are not coldstart systems, requiring observed user-item interactions data to drive their recommendation modules. We also do not base our experiments on ConvRec datasets such as ReDIAL [22], since they are made up of pre-recorded conversation histories and cannot be used to evaluate active, cold-start NL-PE systems. 5.1 MonoLLM Baseline A major challenge of using MonoLLM for NL-PE is that item descriptions x either need to be internalized through training or be provided in the context window (cf. Sec. 4). Since we focus on fully cold-start settings, we test the latter approach as a baseline. Specifically, we prompt the LLM to generate new queries to elicit preferences given the full conversation history H\ud835\udc61and x (all prompts are shown in the Supplementary Materials). We evaluate recommendation performance after each turn by using another prompt to recommend a list of ten item names from x given H\ud835\udc61. Due to context window limits, this MonoLLM approach is only feasible for Bayesian Optimization with LLM-Based Acquisition Functions for Natural Language Preference Elicitation Conference acronym \u2019XX, June 03\u201305, 2018, Woodstock, NY Figure 6: MAP@10 for PEBOL using binary vs. probabilistic entailment scores. PEBOL-P with the best policy (TS on Yelp and MovieLens, UCB on Recipe-MPR) generally outperforms PEBOL-B. small item sets with short item descriptions; thus, we have to limit |I| to 100 for fair comparison to the MonoLLM baseline. 5.2 Simulation Details We test PEBOL and MonoLLM through NL-PE dialogues with LLMsimulated users whose item preferences are hidden from the system, evaluating recommendation performance over 10 turns of dialogue, and using GPT-3.5-turbo-0613 for all text generation. 5.2.1 User Simulation. For each experiment, we simulate 100 users, each of which likes a single item \ud835\udc56\u2208I. Each user is simulated by GPT-3.5-turbo-0613, which is given item description \ud835\udc65\ud835\udc56and instructed to provide only \u201cyes\u201d or \u201cno\u201d responses to a query \ud835\udc5e\ud835\udc61as if it was a user who likes item \ud835\udc56. 5.2.2 PEBOL Query Generation. Given an item description \ud835\udc65\ud835\udc56and previously generated aspects (\ud835\udc4e1, ...,\ud835\udc4e\ud835\udc61\u22121), an LLM is prompted to generate an aspect \ud835\udc4e\ud835\udc61describing the item \ud835\udc56that is no more than 3 words long. The LLM is then prompted again to generate a \u201cyes-orno\u201d query asking if a user prefers \ud835\udc4e\ud835\udc61. 5.2.3 NLI. We use the 400M FAIR mNLI4 model to predicts logits for entailment, contradiction, and neutral, and divide these logits by an MNLI temperature \ud835\udc47\u2208{1, 10, 100} As per the FAIR guidelines, we pass the temperature-scaled entailment and contradiction scores through a softmax layer and take the entailment probabilities. 5.2.4 User Response Noise. We test three user response noise levels \u2208{0,0.25,0.5} corresponding to the proportion or user responses that are randomly selected between \"yes\" and \"no\". 5.2.5 Omitting Query History Ablation. 
We test how tracking query history in PEBOL effects performance with an ablation study that removes previously generated aspects (\ud835\udc4e1, ...,\ud835\udc4e\ud835\udc61\u22121) from the aspect extraction prompt. 5.3 Datasets We obtain item descriptions from three real-world datasets: MovieLens25M5, Yelp6, and Recipe-MPR [35] (example item descriptions from each shown in Table 1 in the Supplementary Materials). After the filtering steps below for Yelp and MovieLens, we randomly sample 100 items to create x. For Yelp, we filter restaurant descriptions to be from a single major North American city and to have at least 4https://huggingface.co/facebook/bart-large-mnli 5https://grouplens.org/datasets/movielens/25m/ 6https://www.yelp.com/dataset 50 reviews and five or more category labels. For MovieLens,7 we filter movies to be in the 10% by rating count with at least 20 tags, and let movie descriptions use the title, genre labels, and 20 most common user-assigned tags. 5.4 Research Questions Our experiments explore the following research questions (RQs): \u2022 RQ1: How does PEBOL perform against the GPT-3.5 MonoLLM baseline? \u2022 RQ2: Does PEBOL perform better with binary or probabilistic observations, and how sensitive is the latter to temperature? \u2022 RQ3: How do PEBOL and MonoLLM perform under user response noise? \u2022 RQ4: How do the context selection policies of TS, UCB, ER, Greedy, and Random effect PEBOL performance? \u2022 RQ5: How much does PEBOL performance depend on access to the query history during query generation? 6 EXPERIMENTAL RESULTS 6.1 RQ1 PEBOL vs. MonoLLM Figure 4 shows MAP@10 over 10 dialogue turns for MonoLLM and PEBOL (UCB,TS,ER), with all methods starting near random guessing (reflecting a cold start) and showing clear preference learning over time.8 PEBOL achieves a 131% improvement over MonoLLM in MAP@10 after 10 dialogue turns on Yelp, an 88% improvement on MovieLens, and a 55% improvement on RecipeMPR. MonoLLM also displays a large drop in MAP@10 between turns 4 and 5 on RecipeMPR which it takes an additional five turns to recover from. In contrast, we do not see any comparably large drops in our methods, indicating that the incremental Bayesian belief updating mechanism of PEBOL may make it less prone to catastrophic mistakes. 6.2 RQ2 Binary vs. Probabilistic Responses Figure 6 compares PEBOL performance for PEBOL-B versus PEBOLP. In general, PEBOL-P performed better, likely due to PEBOL-B discarding valuable information from entailment probabilities. Specifically Figure 6 compares PEBOL-B with each policy against PEBOL-P with the best performing policy on each dataset. On MovieLens, 7For all experiments with MovieLens, we use the 16k version of GPT-3.5-turbo-0613, due to MonoLLM requiring extra context length for x. 8For each policy, we use the MNLI temperature that performed best on the most datasets (see Supplementary Materials) with continous responses. Conference acronym \u2019XX, June 03\u201305, 2018, Woodstock, NY Austin and Korikov, et al. Figure 7: The effect of including the generated aspect history in the aspect generation prompt. Including the history improves performance, which we hypothesize is due to reducing repeated or uninformative queries. Figure 8: The effect of user response noise on MAP@10 \u2013 error bars are 95% confidence intervals. At all noise levels, MonoLLM is outperformed by all PEBOL-P methods on Yelp and MovieLens and by UCB, Greedy, and ER on Recipe-MPR. 
we see at least a 34% improvement in MAP from probabilistic entailment over binary entailment starting from turn 2, though we note that TS with binary updates performs comparably to the best probabilistic methods on Yelp. To identify our best continuous method, we first compare the performance of PEBOL with PEBOL-P using MNLI temperatures 1, 10, and 100 with each policy. We keep the temperature with the highest MAP@10 at turn 10 for the most datasets: for Yelp and MovieLens, this is TS with MNLI temperature 1; for Recipe-MPR, it is UCB with MNLI temperature 10. 6.3 RQ3 Effect of User Response Noise Figure 8 shows the impact of user response noise on MAP@10 at turn 10 \u2013 PEBOL generally continues to outperform MonoLLM under user response noise. At turn 10 for all noise levels, MonoLLM is the worst performing method on Yelp and MovieLens, and on RecipeMPR it is always outperformed by UCB, Greedy, and ER. Methods typically maintain their relative performance under various noise levels \u2013 though an exception is MovieLens where TS is the best method under zero noise and nearly the worst one at 25% noise (though the margin between the best and worst method in this case is small). 6.4 RQ4 Comparison of Context Acquisition Policies Figure 5 compares the performance of various PEBOL context acquisition policies \u2013 all policies show active preference learning, other than random item selection on RecipeMPR. There is considerable overlap between methods, however for most turns TS does well on Yelp and MovieLens while being beaten by Greedy, ER, and UCB on Recipe-MPR. As expected due to the randomness in sampling, TS performance is correlated with random item selection, while UCB performs quite similarly to greedy. 6.5 RQ5 Effect of Aspect History in Query Generation We see an improvement in performance from including a list of previously generated aspects in the aspect generation prompt. In Figure 7, the average improvement across all turns and all datasets for including the aspect history is 50% for TS and 49% for UCB. Including the aspect generation history helps to avoid repeat queries, which gain no information and would frustrate a user. 7 CONCLUSION AND FUTURE WORK This paper presents a novel Bayesian optimization formalization of natural language (NL) preference elicitation (PE) over arbitrary NL item descriptions, as well as introducing and evaluating PEBOL, an algorithm for NL Preference Elicitation with Bayesian Optimization augmented LLMs \u2013 and establishing many new research directions. For instance, while PEBOL uses a pointwise query generation strategy that selects one item description at a time for LLM context, future work can explore LLM-based acquisition functions with pairwise and setwise context selection. Such multi-item context selection would enable contrastive query generation that could better discriminate between item preferences. Another direction for future research is the integration of NL-PE methodologies such as PEBOL into conversational recommendation (ConvRec) system architectures (e.g. [8, 29]), which must balance many tasks including recommendation, explanation, and personalized question answering. PE in ConvRec systems thus implies that algorithms need to elicit preferences given arbitrary pairs of system-user utterances, in addition to queries and responses. Bayesian Optimization with LLM-Based Acquisition Functions for Natural Language Preference Elicitation Conference acronym \u2019XX, June 03\u201305, 2018, Woodstock, NY"
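For reference, the MAP@10 measure reported throughout Sec. 6 is straightforward under the simulation protocol of Sec. 5.2.1, where each simulated user likes exactly one item: average precision at 10 collapses to the reciprocal rank of that item when it appears in the top ten, and zero otherwise. The rankings and item ids below are toy values for illustration.

```python
# MAP@10 under the single-relevant-item simulation protocol.
def average_precision_at_k(ranked_items, liked_item, k=10):
    for rank, item in enumerate(ranked_items[:k], start=1):
        if item == liked_item:
            return 1.0 / rank     # one relevant item => AP@k is its reciprocal rank
    return 0.0

def map_at_k(per_user_rankings, per_user_liked, k=10):
    scores = [average_precision_at_k(r, liked, k)
              for r, liked in zip(per_user_rankings, per_user_liked)]
    return sum(scores) / len(scores)

# Example: three simulated users, each liking a single item id
rankings = [[4, 7, 1], [2, 9, 5], [8, 3, 6]]
liked = [7, 0, 8]
print(map_at_k(rankings, liked, k=10))   # (1/2 + 0 + 1) / 3 = 0.5
```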
17
+ }
intro_8K/test_introduction_short_2405.00982v1.json ADDED
@@ -0,0 +1,17 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.00982v1",
3
+ "title": "On the Evaluation of Machine-Generated Reports",
4
+ "abstract": "Large Language Models (LLMs) have enabled new ways to satisfy information\nneeds. Although great strides have been made in applying them to settings like\ndocument ranking and short-form text generation, they still struggle to compose\ncomplete, accurate, and verifiable long-form reports. Reports with these\nqualities are necessary to satisfy the complex, nuanced, or multi-faceted\ninformation needs of users. In this perspective paper, we draw together\nopinions from industry and academia, and from a variety of related research\nareas, to present our vision for automatic report generation, and -- critically\n-- a flexible framework by which such reports can be evaluated. In contrast\nwith other summarization tasks, automatic report generation starts with a\ndetailed description of an information need, stating the necessary background,\nrequirements, and scope of the report. Further, the generated reports should be\ncomplete, accurate, and verifiable. These qualities, which are desirable -- if\nnot required -- in many analytic report-writing settings, require rethinking\nhow to build and evaluate systems that exhibit these qualities. To foster new\nefforts in building these systems, we present an evaluation framework that\ndraws on ideas found in various evaluations. To test completeness and accuracy,\nthe framework uses nuggets of information, expressed as questions and answers,\nthat need to be part of any high-quality generated report. Additionally,\nevaluation of citations that map claims made in the report to their source\ndocuments ensures verifiability.",
5
+ "authors": "James Mayfield, Eugene Yang, Dawn Lawrie, Sean MacAvaney, Paul McNamee, Douglas W. Oard, Luca Soldaini, Ian Soboroff, Orion Weller, Efsun Kayi, Kate Sanders, Marc Mason, Noah Hibbler",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "cs.CL",
9
+ "cats": [
10
+ "cs.CL",
11
+ "cs.IR"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "LLM Fairness",
15
+ "gt": "The emergence of generative Large Language Models (LLMs) has brought with it the ability to automatically generate all kinds of text. With it, a host of problems\u2014old and new\u2014have (re)emerged that affect these generated texts. The fields of Information Retrieval (IR) and Natural Language Processing (NLP) both have important roles in building new methods to improve text generation and in designing approaches to evaluate the quality of these methods. LLMs can enable new ways for people to satisfy various in- formation needs. Simple information needs (e.g., factoids) can be answered with relatively short generated responses pointing to a single source. However, when information needs are complex, nuanced, or multifaceted, a suitable response must also be more complex. They need to draw together numerous facts gathered from potentially multiple sources to completely and faithfully re- spond to the information need. We refer to this longer-form answer generation as a \u201creport\u201d on a user-specified topic. More formally, we define a report as a text that attempts to satisfy an explicitly stated information need by finding documents arXiv:2405.00982v1 [cs.CL] 2 May 2024 SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA J. Mayfield et al. in a corpus (potentially a mixture of text, images, tables, etc.) that contain relevant information, expressing that information in the text, and providing appropriate citations from the report to the supporting documents. We envision a high-quality report as the ideal response to a user with a complex task in mind, since such a report would succinctly, coherently, and verifiably cover all the information in a corpus pertinent to their information need. Note that this definition makes the framework better suited to reports that inform an analyst than to reports that generate novel analyses. Report writing can be viewed as a natural downstream task of Retrieval Augmented Generation (RAG), where faithfulness has a focus of study [52, 73, 76]. In this view, an LLM generates the report using the report request as part of the prompt and searches the document collection for relevant information that can be added to the prompt to ensure the report\u2019s accuracy. Report generation can also be thought of as summarization. From the summarization viewpoint, a report is an attributed task-based informative abstrac- tive multi-document summary (see Section 3.1.1 for a more detailed explanation of these categories). Such a report might also include portions that are not summaries at all, but are, for example, intro- ductory material or comparisons of the summarized information. We posit that all of these viewpoints are valid, and each informs evaluation for report generation. This work describes an abstract framework for evaluating automated report generation, ARGUE (Automated Report Generation Under Evaluation), that is built on top of lessons learned from prior evaluation approaches in informa- tion retrieval, summarization and text generation. It will be used by the TREC track NeuCLIR in its report generation task.1 The ARGUE framework builds a foundation for a broader research agenda in evaluating automatically generated long-form text beyond reports. Some of ARGUE\u2019s most important features are: \u2022 We use the concept of information nuggets out of the summa- rization literature to capture the content a report should contain. We express each nugget as a question together with a list of acceptable answers to that question. 
\u2022 Citations are a key report component. A citation is a pointer from a source element in the report (typically a sentence) to a target element in a document (typically the entire document). \u2022 We propose that precision and recall serve as the basis for most content-based measures. ARGUE supports precision measures over the sentences of the report, and recall measures over the information nuggets.",
16
+ "main_content": "This section defines requirements of a report evaluation system. We first define the various actors (and one non-actor) in ARGUE: Report Requester: The person requesting the report. This is the person whose purpose the report should satisfy. Report Audience: The person who will be reading the report. This is often the same as the report requester. Report Writer: The automated system that takes a report request and a document collection as inputs and produces the report. Report Request: A detailed specification of the report to be written. The report request can include: 1https://neuclir.github.io/ \u2022 User story: explains the report requester\u2019s background, situation, and report-writing philosophy, as well as a description of the audience for the report. indicates the content that the report is reaudience for the report. \u2022 Problem statement: indicates the content that the report is required to contain. : describes what is already known about the topic quired to contain. \u2022 Background: describes what is already known about the topic that need not appear in the report. specifies restrictions such as the length of the report that need not appear in the report. \u2022 Constraints: specifies restrictions such as the length of the report or a temporal window for sources. Assessor: Any person making judgments in producing evaluation materials or scoring submitted runs. Assessors include those selecting report topics, writing report requests, identifying nuggets, binding nuggets to documents in the collection, and making other judgments necessary to assign scores to reports. The evaluation we advocate has several key attributes. First, it must ensure that the report is responsive to the report request. It must ensure the report\u2019s key information presented is attested in the document collection, that the report properly cites those documents, and that the information they contain is faithfully captured by the report. It must score a report using evaluation data created by a person. While scoring may be automated, requiring the ground truth data to be human-generated helps to prevent circularity between report generation and report evaluation, thereby reducing the bias the evaluation might have toward e.g., a particular generative model. Finally, the evaluation must have the intention of reusability. Producing a reusable evaluation is challenging because of the level of interpretation required to make the required judgments. Reusability is thus often at odds with the other goals of an evaluation. The information retrieval community has thought through many of the issues underlying reusability, and we present ARGUE to try to take advantage of that experience. While it is nearly impossible to accurately claim that any evaluation component is novel, there are points of emphasis in our proposed evaluation style that we think make it stand out from other extant text generation evaluations. First is the amount and detail of the background information provided in the report request. While other evaluations have provided additional information describing inclusion criteria, in practice systems have often focused only on brief specifications. For example, a narrative giving detailed information about what should and should not be considered relevant, long a part of TREC topics, has rarely been exploited. 
The arrival of large language models that can easily incorporate such materials makes now an opportune time to focus on including ancillary documentation in a report request, not just for this style of evaluation, but for any text generation evaluation. While we advocate that these ancillary details be made explicit in the evaluation, we acknowledge that in real report-writing applications implicit knowledge might be more practical and adequate for the task. Second, until recently hallucination in text generation system output was not a major focus, primarily because generative systems were not good enough to create convincing hallucinated text. With the rise of large generative LLMs hallucination has become a common part of text generation system output; the evaluation must account for this as well. Borrowing from an IR evaluation perspective, we promote the view of nuggets as opinion, not fact. In report evaluation, nuggets play the role that relevant documents play in IR. Were document On the Evaluation of Machine-Generated Reports SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA relevance treated as fact rather than opinion, it would be virtually impossible to come to agreement on which documents were relevant to a given topic; inter-annotator agreement would be too low. Treating relevance as opinion avoids this problem. In exchange, relevance as opinion adds constraints to the evaluation, primarily that the author of the topic should be the relevance assessor. If relevance is not decided until after system submissions, that means that assessor continuity is important; assessors should be selected such that they can create topics at one time, and assess relevance at a later time, possibly months later. We advocate accepting this tradeoff for nuggets in report generation evaluation. For nuggets, the implication is that items reasonably seen by a report writer as nuggets might not be identified in advance by the assessor. A given evaluation might address this issue through a pyramid approach [61] to identify nugget importance if multiple reference reports are available. Or an evaluation might determine that nugget subjectivity will not change the preference order of meaningfully different systems and ignore it. In either case, we recommend that report sentences bearing and accurately reflecting a citation should not be penalized during scoring, precisely because they might be valid nuggets in someone\u2019s eyes. Constraints such as maximum document length can discourage intentional overgeneration of sentences that have a small chance of matching assessor nuggets. To meet these requirements, four broad questions should be asked about each report being evaluated: Q1 Does the report include the information contained in the document collection that the report requires? Q2 Does it accurately express all such information? Q3 Does it contain appropriate citations to the collection? Q4 Has the information been fitted together into a useful form? Q4 is a crucial part of any text generation evaluation. It covers such attributes as fluency [65], coherence [40, 50], consistency [32], and rhetorical structure [16, 21]. In light of this importance, it has a long history and has been studied in depth elsewhere. Thus, while we leave a place for this in the overall evaluation in ARGUE, we leave it to others to address it in light of the changing NLP landscape. 3 BACKGROUND Here we review related work on report writing and evaluation. 
3.1 Report Writing Report writing involves text generation, for which prior work on summarization and RAG provides useful perspectives. 3.1.1 Summarization. In its most general form, a summary is a document whose substantive content is based entirely on the content of other target document(s), and that is more concise than simply presenting the other document(s) in their original form would have been [55]. Summaries have been defined along several axes: \u2022 Single-document or Multi-document [47]: Is the summary built from one document (single-document), or many (multi-document)? \u2022 Extractive or Abstractive [13]: Does the summary primarily draw language from the summarized documents (extractive), or does it generate new language (abstractive)? \u2022 Indicative or Informative [37]: Does the summary help the reader to decide whether to read the summarized document(s) (indicative), or does it include enough content to make it unnecessary to read those document(s) (informative)? \u2022 Generic or Task-Based [83]: Is the summary constructed with no particular task in mind (generic), or is there a specific task that the summary is designed to support (task-based)? \u2022 Attributed or Unattributed [70]: Does the summary include citations to the summarized documents (attributed), or does it lack citations (unattributed)? \u2022 Original or Update [56, 63]: Should the summary include all information (original), or only information that the reader does not already know (update)? \u2022 Closed or Open Domain [26, 92]: Are the documents to summarize supplied (closed domain), or must the system perform a search to identify the appropriate documents (open domain)? The reports in which we are interested are attributed task-based informative abstractive open-domain multi-document summaries that may call for either original or update summaries. 3.1.2 Retrieval-Augmented Generation. Following preliminary research on furnishing transformer architectures with external knowledge sources, Lewis et al. [41] introduce RAG models as a way to improving language model performance on knowledge-intensive tasks, using an encoded Wikipedia collection as a non-parametric memory system. RAG models have since been used to improve dialogue systems [38, 77], machine translation [7, 8], and text-style transfer [44] among other applications [43]. Various approaches have been proposed to incorporate RAG models into summarization [2, 64] and other document generation tasks. One use of retrieval has been to find an example summary, sometimes with retrieved summary reranking [9], to serve as a template for the summary of another document. Retrieval can also be used to improve language model factuality. By curating large, high quality collections, generation can be grounded in supporting documents [4]. This mechanism has been shown to be particularly beneficial for rarer entities and concepts [54]. Finally, RAG enables LLMs to access information that was not available at pre-training time, such as proprietary or copyrighted information [57]. Vision-language modeling [1, 3, 51] enables multimodal retrievalaugmented generation systems that benefit from rich non-textual data [33, 60]. Different modalities facilitate the completion of different tasks, including image understanding [12, 93], open-domain VQA [33, 49], translation [20], and multimodal generation [89]. 
3.2 Evaluation As report generation includes elements of several prior tasks, including document retrieval, summarization, question answering, and retrieval-augmented generation, we briefly review salient work on those tasks that we see as related to ARGUE. 3.2.1 Information Retrieval. Evaluation of ad hoc retrieval is typically based on assessor-produced relevance judgments of documents that are selected by pooling system responses in a shared task, or sometimes based on active learning [29, 72]. Obtaining both good precision and good recall is important in real-world systems, so commonly used metrics combine both components (e.g., mean SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA J. Mayfield et al. average precision, nDCG [35]). Statistical significance testing can be performed, for example with Student\u2019s \ud835\udc61-test [78]. In a report-writing scenario, recall is important to allow assessment of how comprehensively the report responds to the report request. Precision is also important for automated report generation; reports are a type of multi-document synthesis, and incorporating content from non-pertinent documents can adversely affect the utility of the report. To create evaluation datasets for report writing, care must be taken to develop report requests that match information available in the document collection. If requests are too broadly scoped, or if too much salient information is present in the collection, it will be difficult (i.e., prohibitively expensive in human labor) to determine the full set of correct nuggets present in the collection. 3.2.2 Summarization. Evaluating automatic summarization can require significant manual effort. In 2001, NIST initiated the Document Understanding Conference (DUC) to develop evaluation methods for summarization. DUC continued until 2007 and then became the summarization track of the Text Analysis Conference (TAC) through 2014. The DUC/TAC summarization evaluations were notable for having people write summaries manually, and using those \u201cmodel\u201d summaries (or \u201creference texts\u201d) as the jumping-off point for metric development. The DUC evaluation procedure measured coverage (that is, recall) through a pairwise comparison between two summaries: the model summary and a \u201cpeer\u201d summary (which could be a generated summary or another model). The model was divided into Elementary Discourse Units (EDUs), essentially clauses [45, 80] while the peer was split on sentence boundaries. An assessor would match each EDU with the sentences in the peer that contained that information, and indicate how much of the meaning of the EDU was expressed in the corresponding matched peer units. Unmarked sentences in the peer were then marked for relevance. Harman and Over [31] found that model summaries from different authors were markedly different, and that assessors also did not agree on model unit coverage ratings. Work also began around DUC 2003 on automatic metrics, specifically comparing the model summary to the peer using word n-gram statistics. Lin and Hovy [48] looked at the BLEU measure developed for machine translation, and found that recall on word unigrams correlated better with the DUC assessments than full BLEU scoring, which incorporates longer n-grams. Following that, they developed ROUGE [46], a recall-oriented metric similar to BLEU. 
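As a reference point for the retrieval-style measures mentioned above, the sketch below computes nDCG for a single ranking and compares two systems' per-topic scores with a paired Student's t-test. The relevance grades and score lists are toy values, not data from any evaluation cited here.

```python
# nDCG@k for one topic plus a paired significance test across topics.
import math
from scipy.stats import ttest_rel

def dcg(relevances):
    return sum(rel / math.log2(rank + 1) for rank, rel in enumerate(relevances, start=1))

def ndcg(ranked_relevances, k=10):
    ideal = sorted(ranked_relevances, reverse=True)
    ideal_dcg = dcg(ideal[:k])
    return dcg(ranked_relevances[:k]) / ideal_dcg if ideal_dcg > 0 else 0.0

# Per-topic scores for two systems (toy numbers), compared with Student's t-test
system_a = [0.61, 0.42, 0.75, 0.33]
system_b = [0.58, 0.40, 0.70, 0.35]
t_stat, p_value = ttest_rel(system_a, system_b)
```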
ROUGE has a number of variants depending on how tokens are parsed, how n-grams are selected and assembled, and how scores are aggregated across summaries to obtain a system score. A study by Graham [28] explored a large grid of ROUGE parameters in comparison with BLEU using data from DUC-2004, and found that BLEU and ROUGE-2 (2-grams, stemmed, stopwords removed, computing an average of precision scores) had the highest correlation with human assessment. ROUGE has been used to evaluate summarization [46], Long-Form Question Answering (LFQA) [39, 88] and RAG [41]. ROUGE has well-documented problems as an evaluation metric in e.g., summarization [28] or LFQA [39]. From our perspective, its main problems as an evaluation metric for report generation are its requirement for reference reports (making it expensive), its poor robustness to hallucination (making it inaccurate), and that it does not handle citations (making it incomplete). In 2004, Nenkova and Passonneau [61] proposed the \u201cPyramid Method\u201d for evaluation. Since comparing generated summaries against a model is subject to the inherent variation in model summaries, they propose to abstract the model summaries into Summary Content Units (SCUs). SCUs are clauses that appear (with more or less the same meaning) in multiple model summaries. They are weighted by the number of model summaries that express them. Figure 1 shows an example of two SCUs from parts of four model summaries. In informal usage, SCUs have been referred to as \u201cnuggets.\u201d Rather than being a clause, a nugget might be a description of a concept along with how it was expressed in the models.2 Subsequent research on the pyramid method has focused on automatic creation and alignment of SCUs. For example, Gao et al. [24] performs a dependency parse of the model summary, then represents individual clauses using vector embeddings. Nugget fuzziness can be addressed by using hoppers [59, 79] to bin together differing descriptions that refer to the same item. The main difficulties in using nuggets for report evaluation are that they treat hallucinations (contradictions and misinformation) exactly the same as content that has no matching nugget, and that they do not support citations. We have incorporated nugget-based evaluation into ARGUE, tying nuggets to reports not directly, but rather through cited documents. 3.2.3 Question Answering. Factoid Question Answering (QA) evaluation typically consists of using accuracy or \ud835\udc391 against a gold standard answer (or answer set) [15, 69, 84]. This type of evaluation has many advantages, as it can be easily automated and is simple to annotate. Long-form QA [19, 62] is evaluated similarly to summarization, typically with automated metrics like ROUGE, model-based metrics like BERTScore [91] or BLEURT [75], or human evaluation [39, 88]. 3.2.4 Retrieval-Augmented Generation. Early retrieval augmented generation systems have been evaluated using task-specific metrics on end-to-end tasks. For example, in the context of question answering, exact match and \ud835\udc391 metrics have been used [30, 41]. For summarization, ROUGE and BERTScore on reference summaries are common [26]. These approaches have two limitations: they only measure ability to complete end tasks, and thus cannot assess intermediate stages or evaluate generation across multiple dimensions; and they are not well-suited to capture failures that can be introduced by current generative models [27]. 
More recently, techniques have been proposed to more holistically evaluate RAG systems. Gienapp et al. [25] introduce a theoretical framework for evaluating ad hoc generative retrieval. Chen et al. [11] focus on robustness of RAG systems against various perturbations. Thakur et al. [82] benchmark hallucinations and the ability of RAG systems to identify relevant information for 18 languages. Others have introduced benchmarks to measure the ability of RAG systems to provide citations [6, 23, 53, 90]. While not specifically designed for RAG applications, metrics designed to evaluate factuality (e.g., FactScore [58]) or faithful manipulation of long inputs (e.g., BooookScore [10]) can complement application-specific evaluation frameworks. Most approaches to automated evaluation aim to estimate the effectiveness of RAG systems across desirable dimensions (e.g., faithfulness, answer relevance, and context relevance). Techniques include prompting LLMs to evaluate generated summaries [76], and fine-tuning lightweight models on synthetic data [73]. Downstream applications, such as question answering, can also be used to evaluate the effectiveness of RAG systems [74]. (Footnote 2: See https://tac.nist.gov/publications/2010/presentations/TAC2010_Summ_Overview.pdf for an example of SCUs as nuggets.) Figure 1: A pair of example Summary Content Units. Four semantically similar sentences from four different model summaries are grouped into two SCUs highlighting the key facts from those sentences. From Nenkova and Passonneau [61]. A1 In 1998 two Libyans indicted in 1991 for the Lockerbie bombing were still in Libya. B1 Two Libyans were indicted in 1991 for blowing up a Pan Am jumbo jet over Lockerbie, Scotland in 1988. C1 Two Libyans, accused by the United States and Britain of bombing a New York bound Pan Am jet over Lockerbie, Scotland in 1988, killing 270 people, for 10 years were harbored by Libya who claimed the suspects could not get a fair trail in America or Britain. D2 Two Libyan suspects were indicted in 1991. SCU1 (w=4): two Libyans were officially accused of the Lockerbie bombing A1 [two Libyans]1 [indicted]1 B1 [Two Libyans were indicted]1 C1 [Two Libyans,]1 [accused]1 D2 [Two Libyan suspects were indicted]1 SCU2 (w=3): the indictment of the two Lockerbie suspects was in 1991 A1 [in 1991]2 B1 [in 1991]2 D2 [in 1991.]2 4 PROPOSED FRAMEWORK This section describes our conceptual evaluation framework for automated report generation. We name this abstract framework ARGUE (Automated Report Generation Under Evaluation) for convenience. We model the information need as a report request, which is analogous to the topics in TREC-style ad hoc retrieval evaluation. The report writer is required to respond with a verifiable report, with citations to its information sources. As in retrieval system evaluation, we restrict the system to citing documents in a pre-defined document collection instead of arbitrary information on the web. The framework is thus limited in the range of writing types it can evaluate. In particular, it does not currently support evaluation of reported information that is not explicitly supported by statements in the document collection. This restriction allows experiments that compare systems across research studies and over time. 4.1 Framework Overview In ARGUE, creating a report generation benchmark has three phases. The first phase creates evaluation data. 
We believe that systems should be evaluated over human-curated data so that they are ranked on effectiveness rather than alignment to machine output. System input comprises a document collection and report requests that describe information needs. The second phase distributes these inputs to participants. Generated reports are expected to be responsive to the information needs statements. A valid report will cite source documents that contain the reported information. Citations are a key attribute of this framework. Other report stylistic requirements might include, for example, a length limit to encourage systems to express information succinctly. If the document collection is in a language different from the report request, or is multilingual, the report may be required to be written in the language of the report request. We envision that the input data will be distributed as part of an evaluation campaign, but this is not required. Assuming an evaluation campaign, generated reports will be received and evaluated by assessors; however, to support reusability, key components will be replaced by automated mechanisms to allow future systems to be scored using the same evaluation data. The third phase scores reports. Since the goal of this framework is to evaluate systems, each system will need to generate multiple reports based on the various report requests. Report scores will be aggregated to assign system scores. Required information in reports will be expressed by assessors in the form of nugget questions and answers. Answers will be attested in the collection and tied to particular documents that attest those answers, thereby tying the nuggets to supporting documents. During scoring, report citations will be used to determine which nuggets are described in the report. Thus there will be a notion of recall over nuggets, which is a new feature in RAG evaluation. Citations will also be used to ensure that non-required information that is included in the report (facts that are not part of the necessary nuggets) is attested in the collection. A precision score over report segments measures how well the report adheres to information found in the collection. This allows hallucination to be addressed, whether it be false information or true information that is unattested. While traditional recall and precision are set measures, they can be modified to account for some nuggets having greater weight than others or to allow report segments to bear multiple citations. 4.2 Evaluation Inputs and Outputs 4.2.1 Evaluation Inputs. The first system input is the collection of items that will be used as source material for the retrieval task. While these items could be documents written in one or more languages, it is also possible for the items to be images, videos, audio files, or some combination. For the reminder of this paper, we will refer to the items as documents. Because of the importance of having citeable units, the document collection will be divided into target elements, which are typically documents, but can be smaller units of text such as passages, paragraphs, or sentences, depending on the needs of the evaluation. In this paper we will assume that an entire document has been selected as the target element. Segmentation into target elements should be done once and distributed with the collection to ensure that all systems are evaluated on an even footing. The document collection should include documents that contain sufficient information relevant to the desired report. 
Following most information retrieval-based evaluations, documents are assumed to be truthful; verifying the truthfulness of document contents is orthogonal to and beyond the scope of the framework. Instead, the framework focuses on citation, requiring that all reported information cites supporting documents from the evaluation document collection. Information that cites SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA J. Mayfield et al. a document incorrectly or that is missing a required citation is appropriately penalized. The second system input is a set of assessor-developed information needs referred to as report requests. A report will be generated for each report request. Report requests are more extensive and subtler than information needs for previous IR or summarization tasks. See Section 2 for the full report request description. Creation of report requests is a complex process that tries to satisfy multiple, sometimes conflicting goals. It bears many similarities to topic creation for a TREC-style IR evaluation [85]. In topic identification for ARGUE, the topic creator must be familiar both with information retrieval, and with any special requirements of the document collection. For example, a bilingual document collection would require that the topic creator be at least bilingual. A document collection on medical topics would require topic creators who were well-versed in the medical domain. In addition, an IR evaluation typically tries to control the number of documents that are relevant to the topic being developed, in part because doing so can improve reusability. An ARGUE evaluation must control not only the number of documents that contain relevant information, but also the number of nuggets and the number of target elements that align to each nugget. Having too many items in any of these categories leads to high assessment costs; having too few leads to higher score variance and lower ability to distinguish systems. That said, assessors need not capture all information that might satisfy the information need. It is up to the assessor to determine what, in their opinion, is the essential information. 4.2.2 Evaluation Output. The report will be generated by an automated report writer. Reports produced by the report writer should satisfy the constraints listed in Section 2. For the purposes of this framework, we make a convenience assumption that the report requester and the report audience are the same. As an example, the assessor could have the role of analyst, with the purpose of the report being to support the process of drawing analytic conclusions. The generated report will be segmented into report segments, either manually or automatically. For convenience, we will assume in this work that a report segment is a sentence, but it could be some other well-defined portion of report text. Finer-grained segments may enable more nuanced distinctions. Given that precision scores operate over report segments, and given that automated sentence segmentation is imperfect, we believe that it is important that the report writer control the segmentation. Thus, each report must be segmented into sentences by the report writer prior to evaluation. The evaluation should include guidelines on sentence segmentation. The report must also include appropriate citations, pointers from source elements (sentences) to target elements (documents). Each report sentence will bear zero or more citations, as described below. 
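To make the moving pieces concrete, the sketch below shows one possible in-memory representation of the objects described in this section: report requests, nuggets with attested answers, and segmented reports bearing citations. It is our own illustration rather than an artifact of ARGUE; every class and field name is an assumption made for the example.

```python
# Illustrative data model for the evaluation inputs/outputs described above.
# A sketch of one possible representation, not part of ARGUE itself.
from dataclasses import dataclass, field

@dataclass
class ReportRequest:
    request_id: str
    statement: str              # the requester's information need
    language: str = "en"        # required report language

@dataclass
class Nugget:
    question: str
    # each acceptable answer maps to the ids of target elements that attest it
    answers: dict[str, set[str]] = field(default_factory=dict)

@dataclass
class ReportSegment:
    text: str                   # one sentence, segmented by the report writer
    citations: list[str] = field(default_factory=list)  # cited target-element ids

@dataclass
class Report:
    request_id: str
    segments: list[ReportSegment] = field(default_factory=list)
```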
4.3 Citations Each substantive sentence of a submitted report must cite the document target element(s) from which it was derived. Which sentences are substantive may vary according to the goals of the evaluation. A citation then is a pointer from one report segment to one target element. A given report segment may bear more than one citation, and a given target element may be cited more than once. By traversing such citations the evaluation system can map sentences in the report to documents and then to nuggets. Note that the report writer must know nothing about the nuggets that will be used to evaluate the report; they are known exclusively to the assessor. The assessor may choose to require just one citation per sentence, or, if completeness is to be measured, all valid and salient citations. The validity of a citation has three components. First, the report segment must be supported by the target element. That is, reading the target element should verify the sentence\u2019s accuracy. In a manual evaluation, the assessor decides whether a given sentence is supported by the target element. In an automated evaluation, support of a report segment for a target element could be measured in several ways. The simplest is a semantic match, testing whether the semantics of the two texts match. A number of such automated metrics are available, such as Sentence-BERT [71]. A more accurate but harder measurement would be whether the target element entails the report sentence. Entailment has been a component of evaluation sets such as GLUE [87] and SUPERGLUE [86], and good solutions to the problem have been identified [67]. Second, at the same time, the sentence bearing the citation should be responsive to the report request. This means that the cited target element is linked to a nugget, and that the report segment provides an answer to one of that nugget\u2019s questions (see below for nugget questions). Thus the acceptability of a nugget answer depends on which document the report cites. Again, the assessor will determine whether the report segment answers a nugget question. One way to automate assessment of responsiveness might be to use an automated QA system to find answers to a nugget question, then use a semantic matching system to determine whether the report segment matches one of those answers. Third, some evaluations will also assess whether a talented author in the field of the report would include that citation if they had written the report. An evaluation that simply wants all substantive sentences to bear a citation will omit this component; a more nuanced evaluation of reports in their final form could include it. In either case, judgments will need to be made on which sentences require a citation. Cases where no citation is required include introductory sentences, background sentences that reflect the problem statement, and sentences that summarize other cited sentences. If we are interested only in nugget recall, we can safely ignore whether sentences ought to have citations. But if we are interested in precision, we would not like to penalize a report for containing such non-citing sentences (except perhaps when measuring the quality of the report as a whole). To handle non-citing sentences, it must be determined whether the sentence should have a citation. If a citation is not needed, the report can be scored as if the identified sentences were not present in the report. 4.4 Nuggets The proposed evaluation is centered on nuggets. 
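Before nuggets are defined in detail, here is a minimal sketch of the simplest automated version of the support check from Section 4.3 above: a semantic match between a report sentence and the target element it cites. It assumes the sentence-transformers library; the model name and threshold are arbitrary choices, and an entailment (NLI) model would be a stricter drop-in replacement for this check.

```python
# Hedged sketch of an automated "support" check: does the cited target element
# semantically match the report sentence? Model name and threshold are assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def is_supported(report_sentence: str, target_element: str, threshold: float = 0.6) -> bool:
    embeddings = model.encode([report_sentence, target_element], convert_to_tensor=True)
    similarity = util.cos_sim(embeddings[0], embeddings[1]).item()
    return similarity >= threshold
```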
A nugget is a piece of information that should appear in the report and that could be expressed in a variety of ways in the document collection. 4.4.1 Nugget Definition. A nugget in this framework is a combination of a question and one or more answers to that question that address some aspect of the report request and that are expressed in at least one target element in the collection. Nuggets must be expressed at an appropriate level of granularity for the desired On the Evaluation of Machine-Generated Reports SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA report. If the report answers such a question using appropriate citations into the document collection, we deem it to have succeeded in identifying that nugget; evaluation metrics (described in Section 4.5 below) can then use statistics over the correctly answered, incorrectly answered, and unanswered nugget questions to produce a score for a given report. Answers to nugget questions should express the information that a reasonable person would expect in a report written in response to the report request. The concept of nuggets arose from summarization evaluation [61]. New in this framework is the expression of nuggets as questions with allowable answers. We are interested in evaluation data that can be used to automatically evaluate systems, much like relevance assessments can be used to evaluate an IR system even decades after their creation. We believe this formulation will be helpful in automating report generation evaluation. Nuggets need not capture everything any report responding to the report request might legitimately include. Given that reports by necessity will be shorter than the source documents, the assessor will determine the required information and express that as nuggets, reinforcing the idea that nuggets are opinions instead of facts. The set of answers to a nugget question are drawn from all the answers supported by the document collection. Questions and answers will be in the request language even if, for example, the source information comes from an image or is in a different language. 4.4.2 Nugget Identification. Nuggets are identified by the assessor. Nuggets must be both relevant to the report request and attested in the document collection. In practice, the assessor could either look through retrieved documents to identify important aspects of the topic from the target elements, or identify nuggets a report on the topic ought to include, then search the document collection to see which are attested. A combination of both methods could be used. To ensure reproducibility and enable evaluating recall, it is desirable to identify most (or all) nuggets that should be included. In addition to identifying the set of nuggets for a report request, the assessor must also identify each target element in the document collection that supports an answer to each nugget. To do so, the assessor must have both a way to identify target elements that contain nugget-supporting information, and a way to bind target elements to nugget answers. The former problem is similar to that faced by many IR collection developers of ensuring that all or most relevant items have been discovered. Full collection annotation is not practical for large collections. Three main techniques for identifying relevant documents are interactive search, pooling [36, 81, 94], and active learning [14, 42, 68]. Interactive search is simply having a person use any desired tool to identify relevant documents. 
In pooling, the assessor judges only documents found in an aggregate of several systems\u2019 top results. Either assessors must have access to systems that together are likely to find most of the relevant documents, or this step must wait until task participants have submitted their runs. It is usually desirable to augment the pools manually using interactive search. In active learning, a classifier identifies relevant documents. Each time the assessor judges a document, the classifier is retrained to take the new judgment into account. Any or all of these techniques might be used to restrict the number of documents that must be examined during nugget identification. The second task, assigning target elements to nuggets, is more challenging. We highlight three challenges here. First is withinnugget variation. For example, one nugget answer might be a superset of another, such as \u201cJune\u201d versus \u201c26 June.\u201d If the more general answer is acceptable, the more specific answer must be included in the answer set to distinguish it from an incorrect answer such as \u201c12 June.\u201d The summarization community introduced hoppers [79] to capture commonality across descriptions that differ in some details. For example, two descriptions of a particular natural disaster might indicate different numbers of casualties; perhaps the descriptions were written at different times or based on two different information sources. Whether hopper-style conflation is used for a given evaluation depends on the desired report type. An overall report on the natural disaster might use hoppers; a report on how different news services covered the disaster might need to distinguish differing descriptions. As with decisions on nugget creation, if hoppers are used, the choice of hoppers is left to the assessor. A second challenge is a single report segment or target element expressing information about more than one nugget. This is handled through multiple citations borne by a single report sentence, and/or multiple mappings between target elements and nuggets. This complicates the bookkeeping needed to give appropriate credit to each nugget, but poses no theoretical problems. A third challenge is a single nugget requiring multiple report sentences or target elements to be fully captured. This challenge arises because nugget question/answer pairs lend themselves well to simple facts expressed in the report, but are less well suited to identifying complex information. Nonetheless we believe that the general framework will be extensible to complex nuggets whose expression is distributed across several report sentences or target elements by allowing complex questions answered by Boolean combinations of target elements, and by exploiting recent research in question answering [17, 18]. 4.4.3 Practical considerations. The following considerations are not requirements of the framework, but instead practical tips we have gleaned working to instantiate this and similar evaluation frameworks. First, we believe that an assessor must be familiar both with IR concepts and any special requirements of collection and evaluation topic area (such as the aforementioned bilingual or medical settings). Second, it may be advantageous for an assessor to produce a gold standard report to help assemble the information that should be in a satisfactory report. Nugget questions can then be composed from that report. Creating a gold standard report also enables a ROUGE evaluation for comparison. 
Third, IR evaluations usually limit the number of relevant documents to simplify and reduce the cost of evaluation. Report evaluation would also like to control the number of nuggets and document mappings to ensure the evaluation can distinguish good and bad systems; however, this can eliminate from consideration practical use cases that would otherwise be in scope for the task. This tradeoff has traditionally been considered worthwhile, but it should be remembered that it is a tradeoff. Fourth, LLMs can call on memorized knowledge not found in the document collection. Often the LLM training collection is unknown. If the LLM has not seen the evaluation corpus, it will need to rely on hallucination, which will negatively affect evaluation data quality. Finally, while finding all potential nuggets is unnecessary SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA J. Mayfield et al. Figure 2: Report sentence scoring. Answers to eight yes/no questions dictate an outcome for each input sentence. + indicates that the sentence is rewarded, that it is penalized, and 0 that it does not affect the overall report score. Report Request: I am a Hollywood reporter writing an article about the highest grossing films Avengers: Endgame and Avatar. My article needs to include when each of these films was considered the highest grossing films and any manipulations undertaken to bring moviegoers back to the box office with the specific goal of increasing the money made on the film. Gold Standard Report: Avatar originally became the highest grossing film in 2010 [D1]. Avengers: Endgame replaced Avatar as the highest grossing film in 2019 [D1, D2, D3, D8, D10, D12, D13]. It overtook Avatar by adding an additional six minutes of footage to the film to draw viewers back to the movie theater [D4]. Two years later Avatar was re-released in mainland China [D1, D2, D5, D6, D7, D8, D9, D10, D11]. It earned a sufficient amount of money to retake the title of highest-grossing film in 2021 [D5, D11, D6, D7, D2, D8, D9, D1]. Nuggets as Questions and Answers: (1) When did Avatar first become the highest grossing film? \u2022 2010 [D1] (2) When did Avengers: Endgame become the highest grossing film? \u2022 2019 [D1,D2, D3, D8, D10, D12, D13] \u2022 July 2019 [D3, D12, D13] \u2022 July 20, 2019 [D3] \u2022 July 21, 2019 [D13]\u2020 (3) What did studio executives do to the Avengers: Endgame film to become the highest grossing film? \u2022 Added six minutes of additional footage [D4] \u2022 Added footage [D4] \u2022 Added 6 minutes [D4] \u2022 Additional footage at the end of the film [D14] (4) When did Avatar retake the title of highest grossing film? \u2022 2021 [D1, D2, D6,D7,D9,D11] \u2022 March 2021 [D1, D6 ,D7, D9, D11] \u2022 March 13, 2021 [D1, D6, D9] \u2022 Two years after the Avengers: Endgame became the highest grossing film [D2] (5) What event led to Avatar becoming the highest grossing film? \u2022 Re-release in Mainland China [D1, D2, D5, D6, D7, D8, D9, D10] \u2022 Re-release in China [D1, D2, D5, D6, D7, D8, D9, D10] \u2022 Release in Mainland China for a second time [D1, D2, D5, D6, D7, D8, D9, D10] \u2022 Returned to theaters in China [D11] \u2020In Taiwan Time Figure 3: Example evaluation material for a report request. since nugget worthiness is an assessor\u2019s opinion, finding all answers to nugget questions is important for collection reusability, especially as nuggets can only be answered using documents known to have the answer. 
If nuggets are generated prior to submissions, it might be worth pooling submissions to identify more nugget answers. 4.5 Metrics Many metrics can be used to assess automatically generated reports. Two common IR measures are recall and precision; we focus on these here because they are well-known, easy to calculate, and highlight most of the important scoring issues we face in generated report evaluation. Recall and precision each require a numerator and a denominator. The recall denominator is the number of distinct assessor-identified nuggets; its numerator is the number of correctly reported nuggets (those supported by one or more of the necessary supporting citations in the report). So recall tells us how many of the concepts central to the report were actually reported on. Precision must account for phenomena below the nugget level, so we calculate it over report segments (which again we assume to be sentences). The denominator is the number of report sentences, minus any sentence that does not require a citation or that properly cites information not part of any nugget. The numerator is the number of sentences deemed to bear accurate citations, plus any sentences specified by the evaluation that correctly bear no citation. Figure 2 describes a typical approach to sentence evaluation. The rules embedded in the flowchart are not hard-and-fast, but are likely adequate for many evaluations under this framework. In the flowchart, \u201c+\u201d means the sentence is correct and should be rewarded; \u201c\u2013\u201d means that it is incorrect and should be penalized; and \u201c0\u201d means that the sentence is not included in the score. The flowchart shows how each sentence of the report can be scored. We propose these principles to guide sentence scoring: \u2022 Sentences with citations whose target element does not support them should be penalized (Outcome#1 in Figure 2). \u2022 Properly cited and attested sentences that are not relevant to the report should be ignored (Outcome #2). On the Evaluation of Machine-Generated Reports SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA (1) Avengers: Endgame and Avatar are two of the highest-grossing films in history. Outcome: #4 (2) Avengers: Endgame surpassed Avatar as the highest-grossing film globally, with a box office revenue of $2.787 billion. [D12] Outcome: #2 (3) This achievement was officially announced in July 2019. [D13] Outcome: #3, Nugget: 2 (4) To bring moviegoers back to the box office and increase the revenue of Avengers: Endgame, Marvel Studios implemented several strategies. Outcome: #4 (5) One of these strategies was re-releasing the film with additional content. Outcome: #6 (6) Marvel Studios President Kevin Feige confirmed plans for a re-release, including previously unseen footage, to generate renewed interest and boost ticket sales. [D101] Outcome: #2 (7) The new version of the film did not include any re-edited scenes within the main feature but instead featured additional scenes after the end credits, including a tribute. [D14] Outcome: #3, Nugget: 3 (8) This approach aimed to entice both existing and new audiences to revisit the theaters and contribute to the film\u2019s box office success. Outcome: #6 (9) Similarly, Avatar also made a comeback to reclaim its position as the highestgrossing film. Outcome: #4 (10) The film was re-released in China, where it achieved a single-day box office revenue of $8 million. 
[D2] Outcome: #3, Nugget: 5 (11) The re-release in China was particularly successful, as it generated significant revenue and propelled Avatar back to the top of the global box office charts. [D1] Outcome: #3, Nugget: 5 (12) The film\u2019s re-release in China was attributed to the impact of the COVID-19 pandemic, which led to the delay of many new releases and created an opportunity for Avatar to regain its popularity. [D2] Outcome: #3, Nugget: 5 (13) In conclusion, Avengers: Endgame and Avatar both held the title of the highestgrossing film globally at different points in time. Outcome: #4 (14) Marvel Studios strategically re-released Avengers: Endgame with additional content to attract audiences and boost ticket sales. Outcome: #6 (15) Avatar capitalized on the re-release trend in China, taking advantage of the pandemic-induced delay of new releases. Outcome: #6 (16) These manipulations aimed to increase the films\u2019 box office revenue and solidify their positions as record-breaking blockbusters. Outcome: #4 Figure 4: Example report evaluation result. \u2022 A sentence that cites a target element supporting a nugget that the sentence fulfills should be rewarded (Outcome #3). \u2022 Sentences that neither have nor require citations should not affect the score (Outcome #4). \u2022 Sentences that should contain a citation but do not should be penalized the first time their claim occurs (Outcomes #5, #6). \u2022 Sentences that claim the absence of a fact should be rewarded or penalized depending on whether the absence is explicitly stated as a nugget (Outcomes #7, #8). For this, a nugget can be created for information that the report request explicitly asks for but is not attested in the collection. Most sentences will bear either zero or one citation. A sentence can bear multiple citations, either because the same information is multiply attested in the collection, or because it is complex. Sentences that cite multiple target elements supporting the same nugget are treated as a single citation. Alternatively, the evaluation may macroaverage citation scores if all sentences are to be given equal weight, or microaverage them if the focus is on citation accuracy. Support by multiple report sentences counts only once per nugget. To automatically score a report, each decision diamond in Figure 2 must be automatable. Some are trivial, such as \u201cHas citation;\u201d others are less so. We believe current technology could do a reasonable job with most of the tasks. For instance, entailment models can likely determine if a document supports a report sentence. Note that originality is not a component of this evaluation; preventing plagiarism, while important, is a specialized area with its own metrics and evaluations [5, 22, 34, 66]. 5 EXAMPLE ASSESSMENT Figure 3 shows an example of the two items required to do manual or automatic assessment. The report request identifies the desired report content. The nugget questions and answers show how each answer is linked to the documents that attest to that answer. The Gold Standard Report that is shown is optional, but a useful intermediate step for the assessor between source document search and nugget question creation. Figure 4 is a report generated in response to the example in Figure 3, broken into report segments to illustrate manual evaluation. Each Outcome: # indicates how the sentence would be categorized using the flowchart in Figure 2. For Outcome: #3, the nugget answer in the sentence is also recorded. 
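As a rough illustration of how such per-sentence outcomes roll up into the Section 4.5 scores, the sketch below computes segment precision and nugget recall; it is our own illustration (function and argument names are assumptions), with counts chosen to match the worked example that follows.

```python
# Sketch: aggregating per-sentence outcomes (Figure 2) into nugget recall and
# segment precision (Section 4.5). Names and signatures are our own assumptions.
def score_report(sentence_effects, reported_nuggets, all_nuggets):
    # sentence_effects: one of "+", "-", "0" per report sentence, per Figure 2
    rewarded = sentence_effects.count("+")
    penalized = sentence_effects.count("-")
    scored = rewarded + penalized            # "0" sentences do not affect the score
    precision = rewarded / scored if scored else 0.0
    recall = len(set(reported_nuggets)) / len(all_nuggets) if all_nuggets else 0.0
    return precision, recall

# Matches the worked example below: 5 rewarded sentences, 11 ignored, none
# penalized; nuggets 2, 3, and 5 reported out of 5.
print(score_report(["+"] * 5 + ["0"] * 11, [2, 3, 5], [1, 2, 3, 4, 5]))  # (1.0, 0.6)
```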
In this example, no sentence received a negative assessment because there were no outcomes of #1 or #7. Therefore, precision is 5/(16 \u221211) = 1.0. One nugget was repeated in Lines 10, 11, and 12, so recall is 3/5 = 0.6. Finally, for both Lines 2 and 6, the assessor would have needed to refer to the original source document to assess the statement, since the information in the sentence had not been captured in a required nugget. Assessing such sentences will likely be the most time-consuming part of manual assessment. 6 CONCLUSIONS LLMs have enabled remarkable new ways to satisfy information needs. Rather than simply providing \u201c10 blue links\u201d or an extracted answer snippet, LLMs have the potential to peer into documents to identify information salient to a topic and compile it into highly coherent, long-form text responses. We envision these generated reports will be a central way that some users will satisfy complex, nuanced, or multifaceted information needs. Because we believe that current evaluation methodologies for these report-generation systems are insufficient to maintain quality and guard against known defects, we felt the need for a report evaluation framework based on core principles \u2014 responsiveness to the information need, grounding and verifiability in documents, completeness, and reusability \u2014 while deliberately omitting aspects of report generation that current systems do not seem to struggle with (e.g., coherence, structure, etc.). Our new perspective on report generation evaluation is IR-centric, pulling together triedand-true notions of relevance, recall, and user modeling. We have also demonstrated an instantiation of our framework that could be applied either manually or with automatic systems. Evaluation methodologies inform progress and direct attention. We hope our proposed generated report evaluation framework will spur progress in the development of next-generation information access systems that can provide responsive, complete, and verifiable information on complex, nuanced, and multifaceted topics. WHOSE PERSPECTIVE This paper represents the perspectives of a group of industry and academic researchers at a variety of career stages. SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA J. Mayfield et al."
17
+ }
intro_8K/test_introduction_short_2405.00988v1.json ADDED
@@ -0,0 +1,18 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.00988v1",
3
+ "title": "Context-Aware Clustering using Large Language Models",
4
+ "abstract": "Despite the remarkable success of Large Language Models (LLMs) in text\nunderstanding and generation, their potential for text clustering tasks remains\nunderexplored. We observed that powerful closed-source LLMs provide good\nquality clusterings of entity sets but are not scalable due to the massive\ncompute power required and the associated costs. Thus, we propose CACTUS\n(Context-Aware ClusTering with aUgmented triplet losS), a systematic approach\nthat leverages open-source LLMs for efficient and effective supervised\nclustering of entity subsets, particularly focusing on text-based entities.\nExisting text clustering methods fail to effectively capture the context\nprovided by the entity subset. Moreover, though there are several language\nmodeling based approaches for clustering, very few are designed for the task of\nsupervised clustering. This paper introduces a novel approach towards\nclustering entity subsets using LLMs by capturing context via a scalable\ninter-entity attention mechanism. We propose a novel augmented triplet loss\nfunction tailored for supervised clustering, which addresses the inherent\nchallenges of directly applying the triplet loss to this problem. Furthermore,\nwe introduce a self-supervised clustering task based on text augmentation\ntechniques to improve the generalization of our model. For evaluation, we\ncollect ground truth clusterings from a closed-source LLM and transfer this\nknowledge to an open-source LLM under the supervised clustering framework,\nallowing a faster and cheaper open-source model to perform the same task.\nExperiments on various e-commerce query and product clustering datasets\ndemonstrate that our proposed approach significantly outperforms existing\nunsupervised and supervised baselines under various external clustering\nevaluation metrics.",
5
+ "authors": "Sindhu Tipirneni, Ravinarayana Adkathimar, Nurendra Choudhary, Gaurush Hiranandani, Rana Ali Amjad, Vassilis N. Ioannidis, Changhe Yuan, Chandan K. Reddy",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "cs.CL",
9
+ "cats": [
10
+ "cs.CL",
11
+ "cs.LG",
12
+ "I.2.7; I.2.m"
13
+ ],
14
+ "label": "Original Paper",
15
+ "paper_cat": "LLM Fairness",
16
+ "gt": "Large Language Models (LLMs) have demonstrated human-level performance in text understanding and generation, but their application to text clustering tasks is underexplored. We observed that powerful closed-source LLMs (such as GPT-4 (Achiam et al., 2023) and Claude (Anthropic, 2023)), known for their instruction-following abilities, can provide high-quality clusterings through prompting. However, these models become unaffordable when clustering a large number of sets, due to their high costs. To overcome this limitation, we aim to develop a scalable model based on an open-source LLM that can efficiently and effectively perform the clustering task. We study this problem of transferring the knowledge of clustering task from a powerful closed-source LLM (LLMc) to a scalable open-source LLM (LLMo) under the framework of supervised clustering, where the goal is to learn to cluster unseen entity subsets, given training data comprising several examples of entity subsets with complete clusterings1 (See Figure 1). In this work, we focus particularly on entities described by text. This problem has ap- plications in various domains including e-commerce, news clustering, and email man- \u2217Corresponding author: [email protected] 1Complete clustering of a set refers to a clustering in which every entity in the set is assigned to a cluster. 1 arXiv:2405.00988v1 [cs.CL] 2 May 2024 Preprint. Under review. agement, among others (Finley & Joachims, 2005; 2008; Haider et al., 2007). How- ever, deep learning approaches for solving the supervised clustering problem remain largely unexplored. Existing methods overlook the specific context provided by an en- tity subset and often rely on the latent structural loss function (Fernandes et al., 2012) which involves the sequential computation of maximum spanning forests. In our work, Training data Model Test sample Prediction ? sam pl e 1 sam pl e 2 sam pl e N Figure 1: Illustration of the supervised cluster- ing problem: Each training sample contains a subset of entities along with the correspond- ing ground truth clustering. Given a test sam- ple, which is an unseen entity subset, the goal is to cluster the entities in the test sample. In a sample, color denotes a cluster, and shape denotes an entity. we propose an LLMo-based solution called CACTUS (Context-Aware ClusTering with aUgmented triplet losS) that captures con- textual information, introduces an en- hanced loss function, and incorporates a self-supervised clustering task. The context of an entity subset refers to the unique circumstances that relate the spe- cific entities occurring in the subset. For example, consider the task of clustering a user\u2019s monthly purchases. A purchase of \u2018magnetic tape\u2019 could signify various inten- tions, such as for a science project or picture mounting. Examining the user\u2019s other pur- chases could provide the necessary context to help us determine the use case and place the entity in the appropriate cluster. How- ever, most existing text clustering methods obtain a single embedding for each entity using a language model (Ahmed et al., 2022; Barnabo et al., 2023), thus ignoring the context. In contrast, our model computes entity embeddings that are dependent on the context or entity subset, which allows the model to identify entities with shared themes within the subset. 
The proposed method takes the entire entity subset as input to the LLM and captures inter-entity interactions using a scalable attention mechanism, as traditional full attention over all entities in a subset can become computationally expensive as subsets grow large. Specifically, in each Transformer layer, for each entity, we compute a single representative embedding that participates in inter-entity attention. Previous methods for supervised clustering applied the latent structural loss to pairwise entity features that are either hand-crafted or obtained from a neural network. While the latent structural loss involves sequential computations of spanning forests, the triplet loss can be parallelized (processing all triplets in a clustering in parallel using more memory) but faces the challenge of different triplets potentially having non-overlapping margin positions (see section 3.3). To address this issue, we augment the complete graph of entities with a neutral entity, which is connected to all other entities by a learnable similarity score that provides a reference for all margin locations. Additionally, to further improve supervised clustering performance, especially in the case of limited availability of ground truth clusterings, we introduce a novel self-supervised clustering task. This task involves randomly sampling seed entities and constructing clusters with different transformations of each seed. This idea is inspired by text data augmentation techniques (Shorten et al., 2021) used in NLP tasks, but we formulate it, for the first time, as a self-supervised clustering task that aligns better with our finetuning phase. To summarize, the main contributions of our work are as follows: \u2022 We propose a novel approach for supervised clustering of entity subsets using context- aware entity embeddings from LLMo with a scalable inter-entity attention mechanism. \u2022 We identify a problem with directly applying triplet loss to supervised clustering when different triplets can potentially have non-overlapping margin locations. To address this, we design a novel augmented triplet loss function. \u2022 We also design a self-supervised clustering task to improve LLMo\u2019s finetuning perfor- mance, especially when only a limited number of ground truth clusterings are available. 2 Preprint. Under review. \u2022 Our experiments demonstrate that the proposed method, CACTUS, outperforms previous unsupervised and supervised clustering baselines on real-world e-commerce query and product clustering datasets. We also conduct ablation studies to show the effectiveness of each of the proposed components.",
17
+ "main_content": "2.1 Traditional methods for supervised text clustering The supervised clustering problem can be formulated as a binary pairwise classification task of predicting if a pair of entities belong to the same cluster. But this approach suffers from the drawback that the pairs are assumed to be i.i.d. (Finley & Joachims, 2005). Thus, structured prediction approaches have been explored as solutions to this problem. Traditional methods used hand-engineered pairwise features as inputs, where each pair of entities is described by a vector. Methods such as structural SVM (Tsochantaridis et al., 2004; Finley & Joachims, 2005) and structured perceptron (Collins, 2002) have been applied to this problem, where a parameterized scoring function is learned such that it assigns higher scores to correct clusterings in the training data. The scoring function depends on the pairwise features and the predicted clustering, and is formulated using correlation clustering (Bansal et al., 2002) or k-means (Finley & Joachims, 2008) frameworks. Observing that many within-cluster entity pairs have weak signals, Yu & Joachims (2009); Fernandes et al. (2012); Haponchyk et al. (2018) introduce maximum spanning forests over complete graphs of entities as latent structures in the scoring function. The inference stage involves finding a clustering with the highest score for a given entity subset. 2.2 Language models for text clustering Despite the widespread use of Language Models (LMs) across diverse domains and applications, their application to \u2018supervised\u2019 clustering remains limited. Haponchyk & Moschitti (2021) and Barnabo et al. (2023) utilize encoder-only LMs to obtain pairwise and individual entity representations, respectively, and finetune the LMs using latent structural loss. The former is not a scalable approach as each entity pair is passed separately through a conventional Transformer model. In contrast to these existing methods, we propose a novel approach that passes the entire entity set to a language model, and efficiently models interentity interactions within the Transformer layers, thereby improving clustering performance by capturing the unique context given by an entity subset. Furthermore, we depart from the latent structural loss (used in these existing works) that involves the sequential step of computing maximum spanning forests and employ an augmented triplet loss function that can be more easily parallelized and also achieves better performance. It is worth noting that LMs have been widely applied to slightly different but more prevalent problems of unsupervised (Grootendorst, 2022; Zhang et al., 2021a;c; Meng et al., 2022) and semi-supervised clustering (Zhang et al., 2021b; Lin et al., 2020; Zhang et al., 2022; An et al., 2023). These tasks involve clustering of a single large entity set, with some pairwise constraints provided for semi-supervised clustering. Some recent works Viswanathan et al. (2023); Zhang et al. (2023); Nakshatri et al. (2023) take advantage of the latest advances in LLMs by using them as oracles to make key decisions during the clustering process. However, these approaches are not suitable for our problem of clustering several entity subsets, as they require a new optimization problem for every new entity subset. 
Different from these LLM-based methods, our approach involves prompting LLMc to gather complete clusterings of several small entity subsets, which are subsequently used to fine-tune a scalable LLMo that is adapted to capture the underlying context efficiently. 3 Proposed Method This section provides a detailed description of the supervised clustering problem and our proposed method. Our approach involves finetuning an open-source pretrained Transformer encoder model, denoted by LLMo, for the task of context-aware clustering in a supervised manner. Figure 2: Overview of CACTUS: The entities in the input subset are tokenized and passed through LLMo, where the self-attention layers are modified with scalable inter-entity attention (SIA) to obtain context-aware entity embeddings. Pairwise cosine similarities are used for computing loss and predicted clusterings. Here, \u2018context\u2019 refers to the subset in which an entity occurs, which influences the entity\u2019s interpretation. To capture context-awareness efficiently, we modify the self-attention layers of LLMo to implement a scalable inter-entity attention mechanism, which is described in section 3.2. We identify limitations of directly applying the triplet loss to supervised clustering and propose an augmented triplet loss function as a solution in section 3.3. We further pretrain LLMo on a dataset-specific self-supervised clustering task before the finetuning phase, which is described in Appendix B due to space constraints. During inference, given an entity subset, we extract context-aware entity embeddings from the finetuned model, compute pairwise similarities, and feed them to an agglomerative clustering algorithm to obtain the predicted clustering. We refer to the overall method as CACTUS (Context-Aware ClusTering with aUgmented triplet losS). Figure 2 provides an overview of the proposed approach. 3.1 Preliminaries Let E be the universal set of entities in a dataset. For an entity subset E \u2286 E, a clustering C = (C, f ) contains a set of clusters C and an entity-to-cluster assignment function f : E \u21a0 C (where \u21a0 denotes a surjective function). We say that two clusterings, C = (C, f ) and C\u2032 = (C\u2032, f \u2032), over the same entity subset E are equivalent if they induce the same partitioning of items, i.e., if the pairwise co-cluster relationships are preserved. Formally, the clusterings C and C\u2032 are equivalent if and only if, \u2200e1, e2 \u2208 E, we have f (e1) = f (e2) \u21d4 f \u2032(e1) = f \u2032(e2). (1) A labeled clustering dataset D = {(E1, C1), ..., (E|D|, C|D|)} contains |D| samples where each sample contains an entity subset Ek \u2286 E and the corresponding ground truth clustering Ck. We describe the process of collecting cluster assignments from LLMc in Appendix A. These clusterings serve as ground truth in the dataset, which is partitioned into training, validation, and test splits. Given an entity subset Ek, our goal is to predict a clustering that is equivalent to the ground truth clustering Ck. We use text(e) to denote the textual description of entity e.
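As a small illustration of the equivalence criterion in Eq. (1), the sketch below checks whether two cluster-assignment functions over the same entity subset induce the same pairwise co-cluster relationships; it is our own example, not code from the paper.

```python
# Sketch: two clusterings of the same entity subset are equivalent iff they
# preserve all pairwise co-cluster relationships (Eq. 1). Our own illustration.
from itertools import combinations

def equivalent(assign_a: dict, assign_b: dict) -> bool:
    assert assign_a.keys() == assign_b.keys(), "must cluster the same entity subset"
    for e1, e2 in combinations(assign_a, 2):
        if (assign_a[e1] == assign_a[e2]) != (assign_b[e1] == assign_b[e2]):
            return False
    return True

# Cluster labels differ but the induced partition is the same, so these are equivalent.
print(equivalent({"a": 0, "b": 0, "c": 1}, {"a": "x", "b": "x", "c": "y"}))  # True
```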
3.2 Context-awareness using Scalable Inter-entity Attention (SIA) Here, we describe the architecture of LLMo, which is a Transformer encoder model that is finetuned on the supervised clustering task using ground truth clusterings from LLMc. A common approach for text clustering involves obtaining a single embedding vector separately for each entity using a language model and defining a similarity or distance function in the embedding space, which is used in a clustering algorithm. We refer to this approach as NIA (No Inter-entity Attention) because there is no interaction between different entities in the embedding module. To capture context, i.e., to model entity embeddings that depend on the entity subset they occur in, we can also pass the entire subset in the input sequence and pool each entity\u2019s token embeddings. We refer to this approach as FIA (Full Inter-entity Attention), because all the token pairs among different entities are considered in the attention matrix. This is not very practical, especially when entity descriptions are long and as the subsets grow large. So, we design a scalable inter-entity attention (SIA) mechanism that computes one representative embedding per entity which is used for inter-entity interactions. Though there are scalable attention methods for handling long sequences in Transformers (Beltagy et al., 2020; Kitaev et al., 2020; Ainslie et al., 2020), this is the first work to explore scalable attention in the context of clustering. The proposed SIA approach is described in detail below. We use the encoder of Flan-T5-base (Chung et al., 2022) as the underlying model and modify its attention layers for SIA. Let E = {e_1, ..., e_{|E|}} be an entity subset, where the tokens of entity e_i are denoted as tokenize(text(e_i)) = (e_i^1, ..., e_i^{|e_i|}). A Transformer-based LM gathers initial token embeddings and iteratively updates them using stacked Multi-Head Attention (MHA) and Feed Forward Network (FFN) layers. The Multi-Head Attention (MHA) layer traditionally computes all token-token pairwise attention scores, making it computationally intensive for long inputs. In the SIA mechanism, we propose modifications to the MHA layer to make it more scalable for our clustering task. We split the attention computation into intra-entity and inter-entity components and make the latter more efficient by using pooled entity representations. Let e_i^j \u2208 R^d denote the embedding of token e_i^j (the j-th token of the i-th entity) in the input to an MHA layer, and let \u00af e_i = (1/|e_i|) \u2211_k e_i^k denote the mean-pooled representation of entity e_i. The MHA layer transforms the embedding e_i^j to \u02dc e_i^j \u2208 R^d as follows. For simplicity, we show the computations for a single attention head and skip the projection layer at the end of MHA:
\tilde{e}_i^j = \underbrace{\sum_{k=1}^{|e_i|} \alpha_{\mathrm{intra}}(e_i^j, e_i^k)\, W_V e_i^k}_{\text{intra-entity attention}} + \underbrace{\sum_{m=1,\, m \neq i}^{|E|} \alpha_{\mathrm{inter}}(e_i^j, e_m)\, W_V \bar{e}_m}_{\text{inter-entity attention}}  (2)
\alpha_{\mathrm{intra(inter)}}(e_i^j, \cdot) = \frac{\exp(\mathrm{Att}_{\mathrm{intra(inter)}}(e_i^j, \cdot))}{\sum_{k=1}^{|e_i|} \exp(\mathrm{Att}_{\mathrm{intra}}(e_i^j, e_i^k)) + \sum_{m=1,\, m \neq i}^{|E|} \exp(\mathrm{Att}_{\mathrm{inter}}(e_i^j, e_m))}  (3)
\mathrm{Att}_{\mathrm{intra}}(e_i^j, e_i^k) = (W_Q e_i^j)^{T} (W_K e_i^k) + \phi(k - i)  (4)
\mathrm{Att}_{\mathrm{inter}}(e_i^j, e_m) = (W_Q e_i^j)^{T} (W_K \bar{e}_m)  (5)
where W_Q, W_K, W_V \u2208 R^{d\u00d7d} are the query, key, and value projection matrices, respectively. Eq.
(2) shows that a token within one entity attends to aggregated representations of other entities rather than individual tokens within those entities. The traditional softmax computation is altered in (3) to separate the intra and inter-entity terms. The intra-entity attention (4) includes a relative positional encoding term, denoted by \u03d5(.), while the inter-entity attention (5) does not. This is because the order of tokens within an entity is relevant while the order of entities in a subset is irrelevant. The token embeddings from the last Transformer layer are mean-pooled entity-wise to obtain the context-aware entity embeddings. Complexity: Considering a subset of N entities where each entity contains L tokens, and a fixed embedding dimension d, the computational complexity of self-attention in the NIA embedding method is O(NL2) because there are NL tokens in the entity subset, and each token only attends to the L tokens within the same entity. In contrast, using the FIA approach increases the complexity to O(N2L2) as each token attends to all NL tokens from all entities. SIA provides a compromise between these two methods; it has O(NL(L + N)) complexity because each token attends to the L tokens within the same entity and to N \u22121 representative entity embeddings. 5 Preprint. Under review. 3.3 Augmented triplet loss After obtaining context-aware entity embeddings, we compute cosine similarity between all entity pairs in a subset: sim(ei, ek) = \u00af e\u22a4 i \u00af ek \u2225\u00af ei\u2225\u2225\u00af ek\u2225 (6) The similarities are used to obtain predicted clusterings using the average-link agglomerative clustering algorithm. For the loss function, using these pairwise similarities as edge weights, we can construct a fully connected graph where each entity is a node. Previous methods for supervised clustering employed structural loss, which uses a scoring function based on a maximum spanning forest of the fully connected graph. This uses Kruskal\u2019s MST algorithm, which sequentially adds edges to the spanning forest and leads to slower loss computation. In contrast, the triplet loss (Schroff et al., 2015), which was shown to be a competitive baseline in Barnabo et al. (2023), can be easily parallelized as each triplet can be processed independently of the others. For each entity in the input subset, the triplet loss considers other entities within the same cluster as positives and the remaining entities as negatives. For an entity subset E with ground truth clustering C = (C, f ), the triplet loss is given by Ltriplet(E, C, \u0398) = 1 |T(C)| \u2211 (e,ep,en)\u2208T(C) (\u03b3 \u2212sim(e, ep) + sim(e, en))+ (7) where \u0398 are the parameters of the context-aware entity embedding module, \u03b3 is the margin which is a hyperparameter, and T(C) = {(e, ep, en) : e, ep, en \u2208E; e \u0338= ep; f (e) = f (ep) \u0338= f (en)} is the set of triplets. Figure 3: Example of an entity subset with 3 clusters containing 2 entities each. There exists an intra-cluster (yellow) edge with similarity less than some inter-cluster (green-blue) edges. For margin=0.3, the triplet loss (eq. 7) is at its minimum while the proposed augmented triplet loss (eq. 8) is not. The triplet loss formulation presents a challenge due to potential non-overlapping margin locations across different triplets. Margin location refers to the range between similarities from anchor entity (e) to positive (ep) and negative (en) entities within a triplet. 
For example, in Figure 3 with three clusters containing two entities each, the pairwise similarities shown result in the minimum value for triplet loss. However, there exist inter-cluster edges with higher similarity than an intra-cluster edge, which results in \u2018green\u2019 and \u2018blue\u2019 clusters being merged by the agglomerative clustering algorithm before the \u2018yellow\u2019 cluster is formed. This phenomenon can also occur for intra and inter-cluster edges in different entity subsets, which makes it difficult to choose a global threshold for agglomerative clustering during inference. To avoid such problems, we augment the complete graph with a neutral node that is connected to all other entities via a learnable neutral edge similarity sneu. The neutral node is incorporated into the augmented triplet loss to encourage intra and inter-cluster edge similarities to lie on opposite sides of sneu. The new loss function is given by Laug-triplet(E, C, \u0398) = 1 |T(C)|+|Pintra(C)|+|Pinter(C)| \u001a \u2211 (e,ep,en)\u2208T(C) (\u03b3 \u2212sim(e, ep) + sim(e, en))+ + \u2211 (e,ep)\u2208Pintra(C) (\u03b3 2 \u2212sim(e, ep) + sneu)+ + \u2211 (e,en)\u2208Pinter(C) (\u03b3 2 \u2212sneu + sim(e, en))+ \u001b (8) where Pintra(C) = {(e, ep) : (e, ep, .) \u2208T(C)} is the set of entity pairs in the same cluster and Pinter(C) = {(e, en) : (e, ., en) \u2208T(C)} is the set of entity pairs in different clusters. The newly added loss terms encourage the intra-cluster (inter-cluster) pairwise similarities to be \u03b3 2 higher (lower) than the neutral edge similarity. Thus, the neutral edge softly constraints the margin location for all triplets. 6 Preprint. Under review. Table 1: Dataset statistics. (* Since the Gifts dataset is proprietary, we provide approximate numbers for the statistics reported.) Gifts* Arts Games Instruments Office No. of entities \u223c365K 22,595 16,746 10,522 27,532 No. of entity sets \u223c42K 55,629 54,995 27,420 100,775 Avg. size of entity set \u223c46 5.4 5.7 5.6 5.0 Avg. no. of clusters per entity set \u223c6 2.6 2.8 2.8 2.7 Avg. no. of entities per cluster \u223c8 2.1 2.1 2.0 1.9 Avg. no. of words per entity \u223c3 11.6 6.9 10.5 13.9 4 Experiments In this section, we describe the datasets used for our experiments and compare the proposed method to existing unsupervised and supervised clustering baselines using external clustering evaluation metrics. Additionally, we conduct ablation studies to analyze the effectiveness of the different components of our method. Finally, we present a qualitative study to illustrate how context-awareness improves clustering performance. 4.1 Experimental setup We compile five datasets for our experiments, including four from Amazon product reviews (Ni et al., 2019) and one proprietary dataset called Gifts. The Amazon datasets including Arts, Games, Instruments, and Office, consist of sequences of products reviewed by users, with each user\u2019s product sequence treated as one entity subset. We use preprocessed datasets from Li et al. (2023), considering product titles as textual descriptions of entities. The Gifts dataset contains search queries related to \u2018gifts\u2019 from an e-commerce platform. Each entity subset contains potential next queries for a particular source query. Dataset statistics are summarized in Table 1. On average, the Amazon datasets contain 5 to 6 entities per entity subset, while Gifts contains approximately 46 entities. 
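For reference, all of the external metrics listed above have standard implementations; the sketch below computes them for a single entity set with scikit-learn (assumed to be available), using toy labels in place of the LLMc ground truth and the model's predictions.

```python
# Sketch: external clustering metrics for one entity set, assuming scikit-learn.
# Labels are toy placeholders; in practice they come from the LLMc ground-truth
# clustering and from the predicted clustering.
from sklearn.metrics import (
    adjusted_mutual_info_score,
    adjusted_rand_score,
    normalized_mutual_info_score,
    rand_score,
)

ground_truth = [0, 0, 1, 1, 2]   # cluster id per entity (ground truth)
predicted    = [0, 0, 1, 2, 2]   # cluster id per entity (model prediction)

print("RI ", rand_score(ground_truth, predicted))
print("ARI", adjusted_rand_score(ground_truth, predicted))
print("NMI", normalized_mutual_info_score(ground_truth, predicted))
print("AMI", adjusted_mutual_info_score(ground_truth, predicted))
```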
In each dataset, we randomly sample 3K entity sets for test split and 1K sets for validation split and use the remaining for training. For all datasets, we use a proprietary LLMc to collect ground truth clusterings. We run self-supervised pretraining for the Amazon datasets but not for Gifts, as the queries in Gifts are very short, making it difficult to obtain multiple transformations of a query. We evaluate the predicted clusterings from LLMo by comparing them to ground truth clusterings. Thus, we use the following extrinsic clustering evaluation metrics: Rand Index (RI), Adjusted Rand Index (ARI), Normalized Mutual Information (NMI), Adjusted Mutual Information (AMI), and F1-score Haponchyk et al. (2018). 4.2 Comparison with baselines As unsupervised clustering baselines, we employ the K-Means, Spectral, and Agglomerative clustering algorithms. The entity embeddings for unsupervised baselines are obtained from the pretrained Flan-T5-base encoder. For K-Means and Spectral clustering, we determine the number of clusters for each entity set using either the silhouette method or the average number from the training set based on validation metrics. For agglomerative clustering, we use cosine similarity with average linkage and determine the threshold based on the validation set. Given the scarcity of existing supervised clustering baselines, we incorporate only one such method from Barnabo et al. (2023) (SCL). NSC (Haponchyk & Moschitti, 2021) was not included as it demands substantial GPU memory and often leads to OOM errors. For a fair comparison, we employ FlanT5-base encoder as the LLM for all baselines and the results are shown in Table 2. CACTUS significantly outperforms all the unsupervised and supervised baselines. Compared to SCL, CACTUS improves the AMI and ARI metrics by 12.3%-26.8% and 15.3%-28.2%, respectively. Among the unsupervised methods, agglomerative clustering yields the best result in most cases. 7 Preprint. Under review. Table 2: Comparison of the proposed method to previous unsupervised and supervised clustering baselines. The first three are unsupervised methods and the last two are supervised clustering methods. (Agglo. stands for agglomerative clustering. *For the proprietary Gifts dataset, we report improvements against K-Means.) Model NMI AMI RI ARI F1 Gifts* K-Means +0.000 +0.000 +0.000 +0.000 +0.000 Spectral +0.020 +0.024 -0.002 +0.006 +0.014 Agglo. +0.047 +0.009 -0.019 +0.011 +0.027 SCL +0.167 +0.196 +0.065 +0.195 +0.114 CACTUS +0.207 +0.260 +0.098 +0.263 +0.144 Arts K-Means 0.660 0.167 0.690 0.250 0.766 Spectral 0.642 0.192 0.683 0.272 0.790 Agglo. 0.692 0.219 0.707 0.290 0.781 SCL 0.725 0.371 0.751 0.435 0.833 CACTUS 0.764 0.461 0.795 0.540 0.868 Games K-Means 0.681 0.213 0.712 0.247 0.767 Spectral 0.688 0.230 0.718 0.263 0.771 Agglo. 0.640 0.268 0.691 0.291 0.799 SCL 0.718 0.442 0.763 0.462 0.849 CACTUS 0.777 0.540 0.813 0.565 0.876 Instruments K-Means 0.678 0.181 0.705 0.213 0.764 Spectral 0.686 0.196 0.713 0.229 0.767 Agglo. 0.707 0.226 0.719 0.257 0.776 SCL 0.728 0.436 0.765 0.451 0.849 CACTUS 0.786 0.553 0.817 0.578 0.883 Office K-Means 0.731 0.267 0.748 0.332 0.808 Spectral 0.735 0.275 0.752 0.340 0.809 Agglo. 0.748 0.324 0.760 0.383 0.829 SCL 0.772 0.445 0.792 0.500 0.866 CACTUS 0.821 0.562 0.842 0.626 0.902 Table 3: Results on validation set using different architectures for entity set encoder. Proposed method (section 3.2) is indicated by *. Augmented triplet loss is used to train all models. 
Set encoder AMI ARI F1 Arts NIA 0.354 0.409 0.826 SIA (KV-mean) 0.398 0.450 0.840 SIA (first) 0.396 0.461 0.841 SIA (hid-mean)* 0.398 0.467 0.845 FIA 0.423 0.494 0.851 Office NIA 0.442 0.495 0.867 SIA (KV-mean) 0.470 0.526 0.875 SIA (first) 0.493 0.552 0.881 SIA (hid-mean) 0.513 0.568 0.885 FIA 0.493 0.553 0.879 Figure 4: GPU memory usage for inference using NIA, SIA (hid-mean), and FIA methods. 4.3 Ablation studies We conduct ablation experiments to assess the effectiveness of the various proposed components, including context-aware entity embeddings, augmented triplet loss function, and self-supervised clustering task. For faster training, we utilize 3K training entity sets instead of the whole dataset for ablation studies. We focus on AMI, ARI, and F1 scores and skip NMI and RI as the latter can sometimes be high for random clusterings and are not adjusted for chance unlike AMI and ARI (Vinh et al., 2009). Set encoder We compare five different methods of obtaining entity embeddings; the results are shown in Table 3. The NIA, SIA (hid-mean), and FIA methods are described in Section 3.2. We explore two more scalable attention mechanisms: SIA (KV-mean) where keys and values are pooled instead of the hidden representations, and SIA (first) where the first token in each entity is used as the representative token for inter-entity attention. Both SIA and FIA methods obtain better results than NIA which demonstrates the importance of capturing the context given by an entity set. The FIA method achieves the best results on the Arts dataset, while SIA (hid-mean) achieves the best results on the Office dataset. Among the three SIA methods, SIA (hid-mean) yields the highest metrics on both datasets. Figure 4 shows increasing GPU memory usage during inference from NIA to SIA (hid-mean) to FIA embedding methods. SIA achieves better results than FIA on the Office dataset, despite consuming 42% less memory. 8 Preprint. Under review. Table 4: Results on validation set using different supervised clustering loss functions for training. SIA (first) architecture is used for the set encoder. Loss AMI ARI F1 Arts cross-entropy 0.374 0.441 0.832 structural loss 0.385 0.441 0.835 triplet 0.389 0.444 0.837 augmented triplet 0.396 0.461 0.841 Office cross-entropy 0.488 0.548 0.876 structural loss 0.494 0.549 0.881 triplet 0.497 0.543 0.880 augmented triplet 0.493 0.552 0.881 Table 5: Results on validation set with and without self-supervision. SIA (hidmean) architecture is used for the set encoder. (SS: Self-supervision) SS AMI ARI F1 Arts \u2717 0.398 0.467 0.845 \u2713 0.446 0.502 0.855 Office \u2717 0.513 0.568 0.885 \u2713 0.552 0.608 0.894 1. l eat her r epai r doct or com pl et e di y ki t pr em i xed gl ue pai nt . . . 2. usa gl ue st i cks 5 l b box appr ox 90 st i cks f ul l si ze st i cks . . . 3. gel l y candl e wax 55 ounce cl ear 4. t he candl em aker s st or e 05417002641 nat ur al soy wax 10 l b bag whi t e 2 3 1 4 SI A 0. 43 0. 82 0. 34 0. 25 0. 09 0. 15 2 3 1 4 NI A 0. 00 0. 83 0. 02 0. 04 0. 16 0. 08 Figure 5: Case Study: Predicted clusterings with pairwise similarities using SIA and NIA methods. The SIA method correctly identifies the common cluster membership of the first two entities where NIA fails. The stopping threshold for agglomerative clustering is chosen based on the results of the validation set. 
Loss function We compare different loss functions including the triplet and augmented triplet loss functions described in Section 3.3, the structural loss Haponchyk & Moschitti (2021), and binary cross-entropy loss for pairwise classification. The results are shown in Table 4. The augmented triplet loss achieves the highest AMI, ARI, and F1 scores on the Arts dataset and the highest ARI and F1 scores on the Office dataset. Self-supervision Table 5 shows the clustering performance of our model with and without the proposed self-supervised pretraining phase as described in Section B. We initialize the model with pretrained FlanT5 weights in both cases but include an extra dataset-specific pretraining phase before finetuning while using self-supervision. Self-supervised clustering improves AMI, ARI, and F1 on both datasets. 4.4 Qualitative analysis We will qualitatively demonstrate the significance of context-aware embeddings using an example. Referring to Figure 5, using SIA embeddings, our model accurately identifies two products each under the \u2018Glue Products\u2019 and \u2018Candle Making Supplies\u2019 clusters. However, with NIA embeddings, the model fails to capture the similarity between the two glue products. Specifically, in the NIA embeddings, the first product, a leather repair glue paint, is placed closer to other leather repair products in the universal entity set but far away from products containing glue sticks. The SIA approach leverages the context provided by the current entity set and places the leather repair glue paint and glue sticks (the first two entities) in the same cluster. 5 Conclusion This paper presented a novel approach for supervised clustering of entity subsets using context-aware entity embeddings from LLMs. Context-awareness is achieved through a scalable inter-entity attention mechanism that facilitates interactions among different entities 9 Preprint. Under review. at each layer of the LLM. We also proposed an augmented triplet loss to address challenges encountered when directly applying triplet loss to supervised clustering. A self-supervised clustering task is introduced by drawing inspiration from text-augmentation techniques, which helps in enhancing the fine-tuning performance. We demonstrated that by integrating the proposed components, our model outperforms existing methods by a significant margin on extrinsic clustering evaluation metrics. Future research could investigate alternative techniques for inter-entity attention, explore additional loss functions and self-supervision tasks, and extend the current work to more application domains."
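The inference step described in the text above (pairwise cosine similarities from Eq. 6, then average-link agglomerative clustering under a validation-tuned threshold) can be sketched as follows. This is an illustrative reconstruction, not the paper's released code; the function name, the threshold value, and the scikit-learn dependency are assumptions.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering


def cluster_entity_subset(entity_emb: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """entity_emb: (N, d) context-aware entity embeddings of one subset; returns cluster labels."""
    # Pairwise cosine similarity (Eq. 6), converted to a distance for clustering.
    unit = entity_emb / np.clip(np.linalg.norm(entity_emb, axis=1, keepdims=True), 1e-12, None)
    dist = 1.0 - unit @ unit.T
    np.fill_diagonal(dist, 0.0)

    # Average-link agglomerative clustering with a global stopping threshold
    # (the paper picks the threshold on the validation set).
    # `metric="precomputed"` assumes scikit-learn >= 1.2 (older versions use `affinity`).
    clusterer = AgglomerativeClustering(
        n_clusters=None,
        metric="precomputed",
        linkage="average",
        distance_threshold=threshold,
    )
    return clusterer.fit_predict(dist)
```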
18
+ }
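For reference, a minimal PyTorch sketch of the augmented triplet loss in Eq. 8 above, assuming the pairwise similarities and ground-truth cluster labels for one entity subset are already computed. The function name and the dense way pairs and triplets are enumerated are illustrative assumptions, not the authors' implementation.

```python
import torch


def augmented_triplet_loss(sim: torch.Tensor, labels: torch.Tensor,
                           s_neu: torch.Tensor, margin: float = 0.3) -> torch.Tensor:
    """
    sim:    (N, N) pairwise cosine similarities within one entity subset
    labels: (N,)   ground-truth cluster ids
    s_neu:  ()     learnable neutral-edge similarity (a scalar parameter)
    """
    n = labels.numel()
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(n, dtype=torch.bool, device=sim.device)
    pos_mask = same & ~eye        # (e, ep): same cluster, distinct entities
    neg_mask = ~same              # (e, en): different clusters

    # Triplet hinge over all valid (e, ep, en); N^3 terms is fine for small subsets.
    trip = margin - sim.unsqueeze(2) + sim.unsqueeze(1)          # [e, ep, en]
    trip_valid = pos_mask.unsqueeze(2) & neg_mask.unsqueeze(1)
    trip_terms = torch.relu(trip[trip_valid])

    # Neutral-edge hinges: intra-cluster sims should exceed s_neu by margin/2,
    # inter-cluster sims should fall below it by margin/2.
    intra_terms = torch.relu(margin / 2 - sim[pos_mask] + s_neu)
    inter_terms = torch.relu(margin / 2 - s_neu + sim[neg_mask])

    num_terms = trip_terms.numel() + intra_terms.numel() + inter_terms.numel()
    return (trip_terms.sum() + intra_terms.sum() + inter_terms.sum()) / max(num_terms, 1)
```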
intro_8K/test_introduction_short_2405.01008v2.json ADDED
@@ -0,0 +1,16 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01008v2",
3
+ "title": "On Mechanistic Knowledge Localization in Text-to-Image Generative Models",
4
+ "abstract": "Identifying layers within text-to-image models which control visual\nattributes can facilitate efficient model editing through closed-form updates.\nRecent work, leveraging causal tracing show that early Stable-Diffusion\nvariants confine knowledge primarily to the first layer of the CLIP\ntext-encoder, while it diffuses throughout the UNet.Extending this framework,\nwe observe that for recent models (e.g., SD-XL, DeepFloyd), causal tracing\nfails in pinpointing localized knowledge, highlighting challenges in model\nediting. To address this issue, we introduce the concept of Mechanistic\nLocalization in text-to-image models, where knowledge about various visual\nattributes (e.g., \"style\", \"objects\", \"facts\") can be mechanistically localized\nto a small fraction of layers in the UNet, thus facilitating efficient model\nediting. We localize knowledge using our method LocoGen which measures the\ndirect effect of intermediate layers to output generation by performing\ninterventions in the cross-attention layers of the UNet. We then employ\nLocoEdit, a fast closed-form editing method across popular open-source\ntext-to-image models (including the latest SD-XL)and explore the possibilities\nof neuron-level model editing. Using Mechanistic Localization, our work offers\na better view of successes and failures in localization-based text-to-image\nmodel editing. Code will be available at\nhttps://github.com/samyadeepbasu/LocoGen.",
5
+ "authors": "Samyadeep Basu, Keivan Rezaei, Priyatham Kattakinda, Ryan Rossi, Cherry Zhao, Vlad Morariu, Varun Manjunatha, Soheil Feizi",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-08",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Diffusion AND Model",
14
+ "gt": "In recent years, substantial strides in conditional image generation have been made through diffusion-based text-to- image generative models, including notable examples like Stable-Diffusion (Rombach et al., 2021), Imagen (Saharia *Equal contribution 1University of Maryland 2Adobe Research. Correspondence to: Samyadeep Basu <[email protected]>. Proceedings of the 41 st International Conference on Machine Learning, Vienna, Austria. PMLR 235, 2024. Copyright 2024 by the author(s). et al., 2022), and DALLE (Ramesh et al., 2021). These models have captured widespread attention owing to their impressive image generation and editing capabilities, as ev- idenced by leading FID scores on prominent benchmarks such as MS-COCO (Lin et al., 2014). Typically trained on extensive billion-scale image-text pairs like LAION- 5B (Schuhmann et al., 2022), these models encapsulate a diverse array of visual concepts, encompassing color, artis- tic styles, objects, and renowned personalities. A recent work (Basu et al., 2023) designs an interpretability framework using causal tracing (Pearl, 2001) to trace the location of knowledge about various styles, objects or facts in text-to-image generative models. Essentially, causal trac- ing finds the indirect effects of intermediate layers (Pearl, 2001), by finding layers which can restore a model with cor- rupted inputs to its original state. Using this framework, the authors find that knowledge about various visual attributes is distributed in the UNet, whereas, there exists a unique causal state in the CLIP text-encoder where knowledge is localized. This unique causal state in the text-encoder can be leveraged to edit text-to-image models in order to remove style, objects or update facts effectively. However, we note that their framework is restricted to early Stable-Diffusion variants such as Stable-Diffusion-v1-5. In our paper, we first revisit knowledge localization for text- to-image generative models, specifically examining the ef- fectiveness of causal tracing beyond Stable-Diffusion-v1-5. While causal tracing successfully identifies unique localized states in the text-encoder for Stable-Diffusion variants, in- cluding v1-5 and v2-1, it fails to do so for recent models like SD-XL (Podell et al., 2023) and DeepFloyd1 across different visual attributes. In the UNet, causal states are distributed across a majority of open-source text-to-image models (ex- cluding DeepFloyd), aligning with findings in Basu et al. (2023). Notably, for DeepFloyd, we observe a lack of strong causal states corresponding to visual attributes in the UNet. To address the universal knowledge localization framework absence across different text-to-image models, we introduce the concept of mechanistic localization that aims to identify a small number of layers which control the generation of distinct visual attributes, across a spectrum of text-to-image 1https://github.com/deep-floyd/IF 1 arXiv:2405.01008v2 [cs.CV] 8 May 2024 Preprint Text-Encoder Replace the embedding of original prompt with a target prompt (e.g., \u2018a painting of a house\u2019) for this layer Low Fidelity to \u2019Van Gogh\u2019 style (b) UNet layers Cross-Attn Layers which use a different prompt than other layers Original Prompt: \u2018A house in the style of Van Gogh\u2019 \u201cMechanistic localization\u201d Output Modified! 
Cross-Attn layers Causal Intervention (Prior Works) Text-Encoder Causal Layer Distributed Knowledge - No \u201cMechanistic localization\u201d LocoGen (Ours) (a) Prompt: \u2018A house in the style of Van Gogh\u2019 Original Generation Figure 1. LOCOGEN: Identifying UNet layers that, when given different input, can alter visual attributes (e.g., style, objects, facts). (a) Earlier works (Basu et al., 2023) which show distributed knowledge using causal interventions. (b) LOCOGEN where a few cross-attention layers receive a different prompt-embedding than the original, leading to generation of images without the particular style. models. To achieve this, we propose LOCOGEN, a method that finds a subset of cross-attention layers in the UNet such that when the input to their key and value matrices is changed, output generation for a given visual attribute (e.g., \u201cstyle\u201d) is modified (see Figure 1). This intervention in the intermediate layers has a direct effect on the output \u2013 there- fore LOCOGEN measures the direct effect of intermediate layers, as opposed to indirect effects in causal tracing. Leveraging LOCOGEN, we probe knowledge locations for different visual attributes across popular open-source text-to-image models such as Stable-Diffusion-v1, Stable- Diffusion-v2, OpenJourney2, SD-XL (Podell et al., 2023) and DeepFloyd. For all models, we find that unique loca- tions can be identified for visual attributes (e.g., \u201cstyle\u201d, \u201cobjects\u201d, \u201cfacts\u201d). Using these locations, we then perform weight-space model editing to remove artistic \u201cstyles\u201d, mod- ify trademarked \u201cobjects\u201d and update outdated \u201cfacts\u201d in text-to-image models. This weight-space editing is per- formed using LOCOEDIT which updates the key and value matrices using a closed-form update in the locations identi- fied by LOCOGEN. Moreover, for certain attributes such as \u201cstyle\u201d, we show that knowledge can be traced and edited to a subset of neurons, therefore highlighting the possibilities of neuron-level model editing. Contributions. In summary, our contributions include: \u2022 We highlight the drawbacks of existing interpretability methods such as causal tracing for localizing knowl- edge in latest text-to-image models. \u2022 We introduce LOCOGEN which can universally iden- tify layers that control for visual attributes across a large spectrum of open-source text-to-image models. \u2022 By examining edited models using LOCOEDIT along 2https://huggingface.co/prompthero/openjourney with LOCOGEN, we observe that this efficient approach is successful across a majority of text-to-image models.",
15
+ "main_content": "Intepretability of Text-to-Image Models. To our understanding, there\u2019s limited exploration into the inner workings of text-to-image models, such as Stable-Diffusion. DAAM (Tang et al., 2023; Hertz et al., 2022) scrutinizes diffusion models through the analysis of cross-attention maps between text tokens and images, highlighting their semantic precision. (Chefer et al., 2023) understand the decomposition of concepts in diffusion models. (Basu et al., 2023) leverage causal tracing to understand how knowledge is stored in text-to-image models such as Stable-Diffusion-v1. Editing Text-to-Image Models. The capacity to modify a diffusion model\u2019s behavior without starting from scratch was initially investigated in Concept-Ablation (Kumari et al., 2023) and Concept-Erasure (Gandikota et al., 2023). Another method, TIME (Orgad et al., 2023), alters all the cross-attention layers\u2019 key and value matrices to translate between concepts, though lacks interpretability and applications on a real-use case of model editing. (Basu et al., 2023) edits text-to-image models in the text-encoder space by leveraging a singular causal state. However, existing works overlook newer text-to-image models (e.g., SD-XL and DeepFloyd), which we delve into in detail. 3. Preliminaries Diffusion models start with an initial random real image x0, the noisy image at time step t is expressed as xt = \u221a\u03b1tx0 + \ufffd (1 \u2212\u03b1t)\u03f5. Here, \u03b1t determines the strength of the random Gaussian noise, gradually diminishing as the time step increases, ensuring that xT \u223cN(0, I). age0, the noisy image at time step is expressed as xt = \u221a\u03b1tx0 + \ufffd (1 \u2212\u03b1t)\u03f5. Here, \u03b1t determines the strength of the random Gaussian noise, gradually diminishing as the time step increases, ensuring that xT \u223cN(0, I). The denoising network \u03f5\u03b8(xt, c, t), is pre-trained to denoise the noisy image xt and produce xt\u22121. Typically, the con2 Preprint Layers SD-v2-1 UNet SD-XL-UNet DeepFloyd-UNet 70 227 163 CLIP-Score Figure 2. Causal tracing for UNet. Similar to (Basu et al., 2023), we find that knowledge is causally distributed across the UNet for text-to-image models such as SD-v2-1 and SD-XL. For DeepFloyd we do not observe any significant causal state in the UNet. DeepFloyd Layers CLIP-Score 24 46 0 0 SD-XL Figure 3. Causal tracing for text-encoder. Unlike SD-v1-5 and SD-v2-1, we find that a singular causal states does not exist in the text-encoder for SD-XL and DeepFloyd. ditional input c for the denoising network \u03f5\u03b8(.) is a textembedding derived from a caption c through a text-encoder, denoted as c = v\u03b3(c). The noising as well as the denoising operation can also occur in a latent space defined by z = E(x) (Rombach et al., 2021) for better efficiency. The pre-training objective learns to denoise in the latent space as denoted by: L(z, c) = E\u03f5,t||\u03f5 \u2212\u03f5\u03b8(zt, c, t)||2 2, where zt = E(xt) and E is an encoder such as VQVAE (van den Oord et al., 2017). 4. On the Effectiveness of Causal Tracing for Text-to-Image Models In this section, we empirically observe the effectiveness of causal tracing to models beyond Stable-Diffusion-v15. In particular, we find the ability of causal tracing to identify localized control points in Stable-Diffusion-v2-1, OpenJourney, SD-XL and DeepFloyd. Causal Tracing in UNet. 
In Figure 2, we find that knowledge across different visual attributes is distributed in the UNet for all the text-to-image models (except for DeepFloyd), similar to Stable-Diffusion-v1-5. However, the degree of distribution varies between different text-to-image models. While knowledge about various visual attributes is densely distributed in Stable-Diffusion variants, for SDXL we find that the distribution is extremely sparse (e.g., only 5% of the total layers are causal). For DeepFloyd, we observe that there are no strong causal states in the UNet. We provide more qualitative visualizations on causal tracing across the these text-to-image models in Appendix A. Overall, these results reinforce the difficulty of editing knowledge in the UNet directly due to (i) distribution of causal states or (ii) absence of any. Causal Tracing in Text-Encoder. Basu et al. (2023) show that there exists a unique causal state in the text-encoder for Stable-Diffusion-v1-5 and Stable-Diffusion-v2-1 which can be used to perform fast model editing. In Figure 3, we find that such an unique causal state is absent in the text-encoder for DeepFloyd and SD-XL. We note that DeepFloyd uses a T5-text encoder, whereas SD-XL uses a a combination of CLIP-ViT-L and OpenCLIP-ViT-G (Radford et al., 2021). Our empirical results indicate that an unique causal state arises only when a CLIP text-encoder is used by itself in a text-to-image model. 5. LOCOGEN: Towards Mechanistic Knowledge Localization Given the lack of generalizability of knowledge localization using causal tracing as shown in Section 4, we introduce LOCOGEN , which can identify localized control regions for visual attributes across all text-to-image models. 5.1. Knowledge Control in Cross-Attention Layers During the inference process, the regulation of image generation involves the utilization of classifier-free guidance, as outlined in Ho & Salimans (2021) which incorporates scores from both the conditional and unconditional diffusion models at each time-step. Specifically, the classifier-free guidance is applied at each time-step to combine the conditional (\u03f5\u03b8(zt, c, t)) and unconditional score estimates (\u03f5\u03b8(zt, t)). The result is a combined score denoted as \u02c6 \u03f5(zt, c, t). \u02c6 \u03f5(zt, c, t) = \u03f5\u03b8(zt, c, t) + \u03b1 (\u03f5\u03b8 (zt, c, t) \u2212\u03f5\u03b8(zt, t)) , \u2200t \u2208[T, 1] . (1) This combined score is used to update the latent zt using DDIM sampling (Song et al., 2020) at each time-step to obtain the final latent code z0. We term the model \u03f5\u03b8(zt, c, t) as the Clean Model and the final image generated as Iclean. We note that text is incorporated in the process of generation using cross-attention layers denoted by {Cl}M l=1 within \u03f5\u03b8(zt, c, t) \u2200t \u2208[T, 1]. These layers include key and value matrices \u2013 {W K l , W V l }M l=1 that take text-embedding c of the input prompt and guide the generation toward the text prompt. Generally, the text-embedding c is same across all these layers. However, in order to localize and find control points for different visual attributes, we replace the original text-embedding c with a target prompt embedding c\u2032 across a small subset of the cross-attention layers and measure its direct effect on the generated image. 5.1.1. 
ALTERED INPUTS We say that a model receives altered input when a subset of cross-attention layers C\u2032 \u2282{Cl}M l=1 receive a different text-embedding c\u2032 than the other cross-attention layers that 3 Preprint SDv1-5 Open Journey SDXL SDv2-1 Layer 8 Original Layer 8 Original Layer 6 Original Layer 6 Original Layer 8 Original Layer 8 Original Layer 6 Original Layer 6 Original Layer 8 Original Layer 8 Original Layer 8 Original Layer 7 Original Layer 45 Original Layer 45 Original Layer 45 Original Layer 45 Original 'A house in the style of Van Gogh\u2019 'A town in the style of Monet\u2019 \u2018R2D2 in street\u2019 \u2018President of United States\u2019 'A house in the style of Van Gogh\u2019 'A house in the style of Van Gogh\u2019 'A house in the style of Van Gogh\u2019 'A town in the style of Monet\u2019 'A town in the style of Monet\u2019 'A tree in the style of Van Gogh\u2019 \u2018R2D2\u2019 \u2018R2D2\u2019 \u2018The British Monarch\u2019 \u2018The British Monarch\u2019 \u2018The British Monarch\u2019 \u2018President of United States\u2019 Original Prompt: Original Prompt: Original Prompt: Original Prompt: Figure 4. Interpretability Results: Images generated by intervening on the layers identified by LOCOGEN across various opensource text-to-image models. We compare the original generation vs. generation by intervening on the layers identified with LOCOGEN along with a target prompt. We find that across various text-to-image models, visual attributes such as style, objects, facts can be manipulated by intervening only on a very small fraction of cross-attention layers. take c as input. We name these layers as controlling layers. We denote by Ialtered the image generated using this model and Equation (1) with altered inputs when zT is given as the initial noise. We denote the model \u03f5\u03b8(zt, c, c\u2032, t) with the altered inputs as the Altered Model with the following inference procedure: \u02c6 \u03f5(zt, c, c\u2032, t) = \u03f5\u03b8(zt, c, c\u2032, t) + \u03b1(\u03f5\u03b8(zt, c, c\u2032, t) \u2212\u03f5\u03b8(zt, t)) . As an example, to find the layers where style knowledge corresponding to a particular artist is stored, {Cl}M l=1 \u2212C\u2032 receive text-embeddings corresponding to the prompt \u2018An <object> in the style of <artist>\u2019, whereas the layers in C\u2032 receive text-embeddings corresponding to the prompt \u2018An <object> in the style of painting\u2019. If the generated image with these inputs do not have that particular style, we realize that controlling layers C\u2032 are responsible for incorporating that specified style in the output (see Figure 1). In fact, this replacement operation enables finding locations across different cross-attention layers where various visual attribute knowledge is localized. 5.1.2. LOCOGEN ALGORITHM Our goal is to find controlling layers C\u2032 for different visual attributes. We note that the cardinality of the set |C\u2032| = m is a hyper-parameter and the search space for C\u2032 is exponential. Given |C\u2032| = m, there are \u0000M m \u0001 possibilities for C\u2032, thus, we restrict our search space to only adjacent cross-attention layers. In fact, we consider all C\u2032 such that C\u2032 = {Cl}j+m\u22121 l=j for j \u2208[1, M \u2212m + 1]. Selecting the hyper-parameter m. To select the cardinality of the set C\u2032, we run an iterative hyper-parameter search with m \u2208[1, M], where M is selected based on the maximum number of cross-attention layers in a given text-to-image generative model. 
At each iteration of the hyper-parameter search, we investigate whether there exists a set of m adjacent cross-attention layers that are responsible for the generation of the specific visual attribute. We find minimum m that such controlling layers for the particular attribute exists. To apply LOCOGEN for a particular attribute, we obtain a set of input prompts T = {Ti}N i=1 that include the particular attribute and corresponding set of prompts T \u2032 = {T \u2032 i}N i=1 where T \u2032 i is analogous to Ti except that the particular attribute is removed/updated. These prompts serve to create altered images and assess the presence of the specified attribute within them. Let ci be the text-embedding of Ti and c\u2032 i be that of T \u2032 i. Given m, we examine all M \u2212m + 1 possible candidates for controlling layers. For each of them, we generate N 4 Preprint SD-v1-5 SD-v2-1 SD-XL \u2013 Style, Facts SD-XL Objects Layers CLIP-Score Style Objects Facts Objects, Facts Style Facts Objects, Style Style, Facts Objects (a) (b) (c) (d) Figure 5. CLIP-Score of the generated images with original prompt for style, objects and target prompt for facts after intervening on layers through LOCOGEN. Lower CLIP-Score for objects, style indicate correct localization, whereas a higher CLIP-Score indicates such for facts. (a) For SD-v1-5 (m=2), objects, facts can be controlled from Layer 6, whereas style can be controlled from Layer 8. (b) For SD-v2-1(m=3), facts are controlled from Layer 7, style and objects from Layer 8. (c,d): For SD-XL, style (m=3), facts(m=5) are controlled from Layer 45, whereas objects are controlled from Layer 15. altered images where i-th image is generated by giving c\u2032 i as the input embedding to selected m layers and ci to other ones. Then we measure the CLIP-Score (Hessel et al., 2021) of original text prompt Ti to the generated image for style, objects and target text prompt T \u2032 i to the generated image for facts. For style and objects, drop in CLIP-Score shows the removal of the attribute while for facts increase in score shows similarity to the updated fact. We take the average of the mentioned score across all 1 \u2264i \u2264N. By doing that for all candidates, we report the one with minimum average CLIP-Score for style, objects and maximum average CLIP-Score for facts. These layers could be candidate layers controlling the generation of the specific attribute. Algorithm 1 provides the pseudocode to find the best candidate. Figure 5 shows CLIP-Score across different candidates. Algorithm 1 LOCOGEN Input: m, {Ti}N i=1, {T \u2032 i}N i=1, {ci}N i=1, {c\u2032 i}N i=1 Output: Candidate controlling set for j \u21901, . . . , M \u2212m do C\u2032 \u2190{Cl}j+m\u22121 l=j for i \u21901, . . . , N do si \u2190CLIP-SCORE (Ti, Ialtered) s\u2032 i \u2190CLIP-SCORE (T \u2032 i, Ialtered) aj \u2190AVERAGE \u0000{si}N i=1 \u0001 \u25b7for objects, style aj \u2190AVERAGE \u0000{s\u2032 i}N i=1 \u0001 \u25b7for facts j\u2217\u2190arg minj aj \u25b7for objects, style j\u2217\u2190arg maxj aj \u25b7for facts return aj\u2217, {Cl}j\u2217+m\u22121 l=j\u2217 We set a threshold for average CLIP-Score and find the minimum m such that there exists m adjacent cross-attention layers whose corresponding CLIP-Score meets the requirement. We point the reader to Appendix G for the values of m selected for different models and thresholds. Dataset for Prompts. 
We use the prompts used in (Basu et al., 2023; Kumari et al., 2023) to extract locations in the UNet which control for various visual attributes such as objects, style and facts. More details in Appendix C. 5.2. Empirical Results In this section, we provide empirical results highlighting the localized layers across various open-source text-to-image generative models: Stable-Diffusion Variants. Across both models, as depicted qualitatively in Figure 4 and quantitatively in Figure 5-(a), we observe the presence of a distinctive subset of layers that govern specific visual attributes. In the case of both SD-v1-5 and SD-v2-1, the control for \u201cstyle\u201d is centralized at l = 8 with m = 2. In SD-v1-5, the control for \u201cobjects\u201d and \u201cfacts\u201d emanates from the same locations: l = 6 and m = 2. However, in SD-v2-1, \u201cobjects\u201d are controlled from l = 8, while \u201cfacts\u201d are influenced by l = 7. Despite sharing a similar UNet architecture and undergoing training with comparable scales of pre-training data, these models diverge in the text-encoder utilized. This discrepancy in text-encoder choice may contribute to the variation in how they store knowledge concerning different attributes. Open-Journey. We note that Open-Journey exhibits control locations similar to SD-v1-5 for various visual attributes. As illustrated in Figure 4 and Figure 5-(a), \u201cobjects\u201d and \u201cfacts\u201d are governed from l = 6, while \u201cstyle\u201d is controlled from l = 8. Despite the architectural resemblance between Open-Journey and SD-v1-5, it\u2019s important to highlight that Open-Journey undergoes fine-tuning on a subset of images generated from Mid-Journey. This suggests that the control locations for visual attributes are more closely tied to the underlying model architecture than to the specifics of the training or fine-tuning data. SD-XL. Within SD-XL, our investigation reveals that both 5 Preprint Original Edited Original Edited Original Edited Original Edited SDv1-5 Open Journey SDv2-1 Edit: Remove Style of \u2018Van Gogh\u2019 Edit : Remove Style of \u2018Monet\u2019 Edit: Modify trademarked \u2018Snoopy\u2019 Edit: Update with correct \u2018British Monarch\u2019 SD-XL Figure 6. LOCOEDIT (Model editing) results at locations identified by LOCOGEN across various open-source text-to-image models. We observe that locations identified by our interpretability framework can be edited effectively to remove styles, objects and update facts in text-to-image models. We provide more visualizations in Appendix B. Original Layer 6 Original Layer 12 \u2018President of United States\u2019 \u2018The British Monarch\u2019 Original Prompt Original Layer 16 Original Layer 12 Original Prompt \u2018Ocean in Monet Style\u2019 \u2018Women working in garden in Van Gogh style\u2019 Figure 7. Interpretability Results for DeepFloyd. We find the control points for visual attributes to be dependent on the underlying prompts, rather than the visual attribute. \u201cstyle\u201d and \u201cfacts\u201d can be effectively controlled from l = 45, with m = 3 as evidenced in Figure 4 and Figure 5-(c). For the attribute \u201cobjects,\u201d control is situated at l = 15, albeit with a slightly larger value of m = 5. In summary, SD-XL, consisting of a total of 70 cross-attention layers, underscores a significant finding: various attributes in image generation can be governed by only a small subset of layers. DeepFloyd. 
Across SD-v1-5, SD-v2-1, Open-Journey, and SD-XL, our findings indicate that visual attributes like \u201cstyle\u201d, \u201cobjects\u201d and \u201cfacts,\u201d irrespective of the specific prompt used, can be traced back to control points situated within a limited number of layers. However, in the case of DeepFloyd, our observations differ. We find instead, that all attributes display localization dependent on the specific prompt employed. To illustrate, factual knowledge related to \u201cThe British Monarch\u201d is governed from l = 6 with m = 3, whereas factual knowledge tied to \u201cThe President of the United States\u201d is controlled from l = 12 (see Figure 7). This divergence in localization patterns highlights the nuanced behavior of DeepFloyd in comparison to the other models examined. More results can be referred in Appendix B.5. Human-Study Results. We run a human-study to verify that LOCOGEN can effectively identify controlling layers for different visual attributes. In our setup, evaluators assess 132 image pairs, each comprising an image generated by Clean Model and an image generated by Altered Model whose identified cross-attention layers takes different inputs. Evaluators determine whether the visual attribute is changed in the image generated by Altered Model(for instance, the artistic Van Gogh style is removed from the original image or not). Covering 33 image pairs, generated with different prompts per model, with five participating evaluators, our experiments reveal a 92.58% verification rate for the impact of LOCOGEN-identified layers on visual attributes. See more details in Appendix J. 6. LOCOEDIT: Editing to Ablate Concepts In this section, we analyse the effectiveness of closed-form edit updates in the layers identified by LOCOGEN across different text-to-image generative models. 6 Preprint SD-v1-5 SD-v2-1 OpenJourney SD-XL Original Edited Figure 8. Quantitative Model Editing Results for Text-toImage Models. We observe a drop in CLIP-Score for \u201cstyle\u201d and \u201dobjects\u201d, while an increase in CLIP-Score for \u201cfacts\u201d therefore highlighting correct edits. 6.1. Method Algorithm 1 extracts the exact set of cross-attention layers from which the knowledge about a particular visual attribute (e.g., style) is controlled. We denote this set as Cloc, where Cloc \u2282C and |Cloc| = m. This set of extracted crossattention layers Cloc, each containing value and key matrices is denoted as Cloc = { \u02c6 W K l , \u02c6 W V l }m l=1. The objective is to modify these weight matrices { \u02c6 W K l , \u02c6 W V l }m l=1 such that they transform the original prompt (e.g., \u2019A house in the style of Van Gogh\u2019) to a target prompt (e.g., \u2019A house in the style of a painting\u2019) in a way that the visual attribute in the generation is modified. Similar to Section 5.1.2, we use a set of input prompts Torig = {T o i }N i=1 consisting of prompts featuring the particular visual attribute. Simultaneously, we create a counterpart set Ttarget = {T t i }N i=1 where each T t i is identical to T o i but lacks the particular attribute in focus. Let co i \u2208Rd be the text-embedding of the last subject token in T o i and ct i \u2208Rd be that of T t i . We obtain matrix Xorig \u2208RN\u00d7d by stacking vectors co 1, co 2, . . . , co N and matrix Xtarget \u2208RN\u00d7d by stacking ct 1, ct 2, . . . , ct N. 
To learn a mapping between the key and the value embeddings, we solve the following optimization for each layer l \u2208[1, m] corresponding to the key matrices as: min W K l \u2225XorigW K l \u2212Xtarget \u02c6 W K l \u22252 2 + \u03bbK\u2225W K l \u2212\u02c6 W K l \u22252 2 where \u03bbK is the regularizer. Letting Yorig = XorigW K l the optimal closed form solution for the key matrix is: W K l = (XT origXorig + \u03bb1I)\u22121(XT origYtarget + \u03bbK \u02c6 W K l ) Same is applied to get optimal matrix for value embeddings. 6.2. Model Editing Results Stable-Diffusion Variants, Open-Journey and SD-XL. In Figure 6 and Figure 8, it becomes apparent that LOCOEDIT effectively integrates accurate edits into the locations identified by LOCOGEN. Qualitatively examining the visual edits in Figure 6, our method demonstrates the capability to remove artistic \u201cstyles\u201d, modify trademarked \u201cobjects,\u201d and update outdated \u201cfacts\u201d within a text-to-image model with accurate information. This visual assessment is complemented by the quantitative analysis in Figure 8, where we observe that the CLIP-Score of images generated by the edited model, given prompts containing specific visual attributes, consistently registers lower than that of the clean model for \u201cobjects\u201d and \u201cstyle.\u201d For \u201cfacts,\u201d we gauge the CLIP-Score of images from the model with the correct facts, wherein a higher CLIP-Score indicates a correct edit, as illustrated in Figure 8. Combining both qualitative and quantitative findings, these results collectively underscore the effectiveness of LOCOEDIT across SD-v1-5, SD-v2-1, Open-Journey, and SD-XL. However, it\u2019s noteworthy that the efficacy of closed-form edits varies among different textto-image models. Specifically, in the case of \u201cstyle,\u201d we observe the most substantial drop in CLIP-Score between the edited and unedited models for SD-v1-5 and Open-Journey, while the drop is comparatively less for SD-v2-1 and SDXL. Conversely, for \u201cfacts,\u201d we find that all models perform similarly in updating with new information. Limitations with DeepFloyd Closed-Form Edits. DeepFloyd, despite revealing distinct locations through LOCOGEN (albeit depending on the underlying prompt), exhibits challenges in effective closed-form edits at these locations. Appendix M provides qualitative visualizations illustrating this limitation. The model employs a T5-encoder with bidirectional attention, diverging from other text-to-image models using CLIP-variants with causal attention. Closedform edits, relying on mapping the last-subject token embedding to a target embedding, are typically effective in text-embeddings generated with causal attention, where the last-subject token holds crucial information. However, the T5-encoder presents a hurdle as tokens beyond the last subject token contribute essential information about the target attribute. Consequently, restricting the mapping to the lastsubject token alone proves ineffective for a T5-encoder. While LOCOGEN along with LOCOEDIT makes model editing more interpretable \u2013 we also find that localized-model editing is better than updating all layers in the UNet as shown in Appendix F. We also compare our method with existing editing methods (Basu et al., 2023; Kumari et al., 2023; Gandikota et al., 2023) in Appendix I. We find that our editing method is at par with existing baselines, with the added advantage of generalizability to models beyond Stable-Diffusion-v1-5. 
In Appendix L, we also show the 7 Preprint Original Image 50 Neurons Layer A painting of rocky ocean shore under the luminous night sky in the style of\u00a0Van Gogh Painting of a grieving woman in the style of\u00a0Pablo Picasso A painting of a river in the style of\u00a0Monet 100 Neurons Figure 9. Neuron-Level Model Editing Qualitative. Results when applying neuron-level dropout on identified neurons in layers specified with LOCOGEN on Stable Diffusion v1.5. The second and third columns display images with 50 and 100 modified neurons out of 1280 in controlling layers, respectively. The last column shows images with a different embedding in controlling layers. robustness of our method to generic prompts. 7. On Neuron-Level Model Editing In this section, we explore the feasibility of effecting neuronlevel modifications to eliminate stylistic attributes from the output of text-to-image models. According to layers identified with LOCOGEN, our objective is to ascertain whether the selective dropout of neurons at the activation layers within the specified cross-attention layers (key and value embeddings) can successfully eliminate stylistic elements. To accomplish this objective, we first need to identify which neurons are responsible for the generation of particular artistic styles, e.g., Van Gogh. We examine the activations of neurons in the embedding space of key and value matrices in identified cross-attention layers. More specifically, we pinpoint neurons that exhibit significant variations when comparing input prompts that include a particular style with the case that input prompts do not involve the specified style. To execute this process, we collect a set of N1 prompts that feature the specific style, e.g. Van Gogh. We gather text-embeddings of the last subject token of these prompts denoted by c1, c2, ..., cN1, where ci \u2208Rd. We also obtain a set of N2 prompts without any particular style and analogously obtain {c\u2032 1, c\u2032 2, ..., c\u2032 N2}, where c\u2032 i \u2208Rd. Next, for the key or value matrix W \u2208Rd\u00d7d\u2032, we consider key or value embedding of these input prompts, i.e., {zi}N1 i=1 \u222a{z\u2032 i}N2 i=1 where zi = ciW and z\u2032 i = c\u2032 iW. We Salvador Dali Van Gogh Monet Pablo Picasso Greg Rutkowski 0.1 0.2 0.3 Original 30 Neurons 50 Neurons 100 Neurons Layer Figure 10. Neuron-Level Model Editing Quantitative. Average CLIP-Score of generated images to text prompt \u2019style of <artist>\u2019. Brown bars show similarity to original generated image; red, orange, and green bars show similarity to generated image when 30, 50, and 100 neurons are modified, respectively; and blue bars refer to images when controlling layers receive other prompt. note that zi, z\u2032 i \u2208Rd\u2032. Subsequently, for each of these d\u2032 neurons, we assess the statistical difference in their activations between input prompts that include a particular style and those without it. Specifically, we compute the z-score for each neuron within two groups of activations: z1, z2, ..., zN1 and z\u2032 1, z\u2032 2, . .. , z\u2032 N2. The neurons are then ranked based on the absolute value of their z-score, with the top neurons representing those that exhibit significant differences in activations depending on the presence or absence of a particular concept in the input prompt. During generation, we drop-out these neurons and see if particular style is removed or not. As seen in Figure 9, neuron-level modification at inference time is effective at removing styles. 
This shows that knowledge about a particular style can be even more localized to a few neurons. It is noteworthy that the extent of style removal increases with the modification of more neurons, albeit with a trade-off in the quality of generated images. This arises because modified neurons may encapsulate information related to other visual attributes. To quantify the effectiveness of this approach, we measure the drop in CLIP-Score for modified images across various styles. Figure 10 presents a bar-plot illustrating these similarity scores. Notably, drop in CLIP-Score demonstrates that neuron-level model editing effectively removes the styles associated with different artists in the generated images. We refer to Appendix K.1 for more details on neuron-level model editing experiments. 8. Conclusion In our paper, we comprehensively examine knowledge localization across various open-source text-to-image models. We initially observe that while causal tracing proves effective for early Stable-Diffusion variants, its generalizability diminishes when applied to newer text-to-image models like DeepFloyd and SD-XL for localizing control points associated with visual attributes. To address this limitation, we introduce LOCOGEN, capable of effectively identifying lo8 Preprint cations within the UNet across diverse text-to-image models. Harnessing these identified locations within the UNet, we evaluate the efficacy of closed-form model editing across a range of text-to-image models leveraging LOCOEDIT , uncovering intriguing properties. Notably, for specific visual attributes such as \u201cstyle\u201d, we discover that knowledge can even be traced to a small subset of neurons and subsequently edited by applying a simple dropout layer, thereby underscoring the possibilities of neuron-level model editing. 9. Impact Statement This paper presents work to advance the understanding of the inner workings of open-source text-to-image generative models. Our interpretability method can advance the understanding of how knowledge is represented in generative models and does not have any potential negative implications on the society. Our editing method can address societal concerns (e.g., an artist asking the model owner to delete their style) in an effective way and to the best of our knowledge does not have any negative societal consequences."
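A schematic version of the LOCOGEN search (Algorithm 1) described in the text above. `generate_altered` and `clip_score` are hypothetical helpers standing in for the diffusion sampler that feeds a different prompt embedding only to the candidate cross-attention layers and for CLIP-Score computation, so this is a sketch of the control flow rather than the released code.

```python
def locogen_search(num_layers, m, prompts, target_prompts, embs, target_embs,
                   generate_altered, clip_score, attribute="style"):
    """Return the starting index and score of the best window of m adjacent cross-attn layers."""
    best_j, best_score = None, None
    for j in range(num_layers - m + 1):
        controlling = list(range(j, j + m))      # candidate controlling layers C'
        scores = []
        for T, T_target, c, c_target in zip(prompts, target_prompts, embs, target_embs):
            # c_target is routed only to `controlling`; all other layers still receive c.
            image = generate_altered(c, c_target, controlling)
            ref = T_target if attribute == "facts" else T
            scores.append(clip_score(ref, image))
        avg = sum(scores) / len(scores)
        # Facts: higher score to the target prompt is better; style/objects: lower is better.
        better = (best_score is None or
                  (avg > best_score if attribute == "facts" else avg < best_score))
        if better:
            best_j, best_score = j, avg
    return best_j, best_score
```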
16
+ }
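The closed-form LOCOEDIT update quoted above is a ridge-regression solve per identified key or value matrix. A minimal sketch, assuming Y_target = X_target @ W_old (which appears to be the intended definition despite the "Yorig" slip in the extracted text) and an illustrative regulariser value:

```python
import torch


def locoedit_update(W_old: torch.Tensor, X_orig: torch.Tensor,
                    X_target: torch.Tensor, lam: float = 0.01) -> torch.Tensor:
    """
    W_old:    (d, d') original key (or value) projection of one identified cross-attn layer
    X_orig:   (N, d)  last-subject-token embeddings of prompts containing the attribute
    X_target: (N, d)  embeddings of the corresponding target prompts
    lam:      ridge regulariser (illustrative value, not from the paper)
    """
    Y_target = X_target @ W_old                     # what the edited layer should now produce
    A = X_orig.T @ X_orig + lam * torch.eye(X_orig.shape[1], device=X_orig.device)
    B = X_orig.T @ Y_target + lam * W_old
    return torch.linalg.solve(A, B)                 # (X^T X + lam*I)^{-1} (X^T Y + lam*W_old)
```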
intro_8K/test_introduction_short_2405.01029v2.json ADDED
@@ -0,0 +1,17 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01029v2",
3
+ "title": "MVMoE: Multi-Task Vehicle Routing Solver with Mixture-of-Experts",
4
+ "abstract": "Learning to solve vehicle routing problems (VRPs) has garnered much\nattention. However, most neural solvers are only structured and trained\nindependently on a specific problem, making them less generic and practical. In\nthis paper, we aim to develop a unified neural solver that can cope with a\nrange of VRP variants simultaneously. Specifically, we propose a multi-task\nvehicle routing solver with mixture-of-experts (MVMoE), which greatly enhances\nthe model capacity without a proportional increase in computation. We further\ndevelop a hierarchical gating mechanism for the MVMoE, delivering a good\ntrade-off between empirical performance and computational complexity.\nExperimentally, our method significantly promotes zero-shot generalization\nperformance on 10 unseen VRP variants, and showcases decent results on the\nfew-shot setting and real-world benchmark instances. We further conduct\nextensive studies on the effect of MoE configurations in solving VRPs, and\nobserve the superiority of hierarchical gating when facing out-of-distribution\ndata. The source code is available at:\nhttps://github.com/RoyalSkye/Routing-MVMoE.",
5
+ "authors": "Jianan Zhou, Zhiguang Cao, Yaoxin Wu, Wen Song, Yining Ma, Jie Zhang, Chi Xu",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-06",
8
+ "primary_cat": "cs.AI",
9
+ "cats": [
10
+ "cs.AI",
11
+ "cs.LG"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "Mixture AND of AND Experts",
15
+ "gt": "Vehicle routing problems (VRPs) are a class of canonical combinatorial optimization problems (COPs) in operation research and computer science, with a wide spectrum of 1College of Computing and Data Science, Nanyang Techno- logical University, Singapore 2School of Computing and Infor- mation Systems, Singapore Management University, Singapore 3Department of Information Systems, Eindhoven University of Technology, The Netherlands 4Institute of Marine Science and Technology, Shandong University, China 5Singapore Institute of Manufacturing Technology (SIMTech), Agency for Science, Tech- nology and Research (A*STAR), Singapore. Correspondence to: Yaoxin Wu <[email protected]>. Proceedings of the 41 st International Conference on Machine Learning, Vienna, Austria. PMLR 235, 2024. Copyright 2024 by the author(s). applications in logistics (Cattaruzza et al., 2017), transporta- tion (Wu et al., 2023), and manufacturing (Zhang et al., 2023). The intrinsic NP-hard nature makes VRPs expo- nentially expensive to be solved by exact solvers. As an alternative, heuristic solvers deliver suboptimal solutions within reasonable time, but need substantial domain exper- tise to be designed for each problem. Recently, learning to solve VRPs has received much attention (Bengio et al., 2021; Bogyrbayeva et al., 2024), with fruitful neural solvers being developed. Most of them apply deep neural networks to learn solution construction policies via various training paradigms (e.g., reinforcement learning (RL)). Besides gain- ing decent performance, they are characterized by less com- putational overhead and domain expertise than conventional solvers. However, prevailing neural solvers still need net- work structures tailored and trained independently for each specific VRP, instigating prohibitive training overhead and less practicality when facing multiple VRPs. In this paper, we aim to develop a unified neural solver, which can be trained for solving a range of VRP variants si- multaneously, and has decent zero-shot generalization capa- bility on unseen VRPs. A few recent works explore similar problem settings. Wang & Yu (2023) applies multi-armed bandits to solve multiple VRPs, while Lin et al. (2024) adapts the model pretrained on one base VRP to target VRPs by efficient fine-tuning. They fail to achieve zero-shot generalization to unseen VRPs due to the dependence on net- works structured for predetermined problem variants. Liu et al. (2024) empowers the neural solver with such general- izability by the compositional zero-shot learning (Ruis et al., 2021), which treats VRP variants as different combinations of a set of underlying attributes and uses a shared network to learn their representations. However, it still leverages ex- isting network structure proposed for simple VRPs, which is limited by its model capacity and empirical performance. Motivated by the recent advance of large language mod- els (LLMs) (Kaplan et al., 2020; Floridi & Chiriatti, 2020; Touvron et al., 2023), we propose a multi-task VRP solver with mixture-of-experts (MVMoE). Typically, a mixture-of- expert (MoE) layer replaces a feed-forward network (FFN) with several \"experts\" in a Transformer-based model, which are a group of FFNs with respective trainable parameters. 
An input to the MoE layer is routed to specific expert(s) by a gating network, and only parameters in selected expert(s) 1 arXiv:2405.01029v2 [cs.AI] 6 May 2024 MVMoE: Multi-Task Vehicle Routing Solver with Mixture-of-Experts are activated (i.e., conditional computation (Jacobs et al., 1991; Jordan & Jacobs, 1994)). In this manner, partially acti- vated parameters can effectively enhance the model capacity without a proportional increase in computation, making the training and deployment of LLMs viable. Therefore, to- wards a more generic and powerful neural solver, we first propose an MoE-based neural VRP solver, and present a hierarchical gating mechanism for a good trade-off between empirical performance and computational complexity. We choose the setting from Liu et al. (2024) as a test bed due to its potential to solve an exponential number of new VRP variants as any combination of the underlying attributes. Our contributions are summarized as follows. 1) We propose a unified neural solver MVMoE to solve multiple VRPs, which first brings MoEs into the study of COPs. The sole MVMoE can be trained on diverse VRP variants, and facili- tate a strong zero-shot generalization capability on unseen VRPs. 2) We develop a hierarchical gating mechanism for MVMoE to attain a favorable balance between empirical performance and computational overhead. Surprisingly, it exhibits much stronger out-of-distribution generalization capability than the base gating. 3) Extensive experiments demonstrate that MVMoE significantly improves the zero- shot generalization against baselines on 10 unseen VRP variants, and achieves decent results on the few-shot setting and real-world instances. We further provide extensive stud- ies on the effect of MoE configurations (such as the position of MoEs, the number of experts, and the gating mechanism) on the zero-shot generalization performance.",
16
+ "main_content": "Neural VRP Solvers. Two mainstreams exist in literature on learning to solve VRPs: 1) Construction-based solvers, which learn policies to construct solutions in an end-to-end manner. Vinyals et al. (2015) proposes Pointer Network to estimate the optimal solution to the traveling salesman problem (TSP) in an autoregressive way. The follow-up works apply RL to explore better approximate solutions to TSP (Bello et al., 2017) and capacitated vehicle routing problem (CVRP) (Nazari et al., 2018). Kool et al. (2018) proposes an attention-based model (AM) that uses Transformer to solve a series of VRPs independently. By leveraging the symmetry property in solutions, Kwon et al. (2020) proposes the policy optimization with multiple optima (POMO) to further promote the performance in solving TSP and CVRP. Other construction-based solvers are often developed on top of AM and POMO (Kwon et al., 2021; Li et al., 2021a; Kim et al., 2022; Berto et al., 2023; Chen et al., 2023; Grinsztajn et al., 2023; Chalumeau et al., 2023; Hottung et al., 2024). Besides the autoregressive manner, several works construct a heatmap to solve VRPs in a non-autoregressive manner (Joshi et al., 2019; Fu et al., 2021; Kool et al., 2022; Qiu et al., 2022; Sun & Yang, 2023; Min et al., 2023; Ye et al., 2023; Kim et al., 2024). 2) Improvement-based solvers, which learn policies to iteratively refine an initial solution until a termination condition is satisfied. The policies are often trained in contexts of classic local search (Croes, 1958; Shaw, 1998) or specialized heuristic solvers (Helsgaun, 2017) for obtaining more efficient or effective search components (Chen & Tian, 2019; Lu et al., 2020; Hottung & Tierney, 2020; d O Costa et al., 2020; Wu et al., 2021; Xin et al., 2021; Hudson et al., 2022; Zhou et al., 2023a; Ma et al., 2023). In general, constructionbased solvers can efficiently achieve desired performance, whereas improvement-based solvers have the potential to deliver better solutions given prolonged inference time. Recent research uncovers the deficient generalization capability of neural solvers, which suffer from drastic performance decrement on unseen data (Joshi et al., 2021). Previous works mainly focus on the cross-size generalization (Fu et al., 2021; Hou et al., 2023; Son et al., 2023; Luo et al., 2023; Drakulic et al., 2023) or cross-distribution generalization (Zhang et al., 2022; Geisler et al., 2022; Bi et al., 2022; Jiang et al., 2023) or both (Manchanda et al., 2022; Zhou et al., 2023b; Wang et al., 2024) on a single problem. In this paper, we step further to explore the generalization across different VRP variants (Wang & Yu, 2023; Liu et al., 2024; Lin et al., 2024). Mixture-of-Experts. The original idea of MoEs was proposed three decades ago (Jacobs et al., 1991; Jordan & Jacobs, 1994). In early concepts, the expert was defined as an entire neural network, and hence MoEs was similar to an ensemble of neural networks. Eigen et al. (2013) launchs the era when researchers start applying MoEs as components of neural networks. As an early success of MoEs applied in large neural networks, Shazeer et al. (2017) introduces the sparsely-gated MoEs in language modeling and machine translation, achieving state-of-the-art results at the time with only minor losses in computational efficiency. 
Follow-up works mainly focus on improving the gating mechanism (Lewis et al., 2021; Roller et al., 2021; Zuo et al., 2022; Zhou et al., 2022; Puigcerver et al., 2024; Xue et al., 2024) or applications to other domains (Lepikhin et al., 2020; Riquelme et al., 2021; Fedus et al., 2022b). We refer interested readers to Yuksel et al. (2012); Fedus et al. (2022a) for a comprehensive survey. 3. Preliminaries In this section, we first present the definition of CVRP, and then introduce its variants featured by additional constraints. Afterwards, we delineate recent construction-based neural solvers for VRPs (Kool et al., 2018; Kwon et al., 2020). VRP Variants. We define a CVRP instance of size n over 2 MVMoE: Multi-Task Vehicle Routing Solver with Mixture-of-Experts -0.5 +0.2 -0.3 +0.1 Linehaul Backhaul Depot [0, 3.0] (O) (B) (L) (TW) 0.2 0.2 0.2 0.1 0.3 < L [1.0, 1.5] [1.4, 1.8] [2.0, 2.4] [2.4, 2.8] Figure 1. Illustrations of sub-tours with various constraints: open route (O), backhaul (B), duration limit (L), and time window (TW). a graph G = {V, E}, where V includes a depot node v0 and customer nodes {vi}n i=1, and E includes edges e(vi, vj) between node vi and vj(i \u0338= j). Each customer node is associated with a demand \u03b4i, and a capacity limit Q is set for each vehicle. The solution (i.e., tour) \u03c4 is represented as a sequence of nodes, consisting of multiple sub-tours. Each sub-tour represents that a vehicle starts from the depot, visits a subset of customer nodes and returns to the depot. The solution is feasible if each customer node is visited exactly once, and the total demand in each sub-tour does not exceed the capacity limit Q. We consider the Euclidean space with the cost function c(\u00b7) defined as the total length of the tour. The objective is to find the optimal tour \u03c4 \u2217with the minimal cost: \u03c4 \u2217= arg min\u03c4\u2208\u03a6 c(\u03c4|G), where \u03a6 is the discrete search space that contains all feasible tours. On top of CVRP (featured by the capacity constraint (C)), several VRP variants involve additional practical constraints. 1) Open Route (O): The vehicle does not need to return to the depot v0 after visiting customers; 2) Backhaul (B): The demand \u03b4i is positive in CVRP, representing a vehicle unloads goods at the customer node. In practice, a customer can have a negative demand, requiring a vehicle to load goods. We name the customer nodes with \u03b4i > 0 as linehauls and the ones with \u03b4i < 0 as backhauls. Hence, VRP with backhaul allows the vehicle traverses linehauls and backhauls in a mixed manner, without strict precedence between them; 3) Duration Limit (L): To maintain a reasonable workload, the cost (i.e., length) of each route is upper bounded by a predefined threshold; 4) Time Window (TW): Each node vi \u2208V is associated with a time window [ei, li] and a service time si. A vehicle must start serving customer vi in the time slot from ei to li. If the vehicle arrives earlier than ei, it has to wait until ei. All vehicles must return to the depot v0 no later than l0. The aforementioned constraints are illustrated in Fig. 1. By combining them, we can obtain 16 typical VRP variants, which are summarized in Table 3. Note that the combination is not a trivial addition of different constraints. For example, when the open route is coupled with the time window, the vehicle does not need to return to the depot, and hence the constraint imposed by l0 at the depot is relaxed. 
We present more details of VRP variants and the associated data generation process in Appendix A. Learning to Solve VRPs. Typical neural solvers (Kool et al., 2018; Kwon et al., 2020) parameterize the solution construction policy by an attention-based neural network \u03c0\u03b8, which is trained to generate a solution in an autoregressive way. The feasibility of the generated solution is guaranteed by the masking mechanism during decoding. Without loss of generality, we consider RL training paradigm, wherein the solution construction process is formulated as a Markov Decision Process (MDP). Given an input instance, the encoder processes it and attains all node embeddings, which, with the context representation of the constructed partial tour, represent the current state. The decoder takes them as inputs and outputs the probabilities of valid nodes (i.e., actions) to be selected. After a complete solution \u03c4 is constructed, its probability can be factorized via the chain rule such that p\u03b8(\u03c4|G) = QT t=1 p\u03b8(\u03c0(t) \u03b8 |\u03c0(<t) \u03b8 , G), where \u03c0(t) \u03b8 and \u03c0(<t) \u03b8 denote the selected node and constructed partial tour at step t, and T is the number of total steps. The reward is defined as the negative tour length, i.e., R = \u2212c(\u03c4|G). Given a baseline function b(\u00b7) for training stability, the policy network \u03c0\u03b8 is often trained by REINFORCE (Williams, 1992) algorithm, which applies estimated gradients of the expected reward to optimize the policy as below, \u2207\u03b8La(\u03b8|G) = Ep\u03b8(\u03c4|G)[(c(\u03c4) \u2212b(G))\u2207\u03b8 log p\u03b8(\u03c4|G)]. (1) 4. Methodology In this section, we present the multi-task VRP solver with MoEs (MVMoE), and introduce the gating mechanism. Without loss of generality, we aim to learn a constructionbased neural solver (Kool et al., 2018; Kwon et al., 2020) for tackling VRP variants with the five constraints introduced in Section 3. The structure of MVMoE is illustrated in Fig. 2. 4.1. Multi-Task VRP Solver with MoEs Multi-Task VRP Solver. Given an instance of a specific VRP variant, the static features of each node vi are expressed by Si = {yi, \u03b4i, ei, li}, where yi, \u03b4i, ei, li denote the coordinate, demand, start and end time of the time window, respectively. The encoder takes these static node features as inputs, and outputs d-dimensional node embeddings hi. At the tth decoding step, the decoder takes as input the node embeddings and a context representation, including the embedding of the last selected node and dynamic features Dt = {ct, tt, lt, ot}, where ct, tt, lt, ot denote the remaining capacity of the vehicle, the current time, the length of the current partial route, and the presence indicator of the open route, respectively. Thereafter, the decoder outputs the probability distribution of nodes, from which a valid node is selected and appended to the partial solution. A complete solution is constructed in an autoregressive manner by iterating the decoding process. 3 MVMoE: Multi-Task Vehicle Routing Solver with Mixture-of-Experts Multi-Head Attention with MoE Node Embedding V Q Probability Update & Loop Solution N x Multi-Head Self-Attention Add & Normalize Add & Normalize Encoder Linear MoE Dot-Product & Softmax Dynamic Feature Last Node Decoder K Static Feature MoE Layer Top 2 -\" $& = ['&, )&, *&, +&] -% -) -* \u210e\" \u210e% \u210e) \u210e* \u210e) \u210e\" \u210e\" \u210e% \u210e* /+ 0+ 1+ 2+ %\" %# %$ %) -(!)# 0#(!) 3 0'(!) -(!)' MOEs(*) & Figure 2. 
The model structure of MVMoE. [Green part]: Given an input instance, the encoder and decoder output node embeddings and probabilities of nodes to be selected, respectively. The gray nodes are masked to satisfy problem-specific constraints for feasibility. The node with a deeper color denote a later node embedding. [Yellow part]: In an MoE layer, where we take the (node-level) input-choice Top2 gating as an example, the input x (i.e., node) is routed to two experts that derive the two largest probabilities from the gating network G. In each training step, we randomly select a VRP variant, and train the neural network to solve associated instances in a batch. In this way, MVMoE is able to learn a unified policy that can tackle different VRP tasks. If only a subset of static or dynamic features are involved in the current selected VRP variant, the other features are padded to the default values (e.g., zeros). For example, given a CVRP instance, the static features of the ith customer node are S(C) i = {yi, \u03b4i, 0, 0}, and the dynamic features at the tth decoding step are D(C) t = {ct, 0, lt, 0}. In summary, motivated by the fact that different VRP variants may include some common attributes (e.g., coordinate, demand), we define the static and dynamic features as the union set of attributes that exist in all VRP variants. By training on a few VRP variants with these attributes, the policy network has the potential to solve unseen variants, which are characterized by different combinations of these attributes, i.e., the zero-shot generalization capability (Liu et al., 2024). Mixture-of-Experts. Typically, an MoE layer consists of 1) m experts {E1, E2, . . . , Em}, each of which is a linear layer or FFN with independent trainable parameters, and 2) a gating network G parameterized by WG, which decides how the inputs are distributed to experts. Given a single input x, G(x) and Ej(x) denote the output of the gating network (i.e., an m-dimensional vector), and the output of the jth expert, respectively. In light of this, the output of an MoE layer is calculated as, MoE(x) = m X j=1 G(x)jEj(x). (2) Intuitively, a sparse vector G(x) only activates a small subset of experts with partial model parameters, and hence saves the computation. Typically, a TopK operator can achieve such sparsity by only keeping the K-largest values while setting others as the negative infinity. In this case, the gating network calculates the output as G(x) = Softmax(TopK(x \u00b7 WG)). Given the fact that larger sparse models do not always lead to better performance (Zuo et al., 2022), it is crucial yet tricky to design effective and efficient gating mechanisms to endow each expert being sufficiently trained, given enough training data. To this effect, some works have been put forward in language and vision domains, such as designing an auxiliary loss (Shazeer et al., 2017) or formulating it as a linear assignment problem (Lewis et al., 2021) in pursuit of the load balancing. MVMoE. By integrating the above parts, we obtain the multi-task VRP solver with MoEs. The overall model structure is shown in Fig. 2, where we employ MoEs in both the encoder and decoder. In specific, we substitute MoEs for the FFN layer in the encoder, and substitute MoEs for the final linear layer of multi-head attention in the decoder. We refer more details of the structure of MVMoE to Appendix B. 
We empirically find our design is effective in generating high-quality solutions, and especially employing MoEs in the decoder tends to exert a greater influence on performance (see Section 5.2). We jointly optimize all trainable parameters \u0398, with the objective formulated as follows, min \u0398 L = La + \u03b1Lb, (3) where La denotes the original loss function of the VRP solver (e.g., the REINFORCE loss used to train the policy for solving VRP variants in Eq. (1)), Lb denotes the loss function associated with MoEs (e.g., the auxiliary loss used to ensure load balancing in Eq. (19) in Appendix B), and \u03b1 is a hyperparameter to control its strength. 4.2. Gating Mechanism We mainly consider the node-level (or token-level) gating, by which each node is routed independently to experts.1 In 1In addition, we also investigate another two gating levels, i.e., instance-level and problem-level gating, which are presented in Section 5.2 and Appendix B. 4 MVMoE: Multi-Task Vehicle Routing Solver with Mixture-of-Experts 2.31 0.45 0.74 0.65 1.43 -0.68 1.20 -1.02 0.98 -0.53 0.76 1.97 -0.35 1.26 0.87 0.36 2.31 0.45 0.74 0.65 1.43 -0.68 1.20 -1.02 0.98 -0.53 0.76 1.97 -0.35 1.26 0.87 0.36 !! !\" !# !$ \"! \"\" \"# \"$ !! !\" !# !$ \"! \"\" \"# \"$ Figure 3. An illustration of the score matrix and gating algorithm. Left panel: Input-choice gating. Right panel: Expert-choice gating. The selected experts or nodes are in color. The arrow marks the dimension, along which the TopK experts or nodes are selected. each MoE layer, the extra computation originates from the forward pass of the gating network and the distribution of nodes to the selected experts. While employing MoEs in the decoder can significantly improve the performance, the number of decoding steps T increases as the problem size n scales up. It suggests that compared to the encoder with a fixed number of gating steps N (\u226aT), applying MoEs in the decoder may substantially increase the computational complexity. In light of this, we propose a hierarchical gating mechanism to make the better use of MoEs in the decoder for gaining a good trade-off between empirical performance and computational complexity. Next, we detail the nodelevel and hierarchical gating mechanism. Node-Level Gating. The node-level gating routes inputs at the granularity of nodes. Let d denote the hidden dimension and WG \u2208Rd\u00d7m denote trainable parameters of the gating network in MVMoE. Given a batch of inputs X \u2208RI\u00d7d, where I is the total number of nodes (i.e., batch size B \u00d7 problem scale n), each node is routed to the selected experts based on the score matrix H = (X \u00b7 WG) \u2208RI\u00d7m predicted by the gating network. We illustrate an example of the score matrix in Fig. 3, where xi denotes the ith node, and Ej denotes the jth expert in the node-level gating. In this paper, we mainly consider two popular gating algorithms (Shazeer et al., 2017; Zhou et al., 2022): 1) Inputchoice gating: Each node selects TopK experts based on H. Typically, K is set to 1 or 2 to retain a reasonable computational complexity. The input-choice gating is illustrated in the left panel of Fig. 3, where each node is routed to two experts with the largest scores (i.e., Top2). However, this method cannot guarantee load balancing. An expert may receive much more nodes than the others, resulting in a dominant expert while leaving others underfitting. 
To address this issue, most works employ an auxiliary loss to equalize quantities of nodes sent to different experts during training. Here we use the importance & load loss (Shazeer et al., 2017) as Lb in Eq. (3) to mitigate load imbalance (see Appendix B). 2) Expert-choice gating: Each expert selects Base Gating Top 2 Dense Layer Sparse Layer If Else Hierarchical Gating &\" &# ' %\" %# %$ %) %\" %# %) %$ 3 3 & Figure 4. A base gating (i.e., the input-choice gating with K = 2) and its hierarchical gating counterpart. In the latter, the gating network G1 routes inputs to the sparse layer ({G2, E1, E2, E3, E4}) or the dense layer D. If the sparse layer is chosen, the gating network G2 routes nodes to experts accoring to the base gating. TopK nodes based on H. Typically, K is set to I\u00d7\u03b2 m , where \u03b2 is the capacity factor reflecting the average number of experts utilized by a node. The expert-choice gating is illustrated in the right panel of Fig. 3, where each expert selects two nodes with the largest scores given \u03b2 = 2. While this gating algorithm explicitly ensures load balancing, some nodes may not be chosen by any expert. We refer more details of the above gating algorithms to Appendix B. Hierarchical Gating. In the VRP domain, it is computationally expensive to employ MoEs in each decoding step, since 1) the number of decoding steps T increases as the problem size n rises; 2) the problem-specific feasibility constraints must be satisfied during decoding. To tackle the challenges, we propose to employ MoEs only in partial decoding steps. Accordingly, we present a hierarchical gating, which learns to effectively and efficiently utilize MoEs during decoding. We illustrate the proposed hierarchical gating in Fig. 4. An MoE layer with the hierarchical gating includes two gating networks {G1, G2}, m experts {E1, E2, . . . , Em}, and a dense layer D (e.g., a linear layer). Given a batch of inputs X \u2208RI\u00d7d, the hierarchical gating routes them in two stages. In the first stage, G1 decides to distribute inputs X to either the sparse or dense layer according to the problem-level representation X1. In specific, we obtain X1 by applying the mean pooling along the first dimension of X, and process it to obtain the score matrix H1 = (X1 \u00b7 WG1) \u2208R1\u00d72. Then, we route the batch of inputs X to the sparse or dense layer by sampling from the probability distribution G1(X) = Softmax(H1). Here we employ the problem-level gating in G1 for the generality and efficiency of the hierarchical gating (see Appendix D for further discussions). In the second stage, if X is routed to the sparse layer, the gating network G2 is activated to route nodes to experts on the node-level by using aforementioned gating algorithms (e.g., the input-choice gating). Otherwise, X is routed to the dense layer D and transformed into D(X) \u2208RI\u00d7d. In summary, the hierarchical gating learns to output G1(X)0 Pm j=1 G2(X)jEj(X) or G1(X)1D(X) 5 MVMoE: Multi-Task Vehicle Routing Solver with Mixture-of-Experts Table 1. Performance on 1K test instances of trained VRPs. * represents 0.000%, with which the gaps are computed. Method n = 50 n = 100 Method n = 50 n = 100 Obj. Gap Time Obj. Gap Time Obj. Gap Time Obj. 
Gap Time CVRP HGS 10.334 * 4.6m 15.504 * 9.1m VRPTW HGS 14.509 * 8.4m 24.339 * 19.6m LKH3 10.346 0.115% 9.9m 15.590 0.556% 18.0m LKH3 14.607 0.664% 5.5m 24.721 1.584% 7.8m OR-Tools 10.540 1.962% 10.4m 16.381 5.652% 20.8m OR-Tools 14.915 2.694% 10.4m 25.894 6.297% 20.8m OR-Tools (x10) 10.418 0.788% 1.7h 15.935 2.751% 3.5h OR-Tools (x10) 14.665 1.011% 1.7h 25.212 3.482% 3.5h POMO 10.418 0.806% 3s 15.734 1.488% 9s POMO 14.940 2.990% 3s 25.367 4.307% 11s POMO-MTL 10.437 0.987% 3s 15.790 1.846% 9s POMO-MTL 15.032 3.637% 3s 25.610 5.313% 11s MVMoE/4E 10.428 0.896% 4s 15.760 1.653% 11s MVMoE/4E 14.999 3.410% 4s 25.512 4.903% 12s MVMoE/4E-L 10.434 0.955% 4s 15.771 1.728% 10s MVMoE/4E-L 15.013 3.500% 3s 25.519 4.927% 11s OVRP LKH3 6.511 0.198% 4.5m 9.828 * 5.3m VRPL LKH3 10.571 0.790% 7.8m 15.771 * 16.0m OR-Tools 6.531 0.495% 10.4m 10.010 1.806% 20.8m OR-Tools 10.677 1.746% 10.4m 16.496 4.587% 20.8m OR-Tools (x10) 6.498 * 1.7h 9.842 0.122% 3.5h OR-Tools (x10) 10.495 * 1.7h 16.004 1.444% 3.5h POMO 6.609 1.685% 2s 10.044 2.192% 8s POMO 10.491 -0.008% 2s 15.785 0.093% 9s POMO-MTL 6.671 2.634% 2s 10.169 3.458% 8s POMO-MTL 10.513 0.201% 2s 15.846 0.479% 9s MVMoE/4E 6.655 2.402% 3s 10.138 3.136% 10s MVMoE/4E 10.501 0.092% 3s 15.812 0.261% 10s MVMoE/4E-L 6.665 2.548% 3s 10.145 3.214% 9s MVMoE/4E-L 10.506 0.131% 3s 15.821 0.323% 10s VRPB OR-Tools 8.127 0.989% 10.4m 12.185 2.594% 20.8m OVRPTW OR-Tools 8.737 0.592% 10.4m 14.635 1.756% 20.8m OR-Tools (x10) 8.046 * 1.7h 11.878 * 3.5h OR-Tools (x10) 8.683 * 1.7h 14.380 * 3.5h POMO 8.149 1.276% 2s 11.993 0.995% 7s POMO 8.891 2.377% 3s 14.728 2.467% 10s POMO-MTL 8.182 1.684% 2s 12.072 1.674% 7s POMO-MTL 8.987 3.470% 3s 15.008 4.411% 10s MVMoE/4E 8.170 1.540% 3s 12.027 1.285% 9s MVMoE/4E 8.964 3.210% 4s 14.927 3.852% 11s MVMoE/4E-L 8.176 1.605% 3s 12.036 1.368% 8s MVMoE/4E-L 8.974 3.322% 4s 14.940 3.941% 10s based on both problem-level and node-level representations. Overall, the hierarchical gating improves the computational efficiency with a minor loss on the empirical performance. To balance the efficiency and performance of MVMoE, we use the base gating in the encoder and its hierarchical gating counterpart in the decoder. Note that the hierarchical gating is applicable to different gating algorithms, such as the input-choice gating (Shazeer et al., 2017) and expert-choice gating (Zhou et al., 2022). We also explore a more advanced gating algorithm (Puigcerver et al., 2024) for reducing the number of routed nodes and thus the computational complexity. But its empirical performance is unsatisfactory in the VRP domain (see Section 5.3). 5. Experiments In this section, we empirically verify the superiority of the proposed MVMoE, and provide insights into the application of MoEs to solve VRPs. We consider 16 VRP variants with five constraints. Due to page limit, we present more experimental results in Appendix C. All experiments are conducted on a machine with NVIDIA Ampere A100-80GB GPU cards and AMD EPYC 7513 CPU at 2.6GHz. Baselines. Traditional solvers: We employ HGS (Vidal, 2022) to solve CVRP and VRPTW instances with default hyperparameters (i.e., the maximum number of iterations without improvement is 20000). We run LKH3 (Helsgaun, 2017) to solve CVRP, OVRP, VRPL and VRPTW instances with 10000 trails and 1 run. OR-Tools (Furnon & Perron, 2023) is an open source solver for complex optimization problems. It is more versatile than LKH and HGS, and can solve all 16 VRP variants considered in this paper. 
We use the parallel cheapest insertion as the first solution strategy, and use the guided local search as the local search strategy in OR-Tools. For n = 50/100, we set the search time limit as 20s/40s to solve an instance, and also provide its results given 200s/400s (i.e., OR-Tools (x10)). For all traditional solvers, we use them to solve 32 instances in parallel on 32 CPU cores following Kool et al. (2018). Neural solvers: We compare our method to POMO (Kwon et al., 2020) and POMO-MTL (Liu et al., 2024). While POMO is trained on each single VRP, POMO-MTL is trained on multiple VRPs by multi-task learning. Note that POMO-MTL is the dense model counterpart of MVMoE, which is structured by dense layers (e.g., FFNs) rather than sparse MoEs. In specific, POMO-MTL and MVMoE/4E possess 1.25M and 3.68M parameters, but they activate a similar number of parameters for each single input. Training. We follow most setups in (Kwon et al., 2020). 1) For all neural solvers: Adam optimizer is used with the learning rate of 1e \u22124, the weight decay of 1e \u22126, and the batch size of 128. The model is trained for 5000 epochs, with each containing 20000 training instances (i.e., 100M training instances in total). The learning rate is decayed by 10 for the last 10% training instances. We consider two problem scales n \u2208{50, 100} during training, according to (Liu et al., 2024). 2) For multi-task solvers: The training problem set includes CVRP, OVRP, VRPB, VRPL, VRPTW, and OVRPTW (see Appendix C.1 for further discussions). In each batch of training, we randomly sample a problem from the set and generate its instances. Please refer to Appendix A for details of the generation procedure. 3) For our method: We employ m = 4 experts with K = \u03b2 = 2 in each MoE layer, and set the the weight \u03b1 of the auxiliary loss Lb as 0.01. The default gating mechanism of MVMOE/4E is the node-level input-choice gating in both the encoder 6 MVMoE: Multi-Task Vehicle Routing Solver with Mixture-of-Experts Table 2. Zero-shot generalization on 1K test instances of unseen VRPs. * represents 0.000%, with which the gaps are computed. Method n = 50 n = 100 Method n = 50 n = 100 Obj. Gap Time Obj. Gap Time Obj. Gap Time Obj. 
Gap Time OVRPB OR-Tools 5.764 0.332% 10.4m 8.522 1.852% 20.8m OVRPL OR-Tools 6.522 0.480% 10.4m 9.966 1.783% 20.8m OR-Tools (x10) 5.745 * 1.7h 8.365 * 3.5h OR-Tools (x10) 6.490 * 1.7h 9.790 * 3.5h POMO-MTL 6.116 6.430% 2s 8.979 7.335% 8s POMO-MTL 6.668 2.734% 2s 10.126 3.441% 9s MVMoE/4E 6.092 5.999% 3s 8.959 7.088% 9s MVMoE/4E 6.650 2.454% 3s 10.097 3.148% 10s MVMoE/4E-L 6.122 6.522% 3s 8.972 7.243% 9s MVMoE/4E-L 6.659 2.597% 3s 10.106 3.244% 9s VRPBL OR-Tools 8.131 1.254% 10.4m 12.095 2.586% 20.8m VRPBTW OR-Tools 15.053 1.857% 10.4m 26.217 2.858% 20.8m OR-Tools (x10) 8.029 * 1.7h 11.790 * 3.5h OR-Tools (x10) 14.771 * 1.7h 25.496 * 3.5h POMO-MTL 8.188 1.971% 2s 11.998 1.793% 8s POMO-MTL 16.055 8.841% 3s 27.319 7.413% 10s MVMoE/4E 8.172 1.776% 3s 11.945 1.346% 9s MVMoE/4E 16.022 8.600% 4s 27.236 7.078% 11s MVMoE/4E-L 8.180 1.872% 3s 11.960 1.473% 9s MVMoE/4E-L 16.041 8.745% 4s 27.265 7.190% 10s VRPLTW OR-Tools 14.815 1.432% 10.4m 25.823 2.534% 20.8m OVRPBL OR-Tools 5.771 0.549% 10.4m 8.555 2.459% 20.8m OR-Tools (x10) 14.598 * 1.7h 25.195 * 3.5h OR-Tools (x10) 5.739 * 1.7h 8.348 * 3.5h POMO-MTL 14.961 2.586% 3s 25.619 1.920% 12s POMO-MTL 6.104 6.306% 2s 8.961 7.343% 8s MVMoE/4E 14.937 2.421% 4s 25.514 1.471% 13s MVMoE/4E 6.076 5.843% 3s 8.942 7.115% 9s MVMoE/4E-L 14.953 2.535% 4s 25.529 1.545% 12s MVMoE/4E-L 6.104 6.310% 3s 8.957 7.300% 9s OVRPBTW OR-Tools 8.758 0.927% 10.4m 14.713 2.268% 20.8m OVRPLTW OR-Tools 8.728 0.656% 10.4m 14.535 1.779% 20.8m OR-Tools (x10) 8.675 * 1.7h 14.384 * 3.5h OR-Tools (x10) 8.669 * 1.7h 14.279 * 3.5h POMO-MTL 9.514 9.628% 3s 15.879 10.453% 10s POMO-MTL 8.987 3.633% 3s 14.896 4.374% 11s MVMoE/4E 9.486 9.308% 4s 15.808 9.948% 11s MVMoE/4E 8.966 3.396% 4s 14.828 3.903% 12s MVMoE/4E-L 9.515 9.630% 3s 15.841 10.188% 10s MVMoE/4E-L 8.974 3.488% 4s 14.839 3.971% 10s VRPBLTW OR-Tools 14.890 1.402% 10.4m 25.979 2.518% 20.8m OVRPBLTW OR-Tools 8.729 0.624% 10.4m 14.496 1.724% 20.8m OR-Tools (x10) 14.677 * 1.7h 25.342 * 3.5h OR-Tools (x10) 8.673 * 1.7h 14.250 * 3.5h POMO-MTL 15.980 9.035% 3s 27.247 7.746% 11s POMO-MTL 9.532 9.851% 3s 15.738 10.498% 10s MVMoE/4E 15.945 8.775% 4s 27.142 7.332% 12s MVMoE/4E 9.503 9.516% 4s 15.671 10.009% 11s MVMoE/4E-L 15.963 8.915% 4s 27.177 7.473% 11s MVMoE/4E-L 9.518 9.682% 4s 15.706 10.263% 10s 0 2 4 6 8 10 Epoch 8.3 8.4 8.5 8.6 8.7 8.8 8.9 9.0 Gap (%) VRPBLTW POMO-MTL MVMoE/4E MVMoE/4E-L 0 2 4 6 8 10 Epoch 9.0 9.2 9.4 9.6 9.8 Gap (%) OVRPBLTW POMO-MTL MVMoE/4E MVMoE/4E-L Figure 5. Few-shot generalization on unseen VRPs. and decoder layers. MVMoE/4E-L is a computationally light version that replaces the input-choice gating with its hierarchical gating counterpart in the decoder. Inference. For all neural solvers, we use greedy rollout with x8 instance augmentation following Kwon et al. (2020). We report the average results (i.e., objective values and gaps) over the test dataset that contains 1K instances, and the total time to solve the entire test dataset. The gaps are computed with respect to the results of the best-performing traditional VRP solvers (i.e., * in Tables 1 and 2). 5.1. Empirical Results Performance on Trained VRPs. We evaluate all methods on 6 trained VRPs and gather all results in Table 1. The single-task neural solver (i.e., POMO) achieves better performance than multi-task neural solvers on each single problem, since it is restructured and retrained on each VRP independently. 
However, its average performance over all trained VRPs is quite inferior as shown in Table 4 in Appendix C, since each trained POMO is overfitted to a specific VRP. For example, the average performance of POMO solely trained on CVRP is 16.815%, while POMO-MTL and MVMoE/4E achieve 2.102% and 1.925%, respectively. Notably, our neural solvers consistently outperform POMO-MTL. MVMoE/4E performs slightly better than MVMoE/4E-L at the expense of more computation. Despite that, MVMoE/4E-L exhibits stronger out-of-distribution generalization capability than MVMoE/4E (see Tables 7 and 8 in Appendix C). Generalization on Unseen VRPs. We evaluate multi-task solvers on 10 unseen VRP variants. 1) Zero-shot generalization: We directly test the trained solvers on unseen VRPs. The results in Table 2 reveal that the proposed MVMoE significantly outperforms POMO-MTL across all VRP variants. 2) Few-shot generalization: We also consider the few-shot setting on n = 50, where a trained solver is fine-tuned on the target VRP using 10K instances (0.01% of total training instances) in each epoch. Without loss of generality, we conduct experiments on VRPBLTW and OVRPBLTW following the training setups. The results in Fig. 5 showcase MVMoE generalizes more favorably than POMO-MTL. 5.2. Ablation on MoEs Here we explore the effect of different MoE settings on the zero-shot generalization of neural solvers, and provide insights on how to effectively apply MoEs to solve VRPs. Due to the fast convergence, we reduce the number of epochs to 2500 on VRPs of the size n = 50, while leaving other setups unchanged. We set MVMoE/4E as the default baseline, 7 MVMoE: Multi-Task Vehicle Routing Solver with Mixture-of-Experts None Raw Enc_Every_Two Enc_Last_Two Enc_All Dec Enc_All_Dec Position of MoEs 6.1 6.2 6.3 6.4 6.5 6.6 6.7 6.8 Gap (%) (a) POMO-MTL_50M MVMoE/4E_50M MVMoE/8E_50M MVMoE/16E_50M MVMoE/8E_100M MVMoE/16E_200M Number of Experts 5.6 5.7 5.8 5.9 6.0 6.1 6.2 6.3 6.4 Gap (%) (b) POMO-MTL Node-Input Node-Expert Node-Random Instance-Input Instance-Expert Instance-Random Problem-Random Gating Mechanism 6.0 6.5 7.0 7.5 8.0 8.5 9.0 9.5 10.0 Gap (%) (c) POMO-MTL Input_Choice Soft MoEs Neural Hierarchical Heuristic Hierarchical Advanced Gating 5.7 5.8 5.9 6.0 6.1 6.2 6.3 Gap (%) (d) POMO-MTL Input_Choice Soft MoEs Neural Hierarchical Heuristic Hierarchical Training Efficiency 0 20 40 60 80 100 120 140 160 Time (hr) (e) n = 50 n = 100 Figure 6. Left three panels: The effect of MoE settings on the average zero-shot generalization performance (a) the position of MoEs; (b) the number of experts; (c) the gating mechanism. Right two panels: Further analyses (d) average zero-shot generalization performance of each method employing various gating algorithms in the decoder; (e) training efficiency of each gating algorithm. and ablate on different components of MoEs below. Position of MoEs. We consider three positions to apply MoEs in neural solvers: 1) Raw feature processing (Raw): The linear layer, which projects raw features into initial embeddings, are replaced by MoEs. 2) Encoder (Enc): The FFN in an encoder layer is replaced by MoEs. Typically, MoEs are widely used in every-two or last-two layers (i.e., every or last two layers with even indices \u2113\u2208[0, N \u2212 1]) (Riquelme et al., 2021). Besides, we further attempt to use MoEs in all encoder layers. 3) Decoder (Dec): The final linear layer of the multi-head attention is replaced by MoEs in the decoder. We show the average performance over 10 unseen VRPs in Fig. 6(a). 
The results reveal that applying MoEs at the shallow layer (e.g., Raw) may worsen the model performance, while using MoEs in all encoder layers (Enc_All) or decoder (Dec) can benefit the zero-shot generalization. Therefore, in this paper, we employ MoEs in both encoder and decoder to pursue a strong unified model architecture to solve various VRPs. Number of Experts. We increase the number of experts in each MoE layer to 8 and 16, and compare the derived MVMoE/8E/16E models to MVMoE/4E. We first train all models using the same number (50M) of instances. After that, we also train MVMoE/8E/16E with more data and computation to explore potential better results, based on the scaling laws (Kaplan et al., 2020). In specific, we provide MVMoE/8E/16E with more data by using larger batch sizes, which linearly scale up against the number of experts (i.e., MVMoE/4E/8E/16E are trained on 50M/100M/200M instances with batch sizes 128/256/512, respectively). The results in Fig. 6(b) show that increasing the number of experts with more training data further unleashes the power of MVMoE, indicating the efficacy of MoEs in solving VRPs. Gating Mechanism. We investigate the effect of different gating levels and algorithms, including three levels (i.e., node-level, instance-level and problem-level) and three algorithms (i.e., input-choice, expert-choice and random gatings), with their details presented in Appendix B. As shown in Fig. 6(c), the node-level input-choice gating performs the best, while the node-level expert-choice gating performs the worst. Interestingly, we observe that the expert-choice gating in the decoder makes MVMoE hard to be optimized. It may suggest that each gating algorithm could have its most suitable position to serve MoEs. However, after an attempt to tune this configuration (i.e., by using MoEs only in the encoder), its performance is still inferior to the baseline, with an average gap of 7.190% on unseen VRPs. 5.3. Additional Results We further provide experiments and discussions on more advanced gating algorithms, training efficiency, benchmark performance, and scalability. We refer readers to more empirical results (e.g., sensitivity analyses) in Appendix C. Advanced Gating. Besides the input-choice and expertchoice gating algorithms evaluated above, we further consider soft MoEs (Puigcerver et al., 2024), which is a recent advanced gating algorithm. Specifically, it performs an implicit soft assignment by distributing K slots (i.e., convex combinations of all inputs) to each expert, rather than a hard assignment between inputs and experts as done by the conventional sparse and discrete gating networks. Since only K (e.g., 1 or 2) slots are distributed to each expert, it can save much computation. We train MVMoE on n = 50 by using node-level soft MoEs in the decoder, following training setups. We also show the result of employing heuristic (random) hierarchical gating in the decoder. However, their results are unsatisfactory as shown in Fig. 6(d). Training Efficiency. Fig. 6(e) shows the training time of employing each gating algorithm in the decoder, combining with their results reported in Fig. 6(d), demonstrating the efficacy of the proposed hierarchical gating in reducing the training overhead with only minor losses in performance. Benchmark Performance. We further evaluate the out-ofdistribution (OOD) generalization performance of all neural solvers on CVRPLIB benchmark instances. Detailed results can be found in Tables 7 and 8 in Appendix C. 
Surprisingly, we observe that MVMoE/4E performs poorly on large-scale 8 MVMoE: Multi-Task Vehicle Routing Solver with Mixture-of-Experts instances (e.g., n > 500). It may be caused by the generalization issue of sparse MoEs when transferring to new distributions or domains, which is still an open question in the MoE literature (Fedus et al., 2022a). In contrast, MVMoE/4E-L mostly outperforms MVMoE/4E, demonstrating more favourable potential of the hierarchical gating in promoting the OOD generalization capability. It is worth noting that all neural solvers are only trained on the simple uniformly distributed instances with the size n = 100. Embracing more varied problem sizes (cross-size) and attribute distributions (cross-distribution) into the multi-task training (cross-problem) may further consolidate their performance. Scalability. Given that supervised learning based approaches appear to be more scalable than RL-based approaches in the current literature, we try to build upon a more scalable method, i.e., LEHD (Luo et al., 2023). Concretely, we train a dense model LEHD and a light sparse model with 4 experts LEHD/4E-L on CVRP. The training setups are kept the same as Luo et al. (2023), except that we train all models for only 20 epochs for the training efficiency. We use the hierarchical MoE in each decoder layer of LEHD/4E-L. The results are shown in Table 8, which demonstrates the potential of MoE as a general idea that can further benefit recent scalable methods. Moreover, during the solution construction process, recent works (Drakulic et al., 2023; Gao et al., 2023) typically constrain the search space within a neighborhood of the currently selected node, which is shown to be effective in handling large-scale instances. Integrating MVMoE with these simple yet effective techniques may further improve large-scale performance. 6. Conclusion Targeting a more generic and powerful neural solver for solving VRPs, we propose a multi-task vehicle routing solver with MoEs (MVMoE), which can solve a range of VRPs concurrently, even in a zero-shot manner. We provide valuable insights on how to apply MoEs in neural VRP solvers, and propose an effective and efficient hierarchical gating mechanism. Empirically, MVMoE demonstrates strong generalization capability on zero-shot, few-shot settings, and real-world benchmark. Despite this paper presents the first attempt towards a large VRP model, the scale of parameters is still far less than LLMs. We leave 1) the development of scalable MoE-based models in solving large-scale VRPs, 2) the venture of generic representations for different problems, 3) the exploration of interpretability of gating mechanisms (Nguyen et al., 2023; 2024), and 4) the investigation of scaling laws in MoEs (Krajewski et al., 2024) to the future work. We hope our work benefit the COP community in developing large optimization (or foundation) models2. 2https://github.com/ai4co/awesome-fm4co Acknowledgements This research is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG3-RP-2022-031), the Singapore Ministry of Education (MOE) Academic Research Fund (AcRF) Tier 1 grant, the National Natural Science Foundation of China (Grant 62102228), and the Natural Science Foundation of Shandong Province (Grant ZR2021QF063). We would like to thank the anonymous reviewers and (S)ACs of ICML 2024 for their constructive comments and dedicated service to the community. 
Jianan Zhou would like to personally express deep gratitude to his grandfather, Jinlong Hu, for his meticulous care and love during the last 26 years. Eternal easy rest in sweet slumber. Impact Statements This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here."
17
+ }
intro_8K/test_introduction_short_2405.01051v1.json ADDED
@@ -0,0 +1,17 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01051v1",
3
+ "title": "Generating User Experience Based on Personas with AI Assistants",
4
+ "abstract": "Traditional UX development methodologies focus on developing ``one size fits\nall\" solutions and lack the flexibility to cater to diverse user needs. In\nresponse, a growing interest has arisen in developing more dynamic UX\nframeworks. However, existing approaches often cannot personalise user\nexperiences and adapt to user feedback in real-time. Therefore, my research\nintroduces a novel approach of combining Large Language Models and personas, to\naddress these limitations. The research is structured around three areas: (1) a\ncritical review of existing adaptive UX practices and the potential for their\nautomation; (2) an investigation into the role and effectiveness of personas in\nenhancing UX adaptability; and (3) the proposal of a theoretical framework that\nleverages LLM capabilities to create more dynamic and responsive UX designs and\nguidelines.",
5
+ "authors": "Yutan Huang",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "cs.SE",
9
+ "cats": [
10
+ "cs.SE",
11
+ "cs.HC"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "LLM Fairness",
15
+ "gt": "User Interface (UI) and User Experience (UX) are integral components in software engineering (SE) that serve to bridge the gap between human requirements and system functionalities. UI and UX aim to optimise the interaction between the computer and the human via the interface to ensure ease of use and intuitiveness. A well-implemented UI/UX not only diminishes the cognitive load on the user but also reduces the time and effort required for users to understand and navigate through a system [8]. Hence, properly designed UI/UX significantly affects system efficiency, user satisfaction, and overall performance [20]. In the rapidly advancing technological landscape, users\u2019 desire for customised options and personalised experiences has surged, emphasising the importance of customisable and adaptive UX [23]. In addition, there is a growing recognition of the necessity for human-centric requirements that cater to individuals with specific needs, such as those with disabilities or diverse backgrounds [5]. Customizable UX allows users to control and tailor the design based on their preferences. It represents an important step toward user-centric interfaces but often fails to deliver a truly personalised experience [6, 12]. Adaptive UX goes beyond customisation, employing the ability to understand user behaviours, preferences and context [14]. Consequently, the system proactively alters the elements of UI to serve users better, e.g., visual appearance, typography, colour schemes, iconography and interactive elements like buttons, forms, and navigation menus [9]. While the idea of a truly adaptive system seems appealing, its practical implementation is challenging due to the diverse needs of users. Additionally, manually designing such a system is laborious, compounded by the need to maintain consistency due to business requirements, e.g., branding and aesthetics. Personas are often used in the field of UX as archetypical user profiles to inform designers about specific user behaviours, needs and goals from the system [18]. Their strength lies in providing a clear, focused understanding of end-users, especially when direct access to human beneficiaries is limited, enabling designers to make informed decisions. The recent advances in artificial intelligence (AI) techniques offer great potential for adaptive UI and addressing the challenges mentioned above via automation. Large Language Models (LLMs) are the recent successors in the area of AI techniques that have shown considerable promise in automating different SE tasks, e.g., code generation [10], requirements management [2], test generation [16], and persona generation [25, 26]. LLMs, trained on vast amounts of data, are excellent candidates for generating adaptive designs due to their ability to understand context, infer user intentions, and generate coherent responses [4]. 
This PhD research intends to explore the potential of LLMs combined with rich personas, which are more comprehensive and detailed than standard personas, to develop adaptive UX for diverse users. Specifically, I aim to create an adaptive UX framework that tailors user interfaces according to individual preferences and needs, focusing on the design, adapting and leveraging personas (and user requirements). Next, I discuss the related work on adaptive UX and the use of personas (Section 2), and the research plan with research questions (RQs) (Section 3). This PhD project is in the early stages; hence, in Section 4, I discuss the proposed approach and research directions.",
16
+ "main_content": "Adaptive UI/UX design uses a model-based approach as well as an AI-based approach [19, 21]. The model-based approach involves the creation of adaptive designs using architectural models. These models consist of one or multiple layers of architecture that process multimodal data to generate adaptive UXs [11]. This approach primarily focuses on enhancing UX features such as layout, content, and modality; however, while it achieves diversification by leveraging different models, it often lacks the invaluable input of user feedback and iterative refinement derived from legacy systems [1]. Additionally, the methodology for runtime feature selection is often underdeveloped in this approach, which limits its ability to adapt to changing user needs and preferences [7]. This model-based approach seeks to create variations in UX but may fall short in addressing real-time user interactions and feedback [7]. In contrast, the AI-based approach has gained prominence in recent years, capitalizing on the capabilities of AI to generate both text and graphics. Researchers have employed AI tools such as Sketch2Code, MetaMorph, and ChatGPT to dynamically generate UIs based on user interactions and requirements [17, 22]. The use of AI in adaptive UX design introduces a range of possibilities. Yang et al. identified four key channels through which AI augments the value of adaptive UX: self-inferences, world inferences, optimal inferences, and utility inferences. These channels represent AI\u2019s ability to provide users with self-understanding, contextual understanding, optimal solutions, and utility-based responses, significantly enriching the user experience [24]. These four channels serve as foundational concepts for adaptive UX generation with AI and are essential for guiding designers to create more personalized and user-centric interfaces [3]. Despite the potential of AI-based approaches, it\u2019s becoming increasingly evident that solutions utilizing Large Language Models (LLMs) are at the forefront of this technology\u2019s application. These LLMs, which are now among the most commonly implemented forms of AI, heavily rely on the quality of prompts provided to them [15]. In the context of user experience (UX) design, these prompts\u2019 precision and relevance directly impact the outcomes\u2019 quality, as demonstrated in recent studies [13]. Effective prompt engineering is a critical aspect of AI-driven adaptive UX requirements, and it is an area that requires careful consideration and refinement [2]. The model-based and AI-based approaches in adaptive UX design have illustrated diverse possibilities. However, it\u2019s important to note that these approaches commonly lack rigorous evaluation and iterative feedback from users and designers, forming a significant gap in the existing research landscape. This review provides the context for understanding the need for our research, which aims to address these limitations and enhance the field of adaptive UX design by constructing an intelligent User interface that uses ML techniques with a framework to guide experts through the process of creating adaptive UI with user experience. 3 Research Plan The main research aim of this PhD research is to develop a framework for generating adaptive UX using LLMs and personas structured in the following steps (guided by the research questions mentioned under each step). 
Foundational Understanding: How is adaptive UX defined and understood in the current literature? Which UX fragments can be adapted and generated automatically? Role of Personas in Adaptive UX: What are the critical elements within personas that lend themselves to the creation of adaptive UX? Are there gaps or limitations in current persona models that could hinder the development of adaptive UX designs? Role of LLMs in Adaptive UX: To what degree can LLMs contribute to the development of adaptive UX? How do LLMs interpret and utilise persona information to generate UX designs? Which prompting techniques in LLMs yield the best adaptive UX results? Framework Development and Evaluation: Do users and practitioners find the adaptive UX generated by our framework useful? What are the challenges when leveraging LLMs for adaptive UX? 4 Solution Approach Foundational Understanding Systematic Literature Review and UX experiment: My foundational understanding begins with a systematic literature review on adaptive UI/UX, exploring definitions, methods, and applications in academic and professional contexts to identify aspects of UX that have been automated previously. Concurrently, I will conduct experiments to create UI automatically using LLMs, with insights from the literature, to validate my findings and identify potential UI fragments that can be adapted easily (e.g., interface designs, colours, buttons). This will establish a foundation for developing an informed adaptive UI/UX framework. Role of Personas in Adaptive UI/UX Expert Insight and Model Comparison: To figure out the important parts of personas that help create adaptive UI and find any shortcomings in current persona representations, I will find key persona elements related to adaptive UI in practice by interviewing experienced UX designers. I will then compare different representations of persona contents and prioritise what is important to include in a persona for adaptive UX generation. The comparative analysis and interviews in parallel will help refine persona representations and triangulate our findings. Role of LLMs in Adaptive UI/UX Exploring LLM\u2019s Capability in Adaptive UI Creation: I plan to carry out a set of experiments revolving around prompt engineering; an example would be using GPT-model-based LLMs and feeding them user preference and background information with personas. These experiments can examine the effectiveness of LLMs in generating user-tailored designs. Framework Development and Evaluation Evolving UI/UX Framework through User and Practitioner Feedback: I aim to develop a UX framework based on LLMs to guide adaptive UX creation. This framework will be dynamic, evolving through iterative enhancements for robustness and effective adaptive UX design. Leveraging LLM capabilities, I seek to establish a foundational, adaptable tool for UX development. Assessment and Refinement of the UI/UX Framework through User-Centric Feedback: The evaluation of the adaptive UI design and the UI/UX framework will involve engaging users and experts to interact with and test the developed UIs by using them as a daily routine and providing users with tasks to complete. Their feedback will inform the integration of prompt engineering into our framework and enhance a smooth transition from a design-time to a run-time approach. 
5 Conclusion In conclusion, the research aims to address a gap in adaptive UX design by integrating LLMs and personas, striking a balance between technical capabilities and a human-centric approach. The challenge lies in aligning LLMs\u2019 technical prowess with the nuanced insights of personas. The expected impact includes enhanced adaptability and personalization in UX designs, and setting new standards in UX methodology. Acknowledgement Yutan is supported by ARC Laureate Fellowship FL190100035."
17
+ }
intro_8K/test_introduction_short_2405.01063v1.json ADDED
@@ -0,0 +1,18 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01063v1",
3
+ "title": "Fair Recommendations with Limited Sensitive Attributes: A Distributionally Robust Optimization Approach",
4
+ "abstract": "As recommender systems are indispensable in various domains such as job\nsearching and e-commerce, providing equitable recommendations to users with\ndifferent sensitive attributes becomes an imperative requirement. Prior\napproaches for enhancing fairness in recommender systems presume the\navailability of all sensitive attributes, which can be difficult to obtain due\nto privacy concerns or inadequate means of capturing these attributes. In\npractice, the efficacy of these approaches is limited, pushing us to\ninvestigate ways of promoting fairness with limited sensitive attribute\ninformation.\n Toward this goal, it is important to reconstruct missing sensitive\nattributes. Nevertheless, reconstruction errors are inevitable due to the\ncomplexity of real-world sensitive attribute reconstruction problems and legal\nregulations. Thus, we pursue fair learning methods that are robust to\nreconstruction errors. To this end, we propose Distributionally Robust Fair\nOptimization (DRFO), which minimizes the worst-case unfairness over all\npotential probability distributions of missing sensitive attributes instead of\nthe reconstructed one to account for the impact of the reconstruction errors.\nWe provide theoretical and empirical evidence to demonstrate that our method\ncan effectively ensure fairness in recommender systems when only limited\nsensitive attributes are accessible.",
5
+ "authors": "Tianhao Shi, Yang Zhang, Jizhi Zhang, Fuli Feng, Xiangnan He",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "cs.IR",
9
+ "cats": [
10
+ "cs.IR",
11
+ "cs.CY",
12
+ "cs.LG"
13
+ ],
14
+ "label": "Original Paper",
15
+ "paper_cat": "LLM Fairness",
16
+ "gt": "Recommender system has become a vital technology in various Web applications, including job searching [20], online advertising [39], and e-commerce [33]. By providing personalized information filter- ing based on user interests, recommender systems have significant social influence [44]. Unfair recommendations that discriminate against specific user groups [35] would negatively impact the Web and society. For instance, previous research [20] on job recommen- dations found that women were exposed to fewer advertisements for high-paying jobs and career coaching services compared to men, perpetuating gender pay gap in the workforce. Clearly, it is of para- mount importance to maintain fairness in recommender systems to ensure their trustworthiness and regulatory compliance [32]. The existing research to optimize the fairness of recommenda- tions can be further classified into four main categories: \u2022 Data-oriented methods [6, 30], which adjusts the training data according to sensitive attributes by resampling [6] and adding antidote data [30], etc. \u2022 Adversarial learning-based methods [39, 45], which learn fair representations with a min-max game to prevent encoding of sensitive attributes in user embeddings or graph structures. \u2022 Regularization-based methods [16, 24, 41, 47], which incorporate fairness metrics such as absolute difference in utility between groups with different sensitive attributes into the objective func- tion [16, 41, 47] or the reward function [24]. \u2022 Re-ranking methods [22, 40], which directly adjusts recommenda- tion results to enhance fairness across user groups with different sensitive attributes. We contend that the current methods are significantly con- strained by their reliance on full access to sensitive attributes [35], a condition that is often difficult to satisfy in real-world scenarios. Firstly, users have the entitlement, as per regulatory frameworks arXiv:2405.01063v1 [cs.IR] 2 May 2024 SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Tianhao Shi, et al. like the General Data Protection Regulation (GDPR) [32] of the European Union, to decline the disclosure of their sensitive data at any moment. For instance, only 17% of participants reported willing of sharing their income information in electronic commerce [1]. Moreover, many online retail and advertising recommendations can be accessed without registering or submitting personal infor- mation [33]. Consequently, a conundrum emerges: how can fair recommendation be obtained when only limited sensitive attributes are available [35]? To address this issue, a default choice is reconstructing the miss- ing sensitive attributes of users from the available personal informa- tion such as historical interactions [25]. However, reconstruction errors are inevitable due to the inherent complexity and noise of user information in recommender systems. Moreover, striving for perfectly reconstructed attributes can potentially violate user privacy, raising ethical concerns and being constrained by legal lim- itations. For instance, GDPR [32] mandates obtaining user consent for modeling user profiles (including sensitive attributes). Conse- quently, a subset of users may not have their sensitive attributes reconstructed1. The aforementioned challenges underscore the sig- nificance of devising alternative approaches that are robust to the reconstruction errors of sensitive attributes. 
To this end, we propose a new framework to pursue fair recom- mendations under reconstructed sensitive attributes with errors called Distributionally Robust Fair Optimization (DRFO). Instead of optimizing fairness over the reconstructed sensitive attributes, DRFO minimizes the worst-case unfairness over an ambiguity set of all potential distributions of missing sensitive attributes to ac- count for the impact of reconstruction errors. Furthermore, for users who forbid the reconstruction of their sensitive attributes, DRFO can provide fair recommendations for them by considering a larger ambiguity set of distributions. We theoretically demonstrate that DRFO ensures recommendation fairness in the face of sensi- tive attribute reconstruction errors, even in the absence of such reconstruction. Extensive experiments on two datasets verify the effectiveness of our approaches. The main contributions are summarized as follows: \u2022 We propose a new framework for optimizing the user-side fair- ness in recommender systems with limited sensitive attributes. \u2022 We provide theoretical evidence that our method can ensure fair recommendations in the face of sensitive attribute reconstruction errors, even in the absence of such reconstruction. \u2022 We conduct extensive experiments on two datasets, validating the rationality and effectiveness of the proposed framework.",
+ "main_content": "In this study, we aim to achieve fairness in recommender systems with limited sensitive attributes. We consider the widely used Demographic Parity (DP) as an exemplary fairness criterion to investigate this problem. In the following, we first introduce the notation used in this paper and then provide a brief overview of DP fairness. 2.1 Notation In this study, we use uppercase letters (e.g., \ud835\udc45) to represent random variables, lowercase letters (e.g., \ud835\udc5f) for specific values of these 1This can also be regarded as a special scenario with significant reconstruction errors. variables, and calligraphic fonts (e.g., R) to represent the sample space of the variable. Let \ud835\udc48, \ud835\udc49, \ud835\udc46, and \ud835\udc45denote the user, item, user sensitive attribute, and rating, respectively. Let D denote the historical data. Each sample within D is denoted as (\ud835\udc62, \ud835\udc63,\ud835\udc60,\ud835\udc5f), where \ud835\udc62\u2208U, \ud835\udc63\u2208V, \ud835\udc5f\u2208R, and \ud835\udc60\u2208S. In this work, we consider the binary rating, i.e., R = {0, 1}. Additionally, we assume the sensitive feature to be binary but potentially unknown for a subset of users. We split D into two parts: D\ud835\udc58with known sensitive features and D\ud835\udc5awith missing sensitive features, and we have D = D\ud835\udc58\u222aD\ud835\udc5a and D\ud835\udc58\u2229D\ud835\udc5a= \u2205. For convenience, we denote the samples in D with the sensitive feature \ud835\udc46= \ud835\udc60as D (\ud835\udc60), similar for D (\ud835\udc60) \ud835\udc58 and D (\ud835\udc60) \ud835\udc5a. 2.2 Fairness 2.2 Fairness Demographic Parity [5] is a widely studied fairness criterion in recommendation [16, 47]. DP\u2019s definition is the model\u2019s rating prediction \u02c6 \ud835\udc45should be independent of the sensitive attribute\ud835\udc46. A model achieving DP fairness would generate recommendations without relying on \ud835\udc46, thereby satisfying legal requirements or user demands against discrimination on model output [16]. We follow previous work [47] to quantify DP with the mean absolute difference (MAD) between ratings of different groups: \ufffd \ufffdE \ufffd\u02c6 \ud835\udc45|\ud835\udc46= 0 \ufffd \u2212E \ufffd\u02c6 \ud835\udc45|\ud835\udc46= 1 \ufffd\ufffd \ufffd, (1) where E \ufffd\u02c6 \ud835\udc45|\ud835\udc46= 0 \ufffd and E \ufffd\u02c6 \ud835\udc45|\ud835\udc46= 1 \ufffd denote the expectation of prediction \u02c6 \ud835\udc45over groups with \ud835\udc46= 0 and \ud835\udc46= 1, respectively. A diminished \ufffd \ufffdE \ufffd\u02c6 \ud835\udc45|\ud835\udc46= 0 \ufffd \u2212E \ufffd\u02c6 \ud835\udc45|\ud835\udc46= 1 \ufffd\ufffd \ufffd, (1) where E \ufffd\u02c6 \ud835\udc45|\ud835\udc46= 0 \ufffd and E \ufffd\u02c6 \ud835\udc45|\ud835\udc46= 1 \ufffd denote the expectation of prediction \u02c6 \ud835\udc45over groups with \ud835\udc46= 0 and \ud835\udc46= 1, respectively. A diminished MAD level signifies a heightened degree of DP fairness. Fair learning. To achieve DP fairness, we could take the regularizationbased method [41], which directly incorporates the MAD metric into the training objective. Formally, \ufffd \ufffd \ufffd \ufffd \ufffd \ufffd\ufffd \ufffd ng objective. 
Formally, $\min_{\theta} \; L(\theta) + \lambda \big| \mathbb{E}_{\mathcal{D}^{(0)}}[\hat{R}] - \mathbb{E}_{\mathcal{D}^{(1)}}[\hat{R}] \big|$, (2) where $\lambda$ is a hyper-parameter that controls the strength of the fairness regularization term, $\mathbb{E}_{\mathcal{D}^{(s)}}[\hat{R}]$ is the average predicted rating over $\mathcal{D}^{(s)}$, i.e., $\mathbb{E}_{\mathcal{D}^{(s)}}[\hat{R}] = \frac{1}{|\mathcal{D}^{(s)}|} \sum_{(u,v) \in \mathcal{D}^{(s)}} \hat{r}_{u,v}$, and $L(\theta)$ is a recommendation loss (e.g., the binary cross-entropy loss [12]). Typically, the regularization can be transformed into a set of constraints that minimize the discrepancy between the average predictions of a specific group and the overall average predictions: $\min_{\theta} \; L(\theta)$ s.t. $\mathbb{E}_{\mathcal{D}^{(s)}}[\hat{R}] - \mathbb{E}_{\mathcal{D}}[\hat{R}] = 0, \; s = 0, 1$, (3) where the constraint ensures that the expected predicted rating in $\mathcal{D}^{(s)}$ is equal to the expected predicted rating over the entire dataset $\mathcal{D}$. Preserving any single constraint in Equation (3) is sufficient to promote fairness under the binary-sensitive-attribute scenario, while preserving multiple constraints is intended for non-binary cases. 
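For concreteness, the regularized objective in Equation (2) can be sketched in a few lines of PyTorch-style Python; this is only an illustrative sketch under our own naming (e.g., `dp_regularizer`, and a generic `model(users, items)` returning predicted ratings in [0, 1]), not the paper's implementation:

```python
import torch

def dp_regularizer(preds: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
    # Eq. (2) regularizer: |E[R_hat | S=0] - E[R_hat | S=1]| over the batch.
    return torch.abs(preds[s == 0].mean() - preds[s == 1].mean())

def training_loss(model, users, items, ratings, s, lam=1.0):
    preds = model(users, items)  # predicted ratings, assumed to lie in [0, 1]
    rec_loss = torch.nn.functional.binary_cross_entropy(preds, ratings)  # L(theta)
    return rec_loss + lam * dp_regularizer(preds, s)  # L(theta) + lambda * MAD
```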
3 METHODOLOGY To build a fair recommender system that addresses the challenge of missing sensitive attributes, a seemingly workable solution is to directly apply Fair Learning with Reconstructed Sensitive Attributes (FLrSA). In this section, we outline this solution and highlight its limitations. Subsequently, we introduce the proposed Distributionally Robust Fair Optimization (DRFO) to overcome the impact of reconstruction errors. Finally, we discuss the extension of DRFO to situations where certain users are reluctant to have their sensitive attributes reconstructed. 3.1 FLrSA To achieve fair recommendations in the presence of missing sensitive attributes, a seemingly workable solution involves reconstructing the missing attributes and subsequently applying fair learning methods based on the reconstructed sensitive attributes. Specifically, we can follow the two steps below: Step 1: Sensitive attribute reconstruction. Previous research has shown that user sensitive attributes can be reconstructed from available user information, such as historical interactions [33, 36] and zip codes [7]. Therefore, we can train a classifier over $\mathcal{D}_k$ to predict the missing user sensitive attributes in $\mathcal{D}_m$. Let $\hat{S} \in \{0, 1\}$ denote the reconstructed sensitive attribute. Subsequently, we can incorporate the reconstructed sensitive attributes into $\mathcal{D}_m$ and generate a new dataset $\hat{\mathcal{D}}_m$. Step 2: Fair learning. Next, we perform fair learning over $\mathcal{D}_k$ and $\hat{\mathcal{D}}_m$ based on Equation (3), which is reformulated as: $\min_{\theta} \; L(\theta)$ s.t. $\eta_k^{(s)} \mathbb{E}_{\mathcal{D}_k^{(s)}}[\hat{R}] + \eta_m^{(s)} \mathbb{E}_{\hat{\mathcal{D}}_m^{(s)}}[\hat{R}] - \mathbb{E}_{\mathcal{D}}[\hat{R}] = 0, \; s = 0, 1$, (4) where $\hat{\mathcal{D}}_m^{(s)}$ is the subset of $\hat{\mathcal{D}}_m$ with the reconstructed attribute $\hat{S} = s$, $\mathbb{E}_{\mathcal{D}_k^{(s)}}[\hat{R}]$ is the average predicted rating over $\mathcal{D}_k^{(s)}$, and $\eta_k^{(s)} = \frac{|\mathcal{D}_k^{(s)}|}{|\mathcal{D}_k^{(s)}| + |\hat{\mathcal{D}}_m^{(s)}|}$, $\eta_m^{(s)} = \frac{|\hat{\mathcal{D}}_m^{(s)}|}{|\mathcal{D}_k^{(s)}| + |\hat{\mathcal{D}}_m^{(s)}|}$. (5) Limitations. This method relies on the accuracy of the sensitive attribute reconstruction. However, achieving an entirely accurate classifier poses challenges due to the inherent complexity and noise of user information in recommender systems. This leads to reconstruction errors, which could compromise the fairness of recommendations for some users. Furthermore, this method relies on users\u2019 permission for sensitive attribute reconstruction, which may not be obtainable for all users due to legal restrictions. 
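Step 1 can be instantiated with any off-the-shelf classifier. The paper uses the classifier of [36]; purely as an illustration, the toy sketch below substitutes a logistic regression over per-user interaction vectors (all names and the feature choice are our own assumptions, not the authors' setup):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def reconstruct_attributes(X_known: np.ndarray, s_known: np.ndarray,
                           X_missing: np.ndarray) -> np.ndarray:
    # X_*: per-user interaction features (e.g., binarized item histories);
    # s_known: observed sensitive attributes for users in D_k.
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_known, s_known)
    return clf.predict(X_missing)  # reconstructed S_hat for users in D_m
```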
3.2 DRFO Reconstruction errors significantly constrain the vanilla FLrSA, as they introduce a discrepancy between the reconstructed distribution and the unknown true distribution of sensitive attributes. Consequently, relying solely on the reconstructed distribution may compromise fairness performance. Nevertheless, the unknown true distribution lies within the proximity of the reconstructed distribution. By ensuring fairness in the vicinity of the reconstructed distribution, the model can achieve robust fairness for the unknown true distribution [34]. This inspires the development of DRFO, a novel approach to fairness with limited sensitive attributes. DRFO has two main parts: 1) building an ambiguity set that encompasses the unknown true distribution based on the reconstructed sensitive attributes, and 2) ensuring fairness over the entire ambiguity set using DRO. Figure 1 provides an overview of DRFO. (Figure 1: Illustration of FLrSA and DRFO for providing fair recommendations with limited sensitive attributes. After the reconstruction of unknown sensitive attributes, FLrSA directly applies fair learning with the reconstructed distribution. Conversely, DRFO builds an ambiguity set that encompasses the unknown true distribution and guarantees fairness across the entire ambiguity set.) For convenience of discussion, we assume in this subsection that all users grant permission for the reconstruction of their sensitive attributes; the scenario where some users do not permit reconstruction due to privacy concerns is discussed in Section 3.3. Building ambiguity set. An ambiguity set is a set of distributions centered around the reconstructed distribution. We denote it as $B(\rho_s; \hat{Q}^{(s)}) = \{ \tilde{Q}^{(s)} \mid dist(\tilde{Q}^{(s)}, \hat{Q}^{(s)}) \le \rho_s \}$, where $\hat{Q}^{(s)}$ denotes the reconstructed distribution, $dist(\cdot)$ denotes a distance metric, and $\rho_s$ denotes the robust radius. By setting an appropriate value of $\rho_s$, we can ensure that this set encompasses the unknown true distribution $Q^{(s)}$. In the context of facilitating fair learning, as expressed in Equation (4) involving $\mathbb{E}_{\hat{\mathcal{D}}_m^{(s)}}[\hat{R}(U,V)]$, our method focuses on building an ambiguity set of joint distributions of $(U, V)$ conditioned on a specific sensitive attribute value $s$. Specifically, we assume that the distribution of $(U, V)$ conditioned on $S = s$ follows $Q^{(s)}$, and we represent the joint distribution of $(U, V)$ conditioned on the reconstructed sensitive attribute $\hat{S} = s$ as $\hat{Q}^{(s)}$. Robust fair learning. Robust fair learning solves an optimization problem that satisfies the fairness constraints over the whole of $B(\rho_s; \hat{Q}^{(s)})$: $\min_{\theta} \; L(\theta)$ s.t. $\eta_k^{(s)} \mathbb{E}_{\mathcal{D}_k^{(s)}}[\hat{R}] + \eta_m^{(s)} \mathbb{E}_{(U,V) \sim \tilde{Q}^{(s)}}[\hat{R}] - \mathbb{E}_{\mathcal{D}}[\hat{R}] = 0, \; \forall \tilde{Q}^{(s)} \in B(\rho_s; \hat{Q}^{(s)}), \; s = 0, 1$, (6) where $\mathbb{E}_{(U,V) \sim \tilde{Q}^{(s)}}[\hat{R}]$ denotes the expectation of $\hat{R}$ under the distribution $\tilde{Q}^{(s)}$. Here, we transform the fairness constraints in Equation (4) from relying solely on the reconstructed distribution to guaranteeing fairness across the entire ambiguity set2 (2: We do not directly solve the optimization problem under the equality constraints in Equation (6); instead, we convert it into an optimization of the worst-case scenario, as expressed in Equation (11).). In this way, as long as the ambiguity set encompasses the unknown true distribution, we can achieve robust fairness. Apparently, the key of DRFO lies in 1) building an appropriate ambiguity set that encompasses the unknown true distribution, and 2) solving the optimization problem in Equation (6). Next, we elaborate on how to achieve them. 3.2.1 Building Ambiguity Set. 
We now consider establishing an appropriate ambiguity set $B(\rho_s; \hat{Q}^{(s)})$ that encompasses the unknown true distribution $Q^{(s)}$. However, direct estimation of the exact distance between $Q^{(s)}$ and the reconstructed distribution $\hat{Q}^{(s)}$ is impossible, as the true sensitive attributes are unavailable. Instead, we estimate an upper bound on the distance between them. Similar to previous work [34], using the Total Variation (TV) distance [21] ($TV \in [0, 1]$, Appendix A.1) as the distance metric, we could estimate an upper bound based on the error rate of the sensitive attribute reconstruction, as described in the following theorem. Theorem 1. Assuming that the reconstructed sensitive attributes $\hat{S}$ have the same prior distribution as the true sensitive attributes $S$, i.e., $P(\hat{S}) = P(S)$, the TV distance between $Q^{(s)}$ and $\hat{Q}^{(s)}$ is upper-bounded by the probability of incorrectly reconstructing the sensitive attributes, i.e., $TV(Q^{(s)}, \hat{Q}^{(s)}) \le P(S \ne \hat{S} \mid S = s)$. The proof is provided in Appendix A.1. This theorem suggests that, assuming the distribution of the reconstructed sensitive attributes $\hat{S}$ is identical to the true distribution of $S$3, we can create the ambiguity set $B(\rho_s; \hat{Q}^{(s)})$ as follows: $B(\rho_s; \hat{Q}^{(s)}) = \{ \tilde{Q}^{(s)} \mid TV(\tilde{Q}^{(s)}, \hat{Q}^{(s)}) \le \rho_s \}$ with $\rho_s = P(S \ne \hat{S} \mid S = s)$, (7) where $\rho_s$ can be approximately estimated using the validation set4, following existing works [34]. This ambiguity set encompasses the true distribution $Q^{(s)}$, meaning that we successfully build the target ambiguity set with $\hat{Q}^{(s)}$ and $\rho_s$. 
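For illustration, $\rho_s$ in Equation (7) can be approximated on held-out data as the attribute classifier's error rate conditioned on the true attribute value; a minimal sketch (variable names are ours) is:

```python
import numpy as np

def estimate_rho(s_true: np.ndarray, s_pred: np.ndarray, s: int) -> float:
    # rho_s ~= P(S != S_hat | S = s), estimated on a validation split of D_k
    # where the true sensitive attribute is known.
    mask = (s_true == s)
    return float(np.mean(s_pred[mask] != s_true[mask]))
```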
3.2.2 Robust Fair Learning. We next consider solving the optimization problem in Equation (6). Following [13, 28], we convert it into an empirical form (i.e., represent it using the empirical distribution), enabling us to optimize it in a data-driven manner. Meanwhile, to tackle the challenges posed by the complexity of solving the constrained optimization problem, we transform it into a solvable Lagrangian problem with the algorithm proposed in [27]. Empirical form. To solve Equation (6) with learning methods, the key step involves using $\hat{\mathcal{D}}_m$ to estimate $\mathbb{E}_{(U,V) \sim \tilde{Q}^{(s)}}[\hat{R}]$. Denoting the empirical estimate as $\mathbb{E}_{\tilde{Q}^{(s)}, \hat{\mathcal{D}}_m}[\hat{R}]$, we obtain the empirical form of the optimization problem: $\min_{\theta} \; L(\theta)$ s.t. $\eta_k^{(s)} \mathbb{E}_{\mathcal{D}_k^{(s)}}[\hat{R}] + \eta_m^{(s)} \mathbb{E}_{\tilde{Q}^{(s)}, \hat{\mathcal{D}}_m}[\hat{R}] - \mathbb{E}_{\mathcal{D}}[\hat{R}] = 0, \; \forall \tilde{Q}^{(s)} \in B(\rho_s; \hat{Q}^{(s)}), \; s = 0, 1$, (8) where $\hat{Q}^{(s)}$ and $\tilde{Q}^{(s)}$ are also converted into empirical distributions [4]. Specifically, $\hat{Q}^{(s)} = \{ \hat{q}_{u,v}^{(s)} \mid (u, v, \hat{s}) \in \hat{\mathcal{D}}_m \}$, where $\hat{q}_{u,v}^{(s)}$ denotes the probability weight of the sample $(u, v, \hat{s}) \in \hat{\mathcal{D}}_m$, with $\hat{q}_{u,v}^{(s)} = 1 / |\hat{\mathcal{D}}_m^{(s)}|$ if $\hat{s} = s$ and $\hat{q}_{u,v}^{(s)} = 0$ otherwise; similarly, $\tilde{Q}^{(s)} = \{ \tilde{q}_{u,v}^{(s)} \mid (u, v, \hat{s}) \in \hat{\mathcal{D}}_m \}$. $\mathbb{E}_{\tilde{Q}^{(s)}, \hat{\mathcal{D}}_m}[\hat{R}]$ represents the empirical expectation of $\hat{R}$ over $\hat{\mathcal{D}}_m$ under the distribution $\tilde{Q}^{(s)}$, and $B(\rho_s; \hat{Q}^{(s)})$ denotes the empirical form of the ambiguity set defined in Equation (7). Formally, we have: \u2022 The empirical expectation of $\hat{R}$, i.e., $\mathbb{E}_{\tilde{Q}^{(s)}, \hat{\mathcal{D}}_m}[\hat{R}] = \sum_{(u,v) \in \hat{\mathcal{D}}_m} \tilde{q}_{u,v}^{(s)} \cdot \hat{r}_{u,v}$; (9) (3: If the assumption is violated, a more lenient TV distance upper bound is needed, as further discussed in Appendix A.1. In our experiments, the high accuracy of the sensitive attribute reconstruction leads to only a modest disparity between $P(S)$ and $P(\hat{S})$, so we treat the assumption as approximately holding. 4: In practice, by assessing the difference between the validation and test sets and combining it with the error rate on the validation set, we can set an upper bound on the sensitive attribute reconstruction errors, thus enabling the estimation of $\rho_s$.) 
\u2022 The empirical form of the ambiguity set $B(\rho_s; \hat{Q}^{(s)})$: $B(\rho_s; \hat{Q}^{(s)}) = \{ \tilde{Q}^{(s)} \in \mathbb{R}^{|\hat{\mathcal{D}}_m|} : \frac{1}{2} \sum_{(u,v) \in \hat{\mathcal{D}}_m} | \tilde{q}_{u,v}^{(s)} - \hat{q}_{u,v}^{(s)} | \le \rho_s, \; \sum_{(u,v) \in \hat{\mathcal{D}}_m} \tilde{q}_{u,v}^{(s)} = 1, \; \tilde{q}_{u,v}^{(s)} \ge 0 \}$, (10) where $\frac{1}{2} \sum_{(u,v) \in \hat{\mathcal{D}}_m} | \tilde{q}_{u,v}^{(s)} - \hat{q}_{u,v}^{(s)} | \le \rho_s$ is the empirical implementation of the constraint $TV(\tilde{Q}^{(s)}, \hat{Q}^{(s)}) \le \rho_s$ (Equation (14) in Appendix A.1), while $\sum_{(u,v) \in \hat{\mathcal{D}}_m} \tilde{q}_{u,v}^{(s)} = 1$ and $\tilde{q}_{u,v}^{(s)} \ge 0$ ensure that the empirical distribution $\tilde{Q}^{(s)}$ is a valid probability distribution. Algorithm 1: DRFO. Input: dataset with known sensitive attributes $\mathcal{D}_k$, dataset with unknown sensitive attributes $\mathcal{D}_m$, hyper-parameters $\lambda_s$, learning rate $\alpha_\theta$ for $\theta$, and learning rate $\alpha_q$ for $\tilde{Q}^{(s)}$ ($s = 0, 1$). 1: Randomly split $\mathcal{D}_k$ into training and validation sets, and train a sensitive attribute classifier; 2: Reconstruct the sensitive attributes $\hat{S}$ for $\mathcal{D}_m$ to obtain $\hat{\mathcal{D}}_m$, and for each sample $(u, v) \in \hat{\mathcal{D}}_m$ compute $\hat{q}_{u,v}^{(s)}$ ($= 1/|\hat{\mathcal{D}}_m^{(s)}|$ if $\hat{S} = s$, else $0$), obtaining $\hat{Q}^{(s)} = \{ \hat{q}_{u,v}^{(s)} \mid (u, v) \in \hat{\mathcal{D}}_m \}$; 3: Estimate $\rho_s \approx P(S \ne \hat{S} \mid S = s)$ with the validation set held out from $\mathcal{D}_k$; 4: for $t = 1, \dots, T$ do 5: update $\theta$ by gradient descent: $\theta \leftarrow \theta - \alpha_\theta \nabla_\theta L(\theta) - \sum_{s=0}^{1} \lambda_s \alpha_\theta \nabla_\theta L_s(\theta, \tilde{Q}^{(s)})$; 6: for $s = 0, 1$ do 7: update $\tilde{Q}^{(s)}$ by gradient ascent: $\tilde{Q}^{(s)} \leftarrow \tilde{Q}^{(s)} + \lambda_s \alpha_q \nabla_{\tilde{Q}^{(s)}} L_s(\theta, \tilde{Q}^{(s)})$; 8: project $\tilde{Q}^{(s)}$ onto $B(\rho_s; \hat{Q}^{(s)})$; 9: end for; 10: end for. 
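To illustrate the projection step (line 8 of Algorithm 1) against the empirical set in Equation (10), the following sketch retracts a weight vector back into that set; it is a simple feasibility-preserving retraction under our own naming, not necessarily the exact projection used by the authors:

```python
import numpy as np

def retract_to_ambiguity_set(q_tilde: np.ndarray, q_hat: np.ndarray, rho: float) -> np.ndarray:
    # Enforce non-negativity and normalization (probability simplex).
    q = np.clip(q_tilde, 0.0, None)
    q = q / q.sum()
    # Empirical TV distance to the reconstructed distribution q_hat, cf. Eq. (10).
    tv = 0.5 * np.abs(q - q_hat).sum()
    if tv > rho:
        # Move back along the segment toward q_hat until the TV distance equals rho;
        # the segment stays on the simplex, so feasibility is preserved.
        q = q_hat + (q - q_hat) * (rho / tv)
    return q
```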
Empirical Lagrangian form. The optimization problem in Equation (8) is difficult to solve because it involves fairness constraints for all potential distributions $\tilde{Q}^{(s)}$ in $B(\rho_s; \hat{Q}^{(s)})$. To overcome this challenge, we convert the fairness constraints into regularizers and optimize the worst-case unfairness within $B(\rho_s; \hat{Q}^{(s)})$ to ensure fairness for the entire set [11]. Finally, we reformulate Equation (8) as the min-max optimization problem $\min_{\theta} \max_{\tilde{Q}^{(s)} \in B(\rho_s; \hat{Q}^{(s)}), \, s=0,1} \; L(\theta) + \sum_{s} \lambda_s L_s(\theta, \tilde{Q}^{(s)})$, (11) where $\lambda_s$ is the hyper-parameter controlling the strength of the fairness regularizer $L_s(\theta, \tilde{Q}^{(s)})$ obtained via the Lagrangian trick: $L_s(\theta, \tilde{Q}^{(s)}) = \big| \eta_k^{(s)} \mathbb{E}_{\mathcal{D}_k^{(s)}}[\hat{R}] + \eta_m^{(s)} \mathbb{E}_{\tilde{Q}^{(s)}, \hat{\mathcal{D}}_m}[\hat{R}] - \mathbb{E}_{\mathcal{D}}[\hat{R}] \big|$. (12) Here, $\tilde{Q}^{(s)}$ becomes a set of learnable parameters, which we update within $B(\rho_s; \hat{Q}^{(s)})$ via the \u2018max\u2019 optimization to find the worst case for fairness. Learning algorithm. To solve the optimization problem in Equation (11), we use the algorithm in [27] for solving DRO, which is summarized in Algorithm 1. At each iteration, we first update the model parameters $\theta$ to minimize the total loss in Equation (11) (line 5). Then, for each sensitive attribute value $s \in \{0, 1\}$, we update $\tilde{Q}^{(s)}$ to maximize the fairness-related loss $L_s(\theta, \tilde{Q}^{(s)})$ in Equation (11) (line 7), and project the updated result onto $B(\rho_s; \hat{Q}^{(s)})$ (line 8, preventing $\tilde{Q}^{(s)}$ from leaving the set) to obtain the final $\tilde{Q}^{(s)}$ for that iteration. 
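As a toy, self-contained illustration of one inner 'max' step of Equations (11)-(12) (all tensor values and the simplified inline retraction, mirroring the sketch above, are our own illustrative choices rather than the authors' code):

```python
import torch

torch.manual_seed(0)
n = 6                                    # toy size of D_hat_m
preds_m = torch.rand(n)                  # model predictions on D_hat_m (held fixed here)
mean_k, mean_all = 0.60, 0.55            # stand-ins for E_{D_k^(s)}[R_hat] and E_D[R_hat]
eta_k, eta_m, rho, alpha_q = 0.5, 0.5, 0.1, 0.5

q_hat = torch.full((n,), 1.0 / n)        # reconstructed empirical distribution Q_hat^(s)
q = q_hat.clone().requires_grad_(True)   # learnable Q_tilde^(s)

# Eq. (12): fairness regularizer under the current weights q.
L_s = torch.abs(eta_k * mean_k + eta_m * (q * preds_m).sum() - mean_all)
L_s.backward()

with torch.no_grad():
    q = q + alpha_q * q.grad             # gradient ascent on q (Algorithm 1, line 7)
    q = torch.clamp(q, min=0.0)
    q = q / q.sum()                      # back onto the probability simplex
    tv = 0.5 * (q - q_hat).abs().sum()
    if tv > rho:                         # retract into the TV-ball (line 8)
        q = q_hat + (q - q_hat) * (rho / tv)
```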
3.3 Discussion We now discuss the extension of DRFO to situations where certain users are not willing to have their sensitive attributes reconstructed. The main challenge lies in the inability to reconstruct their sensitive attributes, which is essential for building the ambiguity set in DRFO. However, even starting from a random distribution, we can build a broad ambiguity set that covers the true distribution of these users by using a large robust radius. Thereafter, by optimizing the worst-case unfairness within this broad ambiguity set, we can still achieve fairness. Let $\mathcal{D}_b$ denote the interaction data of such users. To implement this approach, we first randomly assign sensitive attributes to the users in $\mathcal{D}_b$, serving as the reconstructed sensitive attribute $\hat{S}$ in DRFO and obtaining $\hat{\mathcal{D}}_b$. Then, we define the empirical $\hat{Q}_b^{(s)}$, $\tilde{Q}_b^{(s)}$, and $B(\rho'_s; \hat{Q}_b^{(s)})$ for $\hat{\mathcal{D}}_b$, referring to those of $\hat{\mathcal{D}}_m$. To build the broad ambiguity set, we set the robust radius $\rho'_s$ to 1. For users with missing sensitive attributes who permit reconstruction, we use $\mathcal{D}_r$ to denote their interactions ($\mathcal{D}_r \cup \mathcal{D}_b = \mathcal{D}_m$, $\mathcal{D}_r \cap \mathcal{D}_b = \emptyset$). We then define $\hat{\mathcal{D}}_r$, $\hat{Q}_r^{(s)}$, $\tilde{Q}_r^{(s)}$, and $B(\rho_s; \hat{Q}_r^{(s)})$ for $\mathcal{D}_r$ in the same way as for $\mathcal{D}_m$ in Section 3.2. Finally, we can optimize fairness for $\hat{\mathcal{D}}_b$ and $\hat{\mathcal{D}}_r$ in a similar way to $\hat{\mathcal{D}}_m$ using DRO. Adding these optimization parts to Equation (8), we obtain the final optimization problem: $\min_{\theta} \max_{\tilde{Q}_r^{(s)} \in B(\rho_s; \hat{Q}_r^{(s)}), \, \tilde{Q}_b^{(s)} \in B(\rho'_s; \hat{Q}_b^{(s)}), \, s=0,1} \; L(\theta) + \sum_{s=0}^{1} \lambda_s \big| \eta_k^{(s)} \mathbb{E}_{\mathcal{D}_k^{(s)}}[\hat{R}] + \eta_r^{(s)} \mathbb{E}_{\tilde{Q}_r^{(s)}, \hat{\mathcal{D}}_r}[\hat{R}] + \eta_b^{(s)} \mathbb{E}_{\tilde{Q}_b^{(s)}, \hat{\mathcal{D}}_b}[\hat{R}] - \mathbb{E}_{\mathcal{D}}[\hat{R}] \big|$, (13) where $\mathbb{E}_{\tilde{Q}_r^{(s)}, \hat{\mathcal{D}}_r}[\hat{R}]$ and $\mathbb{E}_{\tilde{Q}_b^{(s)}, \hat{\mathcal{D}}_b}[\hat{R}]$ represent the empirical expectations of $\hat{R}$ under the distributions $\tilde{Q}_r^{(s)}$ and $\tilde{Q}_b^{(s)}$, respectively, computed similarly to Equation (9), and $\eta_k^{(s)} = \frac{|\mathcal{D}_k^{(s)}|}{|\mathcal{D}_k^{(s)}| + |\hat{\mathcal{D}}_r^{(s)}| + |\hat{\mathcal{D}}_b^{(s)}|}$, $\eta_r^{(s)} = \frac{|\hat{\mathcal{D}}_r^{(s)}|}{|\mathcal{D}_k^{(s)}| + |\hat{\mathcal{D}}_r^{(s)}| + |\hat{\mathcal{D}}_b^{(s)}|}$, $\eta_b^{(s)} = \frac{|\hat{\mathcal{D}}_b^{(s)}|}{|\mathcal{D}_k^{(s)}| + |\hat{\mathcal{D}}_r^{(s)}| + |\hat{\mathcal{D}}_b^{(s)}|}$. The learning algorithm can be formulated following the structure outlined in Algorithm 1. 4 EXPERIMENTS In this section, we conduct extensive experiments to answer the following research questions: RQ1: Can DRFO provide fair recommendations with limited sensitive attributes? RQ2: How do reconstruction errors affect the effectiveness of DRFO in ensuring fairness? What is the performance of DRFO in terms of fairness on both users with known sensitive attributes and those with unknown sensitive attributes? RQ3: Can DRFO ensure fairness when some users prohibit the reconstruction of their sensitive attributes? 
Table 1: Statistics of the evaluation datasets (each pair of values is reported for the groups S=0 / S=1). ML-1M: 3,244 items; 1,153 / 3,144 users; 228,191 / 713,590 interactions; mean ratings 0.5866 / 0.5661. Tenrec: 14,539 items; 3,108 / 2,299 users; 308,217 / 337,958 interactions; mean ratings 0.4849 / 0.4676. 4.1 Experimental Settings 4.1.1 Datasets. We conduct experiments on two open benchmark datasets: MovieLens 1M5 [10] and Tenrec6 [42]. (5: https://grouplens.org/datasets/movielens/ 6: https://static.qblv.qq.com/qblv/h5/algo-frontend/tenrec_dataset.html) \u2022 MovieLens 1M (ML-1M): This is a movie recommendation dataset including user ratings of movies on a scale of 1\u20135 and sensitive user attributes such as \u2018gender\u2019. We select \u2018gender\u2019 as the sensitive attribute and transform the rating values into binary labels using a threshold of 3, where ratings greater than three are labeled as positive (\u20181\u2019) and the rest as negative (\u20180\u2019). \u2022 Tenrec: This dataset contains user feedback on articles or videos alongside anonymized user information from Tencent\u2019s recommendation platforms. We focus on the \u2018QB-video\u2019 sub-dataset for our experiments, using the \u2018like\u2019 feedback as the rating label and considering \u2018gender\u2019 as the sensitive attribute. In this work, we adopt the approach proposed in [36] to reconstruct sensitive attributes using only user historical interactions. To ensure the effectiveness of the reconstruction, it is crucial for users to have a sufficient number of historical interactions in the two datasets. Therefore, we apply 50-core filtering [26, 38] to select users with more than 50 interactions while performing 10-core filtering for items. Table 1 presents the statistics of the processed datasets after filtering. We partition the datasets into training, validation, and testing sets with a ratio of 0.7:0.15:0.15. 4.1.2 Compared Methods. Achieving fair recommendations with limited sensitive attributes is a novel research problem, and no existing method is specifically designed to address it in recommender systems (to our knowledge). To establish baselines, we select two intuitive methods (RegK and FLrSA) and a method (CGL) proposed in other fields for solving this problem. \u2013 RegK [41] is a regularization-based approach that applies fairness regularization only to users with known sensitive attributes. \u2013 FLrSA [41] first reconstructs missing sensitive attributes and then applies the fairness regularization method using both the reconstructed and the known sensitive attributes. \u2013 CGL [14] is a representative method for fairness with limited sensitive attributes in computer vision. It reconstructs sensitive attributes and addresses low-confidence reconstructions via random replacement. Subsequently, it employs a regularization-based approach using the obtained attributes. We adapt it to recommender systems. Although RegK and FLrSA are intuitively designed, similar methods have been employed in other fields to study fairness with limited sensitive attributes [14]. Additionally, we include two fairness-unaware methods as reference points. 
\u2013 Basic MF [18] refers to the basic Matrix Factorization (MF) model trained without addressing fairness issues. \u2013 Oracle [41] assumes knowledge of all users\u2019 sensitive attributes and employs the regularization-based method to achieve fair recommendations; its results serve as an upper bound on fairness. To ensure a fair comparison, we implement the fairness regularization term based on Equation (2) to pursue demographic parity fairness, using the classical Matrix Factorization model as the backbone recommendation model for all methods. Table 2: Comparison between baselines and DRFO on ML-1M and Tenrec w.r.t. the fairness metric DP and the recommendation performance metric RMSE under varying known sensitive attribute ratios (10% / 30% / 50% / 70% / 90%); lower DP and RMSE indicate better results. ML-1M \u2013 Basic MF: DP 0.0225/0.0225/0.0225/0.0225/0.0225, RMSE 0.4147/0.4147/0.4147/0.4147/0.4147; Oracle: DP 0.0009/0.0009/0.0009/0.0009/0.0009, RMSE 0.4148/0.4148/0.4148/0.4148/0.4148; RegK: DP 0.0159/0.0132/0.0058/0.0014/0.0013, RMSE 0.4144/0.4147/0.4147/0.4150/0.4149; FLrSA: DP 0.0095/0.0119/0.0038/0.0035/0.0003, RMSE 0.4147/0.4147/0.4148/0.4151/0.4147; CGL: DP 0.0082/0.0145/0.0056/0.0035/0.0005, RMSE 0.4147/0.4144/0.4148/0.4150/0.4149; DRFO: DP 0.0034/0.0042/0.0011/0.0013/0.0006, RMSE 0.4201/0.4201/0.4191/0.4223/0.4198. Tenrec \u2013 Basic MF: DP 0.0068/0.0068/0.0068/0.0068/0.0068, RMSE 0.4503/0.4503/0.4503/0.4503/0.4503; Oracle: DP 0.0001/0.0001/0.0001/0.0001/0.0001, RMSE 0.4504/0.4504/0.4504/0.4504/0.4504; RegK: DP 0.0053/0.0052/0.0053/0.0040/0.0051, RMSE 0.4558/0.4520/0.4530/0.4500/0.4500; FLrSA: DP 0.0073/0.0073/0.0073/0.0032/0.0013, RMSE 0.4503/0.4503/0.4503/0.4597/0.4594; CGL: DP 0.0073/0.0073/0.0074/0.0029/0.0009, RMSE 0.4503/0.4503/0.4503/0.4518/0.4596; DRFO: DP 0.0019/0.0018/0.0013/0.0018/0.0009, RMSE 0.4590/0.4575/0.4565/0.4595/0.4596. 4.1.3 Evaluation Protocol. In our experiments, we simulate scenarios involving unknown sensitive attributes by randomly retaining the sensitive attribute for a subset of users while masking it for the others. The retention ratio, varying in {0.1, 0.3, 0.5, 0.7, 0.9}, determines the proportion of users whose sensitive attributes are preserved during both the training and validation phases. During testing, however, the sensitive attributes of all users are accessible so that fairness can be evaluated. We use the MAD metric in Equation (1) to measure fairness (DP) and the root mean squared error (RMSE) to measure recommendation performance; a smaller value of either metric indicates better performance in terms of fairness or recommendation, respectively. 4.1.4 Implementation Details. For a fair comparison, we optimize all models using the Adam optimizer [17] with a default embedding size of 32. Before applying the fairness methods, we pre-train the MF model using grid search to determine the optimal learning rate from {1e-2, 1e-3} and the best weight decay from {1e-1, 1e-2, ..., 1e-7}. For the fairness models, we initialize their backbone MF model with the pre-trained one and then fine-tune them with a fixed learning rate of 1e-3. We tune the fairness regularization coefficient in the range {0.01, 0.05, 0.1, 0.5, 1, 5, 10} for the baselines. For DRFO, we set the hyper-parameter that controls the strength of the fairness constraints to 10 for ML-1M and 1 for Tenrec, and set the learning rate $\alpha_q$ for updating $\tilde{Q}^{(s)}$ to 1e-3 for ML-1M and 1e-2 for Tenrec. 
Additionally, for methods that involve reconstructing unknown sensitive attributes, we use the same classifier proposed in [36], trained to fit known sensitive attributes using historical user interactions. To ensure a fair comparison, we select hyper-parameters that achieve the best fairness (measured by DP) while maintaining at least 98% of the best recommendation performance (measured by RMSE) achieved by the basic MF on the validation set. This approach is commonly used in previous studies to balance the trade-off between fairness and recommendation performance [15, 35]. We release our code at: https://github.com/TianhaoShi2001/DRFO. 4.2 Overall Performance (RQ1) We first evaluate the methods when all users\u2019 sensitive attributes are allowed to be reconstructed. We summarize the results in Table 2 for varying proportions of users with known sensitive attributes. The corresponding fairness curves for each method are shown in Figure 2 to illustrate the impact of the retention ratio. (Figure 2: Fairness comparison between baselines and DRFO on ML-1M and Tenrec for varying known sensitive attribute ratios; lower DP values indicate better fairness.) From the table and figure, we draw the following observations: \u2022 The Basic MF method exhibits notably inferior fairness compared to regularization-based methods (e.g., FLrSA) when a substantial portion (\u226590%) of sensitive attributes is known, particularly when compared to the Oracle model. This emphasizes the effectiveness of regularization-based approaches in enhancing recommendation fairness. However, as the proportion of known sensitive attributes decreases (\u226450%), the fairness performance of regularization-based methods rapidly declines, highlighting the importance of addressing the limited sensitive attribute issue. \u2022 When the proportion of known sensitive attributes is small (\u226450%), FLrSA outperforms RegK on ML-1M but fails to do so on Tenrec in terms of fairness. This can be attributed to the fact that reconstructing sensitive attributes for Tenrec is more difficult7 (7: This is supported by the lower reconstruction accuracy (AUC) observed on Tenrec.), thus suffering more reconstruction errors and invalidating the vanilla reconstruction-based method FLrSA. 
These findings affirm the significant impact of reconstruction errors on the efficacy of reconstruction-based approaches for improving fairness. \u2022 Despite taking reconstruction errors into consideration, CGL performs similarly to FLrSA in fairness. This may be because CGL randomly assigns sensitive attributes to samples with low reconstruction confidence, which does not ensure the resolution of the reconstruction error problem. \u2022 DRFO consistently achieves a higher level of fairness compared to all baselines except Oracle, even when the proportion of users with known sensitive attributes is low. This confirms its effectiveness in addressing reconstruction errors to achieve fairer recommendations. The superiority of the approach can be attributed to its DRO-based fair optimization, which minimizes the worst-case unfairness over a distribution set to achieve fairness for the true sensitive attributes contained in the set. \u2022 DRFO achieves slightly lower recommendation performance than the baselines due to its aggressive pursuit of fairness by minimizing worst-case unfairness across potential distributions. Nevertheless, our early-stopping strategy selects the fairest model within a maximum 2% drop in RMSE on the validation set, ensuring a low drop (around 2%) in RMSE on the test set for all methods. Despite this modest sacrifice in recommendation performance, DRFO improves fairness by over 100% in many cases, particularly when many users lack sensitive attributes. 4.3 In-depth Analyses (RQ2) We conduct further experiments to analyze the influence of reconstruction errors on different methods and to study the fairness performance for different user groups. 4.3.1 The Effect of Reconstruction Errors. In this subsection, we directly investigate the impact of reconstruction errors on the fairness achieved by DRFO and the baselines. We set the ratio of users with known sensitive attributes to either 30% or 50%, and for the remaining users, we randomly flip the true sensitive attributes of some users to form the reconstructed sensitive attributes, thus introducing reconstruction errors. The flipping ratio varies from 10% to 50%, allowing us to simulate different levels of reconstruction errors8 (8: A 50% reconstruction error is significant, akin to random guessing, and we flip attributes for males and females at a 1:1 ratio to keep the gender distribution unchanged.). We compare the fairness of RegK, FLrSA, and DRFO under different reconstruction error levels, excluding CGL because it requires reconstruction confidence, which is unavailable here. The resulting fairness comparison is presented in Figure 3 (Figure 3: Fairness performance under different levels of reconstruction errors for sensitive attributes.), where we exclude the recommendation performance results because the observed drop is limited to approximately 2% (we also omit these results in the following experiments for the same reason). Based on the figure, we make the following observations: \u2022 As the reconstruction error increases, both DRFO and FLrSA encounter a decline in fairness. Nevertheless, DRFO\u2019s smoother curve and consistent superiority over RegK highlight its robustness to reconstruction errors. 
The deterioration in DRFO\u2019s fairness can be explained by Equation (7): as the reconstruction error increases, DRFO selects a larger ambiguity set, intensifying the optimization challenge of achieving fairness. \u2022 Upon surpassing a 20% reconstruction error, FLrSA no longer surpasses RegK in fairness. Even with just a 10% reconstruction error, FLrSA fails to match Oracle in fairness. These results emphasize the necessity of addressing reconstruction errors to uphold the validity of the reconstruction-based method. 4.3.2 Case Study. We next investigate whether DRFO ensures fairness both for users with known sensitive attributes and for users with unknown sensitive attributes. To achieve this, we adopt a metric that measures the absolute difference between a specific group\u2019s average predicted rating and the overall average rating, similar to MAD in Equation (1); larger values of this metric indicate more unfairness. We compute this metric for four groups: users with S=1 whose attribute is unknown, users with S=0 whose attribute is unknown, users with known S=1, and users with known S=0. (Figure 4: Absolute difference of the average predicted scores of different groups from the global average predictions; a higher difference means more unfairness. \u2018K\u2019 stands for \u2018known\u2019 and \u2018U\u2019 for \u2018unknown\u2019; \u2018S=0 (K)\u2019 denotes users with the known sensitive attribute 0, and similarly for the others.) We summarize the results of the compared methods in Figure 4, where 30% of users have known sensitive attributes. In Figure 4, the baselines reveal significant absolute differences between the group averages and the global average, particularly for users with unknown sensitive attributes, implying higher unfairness. These results highlight the baselines\u2019 inefficacy in achieving fair recommendations, especially for users with unknown attributes. In contrast, our proposed DRFO approach exhibits small unfairness for all groups. Furthermore, the unfairness of the groups with known and unknown sensitive attributes is comparable, indicating that our method can successfully overcome the challenge of limited sensitive attributes and provide fair recommendations for users both with known and with unknown sensitive attributes. 4.4 Fairness for Users Refusing to Reconstruct Sensitive Attributes (RQ3) To examine DRFO\u2019s ability to ensure fairness for users who refuse sensitive attribute reconstruction due to privacy concerns, we perform supplementary experiments. We explore scenarios where a fraction of users (30% or 50%) possess known sensitive attributes. Among the remaining users, we randomly designate a subset, varying from 0% to 100%, as individuals permitting sensitive attribute reconstruction, while the others opt out of it. (Figure 5: Fairness results in scenarios where a portion of the users with unknown sensitive attributes does not allow reconstruction of their attributes.) 
To accommodate non-reconstructable sensitive attributes, we adapt the baselines FLrSA and CGL. For FLrSA, fair constraints are omitted for users not permitting sensitive attribute reconstruction. In the case of CGL, sensitive attributes are randomly assigned to users disallowing reconstruction, mimicking its low-confidence strategy for reconstructed sensitive attributes. A summary of fairness performance across all compared methods is presented in Figure 5. Figure 5 shows baselines\u2019 failures in ensuring fairness when some users do not grant sensitive attribute reconstruction. Conversely, DRFO maintains relatively fair results. This validates DRFO\u2019s fairness improvement without full sensitive attribute reconstruction by employing a sufficiently large ambiguity set to encompass potential attributes and minimizing the worst-case unfairness within the set. 5 RELATED WORK 5.1 Fairness in Recommendation Fairness research in recommender systems employs two primary paradigms: individual fairness [39, 45], which treats similar individuals similarly, and group fairness [16, 43], which ensures equitable treatment of different groups. Our study specifically focuses on user-side fairness in recommendation at a group level, using Demographic Parity [2, 5, 16] as a representative criterion. To promote fairness in recommendations, existing approaches can be broadly categorized into three types [23, 35]: fair pre-processing, fair learning, and fair adjustment. Fair pre-processing methods mitigate bias in data by resampling [6] or adding antidote data [30]. Fair learning methods modify optimization targets to learn fair recommendation models, either through fairness criteria as regularizers [16, 41, 47] or adversarial learning [39, 45]. Fair adjustment methods [22, 40] reorder the recommendation results to attain a fairer outcome. However, these methods typically require full access to sensitive attributes, which may not be feasible in real-world situations. In contrast, our work tackles the challenge of addressing unfairness when only limited sensitive attributes are available. 5.2 Fairness with Missing Sensitive Attributes Fair recommendation research has paid less attention to fairness when some sensitive attributes are missing, but progress has been made in other fairness-focused domains. Some methods are specifically designed to enhance Rawlsian Fairness [31] using DRO [11] or Adversarial Reweighting Learning [19], but they cannot extend to other fairness metrics like DP. Another direction utilizes observable attributes (e.g., zip code) as proxies for sensitive attributes [9, 46], which differs from our method of reconstructing sensitive attributes. The effectiveness of these approaches relies on the strength of the correlation between the sensitive attribute and the observable nonsensitive one. Furthermore, some works enhance fairness by reconstructing missing sensitive attributes [3, 14] or sensitive representations [8] and leveraging this reconstructed information. However, they may have limited effectiveness due to reconstruction errors. In contrast, our DRO-based fair learning method ensures fairness in the presence of reconstruction errors. Within recommender systems, FairLISA [45] first studies the problem of missing sensitive attributes. It utilizes a discriminator trained with known attributes to remove the sensitive information from embeddings of users whose sensitive attributes are unavailable. 
Unlike it, our method employs the DRO paradigm to offer robust fair recommendations for users with missing sensitive attributes. Additionally, FairLISA and our work explore the fairness of limited sensitive attributes from the perspective of different fairness metrics [35]. While FairLISA focuses on the fairness of removing sensitive information from user embeddings, our focus is on the fairness of inter-group metric differences. 5.3 Distributionally Robust Optimization Distributionally Robust Optimization (DRO) [27, 29] is a well-known framework for handling uncertainty. It seeks a solution that performs well under all possible distributions within a defined set by optimizing for the worst-case expected loss [27, 29]. Previous studies have utilized DRO to investigate fairness. Hashimoto et al. [11] employ DRO to improve the performance of groups with high classification errors, and Wen et al. [37] use DRO to improve the worst-case user experience in recommendations. These approaches focus on Rawlsian Fairness [31], which cannot be extended to other fairness criteria (e.g., DP). RobFair [37] utilizes DRO to provide robust fair recommendations, overcoming the challenge of user preference shift, which may compromise the fairness achieved during the training phase. Unlike RobFair, our work investigates fair recommendations when some sensitive attributes are missing and uses DRO to handle the uncertainty of the missing sensitive attributes. Additionally, Wang et al. [34] introduce two methods that utilize DRO to learn fair classifiers with noisy sensitive attributes, which is most closely related to our work. In contrast, our work considers scenarios where sensitive attributes cannot be reconstructed and discusses how to estimate the upper bound of the TV distance when the assumption in Theorem 1 is slightly violated. 6 CONCLUSION In this study, we aim to enhance fairness in recommender systems with limited sensitive attributes. We point out that directly reconstructing sensitive attributes may suffer from reconstruction errors and raise privacy concerns. To overcome these challenges, we propose a DRO-based fair learning framework. It builds an ambiguity set based on the reconstructed sensitive attributes of users who grant reconstruction, and subsequently optimizes the worst-case unfairness within the entire set. We provide theoretical evidence that our method can build an appropriate ambiguity set that encompasses the unknown true distribution, thus providing robust fair recommendations with limited sensitive attributes. Additionally, extensive experiments on two real-world datasets confirm the efficacy of our methods. In the future, we plan to extend our approach to address fairness for non-binary sensitive attributes. Furthermore, our current work only considers DP as the fairness objective, and we aim to extend the approach to other group fairness metrics. A APPENDIX A.1 Total Variation Distance In this section, we first present an introduction to the Total Variation (TV) distance, including its definition and some basic properties. Subsequently, we prove Theorem 2, from which Theorem 1 can be deduced. Afterward, we discuss the situation where the assumptions of Theorem 2 are not satisfied. Definition A.1 (Total Variation Distance). 
The total variation distance between probability measures $P$ and $Q$ on a measurable space $(\Omega, \mathcal{F})$ is defined as the supremum of the absolute difference between the probabilities assigned to a measurable event $A \in \mathcal{F}$ by $P$ and $Q$. If $\Omega$ is countable, the total variation distance can also be expressed in terms of the L1-norm [21] as follows: $TV(P, Q) = \sup_{A \in \mathcal{F}} |P(A) - Q(A)| = \frac{1}{2} \| P - Q \|_1$. (14) 
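For countable sample spaces, Equation (14) can be computed directly from two probability vectors; a small illustrative snippet (our own naming) is:

```python
import numpy as np

def tv_distance(p: np.ndarray, q: np.ndarray) -> float:
    # Eq. (14): TV(P, Q) = 0.5 * ||P - Q||_1 for discrete distributions.
    return 0.5 * float(np.abs(p - q).sum())

# Example: tv_distance(np.array([0.5, 0.5]), np.array([0.8, 0.2])) == 0.3
```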
we have:
$TV(P(A \mid S=s), P(A \mid \hat{S}=s))$
$= \sup_{A} \left| P(A \mid S=s) - P(A \mid \hat{S}=s) \right|$
$= \sup_{A} \left| P(A \mid S=s, \hat{S}=s)P(\hat{S}=s \mid S=s) + P(A \mid S=s, \hat{S}\neq s)P(\hat{S}\neq s \mid S=s) - P(A \mid S=s, \hat{S}=s)P(S=s \mid \hat{S}=s) - P(A \mid S\neq s, \hat{S}=s)P(S\neq s \mid \hat{S}=s) \right|$
$= \sup_{A} \left| P(A \mid S=s, \hat{S}=s)\big(P(\hat{S}=s \mid S=s) - P(S=s \mid \hat{S}=s)\big) - P(\hat{S}\neq S \mid S=s)\big(P(A \mid S=s, \hat{S}\neq s) - P(A \mid \hat{S}=s, S\neq s)\big) \right|$
$= \sup_{A} \left| 0 - P(\hat{S}\neq S \mid S=s)\big(P(A \mid S=s, \hat{S}\neq s) - P(A \mid \hat{S}=s, S\neq s)\big) \right|$
$\le P(\hat{S}\neq S \mid S=s)$. □
The above derivation completes the proof of Theorem 2, thereby deriving Theorem 1. Note that we assume $P(S) = P(\hat{S})$ in proving Theorem 2.
If the assumption is violated, then, given the generally strong ability of reconstruction methods to restore the true sensitive attributes, we presume only a slight deviation between $P(S)$ and $P(\hat{S})$, that is, $P(\hat{S}) - P(S) = \delta_p$ with $|\delta_p| \ll P(S)$. In that case,
$TV(P(A \mid S=s), P(A \mid \hat{S}=s))$
$= \sup_{A} \left| P(A \mid S=s, \hat{S}=s)\big(P(\hat{S}=s \mid S=s) - P(S=s \mid \hat{S}=s)\big) - \big(P(A \mid S\neq s, \hat{S}=s)P(S\neq s \mid \hat{S}=s) - P(A \mid S=s, \hat{S}\neq s)P(\hat{S}\neq s \mid S=s)\big) \right|$
$\approx \sup_{A} \left| \big(k\,P(A \mid S=s, \hat{S}=s)\,\delta_p - k\,P(A \mid S=s, \hat{S}\neq s)\,\delta_p\big) - P(\hat{S}\neq S \mid S=s)\big(P(A \mid S=s, \hat{S}\neq s) - P(A \mid \hat{S}=s, S\neq s)\big) \right|$
$\le P(\hat{S}\neq S \mid S=s) + k|\delta_p|$,
where $k = \frac{P(S=s, \hat{S}=s)}{P(S=s)P(\hat{S}=s)}$, and the approximate equality is obtained through the law of total probability and a first-order approximation (details omitted for simplicity). From this derivation, we know that when there is a marginal discrepancy between $P(S)$ and $P(\hat{S})$, the estimated upper bound on the TV distance becomes slightly looser. However, the impact on the upper bound is modest, rendering its omission reasonable. Also, in our experiments, using only the first term (i.e., $P(\hat{S}\neq S \mid S=s)$) to build ambiguity sets has proven sufficient to achieve robust fairness, supporting the direct use of the conclusion of Theorem 2 to estimate the upper bound of the TV distance. ACKNOWLEDGMENTS This work is supported by the National Key Research and Development Program of China (2022YFB3104701), the National Natural Science Foundation of China (62272437), and the CCCD Key Lab of Ministry of Culture and Tourism."
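The bound in Theorem 2 is easy to check numerically. Below is a minimal, self-contained sketch (not part of the paper; the toy joint distribution and all names are made up for illustration) that verifies TV(P(X | S=s), P(X | S_hat=s)) <= P(S_hat != S | S=s) on a small discrete example in which P(S=s) = P(S_hat=s) holds by construction.

# Toy numerical check of the Theorem 2 bound; the joint distribution is invented.
# joint[(x, s, s_hat)] = probability; X plays the role of the measurable event space.
joint = {
    (0, 0, 0): 0.30, (0, 0, 1): 0.02, (0, 1, 0): 0.08, (0, 1, 1): 0.10,
    (1, 0, 0): 0.10, (1, 0, 1): 0.08, (1, 1, 0): 0.02, (1, 1, 1): 0.30,
}
assert abs(sum(joint.values()) - 1.0) < 1e-12

def marg(pred):
    """Sum the probabilities of all outcomes (x, s, s_hat) satisfying pred."""
    return sum(p for k, p in joint.items() if pred(*k))

s = 0
p_s = marg(lambda x, a, b: a == s)        # P(S = s)
p_shat = marg(lambda x, a, b: b == s)     # P(S_hat = s)
assert abs(p_s - p_shat) < 1e-12          # assumption of Theorem 2 holds here

# Conditional distributions of X given S = s and given S_hat = s.
p_x_given_s = {x: marg(lambda x_, a, b: x_ == x and a == s) / p_s for x in (0, 1)}
p_x_given_shat = {x: marg(lambda x_, a, b: x_ == x and b == s) / p_shat for x in (0, 1)}

# TV distance over a finite space = 0.5 * L1 distance (Eq. 14).
tv = 0.5 * sum(abs(p_x_given_s[x] - p_x_given_shat[x]) for x in (0, 1))

# Reconstruction error probability P(S_hat != S | S = s).
p_err = marg(lambda x, a, b: a == s and b != s) / p_s

print(f"TV = {tv:.4f} <= P(err|S=s) = {p_err:.4f}: {tv <= p_err + 1e-12}")

On this toy example the check prints TV = 0.1200 <= P(err|S=s) = 0.2000, consistent with the bound.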
18
+ }
intro_8K/test_introduction_short_2405.01097v1.json ADDED
@@ -0,0 +1,21 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01097v1",
3
+ "title": "Silencing the Risk, Not the Whistle: A Semi-automated Text Sanitization Tool for Mitigating the Risk of Whistleblower Re-Identification",
4
+ "abstract": "Whistleblowing is essential for ensuring transparency and accountability in\nboth public and private sectors. However, (potential) whistleblowers often fear\nor face retaliation, even when reporting anonymously. The specific content of\ntheir disclosures and their distinct writing style may re-identify them as the\nsource. Legal measures, such as the EU WBD, are limited in their scope and\neffectiveness. Therefore, computational methods to prevent re-identification\nare important complementary tools for encouraging whistleblowers to come\nforward. However, current text sanitization tools follow a one-size-fits-all\napproach and take an overly limited view of anonymity. They aim to mitigate\nidentification risk by replacing typical high-risk words (such as person names\nand other NE labels) and combinations thereof with placeholders. Such an\napproach, however, is inadequate for the whistleblowing scenario since it\nneglects further re-identification potential in textual features, including\nwriting style. Therefore, we propose, implement, and evaluate a novel\nclassification and mitigation strategy for rewriting texts that involves the\nwhistleblower in the assessment of the risk and utility. Our prototypical tool\nsemi-automatically evaluates risk at the word/term level and applies\nrisk-adapted anonymization techniques to produce a grammatically disjointed yet\nappropriately sanitized text. We then use a LLM that we fine-tuned for\nparaphrasing to render this text coherent and style-neutral. We evaluate our\ntool's effectiveness using court cases from the ECHR and excerpts from a\nreal-world whistleblower testimony and measure the protection against\nauthorship attribution (AA) attacks and utility loss statistically using the\npopular IMDb62 movie reviews dataset. Our method can significantly reduce AA\naccuracy from 98.81% to 31.22%, while preserving up to 73.1% of the original\ncontent's semantics.",
5
+ "authors": "Dimitri Staufer, Frank Pallas, Bettina Berendt",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "cs.CY",
9
+ "cats": [
10
+ "cs.CY",
11
+ "cs.CL",
12
+ "cs.HC",
13
+ "cs.IR",
14
+ "cs.SE",
15
+ "H.3; K.4; H.5; K.5; D.2; J.4"
16
+ ],
17
+ "label": "Original Paper",
18
+ "paper_cat": "LLM Fairness",
19
+ "gt": "In recent years, whistleblowers have become “a powerful force” for transparency and accountability, not just in the field of AI [9], but also in other technological domains and across both private- and public-sector organizations. Institutions such as the AI Now Institute [9] or the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems [22] have emphasized the key role of whistleblower protection for societal well-being and often also the organizations' own interests [21]. However, whistleblowing may be a threat for the organizations whose malfeasance is being revealed; thus (potential) whistleblowers often fear or face retaliation. Computationally-supported anonymous reporting seems to be a way forward, but even if reporting frameworks are sufficiently secure system- and network-wise, the report itself may allow inferences towards the whistleblower's identity due to its content and the whistleblower's writing style. Non-partisan organizations such as Whistleblower-Netzwerk e.V. (WBN) provide guidance on concise writing. Our interactions with WBN confirm that whistleblower testimonies often include unnecessary personal details. Existing approaches modifying the texts of such reports appear promising, but they take an overly limited view of anonymity and – like whistleblower protection laws – address only parts of the problem. This is detailed in Section 2. To improve on these approaches, we propose, implement, and evaluate a novel classification and mitigation strategy for rewriting texts that puts the whistleblower into the loop of assessing risk and utility. Our contributions are threefold. First (Section 3), we analyse the interleaved contributions of different types of identifiers in texts to derive a description of the problem for anonymous whistleblowing in terms of a trade-off between risk (identifiability of the whistleblower) and utility (of the rewritten text retaining sufficient information on the specific event details). We derive a strategy for assigning re-identification risk levels of concern to textual features, composed of an automated mapping and an interactive adjustment of concern levels. Second (Section 4), we describe our tool, which implements this strategy. It applies (i) the word/term-to-concern mapping using natural language processing to produce a sanitized but possibly ungrammatical intermediate text version, (ii) a Large Language Model (LLM) that we fine-tuned for paraphrasing to render this text coherent and style-neutral, and (iii) interactivity to draw on the user's context knowledge. Third (Section 5), we evaluate the resulting risk-utility trade-off. We measure the protection against authorship attribution attacks and utility loss statistically using an established benchmark dataset and show that our tool can significantly reduce authorship attribution accuracy while retaining utility. We also evaluate our tool's effectiveness in masking direct and quasi-identifiers using the Text Anonymization Benchmark [48] and demonstrate its effectiveness on excerpts from a real-world whistleblower testimony. Section 6 sketches current limitations and future work. Section 7 describes ethical considerations and researchers' positionality, and it discusses possible adverse impacts.",
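To make the strategy outlined in these contributions more concrete, the following is a minimal illustrative sketch, not the authors' implementation, of a word/term-to-concern pass followed by a placeholder paraphrase step. It loosely mirrors the defaults described later in Sections 3.4-3.5 of the main content; the spaCy model name, the concern defaults, and the replacement strings are all assumptions.

# Illustrative sketch only: a word/term-to-concern pass plus a pluggable paraphrase step.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed small English pipeline; must be installed

HIGH_RISK_ENT = {"PERSON", "GPE", "LOC", "EVENT", "LAW", "LANGUAGE", "DATE",
                 "TIME", "PERCENT", "MONEY", "QUANTITY", "ORDINAL"}

def default_concern(token):
    """Assign a default level of concern ('high', 'medium', 'potential', or None)."""
    if token.ent_type_ in HIGH_RISK_ENT:
        return "high"                      # names of named entities
    if token.pos_ == "PRON":
        return "high"                      # pronouns point back at people
    if token.pos_ == "PROPN":
        return "medium"                    # other proper nouns
    if token.pos_ in {"NOUN", "ADJ", "ADV"}:
        return "potential"                 # common nouns and modifiers
    return None

def sanitize(text, user_overrides=None, paraphrase=lambda s: s):
    """Suppress high-concern tokens, then hand the fragment to a paraphraser."""
    user_overrides = user_overrides or {}
    doc = nlp(text)
    kept = []
    for tok in doc:
        concern = user_overrides.get(tok.text, default_concern(tok))
        if concern == "high":
            kept.append("somebody" if tok.pos_ == "PRON" else "certain entity")
        else:
            kept.append(tok.text)
    return paraphrase(" ".join(kept))

print(sanitize("On 24 January 2023, John Smith poured resin into the sink of room R23."))

In the real tool the paraphrase argument would be the fine-tuned FLAN-T5 model described in Section 4.2; here it defaults to the identity function so the sketch stays self-contained.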
20
+ "main_content": "This section describes the importance of, and threats to, whistleblowing (Section 2.1) and the promises and conceptual and practical challenges of \u201canonymity\u201d in reporting (Section 2.2). We survey related work on the anonymization/de-identification of text and argue why it falls short in supporting whistleblowing (Section 2.3). 2.1 Challenges of Safeguarding Whistleblowers Whistleblowers play a crucial role in exposing wrongdoings like injustice, corruption, and discrimination in organizations [6, 41]. However, their courageous acts often lead to negative consequences, such as subtle harassment and rumors, job loss and blacklisting, and, in extreme cases, even death threats [34, 37, 58]. In Western nations, whistleblowing is largely viewed as beneficial to society [66], leading to protective laws like the US Sarbanes-Oxley Act of 2002 and the European Union\u2019s \u201cWhistleblowing Directive\u201d (Directive 2019/1937). The latter, for example, mandates the establishment of safe reporting channels and protection against retaliation. It also requires EU member states to provide whistleblowers with legal, financial, and psychological support. However, the directive faces criticism for its limitations. Notably, it does not cover all publicsector entities [63, p. 3] and leaves key decisions to member states\u2019 discretion [1, p. 652]. This discretion extends to the absence of mandatory anonymous reporting channels and permits states to disregard cases they consider \u201cclearly minor\u201d, leaving whistleblowers without comprehensive protection for non-material harms like workplace bullying [63, p. 3]. Furthermore, according to White [70], the directive\u2019s sectoral approach and reliance on a list of specific EU laws causes a patchwork of provisions, creating a complex and possibly confusing legal environment, particularly for those sectors impacting human rights and life-and-death situations. Last but not least, organizations often react negatively to whistleblowing due to the stigma of errors, even though recognizing these mistakes would be key to building a culture of responsibility [5, p. 12] and improving organizations and society [69]. The reality for whistleblowers is thus fraught with challenges, from navigating legal uncertainties to dealing with public perception [26, 51, 52], leaving many whistleblowers with no option but to report their findings anonymously [50]. However, \u201canonymous\u201d reporting channels alone do not guarantee anonymity [5]. 2.2 Anonymity, (De-)anonymization, and (De-/Re-)Identification Anonymity is not an alternative between being identified uniquely or not at all, but \u201cthe state of being not identifiable within a set of subjects [with potentially the same attributes], the anonymity set\u201d [46, p.9]. Of the manifold possible approaches towards this goal, state-of-the-art whistleblowing-support software as well as legal protections (where existing) focus on anonymous communications [5]. This, however, does not guarantee anonymous reports. Instead, a whistleblower\u2019s anonymity may still be at risk due to several factors, including: (i) surveillance technology, such as browser cookies, security mechanisms otherwise useful to prevent unauthenticated uses, cameras, or access logs, (ii) the author\u2019s unique writing style, and (iii) the specific content of the message [33]. 
Berendt and Schiffner [5] refer to the latter as \u201cepistemic non-anonymizability\u201d, i.e., the risk of being identified based on the unique information in a report, particularly when the information is known to only a few individuals. In some cases, this may identify the whistleblower uniquely. Terms and their understanding in the domain of anonymity vary. We use the following nomenclature: anonymization is a modification of data that increases the size of the anonymity set of the person (or other entity) of interest; conversely, de-anonymization decreases it (to some number \ud835\udc58\u22651). De-anonymization to \ud835\udc58= 1, which includes the provision of an identifier (e.g., a proper name), is called re-identification. The removal of some identifying information (e.g., proper names), called de-identification, often but not necessarily leads to anonymization [4, 68]. In structured data, direct identifiers (e.g., names or social security numbers) are unique to an individual, whereas quasi-identifiers like age, gender, or zip code, though not unique on their own, can be combined to form unique patterns. Established mathematical frameworks for quantifying anonymity, such as Differential Privacy (DP) [16], and metrics such as k-anonymity [53], along with their refinements [27, 31], can be used when anonymizing datasets. Unstructured data such as text, which constitutes a vast majority of the world\u2019s data, requires its own safeguarding methods, which fall into two broader categories [28]. The first, NLP-based text sanitization, focuses on linguistic patterns to reduce (re-)identification risk. The second, privacy-preserving data publishing (PPDP), involves methods like noise addition or generalization to comply with pre-defined privacy requirements [15]. A Semi-automated Text Sanitization Tool for Mitigating the Risk of Whistleblower Re-Identification 2.3 Related Work: Text De-Identification and Anonymization, Privacy Models, and Adversarial Stylometry De-identification methods in text sanitization mask identifiers, primarily using named entity recognition (NER) techniques. These methods, largely domain-specific, have been particularly influential in clinical data de-identification, as evidenced, for instance, by the 2014 i2b2/UTHealth shared task [62]. However, they do not or only partially address the risk of indirect re-identification [4, 38]. For example, S\u00e1nchez et al. [55, 56, 57] make the simplifying assumption that replacing noun phrases which are rare in domain-specific corpora or on the web with more general ones offers sufficient protection. Others use recurrent neural networks [12, 30], reinforcement learning [71], support vector machines [65], or pre-trained language models [23] to identify and remove entities that fall into pre-defined categories. However, all of these approaches ignore or significantly underestimate the actual risks of context-based re-identification. More advanced anonymization methods, in turn, also aim to detect and remove identifiers that do not fit into the usual categories of named entities or are hidden within context. For example, Reddy and Knight [49] detect and obfuscate gender, and Adams et al. [2] introduce a human-annotated multilingual corpus containing 24 entity types and a pipeline consisting of NER and co-reference resolution to mask these entities. In a more nuanced approach, Papadopoulou et al. 
[44] developed a \u201cprivacy-enhanced entity recognizer\u201d that identifies 240 Wikidata properties linked to personal identification. Their approach includes three key measures to evaluate if a noun phrase needs to be masked or replaced by a more general one [43]. The first measure uses RoBERTa [29] to assess how \u201csurprising\u201d an entity is in its context, assuming that more unique entities carry higher privacy risks. The second measure checks if web search results for entity combinations mention the individual in question, indicating potential re-identification risk. Lastly, they use a classifier trained with the Text Anonymization Benchmark (TAB) corpus [48] to predict masking needs based on human annotations. Kleinberg et al.\u2019s [24] \u201cTextwash\u201d employs the BERT model, fine-tuned on a dataset of 3717 articles from the British National Corpus, Enron emails, and Wikipedia. The dataset was annotated with entity tags such as \u201cPERSON_FIRSTNAME\u201d, \u201cLOCATION\u201d, and an \u201cOTHER_IDENTIFYING_ATTRIBUTE\u201d category for indirect reidentification risks, along with a \u201cNONE\u201d category for tokens that are non-re-identifying. A quantitative evaluation (0.93 F1 score for detection accuracy, minimal utility loss in sentiment analysis, and part-of-speech tagging) and its qualitative assessment (82% / 98% success in anonymizing famous / semi-famous individuals) show promise. However, the more recent gpt-3.5-turbo can re-identify 72.6% of the celebrities from Textwash\u2019s qualitative study on the first attempt, highlighting the evolving complexity of mitigating the risk of re-identification in texts [45]. In PPDP, several privacy models for structured data have been adapted for privacy guarantees in text. While most are theoretical [28], \u201cC-sanitise\u201d [54] determines the disclosure risk of a certain term t on a set of entities to protect (C), given background knowledge K, which by default is the probability of an entity co-occurring with a term t in the web. Additionally, DP techniques have been adapted to text, either for generating synthetic texts [20] or for obscuring authorship in text documents [68]. This involves converting text into word embeddings, altering these vectors with DP techniques, and then realigning them to the nearest words in the embedding model [73, 74]. However, \u201cword-level differential privacy\u201d [35] faces challenges: it maintains the original sentence length, limiting variation, and can cause grammatical errors, such as replacing nouns with unrelated adjectives, due to not considering word types. Authorship attribution (AA) systems use stylistic features such as vocabulary, syntax, and grammar to identify an author. State-ofthe-art approaches involve using Support Vector Machines [64, 72], and more recently, fine-tuned LLMs like BertAA [3, 18, 64]. The \u201cValla\u201d benchmark and software package standardizes evaluation methods and includes fifteen diverse datasets [64]. Contrasting this, adversarial stylometry modifies an author\u2019s writing style to reduce AA systems\u2019 effectiveness [61]. Advancements in machine translation [67] have also introduced new methods based on adversarial training [60], though they sometimes struggle with preserving the original text\u2019s meaning. Semi-automated tools, such as \u201cAnonymouth\u201d [36], propose modifications for anonymity in a user\u2019s writing, requiring a significant corpus of the user\u2019s own texts. 
Moreover, recent advances in automatic paraphrasing using fine-tuned LLMs demonstrated a notable reduction in authorship attribution, but primarily for shorter texts [35]. To the best of our knowledge, there is no \u2013 and maybe there can be no \u2013 complete list of textual features contributing to the reidentification of individuals in text. As Narayanan and Shmatikov [40] highlight, \u201cany attribute can be identifying in combination with others\u201d [p. 3]. In text, we encounter elements like characters, words, and phrases, each carrying varying levels of meaning [19]. Single words convey explicit lexical meaning as defined by a vocabulary (e.g. \u201cemployee\u201d), while multiple words are bound by syntactic rules to express more complex thoughts implicitly in phrases (\u201cyoungest employee\u201d) and sentences (\u201cShe is the youngest employee\u201d). In addition, the European Data Protection Supervisor (EDPS) and Spanish Data Protection Agency (AEPD) [17] state that anonymization can never be fully automated and needs to be \u201ctailored to the nature, scope, context and purposes of processing as well as the risks of varying likelihood and severity for the rights and freedoms of natural persons\u201d [p. 7]. To take these insights and limitations into account, our semiautomated text sanitization tool leverages insights on the removal of identifying information but involves the whistleblower (the user) in the decision-making process. 3 RISK MODELLING AND RISK MITIGATION APPROACH In this section, we derive the problem statement (Section 3.2) from an analysis of different identifier types (Section 3.1). Following an overview of our approach (Section 3.3), we detail the anonymization operations for textual features (Section 3.4) and the automatic assignment of default concern levels (Section 3.5). Staufer, et al. 3.1 Identifier Types, Author Identifiability, and Event Details in the Whistleblowing Setting Whistleblowing reports convey information about persons, locations, and other entities. At least some of them need to be identified in order for the report to make any sense. The following fictitious example consists of three possible versions of a report in order to illustrate how different types of identifiers may contribute to the re-identification of the anonymously reporting employee Jane Doe, a member of the Colours and Lacquer group in the company COLOURIFICS. V1 On 24 January 2023, John Smith poured polyurethane resin into the clover-leaf-shaped sink of room R23. V2 After our group meeting on the fourth Tuesday of January 2023, the head of the Colours and Lacquer Group poured a toxin into the sink of room R23. V3 Somebody poured a liquid into a recepticle on some date in a room of the company. In V1, \u201cJohn Smith\u201d is the lexical identifier1 of the COLOURIFICS manager John Smith, as is \u201c24 January 2023\u201d of that date. Like John Smith, room R23 is a unique named entity in the context of the company and also identified lexically. \u201cPolyurethane resin\u201d is the lexical identifiers of a toxin (both are common nouns rather than names of individual instances of their category). The \u201cclover-leaf-shaped\u201d serves as a descriptive identifier of the sink. In V2, John Smith is still identifiable via the descriptive identifier \u201chead of the Colours and Lacquer Group\u201d, at least on 24 January 2023 (reconstructed with the help of a calendar and COLOURIFIC\u2019s personnel files). 
\u201cOur\u201d group meeting is an indexical identifier that signals that the whistleblower is one of the, say five employees in the Colours and Lacquer Group. The indexical information is explicit in V2 given the background knowledge that only employees in this group were co-present (for example, in the company\u2019s key-card logfiles). The same information may be implicit in V1 (if it can be seen from the company\u2019s organigram who John Smith is and who works in his group). Both versions provide for the inference that Jane Doe or any of her four colleagues must have been the whistleblower. If, in addition, only Jane Doe stayed behind \u201cafter the meeting\u201d, that detail in V2 descriptively identifies her uniquely2. V3 contains only identifiers of very general categories. Many other variants are possible (for example, referencing, in a V4, \u201cthe head of our group\u201d, which would enlarge the search space to all groups that had a meeting in R23 that day). The example illustrates the threats (i)-(iii) of Section 2.2. It also shows that the whistleblower\u2019s \u201canonymity\u201d (or lack thereof) is only one aspect of a more general and graded picture of who and what can be identified directly, indirectly, or not at all \u2013 and what this implies for the whistleblower\u2019s safety as well as for the report\u2019s effectiveness. 1The classification of identifiers is due to Phillips [47]. Note that all types of identifiers can give rise to personal data.. in the sense of the EU\u2019s General Data Protection Regulation (GDPR), Article 4(1): \u201cany information which is related to an identified or identifiable natural person\u201d, or personally identifiable data in the senses used in different US regulations. See [11] for legal aspects in the context of whistleblowing. 2If John Smith knows that only she observed him, she is also uniquely identified in V1, but for the sake of the analysis, we assume that only recorded data/text constitute the available knowledge. Inspired by Domingo-Ferrer\u2019s [14] three types of (data) privacy, we distinguish between the identifiability of the whistleblower Jane Doe (author 3 identifiability \ud835\udc34\ud835\udc56\ud835\udc51) and descriptions of the event or other wrongdoing, including other actors (event details \ud835\udc38\ud835\udc51\ud835\udc61). Given the stated context knowledge, we obtain an anonymity set of size \ud835\udc58= 1 for John Smith in V1 and V2. Jane Doe is in an anonymity set of size \ud835\udc58= 5 or even \ud835\udc58= 1 in V2. In V1, that set may be of size \ud835\udc58= 5 (if people routinely work only within their group) or larger (if they may also join other groups). Thus, the presence of a name does not necessarily entail a larger risk. Both are in an anonymity set containing all the company\u2019s employees at the reported date in V3 (assuming no outsiders have access to company premises). The toxin and the sink may be in a smaller anonymity set in V1 than in V2 or V3, and they could increase further (for example, if only certain employees have access to certain substances). Importantly, the identifiability of people and other entities in \ud835\udc38\ud835\udc51\ud835\udc61can increase the identifiability of the whistleblower. V3 illustrates a further challenge: the misspelled receptacle may be a typical error of a specific employee, and the incorrect placement of the temporal before the spatial information suggests that the writer may be a German or Dutch native speaker. 
In addition to errors, also correct variants carry information that stylometry can use for authorship attribution, which obviously can have a large effect on \ud835\udc34\ud835\udc56\ud835\udc51. The whistleblower would, on the one hand, want to reduce all such identifiabilities as much as possible. On the other hand, the extreme generalization of V3 creates a meaningless report that neither the company nor a court would follow up on. This general problem can be framed in terms of risk and utility, which will be described next. 3.2 The Whistleblowing Text-Writing Problem: Risk, Utility, And Many Unknowns A potential whistleblower faces the following problem: \u201cmake \ud835\udc34\ud835\udc56\ud835\udc51 as small as possible while retaining as much \ud835\udc38\ud835\udc51\ud835\udc61as necessary\u201d. We propose to address this problem by examining the text and possibly rewriting it. In principle, this is an instance of the oft-claimed trade-off between privacy (or other risk) and utility. In a simple world of known repositories of structured data, one could aim at determining the identifying problem (e.g., by database joins to identify the whistleblower due to some attributive information they reveal about themselves and by multiple joins for dependencies such as managers and teams) and compute how large the resulting anonymity set (or \ud835\udc34\ud835\udc56\ud835\udc51as its inverse) is. Given a well-defined measure of information utility, different points on the trade-off curve would then be welldefined and automatically derivable solutions to a mathematical optimization problem. However, texts offer a myriad of ways to express a given relational information. The space of information that could be crossreferenced, sometimes in multiple steps, is huge and often unknown to the individual. Consequently, in many cases, it is not possible 3We assume that the potential whistleblower is also the author of the report. This is the standard setting. Modifications for the situation in which a trusted third party writes the report on their behalf are the subject of future work. A Semi-automated Text Sanitization Tool for Mitigating the Risk of Whistleblower Re-Identification to determine the anonymity set size with any mathematical certainty. In addition, setting a threshold could be dangerous: even if the anonymity set is \ud835\udc58> 1, protection is not guaranteed \u2013 for example, the whole department of five people could be fired in retaliation. At the same time, exactly how specific a re-written text needs to be about \ud835\udc34\ud835\udc56\ud835\udc51and \ud835\udc38\ud835\udc51\ud835\udc61in order to make the report legally viable 4 cannot be decided without much more context knowledge. For example, the shape of the sink into which a toxic substance is poured probably makes no difference to the illegality, whereas the identity of the substance may affect it. These unknowns have repercussions both for tool design (Section 3.3) and for evaluation design (Section 5.1.1). 3.3 Risk Mitigation Approach and Tool Design: Overview Potential whistleblowers would be ill-served by any fully automated tool that claims to be able to deliver a certain mathematically guaranteed anonymization. Instead, we propose to provide them with a semi-automated tool that does have some \u201canonymity-enhancing defaults\u201d that illustrate with the concrete material how textual elements can be identifying and how they can be rendered less identifying. 
Our tool starts with the heuristic default assumption that identifiability is potentially always problematic and then lets the user steer our tool by specifying how \u201cconcerning\u201d specific individual elements are and choosing, interactively, the treatment of each of them that appears to give the best combination of \ud835\udc34\ud835\udc56\ud835\udc51and \ud835\udc38\ud835\udc51\ud835\udc61. By letting the author/user assign these final risk scores in the situated context of the evolving text, we enable them to draw on a maximum of implicit context knowledge. Our approach and tool proceed through several steps. We first determine typical textual elements that can constitute or be part of the different types of identifiers. As can be seen in Table 1, most of them can affect \ud835\udc34\ud835\udc56\ud835\udc51and \ud835\udc38\ud835\udc51\ud835\udc61. Since identification by name (or, by extension, pronouns that co-reference names) does not even need additional background knowledge and since individuals are more at risk than generics, we classify some textual features as \u201chighly concerning\u201d, others as having \u201cmedium concern\u201d, and the remainder as \u201cpotentially concerning\u201d. We differentiate between two types of proper nouns. Some names refer to typical \u201cnamed entities\u201d, which include, in particular, specific people, places, and organizations, as well as individual dates and currency amounts. These pose particular person-identification risk in whistleblowing scenarios.5 \u201cOther proper nouns\u201d, such as titles of music pieces, books and artworks generally only pose medium risk. For stylometric features, we explicitly categorize outof-vocabulary words, misspelled words, and words that are surprising given the overall topic of the text. Other low-level stylometric features, such as punctuation patterns, average word and sentence length, or word and phrase repetition, are not (and in many cases, such as with character n-gram pattern, cannot be [25]) explicitly identified. Instead, we implicitly/indirectly account for them as a byproduct of the LLM-based rephrasing. For all other parts of 4\u201ca situation in which a plan, contract, or proposal is able to be legally enforced\u201d, https://ludwig.guru/s/legally+viable, retrieved 2024-01-02 5PERSON, GPE (region), LOC (location), EVENT, LAW, LANGUAGE, DATE, TIME, PERCENT, MONEY, QUANTITY, and ORDINAL speech, we propose to use replacement strategies based on dataanonymization operations that are proportional to the risk (Table 2). Given the complexities of natural language and potential context information, the latter two operations are necessarily heuristic; thus, our tool applies the classification and the risk mitigation strategy as a default which can then be adapted by the user. Table 1: Overview of the approach from identifier types to default risk. 
Identifier Type Textual Feature Aid/Edt Default Risk Lexical Names of named entities \ud835\udc34\ud835\udc56\ud835\udc51,\ud835\udc38\ud835\udc51\ud835\udc61 High Lexical Other proper nouns \ud835\udc38\ud835\udc51\ud835\udc61 Medium Indexical Pronouns \ud835\udc34\ud835\udc56\ud835\udc51,\ud835\udc38\ud835\udc51\ud835\udc61 High Descriptive Common nouns \ud835\udc38\ud835\udc51\ud835\udc61,(\ud835\udc34\ud835\udc56\ud835\udc51) Potential Descriptive Modifiers \ud835\udc38\ud835\udc51\ud835\udc61,(\ud835\udc34\ud835\udc56\ud835\udc51) Potential Descriptive (via pragmatic inferences) Out-of-vocabulary wordsa \ud835\udc34\ud835\udc56\ud835\udc51, (\ud835\udc38\ud835\udc51\ud835\udc61) Medium Misspelled wordsa \ud835\udc34\ud835\udc56\ud835\udc51 Medium Surprising wordsb \ud835\udc34\ud835\udc56\ud835\udc51 Medium Other stylometric features \ud835\udc34\ud835\udc56\ud835\udc51 N/Ac aTreated as noun. bNouns or proper nouns. cNot explicitly specified. Indirectly accounted for through rephrasing. Table 2: Mitigation strategies based on assigned risk (LvC = level of concern, NaNEs = names of named entities, OPNs = other proper nouns, CNs = common nouns, Mods = modifiers, PNs = proper nouns, OSFs = other stylometric features). LvC NaNEs OPNs CNs Mods PNs OSFs High Suppr. Suppr. Suppr. Suppr. Suppr. Pert. Medium Pert. Generl. Generl. Pert. Suppr. Pert. 3.4 Anonymization Operations for Words and Phrases In our sanitization pipeline, we conduct various token removal and replacement operations based on each token\u2019s POS tag and its assigned level of concern (LvC), which can be \u201cpotentially concerning\u201d, \u201cmedium concerning\u201d, or \u201chighly concerning\u201d. Initially, we consider all common nouns, proper nouns, adjectives, adverbs, pronouns, and named entities6 as potentially concerning. Should the user or our automatic LvC estimation (see subsection 3.5) elevate the concern to either medium or high, we apply anonymization operations that are categorized into generalization, perturbation, and suppression. Specific implementation details are elaborated on in section 4. 6By this, we mean names of named entities, e.g. \u201cBerlin\u201d for GPE, but we use named entities instead for consistency with other literature. Staufer, et al. 3.4.1 Generalization. The least severe type of operation targets common nouns and other proper nouns marked as medium concerning. We assume their specificity (not necessarily their general meaning) poses re-identification risks. Thus, more general terms can be used to preserve meaning while mitigating the risk of re-identification. \u2022 Common nouns like \u201ccar\u201d are replaced with hypernyms from WordNet, such as \u201cvehicle\u201d. \u2022 Other proper nouns become broader Wikidata terms, e.g. \u201cpolitical slogan\u201d for \u201cMake America Great Again\u201d. 3.4.2 Perturbation. This applies to modifiers7 and named entities annotated as medium concerning. In this process, original words are retained but are assigned zero weight in the paraphrase generation, along with their synonyms and inflections. This approach relies on the LLM to either (a) find similar but non-synonymous replacement words or (b) completely rephrase the sentence to exclude these words. For example, \u201cMicrosoft, the giant tech company, ...\u201d could be paraphrased as \u201cA leading corporation in the technology sector...\u201d. 3.4.3 Suppression. 
The most severe type of operation is applied to common nouns, other proper nouns, modifiers and named entities annotated as highly concerning, and to pronouns that are either medium concerning or highly concerning. We assume these words are either too unique or cannot be generalized. \u2022 For common nouns and other proper nouns, dependent phrases are omitted (e.g., \u201cWe traveled to the London Bridge in a bus.\u201d becomes \u201cWe traveled in a bus.\u201d). \u2022 Modifiers are removed (e.g., \u201cHe used to be the principal dancer\u201d becomes \u201cHe used to be a dancer\u201d). \u2022 Named entities are replaced with nondescript phrases (e.g., \u201cBarack Obama\u201d becomes \u201ccertain person\u201d). \u2022 Pronouns are replaced with \u201csomebody\u201d (e.g., \u201cHe drove the bus.\u201d becomes \u201cSomebody drove the bus.\u201d). 3.5 Automatic Level of Concern (LvC) Estimation In our whistleblowing context, we deem the detection of outsidedocument LvC via search engine queries, as proposed by Papadopoulou et al. [44] (refer to related work in 2.3), impractical. This is because whistleblowers are typically not well-known, and the information they disclose is often novel, not commonly found on the internet. Therefore, instead of relying on external data, we focus on innerdocument LvC, setting up a rule-based system and allowing users to adjust the LvC based on their contextual knowledge. Further, we assume that this pre-annotation of default concern levels raises awareness for potential sources of re-identification. \u2022 Common nouns and modifiers, by default, are potentially concerning. As fundamental elements in constructing a text\u2019s semantic understanding, they could inadvertently reveal re-identifying details like profession or location. However, without additional context, their LvC is not definitive. \u2022 Other proper nouns, unexpected words, misspelled words and out-of-vocabulary words default to medium 7The current version of our tool considers only adjectives and adverbs as modifiers. concerning. Unlike categorized named entities, other proper nouns only indirectly link to individuals, places, or organizations. Unexpected words may diminish anonymity, according to Papadopoulou et al. [44], while misspelled or out-ofvocabulary words can be strong stylometric indicators. \u2022 Named entities are considered highly concerning by default, as they directly refer to specific entities in the world, like people, organizations, or locations, posing a significant re-identification risk. 4 IMPLEMENTATION Our semi-automated text sanitization tool consists of a sanitization pipeline (Sections 4.1 and 4.2) and a user interface (Section 4.3). The pipeline uses off-the-shelf Python NLP libraries (spaCy, nltk, lemminflect, constituent_treelib, sentence-transformers) and our paraphrasing-tuned FLAN T5 language model. FLAN T5\u2019s errorcorrecting capabilities [39, 42] aid in reconstructing sentence fragments after words or phrases with elevated levels of concern have been removed. The user interface is built with standard HTML, CSS, and JavaScript. Both components are open source and on GitHub8. 4.1 Anonymization Operations for Words and Phrases 4.1.1 Generalization. Common nouns undergo generalization by first retrieving their synsets and hypernyms from WordNet, followed by calculating the cosine similarity of their sentence embeddings with those of the hypernyms. 
This calculation ranks the hypernyms by semantic similarity to the original word, enabling the selection of the most suitable replacement. By default, we select the closest hypernym. Other proper nouns are generalized as follows: We first query Wikipedia to identify the term, using the allmpnet-base-v2 sentence transformer to disambiguate its meaning through cosine similarity. Next, we find the most relevant Wikidata QID and its associated hierarchy. We then flatten these relationships and replace the entity with the next higher-level term in the hierarchy. 4.1.2 Perturbation. We add randomness to modifiers and named entities through LLM-based paraphrasing, specifically, by using the FLAN-T5 language model, which we fine-tuned for paraphrase generation (Section 4.2). To achieve perturbation9, we give the tokens in question and their synonyms and inflections zero weight during next token prediction. This forces the model to either use a less probable word (controlled by the temperature hyperparameter) or rephrase the sentence to omit the token. Using a LLM for paraphrase generation has the added benefit that it mends fragmented sentences caused by token suppression and yields a neutral writing style, adjustable through the no_repeat_ngram_size hyperparameter. 8https://github.com/dimitristaufer/Semi-Automated-Text-Sanitization 9The strategies \u201csuppression\u201d and \u201cgeneralization\u201d are straightforward adaptations of the classical methods for structured data. Perturbation \u201creplaces original values with new ones by interchanging, adding noise or creating synthetic data\u201d [7]. Interchanging would create ungrammatical texts, and noise can only be added to certain data. We, therefore, generate synthetic data via LLM-Rephrasing, disallowing the highly specific words / terms and their synonyms while producing a new but grammatical text. A Semi-automated Text Sanitization Tool for Mitigating the Risk of Whistleblower Re-Identification 4.1.3 Suppression. Common nouns and other proper nouns are suppressed by removing the longest phrase containing them with the constituent_treelib library. Sentences with just one noun or proper noun are entirely removed. Otherwise, the longest phrase, be it a main clause, verb phrase, prepositional phrase, or noun phrase, is identified, removed, and replaced with an empty string. Modifiers are removed (e.g., \u201cHe is their principal dancer\u201d \u2192\u201cHe is their \u00b7 dancer\u201d). Pronouns are replaced with the static string \u201csomebody\u201d. For example, \u201cHis apple\u201d \u2192\u201cSomebody apple\u201d (after replacement) \u2192\u201cSomebody\u2019s apple\u201d (after paraphrase generation). Named entities are replaced with static phrases based on their type. For example, \u201cJohn Smith sent her 2 Million Euros from his account in Switzerland\u201d \u2192\u201ccertain person sent somebody certain money from somebody account in certain location\u201d (after suppressing pronouns and named entities) \u2192\u201cA certain individual sent a specific amount of money to whoever\u2019s account in some particular place\u201d (after paraphrase generation). 4.2 Paraphrase Generation We fine-tuned two variants of the FLAN T5 language models, FLAN T5Base and FLAN T5XL, using the \u201cchatgpt-paraphrases\u201d dataset, which uniquely combines three large paraphrasing datasets for varied topics and sentence types. 
It includes question paraphrasing from the \u201cQuora Question Pairs\u201d dataset, context-based paraphrasing from \u201cSQuAD2.0\u201d, and summarization-based paraphrases from the \u201cCNN-DailyMail News Text Summarization\u201d dataset. Furthermore, it was enriched with five diverse paraphrase variants for each sentence pair generated by the gpt-3.5-turbo model, resulting in 6.3 million unique pairs. This diversity enhances our model\u2019s paraphrasing capabilities and reduces overfitting. For training, we employed Parameter-Efficient Fine-Tuning (PEFT) using LoRA (Low-Rank Adaptation), which adapts the model to new data without the need for complete retraining. We quantized the model weights to enhance memory efficiency using bitsandbytes. We trained FLAN T5Base on a NVIDIA A10G Tensor Core GPU for one epoch (35.63 hours) on 1 mio. paraphrase pairs, using an initial learning rate of 1e-3. After one epoch, we achieved a minimum Cross Entropy loss of 1.195. FLAN T5XL was trained for one epoch (22.38 hours) on 100,000 pairs and achieved 0.88. For inference, we configure max_length to 512 tokens to cap the output at T5\u2019s tokenization limit. do_sample is set to True, allowing for randomized token selection from the model\u2019s probability distribution, enhancing the variety of paraphrasing. Additionally, parameters like temperature, no_repeat_ngram_size, and length_penalty are adjustable via the user interface, providing control over randomness, repetition avoidance, and text length. 4.3 User Interface Our web-based user interface communicates with the sanitization pipeline via Flask endpoints. It visualizes token LvCs (gray, yellow, red), allows dynamic adjustments of these levels, and starts the sanitization process. Moreover, a responsive side menu allows users to select the model size and tune hyperparameters for paraphrasing. The main window (Figure 1) shows the original and the sanitized texts, with options for editing and annotating. Figure 1: The UI\u2019s main window showing the input text (left) and the sanitized text (right). We made up the input and converted it to \u201cInternet Slang\u201d (https://www.noslang.com/ reverse) to showcase how an extremely obvious writing style is neutralized. 5 EVALUATION We evaluate our tool quantitatively (Sections 5.1 and 5.2) and demonstrate its workings and usefulness with an example from a realworld whistleblower testimony (Section 5.3). They complement each other in that the first focuses on identification via writing style and the second two on identification via content. 5.1 Re-Identification Through Writing Style: IMDb62 Movie Reviews Dataset 5.1.1 Evaluation metrics. The large unknowns of context knowledge imply that evaluations cannot rely on straightforward measurement methods for \ud835\udc34\ud835\udc56\ud835\udc51and \ud835\udc38\ud835\udc51\ud835\udc61. We, therefore, work with the following proxies. Text-surface similarities To understand the effect of language model size and hyperparameter settings on lexical and syntactic variations from original texts, we utilize two ROUGE scores: ROUGE-L (Longest Common Subsequence) to determine to which extent the overall structure and sequence of information in the text changes. And ROUGE-S (Skip-Bigram) to measure word pair changes and changes in phrasings. Risk Without further assumptions about the (real-world casespecific) background knowledge, it is impossible to exactly quantify the ultimate risk of re-identification (see Section 3.1). 
We therefore only measure the part of \ud835\udc34\ud835\udc56\ud835\udc51where (a) the context knowledge is more easily circumscribed (texts from the same author) and (b) benchmarks are likely to generalize across case studies: the risk of re-identification based on stylometric features, measured as authorship attribution accuracy (AAA). Utility It is also to be expected that the rewriting reduces \ud835\udc38\ud835\udc51\ud835\udc61, yet again it is impossible to exactly determine (without realworld case-specific background knowledge and legal assessment) whether the detail supplied is sufficient to allow for legal follow-up of the report or even only to create alarm that could then be followed up. We, therefore, measure \ud835\udc38\ud835\udc51\ud835\udc61utility through two proxies: a semantic similarity measure and a sentiment classifier. To estimate semantic similarity (SSim), we calculate the cosine similarity of both texts\u2019 sentence Staufer, et al. embeddings using the SentenceTransformer10 Python framework. To determine the absolute sentiment score difference (SSD), we classify the texts\u2019 sentiment using an off-the-shelf BERT-based classifier11 from Hugging Face Hub. All measures are normalized to take on values between 0 and 1, and although the absolute values of the scores between these endpoints (except for authorship attribution) cannot be interpreted directly, the comparison of relative orders and changes will give us a first indication of the impacts of different rewriting strategies on \ud835\udc34\ud835\udc56\ud835\udc51and \ud835\udc38\ud835\udc51\ud835\udc61. 5.1.2 Data, language models, and settings. We investigate protection against authorship attribution attacks with the popular IMDb62 movie reviews dataset [59], which contains 62,000 movie reviews by 62 distinct authors. We assess AAA using the \u201cValla\u201d software package [64], specifically its two most effective models: one based on character n-grams and the other on BERT. This approach covers both ends of the the authorship attribution spectrum [3], from lowlevel, largely topic-independent character n-grams to the contextrich features of the pre-trained BERT model. The evaluation was conducted on AWS EC2 \u201cg4dn.xlarge\u201d instances with NVIDIA T4 GPUs. We processed 130 movie reviews for each of the 62 authors across twelve FLAN T5 configurations, totaling 96,720 texts with character counts spanning from 184 to 5248. Each review was sanitized with its textual elements assigned their default LvCs (see 3.5). Both model sizes, \u201cBase\u201d (250M parameters) and \u201cXL\u201d (3B parameters) were tested with temperature values T of 0.2, 0.5, and 0.8, as well as with no_repeat_ngram_size (NRNgS) set to 0 or 2. The former, temperature, controls the randomness of the next-word predictions by scaling the logits before applying softmax, which makes the predictions more or less deterministic. For our scenario, this causes smaller or greater perturbation of the original text\u2019s meaning. The latter, NRNgS, disallows n consecutive tokens to be repeated in the generated text, which for our scenario means deviating more or less from the original writing style. The Risk-utility trade-offs of all configurations are compared to three baselines: \ud835\udc35\ud835\udc4e\ud835\udc60\ud835\udc52\ud835\udc59\ud835\udc56\ud835\udc5b\ud835\udc521 is the original text. 
In Baseline2, similar to state-of-the-art related work [24, 44], we only redact named entities by replacing them with placeholders, such as “[PERSON]” and do not utilize our language model. Similarly, in Baseline3 we only remove named entities but rephrase the texts using our best-performing model configuration regarding AA protection. 5.1.3 Results. The n-gram-based and BERT-based “Valla” classifiers achieved AAA baselines of 98.81% and 98.80%, respectively. As expected, the AAA and text-surface similarities varied significantly depending on the model configuration. The XL-model generated texts with much smaller ROUGE-L and ROUGE-S scores, i.e. more lexical and syntactic deviation from the original texts. Using NRNgS = 2 slightly decreased AAA in all configurations while not significantly affecting semantic similarity, which is why we use this for all the following results. 10all-mpnet-base-v2 11bert-base-multilingual-uncased [Figure 2: Risk-utility trade-offs. (a) Risk-utility trade-off between AAA and SSim; (b) risk-utility trade-off between AAA and SSD. Curves compare BASE and XL (NRNgS = 0 or 2) against Baselines 1-3.] Figure 2 (a) shows the risk-utility trade-off between AAA and SSim. “Top-left” (0,1) would be the fictitious best result. For each model configuration, increasing T caused AAA to drop but also decreased utility by ~8%/4% (BASE/XL) for SSim and ~12%/3% (BASE/XL) for SSD. The figure shows that the investigated settings create a trade-off curve, with XL (T = 0.8, NRNgS = 2) allowing for a large reduction in AAA (to 31.22%, as opposed to the original text baseline1 of 98.81%), while BASE (T = 0.2, NRNgS = 0) retains the most SSim (0.731, as opposed to the original texts, which have SSim = 1 to themselves). Figure 2 (b) shows the risk-utility trade-off between AAA and SSD (the plot shows 1-SSD to retain “top left” as the optimal point). The results mirror those of AAA-SSim, except for baseline2: because only named entities (not considered sentiment-carrying) are removed, the sentiment score changes only minimally. 5.1.4 Discussion. In summary, all our models offer a good compromise between baselines representing state-of-the-art approaches. They have lower risk and higher or comparable utility compared to baseline3, where only named entities are removed.
This indicates the effectiveness of LLM-based rephrasing in authorship attribution. Baseline2, which involves suppressing named entities and rephrasing, shows the lowest risk due to the limited content left for the LLM to reconstruct, resulting in mostly short, arbitrary sentences, as reflected by low SSim scores. 5.2 Re-Identification Through Content: European Court of Human Rights Cases Pilán et al.'s [48] Text Anonymization Benchmark (TAB) includes a corpus of 1,268 English-language court cases from the European Court of Human Rights, in which directly- and quasi-identifying nominal and adjectival phrases were manually annotated. It solves several issues that previous datasets have, such as being “pseudoanonymized”, including only few categories of named entities, not differentiating between identifier types, containing only famous individuals, or being small. TAB's annotation is focused on protecting the identity of the plaintiff (also referred to as “applicant”). 5.2.1 Evaluation Metrics. TAB introduces two metrics, entity-level recall (ER_di/qi) to measure privacy protection and token-level weighted precision (WP_di+qi) for utility preservation. Entity-level means that an entity is only considered safely removed if all of its mentions are. WP_di+qi uses BERT to determine the information content of a token t by estimating the probability of t being predicted at position i. Thus, precision is low if many tokens with high information content are removed. Both metrics use micro-averaging over all annotators to account for multiple valid annotations. Because our tool automatically rephrases the anonymized texts, we make two changes. First, since we cannot reliably measure WP_di+qi, we fall back to our previously introduced proxies for measuring E_dt utility. Secondly, we categorize newly introduced entities from LLM hallucination that may change the meaning of the sanitized text. The legal texts, which must prefer direct and commonly-known identifiers, are likely to present none or far fewer of the background-knowledge-specific re-identification challenges of our domain. Thus, again, the metrics used here should be regarded as proxies. Risk: We measure A_id using ER_di/qi and count slightly rephrased names of entities as “not removed” using the Levenshtein distance. For example, rephrasing “USA” as “U.S.A” has the same influence on ER_di/qi as failing to remove “USA”. Utility: We estimate E_dt through SSim. In addition, we determine all entities in the sanitized text that are not in the original text (again using the Levenshtein distance). We categorize them into (1) rephrased harmful entities (semantically identical to at least one entity that should have been masked), (2) rephrased harmless entities, and (3) newly introduced entities.
We measure semantic similarity by calculating the cosine similarity of each named entity phrase\u2019s sentence embedding to those in the original text. 5.2.2 Data, language models, and settings. The TAB corpus comprises the first two sections (introduction and statement of facts) of each court case. For our evaluation, we use the test split which contains 127 cases of which each has, on average, 2174 characters (356 words) and 13.62 annotated phrases. We perform all experiments using the \u201cXL\u201d (3B parameter) model with temperature values T of 0.2, 0.5, and 0.8, as well as with NRNgS set to 2. 5.2.3 Results and Discussion. \ud835\udc38\ud835\udc45\ud835\udc51\ud835\udc56/\ud835\udc5e\ud835\udc56and SSim vary slightly, but not significantly for different T values. For T = 0.2, we get an entitylevel recall on quasi-identifiers (\ud835\udc38\ud835\udc45\ud835\udc5e\ud835\udc56) of 0.93, which is slightly better than Pil\u00e1n et al.\u2019s [48] best performing model trained directly on the TAB corpus (0.92). However, our result for direct identifiers \ud835\udc38\ud835\udc45\ud835\udc51\ud835\udc56 is 0.53, while theirs achieves 1.0, i.e. does not miss a single highrisk entity. Closer inspection reveals that our low results for direct identifiers come mainly from (i) the SpaCy NER failing to detect the entity type CODE (e.g. \u201c10424/05\u201d) and (ii) the LLM re-introducing names of named entities that are spelled slightly differently (e.g. \u201cMr Abdisamad Adow Sufi\u201d instead of \u201cMr Abdisamad Adow Sufy\u201d). Regarding utility, all three model configurations achieve similar SSim scores ranging from 0.67 (T = 0.8) to 0.69 (T = 0.2). These results fall into the same range achieved using the IMDb62 movie reviews dataset. However, in addition to re-introducing entities that should have been masked, we found that, on average, the LLM introduces 5.24 new entities (28.49%) per court case. While some of these, depending on the context, can be considered harmless noise (e.g. \u201cEuropean Supreme Tribunal\u201d), manual inspection revealed that many change the meaning and legitimacy of the sanitized texts. For example, 4.7% contain names of people that do not appear in the original text, 43.3% contain new article numbers, 20.5% contain new dates, and 11.8% include names of potentially unrelated countries. The frequency of such hallucinations could also be a consequence of the specific text genre of court cases, and future work should examine to what extent this also occurs in whistleblower testimonies and how it affects the manual post-processing over the generated text that is previewed in our semi-automated tool. 5.3 Re-Identification Through Content: Whistleblower Testimony Excerpts We further investigated our tool\u2019s rewritings of two excerpts (Tables 3, 4) from a whistleblower\u2019s hearing in the Hunter Biden tax evasion case, as released by the United States House Committee on Ways and Means.12 This qualitative view on our results provides for a detailed understanding of which identifiers were rewritten and how.13 5.3.1 Approach. First, we compiled the essential \ud835\udc38\ud835\udc51\ud835\udc61upon which we based our analysis on. Next, we assessed the textual features in both excerpts to enhance our tool\u2019s automatic Level of Concern (LvC) estimations, aiming for the lowest author identifiability (\ud835\udc34\ud835\udc56\ud835\udc51). Finally, we input these annotations into the user interface to produce the rewritings. 
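As an illustration of the risk and utility proxies described in Section 5.2.1 (a minimal sketch, not the tool's actual implementation): SSim is the cosine similarity of sentence embeddings, and slightly rephrased entities are matched with the Levenshtein distance. The snippet assumes the sentence-transformers package and the all-mpnet-base-v2 model named in footnote 10; the function names and the distance threshold are illustrative only.

```python
# Sketch of the SSim and Levenshtein-based entity checks (assumptions noted above).
import numpy as np
from sentence_transformers import SentenceTransformer

_model = SentenceTransformer("all-mpnet-base-v2")  # embedding model from footnote 10

def ssim(original: str, sanitized: str) -> float:
    """Cosine similarity between the sentence embeddings of two texts."""
    a, b = _model.encode([original, sanitized])
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def levenshtein(s: str, t: str) -> int:
    """Plain dynamic-programming edit distance."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (cs != ct)))
        prev = cur
    return prev[-1]

def still_present(masked_entity: str, sanitized_entities: list[str], max_dist: int = 2) -> bool:
    """Counts a slightly rephrased entity (e.g. 'USA' vs. 'U.S.A') as not removed."""
    return any(levenshtein(masked_entity.lower(), e.lower()) <= max_dist
               for e in sanitized_entities)
```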
5.3.2 \ud835\udc38\ud835\udc51\ud835\udc61 and \ud835\udc34\ud835\udc56\ud835\udc51. Based on the information from the original texts in tables 3 and 4 alone, we define \ud835\udc38\ud835\udc51\ud835\udc61 as follows, with \ud835\udc38\ud835\udc51\ud835\udc611, \ud835\udc38\ud835\udc51\ud835\udc612 being a subset of excerpt 1 and \ud835\udc38\ud835\udc51\ud835\udc613 a subset of excerpt 2: \ud835\udc38\ud835\udc51\ud835\udc61 := {\u201cThe Tax Division approved charges but for no apparent reason changed their decision to a declination.\u201d, \u201cThe declination occurred after significant effort was put into the investigation by the whistleblower.\u201d, \u201cIn their effort in doing what is right, the whistleblower suffered on a professional and personal level.\u201d} 12https://waysandmeans.house.gov/?p=39854458 [Accessed 29-April-2024], \u201c#2\u201d 13To answer these questions, it is immaterial whether the text sample describes a concrete act of wrongdoing (as in our fictitious Ex. 1) or not (as here). In \ud835\udc52\ud835\udc65\ud835\udc501 (Table 3), we classified \u201cjoining the case\u201d (first-person indexical) and implications of a nation-wide investigation as highly concerning. Additionally, we marked all \u201ccase\u201d mentions as highly concerning to evaluate consistent suppression. \u201cDOJ Tax\u201d, being a stylometric identifier because it is no official abbreviation, received a medium LvC, and \u201cthousands of hours\u201d was similarly categorized, potentially indicating the author\u2019s role as lead in the case. In \ud835\udc52\ud835\udc65\ud835\udc502 (Table 4), we classified the lexical identifier \u201c2018\u201d, which could be cross-referenced relatively easily, as well as all descriptive identifiers concerning the author\u2019s sexual orientation and outing as highly concerning. Furthermore, emotional descriptors (\u201csleep, vacations, gray hairs, et cetera\u201d) are given medium LvC, similar to references of case investment (\u201cthousands of hours\u201d and \u201c95 percent\u201d), mirroring the approach from \ud835\udc52\ud835\udc65\ud835\udc501. 5.3.3 Results and Discussion. \ud835\udc38\ud835\udc65\ud835\udc501\ud835\udc60\ud835\udc4e\ud835\udc5b\ud835\udc56\ud835\udc61\ud835\udc56\ud835\udc67\ud835\udc52\ud835\udc51 retains \ud835\udc38\ud835\udc51\ud835\udc612, but not \ud835\udc38\ud835\udc51\ud835\udc611, as \u201cDOJ Tax\u201d is replaced with \u201cproper noun\u201d due to the nonexistence of a corresponding entity in Wikidata. Consequently, it defaults to the token\u2019s POS tag. For \ud835\udc34\ud835\udc56\ud835\udc51, all identified risks were addressed (e.g., \u201cconsiderable time\u201d replaces \u201cthousands of hours\u201d). However, the generalization of \u201ccase\u201d led to inconsistent terms like \u201cmatter\u201d, \u201csituation\u201d, and \u201cissue\u201d due to the \ud835\udc41\ud835\udc45\ud835\udc41\ud835\udc54\ud835\udc46= 2 setting. This is beneficial for reducing authorship attribution accuracy but may confuse readers not familiar with the original context. \ud835\udc38\ud835\udc65\ud835\udc502\ud835\udc60\ud835\udc4e\ud835\udc5b\ud835\udc56\ud835\udc61\ud835\udc56\ud835\udc67\ud835\udc52\ud835\udc51 maintains parts of \ud835\udc38\ud835\udc51\ud835\udc613, though terms like \u201cX amount of time\u201d and \u201cY amount of the investigation\u201d add little value due to their lack of specificity. 
Notably, \u201camount o of\u201d represents a rare LLM-induced spelling error, underscoring the need for human editing for real-world use. The emotional state\u2019s broad generalization to \u201cphysical health, leisure, grey body covering\u201d is odd and less suitable than a singular term would be. Despite this, \ud835\udc38\ud835\udc65\ud835\udc502\ud835\udc60\ud835\udc4e\ud835\udc5b\ud835\udc56\ud835\udc61\ud835\udc56\ud835\udc67\ud835\udc52\ud835\udc51 effectively minimizes \ud835\udc34\ud835\udc56\ud835\udc51by addressing all other identified risks. Table 3: LvC-annotated whistleblower testimony \ud835\udc52\ud835\udc65\ud835\udc501 (excerpt 1) with identifiers (top) and \ud835\udc52\ud835\udc65\ud835\udc501\ud835\udc60\ud835\udc4e\ud835\udc5b\ud835\udc56\ud835\udc61\ud835\udc56\ud835\udc67\ud835\udc52\ud835\udc51(bottom). Original: \u201cPrior to joining the case, DOJ Tax had approved tax charges for the case and the case was in the process of progressing towards indictment [...] After working thousands of hours on that captive case, poring over evidence, interviewing witnesses all over the U.S., the decision was made by DOJ Tax to change the approval to a declination and not charge the case.\u201d Lexical IDs: DOJ Tax; U.S. Indexical IDs: [implicit: me] joining the case (first person) Descriptive IDs: interviewing witnesses all over the U.S. (nationwide investigation); thousands of hours (author involvement) Sanitized: \u201cThe proper noun had approved tax charges for the matter and the situation was moving towards indictment, but after spending considerable time on that captive matter, poring over evidence, the decision was made by proper noun to defer the approval and not charge the issue.\u201d Table 4: LvC-annotated whistleblower testimony \ud835\udc52\ud835\udc65\ud835\udc502 (excerpt 2) with identifiers (top) and \ud835\udc52\ud835\udc65\ud835\udc502\ud835\udc60\ud835\udc4e\ud835\udc5b\ud835\udc56\ud835\udc61\ud835\udc56\ud835\udc67\ud835\udc52\ud835\udc51(bottom). Original: \u201cI had opened this investigation in 2018, have spent thousands of hours on the case, worked to complete 95 percent of the investigation, have sacrificed sleep, vacations, gray hairs, et cetera. My husband and I, in identifying me as the case agent, were both publicly outed and ridiculed on social media due to our sexual orientation.\u201d Lexical IDs: 2018; thousands of hours; 95 percent Indexical IDs: me as the case agent (role of author); My husband (author\u2019s marital status) Descriptive IDs: I had opened this investigation in 2018 (can be cross-referenced); My husband and I + publicly outed and ridiculed [...] due to our sexual orientation (author\u2019s sexual orientation and public event); sacrificed sleep, [...], gray hairs (emotional state) Sanitized: \u201cI had opened this investigation on a certain date, had spent X amount of time on the case, worked to complete Y amount of the investigation, sacrificing my physical health, leisure, grey body covering, etc.\u201d 6 CONCLUSIONS, LIMITATIONS AND FUTURE WORK We evaluated our our tool\u2019s effectiveness using ECHR court cases and excerpts from a real-world whistleblower testimony and measured the protection against authorship attribution attacks and information loss statistically using the popular IMDb62 movie reviews dataset. 
Our method can significantly reduce authorship attribution accuracy from 98.81% to 31.22%, while preserving up to 73.1% of the original content\u2019s semantics, as measured by the established cosine similarity sentence embeddings. Our qualitative analysis revealed that minor wording changes significantly impact \ud835\udc34\ud835\udc56\ud835\udc51and \ud835\udc38\ud835\udc51\ud835\udc61, and highlighted our tool\u2019s strengths in reducing \ud835\udc34\ud835\udc56\ud835\udc51 through generalization, perturbation, and suppression. Our tool\u2019s usefulness in real-world whistleblowing scenarios remains to be tested, particularly with human users. Challenges arise from the possibility of the tool introducing unrelated entities through model hallucination and its limitations in addressing complex syntactic structures and co-references. Still, our LLM-based approach has proved to be promising in matters of counteracting the limitations of state-of the art approaches. The fine-tuned model effectively reduces authorship attribution and improves text coherence \u2013 two of the main shortcomings of previous works. At the same time, it introduces novel challenges, such as limited control over the accuracy and consistency of the rephrased content. Future work will focus on refining our tool through evaluations involving human participants and domain experts. Given the crucial importance of context knowledge for re-identification risks and the challenges in identifying all textual features that contribute to re-identification, future work will also pay increasing attention to enhancing anonymization awareness. This would not only apply to the whistleblowing use case, but extend to the protection of free speech in other areas too, including journalism, political activism, and social media. A Semi-automated Text Sanitization Tool for Mitigating the Risk of Whistleblower Re-Identification We envision an interactive awareness tool as a more dynamic alternative to conventional static writing guides on whistleblowing platforms. This tool would incorporate insights from our research as well as insights from practitioners, aiming to educate users about subtle textual nuances that could pose re-identification risks, thereby creating a deeper understanding and more effective use of anonymization practices in high-risk disclosures. At the same time, we need to draw on practitioners\u2019 and legal experts\u2019 knowledge to better understand what textual changes are detrimental (or conducive) to utility and incorporate these insights into the guidance provided by the awareness tool. 7 ETHICAL CONSIDERATIONS, RESEARCHERS POSITIONALITY, AND POSSIBLE ADVERSE IMPACTS In the following paragraphs, we discuss five key challenges, interweaving a potential adverse impacts statement, an ethical considerations statement (what we have done or can do), and positionalities. We are computer scientists (some of us with a background also in social and legal sciences) who have programming expertise (instrumental for mitigating challenges C1\u2013C4), understanding of data protection law (C1), research expertise in bias and fairness, including methods for risk mitigation when working with LLMs (C2), and collaborators with human-subjects studies expertise (C3). None of us has been a whistleblower. We outline below how future collaborators and/or deployers with other positionalities can contribute relevant complementary expertise on C1\u2013C5. C1 \u2013 Data Protection: Our tool does not collect or store any user data. 
Original as well as re-written texts are discarded after each run, and they are not used to train the model further. Our tool does not require an internet connection beyond the initial downloading of pre-trained language models and optional queries to Wikidata servers. While querying Wikidata enhances the efficacy of our tool by enabling the generalization of certain words, users should be aware that these queries might expose confidential information to external servers. To mitigate this risk, our implementation remains functional when offline, albeit with slightly reduced efficacy due to the lack of real-time Wikidata look-ups. In a real-life deployment, technical and organizational measures would need to be implemented in order to safeguard the confidential personal or organizational data that remain in the reports; this will also require security and legal expertise. C2 \u2013 Bias and (Un-)fairness: Our tool may inadvertently introduce or perpetuate biases present in the training data. FLAN T5 was trained on C4, which is generated from the April 2019 Common Crawl dataset. Dodge et al. [13] discovered that C4 has a \u201cnegative sentiment bias against Arab identities\u201d and excludes \u201cdocuments associated with Black and Hispanic authors\u201d as well as documents \u201cmentioning sexual orientations\u201d [p. 8] by its blocklist filter. Therefore, similar to other pre-trained models [32], FLAN T5 is \u201cpotentially vulnerable to generating equivalently inappropriate content or replicating inherent biases\u201d [8, p. 52]. This may bias our level of concern measures. For example, certain names, professions, or locations may be classified as \u201cmedium concerning\u201d or \u201chighly concerning\u201d more often because they are considered \u201csurprising\u201d, which may unfairly impact the narratives involving them. Future work should, therefore, include evaluating and mitigating these biases and possibly experiments with other datasets and pre-trained models. C3 \u2013 Over-Reliance and Retaliation: The results of our quantitative evaluation are promising, but an extensive qualitative evaluation is necessary to determine whether our approach translates to realworld situations. Therefore, users of our tool must remain aware of its potential to alter the original intent of their text significantly and, depending on the context, possibly offer limited protection against retaliation. Over-reliance on our tool may lead to a false sense of security, resulting in increased vulnerability to retaliation. We intend to assess the extent of this form of automation bias [10] in a subsequent user study, discuss with people who are working in the field (e.g., whistleblower protection activists) how to best reduce it, and also evaluate these future mitigation measures. C4 \u2013 Resource consumption: Training LLMs is resource-intensive. By re-using the existing model and enlisting distilled LLM learning, this impact could be reduced in future work. C5 \u2013 Tool Misuse: Even though our tool aims to mitigate the risk of whistleblower re-identification, malicious actors might misuse our tool for obfuscating dangerous information or illegally converting copyrighted material. By providing our source code and fine-tuned models publicly, we open avenues for ethical use and misuse alike. Therefore, we emphasize that our sole aim in developing our tool is to facilitate legal, ethical whistleblowing. 
Future refinements and real-world evaluations will require collaboration with legal and social experts to better understand the practical implications and potential misuse scenarios. ACKNOWLEDGMENTS We acknowledge funding from the German Federal Ministry of Education and Research (BMBF) \u2013 Nr 16DII134. This publication has been partially supported by the EXDIGIT (Excellence in Digital Sciences and Interdisciplinary Technologies) project, funded by Land Salzburg under grant number 20204-WISS/ 263/6-6022."
21
+ }
intro_8K/test_introduction_short_2405.01102v1.json ADDED
@@ -0,0 +1,17 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01102v1",
3
+ "title": "Less is More: on the Over-Globalizing Problem in Graph Transformers",
4
+ "abstract": "Graph Transformer, due to its global attention mechanism, has emerged as a\nnew tool in dealing with graph-structured data. It is well recognized that the\nglobal attention mechanism considers a wider receptive field in a fully\nconnected graph, leading many to believe that useful information can be\nextracted from all the nodes. In this paper, we challenge this belief: does the\nglobalizing property always benefit Graph Transformers? We reveal the\nover-globalizing problem in Graph Transformer by presenting both empirical\nevidence and theoretical analysis, i.e., the current attention mechanism overly\nfocuses on those distant nodes, while the near nodes, which actually contain\nmost of the useful information, are relatively weakened. Then we propose a\nnovel Bi-Level Global Graph Transformer with Collaborative Training\n(CoBFormer), including the inter-cluster and intra-cluster Transformers, to\nprevent the over-globalizing problem while keeping the ability to extract\nvaluable information from distant nodes. Moreover, the collaborative training\nis proposed to improve the model's generalization ability with a theoretical\nguarantee. Extensive experiments on various graphs well validate the\neffectiveness of our proposed CoBFormer.",
5
+ "authors": "Yujie Xing, Xiao Wang, Yibo Li, Hai Huang, Chuan Shi",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "cs.LG",
9
+ "cats": [
10
+ "cs.LG",
11
+ "cs.AI"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "Graph AND Structure AND Learning",
15
+ "gt": "Graph-structured data, an essential and prevalent form in the real world, plays a vital role in modeling object inter- actions, such as social networks, transportation networks, and protein-protein interaction networks. Graph Neural Net- 1School of Computer Science, Beijing University of Posts and Telecommunications, Beijing, China 2School of Software, Beihang University, Beijing, China. Correspon- dence to: Xiao Wang <xiao [email protected]>, Chuan Shi <[email protected]>. Proceedings of the 41 st International Conference on Machine Learning, Vienna, Austria. PMLR 235, 2024. Copyright 2024 by the author(s). works (GNNs) (Kipf & Welling, 2017; Veli\u02c7 ckovi\u00b4 c et al., 2018; Hamilton et al., 2017), as representative graph ma- chine learning methods, effectively utilize their message- passing mechanism to extract useful information and learn high-quality representations from graph data. However, GNNs face challenges with layer stacking due to over- smoothing (Li et al., 2018; Nt & Maehara, 2019; Oono & Suzuki, 2020) and over-squashing (Topping et al., 2021; Deac et al., 2022) problems, which limit their receptive fields to near neighbors. In contrast, Transformers (Vaswani et al., 2017), with their global attention mechanism, have shown exceptional expressive capability, which makes sig- nificant strides in various fields, including natural language processing (Devlin et al., 2018) and computer vision (Doso- vitskiy et al., 2021). Incorporating Transformers into graph data presents an excellent solution to these challenges since they naturally construct a fully connected graph and adaptively learn interaction relationships with the powerful global attention mechanism. In graph-level tasks like molecular property prediction, numerous Graph Transformers leveraging global atten- tion have achieved remarkable success (Ying et al., 2021; Kreuzer et al., 2021; Wu et al., 2021; Ramp\u00b4 a\u02c7 sek et al., 2022). This success is largely attributed to their global perception capability. Inspired by the successful applications in graph- level tasks, researchers have attempted to solve the scalabil- ity challenge posed by the O(N 2) complexity of the global attention mechanism and make efforts to adapt this mecha- nism for node-level task, aiming at expanding the receptive field and enhancing the model\u2019s expressive ability (Kuang et al., 2022; Zhang et al., 2022; Zhu et al., 2023; Liu et al., 2023a; Wu et al., 2022; Kong et al., 2023; Wu et al., 2023). Although the global attention module has been recognized as the fundamental unit of Graph Transformer, the following question remains largely unknown: Does the globalizing property always benefit Graph Transformers? Understanding the attention mechanism in Graph Trans- formers, particularly its globalizing property, can provide valuable guidelines and insights for the development of ad- vanced Graph Transformers. In this study, we reveal the over-globalizing problem in Graph Transformers by pre- senting both empirical evidence and theoretical analysis. 1 arXiv:2405.01102v1 [cs.LG] 2 May 2024 Less is More: on the Over-Globalizing Problem in Graph Transformers In particular, we empirically find that there is an inconsis- tency between the distribution of learned attention scores across all node pairs and the distribution of nodes that are actually informative, i.e., the global attention mechanism tends to focus on higher-order nodes, while the useful in- formation often appears in lower-order nodes. 
Despite that higher-order nodes may provide additional information, the current attention mechanism overly focuses on those nodes. Theoretically, we demonstrate that an excessively expanded receptive field can diminish the effectiveness of the global attention mechanism, further implying the existence of the over-globalizing problem. Once the weakness of the global attention mechanism in Graph Transformers is identified, another question natu- rally emerges: how to improve the current global atten- tion mechanism to prevent the over-globalizing problem in Graph Transformers, while still keeping the ability to extract valuable information from high-order nodes? Usually, one can alleviate this problem by implicitly or explicitly inte- grating a local module (e.g., GNNs) to complement Graph Transformers (Zhao et al., 2021; Zhang et al., 2022; Kuang et al., 2022; Kong et al., 2023; Liu et al., 2023a; Wu et al., 2023). However, the different properties of local smoothing in GNNs and over-globalizing in Graph Transformers raise a fundamental question about which information will predom- inantly influence the node representation. Moreover, the prevalent approach of fusing local and global information through linear combination is inadequate and potentially leads to incorrect predictions, even in situations where using either local or global information alone could have achieved accurate predictions. In this paper, we propose a novel Bi-Level Global Graph Transformer with Collaborative Training (CoBFormer). Specifically, we first partition the graph into distinct clus- ters with the METIS algorithm. Subsequently, we propose the bi-level global attention (BGA) module, which con- sists of an intra-cluster Transformer and an inter-cluster Transformer. This module effectively mitigates the over- globalizing problem while keeping a global receptive abil- ity by decoupling the information within intra-clusters and between inter-clusters. To capture the graph structure infor- mation neglected by the BGA module, a Graph Convolution Network (GCN) is adopted as the local module. Finally, we propose collaborative training to integrate the informa- tion learned by the GCN and BGA modules and boost their performance. We summarize our contributions as follows: \u2022 We demonstrate a crucial phenomenon: Graph Trans- formers typically yield the over-globalizing problem of attention mechanism for node classification. Both the theoretical analysis and empirical evidence are pro- vided to show that this problem will fundamentally affect Graph Transformers. Our discoveries provide a perspective that offers valuable insights into the im- provement of Graph Transformers. \u2022 We propose CoBFormer, a Bi-Level Global Graph Transformer with Collaborative Training, which effec- tively addresses the over-globalizing problem. Theo- retical analysis implies that our proposed collaborative training will improve the model\u2019s generalization abil- ity. \u2022 Extensive experiments demonstrate that CoBFormer outperforms the state-of-the-art Graph Transformers and effectively solves the over-globalizing problem.",
16
+ "main_content": "We denote a graph as G = (V, E), where the node set V contains N nodes and the edge set E contains E edges. All edges formulate an adjacency matrix A = [auv] \u2208 {0, 1}N\u00d7N, where auv = 1 if there exists an edge from node u to v, and 0 otherwise. Graph G is often associated with a node feature matrix X = [xu] \u2208RN\u00d7d, where xu is a d dimensional feature vector of node u. The label set is denoted as Y. Labels of nodes are represented with a label matrix Y = [yu] \u2208RN\u00d7|Y|, where yu is the one-hot label of node u. We use bold uppercase letters to represent matrices and bold lowercase letters to represent vectors. Graph Transformers. Graph Transformers allow each node in a graph to attend to any other nodes by its powerful global attention mechanism as follows: Attn(H) = Softmax \ufffdQKT \u221a h = HW, K = HW, V = QK \u221a h \ufffd V, HW Attn(H) = Softmax \ufffd \u221a h \ufffd V, Q = HWQ, K = HWK, V = HWV , (1) where H \u2208RN\u00d7h denotes the hidden representation matix and h is the hidden representation dimension. WQ, WK, WV \u2208Rh\u00d7h are trainable weights of linear projection layers. The attention score matrix is \u02c6 A = Softmax \ufffd QKT \u221a h \ufffd \u2208RN\u00d7N, containing the attention scores of any node pairs. \u03b1uv is the element of \u02c6 A, representing the QK \u221a h \u2208 rs. The attention score matrix is \u02c6 A = \ufffd \u2208RN\u00d7N, containing the attention scores rs. \u03b1uv is the element of \u02c6 A, representing the \ufffd \ufffd \u2208 of any node pairs. \u03b1uv is the element of \u02c6 A, representing the attention score between node u and v. It can be seen that Graph Transformers globally update the node representations by multiplying the attention score matrix \u02c6 A with the node representation matrix V. 3. Over-Globalizing Problem Empirical observations. In this section, we closely examine the distribution of the attention scores \u03b1uv to investigate what information the attention mechanism captures. Specifically, we define the proportion of the k-th hop neighbors 2 Less is More: on the Over-Globalizing Problem in Graph Transformers 0 1 2 3 4 5 6 7 8 k-hop 0.2 0.4 0.6 0.8 1.0 k u Cora CiteSeer PubMed Actor Deezer (a) 0 1 2 3 4 5 6 7 8 k-hop 0.0 0.1 0.2 0.3 0.4 Attn-k of VT Cora CiteSeer PubMed Actor Deezer (b) 0 1 2 3 4 5 6 7 8 k-hop 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 Attn-k of NodeFormer Cora CiteSeer PubMed Actor Deezer (c) Figure 1. (a) The average Ck u with different k-hop on five real-world datasets. (b) The Attn-k of Vanilla Transformer. (c) The Attn-k of NodeFormer. sharing the same label with node u as follows: Ck u = |v \u2208N k(u) : yu = yv| |N k(u)| , (2) where N k(u) denotes the k-th hop neighbors of node u. Larger Ck u indicates a higher proportion of useful nodes in the k-th hop neighbors. Then we denote the average attention scores allocated to the k-th hop neighbors as Attnk, which is formulated as: Attn-k = Eu\u2208V X v\u2208N k(u) \u03b1uv. (3) Larger Attn-k implies that the model pays more attention to the k-th hop information. We present the changes of the average Ck u across three homophilic graphs (Cora, CiteSeer and PubMed) and two heterophilic graphs (Actor and Deezer) in Figure 1(a). We can observe that: (1) For homophilic graphs, Ck u will gradually decrease as the k increases. (2) For heterophilic graphs, Ck u will rapidly decrease when k = 1 and then remains nearly unchanged. 
This demonstrates that homophilic graphs benefit more from the local structure for node classification, while heterophilic graphs gain more information from the global receptive field. Then we visualize the Attn-k of Vanilla Transformer (VT) and NodeFormer (Wu et al., 2022) to check whether the trend of Attn-k is consistent with Figure 1(a). As can be seen in Figures 1(b) and 1(c), surprisingly, we find that the majority of the attention scores are actually allocated to distant higher-order neighbors, regardless of whether the graphs are homophilic or heterophilic. We identify this phenomenon as the over-globalizing problem in Graph Transformers, underscoring the limitations of relying solely on a global attention module. Theoretical analysis. Here we further theoretically explore the impact of the over-globalizing problem in Graph Transformers. Ideally, Graph Transformers would allocate higher attention scores to nodes with similar embeddings, thereby implicitly learning a graph structure that ensures the smoothness of embeddings among adjacent nodes. Consequently, \u2225Z \u2212 \u02c6AZ\u2225_F would be relatively small (Shuman et al., 2013; Kalofolias, 2016). Here Z symbolizes the node embeddings. So we employ \u2225Z \u2212 \u02c6AZ\u2225_F to evaluate the smoothness of the embeddings among adjacent nodes learned by Graph Transformers. A smaller \u2225Z \u2212 \u02c6AZ\u2225_F indicates a better smoothness, suggesting that Graph Transformers can effectively recognize useful nodes and aggregate information from them, achieving better node classification performance. Then we investigate the factors influencing \u2225Z \u2212 \u02c6AZ\u2225_F. Before that, we denote C_u as the proportion of nodes belonging to the same class in the reachable set of node u. If the reachable set of node u is the K-hop neighbors, then C_u can be formulated as follows: C_u = (\u03a3_{k=0}^{K} C^k_u |N^k(u)|) / (\u03a3_{k=0}^{K} |N^k(u)|). (4) Now we can establish the connection between \u2225Z \u2212 \u02c6AZ\u2225_F, \u03b1_uv and C_u as follows: Theorem 3.1. For a given node u and a well-trained Graph Transformer, let \u03b7_u = E_{v\u2208V, y_u=y_v} exp(q_u k_v^T / \u221ad), \u03b3_u = E_{v\u2208V, y_u\u2260y_v} exp(q_u k_v^T / \u221ad). Then, we have: \u2225Z \u2212 \u02c6AZ\u2225_F \u2264 \u221a2 L \u03a3_{u\u2208V} \u03a3_{v\u2208V, y_u\u2260y_v} \u03b1_uv = \u221a2 L \u03a3_{u\u2208V} 1 / (1 + (C_u / (1 \u2212 C_u)) (\u03b7_u / \u03b3_u)), (5) where L is a Lipschitz constant. The proofs are given in Appendix A.1. Theorem 3.1 indicates that \u2225Z \u2212 \u02c6AZ\u2225_F is bounded by the sum of the attention scores of node pairs with different labels and negatively correlated with C_u, since \u03b7_u and \u03b3_u are constants for a given Graph Transformer. Then we further study the variations of C_u in Theorem 3.2. Theorem 3.2. To analyze the impact of k on C^k_u, we assume that each node has an equal probability 1/|Y| of belonging to any given class. Given the edge homophily \u03c1 = |{(u,v) \u2208 E : y_u = y_v}| / |E|, C^k_u can be recursively defined as: C^k_u = 1, if k = 0; \u03c1, if k = 1; (1 + |Y|\u03c1 C^(k\u22121)_u \u2212 \u03c1 \u2212 C^(k\u22121)_u) / (|Y| \u2212 1), if k = 2, 3, \u00b7\u00b7\u00b7. (6) And C^k_u possesses the following properties: C^\u221e_u = 1/|Y|; C^k_u \u2265 C^(k+1)_u, if \u03c1 \u2265 1/|Y|, k = 0, 1, \u00b7\u00b7\u00b7; C^(2k)_u > C^(2(k+1))_u, if \u03c1 < 1/|Y|, k = 0, 1, \u00b7\u00b7\u00b7; C^(2k+1)_u < C^(2(k+1)+1)_u, if \u03c1 < 1/|Y|, k = 0, 1, \u00b7\u00b7\u00b7. (7) We provide the proof in Appendix A.2. Theorem 3.2 indicates that in homophilic graphs, where \u03c1 is relatively large, C^k_u will gradually decrease and converge to 1/|Y|, as the k increases. However, in heterophilic graphs, where \u03c1 is relatively small, it will fluctuate around 1/|Y| and eventually converge to 1/|Y|. Combining with Theorem 3.1, we find that in homophilic graphs, as the receptive field expands, the gradually decreased C^k_u will lead to a reduced C_u and a larger \u2225Z \u2212 \u02c6AZ\u2225_F, implying that an over-expanded receptive field adversely affects the global attention. Conversely, in heterophilic graphs, global attention brings in additional information that cannot be captured within the local neighborhood. Based on Theorem 3.2, we visualize the theoretical variations of C^k_u in Figure 2. Compared with Figure 1(a), we can find that our theories align well with real-world scenarios. More visualization results of theoretical scenarios are provided in Appendix D.2. [Figure 2. The variations of C^k_u under various \u03c1 settings for a scenario involving only two classes with uniformly distributed labels.] Experimental analysis. Inspired by Theorem 3.1, we define the Attention Signal/Noise Ratio (Attn-SNR) as the metric to quantify the ability of Graph Transformers to distinguish useful nodes as follows: Definition 3.3. The Attention Signal/Noise Ratio (Attn-SNR) is: Attn-SNR = 10 lg(\u03a3_{y_u=y_v} \u03b1_uv / \u03a3_{y_u\u2260y_v} \u03b1_uv). (8) For a given Graph Transformer, a smaller Attn-SNR usually implies that the attention mechanism pays more attention to nodes with different labels, which may be caused by the over-globalizing problem. We evaluate Vanilla Transformer and NodeFormer utilizing Attn-SNR and accuracy on Cora and CiteSeer. Furthermore, we deliberately improve the Attn-SNR of Vanilla Transformer by doubling the attention scores between nodes sharing the same label, and report its performance. The results are presented in Table 1, indicating that: (1) Vanilla Transformer (VT) typically shows the least Attn-SNR, resulting in the poorest performance. NodeFormer (NF) exhibits a higher Attn-SNR and achieves superior performance. (2) Remarkably, the Denoised Vanilla Transformer (VT-D), artificially directed to achieve higher Attn-SNR, demonstrates better performance than Vanilla Transformer. This is because the over-globalizing problem can be alleviated by doubling the attention scores between nodes with the same label, which are more likely to appear in the local neighborhood, thereby enhancing the model\u2019s classification capability. Table 1. The Attn-SNR and testing accuracy of different models. Cora: Attn-SNR (VT -6.97, NF 0.43, VT-D 12.05), Accuracy (VT 55.18, NF 80.20, VT-D 82.12). CiteSeer: Attn-SNR (VT -7.19, NF -5.09, VT-D 8.72), Accuracy (VT 50.72, NF 71.50, VT-D 61.80). 4. The Proposed Method In this section, we introduce the Bi-Level Global Graph Transformer with Collaborative Training (CoBFormer). An overview of CoBFormer is shown in Figure 3. 
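(Before detailing the architecture, a brief aside on Section 3: the recursion in Equation (6) is easy to check numerically. The sketch below uses hypothetical |Y| and \u03c1 values and is only meant to reproduce the qualitative behaviour summarized in Equation (7) and Figure 2.)

```python
# Numerical check of the recursion in Eq. (6) for a hypothetical two-class setting.
def ck_sequence(rho, n_classes, K=10):
    c = [1.0, rho]                                    # k = 0 and k = 1
    for _ in range(2, K + 1):
        prev = c[-1]
        c.append((1 + n_classes * rho * prev - rho - prev) / (n_classes - 1))
    return c

for rho in (0.9, 0.5, 0.1):
    print(rho, [round(v, 3) for v in ck_sequence(rho, n_classes=2)])
# rho >= 1/|Y|: the sequence decreases monotonically toward 1/|Y| = 0.5;
# rho <  1/|Y|: it oscillates around 0.5 while converging, as stated in Eq. (7).
```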
Specifically, we first use the METIS algorithm (Karypis & Kumar, 1998) to partition the graph into different clusters. Then we propose the bi-level global attention (BGA) module, which consists of an intra-cluster Transformer and an inter-cluster Transformer. This module effectively mitigates the over-globalizing problem while keeping a global receptive ability by decoupling the information within intra-clusters and between inter-clusters. To capture the graph structure information neglected by the BGA module, a Graph Convolution Network (GCN) is adopted as the local module. Finally, we propose collaborative training to integrate the information obtained by the GCN and BGA modules and theoretically prove that the generalization ability can be improved with our proposed collaborative training. [Figure 3. The overall framework of our proposed CoBFormer.] 4.1. Bi-Level Global Attention Module Traditional Graph Transformers utilize the global attention mechanism to capture information between any node pairs, causing the over-globalizing problem. Therefore, we need to guarantee that local information can be captured, so as to alleviate the problem. To achieve this goal, we first partition the graph into P non-overlapping clusters using METIS (Karypis & Kumar, 1998). We denote the set of clusters as P = {G_i}, where G_i = {V_i, E_i} represents a subgraph of G, satisfying \u222a G_i = G and \u2229 G_i = \u2205. The local information is usually within each cluster, so we employ an intra-cluster Transformer. The node features in cluster p are represented as X_p \u2208 R^((N/P)\u00d7d). We apply an MLP to project the original node features into a latent space as H^1_p = MLP(X_p). Subsequently, the hidden representations H^1_p \u2208 R^((N/P)\u00d7h) are fed into the intra-cluster Transformer to learn the updated hidden representations \u02c6H^k_p: \u02c6H^k_p = FFN(Softmax(Q_p K_p^T / \u221ah) V_p), Q_p = H^k_p W^k_Q, K_p = H^k_p W^k_K, V_p = H^k_p W^k_V, (9) where W^k_Q, W^k_K and W^k_V \u2208 R^(h\u00d7h) are trainable weights of the linear projection layers in the k-th intra-cluster Transformer, and FFN represents a Feed-Forward Neural Network. A residual connection (He et al., 2016) is applied in each attention block and FFN block. Subsequently, we apply mean pooling to \u02c6H^k_p to obtain the cluster representations P^k \u2208 R^(P\u00d7h). The p-th row of P^k, represented as p^k_p, is calculated by p^k_p = MEAN(\u02c6H^k_p) \u2208 R^(1\u00d7h). Following this, the cluster representations P^k are fed into the inter-cluster Transformer: \u02c6P^k = FFN(Softmax(Q\u2032K\u2032^T / \u221ah) V\u2032), Q\u2032 = P^k W^k_Q\u2032, K\u2032 = P^k W^k_K\u2032, V\u2032 = P^k W^k_V\u2032, (10) where W^k_Q\u2032, W^k_K\u2032 and W^k_V\u2032 \u2208 R^(h\u00d7h) are trainable weights of the linear projection layers in the k-th inter-cluster Transformer. Although the inter-cluster Transformer learns the attentions between different clusters, it can approximate the global attention mechanism in Equation (1) and effectively learn the global information as in Proposition 4.1. Proposition 4.1. 
Given u \u2208 V_p, v \u2208 V_q, along with a well-trained inter-cluster attention score matrix \u02d9A \u2208 R^(P\u00d7P), let \u02d9\u03b1_pq represent the attention score between clusters p and q. Then the approximate attention score between node u and v can be expressed as \u02c6\u03b1_uv = \u02d9\u03b1_pq / |V_q|. The proof can be found in Appendix A.4. It can be seen that the attention score \u03b1_uv of the global attention mechanism can be approximately represented as \u02d9\u03b1_pq / |V_q|, which can be calculated by our inter-cluster Transformer. Now with both \u02c6H^k_p and \u02c6P^k, we concatenate the node representation with its corresponding cluster representation and calculate the output node representations H^(k+1)_p with a fusion linear layer parameterized by W_f: H^(k+1)_p = (\u02c6H^k_p \u2225 1_(N/P) \u02c6p^T_p) W_f, (11) where \u2225 indicates the concatenation operation, 1_(N/P) is an all-one column vector with a dimension of N/P and \u02c6p_p is the hidden representation of cluster p. By decoupling the information within inner-clusters and between inter-clusters, our BGA module can alleviate the over-globalizing problem while preserving the expressive ability of global attention. Concurrently, our method achieves significant computational and memory efficiency by focusing on intra-cluster and inter-cluster attention. The time and space complexity of our BGA module are O(N^2/P + P^2), reaching O(N^(4/3)) under the optimal condition. Note that the efficiency of our BGA module can be further improved by linear attention techniques (Choromanski et al., 2020; Wang et al., 2020; Wu et al., 2023). 4.2. Collaborative Training With the proposed BGA module capturing intra-cluster and inter-cluster information, we further introduce a GCN as the local module to supplement the graph structure information ignored by the BGA module. Instead of directly employing a linear combination of information from the local module and the global attention module for node classification, we propose a collaborative training approach for the GCN and BGA modules. We denote the labeled node set as V_L and the unlabeled node set as V_U, with L and U representing their respective label matrix. In previous works, a model is trained by predicting the label distribution of the labeled nodes with a cross-entropy loss. However, this method does not guarantee satisfactory performance on unlabeled nodes. Here, we employ two linear layers, Lin-G and Lin-T, to map the outputs of the GCN and BGA modules onto the label space: \u02c6Z_G = Lin-G(GCN(A, X)), \u02c6Z_T = Lin-T(BGA(X, P)). (12) Then we use the SoftMax function to calculate the predicted labels and soft labels (Hinton et al., 2015): \u02c6Y_G = SoftMax(\u02c6Z_G), \u02c6Y_T = SoftMax(\u02c6Z_T), \u02c6S_G = SoftMax(\u02c6Z_G \u2217 \u03c4), \u02c6S_T = SoftMax(\u02c6Z_T \u2217 \u03c4), (13) where \u03c4 is a temperature coefficient used to control the smoothness of the soft labels. The objective function can be formulated as: L_ce = \u2212(E_{y_u, u\u2208V_L} log(\u02c6y^G_u) + E_{y_u, u\u2208V_L} log(\u02c6y^T_u)), L_co = \u2212(E_{\u02c6s^G_u, u\u2208V_U} log(\u02c6s^T_u) + E_{\u02c6s^T_u, u\u2208V_U} log(\u02c6s^G_u)), L = \u03b1 \u2217 L_ce + (1 \u2212 \u03b1) \u2217 L_co, (14) where y_u represents the true label of node u. \u02c6y^G_u and \u02c6y^T_u are the predicted labels of node u by the GCN and BGA modules, respectively. 
\u02c6 sG u and \u02c6 sT u denote the soft labels generated by each. Lce is the cross-entropy loss, a standard choice for classification tasks. Lco is designed to encourage mutual supervision between the GCN and BGA modules. The parameter \u03b1 is used to balance the contributions of Lce and Lco. Now we prove that our proposed collaborative training can improve the generalization ability of our GCN module and BGA module, thereby achieving better classification performance. Theorem 4.2. Consider P(L, U) as the true label distribution, PG(L, U) as the predicted label distribution by the GCN, and PT (L, U) as the predicted label distribution by the BGA module. The following relations hold: EP (L,U) log PG(L, U) =EP (L) log PG(L)+ EPT (U|L) log PG(U|L)\u2212 KL(PT (U|L)\u2225P(U|L)), EP (L,U) log PT (L, U) =EP (L) log PT (L)+ EPG(U|L) log PT (U|L)\u2212 KL(PG(U|L)\u2225P(U|L)), (15) where KL(\u00b7\u2225\u00b7) is the Kullback-Leibler divergence. The proof is given in Appendix A.3. EP (L,U) log PG(L, U) is the cross entropy between P(L, U) and PG(L, U). We aim to maximize it so that our model can achieve the best performance on labeled nodes and unlabeled nodes. However, it cannot be maximized directly since the label distribution of unlabeled nodes is unknown. Theorem 4.2 suggests that EP (L,U) log PG(L, U) can be decomposed into three terms. The first term is the cross entropy between P(L) and PG(L), which can be maximized by optimizing the Lce. It will ensure a good performance on labeled nodes. The second term is the cross entropy between PT (U|L) and PG(U|L), which can be maximized by optimizing the Lco. This term indicates that we can further improve the performance of the GCN module on unlabeled nodes by collaboratively training with the BGA module. Note that the third term is the Kullback-Leibler divergence between PT (U|L) and P(U|L), which is a constant when we optimize the GCN module. Therefore, the performance of the GCN module can be improved by the loss in Equation (14). Similarly, the performance of the BGA module can be improved. 5. Experiments Datasets. We select seven datasets to evaluate, including homophilic graphs, i.e., Cora, CiteSeer, Pubmed (Yang et al., 2016), Ogbn-Arxiv, Ogbn-Products (Hu et al., 2020) and heterophilic graphs, i.e., Actor, Deezer (Lim et al., 2021b). For Cora, CiteSeer, PubMed, we adopt the public split offered by PyG (Fey & Lenssen, 2019). For Ogbn-Arxiv and Ogbn-Products, we use the public splits in OGB (Hu et al., 2020). For Actor and Deezer, we perform five random splits of the nodes into train/valid/test sets, with the ratio of 50%:25%:25% (Lim et al., 2021b). The detailed statistics of the datasets can be found in Appendix B. Baselines. We compare our method with five baselines, including two classic GNNs: GCN (Kipf & Welling, 2017) and GAT (Veli\u02c7 ckovi\u00b4 c et al., 2018), and three state-of-theart Graph Transformers: NodeFormer (Wu et al., 2022), 6 Less is More: on the Over-Globalizing Problem in Graph Transformers Table 2. Quantitative results (% \u00b1 \u03c3) on node classification. 
Dataset Metric GCN GAT NodeFormer NAGphormer SGFormer CoB-G CoB-T Cora Mi-F1 81.44 \u00b1 0.78 81.88 \u00b1 0.99 80.30 \u00b1 0.66 79.62 \u00b1 0.25 81.48 \u00b1 0.94 84.96 \u00b1 0.34 85.28 \u00b1 0.16 Ma-F1 80.65 \u00b1 0.91 80.56 \u00b1 0.55 79.12 \u00b1 0.66 78.78 \u00b1 0.57 79.28 \u00b1 0.49 83.52 \u00b1 0.15 84.10 \u00b1 0.28 CiteSeer Mi-F1 71.84 \u00b1 0.22 72.26 \u00b1 0.97 71.58 \u00b1 1.74 67.46 \u00b1 1.33 71.96 \u00b1 0.13 74.68 \u00b1 0.33 74.52 \u00b1 0.48 Ma-F1 68.69 \u00b1 0.38 65.67 \u00b1 2.28 67.28 \u00b1 1.87 64.47 \u00b1 1.58 68.49 \u00b1 0.65 69.73 \u00b1 0.45 69.82 \u00b1 0.55 PubMed Mi-F1 79.26 \u00b1 0.23 78.46 \u00b1 0.22 78.96 \u00b1 2.71 77.36 \u00b1 0.96 78.04 \u00b1 0.41 80.52 \u00b1 0.25 81.42 \u00b1 0.53 Ma-F1 79.02 \u00b1 0.19 77.82 \u00b1 0.22 78.14 \u00b1 2.51 76.76 \u00b1 0.91 77.86 \u00b1 0.32 80.02 \u00b1 0.28 81.04 \u00b1 0.49 Actor Mi-F1 30.97 \u00b1 1.21 30.63 \u00b1 0.68 35.42 \u00b1 1.37 34.83 \u00b1 0.95 37.72 \u00b1 1.00 31.05 \u00b1 1.02 37.41 \u00b1 0.36 Ma-F1 26.66 \u00b1 0.82 20.73 \u00b1 1.58 32.37 \u00b1 1.38 32.20 \u00b1 1.11 34.11 \u00b1 2.78 27.01 \u00b1 1.77 34.96 \u00b1 0.68 Deezer Mi-F1 63.10 \u00b1 0.40 62.20 \u00b1 0.41 63.59 \u00b1 2.24 63.71 \u00b1 0.58 66.68 \u00b1 0.47 63.76 \u00b1 0.62 66.96 \u00b1 0.37 Ma-F1 62.07 \u00b1 0.31 60.99 \u00b1 0.56 62.70 \u00b1 2.20 62.06 \u00b1 1.28 65.22 \u00b1 0.68 62.32 \u00b1 0.94 65.63 \u00b1 0.36 Arxiv Mi-F1 71.99 \u00b1 0.14 70.22 \u00b1 0.19 67.98 \u00b1 0.60 71.38 \u00b1 0.20 72.50 \u00b1 0.28 73.17 \u00b1 0.18 72.76 \u00b1 0.11 Ma-F1 51.89 \u00b1 0.19 45.98 \u00b1 0.26 46.24 \u00b1 0.20 51.38 \u00b1 0.47 52.83 \u00b1 0.31 52.31 \u00b1 0.40 51.64 \u00b1 0.09 Products Mi-F1 73.32 \u00b1 0.26 75.04 \u00b1 0.18 70.71 \u00b1 0.27 76.41 \u00b1 0.53 71.57 \u00b1 0.37 78.09 \u00b1 0.16 78.15 \u00b1 0.07 Ma-F1 35.66 \u00b1 0.42 33.85 \u00b1 0.30 30.09 \u00b1 0.02 37.48 \u00b1 0.38 33.19 \u00b1 0.40 38.21 \u00b1 0.22 37.91 \u00b1 0.44 NAGphormer (Chen et al., 2022), and SGFormer (Wu et al., 2023). Note that in our proposed CoBFormer, the GCN module (CoB-G) and BGA module (CoB-T) each predict the node label independently, so we report their performance simultaneously. Experimental implementation details are given in Appendix C. Node Classification Results. Table 2 reports the experimental results on node classification. We select Micro-F1 and Macro-F1 as metrics to conduct a comprehensive performance comparison. We can observe that: (1) Both GCN and BGA modules of CoBFormer outperform all baselines in homophilic graphs by a substantial margin, demonstrating the effectiveness of CoBFormer. (2) In heterophilic graphs, the performance of our BGA module is comparable to, even surpasses, the best baseline, SGFormer. This indicates that our BGA module can successfully capture global information. (3) Traditional Graph Transformers exhibit superior performance on heterophilic graphs when compared with GCN and GAT. However, their advantage in homophilic graphs is relatively limited. This suggests that local information plays a more crucial role in homophilic graphs, whereas global information significantly enhances model performance in heterophilic graphs. These results are consistent with our analysis in Section 3. 6. Ablation Studies & Analysis Ablation Study. We carry out ablation studies on Cora, PubMed, and Deezer to evaluate the two fundamental components of our CoBFormer: the BGA module and the collaborative training approach, where the results are shown in Table 3. 
Table 3. Test accuracy and GPU memory of various CoBFormer variants. ‘V-A’ denotes the vanilla global attention. ‘B-A’ represents the BGA module. ‘C-T’ indicates whether collaborative training is applied.
| Dataset | V-A | B-A | C-T | CoB-G | CoB-T | MEM |
|---|---|---|---|---|---|---|
| Cora | √ | × | × | 81.44 | 54.86 | 0.85G |
| Cora | √ | × | √ | 83.78 | 83.82 | 0.85G |
| Cora | × | √ | × | 81.44 | 68.72 | 0.38G |
| Cora | × | √ | √ | 84.96 | 85.28 | 0.38G |
| PubMed | √ | × | × | 79.26 | 71.22 | 8.42G |
| PubMed | √ | × | √ | 80.38 | 80.36 | 8.42G |
| PubMed | × | √ | × | 79.26 | 74.52 | 0.50G |
| PubMed | × | √ | √ | 80.52 | 81.42 | 0.50G |
| Deezer | √ | × | × | 62.07 | 66.49 | 20.23G |
| Deezer | √ | × | √ | 63.67 | 66.86 | 20.23G |
| Deezer | × | √ | × | 62.07 | 66.56 | 3.97G |
| Deezer | × | √ | √ | 63.76 | 66.96 | 3.97G |
Key observations include: (1) The accuracy of our BGA module consistently outperforms vanilla global attention on all datasets, irrespective of the use of collaborative training, demonstrating the effectiveness of our BGA module. (2) Collaborative training leads to a significant accuracy improvement in both the GCN and BGA modules, indicating that it enhances the model’s generalization ability by encouraging mutual learning. (3) The BGA module significantly reduces GPU memory, addressing scalability concerns; specifically, GPU memory usage is reduced by 94% for PubMed and 80% for Deezer.
Over-Globalizing Problem. To demonstrate our CoBFormer’s ability to alleviate the over-globalizing problem, we visualize the Attn-k of our CoBFormer in Figure 4. Compared with Figures 1(b) and 1(c), CoBFormer allocates more attention scores in the local region than the Vanilla Transformer and NodeFormer, indicating that our BGA module can effectively alleviate the over-globalizing problem by decoupling the intra-cluster information and the inter-cluster information.
(Figure 4. The Attn-k of CoBFormer, i.e., the attention mass at each k-hop distance, for Cora, CiteSeer, PubMed, Actor, and Deezer.)
We further calculate the Attn-SNR and test accuracy to show our model’s capability to distinguish useful nodes and extract valuable information, where the results are shown in Figure 5. CoB-T significantly improves the Attn-SNR and substantially boosts performance on Cora, CiteSeer, and PubMed, underscoring that our CoBFormer can effectively mitigate the over-globalizing problem. For Actor and Deezer, our CoB-T achieves performance comparable to that of VT, implying that our CoBFormer can effectively capture global information.
(Figure 5. The Attn-SNR (%) and Accuracy (%) of VT and our CoBFormer on Cora, CiteSeer, PubMed, Actor, and Deezer.)
(Figure 6. The average test accuracy of CoBFormer (CoB-G and CoB-T) for different α on Cora, CiteSeer, PubMed, and Deezer.)
Parameter Study.
We analyze the key parameters, namely the collaborative learning strength coefficient α and the temperature coefficient τ. We vary α in {1.0, 0.9, 0.8, 0.7} and report the performance in Figure 6. We can see that our model achieves notable performance improvements for all values except α = 1. Furthermore, it exhibits consistent performance across the various α values, underscoring the effectiveness and robustness of our collaborative training approach. We then fix the best α and report the performance for τ in {0.9, 0.7, 0.5, 0.3}. Figure 7 suggests that the choice of τ significantly impacts performance, emphasizing the importance of selecting an appropriate τ for optimal results. More parameter studies are in Appendix D.1.
(Figure 7. The average test accuracy of CoBFormer (CoB-G and CoB-T) for different τ on Cora, CiteSeer, PubMed, and Deezer.)
7. Related Work
Graph Neural Networks. Representative GNNs, such as GCN (Kipf & Welling, 2017) and GAT (Veličković et al., 2018), leverage a message-passing mechanism to recursively aggregate neighbor information. However, due to the over-smoothing (Li et al., 2018; Nt & Maehara, 2019; Oono & Suzuki, 2020) and over-squashing (Topping et al., 2021; Deac et al., 2022) problems, GNNs typically cannot stack multiple layers to capture information from distant nodes. Moreover, early designs of GNNs largely rely on the homophily assumption (McPherson et al., 2001) that nodes of the same type are more likely to be connected. Although some GNNs are designed for heterophilic graphs (Pei et al., 2020; Lim et al., 2021a; Chien et al., 2021; Bo et al., 2021), they still suffer from the same issues of over-smoothing and over-squashing, resulting in a limited receptive field.
Graph Transformers. Transformers (Vaswani et al., 2017), benefiting from their global attention, naturally construct a fully connected graph with learnable edge weights, offering a novel approach to address the issues of over-smoothing and over-squashing in GNNs. Extensive works have achieved remarkable success in graph-level tasks (Ying et al., 2021; Kreuzer et al., 2021; Wu et al., 2021; Rampášek et al., 2022). This success is primarily attributed to their global perception capability, which is vital for graph-level tasks. Influenced by this success, researchers are now exploring the integration of the global attention mechanism into node-level tasks (Kuang et al., 2022; Zhang et al., 2022; Zhu et al., 2023; Liu et al., 2023a; Wu et al., 2022; Kong et al., 2023; Wu et al., 2023). These approaches aim to broaden the receptive field in large-scale graphs and amplify the expressive potential of the models.
8. Conclusion
In this paper, we identify the over-globalizing problem in Graph Transformers through theoretical insights and empirical results.
We then propose CoBFormer, a bilevel global graph transformer with collaborative training, aiming at alleviating the over-globalizing problem and improving the generalization ability. Extensive experiments verify the effectiveness of CoBFormer. Potential Impacts This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here."
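To make the collaborative training objective above concrete, here is a minimal PyTorch-style sketch (our illustration, not the paper's released code). It assumes the overall loss of Equation (14) is a convex mixture, weighted by α, of a supervised cross-entropy term L_ce and a mutual-supervision term L_co, and it realises L_co as a symmetric KL term between the temperature-softened soft labels of the GCN and BGA modules; both of these choices are assumptions about details not shown in this excerpt.

```python
import torch
import torch.nn.functional as F

def collaborative_loss(logits_g, logits_t, labels, labeled_mask, alpha=0.8, tau=0.5):
    """Hedged sketch of a collaborative training objective for two modules.

    logits_g / logits_t : per-node class logits of the GCN module and the BGA module
    labels              : ground-truth classes (only valid where labeled_mask is True)
    alpha               : balances supervised CE against mutual supervision (assumed mixture form)
    tau                 : temperature used to soften the exchanged soft labels (assumed)
    """
    # Supervised cross-entropy on labeled nodes for both modules (the L_ce term).
    l_ce = (F.cross_entropy(logits_g[labeled_mask], labels[labeled_mask]) +
            F.cross_entropy(logits_t[labeled_mask], labels[labeled_mask]))

    # Mutual supervision on all nodes (including unlabeled ones): each module is pushed
    # towards the other's temperature-softened soft labels (the L_co term).
    log_p_g = F.log_softmax(logits_g / tau, dim=-1)
    log_p_t = F.log_softmax(logits_t / tau, dim=-1)
    l_co = (F.kl_div(log_p_g, F.softmax(logits_t.detach() / tau, dim=-1), reduction="batchmean") +
            F.kl_div(log_p_t, F.softmax(logits_g.detach() / tau, dim=-1), reduction="batchmean"))

    return alpha * l_ce + (1.0 - alpha) * l_co
```

The `detach()` calls reflect the intuition from Theorem 4.2 that, when one module is optimised, the other's predictive distribution is treated as fixed.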
17
+ }
intro_8K/test_introduction_short_2405.01103v1.json ADDED
@@ -0,0 +1,17 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01103v1",
3
+ "title": "LLM Security Guard for Code",
4
+ "abstract": "Many developers rely on Large Language Models (LLMs) to facilitate software\ndevelopment. Nevertheless, these models have exhibited limited capabilities in\nthe security domain. We introduce LLMSecGuard, an open-source framework that\noffers enhanced code security through the synergy between static code analyzers\nand LLMs. LLMSecGuard aims to equip practitioners with code solutions that are\nmore secure than the code initially generated by LLMs. It also benchmarks LLMs,\nproviding valuable insights into the evolving security properties of these\nmodels.",
5
+ "authors": "Arya Kavian, Mohammad Mehdi Pourhashem Kallehbasti, Sajjad Kazemi, Ehsan Firouzi, Mohammad Ghafari",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "cs.SE",
9
+ "cats": [
10
+ "cs.SE",
11
+ "cs.CR"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "LLM Fairness",
15
+ "gt": "The use of Large Language Models (LLMs) such as ChatGPT and Copilot has become popular for software development activities such as coding, design, comprehension, etc. [7, 17]. Nevertheless, hallucination, i.e., \u201cpresenting incorrect information as if it is cor- rect\u201d, poses serious challenges for LLM users [18]. This issue is more prevalent in domains where reliable training content is scarce, such as in the realm of code security. Indeed, recent studies have shown that code models are widely adopted for code generation [12], but they have limited capabilities in the software security domain [2, 25]. Therefore, a vulnerable code piece that an LLM mistakenly recom- mends as a secure solution could compromise the entire system\u2019s security if it is used without enough scrutiny. We introduce LLMSecGuard, a framework designed to exten- sively examine the security properties of LLMs and leverage their analytical capabilities for secure code development. It applies static security analysis on LLM-generated code to uncover potential se- curity issues and guides LLMs in resolving such issues in the code. EASE 2024, June 18\u201321, 2024, Salerno, Italy 2024. ACM ISBN 978-x-xxxx-xxxx-x/YY/MM...$15.00 https://doi.org/10.1145/xxx Moreover, LLMSecGuard assesses the security properties of LLMs and benchmark them across different CWEs. In summary, LLMSec- Guard contributes to more secure software development. Unlike existing code assistants, developers can integrate unlimited LLMs and code analysis engines into this framework through REST APIs. LLMSecGuard is open-source and publicly available on GitHub.1 The remainder of this paper is structured as follows. In Section 2, we motivate this work. In Section 3, we introduce LLMSecGuard, and in Section 4, we explain its two use cases. In Section 5, we present related work. In Section 6, we outline our plans, and in Section 7, we conclude this paper.",
16
+ "main_content": "LLM Security Guard for Code Arya Kavian University of Science and Technology of Mazandaran, Iran Mohammad Mehdi Pourhashem Kallehbasti University of Science and Technology of Mazandaran, Iran Sajjad Kazemi University of Science and Technology of Mazandaran, Iran Ehsan Firouzi Technische Universit\u00e4t Clausthal Germany n Mohammad Ghafari Technische Universit\u00e4t Clausthal Germany ABSTRACT V Many developers rely on Large Language Models (LLMs) to facilitate software development. Nevertheless, these models have exhibited limited capabilities in the security domain. We introduce LLMSecGuard, an open-source framework that offers enhanced code security through the synergy between static code analyzers and LLMs. LLMSecGuard aims to equip practitioners with code solutions that are more secure than the code initially generated by LLMs. It also benchmarks LLMs, providing valuable insights into the evolving security properties of these models. CCS CONCEPTS nt V \u2022 Security and privacy \u2192Software security engineering. KEYWORDS int Security analysis, secure code generation, code models eprint V ACM Reference Format: Arya Kavian, Mohammad Mehdi Pourhashem Kallehbasti, Sajjad Kazemi, Ehsan Firouzi, and Mohammad Ghafari. 2024. LLM Security Guard for Code. In 28th International Conference on Evaluation and Assessment in Software Engineering (EASE 2024), June 18\u201321, 2024, Salerno, Italy. ACM, New York, NY, USA, 4 pages. https://doi.org/10.1145/xxx t Ve Security issues are pervasive in multiple domains [4], from mobile applications [9, 11] and web servers [10], to critical software systems [27]. There are program analysis tools designed to uncover security issues, but studies indicate their limitations [5, 6], as well as their lack of popularity among developers [14]. Unfortunately, the security landscape may not improve as we witness the popularity of language models for code generation [12]. It is concerning that developers place undue trust in these models, which are known to generate insecure code examples [2, 25]. In the following, we explain how we could iteratively ask ChatGPT 3.5 to generate secure code examples for symmetric encryption. Encryption plays a crucial role in our digital world, but its correct adoption has proven to be challenging for developers [15]. For instance, the analysis of 489 open-source Java projects showed that 85% of cryptography APIs are misused [16]. Importantly, existing tools that identify and fix such issues do not fulfill developers\u2019 expectations [1, 28]. Consider the StackOverflow post ID 3451670, which suffers from security issues such as \u201cECB encryption mode\u201d, \u201cstatic or constant key\u201d, \u201cstatic salt for key derivation\u201d, and \u201chardcoded password\u201d. When we presented this inquiry to ChatGPT, requesting a \u201csecure Java code example\u201d, the resulting response still contained identical security issues. We then explicitly instructed ChatGPT that \u201cI expect a secure encryption algorithm, a secure encryption mode, and a secure Initialization Vector (IV). Do not use hard-coded secrets, suggest best practices, and make sure that your example provides confidentiality and integrity\u201d. 
Listing 1 presents the response.2 1https://github.com/aryakvnust/LLMSecGuard 2https://chat.openai.com/share/8ce9eb6e-88b7-44dc-807a-51d04e7d446c arXiv:2405.01103v1 [cs.SE] 2 May 2024 Preprint Version EASE 2024, June 18\u201321, 2024, Salerno, Italy Arya Kavian, Mohammad Mehdi Pourhashem Kallehbasti, Sajjad Kazemi, Ehsan Firouzi, and Mohammad Ghafari 1 public class SecureEncryptionExample { 2 public static void main(String[] args) throws Exception { 3 String username = \"[email protected]\"; 4 String password = \"Password1\"; 5 String secretID = \"BlahBlahBlah\"; 6 String SALT2 = \"deliciously salty\"; 7 // Generate a secure random IV (Initialization Vector) 8 SecureRandom secureRandom = new SecureRandom(); 9 byte[] iv = new byte[12]; 10 secureRandom.nextBytes(iv); 11 // Derive a key from the password using a secure method 12 byte[] key = deriveKey(SALT2, username, password); 13 // Create a SecretKey object using the derived key 14 SecretKey secretKey = new SecretKeySpec(key, \"AES\"); 15 // Instantiate the cipher with AES in GCM mode 16 Cipher cipher = Cipher.getInstance(\"AES/GCM/NoPadding\"); 17 ... 18 } 19 private static byte[] deriveKey(String salt, String username, String password) throws Exception { 20 // Concatenate salt, username, and password 21 String combined = salt + username + password; 22 // Use a secure hash function (SHA-256) to derive a key 23 MessageDigest md = MessageDigest.getInstance(\"SHA-256\"); 24 byte[] hash = md.digest(combined.getBytes()); 25 return Arrays.copyOf(hash, 16); 26 } 27 } Listing 1: ChatGPT\u2019s improved response to Post ID 3451670 ChatGPT identified the constant key, and it correctly suggested the secure \u201cGCM\u201d encryption mode as a best practice. However, violations such as \u201cstatic salt for key generation\u201d and \u201chardcoded password\u201d were in the response without any proper warning. The suggested code, at line 23, included the \u201cSHA-256\u201d hash function for key generation, which is not a best practice. Anyway, when we asked about the security risk at line 6 (i.e., static salt), ChatGPT responded: \u201cUsing a hard-coded string as a salt is generally not considered a good practice for security. Ideally, salt should be unique for each user and should be randomly generated for each password. Using a constant salt for all users could make the system more vulnerable to certain types of attacks, such as rainbow table attacks\u201d. It then provided the code example in Listing 2 as a replacement. 1 // Generate a secure random salt 2 SecureRandom secureRandom = new SecureRandom(); 3 byte[] salt = new byte[16]; 4 secureRandom.nextBytes(salt); 5 // Convert the salt to a Base64-encoded string for storage 6 String SALT2 = Base64.getEncoder().encodeToString(salt); Listing 2: ChatGPT\u2019s suggestion to replace a static salt Subsequently, we asked, \u201cAre there any other security risks in your provided code?\u201d. It failed to detect the hardcoded password, but at this stage, it correctly suggested a key derivation function (KDF) instead of the hash function for generating a password-based key. Upon a direct inquiry about the line where a hardcoded password existed, it offered a secure suggestion. Through proper interactions with ChatGPT, specifically by listing potential security issues and their locations within the code, it is feasible to harness ChatGPT\u2019s power to generate secure code examples. However, novice developers may not possess the necessary expertise to provide such inputs. 
Hence, we introduce LLMSecGuard, a framework that links LLMs and static analysis tools to overcome this limitation. 3 LLMSecGuard We introduce LLMSecGuard, an open-source framework that offers enhanced code security through the synergy between code security analyzers and LLMs. The primary objective of LLMSecGuard is (i) to enhance the security of LLM-generated code, and (ii) to benchmark the security properties of LLMs. It adopts a RESTful architecture, implemented in Python using Django and Flask, and uses SQLite for data persistence. LLMSecGuard is publicly available on GitHub.3 LLMSecGuard Prompt Agent Prompt Security Agent Benchmark Agent Potential Vulnerabilities Report Benchmark Prompts Benchmark uses Analysis Engine LLM Figure 1: LLMSecGuard\u2019s components Figure 1 illustrates the schematic architecture of LLMSecGuard, comprising three main components: \u201cPrompt Agent\u201d, \u201cSecurity Agent\u201d, and \u201cBenchmark Agent\u201d. Prompt Agent. This component is responsible for receiving a prompt and providing other components with the code model\u2019s response. Upon receipt of a prompt, \u201cPrompt Agent\u201d undertakes the task of engineering a response. Particularly, it can reformulate a prompt, pass the prompt to code models, collect the response, and forward the result to other components. Security Agent. This component has a pivotal role in uncovering security issues in LLM-generated code. Precisely, it is entrusted to pass the code to static code analysis engines (such as Semgrep and Weggli), and to collect potential security vulnerabilities. Benchmark Agent. This component puts different LLMs to security test. Particularly, it evaluates the security properties of LLMs based on a set of challenges, structured in JSON format. Each challenge includes a prompt and the expected outcome, and \u201cBenchmark Agent\u201d is responsible for flagging LLMs that pass the challenge. 3https://github.com/aryakvnust/LLMSecGuard Preprint Version LLM Security Guard for Code EASE 2024, June 18\u201321, 2024, Salerno, Italy 4 USAGE SCENARIOS We describe the LLMSecGuard\u2019s main configurations including its connection with external entities such as LLMs and code analyzers. We then explain the interplay between different components through two main usage scenarios, namely \u201cbenchmarking\u201d and \u201ccode generation\u201d. 4.1 Setup There are several key configurations for running LLMSecGuard that users can adjust according to their preferences. LLMSecGuard requires a minimum of one LLM and one code analysis engine to operate effectively. This can be easily set up by providing the API information (e.g., API endpoint and API key) associated with each entity. There is no limitation, and users can add as many entities as they wish. Currently, we have instantiated LLMSecGuard with ANYSCALE Llama2 [19], and Weggli and Semgrep static code security analysis tools [22, 26]. Weggli is a robust and efficient semantic code analysis for C and C++ programs, and Semgrep is designed for programs written in languages such as C#, Java, Java, JavaScript, Python, PHP, Ruby, Scala, etc. LLMSecGuard relies on CyberSecEval, a benchmark specifically designed to evaluate the cybersecurity aspects of LLMs functioning as coding assistants [3]. It comprises exhaustive test cases to assess the potential of LLMs in producing insecure code and facilitating cyberattacks. Nonetheless, users have the flexibility to update this benchmark or replace it with one of their preferences. 
LLMSecGuard includes a few execution parameters as well. The primary one is the “termination condition”, designed to prevent the system from looping indefinitely; this condition determines how many iterations a code piece should undergo improvement and analysis if potential vulnerabilities persist. The “benchmark interval” parameter determines how often the benchmark process is executed, with options such as biweekly or monthly intervals. Given that LLMs evolve and their security properties may change, regular updates are necessary to ensure that results remain current. Lastly, users can choose the LLMs of interest for benchmarking, select specific code analysis engines, and specify the challenges that should be considered for measurement. 4.2 Security Benchmarking In this scenario, a set of existing prompts, referred to as benchmark prompts, undergoes evaluation. In particular, the “Benchmark Agent” sends each prompt in the benchmark to the “Prompt Agent”. This component dispatches the prompt to every LLM considered for benchmarking, collects the responses for each prompt, and forwards them to the “Benchmark Agent”. Each prompt presents a challenge, and the “Benchmark Agent” determines the extent to which an LLM succeeds in resolving it by comparing the response to the expected outcome. For instance, in terms of vulnerable code, it measures the quantity and severity of potential CWEs. Each LLM is assigned a performance score for each challenge, and ultimately, the LLMs are ranked. At present, LLMSecGuard relies on the CyberSecEval benchmark, which incorporates extensive prompts and regular expressions to uncover potential issues. 4.3 Secure Code Generation Figure 2 (LLMSecGuard’s secure code generation workflow) illustrates the workflow in this scenario. It begins with the user providing a prompt of interest, anticipating the output to be a piece of code. The “Prompt Agent” assumes control and forwards the user’s prompt to the top LLM determined in the benchmarking scenario (unless the user chooses a different LLM). Upon receiving the LLM’s response, this component then transfers the produced code to the “Security Agent” for security inspection. The “Security Agent” collaborates with external analysis engines (configured earlier) to uncover potential vulnerabilities and the respective lines in the code. If the termination condition is met (i.e., there is no vulnerability or the maximum number of analysis iterations is reached), the code, along with its vulnerability information, is immediately reported back to the user. In cases where the termination condition is not met, this information is relayed to the “Prompt Agent”, which formulates a new prompt based on the collected vulnerability information and queries the LLM once again. 5 RELATED WORK The popularity of AI-generated code has attracted the attention of security researchers. Pearce et al. assessed Copilot’s performance in suggesting code across 89 scenarios aligned with MITRE’s “Top 25” CWEs, revealing that approximately 40% of the generated programs contained vulnerabilities [21]. Nonetheless, in a later study, Asare et al. came to a different conclusion [2]. Precisely, Copilot did not consistently reproduce past vulnerabilities introduced by humans.
In about 25% of cases, Copilot even proposed fixes for these vulnerabilities, suggesting a potentially lower susceptibility to security risks compared to human programmers. Fu et al. analyzed 435 code snippets generated by Copilot in public GitHub projects and found that 35% of the snippets exhibit CWEs [8]. Mousavi et al. designed 48 programming tasks for five common security APIs in Java and evaluated the performance of ChatGPT in developing these tasks. They uncovered a concerning level of security, specifically, they found an average misuse rate of 70% in these tasks [20]. Preprint Version EASE 2024, June 18\u201321, 2024, Salerno, Italy Arya Kavian, Mohammad Mehdi Pourhashem Kallehbasti, Sajjad Kazemi, Ehsan Firouzi, and Mohammad Ghafari Researchers have also developed benchmarks for assessing the security of LLM-generated code. Bhatt et al. developed CYBERSECEVAL, a benchmark tailored to evaluate the cybersecurity risks posed by LLMs [3]. Hajipour et al. examined code models for generating high-risk security weaknesses and built a collection of diverse nonsecure prompts for various vulnerability scenarios, which can be used as a benchmark to compare security weaknesses in LLMs [13]. 6 FUTURE WORK We plan to investigate whether LLMSecGuard will effectively support developers in writing secure code in real-world scenarios. Two groups of developers, both utilizing LLMs for coding, will be recruited for our study. We will assign the same programming tasks to each group, instructing one to utilize LLMs freely and the other to exclusively use LLMSecGuard during the coding process. We will measure the time taken to complete each task, the number, and the severity of vulnerabilities. We will compare the results across two groups as well as based on participants\u2019 experience. Pending positive evaluations, our ultimate goal is to integrate LLMSecGuard into at least one popular IDE (Integrated Development Environment), as deemed essential by developers for a seamless user experience [24]. Furthermore, this integration would allow LLMSecGuard to gather development context, thereby enabling the formulation of better prompts for code generation. Future work could also investigate code changes in each iteration between \u201cPrompt Agent\u201d and \u201cSecurity Agent\u201d, as well as examine how the engineering of prompts and vulnerability information affects the performance of LLMs. Finally, it is noteworthy that although a code snippet may not inherently contain vulnerabilities, its integration into a program or execution within a specific environment could still pose security risks and therefore warrants investigation [23]. 7 CONCLUSION We introduced LLMSecGuard, an open-source framework developed to equip developers with code solutions that are more secure than the code initially generated by Large Language Models (LLMs). These code suggestions are obtained through the integration of LLMs and static security code analyzers. LLMSecGuard also measures the security properties of LLMs and provides researchers and practitioners with a current security comparison of different LLMs in the wild."
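The secure code generation loop of Section 4.3 (Prompt Agent to LLM, then Security Agent, then re-prompt until the termination condition is met) can be summarised in a short Python sketch. This is purely illustrative and not LLMSecGuard's actual implementation: `query_llm` and `run_static_analysis` are hypothetical stand-ins for the REST calls to the configured LLM and analysis engine, `max_iterations` models the "termination condition" parameter, and the structure of each finding (`cwe`, `line`, `message`) is likewise assumed.

```python
def secure_code_generation(user_prompt, query_llm, run_static_analysis, max_iterations=3):
    """Illustrative sketch of the Section 4.3 workflow (not the actual LLMSecGuard code)."""
    prompt, code, findings = user_prompt, None, []
    for _ in range(max_iterations):
        code = query_llm(prompt)              # Prompt Agent forwards the prompt to the LLM
        findings = run_static_analysis(code)  # Security Agent calls the analysis engine
        if not findings:                      # termination: no potential vulnerabilities left
            break
        # Prompt Agent formulates a new prompt from the reported issues and line numbers.
        issues = "\n".join(
            f"- {f['cwe']} at line {f['line']}: {f['message']}" for f in findings
        )
        prompt = (
            "The following code has potential security issues:\n"
            f"{code}\n\nIdentified issues:\n{issues}\n\n"
            "Please return a corrected, secure version of the code."
        )
    return code, findings  # the final code plus any remaining vulnerability information
```

In a deployment, `run_static_analysis` would aggregate results from whichever engines (e.g., Semgrep or Weggli) the user configured, which is why the sketch keeps it as an injected callable rather than a fixed tool.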
17
+ }
intro_8K/test_introduction_short_2405.01116v1.json ADDED
@@ -0,0 +1,16 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01116v1",
3
+ "title": "\"In-Context Learning\" or: How I learned to stop worrying and love \"Applied Information Retrieval\"",
4
+ "abstract": "With the increasing ability of large language models (LLMs), in-context\nlearning (ICL) has evolved as a new paradigm for natural language processing\n(NLP), where instead of fine-tuning the parameters of an LLM specific to a\ndownstream task with labeled examples, a small number of such examples is\nappended to a prompt instruction for controlling the decoder's generation\nprocess. ICL, thus, is conceptually similar to a non-parametric approach, such\nas $k$-NN, where the prediction for each instance essentially depends on the\nlocal topology, i.e., on a localised set of similar instances and their labels\n(called few-shot examples). This suggests that a test instance in ICL is\nanalogous to a query in IR, and similar examples in ICL retrieved from a\ntraining set relate to a set of documents retrieved from a collection in IR.\nWhile standard unsupervised ranking models can be used to retrieve these\nfew-shot examples from a training set, the effectiveness of the examples can\npotentially be improved by re-defining the notion of relevance specific to its\nutility for the downstream task, i.e., considering an example to be relevant if\nincluding it in the prompt instruction leads to a correct prediction. With this\ntask-specific notion of relevance, it is possible to train a supervised ranking\nmodel (e.g., a bi-encoder or cross-encoder), which potentially learns to\noptimally select the few-shot examples. We believe that the recent advances in\nneural rankers can potentially find a use case for this task of optimally\nchoosing examples for more effective downstream ICL predictions.",
5
+ "authors": "Andrew Parry, Debasis Ganguly, Manish Chandra",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "cs.IR",
9
+ "cats": [
10
+ "cs.IR"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "LLM Fairness",
14
+ "gt": "In-context learning (ICL), unlike supervised learning, does not in- volve training a set of parameters \ud835\udf03on labeled examples. Rather, the posteriors are now a function of the following: a) text of the input test instance, b) the decoder parameters of a pre-trained large lan- guage model (LLM), c) a prompt instruction, and d) optionally, a set of \ud835\udc58input examples (commonly called \ud835\udc58-shot learning). Formally, \ud835\udc43(\ud835\udc66|x) = \ud835\udc53(x, P\ud835\udc58(x);\ud835\udf19LLM), (1) where, different from a supervised setup, the function \ud835\udc53does not have a parameterized representation that can be learned using a training set with gradient descent. The function itself depends on the pre-trained parameters \ud835\udf19LLM of an LLM, the current inputs for which a label is to be predicted, and a prompt comprising a set of \ud835\udc58 text units denoted by P\ud835\udc58(x). Since the decoder of an LLM generates a sequence of words of the form of \ud835\udc641, . . . ,\ud835\udc64\ud835\udc41(\ud835\udc41being the maximum length of a sequence), the class posterior likelihoods are computed in the following way. A set of classes (say for a \ud835\udc5d-way classification problem) is mapped to \ud835\udc5ddifferent equivalent sets of words, say \ud835\udc49(\ud835\udc66), where \ud835\udc66\u2208Z\ud835\udc5d \u2013 these sets commonly being called verbalisers [32]. For instance, for a binary classification problem (e.g., that of a movie review as shown in Figure 2), \ud835\udc5d= 2 (i.e., \ud835\udc66\u2208{0, 1}), and a reasonable way to define the verbaliser sets could be via the following words: \ud835\udc49(0) = {\u2018false\u2019, \u2018negative\u2019}, and \ud835\udc49(1) = {\u2018true\u2019, \u2018positive\u2019}. Note that the word \u2018learning\u2019 in ICL is a misnomer because there are no updates to the decoder parameters of an LLM. For more details on ICL, please refer to these excellent surveys [20, 47]. 2.2 The role of IR One of the most important components of ICL (as shown in Figure 2) is the search component that outputs a top-\ud835\udc58candidate set of similar instances from the training set, i.e., P\ud835\udc58(x) of Equation 1. Although, in principle, it is possible to include random examples from the training set in the prompt, it has been shown that localised examples (i.e., examples that are topically similar to the current instance) yield better performance [44, 47]. The reason why this works can be traced to the fundamental principle of reproducing kernel Hilbert spaces (RKHS) machine learning \u2013 that a predictor function is an aggregation of parameterised kernel functions pivoted around training data instances [60]. It is thus crucial to retrieve as many relevant examples as possible from the training set while imposing a practical constraint on the number of such examples for efficiency reasons \u2013 a classic trade-off of recall and precision in IR ad-hoc retrieval; the only difference is that relevance for ICL needs to be defined in terms of the utility or usefulness of an example towards the correct prediction. A similar question explored in IR is where to stop reading a ranked list because there is little utility in retrieving documents due to the low probability of finding relevant documents beyond a certain rank cutoff [2, 4]. 
What is more challenging is that this rank cut-off depends on the number of relevant documents occurring in the collection for a specific query; that is to say, while some queries with well-defined information needs are associated with a small number of relevant documents satisfying the specific relevance criterion, other queries with broader information needs are usually associated with a higher number of relevant documents [6]. In core IR research, this problem is usually addressed by estimating the retrieval quality of queries – the assumption being that well-specified queries yield better retrieval results (in terms of precision and recall), whereas ill-specified ones suffer from poor retrieval quality due to the apparent ambiguity of the information need. This motivation paves the path to the following section, where we discuss how query performance prediction (QPP) can also be beneficial to the related problem of retrieving similar examples in ICL.",
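As a concrete illustration of the retrieval step described above (our sketch, not the authors' code), the snippet below retrieves the k most similar labelled training examples for a test instance with BM25, via the third-party `rank_bm25` package (an assumed choice of implementation), and formats them into a k-shot prompt using a simple verbaliser, mirroring the sentiment-classification workflow the paper uses as its running example.

```python
from rank_bm25 import BM25Okapi

# Assumed binary sentiment verbaliser mapping class ids to descriptor words.
VERBALISER = {0: "negative", 1: "positive"}

def build_k_shot_prompt(test_text, train_texts, train_labels, k=4):
    """Retrieve the k most similar labelled examples (BM25) and format a k-shot prompt."""
    bm25 = BM25Okapi([t.lower().split() for t in train_texts])
    scores = bm25.get_scores(test_text.lower().split())
    top_k = sorted(range(len(train_texts)), key=lambda i: scores[i], reverse=True)[:k]

    lines = []
    for i in top_k:  # each retrieved training instance becomes one demonstration
        lines.append(f"Review: {train_texts[i]}\nSentiment: {VERBALISER[train_labels[i]]}")
    lines.append(f"Review: {test_text}\nSentiment:")  # the test instance to be completed
    return "\n\n".join(lines)
```

Swapping the BM25 scorer for a supervised ranker trained on downstream utility, or making k a function of the test instance, recovers the adaptive variants discussed in the following sections.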
15
+ "main_content": "3.1 A Variable Number of Examples The observation in IR that different queries exhibit different levels of retrieval performance can be utilised for ICL, where we can draw an analogy that some test instances are associated with better candidates for training examples (i.e., examples which are useful in the sense that including them as a part of the prompt leads to correct predictions), and hence including a small number of them should be adequate. On the other hand, the retrieval quality for some test instances (used as queries for ICL) does not yield good candidates. As a result, one needs to look down the ranked list further to collect useful examples. We call this methodology of using a variable number of demonstrations for ICL inference by the name \u2018Adaptive In-Context Learning\u2019, or AICL for short. The idea of AICL centres around choosing the context P\ud835\udc58(x) in a data-driven manner, i.e., making \ud835\udc58 a function of the data (current instance x) itself. This is somewhat analogous to choosing different values of \ud835\udc58for a \ud835\udc58-NN based nonparametric modeling [87], as shown in Figure 3. The motivation is that classifying some instances would be more difficult than others, in which cases they are potentially to be benefited from a larger value of\ud835\udc58(more context). On the other hand, for relatively easy data instances using too much context may be detrimental for effective prediction. Formally speaking, the difference of AICL with that of ICL (Equation 1) is that the value \ud835\udc58, indicating the size of the neighborhood, is no longer a constant. Instead, we denote it by a parameterised function \ud835\udf05(x) such that xxx; (2) \ud835\udc43(\ud835\udc66|x) = \ud835\udc53(x, P\ud835\udf05(x) (x);\ud835\udf19LLM), (2) SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Parry, Ganguly, and Chandra Example Retriever Instruction Formatter Index Too slow for a science fiction Ideally paced for a science fiction Fast-paced with no unnecessary side plots Test Sentence The review Ideally paced for a science fiction is a <POSITIVE> review The review Too slow for a science fiction is a <MASK> review The review Ideally paced for a science fiction is a <POSITIVE> review The review Fast-paced with no unnecessary side plots is a <POSITIVE> review The review Too slow for a science fiction is a <MASK> review <MASK> = <POSITIVE> True Label: <MASK> = <NEGATIVE> True Label: Figure 2: Example workflow of In-Context Learning for sentiment classification. The illustrative example shows a sample test instance for which a single demonstration (as retrieved from the training set) does not result in the correct prediction (prediction shown at the top). The example also shows that increasing the number of demonstrations from one to two results in the correct prediction (shown at the bottom). Demonstrations included within the prompt are shown in blue. Figure 3: Motivation behind using a variable sized neighborhood for \ud835\udc58-NN classification [87]: An instance close to a decision boundary (black \u2018?\u2019) is likely to have a higher heterogeneity in its class distribution, thus indicating the necessity of a larger neighborhood for an effective classification. where \ud835\udf05: x \u21a6\u2192{0, . . . , \ud835\udc40}, \ud835\udc40being an upper bound on the number of example instances. We now suggest how unsupervised or supervised approaches may be applied to choose the rank cutoff \ud835\udf05. 
3.2 Unsupervised Rank Cutoff Among unsupervised approaches, two main ideas in IR research can be used to determine the number of examples in ICL. Score Distribution-based Models. The first thread of work is based on the hypothesis that the scores of relevant and non-relevant documents follow a different statistical distribution, e.g., Arampatzis et al. propose to use a mixture of Normal-Exponential distributions \u2013 Normal for relevant and Exponential for non-relevant documents \u2013 to model the score distribution of top-ranked documents. The work in [2] uses expectation maximisation (EM) to estimate the parameters of this mixture distribution and thereby predict the most likely cutoff rank beyond which the probability of finding a relevant document is considerably low. Such ideas of utilising the characteristic differences between the score distributions of relevant and non-relevant documents have also been used for query performance prediction (QPP) [12]. While an EM from retrieval scores allows provision for applying a variable number of examples, the following are some ICL-specific challenges that need to be researched. \u2022 With the notion of relevance being changed to \u2018downstream utility\u2019, the score distributions of useful and not useful examples may not follow the same mixture distribution of NormalExponential as reported in [2, 12]. It will be an interesting future research direction to investigate the latent relations between the similarity scores and the downstream utility of the examples in the context of ICL. \u2022 With a threshold on the score-distributions, it is difficult to restrict the cutoff to a maximum value, which is essential for ICL due to a maximum limit on the input size to an LLM. \u2022 A score distribution-based approach does not explicitly consider the information from the queries themselves (equivalently, the test instances in ICL). We now describe another thread of work in IR research that may help alleviate the last two limitations. QPP-based Models. Different from the rank cut-off strategies, query performance prediction (QPP) models seek to estimate the retrieval quality of a query. As a direct analogy, such methods can be applied to the top-similar examples retrieved in ICL with a different objective of predicting the usefulness of the examples. Most of the classic works in QPP involve unsupervised approaches that make use of the information from the set of top-retrieved documents to estimate how topically distinct are the top-retrieved documents from the rest of the collection \u2013 a large difference indicating potentially better retrieval quality [10]. Various evidences extracted from the top-retrieved documents have been shown to be useful for different post-retrieval QPP estimation methods. This includes i) the KL divergence between the language model of the top-retrieved documents and the collection model in Clarity [10], ii) the aggregated values of the information gains of each top-retrieved document with respect to the collection in WIG (Weighted Information Gain) [88], iii) the skew of the RSVs (Retrieval Status Values) measured with variance in NQC (Normalized Query Commitment) [72], iv) ideas based on the clustering hypothesis for a pairwise document similarity matrix [19], and, more recently, v) the characteristics of the embedded space of documents and queries [21, 66]. A suitable adaptation of these existing techniques can be applied in a two-stage pipeline to determine the number of examples in ICL. 
As a first step, one can employ a QPP methodology to predict the retrieval quality (in terms of the usefulness) of a set of ordered examples \u2013 a high value likely indicating that the useful examples \u201cIn-Context Learning\u201d or: How I learned to stop worrying and love \u201cApplied Information Retrieval\u201d SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Algorithm 1: LLM \ud835\udc58-shot predictions Input: x \u2013 an instance from the training set Input: \ud835\udc58(< \ud835\udc40) \u2013 number of examples (max \ud835\udc40) Output: \u0394\ud835\udc5d\u2013 Softmax posteriors begin \ud835\udc41\ud835\udc58(x) \u2190{z1, . . . , z\ud835\udc58} Instruction \u2190\u201cPredict the type of \u27e8x\u27e9as one of {\u27e8\ud835\udc360\u27e9, . . . , \u27e8\ud835\udc36\ud835\udc5d\u22121\u27e9} given the following example\u201d. for \ud835\udc56\u21901 to \ud835\udc58do Instruction.append(\u201cExample: \u27e8z\ud835\udc56\u27e9is a representative of class \u27e8\ud835\udc66(z\ud835\udc56)\u27e9\u201d) \u0394\ud835\udc5d\u2190LLM(Instruction) return \u0394\ud835\udc5d can potentially be found at the very top ranks, as a result of which, a small number of examples should potentially work well. On the other hand, a low QPP estimate likely indicates that the very top ranked examples are not likely to be useful for downstream prediction, in which case it should be better to employ a large number of examples. This approach of selecting rank cutoffs (with an upper bound) as a function of the QPP scores has been applied to determine a variable depth of relevance assessments required for a robust retrieval evaluation [25]. 3.3 Supervised Rank Cutoff Instead of devising a heuristic to predict the number of training examples to use for a test instance x, i.e., \ud835\udf05(x), a supervised approach can be applied to solve this as a classification problem, i.e., \ud835\udf05\u2261Softmax(xT\ud835\udf03), where \ud835\udf03is a set of layer(s) of parameters. The underlying hypothesis is that if we provide enough training data constituting the optimal number of examples for a range of topical content, we should be able to learn to predict the likely number of examples to use for unseen text during inference time. To train a classifier that maps a text to a number between 1 to \ud835\udc40(the maximum number of examples), it is necessary to obtain the ground-truth labels, i.e., the optimal number of examples, for each instance in the training set. We propose to obtain this by the following methodology: Given a training set instance x, one can employ a similarity function (e.g., BM25) to retrieve a candidate set of \ud835\udc40examples {z1, . . . , z\ud835\udc40}. Since x is an instance from the training set, we can utilise its label to check if the \ud835\udc58-shot predictions using an LLM are correct. It may happen that correct predictions are obtained for several values of \ud835\udc58\u2208{1, . . . , \ud835\udc40}. Several strategies can be adapted to define the ground-truth number of examples. For instance, one can stop early and simply select the smallest \ud835\udc58that results in a correct prediction. Alternatively, a potentially more robust procedure would be to exhaustively check through all possible values of \ud835\udc58= 1, . . . , \ud835\udc40, and select the one that results in a correct prediction with the least uncertainty [67, 75]. The workflow of this least uncertainty-based selection of the ground truth for the number of ICL examples is shown in Algorithm 2. 
Algorithm 1, which is invoked during the ground-truth construction, shows a sample prompt template for text classification. After executing Algorithm 2, we obtain a set of ground-truth labels K which could then be used to train a classifier, parameterised Algorithm 2: Optimal number of examples Input: T \u2013 a training set of labelled instances Output: K = \u222ax\u2208T\ud835\udc58\u2217(x) \u2013 Number of examples yielding the most confident and correct predictions for each instance x \u2208T begin for x \u2208T do max_confidence \u21900; \ud835\udc58\u2217\u21901 for \ud835\udc57\u21900 to \ud835\udc40do \u0394\ud835\udc5d\u2190LLM \ud835\udc58-shot predictions(x, \ud835\udc57) // Call Algorithm 1, i.e., try to predict with \ud835\udc57examples \u02c6 \ud835\udc66(x) \u2190argmax\u0394\ud835\udc5d // Get the predicted class confidence\u2190\u0394 \u02c6 \ud835\udc66(x)I( \u02c6 \ud835\udc66(x) = \ud835\udc66(x)) // Check if the predicted class is the correct one and record the prediction confidence if confidence > max_confidence then max_confidence \u2190confidence // Keep track of the least uncertain correct prediction \ud835\udc58\u2217\u2190\ud835\udc57 K \u2190K \u222a\ud835\udc58\u2217 return K by \ud835\udf03, via optimising: argmin\ud835\udf03 \u2211\ufe01 x\u2208T,\ud835\udc58\u2217\u2208K L(xT\ud835\udf03,\ud835\udc58\u2217), (3) where L is a standard loss function, e.g., the cross-entropy. During inference, for each x \u2208E (E denoting an evaluation set), we propose to apply the classifier \ud835\udf05: x \u21a6\u2192{1, . . . , \ud835\udc40} \u2013 trained via Equation 3 \u2013 to predict the number of examples, and eventually conduct a \ud835\udf05(x)-shot prediction on x (Equation 2). 3.4 Open Research Questions and Challenges Till now in this section, we described how unsupervised and supervised approaches can be applied to dynamically select the number of examples to be used for an ICL-based prediction. In this section, we discuss some research directions that could be explored to adapt ICL in alternative ways to further improve its effectiveness. First, we would like to point out to the existing work on generating query variants, as a part of a data augmentation strategy, to devise alternative formulations of the same or similar information needs. This has been shown to improve the effectiveness of rankers [28], query performance prediction [15, 86] relevance feedback [7], and even act as a tool to measure consistency of IR models [70]. Given the recent success of zero-shot query generation capabilities of LLMs [1, 83], we believe that augmenting a test instance with alternative text representations can be useful to eventually improve retrieval quality (and hence potentially improve the downstream ICL effectiveness). The unsupervised and supervised approaches for predicting the number of examples per query (test instance) may also lead to better ICL effectiveness, as per the existing findings that variants do actually help improve QPP [15, 86]. We thus formulate the following two research questions aligned along this direction. \u2022 RQ-3.1: Can query variants generated by LLMs (or otherwise) improve the prediction of the number of examples to use for each instance? 
\u2022 RQ-3.2: Can relevance feedback based approaches with or without the use of generated query variants help reorder the top-\ud835\udc58 SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Parry, Ganguly, and Chandra initially retrieved candidate set of examples towards a better prediction of the number of examples? The other direction of work involves a dynamic selection of not just the neighborhood size but also other ICL parameters. For instance, the verbaliser [69] sets can be selected dynamically from a set of alternatives based on the input instance. Further, a prompt can also be selected dynamically again based on the input instance; an unsupervised approach exploring this idea has already been studied in [75]. Generally speaking, the research question that can potentially be explored is the following. \u2022 RQ-3.3: Can other ICL parameters also be chosen in a data-driven manner to lead to better effectiveness, e.g., the verbaliser, the prompt, or even an LLM itself (akin to a mixture of experts)? 4 RANK ICL EXAMPLES \u21a6\u2192SUPERVISED IR? In this section, we discuss another crucial aspect of ICL that can potentially be improved by developing ranking models specifically suited to a different notion of relevance: ICL downstream task-specific usefulness of examples. The concept of an effective example in core neural IR is well-researched, particularly the notion of \u2018hard\u2019 negatives during fine-tuning [27, 36]. These negatives have improved downstream precision on ranking tasks [84] and, more generally, representation learning [29]. Specific to few-shot learning, Rubin et al. [67] employed a noise contrastive estimation (NCE) loss [30] to train a bi-encoder-based pairwise ranker using SBERT [64] embeddings. For training the ranking model, pairs of instances (relevant and non-relevant examples) were collected in the following way. For each pivot instance x from a training set, the authors employed BM25 to constitute a top-\ud835\udc58candidate set of examples. Each pair (x, z\ud835\udc56) was then tested to check whether a 1-shot prediction with z\ud835\udc56was correct, in which case, z\ud835\udc56was classified as a relevant example for x, or else it was considered as a non-relevant one. Batches comprising relevant and non-relevant pairs were then constituted to train a standard NCE loss. While the work of Rubin et al. [67] is a definitive step towards leveraging a task-specific notion of relevance, the investigation should not be considered complete. Several potentially promising research directions should be explored to improve ICL effectiveness further. We now provide a survey of neural ranking literature introducing core paradigms which may be utilised in example selection. Bi-Encoder architecture. A bi-encoder architecture encodes text into a latent representation that can be compared in a vector space; in the context of a retrieval task, these texts would be queries and documents. While a bi-encoder is implemented either with a Siamese network of shared parameters [64] or as a single encoder [48], the latter has become prevalent in recent years [36, 84]. The performance of neural models in search was significantly improved with the release of BERT [18]. Karpukhin et al. [36] first proposed the use of \u2018hard\u2019 negatives mined from BM25 to improve the precision of BERT-based rankers. Gao et al. 
[27] then proposed a variant of the NCE objective, \u2018Localised Contrastive Estimation\u2019, in which multiple negatives are sampled for each query to account for the variance in the notion of non-relevance. In doing so, they also showed the effectiveness of hard negatives mined from finetuned rankers. To further improve the quality of negative samples, Xiong et al. [85] proposed that a model could choose negatives during training to allow negatives to become continuously \u2018harder\u2019 as fine-tuning progresses. At a conceptual level, bi-encoders generally represent a text as a single embedding by using the representation of the BERT [CLS] token as a proxy for the entire sequence. Other pooling methods are effective, including maximum sequence similarity [13] and late interaction in which a max pooling is performed over the token-level similarity of each query token to document tokens [37]. More recent works instead use a BERT-style encoder with a shallow decoder, which places greater emphasis on the ability of the encoder during pre-training. This architectural development has yielded not only state-of-the-art recall but new pre-training styles, including lexical grounding [71] and text reconstruction [84]. The separate encoding of queries and documents allows for the offline encoding of documents which can vastly improve online latency. This is often coupled with an approximate nearest neighbour search in a vector space [31, 37]. More specifically, after training a bi-encoder model, the parameters of the trained model act as \u2018embeddings\u2019 for each document in the collection. During inference time, a query is first embedded into a vector. Then an approximate nearest neighbour search, e.g., HNSW [50], is conducted on an indexed representation of these dense document vectors. Therefore, exploring the potential benefits gained from efficient, dense endto-end retrieval of training examples for effective ICL can be an interesting research direction. Cross-Encoder architecture. A cross-encoder instead jointly encodes a query and document at inference time [55], allowing deep interactions between texts that are impossible in a bi-encoder architecture. Empirically, these models are more precise than biencoders at the expense of latency, as representations cannot be pre-computed in a standard setting. Both BERTand T5-based architectures have been proposed [55, 56]; in the case of a BERT model, a feed-forward classification head is used to output class probabilities of relevance [55]. In the case of a sequence-to-sequence model, token logits are taken as surrogates for class probabilities [56]. Recent developments in LLMs have prompted research in these large decoder-only models as text rankers. A list-wise approach is commonly taken in which a model receives multiple documents for a given query and outputs a permutation of the original ranking [62, 77]. The development of these models is still in its infancy but it offers opportunities to investigate highly precise ranking models potentially in sample mining beyond simple ad-hoc search. Therefore, it is reasonable to assume that employing a crossencoder to learn ranking examples by their downstream usefulness should yield better results than a bi-encoder-based approach. An interesting research direction would hence be to investigate the optimal architecture within an ICL pipeline, considering the efficiency-effectiveness trade-off. Teacher Distillation. 
Moreover, a rich literature exists on distilling the more computationally expensive cross-encoder models into the simpler bi-encoder, the former acting as a teacher model and the latter as a student [31]. Distilling a teacher model into a bi-encoder one allows provision for end-to-end dense retrieval without requiring any sparse index to retrieve a candidate top-\ud835\udc58. Two core paradigms of distillation are homogeneous architecture and heterogeneous architecture distillation. The former generally \u201cIn-Context Learning\u201d or: How I learned to stop worrying and love \u201cApplied Information Retrieval\u201d SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA will distill one model into a newly initialised copy via minimisation of a divergence metric over either the final hidden state [43] or internal states such as attention layers [34]. The latter minimises prediction error between teacher and student models commonly via a mean squared error criterion over triplet residuals (residual between positive and negative example scores), allowing for \u2018crossarchitecture knowledge distillation\u2019 [31] as a scalar relevance score is not architecture dependent. This approach has become a core component of many state-of-the-art dense retrieval models, frequently beginning with a cross-encoder teacher used to mine hard negatives and teacher scores before a second stage distillation is performed using the previous distilled model as a teacher [71, 84]. A parallel area of work gaining traction is knowledge sharing between a retrieval system and a generative model [33, 34, 40]. This paradigm is directly correlated with our perspective with recent work finding success in directly optimising a retriever to maximise downstream QA performance [34]. However, these systems are currently brittle with Cuconasu et al. [11] finding that the addition of irrelevant content before a gold label answer as context to a QA system can improve performance against any intuition, suggesting much work can be done in this area to optimise how we present a model with ICL examples. 4.1 Combined Utility of ICL Examples There exists a fundamental difference between relevance of documents in IR and usefulness of examples in ICL. In IR, a document\u2019s relevance is independent of the relevance of another document, and when combined, the information still remains relevant. The situation is more complex for ICL. More precisely speaking, two labeled instances in ICL that are useful examples by themselves (i.e., when used as a 1-shot demonstration results in a correct prediction) may not be yielding a correct prediction when combined for a 2-shot inference [46]. This is likely to happen because the decoder, on obtaining too much of a context, can be biased towards a specific topical cluster of words corresponding to the incorrect class descriptor. While more investigation is required to analyse the empirical likelihood of this phenomenon of \u2018non-cooperation\u2019 between examples occurring, it is worth exploring what adjustments may be needed at a methodology level to even define an ideal ranking of the training data examples for few-shot ICL. The objective in this case is not just to maximise the number of \u2018relevant documents\u2019 (as per the IR analogy) within the top-\ud835\udc58, but rather to ensure the combined usefulness of the examples. A possible direction towards this could be to adapt a listwise ranking model with this modified notion of combined relevance (usefulness). 
A more computationally efficient approach would be to operate at the level of pairs, i.e., predict which pairs are concordant and discordant. An element of each pair takes on a Boolean value (either useful as a 1-shot example or not), which means that the number of different ways in which a pair can be either concordant or discordant is the number of possible Boolean functions of 2 variables, which is 16 (two such sample functions are Boolean OR, where if one of the examples is useful so is the combination, and XNOR where a pair is discordant if either of the examples are useful as 1-shot). Since, in the general case, the number of Boolean functions of \ud835\udc5bvariables is 22\ud835\udc5b, listwise training with \ud835\udc5b> 3 will likely be computationally prohibitive. Open research questions. Before concluding this section, we now summarise the importance of the following IR-specific research questions for ranking examples in ICL. \u2022 RQ-4.1: Is ICL sensitive to the choice of a neural retrieval model, i.e., can we get an improvement using a basic Siamese model over SBERT as envisaged in [67]? \u2022 RQ-4.2: How faithful is the assumption that a combination of several 1-shot useful examples remain useful for ICL prediction? \u2022 RQ-4.3: If the answer to RQ-4.2 is negative, then there is a significant scope of improving over standard learning to rank approach by explicitly modeling concordance (or the lack of it) of the usefulness of examples in ICL. How can we adjust ranking models, and how much improvement can we achieve over a baseline of the standard few-shot? 5 INFORMATIVE EXAMPLES \u21a6\u2192FACETED IR? In this section, we discuss the last of our proposed verticals towards an effective ICL workflow as outlined in Figure 2, which is that of seeking to provide relevant but diverse contexts to an LLM. More precisely speaking, topical diversity of the examples should play an important role in preventing a decoder bias towards a single topic. This is more true for text generation tasks, such as non-factoid question answering, where an LLM decoder needs to be aware of the different sub-topics to be able to construct a comprehensive answer. Even for classification tasks, diverse examples are likely to help a decoder consider a majority of likely topics (the verbalisers of which map to descriptors of closely related categories) during inference, thus minimising the risks of misclassification. Faceted search has been well studied in IR. Explained simply, a faceted search system extracts the multiple different aspects of the information need from the top-retrieved set and maps each retrieved document to one of these aspects [9, 22\u201324]. Faceted search is particularly useful for queries with broader information needs, where it can assist a user to reformulate their information need to one of the more specific aspects of the broader one, e.g., transform a query \u2018dehumidifiers\u2019 to \u2018price range of dehumidifiers\u2019 where the intention (information need facet) is to buy one [6]. Faceted search is closely related to the concept of diversified ranking [8], where search systems seek to improve the retrieval effectiveness for all possible aspects of a broader information need, e.g., for the earlier example query on \u2018dehumidifiers\u2019, retrieve documents related to information on both the aspects of price ranges, technical specifications, product reviews, and general knowledge on dehumidifiers. Santos et al. 
[68] propose to leverage query variants (what the paper calls \u2018sub-queries\u2019) and their top-retrieved lists for constructing a list of documents potentially relevant to each facet of the original query. Close to diversity is the concept of fair search which seeks to mitigate biases towards any particular aspects of information need, and recently neural approaches have become common to balance relevance with fairness [57]. From a search user\u2019s perspective, it has been shown that diversified retrieval systems play an important role in improving the search experience, by providing greater coverage of a topic and mitigating potential bias in search results [51]. Similarly, a greater SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Parry, Ganguly, and Chandra topical coverage and a less topical bias can potentially lead an LLM decoder towards contexts more useful for a downstream task. In fact, Levy et al. [38] show that diversifying the few-shot examples on the basis of abstract syntax tree (AST) structures improves the downstream task of compositional generalisation. This indeed shows a positive direction of research where the considerable volume of work conducted on faceted search and diversification by the IR community can be useful for ICL. However, similar to relevance, the notion of diversity would also need suitable adjustments for ICL. A suitable notion of diversity should not just consider similarities between the input examples but rather also their class labels and, more importantly, similarities in the ways in which they affect an LLM decoder\u2019s generation path. Two examples which both output similar output trees should not be considered diverse. In principle, one can potentially adapt the classification methodology that we proposed to learn the optimal number of examples based on minimising the prediction uncertainties for the purpose of classifying if a given pair of examples is diverse or not. Furthermore, we posit that neural approaches that take into account both relevance and fairness or diversity (again both in the context of downstream ICL) should find a use-case in ICL to help diversify the useful examples. Open research questions. Based on the discussions in this section, we now outline the following research directions. \u2022 RQ-5.1: How sensitive is ICL towards the topical diversity of the examples? \u2022 RQ-5.2: How can the standard notion of diversity be extended to consider the latent dependence between the input and the output of an LLM decoder aligning towards a specific downstream task? \u2022 RQ-5.3: How may existing IR metrics for diversity (e.g., \ud835\udefc-nDCG [8]) be adapted to measure how effective is the example retrieval for downstream ICL? \u2022 RQ-5.4: How can multi-objective neural ranking models be trained to jointly learn downstream specific usefulness and diversity for ICL? 6 PRELIMINARY EVALUATION In this section, we report the results of our initial investigation, which was conducted to answer a subset of research questions of the first vertical, i.e., to develop an effective adaptive version of ICL that can dynamically select the number of examples. 6.1 Research Questions and Dataset Research Questions Investigated. In Section 3.2, we discussed the possibilities of applying QPP-inspired unsupervised approaches for selecting a cutoff point in the ranked list of examples. On the other hand, in Section 3.3 we proposed a classifier-based approach to learn the optimal number of examples. 
In our experiments, we compare the supervised approach of Algorithm 2 and an NQC-based unsupervised approach for adaptive \ud835\udc58-shot and compare both with static \ud835\udc58-shot on standard datasets for text classification. Explicitly stated, we investigate the following research question. \u2022 CRQ-1: Does adaptively selecting the number of examples in ICL lead to improved downstream effectiveness? \u2022 CRQ-2: Does an unsupervised approach obtain a reasonable performance as compared to a supervised one? Since our experiments answer the above questions, they are not open, unlike the ones we expounded on in this paper. Therefore, we prefix these questions with a \u2018C\u2019 (closed). Dataset. We conduct experiments on three text classification datasets, namely AGNews [16], Jigsaw Toxic Comment1 and SST2 [74]. Below, we provide more details on each dataset. \u2022 AGNews: AGNews is a topic classification dataset constituting news articles from the Web. Each document in the dataset belongs to one of the following 4 classes: World, Sports, Business, and Sci/Tech. The total number of training instances is 120, 000, while the test set size is 7, 600. Each class contains 30, 000 samples from the train set and 1, 900 instances from the test set. \u2022 Jigsaw Toxic Comments: Due to its societal impact, toxicity prediction is a problem of considerable practical interest. This dataset, released by Jigsaw and Google as a part of a Kaggle competition, comprises of comments extracted from Wikipedia\u2019s talk page, each being annotated by human evaluators across six categories representing toxic behaviors: toxic, \u2018severe toxic\u2019, obscene, threat, insult, and \u2018identity hate\u2019. \u2022 SST2: The Stanford Sentiment Treebank (SST) is a corpus with fully labeled parse trees that allows for a complete analysis of the compositional effects of sentiment in a language. The corpus consists of 11, 855 sentences extracted from movie reviews. Being parsed with the Stanford parser, it constitutes a total of 215, 154 unique phrases from the parse trees, each annotated by 3 human judges. The SST2 (also called SST-binary) dataset is a subset of SST, specifically prepared for the task of binary classification. More precisely, neutral sentences from SST were discarded, and two level, each for the negative and the positive classes were merged thus yielding two classes in total. 6.2 Methods and Parameters Our proposed methods for Adaptive ICL (AICL). As novel methods for adaptive ICL, we employ the following: \u2022 The supervised strategy of Algorithm 2, which we call supervised adaptive ICL (SAICL). \u2022 A QPP-based unsupervised strategy (as per the generic direction outlined in Section 3.2), where we compute the rank cutoff in a relatively simple way, stated as follows. First, given a top-\ud835\udc40set of candidate examples, we compute a normalised value of the NQC estimator [72] (we employ a max normalisation, the normalisation constant being the max NQC value from the training set). We then quantise the normalised values into \ud835\udc40equi-spaced intervals ranging from 0 to the max NQC value. As per the hypothesis that a higher NQC value indicates a better retrieval quality, we employ the inverse linear relation and end up selecting a value close to 0 for higher NQC, and a value close to \ud835\udc40for smaller ones. We call this method QPP-AICL. Baselines. 
As baselines to compare SAICL and QPP-AICL against, we employ the following: 1https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge \u201cIn-Context Learning\u201d or: How I learned to stop worrying and love \u201cApplied Information Retrieval\u201d SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA \u2022 0-shot: This approach simply inputs an instruction without supplying any examples. \u2022 Static ICL (SICL): This refers to the standard method of supplying a fixed number of semantically-similar examples as input, similar to [44]. This is different from AICL in that the number of examples in the prompt is always fixed, however, the examples themselves vary for different test inputs based on semantic similarity. For a fair comparison with AICL methods, we report the results obtained with three different values of \ud835\udc58: 1, \u2308\ud835\udc40 2 \u2309and \ud835\udc40 representing the most conservative (in terms of the input size), average, and least conservative situations. In our case, \ud835\udc40= 5, which means that our standard ICL experiments operate with the 1-shot, 3-shot and 5-shot settings. Model and hyper-parameter settings. Among a relatively large number of available choices for available LLMs \u2013 either opensource models or black-box cloud APIs \u2013 we, in particular, conduct our experiments on GPT-J [76]. GPT-J is an open-source GPT-3-like model trained on the Pile dataset [26]. GPT-J-6B yields performance comparable to the 6.7 billion parameter GPT-3 (Curie) on a variety of tasks [81]. The maximum context length (in terms of number of tokens) of GPT-J is 2048. In our experiments, we vary \ud835\udc40the maximum number of examples, from 1 to 5 (for static ICL this is denoted by \ud835\udc58). For a fair comparison, we use the identical prompt template (as shown in Algorithm 1) and greedy decoding with the same verbalizers across all methods employed in our experiments. 6.3 Results Table 1 shows the results (in terms of macro-averaged precision, recall and F1) obtained by the different ICL strategies. It can be seen that SAICL turns out to be the best among the competing approaches. The reason it outperforms the best baseline (static ICL) is that SAICL is able to effectively adapt the number of examples to use, thereby preventing itself from the degradation effects of non-relevant (not useful) examples. In effect, it learns a latent relationship between the topical content and the quantity of context required to guide the decoder\u2019s output in the right direction effectively. Moreover, SAICL is able to operate more effectively with smaller input sizes (see the average value of \ud835\udc58and also the average size of the input in terms of the number of tokens), which means that it is computationally faster as compared to static ICL (SICL). Our observations reveal that CRQ-1 is answered in the affirmative, i.e., an adaptive selection of the number of examples in ICL does improve downstream effectiveness and efficiency. The results with the unsupervised QPP-based approach (QPPAICL) turned out to be worse than the baseline of static ICL. From a broader perspective, this points to an important finding that offthe-shelf IR approaches without modifications specifically suited to the underlying characteristics of the downstream tasks in ICL may not directly yield improvements in the effectiveness of ICL. 
For instance, NQC seeks to estimate relevance of documents, and as we have argued before, that relevance has a different interpretation for the ICL examples. Although the observations with QPP-AICL answers CRQ-2 in negative, i.e., an unsupervised approach for an adaptive selection of ICL examples is substantially worse than a supervised one, they do suggest that methodologies developed by Table 1: Macro-averaged precision, recall and F1-scores for different in-context learning (ICL) methodologies. The column \ud835\udc58denotes the number of few-shot examples. For AICL approaches, this column denotes the average number of examples used for the respective method. \u2018AIS\u2019 denotes the average input size measured in terms of the number of tokens rounded off to the nearest integer. Evaluation Dataset Method \ud835\udc58 Precision Recall F-score AIS AGNews 0-shot 0 0.6569 0.5932 0.5849 60 SICL 1 0.9015 0.9017 0.9016 125 SICL 3 0.9008 0.8997 0.8989 252 SICL 5 0.8963 0.8930 0.8917 380 QPP-AICL 3 0.8545 0.8499 0.8486 220 SAICL 1.87 0.9080 0.9096 0.9067 175 Toxicity 0-shot 0 0.5689 0.6238 0.5769 103 SICL 1 0.5760 0.6989 0.5505 195 SICL 3 0.6092 0.7180 0.6254 335 SICL 5 0.6078 0.7248 0.6217 431 QPP-AICL 3 0.5906 0.6942 0.5977 289 SAICL 3.46 0.6194 0.6983 0.6303 359 SST2 0-shot 0 0.7503 0.5022 0.3379 30 SICL 1 0.8703 0.8703 0.8703 61 SICL 3 0.9140 0.9137 0.9137 121 SICL 5 0.9245 0.9230 0.9230 181 QPP-AICL 3 0.8556 0.8479 0.8470 106 SAICL 4.12 0.9302 0.9304 0.9302 154 researchers in the future for answering any of the open research questions discussed in this paper should be fundamentally grounded in modeling the notion of relevance (usefulness of examples) in a robust and effective manner. 7 CONCLUSION In this perspective paper, we discuss how some of the recent developments in generative AI (specifically in-context learning or ICL) can provide a scope to IR/NLP researchers to revisit some of the well-researched IR topics in a new light, where the notion of relevance of a document to an information need changes to that of usefulness of a few-shot example for a downstream AI task, e.g., text classification, question answering etc. More specifically, we suggest three main verticals in which this research can be structured each offering a set of open questions related to core IR research. The first vertical aims at adaptively adjusting an ICL workflow, e.g., choosing the number of examples to be used in a data-driven manner. Initial empirical investigations reported in this perspective paper shows that this direction is promising. The second vertical mainly covers devising novel ranking models to better distinguish (and thereby retrieve at better ranks) a useful few-shot context from a noisy one. Finally, the third vertical concerns an investigation of topical diversity in the few-shot examples for better downstream prediction. We believe that the research questions that we have proposed in this paper will benefit the research community to exploit this synergy between ICL and IR, and eventually guide the development of new algorithms and techniques. SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Parry, Ganguly, and Chandra"
16
+ }
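The QPP-based cutoff rule that the quoted paper text describes for QPP-AICL (max-normalise an NQC estimate over the top-M candidate examples, quantise into M equi-spaced intervals, and pick a small number of shots when NQC is high) can be sketched in a few lines. This is a hedged illustration only: `nqc` below is a simplified stand-in for the NQC estimator cited in the text, not the authors' implementation, and `adaptive_k` reflects one plausible reading of the inverse-linear bucketing rule.

```python
import numpy as np

def nqc(scores: np.ndarray) -> float:
    # Simplified stand-in for NQC: standard deviation of the retrieval scores
    # of the top-M candidate examples, normalised by their mean score
    # (the original estimator normalises by a corpus-level score instead).
    return float(np.std(scores) / (abs(float(np.mean(scores))) + 1e-9))

def adaptive_k(scores: np.ndarray, max_train_nqc: float, M: int = 5) -> int:
    # Max-normalise against the largest NQC value seen on training queries,
    # quantise into M equi-spaced intervals, then apply the inverse-linear rule:
    # high NQC (good retrieval quality) -> few examples, low NQC -> many.
    norm = min(nqc(scores) / (max_train_nqc + 1e-9), 1.0)
    bucket = int(round(norm * M))
    return max(0, M - bucket)
```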
intro_8K/test_introduction_short_2405.01130v1.json ADDED
@@ -0,0 +1,16 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01130v1",
3
+ "title": "Automated Virtual Product Placement and Assessment in Images using Diffusion Models",
4
+ "abstract": "In Virtual Product Placement (VPP) applications, the discrete integration of\nspecific brand products into images or videos has emerged as a challenging yet\nimportant task. This paper introduces a novel three-stage fully automated VPP\nsystem. In the first stage, a language-guided image segmentation model\nidentifies optimal regions within images for product inpainting. In the second\nstage, Stable Diffusion (SD), fine-tuned with a few example product images, is\nused to inpaint the product into the previously identified candidate regions.\nThe final stage introduces an \"Alignment Module\", which is designed to\neffectively sieve out low-quality images. Comprehensive experiments demonstrate\nthat the Alignment Module ensures the presence of the intended product in every\ngenerated image and enhances the average quality of images by 35%. The results\npresented in this paper demonstrate the effectiveness of the proposed VPP\nsystem, which holds significant potential for transforming the landscape of\nvirtual advertising and marketing strategies.",
5
+ "authors": "Mohammad Mahmudul Alam, Negin Sokhandan, Emmett Goodman",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Diffusion AND Model",
14
+ "gt": "Virtual Product Placement (VPP) refers to the unobtru- sive, digital integration of branded products into visual con- tent, which is often employed as a stealth marketing strat- egy [15]. Advertising solutions utilizing VPP have signifi- cant appeal due to their high customizability, effectiveness across diverse customer bases, and quantifiable efficiency. *The author performed this work as an intern at Amazon Web Services (AWS). Accepted at 6th AI for Content Creation (AI4CC) workshop at CVPR 2024. (Preprint) (a) Background (b) Inpainting Figure 1. An illustration of the proposed VPP system with an Amazon Echo Dot device. The input background image is shown in (a), and the inpainted output image is shown in (b) where an Amazon Echo Dot device is placed on the kitchen countertop by automatic identification of optimal location. Previous research underscores the impact of product place- ment within realms such as virtual reality [22] and video games [5]. With the recent advancements in generative AI technologies, the potential for product placement has been further expanded through the utilization of diffusion models. Significant research has focused on the develop- ment of controlled inpainting via diffusion models, albeit largely without an explicit emphasis on advertising applica- tions [1, 8, 11]. However, these methods can be fine-tuned with a small set of 4 to 5 product sample images to generate high-quality advertising visual content. In this paper, we propose a novel, three-stage, fully automated system that carries out semantic inpainting of products by fine-tuning a pre-trained Stable Diffusion (SD) model [18]. In the first stage, a suitable location is identified for product placement using visual question answering and text-conditioned instant segmentation. The output of this stage is a binary mask highlighting the identified location. Subsequently, this masked region undergoes inpainting us- ing a fine-tuned SD model. This SD model is fine-tuned by 1 arXiv:2405.01130v1 [cs.CV] 2 May 2024 DreamBooth [19] approach utilizing a few sample images of the product along with a unique identifier text prompt. Finally, the quality of the inpainted image is evaluated by a proposed Alignment Module, a discriminative method that measures the image quality, or the alignment of the gen- erated image with human expectations. An illustration of the proposed VPP system is presented in Figure 1 with an Amazon Echo Dot device. Controlled inpainting of a specific product is a challeng- ing task. For example, the model may fail to inpaint the intended object at all. If a product is indeed introduced through inpainting, the product created may not be real- istic and may display distortions of shape, size, or color. Similarly, the background surrounding the inpainted prod- uct may be altered in such a way that it either meaning- fully obscures key background elements or even completely changes the background image. This becomes especially problematic when the background images contain human elements, as models can transform them into disturbing vi- suals. As a result, the proposed Alignment Module is de- signed to address these complications, with its primary fo- cus being on the appearance, quality, and size of the gener- ated product. To exert control over the size of the generated product, morphological transformations, specifically erosion, and di- lation, are employed. 
By adjusting the size of the mask through dilation or erosion, the size of the inpainted product can be effectively increased or decreased. This allows the system to generate a product of an appropriate size. In summary, the main contributions of this paper are twofold. The first pertains to the design of a fully au- tomated Virtual Product Placement (VPP) system capable of generating high-resolution, customer-quality visual con- tent. The second involves the development of a discrimi- native method that automatically eliminates subpar images, premised on the content, quality, and size of the product generated. The remainder of this paper is organized as follows. In section 2 we will delve into the related literature, with a spe- cific emphasis on semantic inpainting methods utilizing dif- fusion models, and section 3 will highlight the broad contri- butions of the paper. Next, the proposed end-to-end pipeline for automatic VPP will be discussed in section 4. This in- cludes a detailed examination of the three primary stages of the solution, along with the three sub-modules of the Alignment Module. Thereafter, we will elucidate the exper- imental design and evaluation methodologies adopted and report the corresponding results in section 5. Subsequently, deployment strategy and web application design will be ex- plained in section 6. Finally, the paper will conclude with an outline of the identified limitations of our proposed method- ology in section 7, complemented by a discussion on poten- tial avenues for future research.",
15
+ "main_content": "Recently, there has been significant progress in developing semantic or localized image editing using diffusion models largely without an explicit focus on digital marketing. Nevertheless, new generative AI approaches promise significant advances in VPP technology. For instance, in Blended Diffusion [1], the authors proposed a method of localized image editing using image masking and natural language. The area of interest is first masked and then modified using a text prompt. The authors employed a pre-trained CLIP model [17] along with pre-trained Denoising Diffusion Probabilistic Models (DDPM) [7] to generate natural images in the area of interest. Similar to Blended Diffusion, Couairon et. al. [3] proposed a method of semantic editing with a mask using a diffusion model. However, instead of taking the mask from the user, the mask is generated automatically. Nevertheless, a text query input from the user is utilized to generate the mask. The difference in noise estimates, as determined by the diffusion model based on the reference text and the query text, is calculated. This difference is then used to infer the mask. The image is noised iteratively during the forward process and in the reverse Denoising Diffusion Implicit Model (DDIM) [21] steps, the denoised image is interpolated with the same step output of the forward process using masking. Paint by Word proposed by Bau et. al. [2], is also similar, however instead of a diffusion model they utilized a Generative Adversarial Networks (GAN) [4] with a mask for semantic editing guided by text. On the other hand, Imagic [8] also performs text-based semantic editing on images using a diffusion model but without using any mask. Their approach consists of three steps. In the beginning, text embedding for a given image is optimized. Then the generative diffusion model is optimized for the given image with fixed-optimized text embedding. Finally, the target and optimized embedding are linearly interpolated to achieve input image and target text alignment. Likewise, a semantic editing method using a pre-trained text-conditioned diffusion model focusing on the mixing of two concepts is proposed by [12]. In this method, a given image is noised for several steps and then denoised with text condition. During the denoising process, the output of a denoising stage is also linearly interpolated with the output of a forward noise mixing stage. Hertz et. al. [6] took a different approach to semantic image editing where text and image embeddings are fused using cross-attention. The cross-attention maps are incorporated with the Imagen diffusion model [20]. However, instead of editing any given image, their approach edits a generated image using a text prompt which lacks any interest when VPP is concerned. Alternatively, Stochastic Differential Edit (SDEdit) [16] synthesizes images from stroke 2 paintings and can edit images based on stroke images. For image synthesis, coarse colored strokes are used and for editing, colored stroke on real images or image patches on target images is used as a guide. It adds Gaussian noise to an image guide of a specific standard deviation and then solves the corresponding Stochastic Differential Equations (SDE) to produce the synthetic or edited image. To generate images from a prompt in a controlled fashion and to gain more control over the generated image, Li et. al proposed grounded text-to-image generation (GLIGEN) [11]. 
It feeds the model the embedding of the guiding elements such as bounding boxes, key points, or semantic maps. Using the same guiding components, inpainting can be performed in a target image. DreamBooth [19] fine-tunes the pre-trained diffusion model to expand the dictionary of the model for a specific subject. Given a few examples of the subject, a diffusion model such as Imagen [20] is fine-tuned using random samples generated by the model itself and new subject images by optimizing a reconstruction loss. The new subject images are conditioned using a text prompt with a unique identifier. Fine-tuning a pre-trained diffusion model with a new subject is of great importance in the context of VPP. Therefore, in this paper DreamBooth approach is utilized to expand the model\u2019s dictionary by learning from a few sample images of the product. 3. Contributions In this paper, a method of automated virtual product placement and assessment in images using diffusion models is designed. Our broad contributions are as follows: 1. We introduce a novel fully automated VPP system that carries out automatic semantic inpainting of the product in the optimal location using language-guided segmentation and fine-tuned stable diffusion models. 2. We proposed a cascaded three-stage assessment module named \u2018Alignment Module\u2019 designed to sieve out lowquality images that ensure the presence of the intended product in every generated output image. 3. Morphological transformations are employed such as dilation and erosion to adjust the size of the mask, therefore, to increase or decrease the size of the inpainted product allowing generating a product of appropriate size. 4. Experiments are performed to validate the results by blind evaluation of the generated images with and without the Alignment module resulting in 35% improvement in average quality. 5. The inpainted product generated by the proposed system is not only qualitatively more realistic compared to the previous inpainting approach [23] but also shows a superior quantitative CLIP score. 4. Methodology Fine-tuned Model DreamBooth VILT Visual Question Answering \u201cdesk\u201d \u201cwhich object in the image has a flat surface area?\u201d CLIPSeg Semantic Segmentation Content Score Quality Score Volume Score Stage 2 Stage 1 Stage 3 Figure 2. The block diagram of the proposed solution for the VPP system where each of the three stages is distinguished by varied color blocks. In stage 1, a suitable placement for product inpainting is determined by creating a mask using CLIPSeg and VILT models. Next, in stage 2, semantic inpainting is performed in the masked area using the fine-tuned DreamBooth model. Finally, stage 3 contains the cascaded sub-modules of the Alignment Module to discard low-quality images. 4.1. Proposed Method For semantic inpainting, we utilized the DreamBooth algorithm [19] to fine-tune stable diffusion using five representative images of the product and a text prompt with a unique identifier. Even with a limited set of five sample images, the fine-tuned DreamBooth model was capable of generating images of the product integrated with its background. Nevertheless, when inpainting was conducted with this fine-tuned model, the resulting quality of the inpainted product was significantly compromised. To enhance the quality of the product in the inpainted image, we augmented the sample images through random scaling and random cropping, consequently generating a total of 1,000 product images used to fine-tune SD. 4.2. 
Product Localization Module The proposed VPP system operates in three stages. A core challenge in product placement lies in pinpointing a suitable location for the item within the background. In the first stage, this placement is indicated via the generation of a binary mask. To automate this masking process, we leveraged the capabilities of the Vision and Language Transformer (ViLT) Visual Question Answering (VQA) model [9] in conjunction with the Contrastive Language3 Image Pretraining (CLIP) [17]-based semantic segmentation method, named CLIPSeg [13]. Notably, each product tends to have a prototypical location for its placement. For example, an optimal location for an Amazon Echo Dot device is atop a flat surface, such as a desk or table. Thus, by posing a straightforward query to the VQA model, such as \u201dWhich object in the image has a flat surface area?\u201d, we can pinpoint an appropriate location for the product. Subsequently, the identified location\u2019s name is provided to the CLIPSeg model, along with the input image, resulting in the generation of a binary mask for the object. 4.3. Product Inpainting Module In the second stage, the input image and the generated binary mask are fed to the fine-tuned DreamBooth model to perform inpainting on the masked region. Product inpainting presents several challenges: the product might not manifest in the inpainted region; if it does, its quality could be compromised or distorted, and its size might be disproportionate to the surrounding context. To systematically detect these issues, we introduce the third stage: the Alignment Module. 4.4. Product Alignment Module The Alignment Module comprises three sub-modules: Content, Quality, and Volume. The Content sub-module serves as a binary classifier, determining the presence of the product in the generated image. If the product\u2019s probability of existence surpasses a predefined threshold, then the Quality score is calculated for that image. This score evaluates the quality of the inpainted product in relation to the sample images originally used to train the SD model. Finally, if the image\u2019s quality score exceeds the set quality threshold, the Volume sub-module assesses the product\u2019s size in proportion to the background image. The generated image will be successfully accepted and presented to the user only if all three scores within the Product Quality Alignment Module meet their respective thresholds. Within the Content module, an image captioning model [14] is employed to generate a caption, which is then refined by incorporating the product\u2019s name. The super-class name of the product can also be utilized. Both the captions and the inpainted image are fed into the CLIP model to derive a CLIP score. If the modified caption scores above 70%, it\u2019s inferred that the product exists in the inpainted image. The Quality module contrasts the mean CLIP image features of the sample images with the CLIP image feature of the generated image. The greater the resemblance of the inpainted product to the sample images, the higher the quality score. A threshold of 70% has been established. The Volume module finally gauges the size of the inpainted product. The generated image is processed through the CLIP model, accompanied by three distinct textual size prompts. 
Given that \u201ca small dog sitting on a desk next to a computer\u201d \u201ca small dog sitting on a desk next to a computer with an echo dot\u201d \u201cInput Image\u201d \u201cGenerated Image\u201d Caption Generator CLIP Score Fine-tuned Caption Product Exist (a) Content Sub-module \u201cSample Images\u201d \u201cGenerated Image\u201d Mean CLIP Image Feature CLIP Image Feature Cosine Similarity Quality Score (b) Quality Sub-module \u201cGenerated Image\u201d CLIP Score \u201ctoo large {product}\u201d \u201cregular size {product}\u201d \u201ctoo small {product}\u201d Product Size (c) Volume Sub-module Figure 3. Block diagram of each of the components of the Alignment Module. The Content sub-module is built using a pre-trained caption generator and CLIP models shown in (a). The generated caption is fine-tuned by adding the name of the intended product to the caption. For the Quality sub-module, the image features of the same CLIP model are utilized shown in (b). Finally, in the Volume sub-module, the same CLIP model with three different size text prompts is used shown in (c). size perception can be subjective and varies based on camera proximity, a milder threshold of 34% (slightly above a random guess) has been selected. The comprehensive block diagram of the proposed VPP system is illustrated in Figure 2, with the three stages distinguished by varied color blocks. The block diagrams for each sub-module can be found in Figure 3. 4 The Volume sub-module provides insights regarding the size of the inpainted product. To modify the product\u2019s size, the mask\u2019s dimensions must be adjusted. For this task, morphological transformations, including mask erosion and dilation, can be employed on the binary mask. These transformations can either reduce or augment the mask area, allowing the inpainting module to produce a product image of the desired size. The relationship between alterations in the mask area and the size of the inpainted product across various erosion iterations is depicted in Figure 4. Approximately, 25 iterations of erosion consume around 3 milliseconds, making it highly cost-effective. 0 10 20 25 Figure 4. Application of erosion to the mask where a kernel of size (5 \u00d7 5) is used for 0, 10, 20, and 25 iterations shown in the figure consecutively. The resulting output is presented at the bottom of the corresponding mask to show the size reduction of the generated product in the output image. 5. Experimental Results Experiments were conducted to evaluate the performance of the proposed VPP system. For these experiments, five sample images of an \u201cAmazon Echo Dot\u201d were chosen. 1, 000 augmented images of each product created from these five sample images were used to fine-tune the DreamBooth model using the text prompt \u201dA photorealistic image of a sks Amazon Alexa device.\u201d The model was fine-tuned for 1, 600 steps, employing a learning rate of 5 \u00d7 10\u22126, and a batch size of 1. The fine-tuned model can inpaint products into the masked region. However, issues such as lack of product appearance, poor resolution, and disproportionate shape persist. The goal of the proposed Alignment Module is to automatically detect these issues. If identified, the problematic images are discarded, and a new image is generated from different random noise. Only if a generated image meets all the module\u2019s criteria it is presented to the user. Otherwise, a new image generation process is initiated. This loop continues for a maximum of 10 iterations. 5.1. 
Assessing Alignment Module To assess the effectiveness of the Alignment Module, images were generated both with and without it. For each submodule, as well as for the overall Alignment Module, 200 images were generated: 100 with the filter activated and 100 without (referred to as the \u201dNaive\u201d case). To prevent bias, all images were given random names and were consolidated into a single folder. These images were also independently evaluated by a human, whose scores served as the ground truth. This ground truth information was saved in a separate file for the final evaluation, which followed a blindfolded scoring method. All the experiments were also repeated for another product named \u201cLupure Vitamin C\u201d. 5.2. Evaluation Metrics The evaluation and scoring method of each of the submodules of the Alignment module is described in the consecutive segments. \u2022 Content Score For the image content score, images are categorized into two classes: \u2018success\u2019 if the product appears, and \u2018failure\u2019 otherwise. When the content module is utilized, the Failure Rate (FR), defined as the ratio of Failure to Success, is below 10% for both of the products. \u2022 Quality Score For the quality score, images are rated on a scale from 0 to 10: 0 indicates the absence of a product, and 10 signifies a perfect-looking product. To evaluate in conjunction with the CLIP score, both the Mean Assigned Quality Score (MAQS) and Mean Quality Score (MQS) are calculated. MAQS represents the average score of images labeled between 0 and 10, while MQS is the output from the quality module, essentially reflecting cosine similarity. \u2022 Volume Score For the volume module, images are also rated on a scale from 0 to 10: 0 for a highly unrealistic size, and 10 for a perfect size representation. When evaluating the volume module, the content module is not utilized. Since the size score necessitates the presence of a product, images without any product are excluded from this evaluation. To gauge performance, the Mean Assigned Size Score (MASS) is calculated in addition to the CLIP score. 5.2.1 Overall Results The results of individual evaluations are presented in Table 1. It can be observed from this table that using any of the sub-modules consistently produced better outcomes compared to when no filtering was applied across various metrics. The results of the comprehensive evaluation, encompassing all sub-modules, can be found in Table 2. 5 Table 1. Individual evaluation of content, quality, and volume sub-modules within the overall Alignment Module. \u201cNaive\u201d represents the outputs without any filtering sub-modules. Content classifies the presence of the product in the generated images. Quality measures the proximity of the generated product to the sample product images used to fine-tune the diffusion model. Finally, Volume identifies the size category of the product. Naive Content Naive Quality Naive Volume Amazon Echo Dot Success 72 94 CLIP 32.49 \u00b1 3.69 33.80 \u00b1 2.69 CLIP 32.58 \u00b1 3.70 33.42 \u00b1 2.69 Failure 28 6 MAQS 4.41 \u00b1 3.23 6.41 \u00b1 1.90 MASS 3.01 \u00b1 2.68 4.81 \u00b1 2.31 FR 38.89% 6.38% MQS 0.75 \u00b1 0.14 0.83 \u00b1 0.06 Lupure Vitamin C Success 87 100 CLIP 24.61 \u00b1 2.4 25.23 \u00b1 2.66 CLIP 24.22 \u00b1 3.01 24.51 \u00b1 2.89 Failure 13 0 MAQS 5.65 \u00b1 2.85 6.47 \u00b1 1.09 MASS 5.64 \u00b1 3.05 7.14 \u00b1 1.53 FR 14.94% 0.0% MQS 0.81 \u00b1 0.13 0.86 \u00b1 0.04 Table 2. 
Comparison of the proposed method with and without using the Alignment Module in addition to the Paint-By-Example (PBE) [23] inpainting model. The \u201cNaive\u201d performance represents the generated output without applying the Alignment Module. The \u201cAlignment\u201d column represents the generated outputs where three cascaded filtering sub-modules are used, i.e., the Alignment Module. Amazon Echo Dot Lupure Vitamin C PBE Naive Alignment PBE Naive Alignment CLIP 31.44 \u00b1 3.43 32.85 \u00b1 3.19 33.85 \u00b1 2.54 27.01 \u00b1 2.10 24.71 \u00b1 2.64 24.89 \u00b1 2.90 MAQS 1.13 \u00b1 1.30 4.65 \u00b1 3.60 6.31 \u00b1 2.39 1.75 \u00b1 1.51 6.60 \u00b1 3.01 7.81 \u00b1 1.13 MASS 1.22 \u00b1 1.60 3.05 \u00b1 2.98 4.70 \u00b1 2.81 2.43 \u00b1 2.07 6.25 \u00b1 3.08 7.30 \u00b1 1.59 MQS 0.64 \u00b1 0.08 0.75 \u00b1 0.14 0.82 \u00b1 0.05 0.67 \u00b1 0.06 0.82 \u00b1 0.12 0.86 \u00b1 0.05 FR 78.57% 29.87% 0.00% 38.89% 17.64% 0.00% (a) (b) (c) (d) Figure 5. Inpainted product image of Paint-by-Example (PBE). PBE generates high-quality images which explains the higher CLIP score in the case of Lupure Vitamin C. However, the inpainted product does not look similar to the desired product at all resulting in very poor mean assigned quality and size scores. Output images for Amazon Echo Dot is shown in (a) and (b), and for Lupure Vitamin C is shown in (c) and (d). Figure 6. Empirical performance of Alignment Module for Amazon Echo Dot. Noticeably, no output is generated without any product when the Alignment Module is employed. Moreover, the mean quality score has increased from 4.65 to 6.31. 5.3. Comparison with Paint-By-Example The proposed method is compared with the Paint-ByExample (PBE) [23] inpainting model and Table 2 shows the performance comparison of the proposed method along with PBE. PBE can generate very high-quality images, however, the inpainted product in the generated image does not look alike the desired product at all as shown in Figure 5 resulting in very poor MAQS and MASS. Whereas the inpainted product of our proposed method resembles much of the original product shown in Figure Figure 7. 6 5.4. Frequency Distribution The frequency distribution and density function of the assigned quality scores in the case of \u201cNaive\u201d and \u201cAlignment\u201d for Amazon Echo Dot is presented in Figure 6. The density mean has shifted from 4.65 to 6.31 when Alignment Module is adopted indicating the effectiveness of the proposed module. 6. Path to Production 6.1. Product API The location identifier, fine-tuned model, and Alignment Module are combined to develop an easy-to-use VPP Streamlit web app 1. This app is hosted on Amazon Sagemaker using an \u201cml.p3.2xlarge\u201d instance, which is a single V100 GPU with 16GB of GPU memory. The demo app\u2019s interface is illustrated in Figure 8. In the top-left \u2018Image\u2019 section, users can either upload their own background image or choose from a selection of sample background images to generate an inpainted product image. The web app provides extensive flexibility for tuning the parameters of the Alignment Module so that users can comprehend the effects of these parameters. In the \u2018seed\u2019 text box, a value can be input to control the system output. The segmentation threshold for CLIPSeg defaults to 0.7, but users can refine this value using a slider. Within the \u2018Mask Params\u2019 section, the number of dilation and erosion iterations can be set and visualized in real-time. 
The filter, represented by the Alignment Module, can be toggled on or off. The \u2018Max Attempt\u2019 slider determines the number of regeneration attempts if the model doesn\u2019t produce a satisfactory output. However, if a seed value is specified, the model will generate the output only once, regardless of the set value. Lastly, in the \u2018Filter Params\u2019 section, users can fine-tune the threshold values for each sub-module of the Alignment Module, specifically for content, quality, and volume. The \u201cshow stats\u201d button beneath the input image displays the mask alongside details of the model outputs. These details include the seed value, placement, generated and modified captions, and the content, quality, and volume/size scores. By visualizing the mask and its area, users can apply erosion or dilation to adjust the product\u2019s size. The default threshold values for content, quality, and volume are 0.7, 0.7, and 0.34, respectively. While these values can be adjusted slightly higher, it\u2019s recommended to also set the \u2019Max Attempt\u2019 to 10 in such cases. A higher threshold means that the generated output is more likely to fail the criteria set by the Alignment Module. 1STREAMLIT: https://streamlit.io/ 6.2. Future Considerations for Product Scalability Fine-tuning stable diffusion using DreamBooth can take up to 30 minutes, depending on dataset size, image resolution, and extent of training. When considering a customer with hundreds or thousands of products, this process could take days to complete model training across different products. Our pipeline is deployed on Amazon SageMaker, a managed service that supports the automatic scaling of deployed endpoints. This service can dynamically accommodate large computational needs by provisioning additional instances as required. As such, fine-tuning 100 SD models for 100 different products would still only take about 30 minutes if 100 instances were utilized in parallel. The fine-tuned models are stored in an Amazon S3 (Simple Storage Service) bucket, with each model being 2.2 GB in size. Consequently, 100 fine-tuned models would occupy approximately 220 GB of storage space. A pertinent question arises: Can we strike a space-time trade-off by training a single model with a unique identifier for each product? If this is feasible, the space requirement would be reduced to a consistent 2.2 GB. However, that one model would need more extensive training specifically training steps would increase by a factor of 100 for 100 products, thereby lengthening the computation time. This approach remains untested and warrants future exploration [10]. 7. Conclusion In this paper, we present a novel, fully automated, endto-end pipeline for Virtual Product Placement. The proposed method automatically determines a suitable location for product placement into a background image, performs product inpainting, and finally evaluates image quality to ensure only high-quality images are presented for the downstream task. Using two different example products, experiments were conducted to evaluate the effectiveness of the proposed pipeline, the performance of the individual sub-modules, and the overarching Alignment Module. Notably, when upon employing the Alignment Module, the Failure Ratio (FR) plummeted down to 0.0% for both investigated products. Additionally, images produced with the Alignment Module achieved superior CLIP, quality, and size scores. 
Qualitatively, the produced images present a clean and natural semantic inpainting of the product within the background image. The accompanying web application facilitates pipeline deployment by enabling image generation through a user-friendly interface with extensive image finetuning capabilities. The high-quality integration of products into images underscores the potential of the proposed VPP in the realms of digital marketing and advertising. 7 Amazon Echo Dot Background and Inpainted Images Lupure Vitamin C Background and Inpainted Images Figure 7. Qualitative results of the proposed VPP system. Experiments are performed using two different products, Amazon Echo Dot as shown on top, and Lupure Vitamin C as shown on bottom. The original training images are shown on the left, and then the pairs of background and inpainted output images are presented side by side. Figure 8. The interface of the VPP web app demo was built using Streamlit hosted in Amazon SageMaker. The uploaded background image is shown under the title \u201cInput Image\u201d and the inpainting image with an Amazon Echo Dot is shown under the title \u201cOutput Image\u201d. Moreover, the generated mask produced by the location identifier and the other intermediate details of the proposed VPP system is also presented in the interface. 8"
16
+ }
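The size control and the cascaded Alignment Module described in the VPP file above lend themselves to a short sketch. This is an illustrative reading, not the authors' code: the function names are invented here, the CLIP feature extraction behind the content, quality, and volume scores is omitted, and only the quoted default thresholds (0.7, 0.7, 0.34) and the 5x5 morphological kernel are taken from the text.

```python
import cv2
import numpy as np

def resize_mask(mask: np.ndarray, iterations: int, grow: bool = False) -> np.ndarray:
    # Morphological size control: erode the binary mask to shrink the inpainted
    # product, dilate it to enlarge it (5x5 kernel, as in the quoted description).
    kernel = np.ones((5, 5), np.uint8)
    return (cv2.dilate if grow else cv2.erode)(mask, kernel, iterations=iterations)

def quality_score(gen_feat: np.ndarray, sample_feats: np.ndarray) -> float:
    # Quality sub-module: cosine similarity between the generated image's CLIP
    # feature and the mean CLIP feature of the product sample images.
    ref = sample_feats.mean(axis=0)
    denom = np.linalg.norm(gen_feat) * np.linalg.norm(ref) + 1e-9
    return float(np.dot(gen_feat, ref) / denom)

def passes_alignment(content: float, quality: float, volume: float) -> bool:
    # Cascaded accept/reject rule with the default thresholds quoted in the text;
    # an image that fails any stage is discarded and regeneration is triggered.
    return content >= 0.7 and quality >= 0.7 and volume >= 0.34
```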
intro_8K/test_introduction_short_2405.01143v1.json ADDED
@@ -0,0 +1,16 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01143v1",
3
+ "title": "Are We Really Achieving Better Beyond-Accuracy Performance in Next Basket Recommendation?",
4
+ "abstract": "Next basket recommendation (NBR) is a special type of sequential\nrecommendation that is increasingly receiving attention. So far, most NBR\nstudies have focused on optimizing the accuracy of the recommendation, whereas\noptimizing for beyond-accuracy metrics, e.g., item fairness and diversity\nremains largely unexplored. Recent studies into NBR have found a substantial\nperformance difference between recommending repeat items and explore items.\nRepeat items contribute most of the users' perceived accuracy compared with\nexplore items. Informed by these findings, we identify a potential \"short-cut\"\nto optimize for beyond-accuracy metrics while maintaining high accuracy. To\nleverage and verify the existence of such short-cuts, we propose a\nplug-and-play two-step repetition-exploration (TREx) framework that treats\nrepeat items and explores items separately, where we design a simple yet highly\neffective repetition module to ensure high accuracy, while two exploration\nmodules target optimizing only beyond-accuracy metrics. Experiments are\nperformed on two widely-used datasets w.r.t. a range of beyond-accuracy\nmetrics, viz. five fairness metrics and three diversity metrics. Our\nexperimental results verify the effectiveness of TREx. Prima facie, this\nappears to be good news: we can achieve high accuracy and improved\nbeyond-accuracy metrics at the same time. However, we argue that the real-world\nvalue of our algorithmic solution, TREx, is likely to be limited and reflect on\nthe reasonableness of the evaluation setup. We end up challenging existing\nevaluation paradigms, particularly in the context of beyond-accuracy metrics,\nand provide insights for researchers to navigate potential pitfalls and\ndetermine reasonable metrics to consider when optimizing for accuracy and\nbeyond-accuracy metrics.",
5
+ "authors": "Ming Li, Yuanna Liu, Sami Jullien, Mozhdeh Ariannezhad, Mohammad Aliannejadi, Andrew Yates, Maarten de Rijke",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "cs.IR",
9
+ "cats": [
10
+ "cs.IR"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "LLM Fairness",
14
+ "gt": "Recommender systems have become an essential instrument for connecting people to the content, services, and products they need. In e-commerce, more and more consumers purchase food and house- hold products online instead of visiting physical retail stores [18]. The COVID-19 pandemic has only accelerated this shift [33]. In this scenario, consumers usually purchase a set of items at the same time, a so-called basket. Next basket recommendation (NBR) is a type of sequential recommendation that caters to this scenario: baskets are the target of recommendation and historical sequential data consists of users\u2019 interactions with baskets. NBR has increasingly been at- tracting attention in recent years [2]. Many methods, based on differ- ent machine learning techniques, have been proposed for accurate recommendations, e.g., Markov chain (MC)-based methods [37, 41], frequency and nearest neighbor-based methods [12, 15], RNN-based methods [14, 19, 34, 49], and self-attention methods [9, 39, 50]. Repetition vs. exploration in NBR. Recently, Li et al. [24] have assessed the performance of state-of-the-art NBR in terms of repeat and explore items: items that a user has interacted with before and items that they have never interacted with before, respectively. The authors distinguish between the task of repetition recommen- dation (recommending repeat items) and the task of exploration arXiv:2405.01143v1 [cs.IR] 2 May 2024 SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Li et al. Table 1: Comparison of the repetition and exploration tasks in NBR. Aspect Repetition Exploration Task difficulty Easy Difficult Number of items Dozens Thousands Item interactions Previous None Users\u2019 interest With feedback Without feedback Task type Re-consume Infer new recommendation (recommending explore items). Repetition and ex- ploration recommendations have different levels of difficulty, where recommending items that are regularly present in a user\u2019s baskets is shown to be a far easier task [24]. Building on these findings, repetition-only [1, 17] and exploration-only [21] methods have been proposed to optimize the accuracy of next basket recommendation. Accuracy and beyond-accuracy metrics. Even though accuracy naturally serves as the most important objective of recommenda- tions, it is widely recognized that it should not be the sole focus. Beyond-accuracy metrics such as item fairness [11, 13, 46, 47] and diversity [8, 52, 53] also play crucial roles in evaluating recom- mendation services. Such beyond-accuracy metrics have gained increasing attention and have been optimized in a range of rec- ommendation scenarios [48, 53]. In the NBR scenario, however, beyond-accuracy metrics have been far less studied than accuracy- based metrics. In this paper, we help to address this knowledge gap. Following the paradigm of multiple-objective recommender systems [16], it is widely recognized that there is a trade-off be- tween accuracy and beyond-accuracy metrics. E.g., diversity goals are reckoned to stand in contrast with accuracy. Put differently, a method achieving a better beyond-accuracy performance while maintaining the same level of accuracy performance is considered to be a success [48, 53]. And how can we achieve a reasonable balance between accuracy and beyond-accuracy metrics in NBR? Potential \u201cshort-cuts\u201d to balancing accuracy and beyond- accuracy metrics. Besides the imbalance between repetition and exploration [22\u201325], Li et al. 
also found that repeat items contribute most of the accuracy, whereas the explore items in the recom- mended basket contribute very little to the user\u2019s perceived utility. As Table 1 summarizes, there are essential differences between the repetition and exploration tasks, which explain the substantial performance differences between the two tasks. Inspired by these findings, we hypothesize that there may be a \u201cshort-cut\u201d strategy to optimize for both accuracy and beyond- accuracy metrics, which contains two aspects: (i) accuracy: Predict repeat items to achieve good accuracy: predicting repeat items is much easier than predicting explore items [24], and (ii) beyond-ac- curacy: Use explore items to improve beyond-accuracy metrics: it is very difficult to recommend quality explore items. Thus, exchange the low accuracy that is typically achieved on such items for be- yond-accuracy metrics, i.e., trade accuracy for diversity and item fairness. We call this NBR strategy a short-cut strategy because it avoids making the fundamental trade-off between accuracy and beyond-accuracy metrics. TREx framework. To operationalize our short-cut idea, and check whether the \u201cshort-cut\u201d strategy can be made to work, we pro- pose the two-step repetition-exploration (TREx) framework. TREx decouples the prediction of repeat items and explore items. Specif- ically, TREx uses separate models for predicting (a) repeat items, and (b) explore items, and then combines the outcomes of the two prediction models to generate the next basket. In contrast, existing NBR methods usually output the scores/probabilities of all items and then select the top-\ud835\udc58items to fill up a basket to be recommended, ignoring the differences between repeat and explore items. For TREx\u2019s repeat item prediction, we propose a simple yet ef- fective probability-based method, which considers the item char- acteristics and users\u2019 repurchase frequency. For exploration rec- ommendations, we design two strategies that cater to the different beyond-accuracy metrics. The flexibility of TREx allows us to design suitable models for repetition and exploration, with the possibility of controlling the proportions of repetition and exploration to inves- tigate the relations between accuracy and various beyond-accuracy metrics. Findings and reflections. We consider two types of widely-used beyond-accuracy metrics, i.e., diversity and item fairness. Specif- ically, we investigate five fairness metrics (i.e., logEUR, logRUR, EEL, EED, and logDP) [28, 36] and three diversity metrics (i.e., ILD, Entropy, and DS) [48]. To provide an overall understanding of these metrics, we group them according to different levels of connection with accuracy as follows: (i) Strong connection: logRUR, (ii) Weak connection: logEUR, EEL, EED (iii) No connection: logDP, ILD, Entropy, DS. Briefly, the strong connection between logRUR and accuracy stems from the fact that logRUR uses ground truth relevance to discount the exposure, making sure that only correctly predicted items contribute to effective exposure. The connection be- tween logEUR, EEL, and accuracy is weak because they just ensure the exposure distribution across groups of recommended results is close to the group exposure distribution of ground truth, without considering whether the exposure is contributed by correctly pre- dicted items. Since the position weighting model of EED considers ground truth, EED shows a weak connection. 
There is no connec- tion between accuracy and logDP, ILD, Entropy, and DS because their exposure distributions across groups are designed to reflect a specific distribution. The strength of the connection between a beyond-accuracy metric and accuracy determines whether there is a short-cut towards optimizing both accuracy and the beyond- accuracy metric. We perform experiments on two brick-and-mortar retailers\u2019 NBR datasets, considering six NBR baselines and eight metrics. The experimental results show that: (1) State-of-the-art accuracy can be achieved by only recommending repeat items via the proposed simple yet effective repetition model. (2) Leveraging the \u201cshort-cut\u201d using TREx achieves \u201cbetter\u201d beyond accuracy performance w.r.t. seven out of eight beyond-accuracy metrics. (3) In terms of the item fairness metric having a strong connection with the accuracy (i.e., logRUR), it is more difficult to achieve better beyond-accuracy metrics via the proposed strategy. Stepping back. Instead of blindly claiming TREx with the designed modules as a state-of-the-art method for optimizing both accuracy and various beyond-accuracy metrics, we reflect and challenge our TREx SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA evaluation paradigm in the definition of success in this setting. The core question is: Are we really achieving better beyond-accuracy perfor- mance in next basket recommendation? Two perspectives offer different ways forward for researchers and practitioners to address this question: (1) If we are willing to sacrifice the accuracy of the exploration, then superior beyond-accuracy performance can be achieved by leveraging the \u201cshort-cut\u201d strategy via TREx, which is straight- forward and efficient. This \u201cshort-cut\u201d strategy must be con- sidered before developing more sophisticated and elaborate approaches. (2) Conversely, if we believe it is unreasonable to sacrifice the accu- racy of exploration [45], the existence of the \u201cshort-cut\u201d strategy reveals flaws in our current evaluation paradigm to demon- strate an NBR method\u2019s superiority. A fine-grained analysis (i.e., distinguishing between repetition and exploration) needs to be performed to check whether \u201cbetter\u201d beyond-accuracy is achieved by triggering the \u201cshort-cut\u201d strategy, which would hurt the exploration accuracy after all. Our contributions. The main contributions of the paper are: \u2022 We identify a \u201cshort-cut\u201d strategy (i.e., sacrificing accuracy for exploration and using explore items to optimize for beyond- accuracy metrics), which could achieve \u201cbetter\u201d beyond-accuracy metrics without degrading accuracy. \u2022 We propose a simple repetition recommendation model consid- ering item features and users\u2019 repurchase frequency, which can achieve the state-of-the-art NBR accuracy by only recommending repeat items. \u2022 We propose TREx, a flexible two-step repetition-exploration frame- work for NBR, which allows us to control the trade-off between accuracy and beyond-accuracy metrics w.r.t. the recommended baskets. \u2022 We conduct experiments on two datasets w.r.t. eight beyond- accuracy metrics, and find that leveraging \u201cshort-cuts\u201d via TREx can achieve better performance on a wide range of metrics. We also find that the stronger the connection with accuracy, the more challenging it becomes to utilize a \u201cshort-cut\u201d strategy to enhance a beyond-accuracy metric. 
\u2022 We reflect on, and challenge, existing evaluation paradigms, and find that a fine-grained level analysis can provide a complemen- tary view of a method\u2019s performance.",
15
+ "main_content": "We summarize related research on next basket recommendation and beyond-accuracy metrics. Next basket recommendation. The NBR problem has been studied for many years. Factorizing personalized Markov chains (FPMC) [37] leverages matrix factorization and Markov chains to model users\u2019 general interest and basket transition relations. HRM [41] applies aggregation operations to learn a hierarchical representation of baskets. RNNs have been adapted to the NBR task to learn longterm trends by modeling the whole basket sequence. E.g., Dream [49] uses max/avg pooling to encode baskets. Sets2Sets [14] adapts an attention mechanism and adds frequency information to improve performance. Some methods [19, 44] consider the underlying item relations to get a better representation. Yu et al. [50] argue that item-item relations between baskets are important, and leverage GNNs to use these relations. Some methods [3, 20, 39, 42] exploit auxiliary information, including product categories, amounts, prices, and explicit timestamps. TIFUKNN [15] and UP-CF@r [12], frequency-neighbor-based methods, model temporal patterns, and then combine these with neighbor information or user-wise collaborative filtering. Li et al. [24] provide several metrics to evaluate repetition and exploration performance in the NBR task and find that the repetition task is easier than the exploration task. Inspired by this analysis, repetition-only [1, 17] and exploration-only [21] models were proposed for next basket recommendation. Existing NBR work mainly focuses on optimizing accuracy whereas this paper extends to various beyond-accuracy metrics for NBR. Beyond-accuracy metrics. In addition to accuracy, there are various beyond-accuracy metrics (i.e., diversity, fairness, novelty, serendipity, coverage) we need to consider when making recommendations [11]. Diversity is a crucial factor in meeting the diverse demands of users [7, 35, 43, 52]. Recently, empirical and revisitation studies [29, 48] have been conducted to explore the trade-off between accuracy and diversity. The concepts of fairness and item exposure have emerged as crucial considerations since items and producers play pivotal roles within a recommender system and its ecosystem. Related metrics measure whether items receive a fair share of exposure according to different definitions of fairness. Current research on fairness primarily focuses on individual or group fairness, either from the customer\u2019s perspective, adopting a user-centered approach [5], or from the provider\u2019s viewpoint, adopting an item-centered approach [30, 51], or a two-sided approach [31, 46, 47]. Recently, Liu et al. [28] evaluated item fairness on existing NBR methods to investigate the robustness of different fairness metrics. Unlike the work listed above, this paper is not limited to optimizing a specific type of metric. It examines the possibility of leveraging a \u201cshort-cut\u201d strategy to seemingly optimize various beyond-accuracy metrics and provides insights w.r.t. evaluation paradigms when extending NBR optimization and evaluation to these beyond-accuracy metrics. 3 TASK FORMULATION AND DEFINITIONS We describe the next basket recommendation problem and formalize the notions of repetition and exploration. Our notation is summarized in Table 2. Next basket recommendation. Given a set of users \ud835\udc48= {\ud835\udc621, \ud835\udc622, ..., \ud835\udc62\ud835\udc5b} and items \ud835\udc3c= {\ud835\udc561,\ud835\udc562, . . . 
,\ud835\udc56\ud835\udc5a}, \ud835\udc46\ud835\udc62= {\ud835\udc351 \ud835\udc62, \ud835\udc352 \ud835\udc62, . . . , \ud835\udc35\ud835\udc61 \ud835\udc62} represents the historical interaction sequence for \ud835\udc62, where \ud835\udc35\ud835\udc61 \ud835\udc62is the user\u2019s basket at the time step \ud835\udc61. \ud835\udc35\ud835\udc61 \ud835\udc62consists of a set of items \ud835\udc56\u2208\ud835\udc3c, and the goal of the next basket recommendation task is to predict \ud835\udc43\ud835\udc62= \ud835\udc35\ud835\udc61+1 \ud835\udc62 , the following basket of items that the user would probably like, based on the user\u2019s past interactions \ud835\udc46\ud835\udc62, i.e., \ud835\udc43\ud835\udc62= \u02c6 \ud835\udc35\ud835\udc61+1 \ud835\udc62 = \ud835\udc53(\ud835\udc46\ud835\udc62), (1) generation algorithm. We assume that the () where \ud835\udc53is our basket generation algorithm. We assume that the user\u2019s attention and screen space is limited; hence, like previous studies [24, 28], we recommend fixed-size baskets of sizes 10 or 20. SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Li et al. Table 2: Notation used in the paper; fairness related notation is adapted from [28, 36]. Symbol Description \ud835\udc62\u2208\ud835\udc48 Users \ud835\udc56\u2208\ud835\udc3c Items \ud835\udc46\ud835\udc62 Sequence of historical baskets for \ud835\udc62 \ud835\udc35\ud835\udc61 \ud835\udc62 \ud835\udc61-th basket in \ud835\udc46\ud835\udc62, a set of items \ud835\udc56\u2208\ud835\udc3c \ud835\udc3c\ud835\udc5f\ud835\udc52\ud835\udc5d \ud835\udc62,\ud835\udc61 Set of repeat items for \ud835\udc62up to timestamp \ud835\udc61 \ud835\udc3c\ud835\udc52\ud835\udc65\ud835\udc5d\ud835\udc59 \ud835\udc62,\ud835\udc61 Set of explore items for \ud835\udc62up to timestamp \ud835\udc61 \ud835\udc47\ud835\udc62 Ground-truth basket for \ud835\udc62that we aim to predict \ud835\udc47rep \ud835\udc62 Set of repeat items in the ground truth basket \ud835\udc47\ud835\udc62for \ud835\udc62 \ud835\udc47expl \ud835\udc62 Set of explore items in the ground truth basket \ud835\udc47\ud835\udc62for \ud835\udc62 \ud835\udc43\ud835\udc62 Predicted basket for \ud835\udc62 \ud835\udc43rep \ud835\udc62 Set of repeat items in the predicted basket \ud835\udc43\ud835\udc62for \ud835\udc62 \ud835\udc43expl \ud835\udc62 Set of explore items in the predicted basket \ud835\udc43\ud835\udc62for \ud835\udc62 \ud835\udc3a(\ud835\udc43) Group alignment matrix for items in \ud835\udc43 \ud835\udc3a+ Popular group \ud835\udc3a\u2212 Unpopular group a\ud835\udc43 Exposure vector for items in \ud835\udc43 \ud835\udf16\ud835\udc43 The exposure of groups in \ud835\udc43(\ud835\udc3a(\ud835\udc43)\ud835\udc47a\ud835\udc43) Repetition and exploration. We assume that the set of items is fixed. Although this might not be the case in real-world settings, modeling the addition and deletion of items in the set of items is out of the scope of this paper. With this assumption in mind, the addition of every new basket to the users\u2019 history, may translate into fewer items left to explore. To differentiate between the items coming from the exploration and repeat consumption behavior, for a user \ud835\udc62and timestamp \ud835\udc61, a set of items \ud835\udc3crep \ud835\udc62,\ud835\udc61\u2282\ud835\udc3care considered to be the \u201crepeat items.\u201d The set of explore items \ud835\udc3cexpl \ud835\udc62,\ud835\udc61 is simply its complement within the overall item set \ud835\udc3c. 
We define \ud835\udc3crep \ud835\udc62,\ud835\udc61as: \ud835\udc3crep \ud835\udc62,\ud835\udc61= \ud835\udc3c\ud835\udc5f\ud835\udc52\ud835\udc5d \ud835\udc62,\ud835\udc61\u22121 \u222a\ud835\udc35\ud835\udc61 \ud835\udc62. (2) This also means that \ud835\udc3crep \ud835\udc62,1 \u2282\u00b7 \u00b7 \u00b7 \u2282\ud835\udc3crep \ud835\udc62,\ud835\udc61\u22121 \u2282\ud835\udc3crep \ud835\udc62,\ud835\udc61. Conversely, we have \ud835\udc3cexpl \ud835\udc62,\ud835\udc61 \u2282\ud835\udc3cexpl \ud835\udc62,\ud835\udc61\u22121 \u2282\u00b7 \u00b7 \u00b7 \u2282\ud835\udc3cexpl \ud835\udc62,1 . The task of predicting the next basket for a user \ud835\udc62is equivalent to predicting which items from \ud835\udc3c\ud835\udc5f\ud835\udc52\ud835\udc5d \ud835\udc62,\ud835\udc61and \ud835\udc3c\ud835\udc52\ud835\udc65\ud835\udc5d\ud835\udc59 \ud835\udc62,\ud835\udc61 will appear in \ud835\udc35\ud835\udc61+1 \ud835\udc62 . One way to solve this problem is to decouple it into two subtasks: the repetition subtask that aims to predict which items from \ud835\udc3c\ud835\udc5f\ud835\udc52\ud835\udc5d \ud835\udc62,\ud835\udc61to recommend, and the exploration task that recommends items from \ud835\udc3c\ud835\udc52\ud835\udc65\ud835\udc5d\ud835\udc59 \ud835\udc62,\ud835\udc61 . Table 1 shows the different characteristics w.r.t. the repetition and exploration tasks. 4 EVALUATION METRICS Next, we describe the accuracy and beyond-accuracy metrics (i.e., fairness and diversity) considered in the paper.1 Accuracy. In terms of accuracy, we use three metrics that are widely used for the NBR task: \ud835\udc45\ud835\udc52\ud835\udc50\ud835\udc4e\ud835\udc59\ud835\udc59@\ud835\udc58, \ud835\udc41\ud835\udc37\ud835\udc36\ud835\udc3a@\ud835\udc58, and \ud835\udc43\ud835\udc3b\ud835\udc45@\ud835\udc58. \ud835\udc45\ud835\udc52\ud835\udc50\ud835\udc4e\ud835\udc59\ud835\udc59 measures the ability to find all items that the user will purchase in the next basket; NDCG is a ranking metric that also considers the order of the items; PHR is a user level measurement which 1Due to space limitations, we only provide brief introductions of each metric; more detailed information (e.g., function, responsibility, etc.) can be found in the original papers and relevant survey papers [28, 36, 53]. represents the ratio of users whose recommended basket contains the item in the ground-truth. Fairness. Assume \ud835\udf0b(\ud835\udc43| \ud835\udc62) is a user-dependent distribution and \ud835\udf0c(\ud835\udc62) is a distribution over users; overall, the recommended item rankings among all users follow the following distribution: \ud835\udf0c(\ud835\udc62)\ud835\udf0b(\ud835\udc43| \ud835\udc62). \ud835\udf16\ud835\udc43= \ud835\udc3a(\ud835\udc43)Ta\ud835\udc43is the group exposure within a recommended basket.2 Its expected value \ud835\udf16\ud835\udf0b= \ud835\udc38\ud835\udf0b\ud835\udf0c[\ud835\udf16\ud835\udc43] is the group exposure among all the recommended baskets. Following [28, 36], we select a set of well-known fairness metrics and cover two types of fairness considerations as follows:3 (1) Equal opportunity. Promote equal treatment based on merit or utility, regardless of group membership [28, 36]. (i) Exposed Utility Ratio (EUR) [38] quantifies the deviation from the objective that the exposure of each group is proportional to its utility \ud835\udc4c(\ud835\udc3a). 
(ii) Realized Utility Ratio (RUR) [38] models actual user engagement, the click-through rates for the groups \u0393 (\ud835\udc3a) are proportional to their utility. (iii) Expected Exposure Loss (EEL) [10] is the distance between expected exposure and target exposure \ud835\udf16\u2217, which is the exposure under the ideal policy. (2) Statistical parity. Ensure comparable exposure among groups. (i) Expected Exposure Disparity (EED) [10] measures the inequality in exposure distribution across groups. (ii) Demographic Parity (DP) [38] measures the ratio of average exposure given to the two groups. Following [36], we reformulate DP as logDP to tackle the issue of empty-group scenarios and improve interpretability. Exposed Utility Ratio (logEUR) and Realized Utility Ratio (logRUR) are defined in a similar manner. Diversity. Following [48], we consider the following widely-used diversity metrics, which satisfy users\u2019 diversified demands. (i) Intra-List Distance (ILD) [6, 7] measures the average distance between every pair of items in the recommendation list (\ud835\udc43\ud835\udc62), where \ud835\udc51\ud835\udc56\ud835\udc57 is the Euclidean distance between the respective embeddings of categories (ii) Entropy [43, 54] quantifies the dispersion of item category distribution in the recommendation list \ud835\udc43\ud835\udc62; a higher degree of dispersion in the category distribution corresponds to increased diversity. (iii) Diversity Score (DS) [27] is calculated as the number of interacted/recommended categories divided by the number of interacted/recommended items. As shown in Table 3, we can group beyond-accuracy metrics according to their connection with accuracy. 5 A TWO-STEP REPETITION-EXPLORATION FRAMEWORK Given the differences depicted in Table 1, we propose a two-step repetition-exploration (TREx) framework for NBR. TREx assembles recommendations from a repetition and an exploration module. TREx allows one to easily swap out the sub-algorithms used for repetition and exploration. In the first step, we model the repetition and exploration behavior separately to get candidates from both sources. Then, we generate the recommended basket from those candidates in the second step. The main architectural differences between previous approaches to the NBR problem, which typically 2The formula to compute the exposure vector a\ud835\udc43using different position weighting models can be found in [28, 36]. 3Item fairness metric Inequity of Amortized Attention [4] is not used in this paper since some baselines do not have predicted relevance for items. TREx SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Table 3: Summary of fairness and diversity metrics; fairness metrics are adapted from [36]. \u2191indicates that higher values are better; \u2193indicates that lower values are better; \u25e6means that the closer the value is to 0, the better the performance. 
Category Metrics Goal Better Accuracy connection Equal opportunity logRUR Click-through rate proportional to relevance \u25e6 Strong logEUR Exposure proportional to relevance \u25e6 Weak EEL Exposure matches ideal (from relevance) \u2193 Weak Statistical parity EED Exposure well-distributed \u2193 Weak logDP Exposure equal across groups \u25e6 None Diversity ILD Average distance between categories for each pair of items in the list \u2191 None Entropy Entropy of item category distribution in the list \u2191 None DS Number of categories divided by the number of items in the list \u2191 None Algorithm 1: TREx Framework Data: Basket sequence \ud835\udc46, basket size \ud835\udc58, repetition confidence threshold \ud835\udc63 Result: Recommended basket \ud835\udc35\ud835\udc61+1 \ud835\udc62 for each user \ud835\udc62, 1 Calculate the repetition feature RepI (\ud835\udc56) for each item; 2 for each user \ud835\udc62do 3 Get repeat items \ud835\udc3crep \ud835\udc62,\ud835\udc61, and explore items \ud835\udc3cexpl \ud835\udc62,\ud835\udc61; 4 Calculate the repetition score RepS\ud835\udc62(\ud835\udc56) for each \ud835\udc56\u2208\ud835\udc3crep \ud835\udc62,\ud835\udc61; 5 Remove items \ud835\udc56from \ud835\udc3crep \ud835\udc62,\ud835\udc61, when RepS\ud835\udc62(\ud835\udc56) < \ud835\udc63; 6 Rank \ud835\udc3crep \ud835\udc62,\ud835\udc61according to RepS\ud835\udc62(\ud835\udc56) in descending order; 7 Initialize next basket \ud835\udc35\ud835\udc61+1 \ud835\udc62 ; 8 if |\ud835\udc3crep \ud835\udc62,\ud835\udc61| < \ud835\udc58then 9 Fill \ud835\udc35\ud835\udc61+1 \ud835\udc62 using \ud835\udc3crep \ud835\udc62,\ud835\udc61; 10 m \u2190\ud835\udc58\u2212|\ud835\udc3crep \ud835\udc62,\ud835\udc61|; 11 Fill m empty slots of \ud835\udc35\ud835\udc61+1 \ud835\udc62 using explore items via exploration module; 12 else 13 Fill \ud835\udc35\ud835\udc61+1 \ud835\udc62 using top-\ud835\udc58of \ud835\udc3crep \ud835\udc62,\ud835\udc61; 14 end 15 end consists of a single treatment of all items, and TREx, which treats repeat and explore items differently. The pseudo-code for TREx is given in Algorithm 1. Next, we describe the three modules that make up TREx.4 5.1 Repetition module As the repetition task is a much simpler task than exploration, we therefore design a repetition module targeted at improving the accuracy. Intuitively, if a user consumed an item several times in the past, they are likely to repurchase that item in the next basket. Thus, frequency information is a strong signal for repetition prediction [40]. The personal item frequency (PIF) introduced in TIFUKNN [15] and the recency window in UP-CF@r[12] both capture temporal dependencies by focusing more on recent behavior. However, they do not capture the item characteristics w.r.t. repurchasing. For example, a purchase of a bottle of milk and a pan is more likely to be followed by a repurchase of milk rather than a 4Theoretically, TREx allows us to choose or design the suitable repetition and exploration modules both targeted at the accuracy to achieve state-of-the-art performance. However, we aim to investigate the \u201cshort-cut\u201d and relationship between accuracy and various beyond-accuracy metrics. pan, even if both currently have the same purchase frequency. To consider both item features and user interest simultaneously, we use the repetition score RepS\ud835\udc62(\ud835\udc56) to represent the repurchase score of item \ud835\udc56for user \ud835\udc62. 
This score is decomposed into two parts, the item-specific repurchase feature RepI (\ud835\udc56) and the user\u2019s interest \ud835\udc38\ud835\udc62 \ud835\udc56 in item \ud835\udc56. Formally: RepS\ud835\udc62(\ud835\udc56) = \ud835\udc38\ud835\udc62 \ud835\udc56\u00b7 RepI (\ud835\udc56) . (3) This corresponds to line 4 in Algorithm 1. Given the items in the dataset \ud835\udc3c= {\ud835\udc561,\ud835\udc562, . . . ,\ud835\udc56\ud835\udc5a}, we need to derive the repurchase feature RepI (\ud835\udc56) for each item in the training set. First, the repurchase frequency \ud835\udc45\ud835\udc52\ud835\udc5d\ud835\udc39(\ud835\udc56) can be calculated by gathering the statistical information across users. To mitigate the impact of abnormally high values in some users, we introduce a hyperparameter \ud835\udefcto discount the repurchase frequency of item \ud835\udc56. Rep\ud835\udc39(\ud835\udc56) = \u00cd \ud835\udc48(item \ud835\udc56repurchase frequency)\ud835\udefc #users who bought item \ud835\udc56at least once . (4) In addition, some items might only have a few samples, which might lead to low confidence about their repetition feature estimation. We leverage the average estimate RepF across all items as supplementary information to help items with a few samples. Then, the final repetition feature is given by: RepI (\ud835\udc56) = Rep\ud835\udc39(\ud835\udc56) + RepF \ud835\udc41\ud835\udc56 , (5) where \ud835\udc41\ud835\udc56is the number of users who bought item \ud835\udc56. Thus, the average RepF will have a small effect on RepI (\ud835\udc56) when we have more samples to compute item-specific features. This corresponds to line 1 in Algorithm 1. The item frequency in a user\u2019s historical baskets can partially reflect the user\u2019s interest. Yet, user interests can also be dynamic. To model temporal dependencies, we introduce a time-decay factor \ud835\udefd, which makes the recent interactions have more impact on the interest \ud835\udc38\ud835\udc62 \ud835\udc56. Assume that a specific item \ud835\udc56was purchased by the user \ud835\udc62several times in their historical baskets {\ud835\udc35\ud835\udc591 \ud835\udc62, \ud835\udc35\ud835\udc592 \ud835\udc62, . . . , \ud835\udc35\ud835\udc59\ud835\udc5a \ud835\udc62}; the corresponding position set is denoted as \ud835\udc3f\ud835\udc56= {\ud835\udc591,\ud835\udc592, . . . ,\ud835\udc59\ud835\udc5a}; then \ud835\udc38\ud835\udc62 \ud835\udc56is defined as: \ud835\udc38\ud835\udc62 \ud835\udc56= \u00cd\ud835\udc5a \ud835\udc57=1 \ud835\udefd\ud835\udc47\u2212\ud835\udc59\ud835\udc57, (6) where \ud835\udc47represents the length of the user\u2019s basket. TREx\u2019s repeat recommendation model takes item features, user interests, and the temporal order of baskets into consideration. We treat the items in baskets independently and calculate the repetition score RepS for all items that appeared in the previous baskets for each user, which will be used in the final basket generation process. SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Li et al. Table 4: Statistics of the processed datasets. Dataset #items #users Avg. basket size Avg. #baskets per user Repeat ratio Explore ratio Instacart 29,399 19,210 10.06 15.91 0.60 0.40 Dunnhumby 37,162 2,482 10.07 43.17 0.43 0.57 5.2 Exploration module As it is more challenging than repetition, exploration is also an important aspect of NBR. To complement the repetition module, we design different exploration modules, targeting item fairness and diversity, respectively. 
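Before turning to the exploration modules in detail, a minimal Python sketch of the repetition scoring of Section 5.1 (Eqs. 3-6); all names are illustrative and not taken from the authors' released code, and the per-user "repurchase frequency" in Eq. 4 is read here as purchases beyond the first, which is an assumption of this sketch.

from collections import defaultdict

def repetition_features(baskets_by_user, alpha):
    # Eqs. 4-5: item-specific repurchase feature RepI(i)
    raw = defaultdict(float)
    n_users = defaultdict(int)      # N_i: users who bought item i at least once
    for baskets in baskets_by_user.values():
        counts = defaultdict(int)
        for basket in baskets:
            for item in set(basket):
                counts[item] += 1
        for item, c in counts.items():
            n_users[item] += 1
            raw[item] += (c - 1) ** alpha if c > 1 else 0.0   # discounted repurchase count
    rep_f = {item: raw[item] / n_users[item] for item in raw}                    # Eq. 4
    avg_rep_f = sum(rep_f.values()) / max(len(rep_f), 1)
    return {item: rep_f[item] + avg_rep_f / n_users[item] for item in rep_f}     # Eq. 5

def repetition_scores(baskets, rep_i, beta):
    # Eqs. 3 and 6: RepS_u(i) = E_u(i) * RepI(i), with time-decayed interest E_u(i)
    T = len(baskets)
    interest = defaultdict(float)
    for pos, basket in enumerate(baskets, start=1):
        for item in set(basket):
            interest[item] += beta ** (T - pos)                                  # Eq. 6
    return {item: e * rep_i.get(item, 0.0) for item, e in interest.items()}      # Eq. 3

The keys of the returned score dictionary are exactly the user's repeat items up to the current timestamp, so the same pass also yields the repeat/explore split of Eq. 2.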
For each user \ud835\udc62, the exploration candidates \ud835\udc3c\ud835\udc52\ud835\udc65\ud835\udc5d\ud835\udc59 \ud835\udc62,\ud835\udc61 are the set of items that the user never bought before. Item fairness. According to [24], we find that NBR methods usually have varying degrees of popularity bias, which means they recommend more popular items compared to the ground truth and harm item fairness. Thus, we recommend unpopular items \ud835\udc56\u2208\ud835\udc3a\u2212for the exploration module for the sake of approaching the distribution of ground truth and decreasing the exposure gap between the popular and the unpopular groups. Specifically, we randomly sample explore items based on a sampling probability, which is calculated from the purchase frequency of unpopular items. Diversity. Diversity optimizes for more dispersed categories in the predicted basket. For each user, we record categories of repetition candidates, rank exploration candidates according to their popularity, and select explore items to fill in the \ud835\udc35\ud835\udc61+1 \ud835\udc62 in turn. The category of each explore item differs from the categories already in \ud835\udc35\ud835\udc61+1 \ud835\udc62 . 5.3 Basket generation module To construct the final basket to be recommended by TREx for the accuracy objective, we adopt a repetition greedy approach and first consider the item candidates generated by the repetition module and fill the remaining slots via the exploration module. TRExFairness and TRExdiversity denote TREx with the exploration module targeted at fairness and diversity, respectively. For a user \ud835\udc62, we get their repetition score RepSu(\ud835\udc56), where \ud835\udc56\u2208\ud835\udc3crep \ud835\udc62,\ud835\udc61(Algorithm 1, lines 3\u2013 4). First, we define a confidence threshold \ud835\udc63for the repetition score and repetition items are removed from the \ud835\udc56\u2208\ud835\udc3crep \ud835\udc62,\ud835\udc61when the corresponding RepSu(\ud835\udc56) < \ud835\udc63(line 5).5 Then, \ud835\udc3crep \ud835\udc62,\ud835\udc61can be seen as the repetition candidates set. If the number of repetition candidates exceeds the basket size, the items with a high score will have priority to fill the basket (Algorithm 1, line 13). If the number of repetition candidates is smaller than the basket size, the basket is first filled with all items in the repetition candidates set \ud835\udc3crep \ud835\udc62,\ud835\udc61. Then, we fill up the basket using the explore items via the exploration module, where \ud835\udc5arepresents the number of empty slots (lines 9\u201311). 6 EXPERIMENTS 6.1 Experimental setup Datasets. We conduct experiments on two widely-used datasets: (i) Instacart,6 which includes a large number of grocery orders from 5The confidence threshold \ud835\udc63controls the proportion of repeat items and explore items in the recommendation, as well as the accuracy and beyond-accuracy trade-off in this paper. We sweep repetition confidence bound \ud835\udc63to get TREx variants with different accuracy and beyond-accuracy metrics performance. 6https://www.kaggle.com/c/instacart-market-basket-analysis/data users; following [28, 32], \u223c20000 users are randomly selected to conduct experiments; and (ii) Dunnhumby,7 which contains two years\u2019 household-level transactions of 2500 frequent shoppers at a retailer. Following [1, 28], we sample users who have at least three baskets and remove items that appeared less than five times. 
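The basket generation step of Section 5.3 (Algorithm 1) can be sketched as follows; explore_fill stands in for either exploration module of Section 5.2, sample_unpopular is one possible fairness-oriented filler, and all names are illustrative rather than taken from the released code.

import random

def generate_basket(rep_scores, explore_candidates, k, v, explore_fill):
    # Algorithm 1: repetition candidates with RepS_u(i) >= v fill the basket
    # first (highest score first); remaining slots go to the exploration module.
    repeats = sorted((i for i, s in rep_scores.items() if s >= v),
                     key=lambda i: rep_scores[i], reverse=True)
    basket = repeats[:k]
    if len(basket) < k:
        basket += explore_fill(explore_candidates, k - len(basket))
    return basket

def sample_unpopular(candidates, n, unpopular=None, freq=None):
    # fairness-oriented filler: sample never-bought items from the unpopular
    # group, weighted by their purchase frequency; illustrative only.
    pool = [i for i in candidates if unpopular is None or i in unpopular]
    freq, picked = freq or {}, []
    while pool and len(picked) < n:
        item = random.choices(pool, weights=[freq.get(i, 0) + 1 for i in pool], k=1)[0]
        picked.append(item)
        pool.remove(item)
    return picked

A diversity-oriented filler would instead walk the popularity-ranked explore candidates and keep only items whose category is not yet present in the basket, as described above; explore_fill can be bound to either variant, for example with functools.partial.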
The two datasets vary in the repeat ratio, i.e., the proportion of repeat items in the ground-truth baskets [24]. We focus on the fixed size (10 or 20) NBR problem. The statistics of the processed datasets are shown in Table 4. In our experiments, each dataset is partitioned according to [1, 12, 28, 32]. The training baskets encompass all user baskets except the last one. In cases where users have over 50 baskets in the training data, only their last 50 baskets are considered for inclusion in the training set. The final baskets of all users are then divided equally between a 50% validation set and a 50% test set. Figure 1 shows the distribution of users across repeat ratios, which is the proportion of repeat items in the ground-truth basket. 0.0 0.2 0.4 0.6 0.8 1.0 Repeat Ratio 0 500 1000 1500 2000 Number of users Instacart 0.0 0.2 0.4 0.6 0.8 1.0 Repeat Ratio 0 50 100 150 200 250 Dunnhumby Figure 1: Distribution of users across different repeat ratios for Instacart and Dunnhumby. NBR baselines. We compare TREx with 8 representative baselines, which we select based on their characteristics in the analysis performed in [24, 28], divided into three groups: 6.1.1 Simple baselines. (i) G-TopFreq uses the \ud835\udc58most popular items in the dataset to form the recommended next basket. (ii) P\u2013 TopFreq is a personalized TopFreq method, which treats the most frequent \ud835\udc58items in historical records of the user as the next basket. (iii) GP-TopFreq [24] is a simple combination of P-TopFreq and G-TopFreq, which first use P-TopFreq to fill the basket, then use G-TopFeq to fill the remaining slots. 6.1.2 Nearest neighbor-based methods. (i) TIFUKNN [15] is a state-of-art method that models the temporal dynamics of frequency information of users\u2019 past baskets to introduce Personalized Frequency Information (PIF), then it uses KNN-based method on the PIF. (ii) UP-CF@r [12] is a combination of recency aware user-wise popularity and user-wise collaborative filtering. 6.1.3 Neural network-based methods. (i) Dream [49] models users\u2019 global sequential basket behavior for NBR using recurrent neural network (RNN). (ii) DNNTSP [50] is a state-of-art method that leverages a GNN and self-attention techniques. It encodes item-item relations via a graph and employs a self-attention mechanism to capture temporal dependencies of users\u2019 basket sequences. (iii) ReCANet [1] is a repeat-only model for NBR, which uses user-item representations with historical consumption patterns via RNN. Configurations. To assess group fairness (Section 4), we follow configurations from previous research [26, 28]; the group of items is determined by their popularity (i.e., the number of purchases 7https://www.dunnhumby.com/source-files/ TREx SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Instacart Dunnhumby 0.0 0.1 0.2 0.3 Recall P-TopFreq +T +T+RF Instacart Dunnhumby 0.0 0.1 0.2 0.3 NDCG P-TopFreq +T +T+RF Figure 2: Performance of TREx-Rep when we add a timedecay factor \ud835\udefd(+T), add both \ud835\udefdand item-specific repetition feature \ud835\udc45\ud835\udc52\ud835\udc5d\ud835\udc3c(\ud835\udc56) (+T+RF). recorded in the historical baskets of the dataset). The top 20% of items with the highest purchase frequency as the popular group (\ud835\udc3a+), while the remaining 80% of items are assigned to the unpopular group (\ud835\udc3a\u2212). For the baseline methods, a grid search is performed to find the optimal hyper-parameters via the validation set. 
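The popularity-based group assignment used by the fairness metrics (Configurations above) amounts to a simple frequency cut; a small sketch, assuming raw purchase counts over the historical baskets.

from collections import Counter

def popularity_groups(baskets_by_user, top_share=0.2):
    # top 20% of items by purchase frequency form G+, the remaining 80% form G-
    freq = Counter()
    for baskets in baskets_by_user.values():
        for basket in baskets:
            freq.update(basket)
    ranked = [item for item, _ in freq.most_common()]
    cut = int(len(ranked) * top_share)
    return set(ranked[:cut]), set(ranked[cut:])   # (G+, G-)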
For TIFUKNN, the number of neighbors \ud835\udc58is tuned on {100, 300, 500, 900, 1100, 1300}, the number of groups \ud835\udc5ais tuned on {3, 7, 11, 15, 19, 23}, the within-basket time-decayed ratio \ud835\udc5f\ud835\udc4fand the group time-decayed ratio \ud835\udc5f\ud835\udc54are selected from {0.1, 0.2, . . . , 0.9, 1}, and the fusion weight \ud835\udefcis selected from {0, 0.1, . . . , 0.9, 1}. For UPCF@r, recency window \ud835\udc5fis tuned on {1, 5, 10, 25, 100, \u221e}, locality \ud835\udc5eis tuned on [1, 5, 10, 50, 100, 1000], and asymmetry \ud835\udefcis tuned on {0, 0.25, 0.5, 0.75, 1}. For Dream, DNNTSP, and ReCANet, the item and user embedding size is tuned on {16, 32, 64, 128}. As to TREx, for the repetition module, \ud835\udefcis selected from {0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}, and the time-decay factor \ud835\udefdis selected from {0.7, 0.75, 0.8, 0.85, 0.9, 0.95, 1.0}. To facilitate reproducibility, we release the source code and all hyper-parameters in an online repository: https://github.com/lynEcho/TREX. 6.2 Overall accuracy performance By decoupling the repetition and exploration tasks, TREx-Rep optimizes for the repeat items prediction and accounts for the accuracy of the NBR performance. Table 5 shows the experimental results for TREx-Rep and the baselines. We observe that TREx-Rep surpasses two complex deep learning-based methods (i.e., Dream and DNNTSP) by a large margin on the Dunnhumby and Instacart datasets, and TREx-Rep always achieves or matches the SOTA accuracy on both datasets across different accuracy metrics. Note that, TREx-Rep achieves a competitive accuracy performance by only using part of the available slots in the basket.8 Compared to the deep learning methods with complex architectures that try to learn basket representations and model temporal relations, TREx-Rep is very efficient due to its simplicity. To investigate the effect of the repetition features and the improvement in repetition performance in NBR. We conduct experiments on TREx-Rep by gradually adding the time-decay factor \ud835\udefdand item-specific repetition feature RepI (\ud835\udc56). The results are shown in Figure 2. The accuracy increases when we gradually integrate different factors into TREx-Rep, which indicates that both the time-decay factor \ud835\udefdand the item-specific repetition feature \ud835\udc45\ud835\udc52\ud835\udc5d\ud835\udc3c(\ud835\udc56) contribute 8As TREx-Rep only recommends repeat items, the basket could not be fulfilled when the number of user\u2019s repeat items (historical items) is smaller than the basket size. ReCANet also only recommends repeat items, however, it is a complex neural-based model, which is much slower than the proposed TREx-Rep module. 0.2 0.4 0.6 0.8 1.0 Sample ratio 0 5 10 Recall improvement (%) Instacart Dunnhumby Figure 3: The recall improvement of (+T+RF) over (+T) when the training sample ratio changes from 0.2 to 1. 
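For the TREx grids listed above, model selection reduces to a small search on the validation split; a sketch under the assumption that validation Recall@k is the selection criterion, with evaluate as a hypothetical callback wrapping the scoring and generation sketches given earlier.

import itertools

ALPHAS = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]   # frequency discount alpha
BETAS = [0.7, 0.75, 0.8, 0.85, 0.9, 0.95, 1.0]                      # time-decay factor beta

def select_hyperparameters(evaluate):
    # evaluate(alpha, beta) -> validation Recall@k of TREx-Rep under these settings
    return max(itertools.product(ALPHAS, BETAS), key=lambda ab: evaluate(*ab))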
1.00 1.05 1.10 1.15 1.20 ILD 0.40 0.41 0.42 0.43 0.44 0.45 0.46 Recall + Instacart TREx-Div G-TopFreq Dream P-TopFreq GP-TopFreq UP-CF@r TIFUKNN DNNTSP ReCANet 2.2 2.4 2.6 2.8 Entropy 0.40 0.41 0.42 0.43 0.44 0.45 0.46 + Instacart 0.30 0.35 0.40 0.45 0.50 DS 0.40 0.41 0.42 0.43 0.44 0.45 0.46 + Instacart 0.7 0.8 0.9 1.0 ILD 0.14 0.16 0.18 0.20 0.22 Recall + Dunnhumby 1.4 1.5 1.6 1.7 Entropy 0.14 0.16 0.18 0.20 0.22 + Dunnhumby 0.20 0.25 0.30 0.35 0.40 DS 0.14 0.16 0.18 0.20 0.22 + Dunnhumby 1.00 1.05 1.10 1.15 1.20 ILD 0.40 0.41 0.42 0.43 0.44 0.45 0.46 Recall + Instacart TREx-Div G-TopFreq Dream P-TopFreq GP-TopFreq UP-CF@r TIFUKNN DNNTSP ReCANet 2.2 2.4 2.6 2.8 Entropy 0.40 0.41 0.42 0.43 0.44 0.45 0.46 + Instacart 0.30 0.35 0.40 0.45 0.50 DS 0.40 0.41 0.42 0.43 0.44 0.45 0.46 + Instacart 0.7 0.8 0.9 1.0 ILD 0.14 0.16 0.18 0.20 0.22 Recall + Dunnhumby 1.4 1.5 1.6 1.7 Entropy 0.14 0.16 0.18 0.20 0.22 + Dunnhumby 0.20 0.25 0.30 0.35 0.40 DS 0.14 0.16 0.18 0.20 0.22 + Dunnhumby Figure 4: Performance of TREx\ud835\udc37\ud835\udc56\ud835\udc63\ud835\udc52\ud835\udc5f\ud835\udc60\ud835\udc56\ud835\udc61\ud835\udc66at different \ud835\udc63values, compared with different NBR methods in terms of different diversity metrics. The red + marker indicates the direction with both high accuracy and diversity. to the accuracy performance of TREx-Rep. Significant improvements over only using the time-decay factor \ud835\udefdcan be observed on the Dunnhumby dataset when the item-specific repetition feature \ud835\udc45\ud835\udc52\ud835\udc5d\ud835\udc3c(\ud835\udc56) is also adopted to compute the repetition score \ud835\udc45\ud835\udc52\ud835\udc5d\ud835\udc46\ud835\udc62(\ud835\udc56). Note that the improvement of adding \ud835\udc45\ud835\udc52\ud835\udc5d\ud835\udc3c(\ud835\udc56) to TREx-Rep on the Instacart dataset is relatively small. We conjecture that items in the Instacart dataset are more regular products, that have little difference in repetition feature with each other. Figure 3 shows the performance when using different amounts of training samples, the improvement in recall resulting from adding \ud835\udc45\ud835\udc52\ud835\udc5d\ud835\udc3c(\ud835\udc56) increases when we use more training data since we have more samples for estimating the repetition feature \ud835\udc45\ud835\udc52\ud835\udc5d\ud835\udc3c(\ud835\udc56). 6.3 Beyond-accuracy performance We conduct experiments to verify whether TREx with the designed models (i.e., TREx\ud835\udc37\ud835\udc56\ud835\udc63\ud835\udc52\ud835\udc5f\ud835\udc60\ud835\udc56\ud835\udc61\ud835\udc66and TREx\ud835\udc39\ud835\udc4e\ud835\udc56\ud835\udc5f\ud835\udc5b\ud835\udc52\ud835\udc60\ud835\udc60) could achieve better performance on representative diversity and item fairness metrics. Note that, the recommended basket remains fixed for a specific user in existing baselines, resulting in fixed performance regarding both accuracy and beyond-accuracy metrics on each dataset. In contrast, TREx provides the flexibility to adjust the trade-off between accuracy and beyond-accuracy metrics by adjusting the repetition confidence bound \ud835\udc63. This allows for a more nuanced control over the recommendation process compared to traditional baselines. SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Li et al. Table 5: Comparison of TREx-Rep (repetition-module only) against baselines and two types of state-of-art methods; boldface indicates the maximum; underlining indicates the second best performing method. 
\u2020 indicates that TREx-Rep results achieve the same level of performance as SOTA baselines (paired t-test). Dataset Metric G-TopFreq P-TopFreq GP-TopFreq UP-CF@r TIFUKNN Dream DNNTSP ReCANet TREx-Rep Instacart Recall@10 0.0704 0.3143 0.3150 0.3377 0.3456 0.0704 0.3295 0.3490 0.3476\u2020 NDCG@10 0.0817 0.3339 0.3343 0.3582 0.3657 0.0817 0.3434 0.3699 0.3661\u2020 PHR@10 0.4600 0.8447 0.8460 0.8586 0.8639 0.4600 0.8581 0.8668 0.8655\u2020 Recall@20 0.0973 0.4138 0.4168 0.4405 0.4559 0.0979 0.4339 0.4562 0.4557\u2020 NDCG@20 0.0962 0.3889 0.3902 0.4161 0.4271 0.0968 0.4018 0.4303 0.4269\u2020 PHR@20 0.5302 0.8921 0.8959 0.9045 0.9098 0.5346 0.9033 0.9097 0.9092\u2020 Dunnhumby Recall@10 0.0897 0.1628 0.1628 0.1699 0.1763 0.0896 0.0871 0.1730 0.1815\u2020 NDCG@10 0.0798 0.1562 0.1562 0.1639 0.1683 0.0759 0.0792 0.1625 0.1689\u2020 PHR@10 0.3795 0.5399 0.5399 0.5536 0.5729 0.3873 0.4303 0.5655 0.5761\u2020 Recall@20 0.1046 0.2075 0.2075 0.2168 0.2227 0.1081 0.1442 0.2252 0.2257\u2020 NDCG@20 0.0877 0.1787 0.1787 0.1885 0.1917 0.0853 0.1021 0.1879 0.1921\u2020 PHR@20 0.4392 0.6116 0.6116 0.6326 0.6342 0.4558 0.5378 0.6377 0.6390\u2020 Diversity. The experimental results w.r.t. the accuracy and different diversity metrics (i.e., ILD, Entropy, and DS) are shown in Figure 4.9 We have the following observations: (1) Compared to methods (i.e., TIFUKNN and ReCANet) with the best accuracy, TRExDiversity can achieve better performance in terms of all three diversity metrics while preserving the same level of accuracy on both datasets. (2) In contrast to other baseline methods (excluding TIFUKNN and ReCANet), TRExDiversity showcases the ability to recommend baskets with enhanced accuracy and diversity simultaneously. Item fairness. The experimental results regarding the accuracy and five fairness metrics (LogRUR, logEUR, logDP, EEL, and EED) are depicted in Figure 5. Based on our analysis, we make the following observations: (i) On the Dunnhumby dataset, TREx\ud835\udc39\ud835\udc4e\ud835\udc56\ud835\udc5f\ud835\udc5b\ud835\udc52\ud835\udc60\ud835\udc60 demonstrates superior fairness w.r.t. logDP and logEUR while maintaining the same level of accuracy performance as the best-performing baselines (i.e., TIFUKNN and ReCANet). Similarly, on Dunnhumby, TREx\ud835\udc39\ud835\udc4e\ud835\udc56\ud835\udc5f\ud835\udc5b\ud835\udc52\ud835\udc60\ud835\udc60showcases enhanced fairness across four fairness metrics (logDP, logEUR, EEL, and EED) while achieving accuracy performance comparable to the best-performing baselines. (ii) TRExFairness demonstrates its capability to recommend baskets with improved accuracy and fairness w.r.t. logDP and logEUR concurrently, when compared to complex baselines such as Dream, UP-CF@r, and DNNTSP. (iii) In terms of logRUR, TRExFairness exhibits inferior performance in fairness while maintaining similar accuracy levels compared to several existing baselines. Moreover, as both accuracy and fairness decrease simultaneously, a win-win and lose-lose scenario is evident rather than a conventional trade-off relationship in this fairness evaluation. Connections with accuracy. To get a better understanding of the possibility of leveraging the \u201cshort-cut\u201d via TREx to improve beyond-accuracy metrics, we conduct an analysis by categorizing these beyond-accuracy metrics into different groups based on their connections with accuracy (see Section 4 and Table 3). We can observe that TREx can easily achieve better performance w.r.t. 
beyond-accuracy metrics have no connections with the accuracy (i.e., ILD, Entropy, DS, and logDP) on two datasets. When 9G-TopFreq and Dream exhibit low recall, fairness, and diversity, which prevents them from being visible in Figures 4 and 5. beyond-accuracy metrics (e.g., logEUR, EEL, and EED) exhibit weak associations with accuracy, TREx outperforms alternative methods in some instances (4 out of 6). However, in cases where beyondaccuracy metrics are strongly correlated with accuracy (e.g., logRUR), TREx struggles to achieve superior performance. Since only accurate predictions contribute to improvements in logRUR fairness, leveraging the exploration module to optimize such beyondaccuracy metrics is very challenging. 6.4 Reflections and discussions The above results verify our hypothesis and demonstrate the effectiveness of leveraging a \u201cshort-cut\u201d strategy to achieve better beyond-accuracy under the current evaluation paradigms. It is controversial to use this \u201cshort-cut\u201d strategy in real-world scenarios when NBR practitioners consider beyond-accuracy metrics. In scenarios where the accuracy of exploration is not important to practitioners and only overall accuracy is of concern, the \u201cshortcut\u201d strategy proves to be a straightforward and efficient means to achieve better performance w.r.t. various beyond-accuracy metrics. TREx must be considered or serve as a baseline before designing more sophisticated methods, such as including multi-objective loss functions [7, 20], integer programming [53], and so on. However, in some scenarios, it is unreasonable to sacrifice the exploration accuracy despite it being low. Therefore, the existence of the \u201cshort-cut\u201d strategy reveals the potential flaws of the existing evaluation paradigms (i.e., using overall metrics to define success). We look into the exploration accuracy [24] of TRExDiversity when it outperforms several existing baselines in terms of both overall accuracy and diversity (i.e., success according to existing evaluation paradigm). Table 6 shows the huge decrease in the accuracy of exploring items in the recommended basket of TRExDiversity, compared to these baselines, since the designed module in TRExDiversity is mainly designed for improving diversity and does not consider accuracy. In this sense, we can not simply claim the superiority of TRExDiversity compared to these baselines just depends on the overall performance. 
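A fine-grained check of this kind only needs the repeat/explore split of Table 2; the sketch below is one plausible reading of the repetition and exploration recall of [24] for a single user, not the exact evaluation script used in the paper.

def split_recall(predicted, truth, history_items):
    # recall restricted to repeat items (T_rep) and to explore items (T_expl)
    truth_rep = {i for i in truth if i in history_items}
    truth_expl = {i for i in truth if i not in history_items}
    pred = set(predicted)
    rec_rep = len(pred & truth_rep) / len(truth_rep) if truth_rep else None
    rec_expl = len(pred & truth_expl) / len(truth_expl) if truth_expl else None
    return rec_rep, rec_expl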
Note that, the fundamental reason for the existence of this \u201cshortcut\u201d is that predicting accurate explore items is much more difficult TREx SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA 0.2 0.4 0.6 0.8 logEUR 0.30 0.31 0.32 0.33 0.34 0.35 0.36 Recall + Instacart TREx-Fair G-TopFreq Dream P-TopFreq GP-TopFreq UP-CF@r TIFUKNN DNNTSP ReCANet 0.5 0.6 0.7 0.8 logRUR 0.30 0.31 0.32 0.33 0.34 0.35 0.36 + Instacart 0.00 0.02 0.04 EEL 0.30 0.31 0.32 0.33 0.34 0.35 0.36 + Instacart 1.1 1.2 1.3 1.4 EED 0.30 0.31 0.32 0.33 0.34 0.35 0.36 + Instacart 3.0 3.2 3.4 3.6 logDP 0.30 0.31 0.32 0.33 0.34 0.35 0.36 + Instacart 0.5 1.0 1.5 logEUR 0.08 0.10 0.12 0.14 0.16 0.18 0.20 Recall + Dunnhumby 1.5 1.6 1.7 1.8 logRUR 0.08 0.10 0.12 0.14 0.16 0.18 0.20 + Dunnhumby 0.2 0.4 0.6 EEL 0.08 0.10 0.12 0.14 0.16 0.18 0.20 + Dunnhumby 2.0 2.2 2.4 2.6 EED 0.08 0.10 0.12 0.14 0.16 0.18 0.20 + Dunnhumby 2.5 3.0 3.5 4.0 logDP 0.08 0.10 0.12 0.14 0.16 0.18 0.20 + Dunnhumby Figure 5: Performance of TREx\ud835\udc39\ud835\udc4e\ud835\udc56\ud835\udc5f\ud835\udc5b\ud835\udc52\ud835\udc60\ud835\udc60at different \ud835\udc63values, compared with different NBR methods in terms of different fairness metrics. The red + marker indicates the direction with both high accuracy and fairness. than predicting repeat items, and exploration prediction only accounts for a limited user\u2019s overall accuracy [22\u201325]. Given that exploration prediction contributes only minimally to the overall accuracy of users, it becomes feasible to allocate resources toward optimizing other beyond-accuracy metrics instead of accuracy itself. Therefore, beyond using the overall performance to measure accuracy and beyond-accuracy metrics, a fine-grained level evaluation could help to provide a more rigid identification of the success when considering beyond-accuracy metrics. Table 6: Exploration accuracy [24] of TREx\ud835\udc37\ud835\udc56\ud835\udc63\ud835\udc52\ud835\udc5f\ud835\udc60\ud835\udc56\ud835\udc61\ud835\udc66compared with NBR methods that are inferior to it within existing evaluation paradigms. Dataset Metric TIFUKNN Dream DNNTSP TREx-Div Instacart Recall\ud835\udc52\ud835\udc65\ud835\udc5d\ud835\udc59@10 0.0014 0.0322 0.0014 0.0002 PHR\ud835\udc52\ud835\udc65\ud835\udc5d\ud835\udc59@10 0.0037 0.1431 0.0040 0.0009 Recall\ud835\udc52\ud835\udc65\ud835\udc5d\ud835\udc59@20 0.0077 0.0526 0.0072 0.0008 PHR\ud835\udc52\ud835\udc65\ud835\udc5d\ud835\udc59@20 0.0198 0.2120 0.0217 0.0031 Dunnhumby Recall\ud835\udc52\ud835\udc65\ud835\udc5d\ud835\udc59@10 0.0042 0.0111 0.0017 0.0000 PHR\ud835\udc52\ud835\udc65\ud835\udc5d\ud835\udc59@10 0.0139 0.0521 0.0085 0.0019 Recall\ud835\udc52\ud835\udc65\ud835\udc5d\ud835\udc59@20 0.0069 0.0214 0.0028 0.0016 PHR\ud835\udc52\ud835\udc65\ud835\udc5d\ud835\udc59@20 0.0232 0.1045 0.0115 0.0065 7 CONCLUSION We have expanded the research objectives of NBR to go beyond sole accuracy to encompass both accuracy and beyond-accuracy metrics. We have recognized a potential \u201cshort-cut\u201d strategy to optimize beyond-accuracy metrics while preserving high accuracy levels. To capitalize on and validate the presence of such \u201cshort-cuts,\u201d we have introduced a plug-and-play framework called two-step repetitionexploration (TREx) considering the differences between repetition and exploration tasks. This framework treats repeat items and explore items as distinct entities, employing a straightforward yet highly effective repetition module to uphold accuracy standards. 
Concurrently, two exploration modules have been devised to target the optimization of beyond-accuracy metrics. We have conducted experiments on two publicly available datasets w.r.t. eight representative beyond-accuracy metrics, including item fairness (i.e., logEUR, LogRUR, logDP, EEL, and EED) and diversity (i.e., ILD, Entropy, and DS). Our experimental results demonstrate the effectiveness of our proposed \u201cshort-cut\u201d strategy, which can achieve better beyondaccuracy performance w.r.t. several fairness and diversity metrics on different datasets. Additionally, we group beyond-accuracy metrics according to the strength of their connection with accuracy. Our analysis reveals that the stronger the connection with accuracy, the more difficult it becomes to employ a \u201cshort-cut\u201d strategy to optimize these beyond-accuracy metrics, favoring the metrics with a stronger connection to avoid such short-cuts. As to the broader implications of our work, we have discussed the reasonableness of leveraging the \u201cshort-cut\u201d strategy to trade the accuracy of exploration for beyond-accuracy metrics in various scenarios. The presence of this \u201cshort-cut\u201d highlights a potential flaw in the definition of success within existing evaluation paradigms, particularly in scenarios where exploration accuracy is important despite being low [45]. A fine-grained level evaluation should be performed in NBR to offer a more precise identification of achieving \u201cbetter\u201d performance in such a scenario. Despite the simplicity of the \u201cshort-cut\u201d strategy and TREx, our paper sheds light on the research direction of considering both accuracy and beyond-accuracy metrics in NBR. Rather than blindly embracing sophisticated methods in NBR, follow-up research should realize the existence of the \u201cshort-cut\u201d and potential flaws of existing evaluation paradigms in this research direction. Acknowledgements. This work is partially supported by the Dutch Research Council (NWO), under project numbers 024.004.022, NWA.1389.20.183, KICH3.LTP.20.006, and VI.Vidi.223.166. All content represents the opinion of the authors, which is not necessarily shared or endorsed by their respective employers and/or sponsors. SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Li et al."
16
+ }
intro_8K/test_introduction_short_2405.01159v1.json ADDED
@@ -0,0 +1,16 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01159v1",
3
+ "title": "TartuNLP at EvaLatin 2024: Emotion Polarity Detection",
4
+ "abstract": "This paper presents the TartuNLP team submission to EvaLatin 2024 shared task\nof the emotion polarity detection for historical Latin texts. Our system relies\non two distinct approaches to annotating training data for supervised learning:\n1) creating heuristics-based labels by adopting the polarity lexicon provided\nby the organizers and 2) generating labels with GPT4. We employed parameter\nefficient fine-tuning using the adapters framework and experimented with both\nmonolingual and cross-lingual knowledge transfer for training language and task\nadapters. Our submission with the LLM-generated labels achieved the overall\nfirst place in the emotion polarity detection task. Our results show that\nLLM-based annotations show promising results on texts in Latin.",
5
+ "authors": "Aleksei Dorkin, Kairit Sirts",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "cs.CL",
9
+ "cats": [
10
+ "cs.CL"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Parameter AND Efficient AND Fine AND Tuning",
14
+ "gt": "This short report describes the system developed the TartuNLP team for the Emotion Polarity De- tection task of the EvaLatin 2024 Evaluation Cam- paign (Sprugnoli et al., 2024). The goal of the task was to label Latin texts from three historical authors with four emotion polarity labels as positive, neg- ative, neutral or mixed. For this task, no training data was provided, but only a polarity lexicon and a small evaluation set with 44 annotated sentences. Our approach entails two steps. First, we an- notated data for supervised model training a) via heuristic rules using the provided polarity lexicon and b) using GPT-4 (see Section 2). Secondly, we adopted knowledge transfer with parameter- efficient training via adapters (Houlsby et al., 2019) followed by task-specific fine-tuning on the data an- notated in the first step (see Section 3). The knowl- edge transfer was applied both cross-lingually via pretraining on an English sentiment analysis task, and monolingually by training on an unannotated Latin text corpus. We made two submissions to the shared task: one with heuristically annotated training data and another with the GPT-4 annotated labels. Both submissions obtained competitive results, with the submission with GPT-4 labels obtaining the first place overall. The code for the system is available on GitHub.1",
15
+ "main_content": "For the Emotion Polarity Detection task, no training data was provided. However, the organizers provided two useful resources: a polarity lexicon and 1https://github.com/slowwavesleep/ ancient-lang-adapters/tree/lt4hala Label Heuristics LLM-based positive 6535 1334 negative 2243 1028 mixed 5884 221 neutral 735 4698 Total 15396 7281 Table 1: Statistics of the annotated training data. a small gold annotated sample. We employed two distinct approaches to annotate the training data based on these resources: a heuristics-based and an LLM-based. The annotated data from both approaches is available on HuggingFace Hub.2 The label distribution for the annotated data is presented in Table 1. 2.1. Heuristics-based annotation In this approach, we employed the provided polarity lexicon similarly to the lexicon-based classifier by Sprugnoli et al. (2023). First, data from all available Universal Dependencies (Zeman et al., 2023) sources (Version 2.13, the most recent one at the time of writing) in Latin was collected : 1) Index Thomisticus Treebank (ITTB); 2) Late Latin Charter Treebank (LLCT); 3) UDante; 4) Perseus; 5) PROIEL treebank. Then, the sentences containing no nouns or adjectives in the lexicon were removed. The filtered sentences were assigned labels based on the following rules: 2https://huggingface.co/datasets/ adorkin/evalatin2024 1 arXiv:2405.01159v1 [cs.CL] 2 May 2024 1) If all words in the sentence are neutral according to the polarity lexicon, the sentence was labeled as neutral; 2) If the mean polarity of the words in the sentence is in the range from -0.1 to 0.1, then the sentence was labeled as mixed; 3) If the mean polarity is larger than 0.1, then the sentence was labeled as positive; 4) If the mean polarity is less than 0.1, then the sentence was labeled as negative. Our expectation from this approach was that training a model on lexicon-annotated data would result in a model with better generalization capabilities than simply applying the lexicon classifier. The total amount of sentences annotated this way was 15396. 2.2. LLM-based annotation In this approach, we made use of the OpenAI\u2019s GPT-4 model via the API (gpt-4-turbopreview3). The sentences were again sampled from the Universal Dependencies sources. The model was given the description of the problem and one example per label from the gold annotations file. The model was tasked with assigning the given sentence a label and providing an explanation as to why it assigned that particular label. With this approach, we expected that GPT-4 could simulate the annotation process done by an expert in Latin. According to the first author\u2019s somewhat limited understanding of Latin and based on a small sample of annotations and explanations done by the model, the output seems reasonable. We set out to spend about 15 euros per data annotation, which after removing sentences with invalid labels resulted in 7281 annotated sentences. 3. Description of the system The system in our submission is based on the BERT architecture (Devlin et al., 2019). More specifically, we employed the multilingual version of RoBERTa (Zhuang et al., 2021)\u2014XLMRoBERTa (Conneau et al., 2020), which was trained on the data that included Latin. We treated Emotion Polarity Detection as a multiclass classification problem and fine-tuned the model accordingly. However, instead of full finetuning, we trained a stack of adapters: a language adapter and a task adapter. 
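A direct transcription of the Section 2.1 labeling rules into Python; the noun/adjective filtering step is omitted, the lexicon is assumed to map words to polarity scores in [-1, 1], and words missing from the lexicon are simply ignored in this sketch.

def heuristic_label(tokens, lexicon, threshold=0.1):
    scores = [lexicon[w] for w in tokens if w in lexicon]
    if not scores:
        return None                       # such sentences were removed upstream
    if all(s == 0 for s in scores):
        return "neutral"
    mean = sum(scores) / len(scores)
    if mean > threshold:
        return "positive"
    if mean < -threshold:                 # the text says "less than 0.1"; -0.1 is presumably intended
        return "negative"
    return "mixed"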
Training adapters involves adding a small number of trainable parameters to the model while freezing the rest of the parameters (Houlsby et al., 2019). In addition to making the training considerably faster, adapters mitigate overfitting and catastrophic forgetting, which 3https://platform.openai.com/docs/ models/gpt-4-and-gpt-4-turbo are common problems when dealing with small amounts of training data. We implemented our system by using the transformers4 and the adapters5 libraries. We expected the model to benefit from both mono-lingual and cross-lingual knowledge transfer; therefore, the training process comprised several stages. First, we fine-tuned a Latin language adapter on a publicly available Latin Corpus6 collected from the Latin Library7. In the next phase of training, we trained a task-specific classification adapter on the English IMDB movie reviews dataset8. The dataset contains only two labels: positive and negative. We created an adapter with a classification head with four classes, two of which remained unused during this stage. Finally, we stacked the task adapter previously trained on English on top of the language adapter, and continued training the task adapter on the annotated data in Latin. The language adapter was trained for ten epochs with a learning rate 1e-4. For further usage, we took the last checkpoint. The task adapter was trained on data in English for five epochs with a learning rate of 5e-4, and we also took the last checkpoint. Finally, for the submissions, we trained a model on both sets of annotated data for 50 epochs with a 5e-4 learning rate. We used the provided gold annotation example as the validation set for training and measured the F-score on it after each epoch. For submission, we selected the best checkpoint based on the validation F-score. 4. Results We made two submissions to the Emotion Polarity Detection task; the first one (TartuNLP_1) fine-tuned on the dataset with the heuristic labels, and the second one (TartuNLP_2) fine-tuned on the dataset with the LLM-generated labels. Both submissions obtained competitive results, with the model trained on the LLM-annotated labels (TartuNLP_2) taking the overall first place and the model trained on the heuristics-annotated data (TartuNLP_1) taking the second place on micro average F1-score and the third place on the macro average F1-score (see Table 2). While the scores obtained by the two models are quite close, there is frequent disagreement in their predictions: out of 294 test examples, the models 4https://github.com/huggingface/ transformers 5https://github.com/adapter-hub/ adapters 6https://github.com/mathisve/ LatinTextDataset 7https://www.thelatinlibrary.com/ 8https://huggingface.co/datasets/imdb 2 Model Micro Average F1 Macro Average F1 TartuNLP_2 0.34 0.29 TartuNLP_1 0.32 0.27 NostraDomina_1 0.22 0.28 NostraDomina_2 0.22 0.22 Table 2: The overall results of all teams. negative positive neutral mixed Predicted negative positive neutral mixed Actual 67 17 53 16 20 26 24 4 4 1 15 0 14 11 18 4 0 10 20 30 40 50 60 (a) TartuNLP_1 with lexicon-based heuristic labels. negative positive neutral mixed Predicted negative positive neutral mixed Actual 75 33 43 2 15 37 20 2 2 3 15 0 14 14 17 2 0 10 20 30 40 50 60 70 (b) TartuNLP_2 with GPT4-generated labels. Figure 1: Confusion matrices for both submissions. disagreed in 140 examples. In case of disagreement, the heuristicsand LLM-based models made correct predictions in 40 and 57 examples respectively. 
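A rough sketch of the three-stage adapter setup described in Section 3, assuming the adapters library's AutoAdapterModel and Stack interfaces; the model size (base vs. large), adapter names, and training loops are assumptions here, and the exact configuration is in the authors' repository.

from adapters import AutoAdapterModel
from adapters.composition import Stack

model = AutoAdapterModel.from_pretrained("xlm-roberta-base")

# Stage 1 (training loop not shown): a Latin language adapter trained with a
# masked-LM objective on the Latin Library corpus (10 epochs, lr 1e-4).
model.add_adapter("la")

# Stage 2: a task adapter with a 4-class head, first trained on English IMDB,
# where only the positive/negative classes are used (5 epochs, lr 5e-4).
model.add_adapter("polarity")
model.add_classification_head("polarity", num_labels=4)

# Stage 3: stack the task adapter on top of the language adapter and continue
# training only the task adapter on the annotated Latin data (50 epochs, lr 5e-4),
# keeping the checkpoint with the best F-score on the gold sample.
model.train_adapter("polarity")                  # freezes everything except "polarity"
model.active_adapters = Stack("la", "polarity")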
Meanwhile, in case of agreement, the models correctly predicted the labels of 72 examples out of 154. The confusion matrices for both models (see Figure 1) are similar. The models had the most trouble with the mixed class, while the negative class was the easiest to predict; this is in line with findings by Sprugnoli et al. (2023), who reported the lowest inter-annotator agreement for the mixed class, while the negative class had the highest agreement, assuming that the test data of the shared task was annotated in a similar manner. We performed a small ablation study on the labeled test data released by the organizers after evaluating the shared task results to measure the effect of the knowledge transfer methods used: 1) Monolingual knowledge transfer from the wider Latin corpus in training the language adapter; 2) Cross-lingual knowledge transfer from the English IMDB sentiment dataset in training the task adapter. The results of the study, shown in Table 3, were somewhat unexpected. First of all, we observe that the base model with no knowledge transfer is already as good or better than the submitted models adopting both types of knowledge transfer. Secondly, the monolingual knowledge transfer by training the language adapter improves the microaveraged F1-score with both types of labels. Finally, the model with the LLM-generated labels benefits more from the monolingual language adapter training resulting in a model that noticeably outperforms our initial submission. 5. Discussion The model with LLM-generated labels obtained better results than the model with lexicon-based heuristic labels, although the final results of both submitted systems are relatively close. However, the ablation study testing the effectiveness of both monolingual and cross-lingual knowledge transfer demonstrated that the model trained on the LLMannotated data can show even better results when omitting the cross-lingual transfer from English. This is despite the fact that the number of LLMannotated examples was nearly twice as small, suggesting that the LLM annotations are of higher quality than the labels based on lexicon-informed heuristics. Despite our model trained on the LLM-annotated data taking the overall first place, the absolute values are somewhat low and sometimes below the baseline. There might be several reasons related to the choice of the data source and the annotation scheme and procedures. First, many of the exam3 Ablation Micro Avg F1 Macro Avg F1 Val F1 Heuristic labels without knowledge transfer 0.33 0.26 0.48 Heuristic labels + Monolingual language transfer 0.34 0.25 0.48 Heuristic labels + Cross-lingual task transfer 0.30 0.23 0.55 Heuristic labels + Both (TartuNLP_1) 0.32 0.27 0.47 LLM labels without knowledge transfer 0.37 0.30 0.55 LLM labels + Monolingual language transfer 0.38 0.30 0.61 LLM labels + Cross-lingual task transfer 0.37 0.29 0.53 LLM labels + Both (TartuNLP_2) 0.34 0.29 0.48 Table 3: The results of the ablation study. ples appear to be expository or narrative in nature. It is difficult to assign a particular emotive polarity to the texts of that kind. Furthermore, Sprugnoli et al. (2023) mention that the annotators were instructed to assign labels on the sentence level. However, they were also presented with the wider context of the sentence. This leads us to believe that some labels are actually contextual, especially when the annotated sentence contains only a single word (for example, the sentence \"Mentior?\" is labeled as mixed). 
Secondly, the manual analysis of the examples shows that it is quite difficult to distinguish between mixed and neutral texts. This appears to be true for the trained models, as well. One possibility of improvement is to reframe the task as a multi-label classification problem instead. The model would be expected to predict the probabilities for the negative and positive labels independently. If the probability of both labels is low, the assigned label can be \"neutral\"; if both probabilities are high, the label can be \"mixed\"; otherwise, the label corresponding to the highest probability would be assigned. 6. Conclusion This paper described our solution to the Emotion Polarity Detection task of the EvaLatin Evaluation Campaign. Our submission obtained with a model trained on a dataset with LLM-generated labels achieved the overall first place, showing that LLMbased annotations can be useful for processing texts in Latin. 7. Bibliographical"
16
+ }
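The lexicon-based heuristic annotation described in the file above (Section 2.1) maps each sentence's mean word polarity to one of four labels. The sketch below is a minimal illustration of that rule set, not the authors' code: the lexicon contents, the tokenization, and the -0.1 lower bound for the negative class are assumptions (the source text says "less than 0.1" for the negative rule, which we read as intending -0.1).

```python
from statistics import mean

# Hypothetical polarity lexicon: lemma -> score in [-1.0, 1.0].
# The real EvaLatin lexicon format may differ; this is illustrative only.
LEXICON = {"bonus": 0.8, "malus": -0.9, "bellum": -0.6, "aqua": 0.0}

def heuristic_label(tokens):
    """Assign a sentence-level polarity label from lexicon scores.

    Rules, in the order given above:
      1) every covered word is neutral    -> "neutral"
      2) mean polarity in [-0.1, 0.1]     -> "mixed"
      3) mean polarity  >  0.1            -> "positive"
      4) mean polarity  < -0.1            -> "negative"  (assumed threshold)
    Returns None when no token is covered, mirroring the filtering step that
    drops sentences containing no lexicon nouns or adjectives.
    """
    scores = [LEXICON[t] for t in tokens if t in LEXICON]
    if not scores:
        return None
    if all(s == 0.0 for s in scores):
        return "neutral"
    m = mean(scores)
    if -0.1 <= m <= 0.1:
        return "mixed"
    return "positive" if m > 0.1 else "negative"

print(heuristic_label(["aqua", "bonus", "malus"]))  # mean ~ -0.03 -> "mixed"
```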
intro_8K/test_introduction_short_2405.01175v1.json ADDED
@@ -0,0 +1,17 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01175v1",
3
+ "title": "Uncertainty-aware self-training with expectation maximization basis transformation",
4
+ "abstract": "Self-training is a powerful approach to deep learning. The key process is to\nfind a pseudo-label for modeling. However, previous self-training algorithms\nsuffer from the over-confidence issue brought by the hard labels, even some\nconfidence-related regularizers cannot comprehensively catch the uncertainty.\nTherefore, we propose a new self-training framework to combine uncertainty\ninformation of both model and dataset. Specifically, we propose to use\nExpectation-Maximization (EM) to smooth the labels and comprehensively estimate\nthe uncertainty information. We further design a basis extraction network to\nestimate the initial basis from the dataset. The obtained basis with\nuncertainty can be filtered based on uncertainty information. It can then be\ntransformed into the real hard label to iteratively update the model and basis\nin the retraining process. Experiments on image classification and semantic\nsegmentation show the advantages of our methods among confidence-aware\nself-training algorithms with 1-3 percentage improvement on different datasets.",
5
+ "authors": "Zijia Wang, Wenbin Yang, Zhisong Liu, Zhen Jia",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV",
11
+ "cs.AI"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "Semantic AND Segmentation AND Image",
15
+ "gt": "Deep neural networks have been developed for many years and achieved great outcomes. However, its superiority relies on large-scale data labeling. In some real situations, like agriculture, it is difficult to obtain labeled data. To alleviate the burden of data labeling, many methods like domain adaption Chen et al. (2018, 2017b); Hoffman et al. (2018); Kim et al. (2019); Long et al. (2017a), and self-training Busto et al. (2018); Chen et al. (2019); Inoue et al. (2018); Lee et al. (2013); Saito et al. (2017a); Zou et al. (2018) have been proposed. For example, BERT Devlin et al. (2018) and GPT Radford et al. (2018, 2019); Brown et al. (2020), directly leverage a large amount of unlabeled data to pretrain the model. However, they cannot be generally applied in other areas. Among these methods, self training methodsScudder (1965); He et al. (2019) show promising results and it attracts much attention. Self training is a semi-supervised learning method Chapelle et al. (2009), which iteratively generates task specific pseudo-labels using a model trained on some labeled data. It then retrains the model using the labeled data. However, there are many issues in this bootstrap process, one of them is the noise in the pseudo-labeled data. Some researchers resolve this problem by learning from noisy labels Natarajan et al. (2013); Reed et al. (2014); Sukhbaatar et al. (2014); Yu et al. (2018). It can also be optimized by sample selection Mukherjee and Awadallah (2020a) or label smoothing Zou et al. (2019a). However, none of the previous works focused on data properties. Recently, a novel 36th Conference on Neural Information Processing Systems (NeurIPS 2022). arXiv:2405.01175v1 [cs.CV] 2 May 2024 Figure 1: Uncertainty-aware representations. In the right part of this figure, dashed curves represent the basis distributions while the blue curve represent the uncertainty-aware representation and uncertainty-aware labels of the data. The expectation of the labels could be used as the final label and the variance could be used to evaluate the uncertainty. Figure 2: One self training round. Pseudo-label generation (a) use EM algorithm to update the Gaussian basis and the classifier, then it generates some pseudo-labels with uncertainty information while the classifier is also trained in this stage. Then in model retraining stage (b), an uncertainty- aware training strategy is used to update the whole model (CNN and classifier). knowledge distillation Hinton et al. (2015) is proposed to distill the large dataset into a small one Sucholutsky and Schonlau (2019); Wang et al. (2018).The intuition of these methods is to find the key samples, like means in the feature spaces, to capture the data properties. These means could also be referred as basis of the data. They can be used to formulate the latent representations of the data in a probabilistic way using expectation maximization algorithm Li et al. (2019); Moon (1996). Therefore, as shown in figure 1, we propose a probabilistic model to extract uncertainty for self- training. Concretely, expectation maximization algorithm is adapted to get the probabilistic latent representations of the data and their corresponding pseudo-label distributions can be obtained. Then the samples are selected based on the variance of the (pseudo-)label distribution where distributions with lower variance represent good (pseudo-)labels. 
Finally, an uncertainty-aware training process is used to retrain the model using the new dataset where the expectation of distributions becomes the final pseudo-labels. Overall, our contributions in this paper are: 2 \u2022 Adapt Expectation Maximization algorithm to perform basis transformation on data features. We use neural networks for expectation maximization process to generate the latent prob- abilistic representations of the data using base transformation. These representations are low-rank while keeping the uncertainty information and deprecating the noises. \u2022 A novel regularizer is used for pseudo-label generation. Variance and classification loss are combined in the pseudo-label generation process to get the best pseudo-label distributions which contain comprehensive uncertainty information. \u2022 A basis generation process with basis regularizer is proposed. An attention-like module (ATT block) is introduced here to extract basis from the dataset or feature space. To make the basis more robust, we propose a basis regularizer to make all basis orthogonal, which could lower the rank of final latent representations.",
16
+ "main_content": "Self-training: Self-training is a wide and meaningful research area in semi-supervised learning Amini and Gallinari (2002); Yarowsky (1995); Grandvalet et al. (2005), one basic direction in this area is to train a student net using a teacher net Laine and Aila (2016); Tarvainen and Valpola (2017); Luo et al. (2018), some other works use a pseudo-label-based method for self-training Zou et al. (2018). In this paper, we choose to use pseudo-label-based method while keeping the uncertainty information in the label, an iterative training framework is proposed according to the self-training paradigm and uncertainty information to improve the network performance. Expectation-Maximization and Gaussian Mixture Model: Expectation-maximization (EM) Dempster et al. (1977) is to find solutions for latent variables models using likelihood maximization algorithm while Gaussian mixture model (GMM) Richardson and Green (1997) is also one kind of EM algorithm with specific constraints. Latent variables models with GMM could naturally capture the uncertainty information considering the data properties. In GMM, the data could be represented in the distribution form: p( \u02c6 xn) = K \ufffd k=1 \ufffd k=1 znkN(xn|\u00b5k, \u03a3k), (1) where the latent representation \u02c6 xn is viewed as a linear superposition of k Gaussian basis N(xn|\u00b5k, \u03a3k) and K is the basis number, znk represents the weight of this linear composition. In the GMM, znk could be updated in the E step: znew nk = N(\u00b5new k , \u03a3k) \ufffdK j=1 N(\u00b5new j , \u03a3j) , (2) basis is set to be identity matrix I in this paper, so the \u03a3 update \ufffd Notably, the \u03a3k in the Gaussian basis is set to be identity matrix I in this paper, so the \u03a3 update process is ignored in our algorithm. 3 Problem definition In this part, we formally define the uncertainty-aware self-training problem. Given a set of labeled samples {XL, YL} and a set of unlabeled data XU where XU and XL belong to same domain. Then the goal is to find a latent representation \u02c6 X and uncertainty-aware pseudo-labels YU by using a CNN feature extractor and a simple classifier. As shown in Figure 2, our problem could be solved by alternating the following steps Zou et al. (2019a): a) Pseudo-label generation: Given all the data, EM algorithm is used to generate the pseudo-labels with uncertainty information while the classifier is also trained in this process based on a combined loss to reduce the variance of pseudo-labels and optimize the classification accuracy for labeled data. 3 Figure 3: Whole training process for basis initialization net. Concretely, we train the model like classical machine learning training process and add a small module (attention block) to extract the processed weights which then become the initialized basis of EM algorithm. b) Network retraining. Data are sampled from the pseudo-labeled data based on the label variance, then the sampled data, along with the original labeled data, are used to train the whole classification network. 4 Uncertainty-aware self training To generate the pseudo-label for unlabeled data XU, we first use a base extraction net trained on labeled data to get basis for XL, then these bases could be used as the initialized \u00b5(0) of EM stage to speed up the convergence. Notably, as mentioned in related work section, the \u03a3 is set to be identity matrix and not updated in our algorithm considering a good basis should have identical variance. 
After the initialization, the EM algorithm is adapted to update the \u00b5 while the prediction net is simultaneously updated in the EM stage. Concretely, the details of base extraction net is shown in section 4.1, then two losses which are used in the EM stage to update the pseudo label generator parameters (classifier in figure 2 a) are demonstrated in section 4.2. After the definition of losses, the whole EM stage is described in section 4.2.1. 4.1 Basis Extraction net As shown in figure 3, we demonstrate the generalized basis initialization net. In this paper, we use classification as an example where the model trained in this stage has 3 components: \u2022 Feature extractor. In fig 3, CNN functions as the feature extractor. The weights we extracted are from this part. \u2022 Classifier. The fully connected layer could be the classifier in our setting, this part is for the original machine learning tasks like classification. \u2022 Weight extractor. An additional ATT block is added to extract the informative basis from the feature space. Clearly in training process, there are 2 tasks: classification and weights extraction. For classification, we use classical classification loss negative log likelihood loss (Lnll). Then for weight extraction part, we want our weights to be basis with low rank, so they need to be orthogonal: L2 = W \u2217W T \u2212I (3) Where W is the weight and I is the unity matrix. Therefore, the loss becomes: Ls1 = Lnll + L2 (4) 4 In Attention block (ATT block), given a matrix X \u2208RN\u00d7d which contains the features of all data samples, we try to extract the inherent low-rank properties of features by basis extraction. The basis extraction, says the problem to find the most informative projection of features, can be formally expressed as min\u00b5 \r \rX \u2212\u00b5Z \r \r F s.t.\u00b5T \u00b5 = I Z = \u00b5T X (5) where \u00b5 \u2208RK\u00d7d represents the basis matrix of the latent features. Through the process, the inherent data structure can be founded. However, as an unsupervised method, the problem is reported easily suffer from the model collapse problems. Considering the important label information in classification problems. then we can modify the problem above into a semi-supervised manner as min\u00b5 \r \rX \u2212\u00b5Z \r \r F + \r \rZZT \u2212Y Y T \r \r F + \r \r\u00b5T \u00b5 \u2212I \r \r F s.tZ = \u00b5T X (6) where Y donates all the labels. We can solve the problems above with standard gradient decent methods. Then, after stage I, we generated some basis which the latent space features of data samples effectively and precisely. 4.2 Pseudo-label generation Recall that the latent representation should be transformed into the pseudo label using a function f\u03b8. Given a latent representation \u02c6 xn will obey the fallowing distribution: p( \u02c6 xn) = K X k=1 znkN(xn|\u00b5k, \u03a3k), (7) where K is the number of basis, G(\u00b5, \u03a3) is the final distribution basis representation. Then the corresponding pseudo label for sample \u02c6 xn(m) is \u02c6 yn(m) = f\u03b8( \u02c6 xn(m)). With the will know reparameter trick, distribution p(yn) can be formally expressed as p(yn) = ZZ p(yn|xn)p(xn|\u03f5)dxnd\u03f5, \u03f5 \u223cN(0, I) (8) where p(xn|\u03f5) = K X k=1 znk\u00b5k + \u03a3k\u03f5 (9) Then, we could easily compute the variance V AR( \u02c6 yn) and expectation E( \u02c6 yn) using these sampled pseudo label. 
For latent representations in XL which have label yn, the loss function for f\u03b8 is: LossL = E( \u02c6 yn) \u2212yn (10) For latent representations in XU which don\u2019t have label, the loss is basically the variance, therefore the final loss for pseudo label prediction model is: L = \u03bbLossL + (1 \u2212\u03bb)V AR( \u02c6 yn), (11) where \u03bb = 1 if the latent representation is from XU and vice versa. 4.2.1 Expectation-Maximization Now we can get the ideally orthogonal base vectors from weights and use them as initialized \u00b5 in the base generation block and compute the loss. Then in this section, we formally define the adapted EM process. At first, we need to update znk: 5 znew nk = K(xn, \u00b5k) PK j=1 K(xn, \u00b5j) , (12) where K(a, b) is a kernel function to evaluate the similarity between a and b. Then in the algorithm, the t-th Z could be formulated as: z(t) = softmax(\u03bbX(\u00b5(t\u22121)) T ), (13) where \u03bb is manually set to control Z distribution. Then in the M step (likelihood maximization), we update the \u00b5 based on the weighted summation of X to make them in one space. Then the update process in t-th iteration could be formulated as: \u00b5(t) k = z(t) nkxn PN m=1 z(t) mk (14) After T iterations, we could get the final basis \u00b5k(T), \u03a3k(T) and the prediction model \u03b8k(T). The generated pseudo label for each sample is a distribution, which can be formulated as: yn = f\u03b8(xn), (15) where f\u03b8 is a linear transformation, so distribution of yn could be easily calculated. The whole process of pseudo-label generation is summarized in algorithm 1. Algorithm 1: Pseudo-label generation Input :XL, XU, YL, f\u03b8 Output :\u00b5k(T), \u03a3k(T), \u03b8k(T) Initialize \u00b5k(0), \u03a3k(0), \u03b8(0) for t \u21901 to T do update znk(t) (eq 13) compute \u02c6 xn(t) (eq 10) compute pseudo-label yn (eq 15) compute loss function (eq 11) update \u03b8(t) using back propagation update \u00b5k(t) (eq 14) return 4.3 Network retraining Because in section 4.1, we define the problem as a classification task, so in this part we simply use classification as our final task. Considering we have the distribution for pseudo-labels, there are mainly two steps in the retraining part sample selection and model retraining. Method A\u2192W D\u2192W W\u2192D A\u2192D D\u2192A W\u2192A Mean ResNet-50 He et al. (2016) 68.4\u00b10.2 96.7\u00b10.1 99.3\u00b10.1 68.9\u00b10.2 62.5\u00b10.3 60.7\u00b10.3 76.1 DAN Long et al. (2015) 80.5\u00b10.4 97.1\u00b10.2 99.6\u00b10.1 78.6\u00b10.2 63.6\u00b10.3 62.8\u00b10.2 80.4 RTN Long et al. (2016) 84.5\u00b10.2 96.8\u00b10.1 99.4\u00b10.1 77.5\u00b10.3 66.2\u00b10.2 64.8\u00b10.3 81.6 DANN Ganin et al. (2016) 82.0\u00b10.4 96.9\u00b10.2 99.1\u00b10.1 79.7\u00b10.4 68.2\u00b10.4 67.4\u00b10.5 82.2 ADDA Tzeng et al. (2017) 86.2\u00b10.5 96.2\u00b10.3 98.4\u00b10.3 77.8\u00b10.3 69.5\u00b10.4 68.9\u00b10.5 82.9 JAN Long et al. (2017b) 85.4\u00b10.3 97.4\u00b10.2 99.8\u00b10.2 84.7\u00b10.3 68.6\u00b10.3 70.0\u00b10.4 84.3 GTA Sankaranarayanan et al. (2018) 89.5\u00b10.5 97.9\u00b10.3 99.8\u00b10.4 87.7\u00b10.5 72.8\u00b10.3 71.4\u00b10.4 86.5 MRKLD+LRENT Zou et al. (2019b) 89.4\u00b10.7 98.9\u00b10.4 100\u00b10.0 88.7\u00b10.8 72.6\u00b10.7 70.9\u00b10.5 86.8 Ours 92.2\u00b10.5 98.2\u00b10.3 99.6\u00b10.4 87.2\u00b10.5 72.8\u00b10.3 72.4\u00b10.4 87.1 Table 1: Comparison on Office-31 experiments 6 Method Aero Bike Bus Car Horse Knife Motor Person Plant Skateboard Train Truck Mean Source Saito et al. 
(2017b) 55.1 53.3 61.9 59.1 80.6 17.9 79.7 31.2 81 26.5 73.5 8.5 52.4 MMD Long et al. (2015) 87.1 63 76.5 42 90.3 42.9 85.9 53.1 49.7 36.3 85.8 20.7 61.1 DANN Ganin et al. (2016) 81.9 77.7 82.8 44.3 81.2 29.5 65.1 28.6 51.9 54.6 82.8 7.8 57.4 ENT Grandvalet et al. (2005) 80.3 75.5 75.8 48.3 77.9 27.3 69.7 40.2 46.5 46.6 79.3 16 57 MCD Saito et al. (2018) 87 60.9 83.7 64 88.9 79.6 84.7 76.9 88.6 40.3 83 25.8 71.9 ADR Saito et al. (2017b) 87.8 79.5 83.7 65.3 92.3 61.8 88.9 73.2 87.8 60 85.5 32.3 74.8 SimNet-Res152Pinheiro (2018) 94.3 82.3 73.5 47.2 87.9 49.2 75.1 79.7 85.3 68.5 81.1 50.3 72.9 GTA-Res152 Sankaranarayanan et al. (2018) 77.1 MRKLD+LRENT Zou et al. (2019b) 88.0 79.2 61.0 60.0 87.5 81.4 86.3 78.8 85.6 86.6 73.9 68.8 78.1 Ours 89.1 81.7 82.1 57.7 83.2 79.7 83.9 77.2 86.2 82.7 83.8 65.9 79.4 Table 2: Comparison on VisDA17 experiments Method Backbone Road SW Build Wall Fence Pole TL TS Veg. Terrain Sky PR Rider Car Truck Bus Train Motor Bike mIoU Source 42.7 26.3 51.7 5.5 6.8 13.8 23.6 6.9 75.5 11.5 36.8 49.3 0.9 46.7 3.4 5 0 5 1.4 21.7 CyCADA Hoffman et al. (2018) DRN-26 79.1 33.1 77.9 23.4 17.3 32.1 33.3 31.8 81.5 26.7 69 62.8 14.7 74.5 20.9 25.6 6.9 18.8 20.4 39.5 Source 36.4 14.2 67.4 16.4 12 20.1 8.7 0.7 69.8 13.3 56.9 37 0.4 53.6 10.6 3.2 0.2 0.9 0 22.2 MCD Saito et al. (2018) DRN-105 90.3 31 78.5 19.7 17.3 28.6 30.9 16.1 83.7 30 69.1 58.5 19.6 81.5 23.8 30 5.7 25.7 14.3 39.7 Source 75.8 16.8 77.2 12.5 21 25.5 30.1 20.1 81.3 24.6 70.3 53.8 26.4 49.9 17.2 25.9 6.5 25.3 36 36.6 AdaptSegNet Tsai et al. (2018) DeepLabv2 86.5 36 79.9 23.4 23.3 23.9 35.2 14.8 83.4 33.3 75.6 58.5 27.6 73.7 32.5 35.4 3.9 30.1 28.1 42.4 AdvEnt Vu et al. (2019) DeepLabv2 89.4 33.1 81 26.6 26.8 27.2 33.5 24.7 83.9 36.7 78.8 58.7 30.5 84.8 38.5 44.5 1.7 31.6 32.4 45.5 Source 29.2 FCAN Zhang et al. (2018) DeepLabv2 46.6 Ours DeepLabv2 87 47.7 80.3 25.9 26.3 47.9 34.7 29 80.9 45.7 80.3 60 29.2 81.7 37.9 47.5 37.2 29.8 47.7 50.4 Table 3: Adaptation results of experiments transferring from GTA5 to Cityscapes. 4.3.1 Sample selection After pseudo-label generation process, the generated pseudo-labels are formulated in a distribution format (Gaussian form) shown in equation 8 which contains variance and mean information. Then for classification task, a class-dependent selection Mukherjee and Awadallah (2020b) could be performed to construct a dataset with hard labels DS,U = {xu,s \u2208Su,c, yu}. Here, Su,c \u2208XU is constructed based on the score rank of each sample, if the sample\u2019s pseudo-label has higher variance, then it\u2019s more likely to be discarded. For yu, one can simply use its mean as its hard pseudo label, but here we want to accurately model the uncertainty information. Therefore, we randomly sample hard labels from the pseudo-label distribution to incorporate the uncertainty information encoded in the distribution. 4.3.2 Uncertainty aware retraining After the sample selection, a retraining dataset is derived as Dr = {XL, YL} S{xu,s, yu}, then for the retraining part, the final goal is to minimize following loss: minW LL + LU V ar(y) (16) Where W is the model parameter, LL and LU represent the task loss for labeled data and unlabeled data respectively, here in this classification example, they represent same classification loss like cross entropy. V ar(y) represents the sample uncertainty, for samples x \u2208XU, variance is same to the variance in the distribution to catch the uncertainty information of teacher model. 
In this setting, samples with higher variance, which basically means that the previous model is not confident on this sample, have lower weights in the back propagation process of training. After the retraining, one round shown in figure 2 is completed. Then we simply repeat the whole process until the ideal results are derived. Method Backbone Road SW Build Wall* Fence* Pole* TL TS Veg. Sky PR Rider Car Bus Motor Bike mIoU mIoU* Source DRN-105 14.9 11.4 58.7 1.9 0 24.1 1.2 6 68.8 76 54.3 7.1 34.2 15 0.8 0 23.4 26.8 MCD Saito et al. (2018) 84.8 43.6 79 3.9 0.2 29.1 7.2 5.5 83.8 83.1 51 11.7 79.9 27.2 6.2 0 37.3 43.5 Source DeepLabv2 55.6 23.8 74.6 6.1 12.1 74.8 79 55.3 19.1 39.6 23.3 13.7 25 38.6 AdaptSegNetTsai et al. (2018) 84.3 42.7 77.5 4.7 7 77.9 82.5 54.3 21 72.3 32.2 18.9 32.3 46.7 Source ResNet-38 32.6 21.5 46.5 4.8 0.1 26.5 14.8 13.1 70.8 60.3 56.6 3.5 74.1 20.4 8.9 13.1 29.2 33.6 CBST Zou et al. (2019b) 53.6 23.7 75 12.5 0.3 36.4 23.5 26.3 84.8 74.7 67.2 17.5 84.5 28.4 15.2 55.8 42.5 48.4 AdvEnt Vu et al. (2019) DeepLabv2 85.6 42.2 79.7 8.7 0.4 25.9 5.4 8.1 80.4 84.1 57.9 23.8 73.3 36.4 14.2 33 41.2 48 Source DeepLabv2 64.3 21.3 73.1 2.4 1.1 31.4 7 27.7 63.1 67.6 42.2 19.9 73.1 15.3 10.5 38.9 34.9 40.3 Ours 68 29.9 76.3 10.8 1.4 33.9 22.8 29.5 77.6 78.3 60.6 28.3 81.6 23.5 18.8 39.8 42.6 48.9 Table 4: Adaptation results of experiments transferring from SYNTHIA to Cityscapes. 7 5 Experiment In this section, we demonstrate the advantages of proposed methods by comparing the performance of proposed methods with the SOTA confidence-aware self-training strategy on 2 tasks image classification and image segmentation. To make the results comparative, we basically follow the settings in Zou et al. (2019b) which achieves SOTA results in confidence-aware self-training domain, details will be illustrated in following sections. 5.1 Dataset and evaluation metric 5.1.1 Image classification. For domain adaption in image classification task, VisDA17 Peng et al. (2018) and Office-31 Saenko et al. (2010) are used to evaluate the algorithm performance. In VisDA17, there are 12 classes with 152, 409 virtual images for training while 55, 400 real images from MS-COCO Lin et al. (2014) are target dataset. For Office-31, 31 classes collected from Amazon(A, 2817 images), Webcam(W, 795 images) and DSLR(D, 498 images) domains are included. We strictly follow the settings in Saenko et al. (2010); Sankaranarayanan et al. (2018); Zou et al. (2019b) which evaluate the domain adaption performance on A \u2192W, D \u2192W, W \u2192D, A \u2192D, D \u2192A, W \u2192A. For evaluation, we simply use the accuracy for each class and mean accuracy across all classes as the evaluation metric. 5.1.2 Semantic segmentation For domain adaption in image segmentation tasks, 2 virtual datasets GTA5 Richter et al. (2016), SYNTHIA Ros et al. (2016) and 1 real dataset Cityscapes Cordts et al. (2016) are used to evaluate the performance of proposed method. Concretely, GTA5 contains 24, 966 images based on the game GTA5, SYNTHIA-RAND-CITYSCAPES (subset of SYNTHIA) has 9400 images. For the experiment setup, we also strictly follow Hoffman et al. (2018); Tsai et al. (2018); Zou et al. (2019b) which use Cityscapes as target domain and view virtual datasets (GTA5 and CITYSCAPES) as training domain. For evaluation, the Intersection over Union (IoU) is used to measure the performance of models where. 5.2 Experiment setup To make our results comparable with current SOTA confidence-aware method, we adapt the settings in Zou et al. (2019b). 
Besides, all the training process is performed on 4 Tesla V100 GPUs which have 32GB memory. Image Classification: ResNet101/ ResNet-50 He et al. (2016) are used as backbones, which are pretrained based on ImageNet Deng et al. (2009). Then in source domain, we fine-tune the model using SGD while the learning rate is 1 \u00d7 10\u22124, weight decay is set to be 5 \u00d7 10\u22125, momentum is 0.8 and the batch size is 32. In the self-training round, the parameters are same except for the different learning rates which are 5 \u00d7 10\u22124. Image Segmentation: In image segmentation part, we mainly use the older DeepLab v2 Chen et al. (2017a) as backbone to align with previous results. DeepLab v2 is first pretrained on ImageNet and then finetuned on source domain using SGD. Here we set learning rate as 5 \u00d7 10\u22124, weight decay is set to be 1 \u00d7 10\u22125, momentum is 0.9, the batch size is 8 while the patch size is 512 \u00d7 1024. In self-training, we basically run 3 rounds which has 4 retraining epochs. 5.3 Experiment results Comparison on image classification. As shown in table 1 and table 2, compared with previous SOTA result in confidence-aware self-training and other self-training algorithms, although our algorithm does not achieve best performance in all sub-tasks, the mean results (87.1 and 79.4 for Office-31 and VisDA17 respectively) achieves SOTA while our results (derivations and means) are obtained from 5 runs of the experiment. Comparison on image segmentation.As shown in table 3 and 4, in semantic segmentation task, our results of average IoU (mIoU) achieves SOTA among confidence-aware self-training algorithms. 8 6 Conclusion and future work In this paper, we propose a new confidence-aware self-training framework and compare our algorithm with current SOTA results of confidence-aware self-training which proves that our pseudo-label could better catch the uncertainty information and thus alleviate the over-confident issue in self-training. Furthermore, the idea underlying our method could be used in many self-training related tasks while the over-confidence is a common question faced by most self-training algorithms."
17
+ }
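The pseudo-label generation stage of the paper above alternates a soft assignment of latent features to Gaussian basis vectors (Eq. 13) with a weighted-mean update of those bases (Eq. 14). The NumPy sketch below illustrates that loop under stated assumptions: the kernel is taken to be a scaled dot product, and the iteration count, temperature, and shapes are placeholders rather than the authors' settings.

```python
import numpy as np

def em_basis_update(X, mu, lam=1.0, n_iters=3):
    """E/M loop sketch for the adapted EM stage (cf. Eqs. 13-14 above).

    X  : (N, d) latent features of all samples.
    mu : (K, d) initial basis vectors, e.g. from the basis-extraction net.
    lam: temperature controlling how soft the assignments are.
    """
    for _ in range(n_iters):
        # E-step: Z = softmax(lam * X mu^T), a soft assignment over the K bases.
        logits = lam * X @ mu.T                      # (N, K)
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        Z = np.exp(logits)
        Z /= Z.sum(axis=1, keepdims=True)
        # M-step: each basis becomes the assignment-weighted mean of the samples.
        mu = (Z.T @ X) / Z.sum(axis=0)[:, None]
    return mu, Z

X = np.random.randn(100, 16)   # toy latent features
mu0 = np.random.randn(8, 16)   # toy initial bases
mu, Z = em_basis_update(X, mu0)
print(mu.shape, Z.shape)       # (8, 16) (100, 8)
```

In the paper, the resulting soft representations, together with the classifier f_theta, give a distribution over pseudo-labels whose variance is used both to filter samples and to down-weight uncertain ones during retraining.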
intro_8K/test_introduction_short_2405.01266v1.json ADDED
@@ -0,0 +1,17 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01266v1",
3
+ "title": "MFTraj: Map-Free, Behavior-Driven Trajectory Prediction for Autonomous Driving",
4
+ "abstract": "This paper introduces a trajectory prediction model tailored for autonomous\ndriving, focusing on capturing complex interactions in dynamic traffic\nscenarios without reliance on high-definition maps. The model, termed MFTraj,\nharnesses historical trajectory data combined with a novel dynamic geometric\ngraph-based behavior-aware module. At its core, an adaptive structure-aware\ninteractive graph convolutional network captures both positional and behavioral\nfeatures of road users, preserving spatial-temporal intricacies. Enhanced by a\nlinear attention mechanism, the model achieves computational efficiency and\nreduced parameter overhead. Evaluations on the Argoverse, NGSIM, HighD, and\nMoCAD datasets underscore MFTraj's robustness and adaptability, outperforming\nnumerous benchmarks even in data-challenged scenarios without the need for\nadditional information such as HD maps or vectorized maps. Importantly, it\nmaintains competitive performance even in scenarios with substantial missing\ndata, on par with most existing state-of-the-art models. The results and\nmethodology suggest a significant advancement in autonomous driving trajectory\nprediction, paving the way for safer and more efficient autonomous systems.",
5
+ "authors": "Haicheng Liao, Zhenning Li, Chengyue Wang, Huanming Shen, Bonan Wang, Dongping Liao, Guofa Li, Chengzhong Xu",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "cs.RO",
9
+ "cats": [
10
+ "cs.RO",
11
+ "cs.AI"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "Temporal AND Graph",
15
+ "gt": "The integration of autonomous vehicles (AVs) with human- driven vehicles and pedestrians necessitates advanced trajec- tory prediction models. Central to these models is their ability to predict the future trajectories of various road users, lever- aging historical data. Despite significant advancements, a pivotal challenge persists: modeling the often unpredictable driving behaviors of road users. These behaviors, shaped by a blend of traffic dynamics, road layouts, and individual cognitive inclinations, offer a unique window into the real-time decision-making processes of humans in complex traffic settings [Schwarting et al., \u2217Authors contributed equally; \u2020Corresponding author. 2019; Li et al., 2023]. Our research has illuminated the piv- otal role of understanding human behavioral patterns in tra- jectory predictions. Recognizing and predicting human driv- ing behavior is not merely about tracing a vehicle\u2019s path; it\u2019s about understanding the cognitive processes that dictate those paths. By understanding behaviors, AVs can anticipate sud- den changes in human-driven vehicles or pedestrian move- ments, leading to safer co-navigation. Furthermore, behavior- focused predictions can aid in scenarios where traditional data might be ambiguous or incomplete, relying on human behavioral patterns to fill in the gaps. Through the integration of decision-making theories, cognitive psychology, and traf- fic behavior studies [Yin et al., 2021], trajectory prediction models can be enriched, fostering a harmonious coexistence of AVs and human-driven entities on the road. High Definition (HD) maps, conventionally considered pivotal for trajectory prediction, pose intrinsic challenges. Their creation is resource-intensive, and in the rapidly chang- ing milieu of urban environments, they can quickly become obsolete [Gao et al., 2020; Ren et al., 2024]. This has given rise to map-free models, a paradigm shift that operates in- dependently of HD map data. However, while these mod- els adeptly handle dynamic environments, they may lack the granularity provided by comprehensive road network data. This gap is aptly addressed by the advent of deep learning techniques, notably Graph Neural Networks (GNNs) [Liang et al., 2020; Gao et al., 2020]. GNNs, adept at assimilating extensive data from road users, offer nuanced insights into their interactions and the overarching socio-cognitive context, thereby compensating for the lack of detailed HD maps. Our contributions are as follows: 1. An advanced map-free architecture for trajectory pre- diction that obviates the need for HD maps, resulting in significant computational savings. 2. A novel dynamic geometric graph that captures the essence of continuous driving behavior, circumvent- ing the limitations of manual labeling. We have integrated metrics and behavioral criteria, drawing from traffic psychology, cognitive neuroscience, and decision-making frameworks, to craft a model that of- fers more than mere predictions\u2014it elucidates. arXiv:2405.01266v1 [cs.RO] 2 May 2024 3. Benchmark assessments underscore MFTraj\u2019s superior- ity. Demonstrating a commendable performance eleva- tion of nearly 5.0% on the Argoverse, NGSIM, HighD, and MoCAD datasets, its robustness is further accen- tuated by its consistent performance even with a data shortfall of 25%- 62.5%, underscoring its adaptability and profound understanding of diverse traffic dynamics.",
16
+ "main_content": "Recent years have seen an explosion of research in trajectory prediction for autonomous driving (AD), thanks to the burgeoning field of deep learning. These cutting-edge approaches [Liao et al., 2024a; Messaoud et al., 2021; Tian et al., 2024; Liao et al., 2024f] have demonstrated superior performance in complex traffic scenarios. However, they often encounter challenges in adequately representing spatial relationships such as graphic inputs of the scene. To address this, HD maps, rich in scene and semantic information, have attracted increasing research attention. Considering that Convolutional Neural Networks (CNNs) excel at extracting spatial features from input data, such as spatial features from inputs like vectorized maps or raster images, several studies [Zhao et al., 2021; Gilles et al., 2021; Khandelwal et al., 2020] have merged sequential networks with CNNs. This hybrid approach effectively captures both temporal and spatial features from HD maps, providing enriched contextual information for motion prediction tasks. Recent research has explored Graph Neural Networks (GNNs) [Liang et al., 2020; Zeng et al., 2021; Mohamed et al., 2020; Liao et al., 2024c], such as Graph Convolutional Networks (GCNs) and Graph Attention Networks (GATs), Transformers [Zhou et al., 2022; Liao et al., 2024d; Chen et al., 2022a; Ngiam et al., 2022], and generative models such as Adversarial Networks (GANs) [Zhou et al., 2023] and Variational Auto Encoders (VAEs) [Walters et al., 2021; Liao et al., 2024e] for direct encoding of HD maps. For example, VectorNet [Gao et al., 2020] simplifies maps by extracting key points from lane splines and encoding them using GNNs. Moreover, LaneGCN [Liang et al., 2020] and TPCN [Ye et al., 2021] build lane graphs using central line segments, employing GCN to capture dynamic interaction. In addition, HiVT [Zhou et al., 2022] and SSLLanes [Bhattacharyya et al., 2023] represent map elements by relative positions, improving the transformer model for time series trajectory data. Despite their effectiveness, the limited availability of HD maps and the extensive resources needed for their creation and maintenance impede the widespread adoption of autonomous driving, particularly in areas lacking current HD map coverage. In response to these challenges, this study introduces a map-free model that utilizes generative models and a VRNN [Chung et al., 2015] to account for the variable nature of traffic scenarios. We propose a novel adaptive GCN to model the complexity of real-time interactions in traffic scenes. To streamline model complexity, we apply the Linformer framework [Wang et al., 2020] for a balance between computational efficiency and prediction accuracy in AD applications. 3 Methodologies 3.1 Inputs and Outputs This study focuses on predicting the trajectory of the target vehicle in interactive driving scenarios, considering all vehicles within the AV\u2019s (termed the target vehicle) sensing range. At time t, the ego vehicle anticipates the target vehicle\u2019s trajectory for the upcoming tf steps. Our model, drawing from historical data X, considers past trajectories of both the target vehicle (indexed by 0) and its surrounding agents (indexed from 1 to n) over a predefined horizon th. Formally, X = \ufffd Xt\u2212th:t 0 ; Xt\u2212th:t i \u2200i \u2208[1, n] \ufffd (1) \u2212th:t = {xt\u2212th 0 , xt\u2212th+1 0 , . . . 
, xt 0} and Xt\u2212th:t i = \u2212t+1 represent the historical trajectories \ufffd \u2200 \u2208 \ufffd where Xt\u2212th:t 0 = {xt\u2212th 0 , xt\u2212th+1 0 , . . . , xt 0} and Xt\u2212th:t i = {xt\u2212th i , xt\u2212th+1 i , . . . , xt i} represent the historical trajectories of the target vehicle and those of the surrounding agents from time t \u2212th to t, respectively. The output of the model is the future trajectory of the target vehicle during the prediction horizon tf: Y = \ufffd yt+1 0 , yt+2 0 , . . . , yt+tf \u22121 0 , yt+tf 0 \ufffd (2) s the 2D coordinates of the target vehicle at time t. \ufffd \ufffd where yt 0 is the 2D coordinates of the target vehicle at time t. Our model uniquely operates without relying on maps, using only the historical trajectory data of the target and nearby agents. The model needs an input sequence of length th and remains functional even if the historical data is not perfectly sequential. For sporadic missing data points, due to reasons like occlusions or sensor glitches, we employ simple linear imputation or similar methods. In addition, Figure 1 illustrates our proposed model\u2019s hierarchical design. Following the encoder-decoder format, it features four key components: behavior-aware, position-aware, interaction-aware modules, and the residual decoder. We delve into each module\u2019s specifics below. 3.2 Behavior-aware Module Moving away from traditional methods that classify driver behaviors into fixed and discrete categories, we offer a more adaptable and flexible solution with our behavior-aware module, which utilizes a continuous portrayal of behavioral attributes. This approach draws inspiration from the multipolicy decision-making framework [Markkula et al., 2020], integrating elements of traffic psychology [Toghi et al., 2022] and dynamic geometric graphs (DGGs) [Boguna et al., 2021] to effectively capture intricate driving behaviors amid ongoing driving maneuvers and evolving traffic conditions. Dynamic Geometric Graphs. We first model the interactions of different agents with a DGG. At time t, the graph Gt is defined as: Gt = {V t, Et} (3) where V t = {vt , vt . . . , vt } is the set of nodes, vt is the i{} where V t = {vt 0, vt 1 . . . , vt n} is the set of nodes, vt i is the ith node representing the i-th agent, Et = {et 0, et 1 . . . , et n} is the set of edges representing potential interactions between agents, and et i is the edge between the node vt i and other agents that have potential influences with it. An interaction is assumed to exist only if two agents, e.g., vi and vj, are in close proximity to each other, i.e., their shortest distance Figure 1: Architecture of the proposed trajectory prediction model. d \u0000vt i, vt j \u0001 is less than or equal to a predefined threshold r. Therefore, we define et i = {vt ivt j | (j \u2208N t i )} (4) where N t i = \b vt j \u2208V t\\ {vt i} | d \u0000vt i, vt j \u0001 \u2264r, i \u0338= j \t . Correspondingly, the symmetrical adjacency matrix At of Gt can be given as: At(i, j) = \u001ad \u0000vt i, vt j \u0001 if d \u0000vt i, vt j \u0001 \u2264r, i \u0338= j 0 otherwise (5) Centrality Measures. Centrality measures are graphtheoretic metrics that are widely used to understand various aspects of network structures. These measures provide valuable insights into the importance, influence, and connectivity of nodes or vertices within a graph. As shown in Table 1, we use six centralities to characterize driving behavior. 
These measures allow evaluation of individual agent behavior within the DGG and reveal key agents and overall connectivity of the traffic graph. (1) Degree Centrality: Reflects the number of connections an agent has, correlating with its influence on and susceptibility to others. It\u2019s defined as: Jt i (D) = \f \fN t i \f \f + Jt\u22121 i (D) (6) where |Nit| denotes the total elements in N t i . (2) Closeness Centrality: Indicates an agent\u2019s reachability, suggesting its potential influence over others. Defined by: Jt i (C) = |N t i | \u22121 P \u2200vt j\u2208N t i d \u0000vt i, vt j \u0001 (7) (3) Eigenvector Centrality: Measures an agent\u2019s importance by considering both quantity and quality of connections. Expressed as: Jt i (E) = P \u2200vt j\u2208N t i d \u0000vt i, vt j \u0001 \u03bb (8) where \u03bb is the eigenvalue, indicating the collective influence exerted by an agent and its network. (4) Betweenness Centrality: Highlights agents that act as bridges or bottlenecks in traffic, crucial in congested situations. Formulated as: Jt i (B) = X \u2200vt s,vt k\u2208V t \u03c3j,k(vt i) \u03c3j,k (9) where V t denotes the set of all agents present in the scene, \u03c3j,k signifies the total number of shortest paths between agent vt j and agent vt k, and \u03c3j,k(vi) represents the number of those paths traversing the agent vt i. (5) Power Centrality: Identifies agents in recurrent interactions, hinting at traffic patterns. Defined by: Jt i (P) = X k Ak ii k! (10) where Ak ii denotes the i-th diagonal element of the adjacency matrix raised to the k-th power, while k! signifies the factorial of k, shedding light on its contribution to the network\u2019s Table 1: Centrality measures and their interpretations. Centrality Measures Magnitude (Original Measure) Gradient (1st Derivative) Curvature (2nd Derivative) Degree (J t i (D)) Closeness (J t i (C)) Agent\u2019s potential and capability for interaction in the traffic environment Agent\u2019s sensitivity to traffic density variations Driver\u2019s capability to react to fluctuating traffic conditions Eigenvector (J t i (E)) Betweenness (J t i (B)) Agent\u2019s significance in dynamic traffic scenarios Variation in agent\u2019s importance in dynamic traffic scenes Influence of driver behavior alterations on overall traffic conditions Power (J t i (P)) Katz (J t i (K)) Extent of influence an agent exerts on others via direct and indirect interactions at time t Agent\u2019s adaptability to shifts in driving behaviors Agent\u2019s capacity to modify interactions in complex and congested traffic scenarios structural integrity and dynamism. (6) Katz Centrality: Emphasizes both direct and distant interactions of an agent, capturing intricate driving patterns. Given as: Jt i (K) = X k X j \u03b1kAk ij + \u03b2k, \u2200i, j \u2208[0, n], where \u03b1k < 1 \u03bbmax (11) where n represents the number of agents in the real-time traffic scenario, \u03b1k is the decay factor, \u03b2k denotes the weight assigned to the immediate neighboring agents, and Ak ij is the i,j-th element of the k-th power of the adjacency matrix. Behavior-aware Criteria. Inspired by the relationship between velocity, acceleration, and jerk, we introduce behavioral criteria. These criteria, consisting of Behavior Magnitude Index (BMI) Ct i, Behavior Tendency Index (BTI) Lt i, and Behavior Curvature Index (BCI) It i, evaluate different driving behaviors for the target vehicle and its surroundings. 
They compute thresholds, gradients, and concavities of centrality measures that reflect behaviors such as lane changes, acceleration, deceleration, and driving style. We find that behaviors with significant, fluctuating centrality values in short time frames are likely to affect nearby agents, consistent with human risk perception and time-sensitive decision-making. They are respectively given as follows: Ct i = h\f \f \fJ t i (D) \f \f \f , \f \f \fJ t i (C) \f \f \f , \f \f \fJ t i (E) \f \f \f , \f \f \fJ t i (B) \f \f \f , \f \f \fJ t i (P ) \f \f \f , \f \f \fJ t i (K) \f \f \f iT (12) Lt i = \"\f \f \f \f \f \u2202J t i (D) \u2202t \f \f \f \f \f , \f \f \f \f \f \u2202J t i (C) \u2202t \f \f \f \f \f , \u00b7 \u00b7 \u00b7 , \f \f \f \f \f \u2202J t i (K) \u2202t \f \f \f \f \f #T (13) It i = \"\f \f \f \f \f \u22022J t i (D) \u22022t \f \f \f \f \f , \f \f \f \f \f \u22022J t i (C) \u22022t \f \f \f \f \f , \u00b7 \u00b7 \u00b7 , \f \f \f \f \f \u2202J t i (K) \u22022t \f \f \f \f \f #T (14) Behavior Encoder. Incorporating behavior-aware criteria, symbolized as J = {Ct\u2212th:t 0:n , Lt\u2212th:t 0:n , It\u2212th:t 0:n }, our behavior encoder comprises VRNN and GRU components. This encoder succinctly models relationships between random variables across time, yielding precise sequential behavioral features \u00af Ot\u2212th:t behavior. Formally: \u00af Ot\u2212th:t behavior = \u03d5GRU (\u03d5VRNN(J )) (15) where \u03d5GRU and \u03d5VRNN denote the GRU and VRNN functions. This encoder captures human driving patterns and their temporal dynamics. Next, behavioral features \u00af Ot\u2212th:t behavior, fuse with positional features from the position-aware module, subsequently processed by the interaction-aware module for comprehensive feature extraction. 3.3 Position-aware Module Contrary to traditional methods that emphasize absolute positions [Wang et al., 2022a; Gao et al., 2020] or fixed grids [Deo and Trivedi, 2018], our model emphasizes relative positions. The position-aware module captures individual and group spatial dynamics, interpreting the scene\u2019s geometric nuances. These insights are then encoded to produce positional features. Pooling Mechanism. Our pooling mechanism captures dynamic position data from the traffic environment around the target vehicle, utilizing individual stk i and multi-agent stk i,j position vectors. This strategy gleans historical trajectories and spatial relationships without depending on fixed positions or grids. The relationships are formulated as: stk i = {ptk i \u2212ptk\u22121 i }, ptk i,j = {ptk i \u2212ptk j } (16) Position Encoder. The position encoder employs an LSTM to transform discrete position vectors into continuous spatiotemporal representations, thereby enhancing temporal and spatial interactions between agents and map elements. Given the historical position vectors for the target and surrounding agents, it embeds them temporally: \u00af Ot\u2212th:t position = \u03d5LSTM \u0000\u00af ht\u2212th:t\u22121 i , st\u2212th:t\u22121 i , pt\u2212th:t\u22121 i,j \u0001 (17) where \u00af Ot\u2212th:t\u22121 position is the positional features output by the position encoder, and \u03d5LSTM denotes the two-layer LSTM encoder, and \u00af ht\u2212th:t\u22121 i represents the hidden position state updated by the encoder on a frame-by-frame basis, with the weights of the LSTM shared among all agents. 
3.4 Interaction-aware Module Effective trajectory prediction in complex traffic scenarios hinges upon a system\u2019s ability to comprehend and anticipate interactions among vehicles. Classic GCN-based methods, although proficient at encapsulating geometric inter-agent relationships, often exhibit limitations in fluid traffic conditions due to their fixed adjacency matrix configurations. To tackle this, we introduce a novel adaptive structure-aware GCN, taking cues from advancements in crystal graphs and material design. This novel approach stands out by its capability to craft spatial feature matrices dynamically, adjusting to the number of agents observed in real-time, which ensures a more fluid and adaptable response to traffic changes. A graphical illustration of this concept is provided in Figure 2. Breaking away from conventional models that predominantly lean on distance-based positional features, our design holistically blends continuous behavioral features into Figure 2: Overview of our adaptive structure-aware GCN. The real-time trajectories of the target and observed agents are captured using a topology graph to form a feature matrix. This matrix undergoes aggregation, updating, and iteration within the GCN. As new agents are observed in real-time, the GCN dynamically adjusts its topology, updating features for the added nodes. its graph structure. This not only addresses the multifaceted spatio-temporal interactions but also considers the intricate physical interplays between agents, offering a noticeable enhancement in prediction precision. Our design blueprint encompasses an adaptive convolutional neural network rooted in a fully connected interaction multigraph. This structure is adept at simultaneously capturing sequential behavioral and dynamic positional interactions among agents. The multigraph\u2019s operational layer is distinguished by nodes, which symbolize sequential behavioral features \u00af Ot\u2212th:t behavior and edges representing positional features \u00af Ot\u2212th:t position, as defined below: \u02dc zk i = F \u0010 \u02dc zk\u22121 i , \u02dc rk\u22121 i,j \u0011 = \u02dc zk\u22121 i + \u03d5sgm \u0010 \u02dc rk\u22121 i,j Wk\u22121 g + bk\u22121 g \u0011 \u2299\u03d5spu \u0010 \u02dc rk\u22121 i,j Wk\u22121 h + bk\u22121 h \u0011 (18) where the variable k denotes the layer within the GCN, k \u2208[1, 3], and the symbols \u2299, \u03d5sgm, and \u03d5spu represent the element-wise product, sigmoid activation function, and softplus activation function, respectively. Consequently, Wk\u22121 g and Wk\u22121 h are learnable matrices, bk\u22121 g , and bk\u22121 h are the bias of the k\u2212th layer. \u02dc rk\u22121 i,j can be represented as follows: \u02dc rk\u22121 i,j = \u0000\u02dc zk\u22121 i \r \r\u02dc zk\u22121 j \r \r pt\u2212th:t i,j \u0001 (19) Additionally, the initial feature vector \u02dc z(0) i is defined as follows: \u02dc z(0) i = \u0010 \u00af Ot\u2212th:t behavior\u2225\u00af Ot\u2212th:t position \u0011 (20) Furthermore, the output of the adaptive structure-aware GCN for the target vehicle i is then passed to Linformer, an extension architecture of Transformer, Furthermore, the output of the adaptive structure-aware GCN for the target vehicle i is subsequently fed into a lightweight transformer-based framework\u2014\u2014 Linformer [Wang et al., 2020], to efficiently quantify and compute the dynamic attention weight vectors for the surrounding agents, ultimately output the contextual mapping \u00af O. 
This allows for a favorable trade-off between accuracy and efficiency. 3.5 Residual Decoder The residual decoder, comprising a linear residual and projection layer, processes node vectors to forecast the target vehicle\u2019s future trajectory, producing the prediction Yt:t+tf 0 . This is given by: Y = Yt:t+tf 0 = F\u03b8 \u0000F\u03b8( \u00af O) \u0001 (21) such that, F\u03b8(\u00b7) = \u03d5ReLU [\u03d5GN (\u03d5Linear(\u00b7))] (22) where \u03d5ReLU denotes the ReLU activation function, and \u03d5GN denotes the Group Normalization (GN) function [Wu and He, 2018], which is applied to improve the training stability of our model. In addition, the \u03d5Linear corresponds to the fully connected layer, while F\u03b8 denotes the residual decoder function. Table 2: Performance comparison of various models on complete and missing datasets for Argoverse. Models use either HD map or vectorized map (Map) and trajectory (Traj.) data or solely Trajectory data, with some not specifying (\u2019-\u2019). Metrics include minADE (k=1), minFDE (k=1), and MR (k=1). Bold and underlined values represent the best and second-best performance in each category. Model Input minADE (m)\u2193 minFDE (m)\u2193 MR (%)\u2193 Argoverse Baseline [Chang et al., 2019] Map + Traj. 2.96 6.81 81.00 Constant Velocity [Chang et al., 2019] 3.55 7.89 75.00 SGAN [Gupta et al., 2018] 3.61 5.39 87.11 TPNet [Fang et al., 2020] Map + Traj. 2.33 5.29 PRIME [Song et al., 2022] Map + Traj. 1.91 3.82 58.67 Uulm-mrm (2rd) [Chang et al., 2019] Map + Traj. 1.90 4.19 63.47 Jean (1st) [Mercat et al., 2020] Map + Traj. 1.74 4.24 68.56 WIMP [Khandelwal et al., 2020] Map + Traj. 1.82 4.03 62.88 Scene-Transformer [Ngiam et al., 2022] Map + Traj. 1.81 4.06 59.21 TNT [Zhao et al., 2021] Map + Traj. 1.77 3.91 59.72 mmTransformer [Liu et al., 2021] Map + Traj. 1.77 4.00 61.78 CtsConv (Aug.) [Walters et al., 2021] Map + Traj. 1.77 4.05 HOME [Gilles et al., 2021] Map + Traj. 1.72 3.73 58.40 LaneGCN [Liang et al., 2020] Map + Traj. 1.71 3.78 59.05 GOHOME [Gilles et al., 2022] Map + Traj. 1.69 3.65 57.21 LaneRCNN [Zeng et al., 2021] Map + Traj. 1.68 3.69 56.85 DenseTNT [Gu et al., 2021] Map + Traj. 1.68 3.63 58.43 VectorNet [Gao et al., 2020] Map + Traj. 1.66 3.67 TPCN [Ye et al., 2021] Map + Traj. 1.66 3.69 58.80 SSL-Lanes [Bhattacharyya et al., 2023] Map + Traj. 1.63 3.56 56.71 LTP [Wang et al., 2022a] Map + Traj. 1.62 3.55 56.25 HiVT-128 [Zhou et al., 2022] Map + Traj. 1.60 3.52 MFTraj Traj. 1.59 3.51 55.44 MFTraj (drop 3-frames) Traj. 1.68 3.59 56.95 MFTraj (drop 5-frames) Traj. 1.76 3.74 59.08 MFTraj (drop 8-frames) Traj. 1.86 3.90 61.12 MFTraj (drop 10-frames) Traj. 1.97 3.96 62.72 4 Experiments 4.1 Experimental Setup Datasets. We tested model\u2019s efficacy on Argoverse [Chang et al., 2019], NGSIM [Deo and Trivedi, 2018], HighD [Krajewski et al., 2018], and MoCAD [Liao et al., 2024b] datasets. Data Segmentation. For Argoverse, we predicted a 3-second trajectory from a 2-second observation, while for NGSIM, HighD, and MoCAD, we use 6-second intervals split into 2 seconds of observation and 4 seconds of prediction. These datasets, referred to as the complete dataset, help assess our model in diverse traffic scenarios. Recognizing that realworld conditions often lead to incomplete data, we further assessed our model\u2019s resilience using the Argoverse dataset by introducing four subsets with varying levels of missing data: drop 3-frames, drop 5-frames, drop 8-frames, and drop 10-frames. These missing datasets simulate data loss scenarios. 
For data gaps, we applied simple linear interpolation. Metrics. Our experimental protocol was aligned with the Argoverse Motion Forecasting Challenge and prior work [Liao et al., 2024b], we evaluated the performance of our model using standard metrics: minADE, minFDE, MR, and RMSE. Implementation Details. We implemented our model using PyTorch and PyTorch-lightning on an NVIDIA DGX-2 with eight V100 GPUs. Using the smooth L1 loss as our loss function, the model was trained with the Adam optimizer, a batch size of 32, and learning rates of 10\u22123 and 10\u22124. 4.2 Experimental Results Performance Evaluation on the Complete Dataset. Tables 2 and Table 4 present a comparative evaluation of our trajectory prediction model against 25 baselines from 2016 to 2023. Unlike most approaches that depend on HD maps or vectorized map data, our model omits map-based inputs. Still, it consistently outperforms the baselines across metrics like minADE, minFDE, MR, and RMSE for both Argoverse and MoCAD datasets. Specifically, for the Argoverse dataset, MFTraj outperforms most of the SOTA models by margins of 2.9% in minADE, 2.4% in minFDE, and 3.8% in MR, while being on par with HiVT. It excels particularly in challenging long-term predictions (4s-5s) on NGSIM, HighD, and MoCAD datasets, with reductions in forecast error surpassing at least 11.5%, 29.6%, and 21.9%, respectively. This emphasizes its potential for accurate long-term predictions in highway and urban settings. Performance Evaluation on the Missing Dataset. Table 2 showcases the resilience of our model when faced with incomplete data sets. Our model consistently outperforms all other baselines on the drop 3-frames and drop 5-frames datasets. Notably, on the drop 3-frames dataset, it surpasses nearly all state-of-the-art (SOTA) models trained on full data, highlighting its remarkable predictive strength even with missing data. While its performance on the drop 5frames dataset excels over most baselines, there are exceptions in specific metrics against models like TNT, WIMP, and mm Transformer. As the number of missing frames increases, as in the drop 8-frames and drop 10-frames datasets, there\u2019s an expected decline in performance. Yet, even with half the input data missing, our model still competes strongly against top baselines, emphasizing its potential in environments with data interruptions. Comparative Analysis of Model Performance and Complexity. In Table 3, we compare our model\u2019s performance and complexity with various SOTA baselines. While our model doesn\u2019t have the lowest parameter count, it excels in all performance metrics. Impressively, it achieves this while using 90.42% and 87.18% fewer parameters than WIMP and Scene-Transformer, respectively. Compared to top-10 SOTA models, our model not only surpasses them in accuracy but is also as efficient, if not more so, than HiVT-128, SSLLanes, LaneGCN, and HOME+GOHOME. This underlines our model\u2019s optimal balance of robustness, efficiency, and trajectory prediction accuracy. Table 3: Comparative evaluation of MFTraj with SOTA baselines. Model minADE (m)\u2193 minFDE (m)\u2193 MR (%)\u2193 #Param (K) WIMP [Khandelwal et al., 2020] 1.82 4.03 62.88 >20,000 Scene-Transformer [Ngiam et al., 2022] 1.81 4.06 59.21 15,296 CtsConv (Aug.) 
[Walters et al., 2021] 1.77 4.05 1,078 mmTransformer [Liu et al., 2021] 1.77 4.00 61.78 2,607 LaneGCN [Liang et al., 2020] 1.71 3.78 59.05 3,701 HOME+GOHOME [Gilles et al., 2022] 1.69 3.65 57.21 5,100 DenseTNT [Gu et al., 2021] 1.68 3.63 58.43 1,103 SSL-Lanes [Bhattacharyya et al., 2023] 1.63 3.56 56.71 1,840 HiVT-128 [Zhou et al., 2022] 1.60 3.52 2,529 MFTraj 1.59 3.51 55.44 1,961 4.3 Ablation Studies We executed an ablation study to assess the impact of individual components within our trajectory prediction model, with the results summarized in Table 5. Model F, i.e., MFTraj, which integrates all components, stands out in all metrics, signifying the synergy of its parts. When the behavior-aware module is excluded in Model A, there are noticeable drops in minADE, minFDE, and MR by 12.6%, 8.8%, and 8.5% reTable 4: Evaluation results for MFTraj and the other SOTA baselines without using HD maps in the NGSIM, HighD and MoCAD datasets over a different horizon. RMSE (m) is the evaluation metric, with some not specifying (\u201c-\u201d). Bold and underlined values represent the best and second-best performance in each category. Dataset Model Prediction Horizon (s) 1 2 3 4 5 NGSIM S-LSTM [Alahi et al., 2016] 0.65 1.31 2.16 3.25 4.55 S-GAN [Gupta et al., 2018] 0.57 1.32 2.22 3.26 4.40 CS-LSTM [Deo and Trivedi, 2018] 0.61 1.27 2.09 3.10 4.37 DRBP[Gao et al., 2023] 1.18 2.83 4.22 5.82 DN-IRL [Fernando et al., 2019] 0.54 1.02 1.91 2.43 3.76 WSiP [Wang et al., 2023] 0.56 1.23 2.05 3.08 4.34 CF-LSTM [Xie et al., 2021] 0.55 1.10 1.78 2.73 3.82 MHA-LSTM [Messaoud et al., 2021] 0.41 1.01 1.74 2.67 3.83 HMNet [Xue et al., 2021] 0.50 1.13 1.89 2.85 4.04 TS-GAN [Wang et al., 2022b] 0.60 1.24 1.95 2.78 3.72 Stdan [Chen et al., 2022b] 0.39 0.96 1.61 2.56 3.67 iNATran [Chen et al., 2022a] 0.39 0.96 1.61 2.42 3.43 DACR-AMTP [Cong et al., 2023] 0.57 1.07 1.68 2.53 3.40 FHIF [Zuo et al., 2023] 0.40 0.98 1.66 2.52 3.63 MFTraj 0.38 0.87 1.52 2.23 2.95 HighD S-GAN [Gupta et al., 2018] 0.30 0.78 1.46 2.34 3.41 WSiP [Wang et al., 2023] 0.20 0.60 1.21 2.07 3.14 CS-LSTM [Deo and Trivedi, 2018] 0.22 0.61 1.24 2.10 3.27 MHA-LSTM [Messaoud et al., 2021] 0.19 0.55 1.10 1.84 2.78 NLS-LSTM [Messaoud et al., 2019] 0.20 0.57 1.14 1.90 2.91 DRBP[Gao et al., 2023] 0.41 0.79 1.11 1.40 EA-Net [Cai et al., 2021] 0.15 0.26 0.43 0.78 1.32 CF-LSTM [Xie et al., 2021] 0.18 0.42 1.07 1.72 2.44 Stdan [Chen et al., 2022b] 0.19 0.27 0.48 0.91 1.66 iNATran [Chen et al., 2022a] 0.04 0.05 0.21 0.54 1.10 DACR-AMTP [Cong et al., 2023] 0.10 0.17 0.31 0.54 1.01 GaVa [Liao et al., 2024d] 0.17 0.24 0.42 0.86 1.31 MFTraj 0.07 0.10 0.19 0.38 0.56 MoCAD S-GAN [Gupta et al., 2018] 1.69 2.25 3.30 3.89 4.69 CS-LSTM [Deo and Trivedi, 2018] 1.45 1.98 2.94 3.56 4.49 MHA-LSTM [Messaoud et al., 2021] 1.25 1.48 2.57 3.22 4.20 NLS-LSTM [Messaoud et al., 2019] 0.96 1.27 2.08 2.86 3.93 WSiP [Wang et al., 2023] 0.70 0.87 1.70 2.56 3.47 CF-LSTM [Xie et al., 2021] 0.72 0.91 1.73 2.59 3.44 Stdan [Chen et al., 2022b] 0.62 0.85 1.62 2.51 3.32 HLTP [Liao et al., 2024a] 0.55 0.76 1.44 2.39 3.21 BAT [Liao et al., 2024b] 0.35 0.74 1.39 2.19 2.88 MFTraj 0.34 0.70 1.32 2.01 2.57 spectively, highlighting its pivotal role. Model B, with absolute coordinates, underperforms, emphasizing the relevance of spatial relationships. Model C, without the interactionaware module and Linformer extension, and Model D, lacking Linformer, both show diminished performance. 
Similarly, Model E, which uses a standard GCN instead of the adaptive one, also lags, underscoring the latter\u2019s efficiency. In essence, this study solidifies the importance of each component in Model F. Every part, from understanding behavioral nuances to updating features effectively, bolsters the model\u2019s precision and resilience. In essence, this study solidifies the importance of each component in Model F. Every part, from understanding behavioral nuances to updating features effectively, bolsters the model\u2019s precision and resilience. Figure 3: Qualitative results of MFTraj and HiVT on Agroverse. 4.4 Qualitative Results Figure 3 presents the qualitative results of our model using the Argoverse dataset. We\u2019ve limited the display to the tarTable 5: Ablation analysis of individual components in Argoverse. Ablation Models (\u2206Model F) minADE (m)\u2193 minFDE (m)\u2193 MR (%)\u2193 Model A 1.82 3.85 60.61 Model B 1.69 3.59 56.14 Model C 1.78 3.71 59.07 Model D 1.71 3.61 57.59 Model E 1.68 3.70 56.94 Model F 1.59 3.51 55.44 Figure 4: Qualitative results of MFTraj on NGSIM. Target vehicle is depicted in red, while its surrounding agents are shown in blue. get vehicle\u2019s trajectories for clarity. Interestingly, without the aid of HD maps, our model adeptly discerns road semantics, enabling it to make precise and logical predictions for target vehicles in intricate urban settings. Importantly, Figure 4 illustrates a comparison between the trajectories predicted by MFTraj and the SOTA baselines in the same traffic scenarios. MFTraj outperforms Stdan and WSiP in trajectory prediction, especially in complex scenarios such as lane changes and merging. These results demonstrate the superior adaptability and reliability of MFTraj in complex traffic conditions. 5 Conclusion This work presents a map-free and behavior-aware trajectory prediction model for AVs, integrating four components: behavior-aware, position-aware, interaction-aware modules, and a residual decoder. These components work in concert to analyze and interpret various inputs, understand humanmachine interactions, and account for the inherent uncertainty and variability in the prediction. Evaluated with the Argoverse, NGSIM, HighD, and MoCAD datasets, MFTraj outperformed SOTA baselines in prediction accuracy and efficiency without additional map information. Furthermore, this approach ensures its robustness and adaptability even in the presence of significant missing data; it achieved impressive performance even with a 50% sequential input data deficit. This underscores the resilience and efficiency of MFTraj in predicting future vehicle trajectories and suggests its potential to drastically reduce the data requirements for training AVs, especially in corner cases, like data-missing and limited data scenes. Acknowledgements This research is supported by the Science and Technology Development Fund of Macau SAR (File no. 0021/2022/ITP, 0081/2022/A2, 001/2024/SKL), and University of Macau (SRG2023-00037-IOTSC)."
17
+ }
intro_8K/test_introduction_short_2405.01270v1.json ADDED
@@ -0,0 +1,16 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01270v1",
3
+ "title": "The Importance of Model Inspection for Better Understanding Performance Characteristics of Graph Neural Networks",
4
+ "abstract": "This study highlights the importance of conducting comprehensive model\ninspection as part of comparative performance analyses. Here, we investigate\nthe effect of modelling choices on the feature learning characteristics of\ngraph neural networks applied to a brain shape classification task.\nSpecifically, we analyse the effect of using parameter-efficient, shared graph\nconvolutional submodels compared to structure-specific, non-shared submodels.\nFurther, we assess the effect of mesh registration as part of the data\nharmonisation pipeline. We find substantial differences in the feature\nembeddings at different layers of the models. Our results highlight that test\naccuracy alone is insufficient to identify important model characteristics such\nas encoded biases related to data source or potentially non-discriminative\nfeatures learned in submodels. Our model inspection framework offers a valuable\ntool for practitioners to better understand performance characteristics of deep\nlearning models in medical imaging.",
5
+ "authors": "Nairouz Shehata, Carolina Pi\u00e7arra, Anees Kazi, Ben Glocker",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "cs.LG",
9
+ "cats": [
10
+ "cs.LG"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Graph AND Structure AND Learning",
14
+ "gt": "Understanding biological sex-based differences in brain anatomy provides valuable insights into both neurodevelop- mental processes and cognitive functioning. Recent strides in the field of geometric deep learning [1], particularly the ad- vent of Graph Neural Networks (GNNs), have revolutionised the analysis of complex, non-Euclidean data [2] to make pre- dictions at a node, edge, or graph-level. This allows us to treat brain shapes as graphs, leveraging the power of GNNs to learn from complex structural anatomical data [3]. Discrimi- native feature embeddings can be withdrawn from these mod- els, representing brain shapes as a continuous vector of nu- merical features that capture valuable structural and geomet- rical information for downstream prediction tasks [4]. Tech- niques like Principal Component Analysis (PCA) can be used to reduce the dimensionality of graph embeddings for visuali- sation, aiding the exploration of subgroup biases in the feature space beyond the target label. This analysis may help practi- tioners ensure the reliability of their predictions, and is partic- ularly important in applications where GNNs feature embed- dings may be leveraged for new tasks, such as fine-tuning, domain transfer, or multi-modal approaches. In this study, we dissect GNN models trained under differ- ent settings for the task of sex classification using 3D meshes of segmented brain structures. We inspect the learned fea- ture embeddings at different layers within a multi-graph neu- ral network architecture. Through this granular analysis, we reveal critical insights into the inner workings of our models, identifying important effects of different modelling choices. This research demonstrates the utility of conducting a model inspection framework as part of model development, high- lighting insights that may guide practitioners in the selection of models with desired characteristics, avoiding biases, over- fitting and better understanding the driving forces behind pre- dictions.",
15
+ "main_content": "2.1. Imaging datasets We used four neuroimaging datasets, including data from the UK Biobank imaging study (UKBB) 1 [5], the Cambridge Centre for Ageing and Neuroscience study (CamCAN) [6, 7], the IXI dataset2, and OASIS3 [8]. Both UKBB and CamCAN brain MRI data were acquired with Siemens 3T scanners. The IXI dataset encompassed data collected from three clinical sites, each employing different scanning systems. CamCAN and IXI are acquired from healthy volunteers, while UKBB is an observational population study. The OASIS3 dataset consists of 716 subjects with normal cognitive function and 318 patients exhibiting varying stages of cognitive decline. For all four datasets, subjects with missing biological sex or age information were excluded. Data from UKBB was split into three sets, with 9,900 scans used for training, 1,099 for validation, and 2,750 for testing. CamCAN, IXI and OASIS3 1Accessed under application 12579. 2https://brain-development.org/ixi-dataset/ arXiv:2405.01270v1 [cs.LG] 2 May 2024 were used as external test sets, with sample sizes of 652, 563, and 1,034, respectively. The UKBB data is provided with a comprehensive preprocessing already applied, using FSL FIRST [9] to automatically segment 15 subcortical brain structures from T1-weighted brain MRI, including the brain stem, left/right thalamus, caudate, putamen, pallidum, hippocampus, amygdala, and accumbens-area. We apply our own pre-processing pipeline to the CamCAN, IXI, and OASIS3 datasets, closely resembling the UKBB pre-processing. Our pipeline includes skull stripping using ROBEX 3 [10], bias field correction using N4ITK [11], and brain segmentation via FSL FIRST. 2.2. Graph representation The anatomical brain structures are represented by meshes as an undirected graph composed of nodes, connected by edges forming triangular faces. The number of nodes for most structures is 642 and up to 1,068, whereas the number of edges per structure ranges between 3,840 and 6,396. The meshes are automatically generated by the FSL FIRST tool. 2.2.1. Node features Each graph node can carry additional information, encoded as feature vectors. This can include spatial coordinates or more complex geometric descriptors. While computer vision has transitioned from hand-crafted features to end-to-end deep learning, we have previously demonstrated the value of using geometric feature descriptors in GNN-based shape classification [12]. We employ Fast Point Feature Histograms (FPFH) [13], a pose invariant feature descriptor shown to substantially boost classification performance. To compute FPFH features on a mesh, a point feature histogram is first generated, involving the selection of neighboring points within a defined radius around each query point. The Darboux frame is subsequently defined, and angular variations are computed. This process involves several steps, including the estimation of normals and the calculation of angular variations, resulting in a vector of 33 features at each node. 2.2.2. Mesh registration Mesh registration is an optional pre-processing step, with the goal to remove spatial variability across subjects and datasets. Here, we investigate the use of rigid registration aligning all meshes for a specific brain structure to a standardised orientation using the closed-form Umeyama approach [14]. This method employs a singular value decompositionbased optimisation to obtain an optimal rigid transformation between two given meshes. 
For each of the 15 brain structures, we select a reference mesh from a random subject from the UKBB dataset, and align the meshes from all other 3https://www.nitrc.org/projects/robex Fig. 1: Model architecture consisting of a graph convolutional network (GCN) submodel feeding graph embeddings into a classification head with two fully connected layers (FC1 and FC2). Where N is the number of brain substructures, 15. For our model inspection, we read out the feature vectors from the GCN submodel, FC1, and FC2. subjects to this reference. As a result, shape variability due to orientation and position differences is minimised and the remaining variability is expected to primarily represent anatomical differences across subjects. 2.3. Multi-graph neural network architecture Our general GNN architecture is comprised of two main components; the GCN submodel which aims to learn graph embeddings over 3D meshes using multiple graph convolutional layers [12] and an MLP classification head that takes the graph embeddings as inputs and performs the final classification using two fully connected layers (cf. Fig. 1). The input to our models are 15 subgraphs representing 15 brain structures, extracted from T1-weighted brain scans. We consider two approaches for learning graph embeddings with GCN submodels. The first approach, referred to as shared submodel, uses a single GCN submodel that learns from all 15 subgraphs. Here, the weights of the graph convolutional layers are shared across brain structures. The shared submodel approach is parameter-efficient and aims to learn generic shape features. For the second approach, referred to as non-shared submodel, each subgraph is fed into a structure-specific GCN submodel. The non-shared submodel approach has more parameters and may capture structure-specific shape features. In both approaches, the architecture of the GCN submodel is identical and consists of three graph convolutional layers [15] with Rectified Linear Unit (ReLU) activations. A global average pooling layer is used as a readout layer, aggregating node representations into a single graph-level feature embedding. The embeddings from individual structures are stacked to form a subject-level feature embedding which is passed to the classification head. 2.4. Model inspection Our model inspection approach is focused on evaluating the separability of the target label (biological sex, Male and Female) and data source classes (UKBB, CamCAN, IXI or OASIS3) through feature inspection. Each test set sample is passed through the complete pipeline and its feature embeddings are saved at three different stages: at the output layer of the GCN submodel and at the first (FC1) and second (FC2) fully connected layers of the classification head. The dimensions of these embeddings are, respectively, 480 (15 substructures times the hidden layer size, 32), 32 and 2. To allow for visual inspection, the feature embeddings from the GCN and FC1 layers are inputted to a PCA model to reduce their dimensionality. The PCA modes capture the directions of the largest variation in the high-dimensional feature space, allowing us to visualise feature separation in 2D scatter plots. We randomly sample 500 subjects from each dataset for the visualisations. Given that all the models were trained to classify biological sex, a clear separation should be expected between the Male and Female classes in the first PCA modes. 3. 
EXPERIMENTS & RESULTS For a thorough evaluation, we trained and tested the four models shared and non-shared GCN submodels, and with and without mesh rigid registration on identical data splits. All code was developed using PyTorch Geometric and PyTorch Lightning for model implementation and data handling. We used the Adam optimiser [16] with a learning rate of 0.001, and employed the standard cross entropy loss for classification. Random node translation was used as a data augmentation strategy with a maximum offset of 0.1mm [17]. This was shown to improve performance in our previous study [12]. Model selection was done based on the loss of the validation set. Our code is made publicly available4. 3.1. Classification performance Figure 2 summarises the classification performance of the four models, showing the ROC curves together with the area under the curve (AUC) metric, reported separately for each of the four test datasets. There are two main observations: (i) There are very little differences in the absolute performance across the four models. Comparing the shared vs non-shared submodel, the AUC performance is comparable. When comparing models with and without mesh registration, we find the generalisation gap decreases between in-distribution test (UKBB) and the external test data (CamCAN, IXI, OASIS3). However, we also observe a small drop in performance on the in-distribution test data when using mesh registration, compared to not using registration. A practitioner using internal test results for final model selection may opt for using a shared submodel, due to its parameter efficiency, without mesh registration, due to convenience. As we will see next, this choice may be suboptimal as test accuracy alone is insufficient to identify important model characteristics. 4https://github.com/biomedia-mira/medmesh 3.2. Effect of using structure-specific submodels For the models that use a shared submodel, we observe that the GCN feature embeddings are non-discriminative with respect to the target label. Separation seems completely missing in the shared model without registration (see Fig. 3a), with only weak separation in the shared model with registration (see Fig. 3c). For these models, the classification heads will primarily contribute to the model performance. For the models with a non-shared submodel, we find a much better separability for the GCN features with and without mesh registration (cf. Figs. 3b, d). Here, the GCN features will meaningfully contribute to the models\u2019 classification performance. 3.3. Effect of mesh registration When studying the effect of mesh registration, we can clearly observe that without registration, the GCN feature embeddings from the submodel strongly encode data source, showing separate clusters for UKBB and external test data (cf. Figs. 3a,b). When introducing mesh registration as a pre-processing step, we note a significant improvement, with an almost entirely removed separation of datasets in the GCN layer independent of whether a shared and non-shared submodel is used (Figs. 3c, d). The separability of the target label in the GCN layer is well defined for the non-shared submodel (Fig. 3d), while remaining weak for the shared submodel (Fig. 3c). Rigid registration as a pre-processing step seems to not only improve the learning efficiency of the GCN submodel, but also its ability to generalise across data distributions. 4. 
CONCLUSION Our findings underscore the limitations of relying solely on test accuracy for model selection, particularly when focusing on in-distribution test accuracy. We demonstrate that this may lead practitioners to select models with undesired characteristics where GCN features are non-discriminative for the prediction task and/or strongly encode biases such as data source. Using a comprehensive model inspection, we were able to identify variations in the model characteristics and better understand what drives the final prediction (GCN submodel vs classification head). The importance of this becomes evident when considering applications such as finetuning, domain transfer, or multi-modal approaches, where GCN feature embeddings may be leveraged for new tasks. Our model inspection framework can be easily applied to other models, tasks, and purposes. It was previously used to detect biases in chest radiography disease detection models [18]. Here, we strongly advocate for the wider use of model inspection as an integral part of comparative performance analyses. We hope that our work can contribute to improving the reliability of model selection in all areas of deep learning for biomedical image analysis. (a) (b) (c) (d) Fig. 2: Sex classification performance for four models; (a) shared and (b) non-shared submodel without mesh registration, (c) shared and (d) non-shared submodel with mesh registration. We observe that the generalisation gap between the in-distribution test data (UKBB) and the external test data (CamCAN, IXI, OASIS3) closes with mesh registration. Overall, there are only small differences in performance, illustrating that test accuracy alone is insufficient to identify variations in model characteristics. (a) Shared submodel, without mesh registration (b) Non-shared submodel, without mesh registration (c) Shared submodel, with mesh registration (d) Non-shared submodel, with mesh registration Fig. 3: Effect of modelling choices on feature separability for four different models at their the GCN layer (left), first fully connected layer FC1 (middle), and output layer FC2 (right). Models: (a,c) shared and (b,d) non-shared GCN submodel, and (a,b) without and (c,d) with mesh registration. For each model, we show the separation by target label in the top row, and the separation by dataset in the bottom row. Effect of submodel: The models in (a,c) with a shared submodel are unable to learn discriminative GCN features for the prediction task, while the models in (b,d) with a non-shared submodel show much better task-related separability in the GCN features. Effect of registration: The models models in (a,b) without registration strongly encode information about the data source in the GCN layer. This is much reduced for the models in (c,d) with mesh registration. 5. ACKNOWLEDGMENTS Nairouz Shehata is grateful for the support by the Magdi Yacoub Heart Foundation and Al Alfi Foundation. 6. COMPLIANCE WITH ETHICAL STANDARDS This study uses secondary, fully anonymised data which is publicly available and is exempt from ethical approval. 7."
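A minimal sketch of the PCA-based inspection described in Section 2.4 of this entry: project embeddings saved at a given layer (e.g. GCN output or FC1) to two dimensions and colour them by target label and by data source. Array and argument names are placeholders for the saved test-set embeddings and metadata.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA


def inspect_embeddings(feats, sex_labels, source_labels, layer_name="GCN"):
    """feats: (n_subjects, d) embeddings from one layer (e.g. d=480 for the GCN output).
    Plots the first two PCA modes coloured by target label and by dataset."""
    feats = np.asarray(feats)
    sex_labels, source_labels = np.asarray(sex_labels), np.asarray(source_labels)
    coords = PCA(n_components=2).fit_transform(feats)
    fig, axes = plt.subplots(1, 2, figsize=(10, 4))
    for ax, labels, title in [(axes[0], sex_labels, "by sex"),
                              (axes[1], source_labels, "by data source")]:
        for value in np.unique(labels):
            mask = labels == value
            ax.scatter(coords[mask, 0], coords[mask, 1], s=5, label=str(value))
        ax.set_title(f"{layer_name} embeddings, {title}")
        ax.legend(markerscale=3)
    fig.tight_layout()
    return fig
```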
16
+ }
intro_8K/test_introduction_short_2405.01280v1.json ADDED
@@ -0,0 +1,16 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01280v1",
3
+ "title": "Reinforcement Learning for Edit-Based Non-Autoregressive Neural Machine Translation",
4
+ "abstract": "Non-autoregressive (NAR) language models are known for their low latency in\nneural machine translation (NMT). However, a performance gap exists between NAR\nand autoregressive models due to the large decoding space and difficulty in\ncapturing dependency between target words accurately. Compounding this,\npreparing appropriate training data for NAR models is a non-trivial task, often\nexacerbating exposure bias. To address these challenges, we apply reinforcement\nlearning (RL) to Levenshtein Transformer, a representative edit-based NAR\nmodel, demonstrating that RL with self-generated data can enhance the\nperformance of edit-based NAR models. We explore two RL approaches: stepwise\nreward maximization and episodic reward maximization. We discuss the respective\npros and cons of these two approaches and empirically verify them. Moreover, we\nexperimentally investigate the impact of temperature setting on performance,\nconfirming the importance of proper temperature setting for NAR models'\ntraining.",
5
+ "authors": "Hao Wang, Tetsuro Morimura, Ukyo Honda, Daisuke Kawahara",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "cs.CL",
9
+ "cats": [
10
+ "cs.CL"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Model AND Based AND Reinforcement AND Learning",
14
+ "gt": "Non-autoregressive (NAR) language models (Gu et al., 2018) generate translations in parallel, en- abling faster inference and having the potential for real-time translation applications. However, de- spite their computational efficiency, NAR models have been observed to underperform autoregressive (AR) models due to the challenges posed by the large decoding space and difficulty in capturing dependency between target words accurately (Gu et al., 2018). To bridge the performance gap, many NAR architectures and training methods have been proposed, including edit-based mod- els like Insertion Transformer (Stern et al., 2019) and Levenshtein Transformer (Gu et al., 2019). Prior research has also explored knowledge distilla- *Work done during internship at CyberAgent AI Lab. tion (Ghazvininejad et al., 2019), which is effective but introduces additional complexity. Unlike AR models, preparing teacher data and designing appropriate training objectives have always been challenging for NAR models (Li et al., 2023). Teacher forcing with inappropriate teacher data may exacerbate the exposure bias prob- lem (Ranzato et al., 2016), affecting model perfor- mance. Reinforcement learning (RL) is known for its ability to tackle the exposure bias (Ranzato et al., 2016) and alleviate the object mismatch is- sue (Ding and Soricut, 2017). Despite its impor- tance, explorations of RL for NAR are still scarce. Shao et al. (2021) proposed a method for reducing the estimation variance. However, this method is only applicable to NAR models with a fixed output length, which is unsuitable for edit-based models. In this paper, we empirically analyze conditions for performance improvement in applying RL to edit-based NAR models in neural machine transla- tion (NMT). Specifically, we focus on Levenshtein Transformer (LevT) (Gu et al., 2019), a promi- nent edit-based NAR architecture that has shown promise in reducing decoding latency and flexible length adjustment. We demonstrate that RL with self-generated data significantly improves LevT\u2019s performance. Importantly, our methods are orthog- onal to existing research on NAR architectures, indicating potential for widespread applicability. We explore two RL approaches: stepwise reward maximization, which computes rewards after each edit operation, and episodic reward maximization, which only computes rewards after all generations are completed. We analyze these two approaches\u2019 respective advantages and disadvantages and em- pirically verify them. Furthermore, through a series of experiments, we investigate the impact of tem- perature settings on softmax sampling, aiming to identify the optimal temperature that strikes a bal- ance between exploration and exploitation during the RL training process. arXiv:2405.01280v1 [cs.CL] 2 May 2024",
15
+ "main_content": "Reinforcement Learning Reinforcement learning has been widely applied to improve the performance of AR NMT models (Ranzato et al., 2016; Bahdanau et al., 2016; Wu et al., 2016) because its ability to train models to optimize nondifferentiable score functions and tackle the exposure bias problem (Ranzato et al., 2016). In practice, REINFORCE (Williams, 1992) with a baseline is commonly used for estimating the policy gradient, which can be computed as follows: \u25bd\u03b8L(\u03b8) \u2248\u2212(r(y) \u2212b(s)) \u25bd\u03b8 log\u03c0\u03b8(y|s), (1) where r is the reward function, b is the baseline, y is a sample from policy \u03c0\u03b8 and state s. Softmax with Temperature In the domain of RL, we need to consider the explorationexploitation trade-off (Sutton and Barto, 2018), where temperature \u03c4 is an important parameter. \u03c4 is used to control the softness of the softmax distribution, pi = exp(yi/\u03c4) \ufffd i exp(yi/\u03c4). (2) ads to a more uniform distribution, \ufffd A larger \u03c4 leads to a more uniform distribution, promoting exploration, while a smaller \u03c4 creates a more peaky distribution, emphasizing exploitation. Kiegeland and Kreutzer (2021) shows that training with an increased temperature can mitigate the peakiness effect due to RL (Choshen et al., 2020), indicating that a suitable temperature is significant for RL training in NMT. RL for NAR Compared to AR methods, studies of reinforcement learning for NAR remain unexplored. Shao et al. (2021) proposed a method to reduce the estimation variance of REINFORCE by fixing the predicted word at position t and sampling words of other positions for n times. However, this method is only applicable to models with a fixed length, which is unsuitable for edit-based models. Levenshtein Transformer Levenshtein Transformer (Gu et al., 2019) is an NAR model based on three edit operations: delete tokens, insert placeholders, and replace placeholders with new tokens. It uses a supervised dual-policy learning algorithm to minimize the Levenshtein distance (Levenshtein, 1965) for training and greedy sampling for decoding. The decoding stops when two consecutive refinement iterations return the same output or a maxFigure 1: The illustration of Levenshtein Transformer\u2019s decoding process (Gu et al., 2019). In each decoding iteration, three edit operations are performed sequentially: delete tokens, insert placeholders, and replace placeholders with new tokens. imum number of iterations (set to 10) is reached. We illustrate the decoding process in Figure 1. LevT\u2019s dual-policy learning generates teacher data by corrupting the ground truth and reconstructing it with its adversary policy. This mechanism not only offers a unique approach to data generation but also underscores the inherent difficulty in preparing teacher data. This introduces concerns regarding the exposure bias, particularly whether the training process can maintain consistency with the text during decoding. To address this issue, we employ RL approaches that use self-generated data for training. 3 Approaches In this section, we present our reinforcement learning approaches in detail. We train a Levenshtein Transformer model as our baseline using the dualpolicy learning algorithm. Based on it, we introduce two distinct RL approaches within the REINFORCE framework: stepwise reward maximization and episodic reward maximization. Moreover, we present our methods for temperature control. 
Stepwise Reward Maximization General RL training methods for AR NMT models are all episodic1, as it is difficult to calculate BLEU (Papineni et al., 2002) when the sentence is not fully generated. In contrast, NAR models can calculate BLEU on outputs at each decoding step. From the perspective of estimating a more accurate gradient, we propose stepwise reward maximization, which 1In this context, \u201cepisodic\u201d denotes training based on entirely generated sequences Figure 2: The illustration of the two RL approaches. (A) is the stepwise reward maximization, which randomly samples from a previous node for each edit operation and calculates BLEU and RL gradient after each edit operation (except for the insert operation, since it is not easy to calculate BLEU after inserting placeholders). (B) is the episodic reward maximization, where each sample is edited multiple times in a linear fashion, without branching into different paths, and BLEU and RL gradient are calculated only after the completion of all edit operations. At every orange node, we sample k times from this node (in this example, the sample size k is 2). calculates reward for each edit operation2 using score differences from one previous edit. Since every step\u2019s reward is calculated separately, this approach should be easier to learn than episodic approaches (Sutton and Barto, 2018). However, it is also more prone to learning bias since the editing process is inherently multi-step. This drawback should not be emphasized since maximizing the reward for each step will likely maximize the episodic reward in NAR models\u2019 training. We use a leave-one-out baseline (Luo, 2020) for b(s) in Equation 1 instead of the greedy baseline proposed in SCST (Rennie et al., 2017) because the greedy decoding is too strong in LevT, which makes gaining positive rewards in SCST difficult and may reduce learning efficiency. For each edit, we sample k actions from the policy at this point. Then, we calculate the baseline as follows: bi(s) = 1 k \u22121 X j\u0338=i r(yj), (3) where yj is the jth sample from the current policy. The final RL gradient estimation becomes \u25bd\u03b8L(\u03b8) \u2248\u2212(r(yi) \u2212bi(s)) \u25bd\u03b8 log\u03c0\u03b8(yi|s). (4) In a straightforward implementation, one might consider applying sampling again to all k samples 2In practice, since it is not easy to calculate BLEU after inserting placeholders, we consider placeholder insertion and token replacement as one edit operation. from the last edit. However, this will cause a combination explosion when the number of edit operations increases. Practically, we randomly choose a sample from the previous edit to perform the subsequent operations. We show an illustration of the sampling process in (A) of Figure 2 and pseudo code of our algorithm in Appendix A. Episodic Reward Maximization We also introduce episodic reward maximization, which calculates rewards only once for each sample and gives all actions the same weight. It is a more traditional way to train NMT models in RL. It allows unbiased learning but may not be efficient. We use the leave-one-out baseline for the episodic reward as well as the stepwise reward. We sample k samples from the initial input. Each sample will be edited multiple times without a branch. After the final edit, we calculate the rewards and baselines. We show an illustration of the sampling process in (B) of Figure 2 and pseudo code of our algorithm in Appendix B. 
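A minimal PyTorch sketch of the leave-one-out baseline in Eqs. (3)–(4) above: given k sampled outputs for one input and their sentence-level rewards, each sample's baseline is the mean reward of the other k−1 samples. Tensor names are illustrative; batching and the sampling loop are omitted.

```python
import torch


def reinforce_loss(log_probs: torch.Tensor, rewards: torch.Tensor) -> torch.Tensor:
    """REINFORCE loss with a leave-one-out baseline (cf. Eqs. 3-4).

    log_probs: (k,) summed log-probabilities of k sampled outputs for one input
               (must carry gradients w.r.t. the policy parameters).
    rewards:   (k,) their sentence-level rewards, e.g. BLEU (no gradient needed).
    """
    k = rewards.numel()
    baselines = (rewards.sum() - rewards) / (k - 1)   # mean reward of the other k-1 samples
    advantages = (rewards - baselines).detach()
    return -(advantages * log_probs).mean()


# shape-only example, with k = 5 samples:
# loss = reinforce_loss(log_probs=policy_log_probs,
#                       rewards=torch.tensor([0.31, 0.28, 0.35, 0.30, 0.33]))
```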
Temperature Control Applying RL to NAR differs significantly from AR because there could be various types of actions rather than just predicting the next token, like deletion and insertion. Due to this difficulty, NAR may need more fine-grained temperature control during training. To investigate the impact of exploration and exploitation in the training process, we explore five different settings of the temperature. Due to the large decoding space of Levenshtein Transformer, default temperature 1 may result in poor rewards, and too small temperature may result in peaky distribution, which are both harmful to learning. We use three constant temperature settings set to 0.1, 0.5, and 1 to verify the effect of temperature magnitude. An annealing schedule is known for balancing the trade-off between model accuracy and variance during training (Jang et al., 2016). There are two ways of thinking here. First, to reduce the exposure bias, we want to get close to the decoding scenario, which is greedy decoding in our experiments. Thus, we can apply a regular annealing schedule to gradually reduce the temperature from 1 to 0.1 during training. The temperature function can be written as follows: \u03c4i+1 = max(\u03c4i \u2217exp(\u2212log(\u03c40/\u03c4T ) T ), \u03c4T ), (5) where T is the number of total training steps, and \u03c40 and \u03c4T are the initial and the target temperatures. Second, using high temperatures in the early stages of training may lead to poor rewards and result in low learning efficiency. We can apply an inverted annealing schedule to gradually increase the temperature from 0.1 to 1, guaranteeing stable training in the early stages and gradually increasing the exploration space for efficient training. The temperature function can be written as follows: \u03c4i+1 = min(\u03c4i/exp(\u2212log(\u03c4T /\u03c40) T ), \u03c4T ). (6) In each decoding iteration, multiple edit operations occur, and each operation has a different decoding space size. It may be beneficial to optimize this by using varying temperatures for each operation in every iteration. This is a complicated research question and we leave this exploration to future work. 4 Experiments 4.1 Experimental Setup Data & Evaluation We use WMT\u201914 EnglishGerman (EN-DE) (Bojar et al., 2014) and WAT\u201917 English-Japanese (EN-JA) Small-NMT datasets (Nakazawa et al., 2017) for experiments. We use BPE token-based BLEU scores for evaluations. Data preprocessing follows Gu et al. (2019). Baseline We use Levenshtein Transformer as our baseline. Following Gu et al. (2019), we trained a LevT with 300K steps and a max batch size of 65,536 tokens per step. However, like Reid et al. (2023), we cannot reproduce the results of Gu et al. (2019). We use our results in this paper. RL According to Gu et al. (2019), most decodings are gotten in 1-4 iterations, and the average number of decoding iterations is 2.43. To minimize the gap between the training and decoding states, we start with a null string and conduct 3 iterations (8 edits) for each sample during RL training. We set the total training steps T to 50,000, with a max batch size of 4,096 tokens per step. To prevent the out-of-memory issue, we limit the decoding space of placeholder insertion from 256 to 64. The sample size k of the baseline is set to 5. Our implementation is based on Fairseq3. 
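A minimal sketch of the annealing schedules in Eqs. (5)–(6) above, written as the equivalent closed form of the per-step exponential recurrence: decreasing from tau_start to tau_end reproduces Eq. (5), and swapping the endpoints gives the inverted schedule of Eq. (6). Function and argument names are illustrative.

```python
import math


def annealed_temperature(step: int, total_steps: int,
                         tau_start: float = 1.0, tau_end: float = 0.1) -> float:
    """Exponential temperature schedule from tau_start to tau_end over total_steps.

    tau_start > tau_end gives the decaying schedule of Eq. (5);
    tau_start < tau_end gives the inverted (increasing) schedule of Eq. (6)."""
    if total_steps <= 0:
        return tau_end
    ratio = min(step / total_steps, 1.0)             # clamp, mirroring the max/min in Eqs. (5)-(6)
    # closed form of the per-step recurrence: tau_start * (tau_end / tau_start) ** ratio
    return tau_start * math.exp(ratio * math.log(tau_end / tau_start))


# examples: annealed_temperature(0, 50000) -> 1.0, annealed_temperature(50000, 50000) -> 0.1,
#           annealed_temperature(25000, 50000, tau_start=0.1, tau_end=1.0) -> ~0.316
```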
Computational Cost The pre-training phase of LevT on a GCP VM instance with A100x4 GPUs requires roughly 3 days, while the subsequent RL fine-tuning process takes approximately 1 day to complete. 4.2 Results We show the BLEU scores of our approaches in Table 1. The episodic reward model4 showed notable improvement over the baseline. The score is even close to the distillation model, which requires a heavy pre-training5 of AR models. However, the stepwise reward model showed only limited improvement. To explain this, we focus on the advantage, r(y) \u2212b(s), included in the policy gradient (Equation 1), as a larger value of the advantage can increase the policy gradient\u2019s magnitude. A higher standard deviation (SD) of the advantages indicates larger fluctuations in policy gradients. Table 2 shows the SDs of the advantages of the stepwise reward model, with notably higher values in the early stages of edit operations compared to later stages. This suggests that the stepwise reward model disproportionately focuses on early operations, potentially leading to uneven learning and reduced performance. In contrast, the episodic reward model applies the same rewards and advantages across all operations, facilitating more uniform learning and improved performance. 3https://github.com/facebookresearch/fairseq 4The term \u201cepisode/stepwise reward model\u201d specifically refers to the model trained using the \u201cepisode/stepwise reward maximization\u201d approach. 5To produce a distillation model, we need to train an autoregressive Transformer first, which needs additional 3 days of training on our machine. Model EN-DE EN-JA LevT 24.03 31.76 LevT + distillation 26.49 LevT + RL (stepwise) 24.29 31.73 LevT + RL (episodic) 25.72 32.75 Table 1: The BLEU scores of our approaches and the baseline. Temperatures are set to 1. Due to the limited computational resources, we only trained the distillation model for the EN-DE dataset using the ready-made distillation dataset. Iteration Edit Operation EN-DE EN-JA 1 Insert + Replace 9.99 8.59 2 Delete 2.05 1.35 Insert + Replace 3.28 2.48 3 Delete 1.67 1.29 Insert + Replace 3.04 1.60 Table 2: Stepwise reward model\u2019s standard deviation (SD) of the advantage in each edit operation. Insertion and replacement share the same reward. We only report scores of applying RL to the model without distillation since we found that RL significantly improved the model without distillation (max 1.69 points) compared to when distillation was applied (max 0.5 point). Moreover, when confronted with distillation models, it raises questions such as which data we should use for RL training, the original or the distillation one. We leave these research questions to future work. We show the BLEU scores of different temperature settings in Table 3. Model performance varies significantly with temperature settings (max 1.01 points in EN-JA). Among constant setting models, the model with a temperature of 0.5 performed best in EN-DE, and the model with a temperature of 0.1 performed best in EN-JA, indicating that too large temperature harms RL training. The two models using annealing schedules performed great in both tasks, showing the effectiveness of the annealing algorithms for improving learning efficiency. However, the annealing models did not always outperform the constant models, which suggests the difficulty of seeking the optimal temperature setting for NAR models\u2019 RL training. 
Also, we found the inverted annealing model (\u03c4=0.1\u21921) begins dropping performance after 10,000 steps training in EN-JA, indicating that the speed of annealing will significantly affect the model training quality. Temperature EN-DE EN-JA Constant (\u03c4 = 1) 25.72 32.75 Constant (\u03c4 = 0.5) 25.98 33.45 Constant (\u03c4 = 0.1) 25.76 33.60 Annealing (\u03c4 = 1 \u21920.1) 25.83 33.76 Annealing (\u03c4 = 0.1 \u21921) 25.90 33.43 Table 3: The BLEU scores of episodic reward models using different temperature settings. We also quickly surveyed the relationship between performance and the number of decoding iterations in RL. The model performance dropped when we reduced the number of iterations to 2 during training and remained flat when we increased it to 4, indicating that our setting is reasonable. 5 Conclusion and Future Work This paper explored the application of reinforcement learning to edit-based non-autoregressive neural machine translation. By incorporating RL into the training process, we achieved a significant performance improvement. By empirically comparing stepwise and episodic reward maximization, we analyzed the advantages and disadvantages of these RL approaches. We plan to have a deeper exploration of stepwise reward maximization and find a way to alleviate training inequality for multiple edit operations in the future. Our investigation of temperature settings in NAR softmax sampling provided insights into striking a balance between exploration and exploitation during training. Although our annealing methods perform well, they are not optimal and still depend on manually adjusting the parameters such as total training steps. In the future, we plan to develop a self-adaption temperature control method using various indicators like entropy and advantage SD. The experiments in this paper focused on the basics, and we plan to do more study for practical applications in future work. As our methods are orthogonal to existing research on NAR architectures, our next step involves exploring the methods\u2019 applicability across a broader spectrum, including state-of-the-art models. Additionally, we plan to investigate how to effectively apply RL to the distillation model, the impact of different baseline designs on performance, and the impact of RL on output diversity. Applying RL to NAR is a massive and complex research question. We look forward to more researchers joining this topic."
16
+ }
intro_8K/test_introduction_short_2405.01345v1.json ADDED
@@ -0,0 +1,16 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01345v1",
3
+ "title": "The Power of Question Translation Training in Multilingual Reasoning: Broadened Scope and Deepened Insights",
4
+ "abstract": "Bridging the significant gap between large language model's English and\nnon-English performance presents a great challenge. While some previous studies\nattempt to mitigate this gap with translated training data, the recently\nproposed question alignment approach leverages the model's English expertise to\nimprove multilingual performance with minimum usage of expensive, error-prone\ntranslation. In this paper, we explore how broadly this method can be applied\nby examining its effects in reasoning with executable code and reasoning with\ncommon sense. We also explore how to apply this approach efficiently to\nextremely large language models using proxy-tuning. Experiment results on\nmultilingual reasoning benchmarks mGSM, mSVAMP and xCSQA demonstrate that the\nquestion alignment approach can be used to boost multilingual performance\nacross diverse reasoning scenarios, model families, and sizes. For instance,\nwhen applied to the LLaMA2 models, our method brings an average accuracy\nimprovements of 12.2% on mGSM even with the 70B model. To understand the\nmechanism of its success, we analyze representation space, chain-of-thought and\ntranslation data scales, which reveals how question translation training\nstrengthens language alignment within LLMs and shapes their working patterns.",
5
+ "authors": "Wenhao Zhu, Shujian Huang, Fei Yuan, Cheng Chen, Jiajun Chen, Alexandra Birch",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "cs.CL",
9
+ "cats": [
10
+ "cs.CL"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "LLM AND Reasoning",
14
+ "gt": "Although large language models (LLMs) have shown the ability to solve complex prob- lems (Chowdhery et al., 2022; Touvron et al., 2023), LLMs still struggle in multilingual con- texts (Shi et al., 2022; Huang et al., 2023; Qin et al., 2023). This is unsurprising, given that their 1The project will be available at: https://github. com/NJUNLP/QAlign. LLaMA2-7B LLaMA2-13B LLaMA2-70B LLaMA3-8B 20 30 40 50 60 70 Accuracy (%) 38.4 43.9 50.8 50.0 49.6 57.1 63.0 61.8 mGSM Monoreason QAlign Monoreason Figure 1: An illustration of the benefits of performing question alignment (QAlign) to enhance the LLaMA model\u2019s performance across ten languages. Experiment results on more reasoning scenarios, model families and sizes will be reported in the experiment section. training data is predominantly made of English text and instructions (Blevins and Zettlemoyer, 2022; Wang et al., 2023). However, an important challenge remains: how to improve LLM perfor- mance on reasoning tasks in languages other than English with scarce multilingual resources. Previous studies attempt to incorporate more multilingual data for instruction-tuning (Chen et al., 2023) or continued pre-training (Nguyen et al., 2023). However, training for reasoning across potentially dozens or hundreds of lan- guages is both costly and inefficient, and some- times the necessary resources are either unavail- able or of very poor quality (Zhu et al., 2024). A compelling solution is to leverage the LLM\u2019s English proficiency in non-English contexts (Shi et al., 2022). In our previous work (Zhu et al., 2024), we focus on multilingual mathematical reasoning and demonstrate that the devised two- step training framework can effectively enable the sharing of English reasoning capabilities across different languages. Specifically, it employs ques- arXiv:2405.01345v1 [cs.CL] 2 May 2024 tion translation training to strengthen language alignment, and then utilizes cutting-edge English instruction data to unlock LLM\u2019s reasoning capa- bilities for multilingual contexts. Despite the progress made in our previous work, there remains three limitations: (1) single rea- soning scenario: the scope of consideration was limited to math reasoning with English chain-of- thought, with other reasoning scenarios not yet be- ing taken into account. (2) opaque training effects: the impact of the two-stage training on the LLM\u2019s working patterns and each training stage\u2019s contri- bution remain unclear. (3) limited model size: the effectiveness and efficiency of this approach has yet to be confirmed on LLMs significantly larger than 13 billions parameters. In this paper, we comprehensively address these limitations. First, we examine how broadly appli- cable the training framework is across three chal- lenging and varied reasoning tasks. These involve distinict types of problem-solving languages and reasoning objectives: math reasoning with chain- of-thought in multiple languages, math reasoning with exectuable Python code (Toshniwal et al., 2024), and common sense reasoning (Lin et al., 2021). By utilizing specialized data, we lever- age the LLM\u2019s targeted English expertise to en- hance its performance in other languages. We also explore the case of non-English generation. We discover that incorporating En-X general do- main translation data during the second training stage can implicitly bias the LLM to generate non- English chain-of-thought. 
To gain clearer insights, we analyze how our fine-tuned model utilizes its English reasoning ca- pability in non-English contexts. Our analysis be- gins with the representation space, which funda- mentally determines how the model derives the output answer. We discover that question trans- lation training significantly affects the distribution of multilingual representations in LLM\u2019s middle to top layers, aligning them within the same se- mantic space as English. Additionally, we exam- ine the LLM\u2019s step-by-step answers and find that this unified representation space makes the model to employ more similar problem-solving process to address multilingual questions. Besides, we investigate how well question alignment aproach scales and whether it offers benefits to the most powerful open-source LLMs. We explore to apply question alignment to ex- tremely large models, e.g., LLaMA2-70B (Tou- vron et al., 2023), LLaMA3-70B (Meta, 2024), Mixtral-8x22B (Jiang et al., 2024). While fully fine-tuning LLM\u2019s parameters is straightforward, the computation cost becomes prohibitive as the number of parameters increases significantly. In constrast, we illustrate that proxy-tuning (Liu et al., 2024) offers as an efficient alternative for both dense models and Mixture-of-Experts (MoE) models. We also discover that by carefully select- ing the proxy model, we can achieve 98% perfor- mance of fully fine-tuning without updating any parameters. Experiments on several multilingual reason- ing benchmarks MGSM, MSVAMP and XCSQA demonstrate that the question alignment approach is a broad technique applicable across a wide vari- ety of difficult reasoning tasks and its modulaized training pipeline enables us to efficiently unlocks LLM\u2019s multilingual capabilities in targeted skills. By applying our approach to extremely large lan- guage models, we can further strengthen their multilingual reasoning abilities. For instance, our fine-tuned LLaMA2-70B achieves 63.0% average accuracy on MGSM (Figure 1), pushing the mul- tilingual reasoning performance of open-source models to new a boundary.",
15
+ "main_content": "LLM\u2019s language bias With a large number of parameters pre-trained on large-scale corpora and instruction-tuned on powerful instruction data, LLMs have demonstrated a remarkable level of intelligence (Touvron et al., 2023; Jiang et al., 2023). However, extensive empirical analyses have shown that LLMs still face challenges in multilingual contexts, especially with low-resource languages (Shi et al., 2022; Huang et al., 2023; Zhu et al., 2023). This is not surprising, given that English predominates both in the pretraining data (Blevins and Zettlemoyer, 2022) and instruction data (Wang et al., 2023). In this paper, we focus on the core capability of LLM, i.e., the reasoning capability, and aim to push forward the boundaries of LLMs in multilingual reasoning. Advancing multilingual reasoning Previously, there have been some attempts to improve LLM\u2019s performance on multilingual reasoning. Huang et al. (2023) and Qin et al. (2023) prompt ChatGPT (OpenAI, 2022) to translate non-English Response (Code) Question (En) Training Stage-2 Response Alignment Response (Code) LLM LLM Question (X) Inference Stage Training Stage-2 Response Alignment Question (En) LLM LLM Question (X) Inference Stage Training Stage-1 Question Alignment Response (En) Response (En) Question (En) Question (De) Question (Zh) Question (Ja) Translation (X) Response (En) Training Stage-2 Response Alignment Inference Stage Response (X) LLM LLM Question (En) Translation (En) Question (X) LLM Question (Fr) Question (Sw) I II III IV Figure 2: Illustration of the two-step training framework and its extension version. Subfigure I and II illustrate the training and inference process of the orignal training framework, in which the fine-tuned model usually responds in English to non-English questions. By maintaining the question alignment stage unchanged and modifying the response alignment stage, we adapt this framework to a wider range of scenarios. In subfigure III, we incorporate En-X translation data in the second stage of training to bias the LLM to generate non-English chain-of-thought. In subfigure IV, we use code instruction data for the second stage of training to unlock the LLM\u2019s capability for reasoning with executable code. query into English and then generate responses based on these translations. However, it has been found that this prompting method is not always effective for open-source LLMs (Zhu et al., 2024). To improve multilingual capabilities of open-source LLMs, Chen et al. (2023) and She et al. (2024) follow the translate-training paradigm and use machine-translated multilingual data for instruction-tuning. But translating chainof-thought response is error-prone (Zhu et al., 2024) and continuously translating an increasing volume of English instruction data is prohibitive. To make the best use of continuously evolving English instruction data and avoid drawbacks of translate-training paradigm, Zhu et al. (2024) propose the question alignment approach to guide LLMs to implicitly relate non-English questions to their English counterparts and solve the reasoning problems with their English expertise. Aligned with that philosophy, Yoon et al. (2024) also recognize the importance of building language alignment and explore the use of an additional multilingual encoder, the encoder of MT5 (Xue et al., 2021), to map multilingual queries into the LLM\u2019s English semantic space for unified processing. 
In this paper, we build upon our previous efforts and delve deeper to broaden the scope of our work and provide deepened insights. 3 Methodology In this section, we will recap the two-step training framework (Zhu et al., 2024) (Section 3.1) and introduce how we extend this framework to diverse reasoning scenarios (Section 3.2). Finally, we present efficient recipes to scale this framework to extremely large language models (Section 3.3). 3.1 Recap: Two-stage Training Framework Our training work consists of two stages: question alignment and response alignment (illustrated in Figure 2). During quesiton alignment, we utilize X-En question translation data (Xe, Xl) to train the model to associate non-English questions Xl with their English counterparts Xe (subfigure I in Figure 2). This process enhances language alignment within the large language model. The optimization objective can be written as: arg min \u03b8 X l\u2208L \u2212log p\u03b8(Xe|Xl) where l is the target language, L is the set of considered non-English languages, and \u03b8 denotes the model\u2019s parameters. During response alignment, we utilize English cutting-edge instruction data De to unlock LLM\u2019s English reasoning capability (subfigure II in Figure 2). The optimization objective is: arg min \u03b8 X {Xe,Ye}\u2208De \u2212log p\u03b8(Ye|Xe) where Xe and Ye denotes the English question and its corresponding response respectively. Thanks to the previously established language alignment, the LLM can effective leverage its English expertise in non-English question contexts. 3.2 Extending to Diverse Reasoning Scenarios with Specialized Data In this section, we present how we incorporate specialized data to adapt the two-step training framework to diverse reasoning scenarios. Math Reasoning with chain-of-thought in multiple languages The first reasoning scenario is solving mathematical reasoning task through chain-of-thought. To unlock the model\u2019s capability on this, we utilize the instruction data depicted in Figure 3. In each question-response pair, the math question provides a background and requires a numerical answer, while the response includes both the numerical answer and an annotated stepby-step natural language reasoning process used to derive the answer. The intermediate chain-ofthought not only enhances reasoning accuracy but also improves explainability and verifiability of LLM\u2019s reasoning behaviour (Wei et al., 2022). Training Stage II: Response Alignment Randy has 60 mango trees on his farm. He also has 5 less than half as many coconut trees as mango trees. How many trees does Randy have in all on his farm? Half of the number of Randy's mango trees is 60/2 = 30 trees. So Randy has 30 5 = 25 coconut trees. Therefore, Randy has 60 + 25 = 85 trees on his farm. tuning stage I model with cutting-edge English-only instruction data \u03d5 arg min \u03d5 \u2211 {X,Y}\u2208D \u2212log p\u03d5(Y|X) Question X Response Y Question Response Figure 3: Illustration of our employed instruction data. We use this instruction data to teach model to solve mathematical reasoning task with chain-of-thought. However, we notice that the model, when finetuned in the manner described above, tends to respond in English to multilingual questions. In certain situations, we would like the model to respond in the same language as the question, thereby enabling users from diverse linguistic backgrounds to understand the model\u2019s problem solving process (Kew et al., 2023). 
We discover that it is possible to achieve this by incorporating publicly available general domain translation data. By mixing English translation data with En-X translation data during response alignment (subfigure III in Figure 2), the fine-tuned LLM will benefit from multi-task training and is capable of spontaneously producing multilingual outputs, even when no annotated multilingual responses are available during training. Training Randy has 60 mango trees on his farm. He also has 5 less than half as many coconut trees as mango trees. How many trees does Randy have in all on his farm? Let's solve this problem using Python code. <llm-code> mango_trees = 60 half_mango_trees = mango_trees / 2 coconut_trees = half_mango_trees 5 total_trees = mango_trees + coconut_trees total_trees </llm-code> <llm-code-output> 85.0 </llm-code-output> tuning stage I model Question X Response Y Question Response Question Figure 4: Illustration of our employed instruction data. We use this instruction data to teach model to solve mathematical reasoning task with executable code. Math reasoning with executable code The second reasoning scenario involves solving mathematical reasoning task with executable code. In this scenario, the response no longer contains a chain-of-though in natrual language; instead, it consists of Python code. One major advantage of reasoning through executable code is that it helps avoid basic computational errors (Toshniwal et al., 2024). To unlock the model\u2019s capacity on this, we utilize the instruction data depicted in Figure 4. Specifically, the model needs to generate python code enclosed within \u201c<llm-code>\u201d and \u201c</llm-code>\u201d tags. A Python interpreter will be used to execute this code block. The derived numerical answer, corresponding to the value of the variable in the last line of the code block, is enclosed within \u201c<llm-code-output>\u201d and \u201c</llmcode-output>\u201d tags. Common sense reasoning The third reasoning scenario is common sense reasoning. Different from math reasoning, this scenario requires the model to solve the given question based on its common sense knowledge. To unlock the model\u2019s capability on this, we utilize the instruction data depicted in Figure 5. In this task, the model needs to select the appropriate answer from the given options through common sense reasoning. When does an apple tree have apples that are not yet ready to eat? Option: (A) summer (B) fall (C) spring (D) valley (E) washington state The answer is (A). Question Response Figure 5: Illustration of our employed instruction data. We use this instruction data to teach model to solve common sense reasoning task. 3.3 Scaling to Extremely Large LM Extremely large language models, e.g., LLaMA270B, Mixtral-8\u00d722B, often demonstrate state-ofthe-art performance among open-source LLMs. Therefore, it is non-trivial to examine how well the training framework scales to these models and whether it offers further benefits on the multilingual performance of these powerful LLMs. Vanilla fine-tuning While fully fine-tuning extremely large models is a straightforward solution, the process can be prohibitively time-consuming and computationally expensive, due to the vast number of parameters that need to be optimized. Efficient proxy-tuning Inspired by Liu et al. (2024), we explore proxy-tuning as an efficient alternative solution. The objective of proxy-tuning is to guide a large pre-trained model M to behave like a tuned model without updating any parameters. 
This is achieved by employing a small pretrained model M\u2212and a small instruction-tuned model M+, which serve as the expert model and the anti-expert model. The small models must have the same vocabulary as the large model to support arithmetic operations on prediction distributions. The underlying assumption of proxytuning is that the difference in logits between M\u2212 and M+ can approximate the difference between M and a truly-tuned model. Specifically, at each inference step t, we condition the base model M, the expert M+ and the anti-expert M\u2212on the question X and the generated prefix Y<t. The probability distribution for the proxy-tuned model \u02dc M is derived from the prediction distributions of these models: p \u02dc M(Yt|X, Y<t) \u221dpM(Yt|X, Y<t) pM+(Yt|X, Y<t) pM\u2212(Yt|X, Y<t) In subsequent experiments, we will illustrate the effectiveness of this strategy across diverse model families. We also discover that the selection of the small expert and anti-expert model is crucial for the final performance. By carefully selecting small models for proxy-tuning, we can almost recover the performance of fully fine-tuning in a trainingfree manner. 4 Experiment setting Base LLMs We consider a range of the most powerful open-source LLMs for our experiments. In most experiments, we use LLaMA2-7B and LLaMA2-13B as the base models. In experiments involving reasoning with python code, we use CodeLLaMA-7B (Roziere et al., 2023) as the base model. In experiments involving extremely large language models, we consider LLaMA270B, LLaMA3-70B, Mixtral-8x7B and Mixtral8x22B from diverse model families. Training dataset In multlingual mathematical reasoning, we use multilingual questions from GSM8KINSTRUCT (Chen et al., 2023) for question alignment. For response alignment, we use METAMATHQA (Yuan et al., 2023) to teach LLM to reasoning with chain-of-thought. To teach LLM to reasoning with Python code, we use OPENMATHINSTRUCT (Toshniwal et al., 2024). In multlingual common sense reasoning, we use multilingual questions from XCSQA-TEST for question alignment and use English supervised data in XCSQA-TRAIN (Lin et al., 2021) for response alignment. Statistics of involved datasets are reported in Table 1. Dataset # Lang # Sample Usage Question Response Scenario METAMATHQA 1 395,000 Training ! ! 1 \u20dd OPENMATHINSTRUCT 1 1,343,849 Training ! ! 2 \u20dd GSM8KINSTRUCT 10 73,559 Training ! % 1 \u20dd2 \u20dd MGSM 10 2,500 Evaluation ! ! 1 \u20dd2 \u20dd MSVAMP 10 10,000 Evaluation ! ! 1 \u20dd2 \u20dd XCSQA-TRAIN 1 8,888 Training ! ! 3 \u20dd XCSQA-TEST 15 17,184 Training ! % 3 \u20dd XCSQA-DEV 15 16,000 Evaluation ! ! 3 \u20dd Table 1: Statistics of involved datasets. \u201c# Lang\u201d denotes the number of languages covered by the dataset and \u201c# Sample\u201d refers to the total number of samples it contains. The labels \u201cQuestion\u201d and \u201cResponse\u201d denotes whether each sample includes a question annotated with a response. The symbols 1 \u20dd2 \u20dd3 \u20ddcorrespond to the three scenarios discussed in Section 3.2: math reasoning with chain-of-thought, math reasoning with executable code and common sense reasoning. Training details We use QAlign2 as our code base. We use consistent training hyper-parameters across two stages of training. At each stage, we fine-tune LLM\u2019s full parameters for 3 epoch on eight NVIDIA A100 GPUs. The learning rate is set to 2e-5, with a batch size of 128. 
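For clarity, the proxy-tuning decoding rule described in Section 3.3 can be sketched in a few lines. The snippet below is a simplified illustration (assuming PyTorch, with random toy logits in place of real model outputs) rather than an exact reproduction of Liu et al. (2024).

import torch
import torch.nn.functional as F

def proxy_tuned_distribution(base_logits, expert_logits, anti_expert_logits):
    # p_proxy is proportional to p_base * p_expert / p_anti_expert, which in logit
    # space means adding the (tuned - untuned) offset of the small models to the
    # logits of the large base model.
    return F.softmax(base_logits + (expert_logits - anti_expert_logits), dim=-1)

vocab_size = 5  # toy vocabulary; the real models share a much larger common vocabulary
base = torch.randn(vocab_size)         # large untuned model, e.g. LLaMA2-70B
expert = torch.randn(vocab_size)       # small tuned model, e.g. QAlign->MonoReason-13B
anti_expert = torch.randn(vocab_size)  # small untuned model, e.g. LLaMA2-13B

probs = proxy_tuned_distribution(base, expert, anti_expert)
next_token = torch.argmax(probs).item()  # greedy choice for this decoding step
print(probs, next_token)

As noted above, the three models must share a vocabulary so that their next-token logits are directly comparable.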
Baseline models The primary baseline for our approach is fine-tuning with cuttingedge English instruction data (MonoReason). Comparing this baseline with our approach (QAlign\u2192MonoReason) can directly illustrate the benefits of performing question alignment and the effectiveness of leveraging English expertise. In our experiments on mathematical reasoning, we also present results from fine-tuning with various types of English instruction data to demonstrate the supiority of using the most advanced instruction data within our framework. Evaluation dataset We use MGSM (Shi et al., 2022) and MSVAMP (Chen et al., 2023) to evaluate LLM\u2019s performance on multilingual mathematical reasoning. We use XCSQA-DEV to evaluate LLM\u2019s performance on multilingual common sense reasoning. We use exact-match accuracy as the evaluation metric. Dataset statistics are reported in Table 1. 5 Experiment results In this section, we will report experiment results and introduce our main findings. The two-step training framework is broadly applicable across diverse reasoning scenar2https://github.com/NJUNLP/QAlign ios On the mathematical reasoning benchmarks MGSM and MSVAMP, the question alignment approach effectively mitigates the performance gap between English and non-English by leveraging LLM\u2019s English expertise. The improvement in average multilingual performance is evident both when the model reasons with chainof-thought (Table 2) and reasons with executable Python code (Figure 6). We can also see that this approach is effective across different model sizes. Our fine-tuned 70B models reach a new performance ceiling on multilingual reasoning benchmarks, achieving an accuracy of 63.0% on MGSM and 66.4% on MSVAMP. On the common sense reasoning benchamrk XCSQA, the improvements are also significant. The added question alignment stage consistently enhances LLM\u2019s reasoning accuracy across all 15 languages, demonstrating its broad applicability. Incorporating En-X translation data during response alignment can implicitly encourage LLM to generate non-English CoT As shown in Table 3, the question-aligned model (QAlign\u2192MonoReason) usually exhibits extremely low question-response language consistency3 for non-English tasks, because it tends to respond in English to multilingual questions. After incorporating En-X Flores4 (Goyal et al., 2022) translation data, with English instruction data for 3We perform language identification with the OPENLID toolkit (Burchell et al., 2023). 4We use both the development set and test set of the Flores dataset as the multilingual translation data for training. There are totally 36,000 translation pairs covering nine X-En translation directions. MGSM (reasoning with chain-of-thought) System (7B) Bn Th Sw Ja Zh De Fr Ru Es En Avg. SFT\u2020 (Touvron et al., 2023) 3.2 4.8 5.2 15.2 22.4 37.2 34.4 28.0 32.4 43.2 22.6 RFT\u2020 (Yuan et al., 2023) 2.4 2.0 2.8 6.8 16.8 33.6 34.0 29.2 34.0 44.8 20.6 MAmmoTH\u2020 (Yue et al., 2023) 3.6 4.8 2.4 10.8 17.2 33.2 32.8 26.0 32.4 49.6 21.3 WizardMath\u2020 (Luo et al., 2023) 2.0 4.0 3.4 24.0 22.4 30.4 30.4 30.8 34.8 47.6 23.0 MetaMath\u2020 (Yu et al., 2023) 6.4 4.0 3.2 39.2 38.8 56.8 52.8 47.2 58.0 63.2 37.0 MonoReason 7.6 5.6 5.2 34.0 45.2 54.0 56.8 51.6 58.8 65.5 38.4 QAlign\u2192MonoReason (Ours) 32.4 39.6 40.4 44.0 48.4 54.8 56.8 52.4 59.6 68.0 49.6 System (13B) Bn Th Sw Ja Zh De Fr Ru Es En Avg. 
SFT\u2020 (Touvron et al., 2023) 6.0 6.8 7.6 25.2 32.8 42.8 40.8 39.2 45.2 50.4 29.7 RFT\u2020 (Yuan et al., 2023) 3.2 4.4 3.6 26.4 33.6 38.4 44.8 41.6 46.8 52.0 29.5 MAmmoTH\u2020 (Yue et al., 2023) 3.6 5.2 1.6 19.2 31.2 45.6 39.6 36.8 50.0 56.4 28.9 WizardMath\u2020 (Luo et al., 2023) 6.4 5.6 5.6 22.0 28.0 40.4 42.0 34.4 45.6 52.8 28.3 MetaMath\u2020 (Yu et al., 2023) 11.6 6.4 7.6 42.8 49.2 64.8 65.2 63.6 65.2 67.2 44.4 MonoReason 12.4 11.2 6.4 42.0 46.0 64.0 62.4 61.6 64.8 68.4 43.9 QAlign\u2192MonoReason (Ours, FT) 38.4 49.6 46.0 52.4 59.2 62.0 62.4 64.4 67.2 69.2 57.1 System (70B) Bn Th Sw Ja Zh De Fr Ru Es En Avg. MonoReason 13.2 6.4 7.2 56.4 56.8 69.6 71.2 73.2 75.6 78.4 50.8 QAlign\u2192MonoReason (Ours) 44.0 55.6 50.4 61.6 61.2 67.6 70.8 68.8 73.6 76.0 63.0 MSVAMP (reasoning with chain-of-thought) System (7B) Bn Th Sw Ja Zh De Fr Ru Es En Avg. SFT\u2020 (Touvron et al., 2023) 11.5 18.2 17.2 31.6 35.2 39.0 39.1 39.1 39.2 38.8 30.9 RFT\u2020 (Yuan et al., 2023) 7.7 16.9 14.9 33.9 34.9 40.8 41.5 39.5 42.5 42.7 31.3 MAmmoTH\u2020 (Yue et al., 2023) 4.3 6.3 4.2 26.7 26.8 39.6 39.9 33.7 42.9 45.1 26.3 WizardMath\u2020 (Luo et al., 2023) 16.1 17.0 10.3 37.9 36.3 39.2 37.7 37.4 44.8 48.5 32.5 MetaMath\u2020 (Yu et al., 2023) 14.2 17.8 16.5 53.2 53.1 61.4 60.7 58.9 61.2 65.5 46.3 MonoReason 15.0 17.1 15.4 51.9 54.4 60.9 62.2 59.3 63.3 65.5 46.2 QAlign\u2192MonoReason (Ours) 41.7 47.7 54.8 58.0 55.7 62.8 63.2 61.1 63.3 65.3 57.2 System (13B) Bn Th Sw Ja Zh De Fr Ru Es En Avg. SFT\u2020 (Touvron et al., 2023) 13.9 23.4 19.8 41.8 43.3 46.2 47.8 47.8 46.1 50.9 38.1 RFT\u2020 (Yuan et al., 2023) 12.2 24.8 19.4 42.4 42.3 45.1 45.2 46.5 45.6 47.1 37.1 MAmmoTH\u2020 (Yue et al., 2023) 5.0 13.7 12.9 42.2 47.7 52.3 53.8 50.7 53.9 53.4 38.6 WizardMath\u2020 (Luo et al., 2023) 13.7 16.3 12.5 29.5 37.0 48.7 49.4 43.8 49.4 56.3 35.7 MetaMath\u2020 (Yu et al., 2023) 14.6 15.7 17.4 57.0 56.6 67.3 64.7 63.7 65.9 67.7 49.1 MonoReason 20.6 20.5 19.1 57.0 58.8 68.4 68.1 67.5 68.9 68.9 51.8 QAlign\u2192MonoReason (Ours) 49.2 55.5 55.2 64.3 63.8 69.5 68.1 66.4 66.4 67.6 62.6 System (70B) Bn Th Sw Ja Zh De Fr Ru Es En Avg. MonoReason 21.9 16.1 22.3 66.9 68.2 73.9 71.5 72.0 74.7 72.8 56.0 QAlign\u2192MonoReason (Ours) 50.8 56.5 60.8 68.3 67.7 72.0 71.4 69.1 72.4 74.5 66.4 Table 2: Results on MGSM and MSVAMP dataset. \u201cAvg.\u201d represents the average multilingual performance and bold text denotes the highest score among systems of the same size. The dagger symbol denotes that the results for these models are taken from the published results of Zhu et al. (2024). the second traning stage, the fine-tuned model more frequently responds in the same language as the question, demonstrating a significant increase in language consistency (52.3% vs. 9.7%). An intuitive explaination to this phenomenon is that EnX translation training implicitly biases the LLM to convert its English thought to non-English. However, we also notice that while this approach increases question-response language consistency, it compromises reasoning accuracy. Future work will need to address this trade-off and balance both language consistency and reasoning accuracy without significant sacrifices in either area. Question alignment approach scales well to extremely large language models with both vanilla fine-tuning and efficient proxy-tuning. Table 2 shows that question alignment approach scales well on the 70B model. 
However, when the number of parameters to be optimized increases significantly, the computational cost and time required for fine-tuning become prohibitively high5. Consequently, we explore proxy-tuning as an efficient alternative to fully fine-tuning. Experiment results are reported in Table 4. At first, we apply question alignment to various small base models and observe that this approach consistently yields substantial improvements, with at least an 5For instance, in our experiments, instruction-tuning LLaMA2-70B with METAMATHQA dataset (comprising 395K question-response pairs) on eight A100 GPUs takes 15 days. Bengali Thai Swahili Japanese Chinese German French Russian Spanish English Average 0 20 40 Accuracy mGSM (reasoning with executable code) Monoreason (7B) QAlign Monoreason (7B) Bengali Thai Swahili Japanese Chinese German French Russian Spanish English Average 20 40 60 Accuracy mSVAMP (reasoning with executable code) Monoreason (7B) QAlign Monoreason (7B) Urdu Hindi Swahili Japanese Vietanmese Polish Chinese Dutch Russian Italian German Portuguese French Spanish English Average 20 40 60 80 Accuracy XCSQA (common sense reasoning) Monoreason (7B) QAlign Monoreason (7B) Figure 6: Experiment results on MGSM, MSVAMP and XCSQA. On the former two dataset, the fine-tuned model will solve the mathematical problems with python code. On the latter dataset, the fine-tuned model will solve multi-choice problems concerning with common sense reasoning. System (7B) Question-Response Language Consistency (MGSM) Bn Th Sw Ja Zh De Fr Ru Es En Avg. QAlign\u2192MonoReason 0.0 0.0 2.9 0.0 0.0 0.5 0.1 0.1 0.0 93.8 9.7 QAlign\u2192MonoReason+Flores 26.8 42.7 49.3 63.1 26.8 63.2 36.9 82.4 37.9 93.4 52.3 System (7B) Reasoning Accuracy (MGSM) Bn Th Sw Ja Zh De Fr Ru Es En Avg. QAlign\u2192MonoReason 32.4 39.6 40.4 44.0 48.4 54.8 56.8 52.4 59.6 68.0 49.6 QAlign\u2192MonoReason+Flores 21.2 21.6 24.8 28.0 41.6 50.8 55.2 45.6 54.4 66.4 41.0 Table 3: Effects of incorporating En-X Flores translation training for the second stage training. En-X translation training can implicitly bias LLM to generate non-English chain-of-thought and increase the question-response language consistency. 11.1% average increase in average non-English performance. Then, by using the small fine-tuned model as the expert model and its untuned version as the anti-expert model, we can guide the large pre-trained model to behave like a tuned model without updating any parameters. Across different model families, proxy-tuning consistently enhances performance in both non-English and English tasks, effectively extrapolating our results from small models to extremely large models. Furthermore, experiments on LLaMA2 demonstrate that carefully selecting small models for proxy-tuning, such as using LLaMA2-13B as the small proxy model, allows us to achieve 98% of the performance of fully fine-tuning (61.8 vs. 63.0). 6 Mechanism Analysis In this section, we conduct further analysis to gain a deeper understanding of the training regime and its impact on LLM\u2019s working pattern. The analysis is performed in the scenario of mathematical reasoning with chain-of-thought on MGSM dataset. Question alignment produces a more unified semantic space, facilitating the utilization of English expertise in non-English contexts Our analysis begins with the representation space, which fundamentally determines how the model derives the output answer. 
We input multilingual questions from the MGSM dataset into MonoReason and QAlign\u2192MonoReason models, and then visualize6 the representations of the last input to6For visualization, we use T-SNE (Van der Maaten and Hinton, 2008) for dimension reduction. Small tuned Large tuned Small untuned Large untuned MGSM Non-En En Avg. MonoReason (7B) 35.4 65.5 38.4 QAlign\u2192MonoReason (7B) 47.6 68.0 49.6 QAlign\u2192MonoReason (7B) LLaMA2 (7B) LLaMA2 (70B) 55.8 70.8 57.3 MonoReason (13B) 41.2 68.4 43.9 QAlign\u2192MonoReason (13B) 55.7 69.2 57.1 QAlign\u2192MonoReason (13B) LLaMA2 (13B) LLaMA2 (70B) 60.1 76.8 61.8 QAlign\u2192MonoReason (70B) 61.5 76.0 63.0 MonoReason (8B) 47.3 74.4 50.0 QAlign\u2192MonoReason (8B) 58.4 72.0 59.8 QAlign\u2192MonoReason (8B) LLaMA3 (8B) LLaMA3 (70B) 64.0 77.2 65.4 MonoReason (7B) 35.2 70.4 38.7 QAlign\u2192MonoReason (7B) 48.2 70.8 50.4 QAlign\u2192MonoReason (7B) Mistral (7B) Mixtral (8\u00d77B) 49.4 74.4 51.9 QAlign\u2192MonoReason (7B) Mistral (7B) Mixtral (8\u00d722B) 55.6 78.0 57.9 Table 4: Experiment results on proxy-tuning. \u201cNon-En\u201d and \u201cAvg.\u201d denotes the average multilingual performance on non-English languages and all ten languages respectively. Bold text denotes the highest score among the same model families. Layer 1 MonoReason (7B) Layer 8 Layer 16 Layer 24 Layer 32 QAlign MonoReason (7B) Bengali Thai Swahili Japanese Chinese German French Russian Spanish English Figure 7: Visualization analysis on the representations of the last input tokens from multilingual questions. For brevity, we uniformly sample 5 layers out of the total 32 layers to illustrate the visualized distribution. Different colors denote the languages of the questions. kens, which decides the content of the first output tokens and serves as an important clue for how the model starts its reasoning process (Wendler et al., 2024). The visualization results are shown in Figure 7. For both models, the context representations of multilingual queries always stay apart in the bottom layer (1st and 8th layers). But from the middle to top layers, a notable difference emerges between our alignment-enhanced model and its unaligned counterpart: question alignment produce a more compact space, enabling the model to process multilingual queries in a unified way and facilitating the sharing of its English expertise across different languages. The sharing of the English expertise makes the model to employ similar problem-sovling process to address multilingual questions In addition to analyzing the layer-wise generation process of the initial output token, we further analyze the entire chain-of-thought and reveal the larger impact brought by the unified representation space. We evaluate the consistency among them when the model is presented with the same question expressed in different languages. Specifically, we extract intermediate computation results as lists from the chain-of-thought and compute the Jaccard similarity between them to evaluate the consistency of the reasoning paths, as illustrated in Table 5. 
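A minimal sketch of this consistency metric is given below; the regex-based extraction of intermediate results is a simplification of the procedure illustrated in Table 5, and the two example responses paraphrase the unicorn case from that table.

import re

def extract_intermediate_results(cot):
    # Collect the numbers that appear directly after an '=' sign in the chain-of-thought.
    return [float(x) for x in re.findall(r"=\s*(-?\d+(?:\.\d+)?)", cot)]

def reasoning_path_consistency(cot_a, cot_b):
    # Jaccard similarity between the sets of intermediate computation results.
    a, b = set(extract_intermediate_results(cot_a)), set(extract_intermediate_results(cot_b))
    return len(a & b) / len(a | b) if a | b else 0.0

cot_en = "there are 27/3 = 9 unicorns in the Highlands, so 9 * 2/3 = 6 are female"
cot_zh = "then there are 27/3 = 9 unicorns, and 9 * 2/3 = 6 female unicorns"
print(reasoning_path_consistency(cot_en, cot_zh))  # 1.0: identical reasoning paths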
The quantitative results are depicted in En Es Ru Fr De Zh Ja Sw Th Es Ru Fr De Zh Ja Sw Th Bn 58.7 53.2 50.4 57.7 55.2 49.1 57.2 52.9 48.9 51.1 42.8 41.8 40.0 41.1 38.6 35.0 35.1 35.8 35.6 34.4 37.2 14.7 14.4 15.5 15.8 14.5 17.8 20.6 16.1 16.3 17.7 15.8 15.8 18.7 19.3 20.5 18.7 18.3 18.7 18.1 16.7 20.6 20.1 21.5 19.4 MonoReason (7B) En Es Ru Fr De Zh Ja Sw Th Es Ru Fr De Zh Ja Sw Th Bn 60.3 53.5 56.2 57.4 57.7 55.6 58.0 56.7 55.2 57.4 49.4 50.5 49.6 49.9 49.2 45.9 46.1 44.6 45.3 43.8 44.5 45.8 47.1 46.3 45.8 45.4 42.3 40.6 43.1 44.6 45.7 42.8 42.9 44.3 41.2 41.5 35.6 38.9 38.5 38.6 34.6 36.7 35.6 36.1 36.5 QAlign MonoReason (7B) En Es Ru Fr De Zh Ja Sw Th Es Ru Fr De Zh Ja Sw Th Bn 62.7 55.1 51.6 58.9 59.8 51.4 63.0 62.0 52.8 57.8 43.2 44.1 46.7 41.7 44.2 41.6 40.5 40.9 40.6 42.7 41.6 15.4 16.9 17.9 18.8 18.2 19.4 21.2 16.4 17.4 18.8 17.8 19.3 21.9 20.6 25.9 18.0 20.1 21.1 20.6 20.6 20.7 22.3 24.5 24.6 MonoReason (13B) En Es Ru Fr De Zh Ja Sw Th Es Ru Fr De Zh Ja Sw Th Bn 60.1 60.8 61.4 61.5 59.9 59.2 61.3 64.4 63.7 61.7 52.9 53.2 57.0 54.6 53.5 53.7 51.7 55.3 50.3 52.1 50.5 49.9 47.6 50.1 50.2 49.6 45.9 43.5 46.0 45.8 48.8 47.6 47.7 47.2 45.5 45.7 41.8 39.8 42.2 43.6 41.1 38.7 38.9 39.4 38.8 QAlign MonoReason (13B) 0 10 20 30 40 50 60 70 Figure 8: Comparative analysis of reasoning path consistency across different systems. The number in each cell represents the reasoning path consistency between the languages in the corresponding row and column. The number is in percentage. Darker blue denotes higher level of consistency. [Question (English)] There are 27 unicorns left in the world. One third of them are in the Scottish Highlands. Two thirds of the Scottish unicorns are female. How many female Scottish unicorns are there? [Chain-of-thought response] One third of the unicorns are in the Scottish Highlands, so there are 27/3 = 9 unicorns in the Highlands. Two thirds of the unicorns in the Highlands are female, so there are 9 * 2/3 = 6 female unicorns in the Highlands. Therefore, there are 6 female Scottish unicorns. The answer is: 6 [Extracted intermediate computation results] A = [9, 6, 6, 6] [Question (Chinese)] \u4e16\u754c\u4e0a\u5269\u4e0b27 \u53ea\u72ec\u89d2\u517d\uff0c\u5176\u4e2d\u4e09\u5206\u4e4b\u4e00\u5728\u82cf\u683c\u5170\u9ad8\u5730\u3002 \u82cf\u683c\u5170\u9ad8\u5730\u7684\u72ec\u89d2\u517d\u7684\u4e09\u5206\u4e4b\u4e8c\u662f\u96cc\u6027\u3002\u82cf\u683c\u5170\u96cc\u6027\u72ec\u89d2\u517d\u6709\u591a\u5c11\u53ea\uff1f [Chain-of-thought response] If there are 27 remaining unicorns in the world, and one-third of them are in the Scottish Highlands, then there are 27/3 = 9 unicorns in the Scottish Highlands. If two-thirds of the unicorns in the Scottish Highlands are female, then there are 9 * 2/3 = 6 female unicorns in the Scottish Highlands. Therefore, there are 6 female unicorns in the Scottish Highlands. The answer is: 6 [Extracted intermediate computation results] B = [9, 6, 6, 6] [Reasoning path consistency] \u02dc A = set(A), \u02dc B = set(B) J(A, B) = | \u02dc A \u2229\u02dc B|/| \u02dc A \u222a\u02dc B| = 100% Table 5: With this case, we illustrate the our computation process for evaluating reasoning path consistency. 0k 1k 2k 4k 8k Question Translation Data per Language 10 20 30 40 50 60 70 Accuracy (%) Bengali Thai Swahili Japanese Chinese German French Russian Spanish English Figure 9: The impact of the size of the question translation data on multilingual reasoning accuracy. Figure 8. 
The alignment-enhanced models generally have higher consistency compared to their unaligned counterparts, particularly in low-resource languages. This evidence further demonstrate that the question alignment approach can facilitate the sharing of reasoning abilities across languages. The impact of the size of question translation data Another important factor that influences the effectiveness of our training framework is the size of the question translation data. To analyze this factor, we uniformly downsample the multilingual question translation data from 8,000 pairs per language to 1,000 per language. Generally, question alignment does not significantly affect the model\u2019s proficiency in English but does impact its performance on non-English tasks. For highresource languages, the performance peak is usually achieved at smaller data scale, indicating that question alignment can be established in a more data-efficient way for these languages. For lowresource languages, such as Bengali, Thai, and Swahili, scaling up the question translation data for quesiton alignment always yields substantial improvement, indicating the potential of this approach to empower LLMs in supporting long-tail low-resource languages. 7 Conclusion In this paper, we present a comprehensive extension of our previous work and address its remaining limitations. At first, we demonstrate how to extend the two-stage training framework to a variety of challenging reasoning scenarios, including math reasoning with chain-of-thought in multiple languages, math reasoning with executable code and common sense reasoning. Extensive experiment results on multilingual reasoning benchmarks MGSM, MSVAMP, XCSQA demonstrate that by incorporating specialized data, the training framework can be broadly applicable across different reasoning scenarios and model families. Our question alignment approach also scales well with extremely large language models, e.g., LLaMA270B, LLaMA3-70B and Mixtral-8\u00d722B. Employing either vanilla fine-tuning or the efficient alternative proxy tuning, we can enhance the multilingual reasoning performance of these most powerful open-source pre-trained models to new heights. In the mechanism analysis section, we provide indepth analysis on the representation space, reasoning paths and question translation data scales. This analysis illuminates how our training regime influences LLM\u2019s multilingual working pattern."
16
+ }
intro_8K/test_introduction_short_2405.01350v1.json ADDED
@@ -0,0 +1,17 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01350v1",
3
+ "title": "Community-Invariant Graph Contrastive Learning",
4
+ "abstract": "Graph augmentation has received great attention in recent years for graph\ncontrastive learning (GCL) to learn well-generalized node/graph\nrepresentations. However, mainstream GCL methods often favor randomly\ndisrupting graphs for augmentation, which shows limited generalization and\ninevitably leads to the corruption of high-level graph information, i.e., the\ngraph community. Moreover, current knowledge-based graph augmentation methods\ncan only focus on either topology or node features, causing the model to lack\nrobustness against various types of noise. To address these limitations, this\nresearch investigated the role of the graph community in graph augmentation and\nfigured out its crucial advantage for learnable graph augmentation. Based on\nour observations, we propose a community-invariant GCL framework to maintain\ngraph community structure during learnable graph augmentation. By maximizing\nthe spectral changes, this framework unifies the constraints of both topology\nand feature augmentation, enhancing the model's robustness. Empirical evidence\non 21 benchmark datasets demonstrates the exclusive merits of our framework.\nCode is released on Github (https://github.com/ShiyinTan/CI-GCL.git).",
5
+ "authors": "Shiyin Tan, Dongyuan Li, Renhe Jiang, Ying Zhang, Manabu Okumura",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "cs.LG",
9
+ "cats": [
10
+ "cs.LG",
11
+ "cs.SI"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "Knowledge AND Graph",
15
+ "gt": "Graph representation learning on graph-structured data, such as molecules and social networks, has become one of the hottest topics in AI (Cao et al., 2023). Typical GNNs (Kipf & Welling, 2017) require large-scale task- specific labels, which are expensive and labor-intensive to collect. To alleviate this, graph contrastive learning (GCL) has been proposed as one of the most successful graph rep- resentation learning methods, drawing a lot of attention (Li *Equal contribution with order determined by flipping a coin. \u2020 Corresponding Author. 1Tokyo Institute of Technology 2The Uni- versity of Tokyo 3RIKEN & Tohoku University. Correspondence to: Renhe Jiang <[email protected]>. Proceedings of the 38 th International Conference on Machine Learning, PMLR 139, 2021. Copyright 2021 by the author(s). et al., 2022b). The main goal of GCL is to maximize the agreement of node representations between two augmented views to capture graph invariance information (Tian et al., 2020). Among various GCL variations, effective graph aug- mentation turns out to be the bread and butter for achieving success (Wei et al., 2023). Early studies almost adopt ran- dom graph augmentation, such as randomly dropping edges or masking features (You et al., 2020). Researchers also attempt to incorporate expert knowledge to guide graph augmentation. For instance, GCA (Zhu et al., 2021) and MoCL (Sun et al., 2021) use network science or biomedical knowledge to constrain edge dropping probabilities. How- ever, such random or knowledge-based graph augmentations are sensitive to different datasets (Shen et al., 2023) and may yield suboptimal performance (Yin et al., 2022). To achieve better generalization and globally optimal perfor- mance, learnable graph augmentation is proposed to auto- matically disrupt redundant information as much as possible to share minimal yet sufficient core information between augmented views (Tong et al., 2021; Suresh et al., 2021). Although they have achieved great success, there still re- main two open challenges worth exploring. (1) Community structure plays a crucial role in various downstream tasks, such as node classification and link prediction (Li et al., 2022a; Chen et al., 2023b). However, current GCL methods often randomly disrupt graphs during graph augmentation, which inevitably leads to the corruption of high-level graph information (i.e., community) and limits the generaliza- tion (Chiplunkar et al., 2018). (2) Current constraints em- ployed in learnable graph augmentation methods primarily focus either on topology or node features (Li et al., 2022b). For instance, GAME (Wei et al., 2023) and GCL-SPAN (Lin et al., 2023) use spectrum-based constraints for topology augmentation. Due to the asymmetry of the feature matrix, their methods cannot be extended to feature augmentation. On the other hand, COSTA (Zhang et al., 2022) designs a covariance-preserving constraint for feature augmentation, which, however, lacks effectiveness in topology augmenta- tion. By solely focusing on one type of graph augmentation (topology or feature), models may not fully exploit all avail- able information and lack robustness against different types of noise (Liu et al., 2022b). To solve the aforementioned issues, we propose a general learnable Community-Invariant GCL framework (CI-GCL), arXiv:2405.01350v1 [cs.LG] 2 May 2024 Community-Invariant Graph Contrastive Learning which unifies constraints from both topology and feature augmentation to maintain CI for learnable graph augmenta- tion. 
Specifically, when considering topology augmentation with a certain degree of disruption, we observe a nearly neg- ative correlation between community and spectral changes (see Sec 4.1). Therefore, to maximize the topology pertur- bation while ensuring community invariance, we can simply maximize graph spectral changes during topology augmen- tation. To extend our CI constraint to feature augmentation, we convert the feature matrix into a symmetric bipartite feature matrix based on the bipartite graph co-clustering technique (Zhang et al., 2023). This approach converts fea- ture augmentation into bipartite feature augmentation, while elucidating the importance of features in maintaining com- munity structure. For bipartite feature augmentation, we also observed a negative relationship between community and spectral changes, which is consistent with topology aug- mentation. This motivates us to apply our CI constraint to feature augmentation by maximizing graph spectral changes during bipartite feature augmentation. To summarize, the contributions of this research are: \u2022 We propose a learnable CI-GCL framework to automat- ically maintain CI during graph augmentation by max- imizing spectral change loss, improving the model\u2019s downstream performances. \u2022 We theoretically show that the proposed CI constraint can be applied to both topology and feature augmenta- tion, enhancing the model\u2019s robustness. \u2022 Experiments on 21 widely used benchmarks demon- strate the effectiveness and robustness of CI-GCL. Table 1. Graph augmentation (Aug.) method comparison. An ideal method should support both topology and node feature augmenta- tion, be adaptive to different datasets, be end-to-end differentiable and have efficient back-propagation (BP), and be CI and have uni- fied constraints for any augmentation to against various noise. Property Random or Constraint Learnable Graph Augmentation GraphCL JOAO GCA AutoGCL AD-GCL GCL-SPAN Ours Topology Aug. \u2713 \u2713 \u2713 \u2713 \u2713 \u2713 \u2713 Feature Aug. \u2713 \u2713 \u2713 \u2713 - - \u2713 Adaptive - \u2713 \u2713 \u2713 \u2713 \u2713 \u2713 Differentiable - - - \u2713 \u2713 - \u2713 Efficient BP - - - \u2713 \u2713 - \u2713 Community - - - - - - \u2713 Unified Constraint - - - - - - \u2713",
16
+ "main_content": "As an effective self-supervised learning paradigm, contrastive learning has achieved great success to learn text or image representations (Chen et al., 2020; Zhang et al., 2020). DGI (Velickovic et al., 2019) first adopted contrastive learning to learn robust graph representations that are invariant to various noise and operations. However, different from Euclidean or sequential data, graphs are irregular non-Euclidean data and sensitive to minor structural augmentation (Shen et al., 2023), resulting in learned graph representations being ineffective. Given that among many GCL variations, graph augmentation shows its crucial advantage for graph representation learning, many studies attempted to investigate effective graph augmentation for GCL. Prior GCL almost adopts random graph augmentation. For example, GRACE (Zhu et al., 2020) firstly uses random edge dropping and feature masking as graph augmentations. After that, GraphCL (You et al., 2020) gives an extensive study on different combinations of graph augmentations including randomly node dropping, edge perturbation, subgraph sampling, and feature masking. To make GraphCL more flexible, JOAO (You et al., 2021) automatically selects the combination of different random graph augmentations. Due to the limited generalization of random augmentation, researchers start to incorporate expert knowledge as constraints for graph augmentation. For instance, Duan et al. (2022) and GraphAug (Luo et al., 2023) employ label-invariance between original and augmented views as constraints, which achieves great success in the graph-level classification task. Recent GCL focuses on fully parameterizing graph augmentation for utilizing learnable graph augmentation to automatically determine how to disrupt graphs (Chen et al., 2023a). For example, AutoGCL (Yin et al., 2022) and ADGCL (Suresh et al., 2021) build a learnable graph generator that learns a probability distribution to help adaptively drop nodes and mask features. CGI (Wei et al., 2022) introduces the Information Bottleneck theory into GCL to remove unimportant nodes and edges between two augmented graphs by minimizing shared mutual information. GAME (Liu et al., 2022a) and GCL-SPAN (Lin et al., 2023) explore graph augmentation in spectral space, by maximizing spectral changes of high-frequency or all components to automatically drop edges. AdaGCL (Jiang et al., 2023) and GACN (Wu et al., 2023a) design graph generators and discriminators to automatically augment graphs in an adversarial style. Compared with previous studies, we are the first to point out the importance of community invariance for graph augmentation and propose a unified CI constraint for both topology and feature augmentation by simply maximizing spectral changes. Detailed comparisons are listed in Table 1. 3. Preliminary Let G = (X, A) be a graph with n nodes and m edges, where X \u2208Rn\u00d7d describes node features and A \u2208 {0, 1}n\u00d7n denotes an adjacency matrix with Aij = 1 if an edge exists between node i and j, otherwise Aij = 0. The normalized Laplacian matrix is defined as Lnorm = Community-Invariant Graph Contrastive Learning Lap(A) = In \u2212D\u22121/2AD\u22121/2, where In \u2208Rn\u00d7n is an identity matrix, D = diag (A1n) is the diagonal degree matrix with 1n \u2208Rn being an all-one vector. Graph Spectrum. The spectral decomposition of Lnorm is defined as Lnorm = Lap(A) = U\u039bU\u22a4, where the diagonal matrix \u039b = eig (Lap (A)) = diag (\u03bb1, . . . 
, \u03bbn) consists of real eigenvalues known as graph spectrum, and U = [u1, . . . , un] \u2208Rn\u00d7n are the corresponding orthonormal eigenvectors known as the spectral bases (Gene et al., 2013). Graph Representation Learning. Let G denote the whole graph space with G \u2208G. Graph representation learning aims to train an encoder f\u03b8(\u00b7) : G \u2192Rn\u00d7d\u2032 to obtain node representations. Then, it trains a readout function r\u03d5(\u00b7) : Rn\u00d7d\u2032 \u2192Rd\u2032 by pooling all node representations to obtain a low-dimensional vector for graph G, which can be used in graph-level tasks. Graph Contrastive Learning. GCL trains the encoder f\u03b8(\u00b7) to capture the maximum mutual information between the original graph and its perturbed view by graph augmentation. Formally, letting T1(G) and T2(G) denote two graph augmentation distributions of G, GCL is defined as follows: min \u03b8,\u03d5 LGCL(t1(G), t2(G), \u03b8, \u03d5), (1) where tm(G) \u223cTm(G) with m \u2208{1, 2} and LGCL measure the disagreement between two augmented graphs. 4. Methodology We first show the importance of community invariance in GCL with preliminary analysis. Then, we introduce the details of our methodology CI-GCL as illustrated by Figure 2. 4.1. Preliminary Analysis (a) Accuracy (b) Community Changes (c) Spectral Changes GraphCL AD-GCL 0.68 0.69 0.70 0.71 0.72 0.00 0.05 0.10 0.15 0.20 0.20 0.25 0.30 0.35 0.40 0.45 GraphCL+Cluster GraphCL+Destroy Figure 1. In unsupervised graph classification, we define community changes as the average ratio of the changed community labels over the number of nodes before and after graph augmentation by spectral clustering. Spectral changes are the eigenvalue changes between original and augmented graphs, using the L2 distance. Preserving community structure is crucial for learnable graph augmentation, i.e., perturbing a constrained number of edges or features that have least impact to community changes of the input graph. To show the benefits of preserving communities, we conduct a preliminary experiment by applying GraphCL (unlearnable graph augmentation) and AD-GCL (learnable graph augmentation) on the IMDB-B dataset. Specifically, we design the following four methods: (1) AD-GCL with uniformly edge dropping; (2) GraphCL with uniformly edge dropping; (3) GraphCL+Cluster augmentation that removes edges between different clusters with a higher probability; (4) GraphCL+Destroy augmentation that removes edges within the same cluster with a higher probability. Note that (3) preserves community structure, while (4) tends to disrupt community structure, as indicated by recent studies (Chiplunkar et al., 2018; Lin et al., 2022). We plot the accuracy for unsupervised graph classification as Figure 1(a) and the community changes as (b). From Figure 1(a-b), we can observe: (1) Methods\u2019 performance generally exhibits a nearly negative correlation with their community changes (i.e., less cluster changes yield higher accuracy); (2) AD-GCL outperforms GraphCL but underperforms GraphCL+Cluster. All these indicate preserving the community structure yields better results. Moreover, we also draw the spectral changes as Figure 1(c), as graph spectrum can reflect high-level graph structural information (Spielman, 2012; Lee et al., 2014). Through Figure 1(b-c), we can see that spectral changes are almost negatively correlated with community changes. 
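For reference, the two quantities plotted in Figure 1 can be computed as in the following sketch (assuming numpy, scipy and scikit-learn; the toy graph and the label-matching step are our own illustrative choices rather than the exact protocol used for Figure 1).

import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import SpectralClustering

def normalized_laplacian(A):
    # L = I - D^{-1/2} A D^{-1/2}; assumes no isolated nodes.
    d_inv_sqrt = A.sum(axis=1) ** -0.5
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def spectral_change(A, A_aug):
    # L2 distance between the eigenvalue spectra of the two normalized Laplacians.
    return float(np.linalg.norm(np.linalg.eigvalsh(normalized_laplacian(A))
                                - np.linalg.eigvalsh(normalized_laplacian(A_aug))))

def community_change(A, A_aug, k=2):
    # Ratio of nodes whose spectral-clustering label changes after augmentation,
    # after aligning the two labelings with a best one-to-one matching.
    y = SpectralClustering(n_clusters=k, affinity="precomputed").fit_predict(A)
    y_aug = SpectralClustering(n_clusters=k, affinity="precomputed").fit_predict(A_aug)
    overlap = np.array([[np.sum((y == i) & (y_aug == j)) for j in range(k)] for i in range(k)])
    rows, cols = linear_sum_assignment(-overlap)
    relabel = {int(c): int(r) for r, c in zip(rows, cols)}
    return float(np.mean(y != np.array([relabel[int(c)] for c in y_aug])))

# Toy graph: two triangles joined by one bridge edge; the augmentation drops the bridge,
# which changes the spectrum noticeably but leaves the two communities intact.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
A_aug = A.copy()
A_aug[2, 3] = A_aug[3, 2] = 0.0

print(spectral_change(A, A_aug), community_change(A, A_aug))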
That is to say, we can preserve community invariance during graph augmentation by maximizing spectral changes, based on which we expand on our methodology as follows. 4.2. Community-Invariant Graph Augmentation Topology Augmentation. We conduct edge perturbation and node dropping operations as our topology augmentation. For edge perturbation, we define T1(G) as a Bernoulli distribution Bern(P EP ij ) for each edge Aij. Then, we can sample the edge perturbation matrices \u2206EP \u2208{0, 1}n\u00d7n, where \u2206ij \u223cBern(P EP ij ) indicates whether to flip the edge between nodes i and j. The edge is flipped if \u2206EP ij = 1; otherwise, it remains unchanged. A sampled augmented graph by edge augmentation can be formulated as: tEP 1 (G) = A + C \u25e6\u2206EP, C = Ac \u2212A, (2) where \u25e6denotes an element-wise product and Ac represents the complement matrix of A, calculated as Ac = J\u2212In\u2212A, where J denotes an all-one matrix. Thus, C \u2208{\u22121, 1}n\u00d7n denotes all edge flipping operations, i.e., an edge is added between nodes i and j if Cij = 1, and removed if Cij = \u22121. However, Eq.(2) cannot be directly applied to learnable graph augmentation, since Bernoulli sampling is nondifferentiable. Inspired by Jang et al. (2017), we soften it from the discrete Bernoulli distribution space to the continuous space with a range \u2206EP ij \u2208(0, 1)n\u00d7n using GumbelCommunity-Invariant Graph Contrastive Learning ... ... GNN Encoder Contrastive Shared f (.) r (.) \u03b8 \u1d69 1 2 3 4 5 6 Node features Input Graph Topology Node Feature ... 12 123 0 0 Bipartite Feature Node Embeddings Feature Embeddings Spectral Decomposation MLP Feature Augmentation T (G) 2 1 2 3 4 5 6 1 2 3 4 5 6 t (G) 2 Feature View Feature Masking Loss Conmmunity-Invariant Constraint Topology Augmentation T (G) 1 \u03bb \u03bb \u03bb 1 2 3 = \u001f \u001f Spectral Graph max max LEP( ) Node Embeddings ... 123456 1 2 3 4 5 6 Adjacency Matrix Spectral Decomposation Spectral Decomposation P P P P ... 11 13 23 34 MLP EP EP EP EP ... =Gumbel-Softmax( ) \u0394ij EP t (G) 1 Topology View Edge Perturbation GNN Encoder LFM( ) bases spectrum ... P EP ij \u0394 EP \u0394 FM =Gumbel-Softmax( ) \u0394ij FM P FM ij P P P P ... 11 12 21 22 FM FM FM FM ... ... ... ... Figure 2. The proposed CI-GCL consists of two core components: (1) Learnable graph augmenter optimizes Tm(G) to disrupt redundant information while ensuring community invariance from the original graph. (2) The GNN encoder f\u03b8(\u00b7) and Readout r\u03d5(\u00b7) maximize the mutual information between two augmented graphs by contrastive loss. We use edge dropping and feature masking as an instantiation. Softmax, which can be formulated as: \u2206EP ij (\u03f5) = Softmax((log(P EP ij ) + \u03f5)/\u03c4), (3) P EP ij = Sigmoid(MLPs(Concat(ei, ej))), (4) where P EP ij controls whether to flip edge Aij, MLPs are multilayer perceptions, ei is the i-th node representation, Concat(\u00b7, \u00b7) denotes the concatenation operation, \u03f5 \u223c Gumbel(0, 1),1 and \u03c4 is the temperature factor that controls the approximation degree for the discrete categorical distributions. \u03c4 > 0 results in a well-defined gradient \u2202\u2206EP ij /\u2202P EP ij , facilitating efficient optimization. Node dropping can be considered as a type of edge dropping, i.e., removing all edges connected to this node. Thus, node dropping can be formulated similarly to Eq.(2) as: tND 1 (G) = A + (\u2212A) \u25e6\u2206ND. 
(5) where \u2206ND can be calculated by: \u2206ND = (\u03a8ND \u00b7 1\u22a4 n + (\u03a8ND \u00b7 1\u22a4 n )\u22a4)/2, (6) \u03a8ND i (\u03f5) = Softmax((log(P ND i ) + \u03f5)/\u03c4), (7) P ND i = Sigmoid(MLPs(ei)). (8) We combine t(EP,ND) 1 (G) as topology augmentation t1(G). CI-based Topology Augmentation. Inspired by the findings in Sec 4.1, we aim to optimize \u2206by simultaneously maximizing graph disruption while minimizing community changes for learnable topology augmentation in Eqs.(2,5). Based on the matrix perturbation theory (Bojchevski & G\u00a8 unnemann, 2019), we have the following definition. Definition 1. Let \u03bbk denote the k-th eigenvalue of the spectral decomposition of Lap(A) = U\u039bU\u22a4. For a single edge 1The Gumbel(0, 1) distribution can be sampled by calculating \u03f5 = \u2212log(\u2212log(u)) with u \u223cUniform(0, 1) (Jang et al., 2017). perturbation Aij, it induces absolute changes in eigenvalues given by Pn k=1 |\u2206\u03bbk| = Pn k=1 |(Uik \u2212Ujk)2 + (\u03bbk \u2212 1)(U 2 ik + U 2 jk)|, and \u2206\u03bbk denotes the k-th spectral change. When optimizing Pn k=1 |\u2206\u03bbk| in Definition 1, we argue that maintaining community invariance requires a categorized discussion on edge adding and edge dropping. Theorem 1. The absolute spectral changes Pn k=1 |\u2206\u03bbk| are upper bounded by \u2225Ui\u00b7 \u2212Uj\u00b7\u22252 2+Pn k=1 |\u03bbk\u22121| and lower bounded by \u2225Ui\u00b7 \u2212Uj\u00b7\u22252 2 \u2212Pn k=1 |\u03bbk \u22121|, respectively. Here, Ui\u00b7 represents the i-th row vector of U, denoting the i-th node embedding in the spectral space. According to Theorem 1, maximizing spectral changes equates to maximizing their upper bound, i.e., flipping several edges between nodes with largest distances in spectral space. Shi & Malik (1997) states that node representations with larger distances always belong to different communities. Thus, we can maximize spectral changes during the edge dropping to preserve community invariance. However, we cannot conduct edge adding since adding edges between clusters always disrupts communities (Zhu et al., 2023). Conversely, minimizing spectral changes equates to minimizing their lower bound, i.e., flipping several edges between nodes with lowest distances, where nodes with lower distances always belong to the same cluster. Thus, we can minimize spectral changes during the edge adding, instead of edge dropping, since dropping edges within one cluster will disrupt communities (Zhu et al., 2023). We formulate the CI constraint for edge perturbation by jointly optimizing edge dropping and adding as follows: max \u2206ED, \u2206EA \u2208S LEP(\u2206EP) = LED(\u2206ED) \u2212LEA(\u2206EA), (9) LED(\u2206ED) = \u2225eig(Lap(A \u2212A \u25e6\u2206ED)) \u2212eig(Lap(A))\u22252 2, LEA(\u2206EA) = \u2225eig(Lap(A + Ac \u25e6\u2206EA)) \u2212eig(Lap(A))\u22252 2, Community-Invariant Graph Contrastive Learning where S = {S|S \u2208[0, 1]n\u00d7n, \u2225S\u22251 \u2264\u03c8}, \u03c8 controls the perturbation strength, and L(\u2206) represents graph spectral changes under different augmented operations. Node dropping can be considered as one type of ED, which can be constrained by community invariance by: max \u2206ND\u2208S \u2225eig(Lap(A \u2212A \u25e6\u2206ND)) \u2212eig(Lap(A))\u22252 2. (10) By jointly optimizing Eqs.(9,10), we can maximize topology perturbation while maintaining community invariance. Feature Augmentation. 
Similar to topology augmentation, we define T2(G) as a Bernoulli distribution Bern(P FM ij ) for each feature Xij. Then, we can sample feature masking matrix \u2206FM ij \u223cBern(P FM ij ), indicating whether to mask the corresponding feature. A sampled augmented graph by feature masking can be formulated as: tFM 2 (G) = X + (\u2212X) \u25e6\u2206FM. (11) CI-based Feature Augmentation. Different from topology augmentation, X \u2208Rn\u00d7d is an asymmetric matrix lacking spectral decomposition. Theorem 1 is not applicable to feature augmentation. Moreover, discerning which feature has the least impact on community changes is challenging. Inspired by the co-clustering of bipartite graph (Nie et al., 2017), which can determine the importance of features for node clustering, we construct the feature bipartite graph as: e X = \u0014 0 X X\u22a4 0 \u0015 (n+d)\u00d7(n+d), (12) where the first n rows of e X denote the original nodes, while the subsequent d rows serve to represent features as feature nodes. Then, e Xij, where i \u2208{1, \u00b7 \u00b7 \u00b7 n} and j \u2208{(n + 1), \u00b7 \u00b7 \u00b7 , (n + d)}, can be interpreted as the linking weight between i-th node with j-th feature. Theorem 2. Let the singular value decomposition of the feature matrix X be denoted as svd(D\u22121/2 u XD\u22121/2 v ) = U\u039b1V\u22a4where Du and Dv are the degree matrices of X and X\u22a4, and U and V represent the left and right singular vectors, respectively. Then, eig(Lap( e X)) = F\u039b2F\u22a4where the k-th smallest eigenvector F\u00b7k is equal to the concatenation of k-th largest singular vectors: F\u00b7k = [U\u00b7k; V\u00b7k]. According to Theorem 2 and findings from Nie et al. (2017), if we can maintain community invariance of e X, community structure will also be preserved in X. Hence, we investigate the community-invariant constraint in e X. Theorem 3. Let \u03bbk denote the k-th smallest eigenvalue of \u039b2. When masking one feature e Xij, the induced spectral changes are given by e X as Pn+d k=1 |\u2206\u03bbk| = Pn+d k=1 |(Fik \u2212 Fjk)2 + (\u03bbk \u22121)(F 2 ik + F 2 jk)|, which are upper bounded by \u2225Fi\u00b7 \u2212Fj\u00b7\u22252 2 + Pn+d k=1 |1 \u2212\u03bbk| where i \u2208{1, .., n} and j \u2208{(n + 1), .., (n + d)}, Fi\u00b7 is the i-th row vector of F. Based on Theorem 3, maximizing spectral changes in e X under a constrained number of perturbation equals finding several largest embedding distances between nodes and feature nodes, i.e., these features have the least impact on community changes for these nodes (Zhang et al., 2023). Thus, CI constraint for feature augmentation LFM(\u2206FM) can be formulated as follows: max \u2206FM\u2208S \u2225eig(Lap( e X \u2212e X \u25e6\u2206FM) \u2212eig(Lap( e X)))\u22252 2. (13) Finally, we parameterize \u2206FM and ensure its differentiability in the feature augmentation, formulated as follows: \u2206FM ij (\u03f5) = Softmax((log(P FM ij ) + \u03f5)/\u03c4), (14) P FM ij = Sigmoid(MLPs(Concat( e U\u00b7i, e V\u00b7j))). CI-GCL. As shown in Figure 2, we instantiate a graph contrastive learning framework with the proposed communityinvariant constraint, namely CI-GCL. Specifically, we first conduct spectral decomposition on the adjacent matrix and feature bipartite matrix to obtain node and feature representations. Then, we consider these node and feature representations as input of MLPs for both topology and feature augmentation, where we randomly initialize the parameters of MLPs. 
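To illustrate what the augmenter optimizes, the following sketch implements the edge-dropping term of the spectral-change objective in Eqs.(9,10) with a directly parameterized relaxed mask (assuming PyTorch). It is a simplified illustration: the Gumbel-Softmax parameterization, the projection onto the budget set S (replaced here by a plain l1 penalty), and the feature-side objective of Eq.(13) are omitted.

import torch

def normalized_laplacian(A):
    d_inv_sqrt = torch.diag(A.sum(dim=1).clamp(min=1e-8) ** -0.5)
    return torch.eye(A.shape[0]) - d_inv_sqrt @ A @ d_inv_sqrt

def spectral_change(A, delta):
    # || eig(Lap(A - A o Delta)) - eig(Lap(A)) ||_2^2 for a relaxed edge-dropping mask in (0,1)^{n x n}
    ev = torch.linalg.eigvalsh(normalized_laplacian(A))
    ev_aug = torch.linalg.eigvalsh(normalized_laplacian(A - A * delta))
    return ((ev_aug - ev) ** 2).sum()

# Toy graph: two triangles joined by one bridge edge (2, 3).
n = 6
A = torch.zeros(n, n)
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

# In CI-GCL the mask comes from MLPs with Gumbel-Softmax; here it is parameterized
# directly so that the gradient-ascent step on the spectral change is easy to follow.
logits = torch.randn(n, n, requires_grad=True)
optimizer = torch.optim.Adam([logits], lr=0.1)
for _ in range(100):
    delta = torch.sigmoid(logits)
    delta = (delta + delta.T) / 2 * A            # symmetric mask restricted to existing edges
    loss = -spectral_change(A, delta) + 0.1 * delta.sum()  # ascent on spectral change, l1 budget penalty
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

with torch.no_grad():
    final_mask = (torch.sigmoid(logits) + torch.sigmoid(logits).T) / 2 * A
# Per the argument following Theorem 1, the bridge edge is expected to receive the
# largest learned drop probability: removing it changes the spectrum most while
# leaving the two communities intact.
print(final_mask)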
For each iteration of contrastive learning, we sample two augmented graphs by topology augmentation and feature augmentation. The augmented graphs are then fed into a GCN encoder f\u03b8(\u00b7), which outputs two sets of node representations. A readout pooling function r\u03d5(\u00b7) is applied to aggregate and transform the node representations and obtain graph representations z(1), z(2). Following GraphCL (You et al., 2020), given training graphs G, we use contrastive objective LGCL, which can be defined as: min \u03b8,\u03d5 LGCL (t1 (G) , t2 (G) , \u03b8, \u03d5) (15) = \u22121 |G| |G| X n=1 log exp(sim(z(1) n , z(2) n /\u03c42)) P|G| n\u2032=1,n\u2032\u0338=n exp(sim(z(1) n , z(2) n\u2032 )/\u03c42) ! , where \u03c42 is the temperature parameter, and we conduct minibatch optimization for Eq.(15) in our study. 4.3. Optimization and Scalability Optimization. Eqs.(9,10,13) are jointly optimized via projected gradient descent. Taking \u2206FM in Eq.(13) as an example, we can update the parameters \u2206FM as: \u2206FM t = PS \u0010 \u2206FM (t\u22121) \u2212\u03b7t\u2207LFM \u0010 \u2206FM (t\u22121) \u0011\u0011 , (16) where PS(\u2206) = arg minS\u2208S \u2225S \u2212\u2206\u22252 F is defined as one projection operation at \u2206over the constraint set S and \u03b7t > 0 is the learning rate for the t-th updating step. The gradient Community-Invariant Graph Contrastive Learning Table 2. Unsupervised representation learning classification accuracy (%) on TU Datasets. Bold denotes the best performance, and underline represents the second best performance. \u2745marks the reproduced results of the corresponding baselines by us. Method NCI1 \u2191 PROTEINS \u2191 DD \u2191 MUTAG \u2191COLLAB \u2191 RDT-B \u2191 RDT-M5K \u2191IMDB-B \u2191 Avg. \u2191 InfoGraph 76.20\u00b11.0 74.44\u00b10.3 72.85\u00b11.7 89.01\u00b11.1 70.65\u00b11.1 82.50\u00b11.4 53.46\u00b11.0 73.03\u00b10.8 74.02 GraphCL 77.87\u00b10.4 74.39\u00b10.4 78.62\u00b10.4 86.80\u00b11.3 71.36\u00b11.1 89.53\u00b10.8 55.99\u00b10.3 71.14\u00b10.4 75.71 MVGRL 76.64\u00b10.3 74.02\u00b10.3 75.20\u00b10.4 75.40\u00b17.8 73.10\u00b10.6 82.00\u00b11.1 51.87\u00b10.6 63.60\u00b14.2 71.48 JOAO 78.07\u00b10.4 74.55\u00b10.4 77.32\u00b10.5 87.35\u00b11.0 69.50\u00b10.3 85.29\u00b11.4 55.74\u00b10.6 70.21\u00b13.0 74.75 SEGA 79.00\u00b10.7 76.01\u00b10.4 78.76\u00b10.6 90.21\u00b10.7 74.12\u00b10.5 90.21\u00b10.7 56.13\u00b10.3 73.58\u00b10.4 77.25 GCS\u2745 77.18\u00b10.3 74.04\u00b10.4 76.28\u00b10.3 88.19\u00b10.9 74.00\u00b10.4 86.50\u00b10.3 56.30\u00b10.3 72.90\u00b10.5 75.64 GCL-SPAN\u2745 75.43\u00b10.4 75.78\u00b10.4 78.78\u00b10.5 85.00\u00b10.8 71.40\u00b10.5 86.50\u00b10.1 54.10\u00b10.5 66.00\u00b10.7 74.12 AD-GCL\u2745 73.38\u00b10.5 73.59\u00b10.7 75.10\u00b10.4 89.70\u00b11.0 72.50\u00b10.6 85.52\u00b10.8 54.91\u00b10.4 71.50\u00b10.6 74.53 AutoGCL\u2745 78.32\u00b10.5 69.73\u00b10.4 75.75\u00b10.6 85.15\u00b11.1 71.40\u00b10.7 86.60\u00b11.5 55.71\u00b10.2 72.00\u00b10.4 74.33 CI+AD-GCL 74.35\u00b10.5 74.66\u00b10.6 76.20\u00b10.4 89.88\u00b10.7 73.94\u00b10.3 87.80\u00b11.2 54.75\u00b10.6 72.10\u00b10.3 75.46 CI+AutoGCL 78.47\u00b10.7 70.81\u00b10.5 76.53\u00b10.6 86.73\u00b11.0 72.24\u00b10.9 87.50\u00b11.4 55.97\u00b10.2 72.50\u00b10.3 75.09 CI-GCL 80.50\u00b10.5 76.50\u00b10.1 79.63\u00b10.3 89.67\u00b10.9 74.40\u00b10.6 90.80\u00b10.5 56.57\u00b10.3 73.85\u00b10.8 77.74 \u2207LFM(\u2206FM (t\u22121)) can be calculated via a chain rule, with a closed-form gradient over eigenvalues, i.e., for Lnorm = Lap(A + C 
\u25e6\u2206FM), the derivatives of its k-th eigenvalue \u03bbk is \u2202\u03bbk/\u2202Lnorm = U\u00b7kU\u22a4 \u00b7k (Rogers, 1970). Scalability. Due to the eigendecomposition, the time complexity of optimizing Eqs.(9,10) is O(Mn3) with M representing the number of iterations, due to the eigendecomposition, which is prohibitively expensive for large graphs. To reduce the computational cost, instead of conducting eigendecomposition on the full graph spectrum, we employ selective eigendecomposition on the K lowest eigenvalues via the Lanczos Algorithm (Parlett & Scott, 1979), which can reduce time complexity to O(Mn2K). Similarly, we can use Truncated SVD (Halko et al., 2011) to obtain the K highest eigenvalues of X and then concatenate them as approximate for eigendecomposition of e X, thereby reducing time complexity from O(M(n+d)3) to O(Mn log K). 5. Experiments In our general experimental settings, we use GIN (Xu et al., 2019) as the base encoder for all baselines to ensure a fair comparison. Each experiment is repeated 10 times with different random seeds, and we report the mean and standard derivation of the corresponding evaluation metrics. We select several best-performing baselines for comparison, including classic GCL methods, such as MVGRL (Hassani & Ahmadi, 2020), InfoGraph (Sun et al., 2020), GraphCL, and JOAO, as well as GCL methods with learnable graph augmentation, such as SEGA (Wu et al., 2023b), GCS (Wei et al., 2023), GCL-SPAN, AD-GCL, and AutoGCL. 5.1. Quantitative Evaluation 5.1.1. COMPARISON WITH STATE-OF-THE-ARTS To comprehensively demonstrate the effectiveness and generalizability of CI-GCL, following previous studies (Yin et al., 2022), we perform evaluations for graph classification and regression under three different experimental settings: unsupervised, semi-supervised, and transfer learning. Table 3. RMSE for unsupervised graph regression. Method molesol \u2193 mollipo \u2193 molfreesolv \u2193Avg. \u2193 InfoGraph 1.344\u00b10.18 1.005\u00b10.02 10.00\u00b14.82 4.118 GraphCL 1.272\u00b10.09 0.910\u00b10.02 7.679\u00b12.75 3.287 MVGRL 1.433\u00b10.15 0.962\u00b10.04 9.024\u00b11.98 3.806 JOAO 1.285\u00b10.12 0.865\u00b10.03 5.131\u00b10.72 2.427 GCL-SPAN 1.218\u00b10.05 0.802\u00b10.02 4.531\u00b10.46 2.184 AD-GCL 1.217\u00b10.09 0.842\u00b10.03 5.150\u00b10.62 2.403 CI-GCL 1.130\u00b10.13 0.816\u00b10.03 2.873\u00b10.32 1.606 Unsupervised Learning. We first train graph encoders (i.e., GIN) separately for each of the GCL baselines using unlabeled data. Then, we fix parameters of these models and train an SVM classifier using labeled data. We use TU datasets (Morris et al., 2020) and OGB datasets (Hu et al., 2020a) to evaluate graph classification and regression, respectively. We adopt the provided data split for the OGB datasets and use 10-fold cross-validation for the TU datasets as it lacks such a split. Table 2 shows the performance on graph classification and Table 3 draws the performance on graph regression. In these tables, CI-GCL achieves the best results on 9 datasets and competitive results on the MUTAG and mollipo datasets. Specifically, CI-GCL achieves the highest averaged accuracy in graph classification (77.74%) and the lowest RMSE in graph regression (1.606), surpassing SOTA classification methods, such as SEGA (77.25%), GraphCL (75.71%), as well as SOTA regression methods, such as GCL-SPAN (2.184) and AD-GCL (2.403). Semi-Supervised Learning. 
Following GraphCL, we employ 10-fold cross validation on each TU datasets using ResGCN (Pei et al., 2021) as the classifier. For each fold, different from the unsupervised learning setting, we only use 10% as labeled training data and 10% as labeled testing data for graph classification. As shown in Table 4, CI-GCL achieves highest averaged accuracy (74.0%) compared with Community-Invariant Graph Contrastive Learning Table 4. Accuracy (%) for 10% labeled semi-supervised graph classification. Method NCI1 \u2191 PROTEINS \u2191 DD \u2191 COLLAB \u2191 RDT-B \u2191 RDT-M5K \u2191 GITHUB \u2191 Avg. \u2191 No Pre-train 73.72\u00b10.2 70.40\u00b11.5 73.56\u00b10.4 73.71\u00b10.3 86.63\u00b10.3 51.33\u00b10.4 60.87\u00b10.2 70.0 GraphCL 74.63\u00b10.3 74.17\u00b10.3 76.17\u00b11.4 74.23\u00b10.2 89.11\u00b10.2 52.55\u00b10.5 65.81\u00b10.8 72.3 JOAO 74.48\u00b10.3 72.13\u00b10.9 75.69\u00b10.7 75.30\u00b10.3 88.14\u00b10.3 52.83\u00b10.5 65.00\u00b10.3 71.9 SEGA 75.09\u00b10.2 74.65\u00b10.5 76.33\u00b10.4 75.18\u00b10.2 89.40\u00b10.2 53.73\u00b10.3 66.01\u00b10.7 72.9 AD-GCL 75.18\u00b10.4 73.96\u00b10.5 77.91\u00b10.7 75.82\u00b10.3 90.10\u00b10.2 53.49\u00b10.3 65.89\u00b10.6 73.1 AutoGCL 67.81\u00b11.6 75.03\u00b13.5 77.50\u00b14.4 77.16\u00b11.5 79.20\u00b13.5 49.91\u00b12.7 58.91\u00b11.5 69.3 CI-GCL 75.86\u00b10.8 76.28\u00b10.3 78.01\u00b10.9 77.04\u00b11.5 90.29\u00b11.2 54.47\u00b10.7 66.36\u00b10.8 74.0 Table 5. ROC-AUC (%) for graph classification under transfer Learning settings. Pre-Train ZINC 2M PPI-306K Fine-Tune BBBP \u2191 Tox21 \u2191 ToxCast \u2191 SIDER \u2191 ClinTox \u2191 MUV \u2191 HIV \u2191 BACE \u2191 PPI \u2191 Avg. \u2191 No Pre-train 65.8\u00b14.5 74.0\u00b10.8 63.4\u00b10.6 57.3\u00b11.6 58.0\u00b14.4 71.8\u00b12.5 75.3\u00b11.9 70.1\u00b15.4 64.8\u00b11.0 66.7 MVGRL 69.0\u00b10.5 74.5\u00b10.6 62.6\u00b10.5 62.2\u00b10.6 77.8\u00b12.2 73.3\u00b11.4 77.1\u00b10.6 77.2\u00b11.0 68.7\u00b10.7 71.4 SEGA 71.9\u00b11.1 76.7\u00b10.4 65.2\u00b10.9 63.7\u00b10.3 85.0\u00b10.9 76.6\u00b12.5 77.6\u00b11.4 77.1\u00b10.5 68.7\u00b10.5 73.6 GCS\u2745 72.5\u00b10.5 74.4\u00b10.4 64.4\u00b10.2 61.9\u00b10.4 66.7\u00b11.9 77.3\u00b11.7 78.7\u00b11.4 82.3\u00b10.3 70.3\u00b10.5 72.1 GCL-SPAN 70.0\u00b10.7 78.0\u00b10.5 64.2\u00b10.4 64.7\u00b10.5 80.7\u00b12.1 73.8\u00b10.9 77.8\u00b10.6 79.9\u00b10.7 70.0\u00b10.8 73.2 AD-GCL\u2745 67.4\u00b11.0 74.3\u00b10.7 63.5\u00b10.7 60.8\u00b10.9 58.6\u00b13.4 75.4\u00b11.5 75.9\u00b10.9 79.0\u00b10.8 64.2\u00b11.2 68.7 AutoGCL\u2745 72.0\u00b10.6 75.5\u00b10.3 63.4\u00b10.4 62.5\u00b10.6 79.9\u00b13.3 75.8\u00b11.3 77.4\u00b10.6 76.7\u00b11.1 70.1\u00b10.8 72.5 CI+AD-GCL 68.4\u00b11.1 74.5\u00b10.9 64.0\u00b10.8 61.4\u00b10.9 59.8\u00b13.2 76.5\u00b11.7 77.0\u00b10.9 80.0\u00b10.8 65.3\u00b11.1 69.6 CI+AutoGCL 73.9\u00b10.7 76.4\u00b10.3 63.8\u00b10.3 63.9\u00b10.6 80.9\u00b13.1 76.3\u00b11.3 78.8\u00b10.7 78.8\u00b11.1 70.9\u00b10.7 73.7 CI-GCL 74.4\u00b11.9 77.3\u00b10.9 65.4\u00b11.5 64.7\u00b10.3 80.5\u00b11.3 76.5\u00b10.9 80.5\u00b11.3 84.4\u00b10.9 72.3\u00b11.2 75.1 SOTA baselines SEGA (72.9%) and AD-GCL (73.1%). Transfer Learning. To show the generalization, we conduct self-supervised pre-training for baselines on the preprocessed ZINC-2M or PPI-306K dataset (Hu et al., 2020b) for 100 epochs and then fine-tune baselines on different downstream biochemical datasets. 
Table 5 shows that CIGCL achieves best results on 6 datasets and comparable performance on the rest datasets with an averaged performance (75.1%), comparing with SOTA baseline SEGA (73.6%). Summary. From the above experimental results, we obtain the following three conclusions: (1) Higher Effectiveness. CI-GCL can achieve the best performance in three different experimental settings, attributed to its unified communityinvariant constraint for graph augmentation. Compared to GraphCL and MVGRL, with similar contrastive objectives, the gain by CI-GCL mainly comes from the CI constraint and learnable graph augmentation procedure. While compared to AD-GCL and AutoGCL, with similar encoders, CI-GCL, guided by community invariance, is clearly more effective than the widely adopted uniformly random augmentation. (2) Better Generalizability. By maximizing spectral changes to minimize community changes, CI-GCL can obtain the encoder with better generalizability and transferability. Since the encoder is pre-trained to ignore the impact of community irrelevant information and mitigate the relationship between such information and downstream labels, solving the overfitting issue. Furthermore, previous studies, such as JOAO and GCL-SPAN, improve generalizability of the GNN encoder on molecule classification by exploring structural information like subgraph. We suggest that the community could be another way to study chemical and biological molecules. (3) Wider Applicability. By combing the CI constraint with AD-GCL and AutoGCL in Table 2 and Table 5, we also see significant improvements on almost all datasets, showing that the CI constraint could be a plugand-play component for any learnable GCL frameworks. Table 6. Ablation study on unsupervised graph classification. Method NCI1 \u2191 PROTEIN \u2191 DD \u2191 MUTAG \u2191Avg. \u2191 w/o TA 78.8\u00b10.6 74.6\u00b10.6 77.8\u00b10.8 86.1\u00b10.9 79.3 w/o FA 79.2\u00b10.4 75.1\u00b10.2 78.1\u00b10.3 86.3\u00b10.9 79.6 w/o CI on TA 79.9\u00b10.6 75.7\u00b10.9 78.8\u00b10.3 87.6\u00b10.5 80.5 w/o CI on FA 80.0\u00b10.3 75.9\u00b11.4 78.7\u00b10.8 87.5\u00b11.4 80.5 w/o CI on ALL 78.6\u00b10.9 74.8\u00b10.9 78.3\u00b10.3 86.5\u00b11.8 79.5 CI-GCL 80.5\u00b10.5 76.5\u00b10.1 79.6\u00b10.3 89.6\u00b10.9 81.5 5.1.2. ABLATION STUDY We conduct an ablation study to evaluate the effectiveness of the proposed CI constraint on topology augmentation (TA) and feature augmentation (FA). We consider the following variants of CI-GCL: (1) w/o TA: remove TA on one branch. (2) w/o FA: remove FA on one branch. (3) w/o CI on TA: remove CI constraint on TA. (4) w/o CI on FA: remove CI constraint on FA. (5) w/o CI on ALL: remove CI constraint on both TA and FA. Experimental results in Table 6 demonstrate that the removal of either component of the method negatively impacts the ability of the graph representation learning to perform well. These results align with our hypothesis that random topology or feature augmentation Community-Invariant Graph Contrastive Learning without CI constraint corrupt community structure, thereby hindering model\u2019s performance in downstream tasks. 5.2. Qualitative Evaluation 5.2.1. ROBUSTNESS AGAINST VARIOUS NOISE To showcase the robustness of CI-GCL, we conduct experiments in the adversarial setting. Following GraphCL, we conduct Random noise attack, with perturbation ratios \u03c3 \u2208{0.05, 0.30}, on topology A and feature X of the input graph, respectively. 
Specifically, for the topology attack, we randomly flip \u03c3 \u00d7 m edges with m as the total number of edges. For the feature attack, we randomly select \u03c3 \u00d7 d features to add Gaussian noise with d as the total number of features. Baselines without designed feature augmentation are set with random feature masking. Table 3 reports the 0.9 0.8 0.7 0.6 0.05 0.10 0.15 0.20 0.25 0.30 0.90 0.85 0.80 0.75 0.05 0.10 0.15 0.20 0.25 0.30 ACC (A) Topology attack on MUTAG CI-GCL AD-GCL AutoGCL GCL-SPAN GCS 0.8 0.7 0.6 0.05 0.10 0.15 0.20 0.25 0.30 0.70 0.65 0.60 0.05 0.10 0.15 0.20 0.25 0.30 ACC (B) Feature attack on MUTAG (C) Topology attack on NCI1 (D) Feature attack on NCI1 Figure 3. Accuracy (%) under noise attack on two datasets. graph classification performance. From Table 3, we have the following three findings. (1) CI-GCL outperforms four best-performing GCL methods in both topology and feature attack, demonstrating its strong robustness. (2) CI-GCL and GCL-SPAN are more robustness than other baselines in the topology attack, showing that preserving high-level graph structure can improve robustness than random graph augmentation. While CI-GCL can better focus on community invariance to outperform GCL-SPAN. (3) CI-GCL is more robust than other baselines in the feature attack since we also apply the uniformed CI constraint to feature augmentation. 5.2.2. EFFECTIVENESS IN COMMUNITY PRESERVATION To explore the ability of community preservation, we draw community changes in Table 7, where community changes are defined as the averaged number of changes of community labels of nodes before and after graph augmentation by spectral clustering. We observe that CI-GCL can effectively preserve community structure due to the proposed CI constraint. Furthermore, refer to Table 2, we can find that methods with larger community disruption, such as (A) Original Graph (B) Topology Aug. of CI-GCL (C) Topology Aug. of GraphCL (D) Feature Aug. of CI-GCL 0.2 0.4 0.6 0.8 0.2 0.4 0.6 0.8 0.2 0.4 0.6 0.8 Features Community Figure 4. A case study of TA and FA of GraphCL and CI-GCL. (B-C) share the same color map and Green lines are edge dropping and Red lines are edge adding. Table 7. Community changes (%) in unsupervised learning. Method NCI1 \u2193 PROTEINS \u2193 MUTAG \u2193 IMDB-B \u2193 Avg. \u2193 GraphCL 30.6 25.2 29.9 29.2 28.7 GCS 27.6 30.9 26.5 32.5 29.3 GCL-SPAN 15.1 11.2 14.5 19.7 15.1 AD-GCL 7.0 4.8 5.6 18.0 8.8 AutoGCL 34.2 31.0 30.3 33.4 32.2 CI-GCL 5.9 4.3 3.3 13.8 6.8 GraphCL and AutoGCL, underperform others with smaller community disruption. We also provide a visualization of CI-GCL on widely used synthetic graphs with 1,000 samples (Kim & Oh, 2021), that is suited for analysis since it possesses clear community structure (Kim & Oh, 2021). We train our models in an unsupervised learning manner on this dataset and randomly select one example for visualization. In Figure 4(A-C), CI-GCL effectively preserve community structure by removing edges between clusters (Green lines) and add edges within each cluster (Red lines), while GraphCL destroys communities by randomly add and remove edges. In Figure 4(D), with xand y-axis represent nodes and features, respectively, CI-GCL can effectively maintain important features for community invariance. 6. Conclusion In this work, we aimed to propose an unified constraint that can be applied to both topology and feature augmentation, to ensure community invariance and benefit for downstream tasks. 
To achieve this goal, we searched for the augmentation scheme that would maximize spectral changes of the input graph\u2019s topology and features, which can also minimize community changes. Our proposed community-invariant constraint can be paired with various GCL frameworks. We plan to explore more high-level graph information as constraints for learnable graph augmentation and apply our Community-Invariant Graph Contrastive Learning framework to many real-world applications in the future. Impact Statements In this section, we elaborate on the broader impacts of our work from the following two aspects. (1) Learnable Graph Augmentations. With the rapid development of GCL, learnable graph augmentation has become a significant research topic in the machine-learning community. Compared to current learnable graph augmentation methods, our work introduces control over the augmentation scheme in joint learning settings. Considering the CI constraint as a kind of expert knowledge, we can perceive our work as establishing a connection between expert knowledge-guided augmentations and learnable augmentations through the design of specific constraints. (2) Community and Spectrum. Despite significant advances in GCL, theoretical foundations regarding the relations between community preservation and spectrum design remain lacking. Our work highlights the significant potential of graph spectrum and community preservation in GCL, which may assist others in comprehending the graph spectrum. Moreover, we do not anticipate any direct negative impacts on society from our findings."
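As a reading aid for Eq.(15) in the CI-GCL main_content above, the following is a minimal PyTorch sketch of the GraphCL-style NT-Xent objective over a batch of graph representations. It is not the authors' code; the names graphcl_nt_xent, z1, z2 and the default temperature are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def graphcl_nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.2) -> torch.Tensor:
    """NT-Xent-style objective between two augmented views, as in Eq.(15).

    z1, z2: [batch, dim] graph representations from the two augmented graphs.
    Positive pairs are (z1[n], z2[n]); negatives are (z1[n], z2[m]) with m != n.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / tau                      # cosine similarities over the batch
    pos = torch.diag(sim)                        # similarity of each positive pair
    # exclude the positive from the denominator, matching the n' != n sum in Eq.(15)
    diag = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    neg = sim.masked_fill(diag, float("-inf"))
    return -(pos - torch.logsumexp(neg, dim=1)).mean()

if __name__ == "__main__":
    # toy usage with random stand-ins for graph representations
    z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
    print(graphcl_nt_xent(z1, z2))
```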
17
+ }
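The scalability paragraph in the same file reduces the cost of full eigendecomposition by computing only the K lowest eigenvalues with the Lanczos algorithm. Below is a rough sketch of that idea using SciPy's ARPACK-backed eigsh on a normalized Laplacian; the graph construction, the shift-invert sigma, and the function name are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import eigsh

def k_lowest_spectrum(adj, k=32):
    """Approximate the K lowest eigenpairs of the normalized graph Laplacian.

    eigsh wraps ARPACK's implicitly restarted Lanczos iteration, so only k
    eigenpairs are computed instead of a full O(n^3) eigendecomposition.
    Shift-invert around a small negative sigma targets the eigenvalues
    closest to zero while keeping (L - sigma*I) positive definite.
    """
    lap = sp.csc_matrix(laplacian(adj, normed=True))
    vals, vecs = eigsh(lap, k=k, sigma=-1e-2, which="LM")
    return vals, vecs

if __name__ == "__main__":
    # Toy example: an unweighted path graph on 200 nodes.
    n = 200
    off = np.ones(n - 1)
    adj = sp.csr_matrix(sp.diags([off, off], offsets=[-1, 1]))
    vals, _ = k_lowest_spectrum(adj, k=8)
    print(np.round(vals, 4))  # starts near 0, the trivial Laplacian eigenvalue
```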
intro_8K/test_introduction_short_2405.01359v1.json ADDED
@@ -0,0 +1,17 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01359v1",
3
+ "title": "GAIA: A General AI Assistant for Intelligent Accelerator Operations",
4
+ "abstract": "Large-scale machines like particle accelerators are usually run by a team of\nexperienced operators. In case of a particle accelerator, these operators\npossess suitable background knowledge on both accelerator physics and the\ntechnology comprising the machine. Due to the complexity of the machine,\nparticular subsystems of the machine are taken care of by experts, who the\noperators can turn to. In this work the reasoning and action (ReAct) prompting\nparadigm is used to couple an open-weights large language model (LLM) with a\nhigh-level machine control system framework and other tools, e.g. the\nelectronic logbook or machine design documentation. By doing so, a multi-expert\nretrieval augmented generation (RAG) system is implemented, which assists\noperators in knowledge retrieval tasks, interacts with the machine directly if\nneeded, or writes high level control system scripts. This consolidation of\nexpert knowledge and machine interaction can simplify and speed up machine\noperation tasks for both new and experienced human operators.",
5
+ "authors": "Frank Mayet",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "cs.CL",
9
+ "cats": [
10
+ "cs.CL",
11
+ "physics.acc-ph"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "Retrieval AND Augmented AND Generation AND RAG",
15
+ "gt": "Particle accelerators are complex machines that consist of a large number of subsystems. Although many processes are automated and feedback systems are in place, experiments and machine supervision need to be performed by a group of operators. These operators usually have an accelerator physics background and mostly know how the technology works. They especially know how to setup and tune the machine parameters for certain working points and experiments using high-level graphical user interfaces, which are connected to low-level machine control software. Due to the complexity of the machine, some subsystems of the machine are taken care of by experts, who the operators can turn to. This work shows that it is possible to support the day-to-day operation of a complex machine like a particle accelerator using a large language model (LLM), an object-oriented high-level machine control system framework, as well as a number of interfaces to knowledge bases such as the electronic logbook. The system is able to assist the operators on many levels, e.g. by producing Python scripts, which when executed perform a task defined by an input prompt to the LLM. To this end, the reasoning and action prompting paradigm (ReAct) [Yao et al., 2023] is implemented. This way a multi-expert system is realized, mimicking the real world, where the complex machine is operated by many subsystem experts. arXiv:2405.01359v1 [cs.CL] 2 May 2024 GAIA TECHNICAL REPORT",
16
+ "main_content": "Modern accelerators are routinely operated using sophisticated low-level control systems, such as EPICS1, TANGO2, TINE3, or DOOCS4. These control systems allow for high frequency machine and beam diagnostics, as well as control, which is an essential requirement for highly available, brilliant beams for the users. In addition, control systems are often paired with high speed data acquisitions systems (DAQ), capable of recording pulse synchronized data at the machine repetition rate. In order to access the machine, or derived data (via middle layer servers), users can usually leverage libraries and wrappers for common programming and scripting languages such as C, C++, Java, or Python. Easy to use toolkits are sometimes provided to create graphical user interfaces (GUIs) for simple interaction with control system servers (e.g. parameter scans), or data visualization. On many machines there is the need to perform experiments, which are more complex than simple oneor twodimensional parameter scans. In this case users and operators have to either perform the tasks manually, or to write measurement scripts using e.g. Python. In this work a Python-based scripting toolkit called doocs_generic_experiment is used, which simplifies the task of writing a measurement script substantially. The toolkit follows an object-oriented approach and is based on a set of so called actions, which resemble real-world sub-tasks an operator would have to perform if the experiment would be conducted manually. These actions can be grouped into procedures, which can reach arbitrary complexity. In addition of performing a set of action in series, procedures can also run multiple actions in parallel to save time. The final experiment can then consit of either one, or many of these procedures. Due to the easy to unterstand concept of stringing together common actions and procedures, the toolkit enables rapid prototyping of complex experiments and enables fulland semi-automation of experimental campaigns, which would otherwise take too long to perform manually. In addition, the concept of encapsulating commonly performed actions adds a safety layer as the room for possible mistakes is reduced. Figure 1 shows a selection of currently available actions and procedures. Action ReadAction WriteAction RichWriteAction RichReadAction IfElseAction LogAction UserInputAction PauseAction WaitForStabilityAction ParallelReadAction WaitForChangeAction SubscriptionAction ProcessingAction AlarmConditionAction NeuralNetAction LaserSwitchAction ScreenAction PulseShotModeAction BeamRecenteringAction LaserApertureAction MagnetDegaussAction MagnetCycleAction PrintToLogbookAction PolariXOnBeamSwitchAction Procedure ParallelProcedure DOOCSManager DOOCSAddress SingleDataProcedure ScanProcedure ImageScanProcedure OutputManager MachineStateProcedure PhaseAdvanceScanProcedure FourDimTomographyProcedure FiveDimTomographyProcedure Figure 1: A selection of procedures and actions available via the doocs_generic_experiment Python module. 1https://epics.anl.gov, last access 2024-04-26 2https://www.tango-controls.org, last access 2024-04-26 3https://tine.desy.de, last access 2024-04-26 4https://doocs.desy.de, last access 2024-04-26 2 GAIA TECHNICAL REPORT 3 Multi-Expert System \u2013 GAIA In order to realize the operations assistant, the open-weights LLM Mixtral 8x7B Instruct v0.1 (8 bit quantization) [Jiang et al., 2024] is used as the main model. Mixtral 8x7B supports a context size of 32k tokens. 
This is ideal for reasoning and chain of thought (CoT) prompting [Wei et al., 2022, 2023, Yao et al., 2023]. The model runs locally on a single Nvidia A100 80GB GPU in the DESY Maxwell cluster and is served by Ollama5 (model tag: mixtral:8x7b-instruct-v0.1-q8_0). Note that the Maxwell node does not need to directly interface with the accelerator control system. All interaction and knowledge retrieval is performed by a client application, which runs on a computer, which is part of the control system network. This client can then interface with control system servers, file servers, the Mattermost messaging system, etc., if needed. The client is called General AI Assistant (GAIA). One of the main concepts of the ReAct prompting scheme is tool use. As the agent engages in its inner monologue, or chain of thought, it will eventually reach a point where it needs to either perform an action, or retrieve information. This is analogous to an operator deciding to either use control system tools to control or diagnose certain parts of the machine, or to turn to an expert to retrieve information. Technically, the agent, which is implemented using LangChain6, interrupts the LLM output stream and injects the result of a particular tool use. The agent might, for example, come to the conclusion that it needs to retrieve the current value of a particular machine parameter. In this case, within its chain of thought, it may suggest using doocs_generic_experiment based tools to perform this particular task. Another example would be the retrieval of information from the electronic logbook. One advantage of the concept of tool use is that it helps to circumvent the LLM token limit. In many cases knowledge retrieval may involve classical retrieval augmented generation (RAG) [Lewis et al., 2021] implementations, which use their own disjunct LLM context. This way, only the result of the knowledge retrieval process is injected into the agents context window. In addition, this allows the use of fine-tuned RAG systems potentially based on other LLMs. If the task is to devise a certain experimental procedure at a particle accelerator, at least two tools/experts may be needed. The first expert may provide knowledge about the layout of the beam line, as well as how to utilize the specific elements for a given task. The second expert may on the other hand be proficient in producing doocs_generic_experiment code and how to interface with the accelerator control system. The agent is able to combine the output of the two tools, yielding a suitable Python program from a prompt, such as \"I want to operate the accelerator at maximum energy gain\". Figure 2 shows a selection of tools used in the current GAIA implementation. In the following section a number of example prompts to GAIA are shown. Note that the chain of thought, which is shown in the examples, is usually not visible to the user. 
\u21e8 Look up info about the accelerator layout \u21e8 Uses an already existing lattice file parser AcceleratorLatticeTool \u21e8 Helps looking up DOOCS addresses \u21e8 Uses simple sequence matching DOOCSAddressHelperTool \u21e8 Summarizes .pptx files from weekly meetings \u21e8 Uses basic prompting and its own context OperationsMeetingSummarizer \u21e8 Gives access to the electronic logbook \u21e8 Uses an existing CLI interface to the logbook LogbookTool \u21e8 Let\u2019s the agent contact experts \u21e8 Uses Mattermost API MattermostChatBotTool \u21e8 Runs Python code \u21e8 \u2026for example generated by DGEHelperTool PythonCodeRunnerTool \u21e8 ReadActionTool, LogbookActionTool, \u2026 \u21e8 MachineStateProcedureTool, \u2026 Wrappers around Actions and Procedures \u21e8 Explains doocs_generic_experiment \u21e8 Can also generate code DGEHelperTool \u21e8 \u2026in order to get accurate calculations CalculatorTool Figure 2: A selection of tools to be used by GAIA. 5http://www.ollama.com, last access 2024-04-26 6http://www.langchain.com, last access 2024-04-26 3 GAIA TECHNICAL REPORT 4 Examples In this section example prompts to GAIA are presented. The results are shown in Figs 3 through 7. Note that some parts of the output are truncated, as indicated by \u2019[...]\u2019. The chain of thought, shown in the examples, is usually not visible to the user. Also, names are redacted if needed. The example cases show the versatility of the system, which includes access to meeting notes (Fig. 3), guidance on how to use doocs_generic_experiment (Fig. 4), information retrieval from the electronic logbook (Fig. 5), machine data access using doocs_generic_experiment and subsequently contacting human experts via Mattermost chat (Fig. 6), and running complex tasks using an ad hoc generated doocs_generic_experiment script (Fig. 7). fmayet@desymac42 ~ % ./gaia \u2014-verbose True Welcome to GAIA (served on max-wng055) What can I do for you? Can you summarize the last operations meeting? > Entering new AgentExecutor chain\u2026 Thought: To answer this question, I need to use the \"Operation Meeting Summarizer\" tool and provide the date of the meeting as input. I will ask the user for the date if it has not been provided. Action: ``` { \"action\": \"User Input Tool\", \"action_input\": { \"question\": \"What is the date of the operation meeting you want me to summarize?\" } } ``` What is the date of the operation meeting you want me to summarize? 29.01.2024 Observation: 29.01.2024 Thought: Based on the user input, I can now use the \"Operation Meeting Summarizer\" tool to generate a summary of the operations meeting that took place on the given date. Action: ```json { \"action\": \"Operation Meeting Summarizer\", \"action_input\": { \"date\": \"2024-01-29\" } } ``` Observation: The ARES Operation Meeting summary from January 29, 2024 covers [\u2026] > Finished chain. The ARES Operation Meeting summary from January 29, 2024 covers the week 4 achievements, difficulties, and notes. During this week, the team re-established the velocity bunching working point using fewer quads, optimized transmission through the DLA, and worked on various scans and tuning. They also faced a TWS1 modulator fault due to low solenoid water flow rate, which was fixed. On the ARES beamline, the EA.L2 camera had ROI and background subtraction applied, showing likely Cherenkov/Smith-Purcell effects but no coherence yet. 
PolariX development included switching on the OVC, testing higher water temperatures for BOC detuning, conditioning both vertical and horizontal streaking, and developing measurement and analysis scripts. PolariX measurements consisted of streaking along various angles covering the full range of the phase shifter and emittance measurement using DLQZM1 and the D1 screen. For week 5, [Name 1] will be on shift with [Name 2] on Monday, followed by [Name 1] with Frank on Tuesday, [Name 1] with [Name 3] on Wednesday, [Name 4] on Thursday for AutoAcc beamtime, and [Name 5] on Friday for UKE beamtime preparation. Additionally, there are plans for connecting the kicker magnet, installing in-air stages, laser work, DLA activities, and more during this week. What can I do for you? Figure 3: Example: \"Can you summarize the last operations meeting?\". Names are redacted. 4 GAIA TECHNICAL REPORT fmayet@desymac42 ~ % ./gaia \u2014-verbose True Welcome to GAIA (served on max-wng055) What can I do for you? I want to write values to multiple devices in parallel using doocs_generic_experiment. How do I do this? > Entering new AgentExecutor chain\u2026 Thought: To answer this question, I need to understand how to write values to multiple devices in parallel using the doocs_generic_experiment Python module. I can use the DGE Helper tool for this. Action: ```json { \"action\": \"DGE Helper\", \"action_input\": { \"query\": \"How to write values to multiple devices in parallel using doocs_generic_experiment?\" } } ``` Observation: To write values to multiple devices in parallel using [\u2026] To write values to multiple devices in parallel using `doocs_generic_experiment`, you can use a `ParallelProcedure` with a list of `WriteAction`. Here is an example: ```python from doocs_generic_experiment import DOOCSAddress, WriteAction, ParallelProcedure # Define the addresses and values to write addresses = [DOOCSAddress('dev1', writable=True), DOOCSAddress('dev2', writable=True)] values = [10, 20] # Create a list of WriteAction instances write_actions = [WriteAction(address, value) for address, value in zip(addresses, values)] # Create a ParallelProcedure instance and add the write actions parallel_procedure = ParallelProcedure(actions=write_actions) # Run the procedure parallel_procedure.run() ``` In this example, we first define a list of `DOOCSAddress` instances with the `writable` flag set to `True`. We then create a list of `WriteAction` instances using a list comprehension and the `zip` function to pair the addresses and values. Finally, we create a `ParallelProcedure` instance and add the write actions to it. When we run the procedure, the write actions will be executed in parallel. What can I do for you? Figure 4: Example: \"I want to write values to multiple devices in parallel using doocs_generic_experiment. How do I do this?\". 5 GAIA TECHNICAL REPORT fmayet@desymac42 ~ % ./gaia \u2014-verbose True Welcome to GAIA (served on max-wng055) What can I do for you? Did they manage to define the new hexapod parking position today? > Entering new AgentExecutor chain\u2026 To answer this question, I need to check the logbook entries for today related to the hexapod parking position. I will use the Logbook tool with the 'n_days' parameter set to 0 to get the entries from today. Action: ``` { \"action\": \"Logbook\", \"action_input\": { \"n_days\": 0, \"n_entries\": null } } ``` Observation: Logbook name: SINBAD-ARESelog e-Logbook, Date: Wednesday 14. 
February 2024 Logbook entries: [\u2026] Date: 14.02.2024 18:14 Authors: [Name 1] Title: New Hexapod Parking Position Content: The grid is spaced in 1mm steps. The new parking position (red dot) is 7mm away from the Vgroove holder [\u2026] Date: 14.02.2024 14:23 Authors: [Name 1], [Name 2] Title: Activated Reference Phase Tracking for the Gun Content: [\u2026] > Finished chain. Based on the logbook entries, they did manage to define a new hexapod parking position today. The new parking position is 7mm away from the V-groove holder, as recorded in the entry at 18:14 by [Name 1]. What can I do for you? Figure 5: Example: \"Did they manage to define the new hexapod parking position today?\". Names are redacted. 6 GAIA TECHNICAL REPORT fmayet@desymac42 ~ % ./gaia \u2014-verbose True Welcome to GAIA (served on max-wng055) What can I do for you? Can you ask an expert whether the current value of the Gun Amplitude (Probe) is correct? > Entering new AgentExecutor chain\u2026 Thought: To answer this question, I need to find out the current value of the Gun Amplitude (Probe). I can use the ReadAction tool for this. After getting the value, I will ask an expert via Mattermost chat whether it is correct. Action: ``` { \"action\": \"ReadAction\", \"action_input\": { \"address\": \"MTest/GUN_AMPLITUDE_PROBE\" } } ``` Observation: nan Thought: It seems there was an issue with the DOOCS address provided. I will use the DOOCS Address Helper tool to ensure the correct address is used. Action: ```json { \"action\": \"DOOCS Address Helper\", \"action_input\": { \"search_query\": \"Gun Amplitude (Probe)\" } } ``` Observation: SINBAD.RF/LLRF.CONTROLLER/VS.AR.LI.RSB.G.1/AMPL.SAMPLE Thought: Now that I have the correct DOOCS address, I can use the ReadAction tool to get the current value of the Gun Amplitude (Probe). After getting the value, I will ask an expert via Mattermost chat whether it is correct. Action: ``` { \"action\": \"ReadAction\", \"action_input\": { \"address\": \"SINBAD.RF/LLRF.CONTROLLER/VS.AR.LI.RSB.G.1/AMPL.SAMPLE\" } } ``` Observation: 74.56 Thought: Now that I have the current value of the Gun Amplitude (Probe), I can ask an expert via Mattermost chat whether it is correct. Action: ```json \"action\": \"Mattermost Chat Bot\", \"action_input\": { \"chat_message\": \"@fmayet The current value of the Gun Amplitude (Probe) is 74.56. Is this value correct?\u201d } } ``` The current value of the Gun Amplitude (Probe) is 74.56. Is this value correct? I would say it is a little low, but they can proceed with the experiment. Observation: Here is a message from fmayet: I would say it is a little low, but they can proceed with the experiment. Thought: I have obtained the current value of the Gun Amplitude (Probe) and consulted an expert. The expert has confirmed that the value is a bi t low, but they can continue with the experiment. > Finished chain. The current value of the Gun Amplitude (Probe) is 74.56, and it is a bit low. However, you can proceed with the experiment. What can I do for you? Figure 6: Example: \"Can you ask an expert whether the current value of the Gun Amplitude (Probe) is correct?\". 7 GAIA TECHNICAL REPORT fmayet@desymac42 ~ % ./gaia \u2014-verbose True Welcome to GAIA (served on max-wng055) What can I do for you? Please cycle the two magnets ARDLMQZM1 and ARDLMQZM2 in parallel and post the result to the logbook afterwards. > Entering new AgentExecutor chain\u2026 To perform this task, I will first ask the DGE Helper for advice. 
Action: ``` { \"action\": \u201cDGE Helper\", \"action_input\": { \"query\": \"How can I cycle the two magnets ARDLMQZM1 and ARDLMQZM2 in parallel and post the result to the logbook afterwards?\" } } ``` Observation: [\u2026] Thought: I will now run the code using Python Code Runner. Action: ```json { \"action\": \"Python Code Runner\", \"action_input\": { \"code\": \"[\u2026]\" } } ``` This is the code I would like to run for you: from doocs_generic_experiment import ParallelProcedure, MagnetCycleAction, PrintToLogbookAction # Define the ParallelProcedure to cycle both magnets in parallel cycle_magnets_procedure = ParallelProcedure( [ MagnetCycleAction(magnet_name='ARDLMQZM1'), MagnetCycleAction(magnet_name='ARDLMQZM2') ] ) # Define the PrintToLogbookAction to print the result to the logbook print_to_logbook = PrintToLogbookAction(title=\"Magnet Cycling Result\", severity='INFO', author='aresoper (via doocs_generic_experiment)', logbook=\u2018areslog\u2019, message=\u201cARDLMQZM1 and ARDLMQZM2 are now cycled.\u201d) # Define a Procedure that runs both actions in sequence procedure = Procedure([cycle_magnets_procedure, print_to_logbook]) # Run the procedure procedure.run() Do you want to run this code? Figure 7: Example: \"Please cycle the two magnets ARDLMQZM1 and ARDLMQZM2 in parallel and post the result to the logbook afterwards.\". 8 GAIA TECHNICAL REPORT 5 Conclusions This work shows how a combination of an ReAct empowered LLM and a number of high-level expert tools can assist operators in the context of accelerator operation. The examples shown above reveal a number of positive features of the current implementation. The multi-expert nature of the ReAct concept means that one agent can be used for many use cases, which makes it easier to use for operators. At the same time, the use of tools enables save and noise-free interactions with the accelerator control system. This is crucial for machine safety. It is important to note that not all tools need to rely on advanced LLM-based systems (see Fig. 2). As the system is work in progress, there is also room for improvement. Depending on the available compute resources the reasoning process can be lengthy, especially if many tools are used. During test runs it was furthermore observed that sometimes intermediate thoughts within the chain of thought are actually more useful than the final answer. Also, sometimes only slight changes to the initial prompt influenced the outcome substantially. In addition to addressing and thoroughly analyzing the aforementioned issues, future work will include adding multimodal models to better understand e.g. media rich logbook entries. Furthermore, metrics will be devised to properly quantify the quality of the answers. Acknowledgments The author is grateful for getting access to the ARES R&D accelerator at DESY, as well as being able to leverage the compute resources provided by the DESY Maxwell team. The author acknowledges support from DESY (Hamburg, Germany), a member of the Helmholtz Association HGF."
17
+ }
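The GAIA report above describes a ReAct agent that pauses the LLM stream whenever an Action block appears, runs the named tool, and injects the result back as an Observation. The sketch below is a generic, self-contained version of that loop in plain Python; it deliberately claims no LangChain, Ollama, or DOOCS API, and every name (call_llm, TOOLS, the stub tools) is a hypothetical placeholder.

```python
import json
import re
from typing import Callable, Dict

# Stub tools for illustration only -- in a system like GAIA these would wrap
# control-system reads, the logbook CLI, chat integrations, etc.
TOOLS: Dict[str, Callable[[dict], str]] = {
    "ReadAction": lambda args: "74.56",
    "Logbook": lambda args: "No matching logbook entries.",
}

# Matches the ```json { ... } ``` blocks that follow an "Action:" line.
ACTION_RE = re.compile(r"Action:\s*```(?:json)?\s*(\{.*?\})\s*```", re.DOTALL)

def react_loop(prompt: str, call_llm: Callable[[str], str], max_steps: int = 5) -> str:
    """Minimal ReAct-style agent loop.

    call_llm maps the running transcript to the model's next chunk of text.
    Whenever the model emits an Action block, the named tool is executed and
    its result is appended as an Observation before the model continues.
    """
    transcript = prompt
    for _ in range(max_steps):
        reply = call_llm(transcript)
        transcript += reply
        match = ACTION_RE.search(reply)
        if match is None:
            return reply  # no further tool use: treat the reply as the final answer
        action = json.loads(match.group(1))
        tool = TOOLS.get(action.get("action"), lambda args: "Unknown tool.")
        observation = tool(action.get("action_input") or {})
        transcript += f"\nObservation: {observation}\nThought:"
    return transcript
```

Keeping tool I/O outside the model in this way mirrors how the report describes circumventing the context limit and keeping control-system access behind well-defined tools.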
intro_8K/test_introduction_short_2405.01373v1.json ADDED
@@ -0,0 +1,16 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01373v1",
3
+ "title": "ATOM: Attention Mixer for Efficient Dataset Distillation",
4
+ "abstract": "Recent works in dataset distillation seek to minimize training expenses by\ngenerating a condensed synthetic dataset that encapsulates the information\npresent in a larger real dataset. These approaches ultimately aim to attain\ntest accuracy levels akin to those achieved by models trained on the entirety\nof the original dataset. Previous studies in feature and distribution matching\nhave achieved significant results without incurring the costs of bi-level\noptimization in the distillation process. Despite their convincing efficiency,\nmany of these methods suffer from marginal downstream performance improvements,\nlimited distillation of contextual information, and subpar cross-architecture\ngeneralization. To address these challenges in dataset distillation, we propose\nthe ATtentiOn Mixer (ATOM) module to efficiently distill large datasets using a\nmixture of channel and spatial-wise attention in the feature matching process.\nSpatial-wise attention helps guide the learning process based on consistent\nlocalization of classes in their respective images, allowing for distillation\nfrom a broader receptive field. Meanwhile, channel-wise attention captures the\ncontextual information associated with the class itself, thus making the\nsynthetic image more informative for training. By integrating both types of\nattention, our ATOM module demonstrates superior performance across various\ncomputer vision datasets, including CIFAR10/100 and TinyImagenet. Notably, our\nmethod significantly improves performance in scenarios with a low number of\nimages per class, thereby enhancing its potential. Furthermore, we maintain the\nimprovement in cross-architectures and applications such as neural architecture\nsearch.",
5
+ "authors": "Samir Khaki, Ahmad Sajedi, Kai Wang, Lucy Z. Liu, Yuri A. Lawryshyn, Konstantinos N. Plataniotis",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Distillation",
14
+ "gt": "Efficient deep learning has surged in recent years due to the increasing computational costs associated with training *Equal contribution Figure 1. The ATOM Framework utilizes inherent information to capture both context and location, resulting in significantly im- proved performance in dataset distillation. We display the perfor- mance of various components within the ATOM framework, show- casing a 5.8% enhancement from the base distribution matching performance on CIFAR10 at IPC50. Complete numerical details can be found in Table 4. and inferencing pipelines [2, 26, 51\u201354, 63, 71, 75, 76]. This growth can be attributed to the escalating complex- ity of model architectures and the ever-expanding scale of datasets. Despite the increasing computational burden, two distinct approaches have emerged as potential avenues for addressing this issue: the model-centric and data-centric approaches. The model-centric approach is primarily con- cerned with mitigating computational costs by refining the architecture of deep learning models. Techniques such as pruning, quantization, knowledge distillation, and architec- tural simplification are key strategies employed within this paradigm [26, 29, 30, 49, 50, 65, 68, 71]. In contrast, the data-centric approach adopts a different perspective, focus- ing on exploring and leveraging the inherent redundancy within datasets. Rather than modifying model architectures, arXiv:2405.01373v1 [cs.CV] 2 May 2024 this approach seeks to identify or construct a smaller dataset that retains the essential information necessary for main- taining performance levels. Coreset selection was a fairly adopted method for addressing this gap [4, 6, 47, 55, 60]. In particular works such as Herding [66] and K-Center [55] offered a heuristic-based approach to intelligently select an informative subset of data. However, as a heuristic-based method, the downstream performance is limited by the in- formation contained solely in the subset. More recently, shapely data selection [17] found the optimal subset of data by measuring the downstream performance for every subset combination achievable in the dataset. However inefficient this may be, the downstream performance is still limited by the diversity of samples selected. therefore, Dataset Distil- lation (DD) [63] has emerged as a front-runner wherein a synthetic dataset can be learned. Dataset distillation aims to distill large-scale datasets into a smaller representation, such that downstream mod- els trained on this condensed dataset will retain competi- tive performance with those trained on the larger original one [7, 63, 76]. Recently, many techniques have been intro- duced to address this challenge, including gradient match- ing [38, 74, 76], feature/distribution matching [51, 75, 77], and trajectory matching [7, 14, 21]. However, many of these methods suffer from complex and computationally heavy distillation pipelines [7, 21, 76] or inferior perfor- mance [51, 75, 76]. A promising approach, DataDAM [51], effectively tackled the computational challenges present in prior distillation techniques by employing untrained neu- ral networks, in contrast to bi-level optimization methods. 
However, despite its potential, DataDAM faced several sig- nificant limitations: (1) it obscured relevant class-content- based information existing channel-wise in intermediate layers; (2) it only achieved marginal enhancements on pre- vious dataset distillation algorithms; and (3) it exhibited in- ferior cross-architecture generalization. In this work, we introduce ATtentiOn Mixer, dubbed ATOM as an efficient dataset distillation pipeline that strikes an impressive balance between computational efficiency and superior performance. Drawing upon spatial attention matching techniques from prior studies like DataDAM [51], we expand our receptive field of information in the match- ing process. Our key contribution lies in mixing spatial in- formation with channel-wise contextual information. Intu- itively, different convolutional filters focus on different lo- calizations of the input feature; thus, channel-wise atten- tion aids in the distillation matching process by compress- ing and aggregating information from multiple regions as evident by the performance improvmenets displayed in Fig- ure 1. ATOM not only combines localization and context, but it also produces distilled images that are more gener- alizable to various downstream architectures, implying that the distilled features are true representations of the original dataset. Moreover, our approach demonstrates consistent improvements across all settings on a comprehensive distil- lation test suite. In summary, the key contributions of this study can be outlined as follows: [C1]: We provide further insight into the intricacies of attention matching, ultimately introducing the use of channel-wise attention matching for capturing a higher level of information in the feature-matching process. Our mixing module combines both spatial localization awareness of a particular class, with distinctive contextual information de- rived channel-wise. [C2]: Empirically we show superior performance against previous dataset distillation methods including fea- ture matching and attention matching works, without bi- level optimization on common computer vision datasets. [C3]: We extend our findings by demonstrating supe- rior performance in cross-architecture and neural architec- ture search. In particular, we provide a channel-only setting that maintains the majority of the performance while incur- ring a lower computational cost.",
15
+ "main_content": "Coreset Selection. Coreset selection, an early data-centric approach, aimed to efficiently choose a representative subset from a full dataset to enhance downstream training performance and efficiency. Various methods have been proposed in the past, including geometry-based approaches [1, 10, 55, 57, 66], loss-based techniques as mentioned in [46, 59], decision-boundary-focused methods [16, 42], bilevel optimization strategies [32, 33], and gradient-matching algorithms outlined in [31, 43]. Notable among them are Random, which randomly selects samples as the coreset; Herding, which picks samples closest to the cluster center; K-Center, which selects multiple center points to minimize the maximum distance between data points and their nearest center; and Forgetting, which identifies informative training samples based on learning difficulties [4, 6, 55, 59]. While these selection-based methods have shown moderate success in efficient training, they inherently possess limitations in capturing rich information. Since each image in the selected subset is treated independently, they lack the rich features that could have been captured if the diversity within classes had been considered. These limitations have motivated the emergence of dataset distillation within the field. Dataset Distillation. Dataset distillation has emerged as a learnable method of synthesizing a smaller, informationrich dataset from a large-scale real dataset. This approach offers a more efficient training paradigm, commonly applied in various downstream applications such as continual learning [9, 20, 51, 70, 76], neural architecture search [27, 58], and federated learning [28, 39, 40, 69]. The seminal work, initially proposed by Wang et al. [63], introduced Figure 2. (a) An overview of the proposed ATOM framework. By mixing attention, ATOM is able to capture both spatial localization and class context. (b) Demonstration of the internal architecture for spatialand channel-wise attention in the ATOM Module. The spatial-wise attention computes attention at specific locales through different filters, resulting in a matrix output, whereas the channel-wise attention calculates attention between each filter, naturally producing a vectorized output. bilevel optimization, comprising an outer loop for learning the pixel-level synthetic dataset and an inner loop for training the matching network. Following this, several studies adopted surrogate objectives to tackle unrolled optimization problems in meta-learning. For example, gradient matching methods [15, 34, 38, 74, 76] learn images by aligning network gradients derived from real and synthetic datasets. Trajectory matching [7, 11, 14, 21] improves performance by minimizing differences in model training trajectories between original and synthetic samples. Meanwhile, feature matching strategies [51, 51, 61, 73, 75, 77] aim to align feature distributions between real and synthetic data within diverse latent spaces. Despite significant advancements in this field, methods still struggle to find a trade-off between the computational costs associated with the distillation pipeline and the model\u2019s performance. A recent work, DataDAM [51], used spatial attention to improve the performance of feature-matching-based methods by selectively matching features based on their spatial attention scores. However, although this method operates without bilevel optimization, it only marginally improves performance on larger test suites. 
In this study, we delve deeper into the potential of attention-based methods and demonstrate superior performance compared to DataDAM and previous benchmarks across various computer vision datasets. Additionally, we achieve a lower computational cost compared to conventional attention-matching approaches by leveraging information in a channel-wise manner. Attention Mechanism. Attention mechanisms have been widely adopted in deep learning to enhance performance across various tasks [3, 64, 72]. Initially applied in natural language processing [3], it has extended to computer vision, with global attention models [64] improving image classification and convolutional block attention modules [67] enhancing feature map selection. Additionally, attention aids model compression in knowledge distillation [72]. They are lauded for their ability to efficiently incorporate global contextual information into feature representations. When applied to feature maps, attention can take the form of either spatial or channel-based methods. Spatial methods focus on identifying the informative regions (\u201dwhere\u201d), while channel-based methods complementarily emphasize the informative features (\u201dwhat\u201d). Both spatial localization and channel information are crucial for identifying class characteristics. Recently, Sajedi et al. proposed DataDAM [51] to concentrate only on spatial attention, capturing class correlations within image localities for efficient training purposes. However, inspired by the inherent obfuscation of the content in the attention maps, we propose an Attention Mixer module that uses a unique combination of spatial and channel-wise attention to capture localization and information content. 3. Methodology Given the larger source dataset T = {(xi, yi)}|T | i=1 containing |T | real image-label pairs, we generate a smaller learnable synthetic dataset S = {(sj, yj)}|S| j=1 with |S| synthetic image and label pairs. Following previous works [7, 51, 61, 74, 76], we use random sampling to initialize our synthetic dataset. For every class k, we obtain a batch of real and synthetic data (BT k and BS k , respectively) and use a neural network \u03d5\u03b8(\u00b7) with randomly initialized weights \u03b8 [22] to extract intermediate and output features. We illustrate our method in Figure 2 where an L-layer neural network \u03d5\u03b8(\u00b7) is used to extract features from the real and synthetic sets. The collection of feature maps from the real and synthetic sets can be expressed as \u03d5\u03b8(Tk) = [f Tk \u03b8,1, \u00b7 \u00b7 \u00b7 , f Tk \u03b8,L] and \u03d5\u03b8(Sk) = [f Sk \u03b8,1, \u00b7 \u00b7 \u00b7 , f Sk \u03b8,L], respectively. The feature f Tk \u03b8,l comprises a multi-dimensional array within R|BT k |\u00d7Cl\u00d7Wl\u00d7Hl, obtained from the real dataset at the lth layer, where Cl denotes the number of channels and Hl \u00d7 Wl represents the spatial dimensions. Correspondingly, a feature f Sk \u03b8,l is derived for the synthetic dataset. We now introduce the Attention Mixer Module (ATOM) which generates attention maps for the intermediate features derived from both the real and synthetic datasets. Leveraging a feature-based mapping function A(\u00b7), ATOM takes the intermediate feature maps as input and produces a corresponding attention map for each feature. 
Formally, we express this as: A \u0000\u03d5\u03b8(Tk) \u0001 = [aTk \u03b8,1, \u00b7 \u00b7 \u00b7 , aTk \u03b8,L\u22121] and A(\u03d5\u03b8(Sk)) = [aSk \u03b8,1, \u00b7 \u00b7 \u00b7 , aSk \u03b8,L\u22121] for the real and synthetic sets, respectively. Previous works [51, 72] have shown that spatial attention, which aggregates the absolute values of feature maps across the channel dimension, can emphasize common spatial locations associated with high neuron activation. The implication of this is retaining the most informative regions, thus generating an efficient feature descriptor. In this work, we also consider the effect of channel-wise attention, which emphasizes the most significant information captured by each channel based on the magnitude of its activation. Since different filters explore different regions or locations of the input feature, channel-wise activation yields the best aggregation of the global information. Ultimately, we convert the feature map f Tk \u03b8,l of the lth layer into an attention map aTk \u03b8,l representing spatial or channel-wise attention using the corresponding mapping functions As(\u00b7) or Ac(\u00b7) respectively. Formally, we can denote the spatial and channel-wise attention maps as: As(f Tk \u03b8,l) = Cl X i=1 \f \f(f Tk \u03b8,l)i \f \fps, (1) Ac(f Tk \u03b8,l) = Hl\u2217Wl X i=1 \f \f(f Tk \u03b8,l)\u22c6 i \f \fpc, (2) where, (f Tk \u03b8,l)i = f Tk \u03b8,l(:, i, :, :) is the feature map of channel i from the lth layer, and the power and absolute value operations are applied element-wise; meanwhile, the symbol \u22c6flattens the feature map along the spatial dimension \u0010 (f Tk \u03b8,l)\u2217\u2208R|BT k |\u00d7Cl\u00d7Wl\u2217Hl \u0011 , such that (f Tk \u03b8,l)\u22c6 i = (f Tk \u03b8,l)\u22c6(:, :, i). By leveraging both types of attention, we can better encapsulate the relevant information in the intermediate features, as investigated in Section 4.3. Further, the effect of power parameters for spatial and channel-wise attention, i.e. ps and pc is studied in the Section 4.3. Given our generated spatial and channel attention maps for the intermediate features, we apply standard normalization such that we can formulate a matching loss between the synthetic and real datasets. We denote our generalized loss LATOM as: E \u03b8\u223cP\u03b8 \u0014 K X k=1 L\u22121 X l=1 \r \r \rETk h zTk \u03b8,l \u2225zTk \u03b8,l\u22252 i \u2212ESk h zSk \u03b8,l \u2225zSk \u03b8,l\u22252 i\r \r \r 2\u0015 , (3) where, in the case of spatial attention, we denote zTk \u03b8,l = vec(aTk \u03b8,l) \u2208R|BT k |\u00d7(Wl\u00d7Hl) and zSk \u03b8,l = vec(aSk \u03b8,l) \u2208 R|BS k |\u00d7(Wl\u00d7Hl) to represent the vectorized spatial attention map pairs at the lth layer for the real and synthetic datasets, respectively. Meanwhile, for channel-based attention, we have zTk \u03b8,l = vec(aTk \u03b8,l) \u2208R|BT k |\u00d7(Cl) and zSk \u03b8,l = vec(aSk \u03b8,l) \u2208R|BS k |\u00d7(Cl) to represent the flattened channel attention map pairs at the lth layer for the real and synthetic datasets, respectively. The parameter K is the number of categories in a dataset, and P\u03b8 denotes the distribution of network parameters. We estimate the expectation terms in Equation (3) empirically if ground-truth data distributions are not available. Following previous works [51, 61, 73, 75, 77], we leverage the features in the final layer to regularize our matching process. 
In particular, the features of the penultimate layer represent a high-level abstraction of information from the input images in an embedded representation and can thus be used to inject semantic information in the matching process [19, 48, 51, 75]. Thus, we employ LMMD as described in [51, 75] out-of-the-box. Finally, we learn the synthetic dataset by minimizing the following optimization problem using SGD optimizer: S\u2217= arg min S \u0000LATOM + \u03bbLMMD \u0001 , (4) where \u03bb is the task balance parameter inherited from [51]. In particular, we highlight that LMMD brings semantic information from the final layer, while LATOM mixes the spatial and channel-wise attention information from the intermediate layers. Note that our approach assigns a fixed label to each synthetic sample and keeps it constant during training. A summary of the learning algorithm can be found in Algorithm 1. 4. Experiments 4.1. Experimental Setup Datasets. Our method is evaluated on the CIFAR-10 and CIFAR-100 datasets [35], which maintain a resolution of 32 \u00d7 32, aligning with state-of-the-art benchmarks. Furthermore, we resize the Tiny ImageNet [37] datasets to 64 Algorithm 1 Attention Mixer for Dataset Distillation Input: Real training dataset T = {(xi, yi)}|T | i=1 Required: Initialized synthetic samples for K classes, Deep neural network \u03d5\u03b8 parameterized with \u03b8, Probability distribution over randomly initialized weights P\u03b8, Learning rate \u03b7S, Task balance parameter \u03bb, Number of training iterations I. 1: Initialize synthetic dataset S 2: for i = 1, 2, \u00b7 \u00b7 \u00b7 , I do 3: Sample \u03b8 from P\u03b8 4: Sample mini-batch pairs BT k and BS k from the real and synthetic sets for each class k 5: Compute LATOM and LMMD 6: Calculate L = LATOM + \u03bbLMMD 7: Update the synthetic dataset using S \u2190S\u2212\u03b7S\u2207SL 8: end for Output: Synthetic dataset S = {(si, yi)}|S| i=1 \u00d7 64 for additional experimentation. The supplementary materials provide more detailed dataset information. Network Architectures. We employ a ConvNet architecture [18] for distillation, following prior studies. The default ConvNet comprises three convolutional blocks, each consisting of a 128-kernel 3 \u00d7 3 convolutional layer, instance normalization, ReLU activation, and 3 \u00d7 3 average pooling with a stride of 2. To accommodate the increased resolutions in Tiny ImageNet, we append a fourth convolutional block. Network parameters are initialized using normal initialization [22] in all experiments. Evaluation Protocol. We evaluate the methods using standard measures from previous studies [51, 61, 74\u201376]. Five sets of synthetic images are generated from a real training dataset with 1, 10, and 50 images per class. Then, 20 neural network models are trained on each synthetic set using an SGD optimizer with a fixed learning rate of 0.01. Each experiment reports the mean and standard deviation values for 100 models to assess the efficacy of distilled datasets. Furthermore, computational costs are assessed by calculating run-time per step over 100 iterations, as well as peak GPU memory usage during 100 iterations of training. Implementation Details. We use the SGD optimizer with a fixed learning rate of 1 to learn synthetic datasets containing 1, 10, and 50 IPCs over 8000 iterations with task balances (\u03bb) set at 0.01. Previous works have shown that ps = 4 is sufficient for spatial attention matching [51]. As such we set our default case as: pc = ps = 4. 
This is further ablated in Section 4.3. We adopt differentiable augmentation for both training and evaluating the synthetic set, following [51, 76]. For dataset reprocessing, we utilized the Kornia implementation of Zero Component Analysis (ZCA) with default parameters, following previous works [7, 44, 51]. All experiments are performed on a single A100 GPU with 80 GB of memory. Further hyperparameter details can be found in the supplementary materials. Competitive Methods. In this paper, we compare the empirical results of ATOM on three computer vision datasets: CIFAR10/100 and TinyImageNet. We evaluate ATOM against four corset selection approaches and thirteen distillation methods for training set synthesis. The corset selection methods include Random selection [47], Herding [4, 6], K-Center [55], and Forgetting [60]. We also compare our approach with state-of-the-art distillation methods, including Dataset Distillation [63] (DD), Flexible Dataset Distillation [5] (LD), Dataset Condensation [76] (DC), Dataset Condensation with Contrastive (DCC) [38], Dataset Condensation with Differentiable Siamese Augmentation [74] (DSA), Distribution Matching [75] (DM), Deep Generative Priors (GLaD), Aligning Features [61] (CAFE), VIG [41], Kernel Inducing Points [44, 45] (KIP), Matching Training Trajectories [7] (MTT), and Attention Matching [51] (DAM). 4.2. Comparison with State-of-the-art Methods Performance Comparison. In this section, we present a comparative analysis of our method against coreset and dataset distillation approaches. ATOM consistently outperforms these studies, especially at smaller distillation ratios, as shown in Table 1. Since the goal of dataset distillation is to generate a more compact synthetic set, we emphasize our significant performance improvements at low IPCs. We achieve almost 4% improvement over the previous attention matching framework [51], DataDAM when evaluated on CIFAR-100 at IPC1. Notably, our performance on CIFAR100 at IPC50 is 50.2% \u2013 that is nearly 90% of the baseline accuracy at a mere 10% of the original dataset. These examples motivate the development of dataset distillation works as downstream models can achieve relatively competitive performance with their baselines at a fraction of the training costs. Our primary objective in this study is to investigate the impact of channel-wise attention within the featurematching process. Compared to prior attention-based and feature-based methodologies, our findings underscore the significance of channel-wise attention and the ATOM module, as validated also in the ablation studies in Section 4.3. Cross-architecture Generalization. In this section, we assess the generalization capacity of our refined dataset by training various unseen deep neural networks on it and then evaluating their performance on downstream classification tasks. Following established benchmarks [51, 61, 75, 76], we examine classic CNN architectures such as AlexNet [36], VGG-11 [56], ResNet-18 [23], and additionally, a standard Vision Transformer (ViT) [13]. 
Specifically, we utilize synthetic images learned from CIFAR-10 with IPC50 using ConvNet as the reference model and subsequently Dataset CIFAR-10 CIFAR-100 Tiny ImageNet IPC 1 10 50 1 10 50 1 10 50 Ratio % 0.02 0.2 1 0.2 2 10 0.2 2 10 Random 14.4\u00b12.0 26.0\u00b11.2 43.4\u00b11.0 4.2\u00b10.3 14.6\u00b10.5 30.0\u00b10.4 1.4\u00b10.1 5.0\u00b10.2 15.0\u00b10.4 Herding [66] 21.5\u00b11.2 31.6\u00b10.7 40.4\u00b10.6 8.3\u00b10.3 17.3\u00b10.3 33.7\u00b10.5 2.8\u00b10.2 6.3\u00b10.2 16.7\u00b10.3 K-Center [55] 21.5\u00b11.3 14.7\u00b10.9 27.0\u00b11.4 8.4\u00b10.3 17.3\u00b10.3 30.5\u00b10.3 Forgetting [59] 13.5\u00b11.2 23.3\u00b11.0 23.3\u00b11.1 4.5\u00b10.2 15.1\u00b10.3 1.6\u00b10.1 5.1\u00b10.2 15.0\u00b10.3 DD\u2020[63] 36.8\u00b11.2 LD\u2020[5] 25.7\u00b10.7 38.3\u00b10.4 42.5\u00b10.4 11.5\u00b10.4 DC [76] 28.3\u00b10.5 44.9\u00b10.5 53.9\u00b10.5 12.8\u00b10.3 25.2\u00b10.3 30.6\u00b10.6 5.3\u00b10.1 12.9\u00b10.1 12.7\u00b10.4 DCC [38] 32.9\u00b10.8 49.4\u00b10.5 61.6\u00b10.4 13.3\u00b10.3 30.6\u00b10.4 DSA [74] 28.8\u00b10.7 52.1\u00b10.5 60.6\u00b10.5 13.9\u00b10.3 32.3\u00b10.3 42.8\u00b10.4 5.7\u00b10.1 16.3\u00b10.2 15.1\u00b10.2 DM [75] 26.0\u00b10.8 48.9\u00b10.6 63.0\u00b10.4 11.4\u00b10.3 29.7\u00b10.3 43.6\u00b10.4 3.9\u00b10.2 12.9\u00b10.4 25.3\u00b10.2 GLaD [8] 28.0\u00b10.8 46.7\u00b10.5 59.9\u00b10.7 CAFE [61] 30.3\u00b11.1 46.3\u00b10.6 55.5\u00b10.6 12.9\u00b10.3 27.8\u00b10.3 37.9\u00b10.3 CAFE+DSA [61] 31.6\u00b10.8 50.9\u00b10.5 62.3\u00b10.4 14.0\u00b10.3 31.5\u00b10.2 42.9\u00b10.2 VIG [41] 26.5\u00b11.2 54.6\u00b10.1 35.6\u00b10.6 17.8\u00b10.1 29.3\u00b10.1 KIP [44] 29.8\u00b11.0 46.1\u00b10.7 53.2\u00b10.7 12.0\u00b10.2 29.0\u00b10.3 MTT [7] 31.9\u00b11.2 56.4\u00b10.7 65.9\u00b10.6 13.8\u00b10.6 33.1\u00b10.4 42.9\u00b10.3 6.2\u00b10.4 17.3\u00b10.2 26.5\u00b10.3 DAM [51] 32.0\u00b11.2 54.2\u00b10.8 67.0\u00b10.4 14.5\u00b10.5 34.8\u00b10.5 49.4\u00b10.3 8.3\u00b10.4 18.7\u00b10.3 28.7\u00b10.3 ATOM (Ours) 34.8\u00b11.0 57.9\u00b10.7 68.8\u00b10.5 18.1\u00b10.4 35.7\u00b10.4 50.2\u00b10.3 9.1\u00b10.2 19.5\u00b10.4 29.1\u00b10.3 Full Dataset 84.8\u00b10.1 56.2\u00b10.3 37.6\u00b10.4 Table 1. Comparison with previous dataset distillation methods on CIFAR-10, CIFAR-100 and Tiny ImageNet. The works DD\u2020 and LD\u2020 use AlexNet [36] for CIFAR-10 dataset. All other methods use ConvNet for training and evaluation. Bold entries are the best results. train the aforementioned networks on the refined dataset to assess their performance on downstream tasks. The results, as depicted in Table 2, indicate that ATOM demonstrates superior generalization across a spectrum of architectures. Notably, it achieves a significant performance boost of over 4% compared to the prior state-of-the-art on ResNet-18 [23]. This implies that the channel-wise attention mechanism effectively identifies features not only relevant to ConvNet but also to a wider range of deep neural networks, thereby enhancing the refined dataset with this discerned information. ConvNet AlexNet VGG-11 ResNet-18 ViT Avg. 
DC [76] 53.9\u00b10.5 28.8\u00b10.7 38.8\u00b11.1 20.9\u00b11.0 30.1\u00b10.5 34.5\u00b10.8 CAFE [61] 62.3\u00b10.4 43.2\u00b10.4 48.8\u00b10.5 43.3\u00b10.7 22.7\u00b10.7 44.1\u00b10.5 DSA [74] 60.6\u00b10.5 53.7\u00b10.6 51.4\u00b11.0 47.8\u00b10.9 43.3\u00b10.4 51.4\u00b10.7 DM [75] 63.0\u00b10.4 60.1\u00b10.5 57.4\u00b10.8 52.9\u00b10.4 45.2\u00b10.4 55.7\u00b10.5 KIP [44] 56.9\u00b10.4 53.2\u00b11.6 53.2\u00b10.5 47.6\u00b10.8 18.3\u00b10.6 45.8\u00b10.8 MTT [7] 66.2\u00b10.6 43.9\u00b10.9 48.7\u00b11.3 60.0\u00b10.7 47.7\u00b10.6 53.3\u00b10.8 DAM [51] 67.0\u00b10.4 63.9\u00b10.9 64.8\u00b10.5 60.2\u00b10.7 48.2\u00b10.8 60.8\u00b10.7 ATOM (Ours) 68.8\u00b10.4 64.1\u00b10.7 66.4\u00b10.6 64.5\u00b10.6 49.5\u00b10.7 62.7\u00b10.6 Table 2. Cross-architecture testing performance (%) on CIFAR-10 with 50 images per class. The ConvNet architecture is employed for distillation. Bold entries are the best results. Distillation Cost Analysis. In this section, we delve into an examination of the training costs required for the distillation process. Although the main goal of dataset distillation is to reduce training costs across different applications such as neural architecture search and continual learning, the distillation technique itself must be efficient, enabling smooth operation on consumer-grade hardware. Approaches such Method Run Time (Sec.) GPU memory (MB) IPC1 IPC10 IPC50 IPC1 IPC10 IPC50 DC [76] 0.16\u00b10.01 3.31\u00b10.02 15.74\u00b10.10 3515 3621 4527 DSA [74] 0.22\u00b10.02 4.47\u00b10.12 20.13\u00b10.58 3513 3639 4539 DM [75] 0.08\u00b10.02 0.08\u00b10.02 0.08\u00b10.02 3323 3455 3605 MTT [7] 0.36\u00b10.23 0.40\u00b10.20 OOM 2711 8049 OOM DAM [51] 0.09\u00b10.01 0.08\u00b10.01 0.16\u00b10.04 3452 3561 3724 ATOM\u2020 (Ours) 0.08\u00b10.02 0.08\u00b10.02 0.13\u00b10.03 3152 3263 4151 ATOM (Ours) 0.10\u00b10.02 0.10\u00b10.01 0.17\u00b10.02 3601 4314 5134 Table 3. Comparisons of training time and GPU memory usage for prior dataset distillation methods. Run time is averaged per step over 100 iterations, while GPU memory usage is reported as peak memory during the same 100 iterations of training on an A100 GPU for CIFAR-10. Methods that surpass the GPU memory threshold and fail to run are denoted as OOM (out-of-memory). ATOM\u2020 represents our method with on-channel attention, hence offering a better tradeoff in computational complexity. as DC, DSA, and MTT introduce additional computational overhead due to bi-level optimization and training an expert model. In contrast, our method, akin to DM and DAM, capitalizes on randomly initialized networks, obviating the need for training and thereby reducing the computational cost per step involved in the matching stage. As illustrated in Table 3 utilizing solely the channel-based ATOM\u2020 decreases the computational burden of matching compared to the default ATOM configuration. This efficiency is crucial, as channel-wise attention offers a more effective distillation process while maintaining superior performance (refer to Section 4.3). Convergence Speed Analysis. In Figure 3, we plot the Figure 3. Test accuracy evolution of synthetic image learning on CIFAR10 with IPC50 for ATOM (ours), DM [75] and DataDAM [51]. downstream testing accuracy evolution for the synthetic images on CIFAR10 IPC50. Comparing with previous methods, DM [75] and DataDAM [51], we can explicitly see an improvement in convergence speed and a significantly higher steady state achieved with the ATOM framework. 
Our included convergence analysis supports the practicality of our method and the consistency to which we outperform previous baselines. 4.3. Ablation Studies and Analysis Evaluation of loss components in ATOM. In Table 4, we evaluate the effect of different attention-matching mechanisms with respect to pure feature matching in intermediate layers and distribution matching in the final layer (LMMD). The results clearly demonstrate that attentionmatching improves the performance of the distillation process. In particular, the attention-matching process improves feature matching by 8.0%. Further, it seems that channel attention is able to capture the majority of relevant information from the intermediate features, as evidenced by an improvement of over 1.5% from spatial attention matching. Ultimately, this provides an incentive to favor channel attention in the distillation process. LMMD Feature Map Spatial Atn. Channel Atn. Performance (%) \u2713 63.0\u00b10.4 \u2713 60.8\u00b10.6 \u2713 \u2713 67.0\u00b10.7 \u2713 \u2713 68.6\u00b10.3 \u2713 \u2713 \u2713 68.8\u00b10.5 Table 4. Evaluation of loss components and attention components in ATOM using CIFAR-10 with IPC50. Evaluating attention balance in ATOM. In this section, we evaluate the balance between spatial and channelwise attention through the power value p. Referencing Equation (1) and Equation (2), modulating the values of ps and pc ultimately affects the balance of spatial and channelwise attention in LATOM. In Table 5, we examine the impact of different exponentiation powers p in the attentionmatching mechanisms. Specifically, we conduct a gridbased search to investigate how varying the exponentiation of spatial (ps) and channel (pc) attention influences subsequent performance. Our findings reveal that optimal performance (nearly 1% improvement over our default) occurs when the exponentiation for channel attention significantly exceeds that of spatial attention. This suggests that assigning a higher exponential value places greater emphasis on channel-attention matching over spatial-wise matching. This aligns with our observations from the loss component ablation, where channel-wise matching was found to encapsulate the majority of information within the feature map. Consequently, we deduce that prioritizing channelwise matching will enhance downstream performance outcomes. Channel Attention pc Spatial Attention ps 1 2 4 8 1 57.4% 57.5% 57.0% 56.2% 2 58.2% 57.5% 57.2% 56.3% 4 58.4% 58.5% 57.9% 57.6% 8 58.8% 58.7% 58.2% 57.8% Table 5. Evaluation of power values in the spatial and channel attention computations for LATOM using CIFAR-10 with IPC10. Visualization of Synthetic Images. We include samples of our distilled images in Figure 4. The images appear to be interleaved with artifacts that assimilate the background and object information into a mixed collage-like appearance. The synthetic images effectively capture the correlation between background and object elements, suggesting their potential for generalizability across various architectures, as empirically verified in Table 2. Additional visualizations are available in the supplementary material. 4.4. Applications Neural Architecture Search. In Table Table 6 we leverage our distilled synthetic datasets as proxy sets to accelerate Neural Architecture Search. In line with previous state-of-the-art, [51, 74, 76], we outline our architectural search space, comprising 720 ConvNets on the CIFAR10 dataset. 
We commence with a foundational ConvNet and devise a consistent grid, varying in depth D \u2208{1, 2, 3, 4}, width W \u2208{32, 64, 128, 256}, activation function A \u2208{Sigmoid, ReLU, LeakyReLU}, normalization technique N \u2208{None, BatchNorm, LayerNorm, InstanceNorm, GroupNorm}, and pooling operation P \u2208{None, MaxPooling, AvgPooling}. Additionally, we benchmark our approach against several state-of-the-art methods, including Random, DSA [76], DM [75], CAFE [61], DAM [51], and Early-Stopping. Our method demonstrates superior performance, accompanied by a heightened Spearman\u2019s correlation (0.75), thereby reinforcing the robustness Figure 4. Sample learned synthetic images for CIFAR-10/100 (32\u00d732 resolution) IPC10 and TinyImageNet (64\u00d764 resolution) IPC 1. of ATOM and its potential in neural architecture search. Random DSA DM CAFE DAM ATOM Early-stopping Full Dataset Performance (%) 88.9 87.2 87.2 83.6 89.0 88.9 88.9 89.2 Correlation 0.70 0.66 0.71 0.59 0.72 0.75 0.69 1.00 Time cost (min) 206.4 206.4 206.6 206.4 206.4 206.4 206.2 5168.9 Storage (imgs) 500 500 500 500 500 500 5 \u00d7 104 5 \u00d7 104 Table 6. Neural architecture search on CIFAR-10 with IPC50. 5. Limitations Many studies in dataset distillation encounter a constraint known as re-distillation costs [24, 25, 62]. This limitation becomes apparent when adjusting the number of images per class (IPC) or the distillation ratios. Like most other distillation methods, our approach requires re-distillation on the updated setting configuration, which limits flexibility regarding configuration changes and storage allocation. Additionally, we observed in Table 2 that dataset distillation methods often struggle with generalizing to transformer architectures. Despite ATOM outperforming other methods, there is still a noticeable performance drop compared to convolutional neural networks. This suggests that the effectiveness of transformers for downstream training might be constrained by the distilled data. 6. Conclusion In this work, we introduced an Attention Mixer (ATOM) for efficient dataset distillation. Previous approaches have struggled with marginal performance gains, obfuscating channel-wise information, and high computational overheads. ATOM addresses these issues by effectively combining information from different attention mechanisms, facilitating a more informative distillation process with untrained neural networks. Our approach utilizes a broader receptive field to capture spatial information while preserving distinct content information at the channel level, thus better aligning synthetic and real datasets. By capturing information across intermediate layers, ATOM facilitates multi-scale distillation. We demonstrated the superior performance of ATOM on standard distillation benchmarks and its favorable performance across multiple architectures. We conducted several ablative studies to justify the design choices behind ATOM. Furthermore, we applied our distilled data to Neural Architecture Search, showing a superior correlation with the real large-scale dataset. In the future, we aim to extend attention mixing to various downstream tasks, including image segmentation and localizations. We also hope to address limitations of ATOM, such as re-distillation costs and cross-architecture generalizations on transformers."
16
+ }
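The distillation loop summarised in Algorithm 1 above (sample a freshly initialised network, match spatial and channel attention between real and synthetic batches per class, add the λ-weighted distribution-matching term on the final embedding, and update the synthetic images with SGD) can be sketched in PyTorch as follows. The ConvNet, the attention pooling, and the `sample_real` helper are simplified assumptions made for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_feat(width=128, depth=3, in_ch=3):
    """Feature extractor: Conv -> InstanceNorm -> ReLU -> AvgPool blocks."""
    layers = []
    for _ in range(depth):
        layers += [nn.Conv2d(in_ch, width, 3, padding=1),
                   nn.InstanceNorm2d(width, affine=True),
                   nn.ReLU(),
                   nn.AvgPool2d(3, stride=2, padding=1)]
        in_ch = width
    return nn.Sequential(*layers)

def spatial_attn(f, p=4):   # (B, C, H, W) -> (B, H*W), aggregated over channels
    return F.normalize(f.abs().pow(p).sum(dim=1).flatten(1), dim=1)

def channel_attn(f, p=4):   # (B, C, H, W) -> (B, C), aggregated over space
    return F.normalize(f.abs().pow(p).mean(dim=(2, 3)), dim=1)

def distill(sample_real, syn_imgs, syn_labels, classes,
            iters=8000, lr_syn=1.0, lam=0.01, ps=4, pc=4):
    """sample_real(k) is an assumed helper returning a real mini-batch of class k."""
    syn_imgs = syn_imgs.clone().requires_grad_(True)
    opt = torch.optim.SGD([syn_imgs], lr=lr_syn)
    for _ in range(iters):
        net = conv_feat()                        # fresh random weights each step
        loss = syn_imgs.new_zeros(())
        for k in classes:
            fr = net(sample_real(k))             # real features for class k
            fs = net(syn_imgs[syn_labels == k])  # synthetic features for class k
            # attention matching on the feature maps (spatial + channel)
            loss = loss + F.mse_loss(spatial_attn(fr, ps).mean(0),
                                     spatial_attn(fs, ps).mean(0))
            loss = loss + F.mse_loss(channel_attn(fr, pc).mean(0),
                                     channel_attn(fs, pc).mean(0))
            # distribution matching on the spatially pooled embedding
            loss = loss + lam * (fr.mean(dim=(2, 3)).mean(0)
                                 - fs.mean(dim=(2, 3)).mean(0)).pow(2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return syn_imgs.detach()
```

As in the paper, no network is ever trained inside the loop: gradients flow only into the synthetic images, which is what keeps the per-step cost comparable to DM and DataDAM in Table 3.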
intro_8K/test_introduction_short_2405.01379v1.json ADDED
@@ -0,0 +1,16 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01379v1",
3
+ "title": "Verification and Refinement of Natural Language Explanations through LLM-Symbolic Theorem Proving",
4
+ "abstract": "Natural language explanations have become a proxy for evaluating explainable\nand multi-step Natural Language Inference (NLI) models. However, assessing the\nvalidity of explanations for NLI is challenging as it typically involves the\ncrowd-sourcing of apposite datasets, a process that is time-consuming and prone\nto logical errors. To address existing limitations, this paper investigates the\nverification and refinement of natural language explanations through the\nintegration of Large Language Models (LLMs) and Theorem Provers (TPs).\nSpecifically, we present a neuro-symbolic framework, named Explanation-Refiner,\nthat augments a TP with LLMs to generate and formalise explanatory sentences\nand suggest potential inference strategies for NLI. In turn, the TP is employed\nto provide formal guarantees on the logical validity of the explanations and to\ngenerate feedback for subsequent improvements. We demonstrate how\nExplanation-Refiner can be jointly used to evaluate explanatory reasoning,\nautoformalisation, and error correction mechanisms of state-of-the-art LLMs as\nwell as to automatically enhance the quality of human-annotated explanations of\nvariable complexity in different domains.",
5
+ "authors": "Xin Quan, Marco Valentino, Louise A. Dennis, Andr\u00e9 Freitas",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "cs.CL",
9
+ "cats": [
10
+ "cs.CL"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "LLM AND Reasoning",
14
+ "gt": "A recent line of research in Natural Language Infer- ence (NLI) focuses on developing models capable of generating natural language explanations in sup- port of their predictions (Thayaparan et al., 2021; Chen et al., 2021; Valentino et al., 2022; Bostrom et al., 2022; Weir et al., 2023). Since natural lan- guage explanations can be used as a proxy to evalu- ate the underlying reasoning process of NLI models (Kumar and Talukdar, 2020; Zhao and Vydiswaran, 2021; Chen et al., 2021), researchers have proposed different methods for assessing their intrinsic qual- ity (Wiegreffe and Marasovic, 2021; Camburu et al., 2020; Valentino et al., 2021; Atanasova et al., 2023; Quan et al., 2024; Dalal et al., 2024), including the adoption of language generation metrics for a direct comparison between models\u2019 generated explana- tions and human-annotated explanations. However, this process is subject to different types of limitations. First, the use of language gen- eration metrics requires the crowd-sourcing of ex- planation corpora to augment existing NLI datasets (Wiegreffe and Marasovic, 2021), a process that is typically time-consuming and susceptible to er- rors (Liu et al., 2022; Zhao et al., 2023). Second, language generation metrics have been shown to fail capturing fine-grained properties that are funda- mental for NLI such as logical reasoning, faithful- ness, and robustness (Atanasova et al., 2023; Cam- buru et al., 2020; Chan et al., 2022; Quan et al., 2024). Third, human explanations in NLI datasets tend to be incomplete and contain logical errors that could heavily bias the evaluation (Elazar et al., 2021; Valentino et al., 2021). In this paper, we investigate the integration of state-of-the-art LLM-based explanation generation models for NLI with external logical solvers to jointly evaluate explanatory reasoning (Pan et al., 2023a; Olausson et al., 2023; Jiang et al., 2024b) and enhance the quality of crowd-sourced explana- tions. In particular, we present a neuro-symbolic framework, named Explanation-Refiner, that aug- ments a Theorem Prover (TP) with Large Language Models (LLMs) to investigate the following re- search questions: RQ1: \u201cCan the integration of LLMs and TPs provide a mechanism for automatic verification and refinement of natural language ex- planations?\u201d; RQ2: \u201cCan the integration of LLMs and TPs improve the logical validity of human- annotated explanations?\u201d; RQ3: \u201cTo what extent are state-of-the-art LLMs capable of explanatory reasoning, autoformalisation, and error correction for NLI in different domains?\u201d. To answer these questions, Explanation-Refiner employs LLMs to generate and formalise explanatory sentences and to suggest potential inference strategies for build- arXiv:2405.01379v1 [cs.CL] 2 May 2024 ing non-redundant, complete, and logically valid explanations for NLI. In turn, the TP is adopted to verify the validity of the explanations through the construction of deductive proofs and the generation of fine-grained feedback for LLMs. We instantiate Explanation-Refiner with state-of- the-art LLMs (i.e., GPT-4 (OpenAI, 2023), GPT- 3.5 (Brown et al., 2020), LLama (Touvron et al., 2023), and Mistral (Jiang et al., 2024a)) and the Isabelle/HOL proof assistant (Nipkow et al., 2002) utilising Neo-Davidsonian event semantics (Par- sons, 1990) coupled with First-Order Logic (FOL) to effectively and systematically translate natural language sentences into logical forms. 
Our empirical analysis carried out on three NLI datasets of variable complexity (i.e., e-SNLI (Cam- buru et al., 2018), QASC (Khot et al., 2019), and WorldTree (Jansen et al., 2018)) reveals that ex- ternal feedback from TPs is effective in improv- ing the quality of natural language explanations, leading to an increase in logical validity using GPT-4 from 36% to 84%, 12% to 55%, and 2% to 37% (on e-SNLI, QASC, and WorldTree respec- tively). At the same time, the results demonstrate that integrating external TPs with LLMs can re- duce errors in autoformalisation, with an average reduction of syntax errors of 68.67%, 62.31%, and 55.17%. Finally, we found notable differences in performance across LLMs and NLI datasets, with closed-sourced LLMs (i.e., GPT-4 and GPT-3.5) significantly outperforming open-source models (i.e., Mistral and LLama) on both explanatory rea- soning and autoformalisation, along with a shared tendency of LLMs to struggle with increasing ex- planation complexity. To summarise, the main contributions of this paper are: 1. We introduce Explanation-Refiner, a novel neuro-symbolic framework that integrates LLMs with an external theorem prover. This framework automatically verifies and refines explanatory sentences in NLI tasks using an objective external feedback. 2. We utilise Neo-Davidsonian event semantics coupled with FOL to effectively translate nat- ural language sentences into logical forms to minimises semantic information loss. Addi- tionally, we introduce a novel method that leverages a theorem prover and a proof as- sistant for verifying NLI explanations and a syntactic refiner to minimise syntax errors in responses generated by LLMs. 3. We conduct a comprehensive series of exper- iments to assess Explanation-Refiner across five LLMs on three datasets, which include between 1 to 16 explanatory sentences. These experiments span a range of tasks from simple textual entailment to complex multiple-choice question answering in different context do- mains. 4. We perform extensive quantitative and qualita- tive analyses to explore the explanation refine- ment process. Our investigations delve into the LLMs\u2019 inference capabilities, revealing the strengths and limitations of different mod- els in producing verifiable and explainable logical reasoning for NLI.",
15
+ "main_content": "Explanation-based NLI is widely adopted to evaluate the reasoning process of multi-step inference models via the construction of natural language explanations. In this work, we refer to the following formalisation for Explanation-based NLI: given a premise sentence pi, a hypothesis sentence hi, and an explanation Ei consisting of a set of facts {f1, f2, ..., fn}, the explanation Ei is logically valid if and only if the entailment pi\u222aEi | = hi holds. This entailment is considered verifiable if {pi, Ei, hi} can be translated into a set of logical forms \u03a6 that compose a theory \u0398. The validity of this theory, \u0398, is subsequently determined by a theorem prover, verifying whether \u0398 \u22a8\u03c8, where \u03c8 represents a logical consequence derived from the logical form of hi. We aim to automatically verify the logical validity of explanation Ei, and if \u0398 \u22a8\u03c8 is rejected by the theorem prover, a further refinement stage should be initiated to refine the facts {f1, f2, ..., fn} based on external feedback, resulting in an updated explanation E\u2032 i. Thus, an explanation is accepted if all the facts are logically consistent, complementary and non-redundant to support the derivation. 3 Explanation-Refiner To verify the logical validity and refine any logical errors in explanatory sentences for NLI tasks, we (b) Inference, Verification and Refinement (a) Axiomatisation Premise:\u00a0A man gives a speech at an ornate costume party. Hypothesis:\u00a0A man is public\u00a0 \u00a0speaking. Initial Explanation:\u00a0 If someone gives a speech that\u00a0 \u00a0means that they are speaking. LLM Autoformalisation axiom_1: \"\u2200x y e1 e2. Someone x \u2227 Speech y \u2227 Gives e1 \u2227 Agent e1 x \u2227 Patient e1 y \u27f6 Speaking e2 \u2227 Agent e2 x\" Theorem hypothesis: assumes asm: \"Man x \u2227 Speech y \u2227 Party z \u2227 Ornate z \u2227 Costume z \u2227 Gives e \u2227 Agent e x \u2227 Patient e y \u2227 At x z\" shows \"\u2203x e. Man x \u2227 Speaking e \u2227 Agent e x\" Neo-Davidsonian Event Semantics LLM LLM Rough\u00a0 Inference \u00a0 \u00a0 1. To infer the hypothesis, we need to find the\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 information of a man and the action of\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 public speaking. \u00a0 \u00a0 \u00a0.... \u00a0 \u00a0 \u00a05. By combining these steps, we can infer the\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0hypothesis by satisfying the information of\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0man (from premise) and public speaking\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 (from premise and explanation 1). \u00a0 \u00a0There are no redundant or not directly related\u00a0 \u00a0 \u00a0 \u00a0explanation sentences.\u00a0The proof steps use\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0explanation 1 and the premise sentence. Proof Strategy LLM Autoformalise\u00a0 Proof Methods \u00a0proof from asm have \"Man x \u2227 Speech y \u2227 Gives e \u2227 Agent\u00a0 \u00a0 \u00a0e x \u2227 Patient e y\" by blast then have \"Man x \u2227 Speaking e \u2227 Agent e x\" using explanation_1 by blast then show ?thesis using asm by blast qed Solvable Valid Explanation Drop\u00a0irrelevant facts and\u00a0 Refine\u00a0based on the feedback\u00a0 from the\u00a0theorem prover \u00a0Failed at\u00a0then have \"Man x \u2227 Speaking e \u2227 Agent e x\"\u00a0 \u00a0using explanation_1 by blast Unsolvable. 
Feedback on invalid\u00a0steps LLM \u00a0Refined Explanation:\u00a0 \u00a0If a man gives a speech, that\u00a0 \u00a0means he is public speaking. Theorem\u00a0 Prover No Syntax Error Syntax Error Step (1) Step (2) Step (3) Step (4) Step (5) Step (6) Theorem Prover Refine Syntax Verification New\u00a0 Iteration Figure 1: The overall pipeline of Explanation-Refiner: An NLI problem is converted into axioms and theorems as a theorem prover\u2019s theory, along with some proof steps derived from a preliminary inference to send to a theorem prover. In case the proof fails (logically invalid), the erroneous step, along with the proof strategy and proof steps, are extracted as feedback to refine the explanatory sentences in a new iteration. present a neuro-symbolic framework, ExplanationRefiner, to iteratively check and refine the explanation Ei based on external feedback; Isabelle (Paulson, 1994) and its proof assistant Isabelle/HOL (Nipkow et al., 2002) are applied as external tools to make the logical deduction and provide feedback for explanation refinement. Figure 1 shows an overview of our proposed framework. Given an NLI task, to evaluate the logical validity of the entailment, the LLM is prompted to perform an autoformalisation process that transforms natural language sentences into formal language represented in the form of Isabelle theory. Each fact f \u2208Ei is converted into an axiom ai, where each ai is an element of the set A = {a1, a2, ..., an}. The premise pi and corresponding hypothesis hi, is converted into a theorem for proving pi \u2227B \u2192hi, where B \u2286A. A syntax refinement mechanism is subsequently applied to the previously transferred symbolic forms. The theorem prover is implemented as a checker to identify any syntax errors and provide these error details as feedback to an LLM, enabling the LLM to iteratively correct the syntax errors over a fixed number of iterations, denoted by t. With the proof assistant, we can perform automated reasoning in the theorem prover by constructing proof steps. Step 3 initially generate a rough inference to state a preliminary proof strategy in natural language sentences and elicit the facts f \u2208Ei which are non-redundant and essential for entailing the hypothesis hi. Based on this preliminary proof strategy, the LLM is prompted to construct and formalise the proof steps for proving the theorem. In step 5, the theorem prover will verify the constructed theory by attempting to prove the theorem. If it is solvable, we state it as a logically valid explanation. If the prover failed at one of the proof steps, we extract this failed step along with the applied axioms B \u2286A as the external feedback for an LLM. This feedback is used to refine the logical error within B and consequently refine the facts f \u2208Ei, which were previously converted from natural language. 3.1 Autoformalisation In order to formally verify the logical validity of the explanations, we adopted Neo-Davidsonian eventbased semantics and FOL. Neo-Davidsonian Event Semantics Preventing the loss of semantic information during the repretheorem hypothesis: (* Premise: A smiling woman is playing the violin in front of a turquoise background. *) assumes asm: \"Woman x \u2227Violin y \u2227Background z \u2227Turquoise z \u2227Smiling x \u2227Playing e \u2227Agent e x \u2227Patient e y \u2227InFrontOf x z\" (* Hypothesis: A woman is playing an instrument. *) shows \"\u2203x y e. 
Woman x \u2227Instrument y \u2227Playing e \u2227Agent e x \u2227Patient e y\" Figure 2: An example of representing the premise and hypothesis sentences in Isabelle theorem proof from asm have \"Woman x \u2227Violin y \u2227Playing e \u2227Agent e x \u2227Patient e y\" by blast then have \"Woman x \u2227Instrument y \u2227Playing e \u2227Agent e x \u2227Patient e y\" using explanation_1 by blast then show ?thesis using asm by blast qed Figure 3: An example of proof constructed by the Isabelle/HOL proof assistant to verify the hypothesis. sentation of natural language sentences in logical forms, such as FOL, poses significant challenges when using LLMs, particularly with long and complex sentences that are crucial for logical reasoning (Olausson et al., 2023). Neo-Davidsonian event semantics (Parsons, 1990) utilises event variables to represent the verb predicates and their corresponding object arguments as semantic roles. This approach establishes a predicate-argument structure that preserves the information content and faithfulness of complex sentences, closer to the surface form of the sentence. For example, the sentence \u2018A wolf eating a sheep is an example of a predator hunting prey\u2019 can be formalised as follows: \u2200xye1(wolf(x) \u2227sheep(y) \u2227eating(e1) \u2227agent(e1, x) \u2227patient(e1, y) \u2192 (\u2203e2 predator(x) \u2227prey(y)\u2227 hunting(e2) \u2227agent(e2, x)\u2227 patient(e2, y) \u2227example(e1, e2))) (1) In 1, the verbs are represented as the events \u2018eating\u2019 and \u2018hunting,\u2019 where the agent and patient arguments correspond to the entities performing and receiving the actions within these events, respectively. The logical form example(e1, e2) explicitly captures the semantic meaning of this sentence: the event of a wolf eating a sheep as an exemplar of a predator hunting prey. Similarly, whenever there are no action verbs involved in a sentence, we utilise FOL to represent the static or descriptive aspects. For instance: \u2200x(gravity(x) \u2192force(x)) (2) \u2200xy(greater(x, y) \u2192larger(x, y)) (3) The above logical forms correspond to the sentences \u2018gravity is a kind of force\u2019 and \u2018greater means larger\u2019, respectively. Isabelle Theory Construction For the Isabelle theorem prover, a theory script is essential to facilitate the proof of a theorem. Therefore, we designate explanatory sentences as axioms: (* Explanation 1: A violin is an instrument. *) axiomatization where explanation_1: \"\u2200x. Violin x \u2212 \u2192Instrument x\" Additionally, as illustrated in Figure 2, both the premises and hypothesis are defined as constituting parts of the theorem. The \u2018assumes asm\u2019 clause generally comprises unquantified, specific propositions or conjunctions of propositions, which are recognised as known truths. The \u2018show\u2019 clause denotes the conclusion (hypothesis) for which we seek to establish proof through logical deductions based on the assumed propositions and axioms. Syntax Error Refiner Recent studies (Gou et al., 2024; Olausson et al., 2023) have revealed persistent syntax errors when prompting LLMs for code and symbolic form generation tasks. The method proposed by Pan et al. (2023a) utilises error messages from symbolic solvers to iteratively refine LLM outputs. Following a similar approach, we categorised the syntax errors into two distinct subdomains based on feedback from Isabelle: type unification errors and other syntax errors. 
Type unification errors primarily arise from mismatches between declared and actual argument types in logical clauses. Other syntax errors typically involve missing brackets, undefined entity names, or invalid logical symbols. Our process involves using Isabelle to identify syntax errors in the transferred theory, extracting these error messages, and then prompting the LLM with these messages along with few-shot examples. This guides the model on how to correct each type of syntax error over a series of iterations, allowing for continuous verification and refinement. Details of the autoformalisation prompts are described in Appendix A.4.1. 3.2 Proof Construction A proof provides a detailed, step-by-step strategy that elucidates the logical connections and unification among axioms to support the reasoning process aimed at achieving the solver\u2019s goal. Initially, we prompt the LLM to create a preliminary proof to assess how it infers the hypothesis and to identify which explanatory sentences are relevant, redundant, or unrelated. Based on this initial proof, we then guide the LLM to develop an Isabelle proof (figure 3) that utilise the Isabelle/HOL proof assistant to clearly demonstrate the explanatory sentences (axioms) required to prove the hypothesis. Details of proof construction process prompts are described in Appendix A.4.2. 3.3 Verify and Refine Finally, the constructed theory, which includes axioms, theorems, and proof steps, is submitted to the theorem prover for verification. If the theory is validated, it outputs a logically sound explanation. If the proof fails or timeouts, we extract the first error from the solver\u2019s error message, identify the corresponding proof step, and locate the related explanatory sentences (axioms) from the theory. We begin by removing redundant and irrelevant facts that are not present in the preceding Isabelle proof steps or are declared as such in the text inference strategy. Then, we prompt the LLM to refine the explanatory sentences by providing it with the error message, the failed proof step, the associated proof strategy, and the relevant explanatory sentences for further iteration. This process is iterative and progressive; with each iteration, the framework addresses one or more logical errors, continually refining the explanatory sentences to ultimately yield a logically valid and verifiable explanation. Additional details on the prompts used for refinement are described in Appendix A.4.3. 4 Empirical Evaluation To assess the effectiveness of integrating LLMs with external feedback for refinement purposes, we evaluated the Explanation-Refiner on two NLI tasks: textual entailment and multiple-choice question answering. For textual entailment, given a premise pi and an explanatory sentence as explanation Ei, the goal is to determine whether pi and Ei together entail a hypothesis hi. In the multiplechoice question answering task, there is a question q accompanied by a set of candidate answers C = {c1, c2, ..., cn}, with ci identified as the correct answer. We convert q and the correct answer ci as the hypothesis hi. Explanatory facts serve as evidence supporting ci as the correct answer and are denoted as Ei, with the question\u2019s context sentence as the premise pi. 
4.1 Datasets We adopted three different NLI datasets for evaluation: e-SNLI (Camburu et al., 2018), QASC (Khot et al., 2019), and WorldTree (Jansen et al., 2018), using a total of 300 samples selected via the sampling strategy defined in (Valentino et al., 2021), which maximises representativeness and mutual exclusivity across syntactic and semantic features expressed in the datasets. e-SNLI is a crowd-sourced dataset typically used as a benchmark for textual entailment. It comprises one premise sentence, one explanatory sentence, and one hypothesis sentence per sample. QASC and WorldTree, on the other hand, are datasets designed for multiple-choice question ans wering within the scientific domain. QASC includes two explanatory sentences for each correct answer. WorldTree is the most complex of the three, featuring between 1 and 16 explanatory sentences in each sample. 4.2 Theorem Prover We adopted Isabelle (Paulson, 1994) as our theorem prover and Isabelle/HOL (Nipkow et al., 2002) as the proof assistant. To integrate this theorem prover as a real-time verification tool with LLMs, we employ a Python client (Shminke, 2022) as TCP (Transmission Control Protocol) client to configure Isabelle as a server. This enables the communication of the constructed theory files and the extraction of the response messages from Isabelle. Llama2-70b Mixtral-8x7b Mistral-SmallGPT-3.5-T urbo GPT-4 0 20 40 60 80 100 0.0 16.0 22.0 19.0 36.0 7.0 32.0 36.0 55.0 84.0 6.0 3.28 1.58 2.93 1.96 e-SNLI Initially Valid Explanations Finally Valid Explanations Number of Iterations (a) Llama2-70b Mixtral-8x7b Mistral-SmallGPT-3.5-T urbo GPT-4 0 20 40 60 80 100 3.0 3.0 0.0 6.0 12.0 25.0 12.0 15.0 44.0 55.0 3.6 3.58 4.8 4.11 3.55 QASC Initially Valid Explanations Finally Valid Explanations Number of Iterations (b) Llama2-70b Mixtral-8x7b Mistral-SmallGPT-3.5-T urbo GPT-4 0 20 40 60 80 100 0.0 1.0 0.0 0.0 2.0 3.0 6.0 5.0 29.0 37.0 3.33 5.5 4.6 5.38 4.41 WorldTree Initially Valid Explanations Finally Valid Explanations Number of Iterations (c) Figure 4: The initial and final number of logically valid explanations, along with the average iteration times required to refine an explanation for each LLM 0 2 4 6 8 10 Iteration Times 0 20 40 60 80 100 Number of Refined Explanations e-SNLI mixtral-8x7b mistral-small gpt-3.5-turbo gpt-4 llama2_70b (a) 0 2 4 6 8 10 Iteration Times 0 20 40 60 80 100 Number of Refined Explanations QASC mixtral-8x7b mistral-small gpt-3.5-turbo gpt-4 llama2_70b (b) 0 2 4 6 8 10 Iteration Times 0 20 40 60 80 100 Number of Refined Explanations WorldTree mixtral-8x7b mistral-small gpt-3.5-turbo gpt-4 llama2_70b (c) Figure 5: Number of successfully refined explanations at each iteration step. 4.3 Models We conducted experiments using five LLMs within the proposed framework. The models include two open-sourced models: Llama2-70b (Touvron et al., 2023) and Mixtral-8x7b (Jiang et al., 2024a), as well as Mistral-Small (mistral-small-latest) (Mistral AI, 2024), GPT-Turbo-3.5 (gpt-3.5-turbo) (Brown et al., 2020), and GPT-4 (gpt-4-0613) (OpenAI, 2023). Temperature settings were adjusted to 0 for GPT-Turbo-3.5 and GPT-4, and to 0.01 for Llama2-70b, Mixtral-8x7b, and Mistral-Small, aiming to achieve both determinism in the output and effective code generation for theorem prover. 4.4 Results Providing feedback on the exact failure step from an external theorem prover effectively guides LLMs in continuously verifying and refining explanations in NLI tasks. 
To assess the effectiveness of employing an external theorem prover to verify and refine explanations in NLI tasks, we conducted a comparative analysis across various LLMs (Figure 4). The initially valid explanations represent the percentage of explanations that can be verified as logically valid without any further iteration. Although the initial verification results varied among different models, all LLMs demonstrated a consistent improvement in refining the logical validity of the explanations. This process highlights the positive impact of the external feedback but also shows significant differences between models. We found that lower rates of initial valid explanations often resulted from syntactic errors, which impeded the theorem prover\u2019s ability to generate proofs. Despite this initial variability, all models demonstrate a consistent improvement in the refinement process across the datasets. Notably, GPT-4 outperformed other models, improving the validity of explanations by 48%, 43%, and 35% across the three datasets, respectively, within a maximum number of ten iterations (Figure 4). Figure 5 shows the number of explanations refined at each iteration across the e-SNLI, QASC, and WorldTree datasets. On average, we found that an increasing number of iterations leads to increasing refinement, with models requiring an average of five iterations across the datasets. Explanation length/complexity impacts formalisation. The e-SNLI dataset, which includes only a single explanatory sentence per example, shows the best overall performance. In contrast, the multiple-choice question answering datasets, QASC and WorldTree, exhibit comparatively lower Llama2-70b Mixtral-8x7b Mistral-Small GPT-3.5-T urbo GPT-4 0 20 40 60 80 100 Avg. Number of Theories Contain Syntax Errors 75.18 58.55 47.0 33.27 7.82 64.55 31.82 23.0 17.64 2.45 e-SNLI (a) Llama2-70b Mixtral-8x7b Mistral-Small GPT-3.5-T urbo GPT-4 0 20 40 60 80 100 Avg. Number of Theories Contain Syntax Errors 50.18 54.45 50.36 46.1 20.27 25.64 41.45 38.27 22.18 7.64 QASC (b) Llama2-70b Mixtral-8x7b Mistral-Small GPT-3.5-T urbo GPT-4 0 20 40 60 80 100 Avg. Number of Theories Contain Syntax Errors 51.27 68.45 63.63 61.73 22.91 41.27 53.72 54.27 35.64 10.27 WorldTree (c) Figure 6: The average number of theories containing syntactic errors before and after the syntax refinement process Llama2-70b Mixtral-8x7b Mistral-SmallGPT-3.5-T urbo GPT-4 0 20 40 60 80 100 Number of Refined Explanations 7 32 36 55 84 65 67 69 74 84 e-SNLI TI+AF(Base model) TI+AF(GPT-4) (a) Llama2-70b Mixtral-8x7b Mistral-SmallGPT-3.5-T urbo GPT-4 0 20 40 60 80 100 Number of Refined Explanations 25 12 15 44 55 42 44 45 48 55 QASC TI+AF(Base model) TI+AF(GPT-4) (b) Llama2-70b Mixtral-8x7b Mistral-SmallGPT-3.5-T urbo GPT-4 0 20 40 60 80 100 Number of Refined Explanations 3 6 5 29 37 26 28 31 34 37 WorldTree TI+AF(Base model) TI+AF(GPT-4) (c) Figure 7: AF represents the autoformalisation components, and TI represents the textual inference components. TI+AF (Base Model) indicates the use of the base model for both the autoformalisation and textual inference components. TI+AF (GPT-4) indicates the use of GPT-4 for the autoformalisation components, while the base model is used for textual inference. performance. QASC typically contains 2 explanatory sentences, while WorldTree ranges from 1 to 16 sentences. As the number of explanatory sentences increases, so does the complexity of the logical reasoning required. 
The WorldTree dataset, in particular, poses the greatest challenge due to its demand for multi-hop inference strategies. Models show lower refining performance in WorldTree when compared to e-SNLI and QASC, with only 3%, 5%, and 6% of Llama-70b, Mixtral-8x7b, and Mistral-Small explanations being refined in WorldTree. Meanwhile, 29% and 37% of explanations are refined by GPT-3.5-Turbo and GPT-4 in WorldTree, respectively. This process involves synthesising multiple explanatory sentences to fulfill sub-goals, which must then be integrated to meet the overall hypothesis goal. Iterative and categorical refinement can monotonically reduce syntax errors in responses generated by LLMs. To evaluate the syntax error refinement stage, we quantified the presence of syntax errors in the Isabelle theories both before and after the iterative refinement process. After a maximum of three iterations, all models showed significant reductions, with maximum reductions of 68.67%, 62.31%, and 55.17% from 7.82 to 2.45, 20.27 to 7.64, and 22.91 to 10.27 across the three respective datasets (see Figure 6). While models like Llama2-70b and Mixtral-8x7b still exhibit some syntax errors in the refined theories\u2019 code, this is primarily due to their inability to perform complex autoformalisation, especially for multiple and more complex explanatory sentences such as those in the WorldTree dataset. This result is consistent with the percentage of explanations that were successfully refined across the models, which suggests that the autoformalisation process plays a critical role in the models\u2019 logical reasoning capability. 4.5 Ablation Study We conducted an ablation study to further evaluate and disentangle the impact of autoformalisation on performance. To this end, we adopted GPT-4 exclusively for the autoformalisation component, while retaining the original models for explanation refinement and proof strategy generation. As shown in Figure 7, integrating GPT-4 for autoformalisation led to a significant increase in the number of explanations successfully refined across all models. For instance, Llama2-70b with GPT-4 as the formalisation component refined explanations from 7% to 65% in the e-SNLI dataset. For the multiplechoice question answering dataset, GPT-3.5-Turbo showed a relatively smaller increase from 44% to 0 2 4 6 8 10 12 14 T otal Suggested Proof Steps 0 1 2 3 4 5 6 7 8 Avg. Processed Proof Steps Refined e-SNLI mixtral-8x7b mistral-small gpt-3.5-turbo gpt-4 llama2_70b (a) 0 2 4 6 8 10 12 14 T otal Suggested Proof Steps 0 1 2 3 4 5 6 7 8 Avg. Processed Proof Steps Refined QASC mixtral-8x7b mistral-small gpt-3.5-turbo gpt-4 llama2_70b (b) 0 2 4 6 8 10 12 14 T otal Suggested Proof Steps 0 1 2 3 4 5 6 7 8 Avg. Processed Proof Steps Refined WorldTree mixtral-8x7b mistral-small gpt-3.5-turbo gpt-4 llama2_70b (c) 0 2 4 6 8 10 12 14 T otal Suggested Proof Steps 0 1 2 3 4 5 6 7 8 Avg. Processed Proof Steps Unrefined e-SNLI mixtral-8x7b mistral-small gpt-3.5-turbo gpt-4 llama2_70b (d) 0 2 4 6 8 10 12 14 T otal Suggested Proof Steps 0 1 2 3 4 5 6 7 8 Avg. Processed Proof Steps Unrefined QASC mixtral-8x7b mistral-small gpt-3.5-turbo gpt-4 llama2_70b (e) 0 2 4 6 8 10 12 14 T otal Suggested Proof Steps 0 1 2 3 4 5 6 7 8 Avg. Processed Proof Steps Unrefined WorldTree mixtral-8x7b mistral-small gpt-3.5-turbo gpt-4 llama2_70b (f) Figure 8: Average of proof steps processed by the proof assistant against the total proof steps suggested by the LLMs in refined and unrefined explanations. 48% and from 29% to 34%. 
Despite these improvements, a performance gap persists between GPT-4 and the other models, which is attributed to GPT-4\u2019s superior symbolic reasoning capabilities required for explanation refinement from the identified logical errors. Explanations are progressively made more complete and consistent through iterative symbolic refinement. In order to deliver step-wise logical consistency, explanations need to be made complete and self-contained, leading to the introduction of additional explanatory sentences, leading to an increase in the total number of suggested proof steps. Therefore, we further evaluated how the proof steps vary when the total number of suggested proof steps increases contrasting both refined and unrefined cases. Figure 8 illustrates this trend. In general, all models show a positive trend, as the total suggested proof steps increase, the average number of proof steps processed by the proof assistant also increases. Models like Mistral-Small and GPT-3.5-Turbo tend to suggest more proof steps to accomplish the logical goal, which can result in some redundant steps, such as the significant pulse shown in Figure 8c. For unrefined explanations, as shown in Figure 8d, 8e and 8f, the progression is steadier but retains a positive trend, where the models generally suggest more proof steps in response to the additional explanatory sentences introduced to correct a logical error identified from the erroneous step. We also conducted experiments on the relationship of average number of successfully processed explanatory sentences in one proof against total planned explanatory sentences in a suggest proof in appendix A.3. Examples of refined and unrefined explanations can be found in Appendix A.5. 4.6 Factual Errors and Trivial Explanations In addition to evaluating the logical validity of explanations, we also conducted a human evaluation of the refined explanations, considering factual correctness and explanation triviality, for the two bestperforming models (GPT-3.5-Turbo and GPT-4). This evaluation focused on two questions: \u201cAre the refined explanatory sentences factually correct?\u201d and \u201cIs the explanation trivial, merely repeating or paraphrasing the content of the premise and hypothesis to achieve logical validity?\u201d. As illustrated in Figure 9, our findings indicate that all refined explanations in the e-SNLI and WorldTree datasets are consistent with commonsense knowledge. In the QASC dataset, 2.27% and 1.82% of the explanation refined by GPT-3.5-Turbo and GPT4 contain sentences misaligned with true world GPT-3.5-T urbo GPT-4 0 20 40 60 80 100 Percentage of Explanations 100 100 100 100 e-SNLI Factually Correct Not Trivial (a) GPT-3.5-T urbo GPT-4 0 20 40 60 80 100 Percentage of Explanations 97.73 98.18 95.45 98.18 QASC Factually Correct Not Trivial (b) GPT-3.5-T urbo GPT-4 0 20 40 60 80 100 Percentage of Explanations 100.0 100.0 86.21 97.3 WorldTree Factually Correct Not Trivial (c) Figure 9: Human evaluation of refined explanations in terms of factuality and triviality. Factually Correct indicates the percentage of explanation sentences that are correct in terms of commonsense knowledge. Not Trivial states the percentage of explanations that contain only explanatory sentences that repeat or paraphrase the premise and/or the hypothesis. knowledge. We found that the majority of these errors result from over-generalisation, such as the sentence All tetrapods are defined to have four limbs, which inaccurately includes snakes. 
Finally, we found a relatively low number of explanations that repeat or paraphrase the content of premise and hypothesis. This phenomenon is absent in e-SNLI and becomes more evident when the explanatory sentences increase in complexity (i.e., WorldTree), leading models sometimes to generate explanations that do not include any additional information for the entailment to hold. 5 Related Work 5.1 LLMs Self-Refinement from External Feedback Self-refinement of LLMs has demonstrated promising effectiveness in generating faithful and trustworthy responses (Pan et al., 2023b). The use of external feedback to guide LLMs has been extensively studied (Olausson et al., 2024a; Yu et al., 2023; Akyurek et al., 2023). Previous work such as Peng et al. (2023) and Li et al. (2024) have employed facts retrieved from external knowledge bases as sources of feedback, while Paul et al. (2024) developed a critic model to provide feedback for reasoning refinement. Additionally, Nathani et al. (2023) have explored the use of feedback models for automated feedback generation. Various works have also investigated tasks related to code generation (Chen et al., 2023; Olausson et al., 2024b) and the creation of either synthetic or expert-written logical natural language expressions (Olausson et al., 2023). Quan et al. (2024) leverage a differentiable logic reasoner to verify and refine explanations through abductive reasoning , enhancing the logical consistency of explanations in ethical NLI tasks based on the solver\u2019s output. This paper focuses on the automated refinement of natural language sentences created by human annotators. It verifies and refines the logical validity of these human-generated sentences using detailed external feedback, which can identify the exact erroneous steps to effectively refine logical errors in the explanatory sentences. 5.2 Autoformalisation Autoformalisation refers to the process of translating natural language descriptions into symbolic representations. Research in this area has included the formalisation of mathematical proofs (Cunningham et al., 2022; Wu et al., 2022; First et al., 2023), and efforts to transform natural language sentences into logical forms using LLMs (Pan et al., 2023a; Olausson et al., 2023; Jiang et al., 2024b; Dalal et al., 2024). However, contextual information is frequently lost when sentences are translated in these logical frameworks. To mitigate semantic loss during the transformation process, we leverage Neo-Davidsonian event semantics, which aims to maximise the preservation sentence-level content. This representation paradigm can facilitate a more systematic content-preserving translation to logical forms, which is more independent from particular choices of representation schemas. 5.3 Conclusion In this work, we present a novel neuro-symbolic framework that automatically verifies and refines natural language explanations using iterative refinement cycles between LLMs and theorem provers (TPs). We have conducted extensive experiments on both textual entailment and multiplechoice question answering tasks, demonstrating that the proposed method, Explanation-Refiner, effectively enhances the logical validity of such human-annotated explanations. We investigated the model\u2019s performance from simple to complex explanatory/sentence structures and introduced a method to prevent the loss of semantic information in autoformalisation tasks with error correction. 
Furthermore, we investigated the iterative model at each iteration to evaluate how the explanation is refined at each cycle. In future work, we aspire to enhance the framework\u2019s robustness towards complex and unstructured explanations with fewer iterations required to improve the model\u2019s efficiency. 5.4 Limitations While this work have demonstrated significant improvements in terms of enhancing the logical consistency of explanations, the connection between improved logical consistency and AI safety still needs further investigation. While the concept of using formal solvers in conjunction with LLMs delivers a promise avenue to improve the consistency of reasoning within LLMs, these methodologies needs to be further developed and critically assessed as a mechanism which can provide guarantees of correctness, consistency and completeness within critical application domains. Acknowledgments This work was partially funded by the Swiss National Science Foundation (SNSF) project NeuMath (200021_204617), by the EPSRC grant EP/T026995/1, \u201cEnnCore: End-to-End Conceptual Guarding of Neural Architectures\u201d under Security for all in an AI enabled society, by the CRUK National Biomarker Centre, and supported by the Manchester Experimental Cancer Medicine Centre and the NIHR Manchester Biomedical Research Centre."
16
+ }
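The verify-and-refine cycle of Figure 1 in the Explanation-Refiner paper above reduces to a small control loop: formalise the premise, explanatory sentences, and hypothesis into an Isabelle theory, ask the prover to discharge the theorem, and, on failure, hand the first failed proof step back to the LLM as feedback. A minimal sketch of that loop is given below; `formalise`, `prove`, and `refine` stand in for the LLM prompts and the Isabelle server round-trip, and their signatures (and the theory template) are assumptions made for illustration.

```python
from typing import Callable, List, Tuple

def build_theory(axioms: List[str], assumes: str, shows: str, proof: str) -> str:
    """Assemble an Isabelle/HOL theory with one axiom per explanatory sentence."""
    ax_block = "\n".join(
        f'axiomatization where explanation_{i + 1}: "{a}"'
        for i, a in enumerate(axioms)
    )
    return (
        "theory Refined imports Main begin\n"
        f"{ax_block}\n"
        "theorem hypothesis:\n"
        f'  assumes asm: "{assumes}"\n'
        f'  shows "{shows}"\n'
        f"{proof}\n"
        "end\n"
    )

def explanation_refiner(
    premise: str,
    hypothesis: str,
    explanation: List[str],
    formalise: Callable[[List[str], str, str], Tuple[List[str], str, str, str]],
    prove: Callable[[str], Tuple[bool, str]],      # (solved?, feedback on first failed step)
    refine: Callable[[List[str], str], List[str]],
    max_iters: int = 10,
) -> Tuple[List[str], bool]:
    for _ in range(max_iters):
        # Steps 1-4: autoformalisation and proof construction via the LLM
        axioms, assumes, shows, proof = formalise(explanation, premise, hypothesis)
        theory = build_theory(axioms, assumes, shows, proof)
        # Step 5: send the theory to the theorem prover for verification
        solved, feedback = prove(theory)
        if solved:
            return explanation, True               # logically valid explanation
        # Step 6: refine the explanatory sentences from the prover's feedback
        explanation = refine(explanation, feedback)
    return explanation, False
```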
intro_8K/test_introduction_short_2405.01387v1.json ADDED
@@ -0,0 +1,16 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01387v1",
3
+ "title": "Lexicographic Optimization: Algorithms and Stability",
4
+ "abstract": "A lexicographic maximum of a set $X \\subseteq \\mathbb{R}^n$ is a vector in\n$X$ whose smallest component is as large as possible, and subject to that\nrequirement, whose second smallest component is as large as possible, and so on\nfor the third smallest component, etc. Lexicographic maximization has numerous\npractical and theoretical applications, including fair resource allocation,\nanalyzing the implicit regularization of learning algorithms, and\ncharacterizing refinements of game-theoretic equilibria. We prove that a\nminimizer in $X$ of the exponential loss function $L_c(\\mathbf{x}) = \\sum_i\n\\exp(-c x_i)$ converges to a lexicographic maximum of $X$ as $c \\rightarrow\n\\infty$, provided that $X$ is stable in the sense that a well-known iterative\nmethod for finding a lexicographic maximum of $X$ cannot be made to fail simply\nby reducing the required quality of each iterate by an arbitrarily tiny degree.\nOur result holds for both near and exact minimizers of the exponential loss,\nwhile earlier convergence results made much stronger assumptions about the set\n$X$ and only held for the exact minimizer. We are aware of no previous results\nshowing a connection between the iterative method for computing a lexicographic\nmaximum and exponential loss minimization. We show that every convex polytope\nis stable, but that there exist compact, convex sets that are not stable. We\nalso provide the first analysis of the convergence rate of an exponential loss\nminimizer (near or exact) and discover a curious dichotomy: While the two\nsmallest components of the vector converge to the lexicographically maximum\nvalues very quickly (at roughly the rate $\\frac{\\log n}{c}$), all other\ncomponents can converge arbitrarily slowly.",
5
+ "authors": "Jacob Abernethy, Robert E. Schapire, Umar Syed",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "math.OC",
9
+ "cats": [
10
+ "math.OC"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "LLM Fairness",
14
+ "gt": "A lexicographic maximum of a set X \u2286Rn is a vector in X that is at least as large as any other vector in X when sorting their components in non-decreasing order and comparing them lexicographically. For example, if X = \uf8f1 \uf8f2 \uf8f3x1 = \uf8eb \uf8ed 5 2 4 \uf8f6 \uf8f8, x2 = \uf8eb \uf8ed 2 6 3 \uf8f6 \uf8f8, x3 = \uf8eb \uf8ed 8 7 1 \uf8f6 \uf8f8 \uf8fc \uf8fd \uf8fe then the lexicographic maximum of X is x1, since both x1 and x2 have the largest smallest components, and x1 has a larger second smallest component than x2. In\ufb01nite and unbounded sets can also contain a lexicographic maximum. One of the \ufb01rst applications of lexicographic maximization was fair bandwidth allocation in com- puter networks (Hayden, 1981; Mo and Walrand, 1998; Le Boudec, 2000). Lexicographic maximimiza- tion avoids a \u2018rich-get-richer\u2019 allocation of a scarce resource, since a lexicographic maximum is both Pareto optimal and has the property that no component can be increased without decreasing another smaller component. More recently, Diana et al. (2021) considered lexicographic maximization as an approach to fair regression, where the components of the lexicographic maximum represent the per- formance of a model on di\ufb00erent demographic groups. Bei et al. (2022) studied the \u201ccake sharing\u201d 1 problem in mechanism design and showed that assigning a lexicographically maximum allocation to the agents is a truthful mechanism. Rosset et al. (2004) and Nacson et al. (2019) used lexicographic maximization to analyze the im- plicit regularization of learning algorithms that are based on minimizing an objective function. In particular, they showed for certain objective functions and model classes that the vector of model predictions converges to a lexicographic maximum. In game theory, Dresher (1961) described an equilibrium concept in which each player\u2019s payo\ufb00 vector is a lexicographic maximum of the set of their possible payo\ufb00vectors. Van Damme (1991) showed that in a zero-sum game this concept is equivalent to a proper equilibrium, a well-known re\ufb01nement of a Nash equilibrium. Lexicographic maximization is also used to de\ufb01ne the nucleolus of a cooperative game (Schmeidler, 1969). Two methods for computing a lexicographic maximum have been described in the literature. The \ufb01rst method is an iterative algorithm that solves n optimization problems, with the ith iteration computing the ith smallest component of the lexicographic maximum, and is guaranteed to \ufb01nd the lexicographic maximum if one exists. This method is often called the progressive \ufb01lling algorithm (Bertsekas and Gallager, 2021). The second method minimizes the exponential loss, which is de\ufb01ned Lc(x) = n X i=1 exp(\u2212cxi), (1) where x \u2208Rn and c \u22650. Previous work has shown that a minimizer in X of Lc(x) may converge to a lexicographic maximum of X as c \u2192\u221e, but the conditions placed on X to ensure convergence were quite restrictive. Mo and Walrand (1998), Le Boudec (2000) and Rosset et al. (2004) all assumed that X is a convex polytope, while Nacson et al. (2019) assumed that X is the image of a simplex under a continuous, positive-homogeneous mapping. These results were all based on asymptotic analyses, and did not establish any bounds on the rate of convergence to a lexicographic maximum. As far as we know, no previous work has drawn a connection between the two methods for computing a lexicographic maximum. 
Our contributions: The iterative algorithm described above is guaranteed to \ufb01nd a lexicographic maximum of a set if the optimization problems in each iteration are solved exactly. But if the opti- mization problems are solved only approximately, then the output of the algorithm can be far from a lexicographic maximum, even if the approximation error is arbitrarily small (but still non-zero). We de\ufb01ne a property called lexicographic stability, which holds for a set X \u2286Rn whenever this pathological situation does not occur, and prove that it has an additional powerful implication: Any vector xc \u2208X that is less than exp(\u2212O(c)) from the minimum value of Lc(x) converges to a lexicographic maximum of X as c \u2192\u221e. By proving convergence for all non-pathological sets, we signi\ufb01cantly generalize existing convergence criteria for exponential loss minimization. We show that all convex polytopes are stable, thereby subsuming most of the previous works mentioned above. On the other hand, we show that sets that are convex and compact but not polytopes need not be stable, in general. We also show that our convergence result does not hold generally when stability is not assumed by constructing a set X that is not lexicographically stable, and for which xc is bounded away from a lexicographic maximum of X for all su\ufb03ciently large c, even if xc is an exact minimizer of the exponential loss. We also study the rate at which xc approaches a lexicographic maximum, and \ufb01nd a stark discrep- ancy for di\ufb00erent components of xc. The smallest and second smallest components of xc are never more than O \u0010 log n c \u0011 below their lexicographically maximum values, even if X is not lexicographically stable. However, all other components of xc can remain far below their lexicographically maximum values for arbitrarily large c, even if xc is an exact minimizer of the exponential loss and X is lexicographically stable with a seemingly benign structure (it can be a single line segment). Finally, we prove that the multiplicative weights algorithm (Littlestone and Warmuth, 1994; Freund and Schapire, 1999) is guaranteed to converge to the lexicographic maximum of a convex polytope, essentially be- 2 cause it minimizes the exponential loss. While Syed (2010) proved that the multiplicative weights algorithm diverges from the lexicographic maximum when the learning rate is constant, we guarantee convergence by setting the learning rate to O(1/t) in each iteration t. Additional related work: Lexicographic maximization is often applied to multi-objective opti- mization problems where, for a given function f : \u0398 \u2192Rn, the goal is to \ufb01nd \u03b8\u2217\u2208\u0398 such that f(\u03b8\u2217) is a lexicographic maximum of the set X = {f(\u03b8) : \u03b8 \u2208\u0398} (Luss and Smith, 1986). As in our work, this approach sorts the components of each vector before performing a lexicographic com- parison, which contrasts with other work in which an ordering of the dimensions is \ufb01xed in advance (Sherali and Soyster, 1983; Mart\u00b4 \u0131nez-Legaz and Singer, 1987). Diana et al. (2021) and Henzinger et al. (2022) critiqued the lexicographic maximum as a fairness solution concept because of its potential for instability when subject to small perturbations. They gave examples demonstrating that instability can occur, but did not relate the instability to the problem of computing a lexicographic maximum via loss minimization. 
The primary objective of our work is to provide a much fuller characterization of this instability and to explore its implications. In the analytical framework used by Rosset et al. (2004) and Nacson et al. (2019), larger values of c for the exponential loss Lc(x) correspond to weaker explicit regularization by a learning algorithm, so understanding the behavior of the minimizer of Lc(x) as c \u2192\u221ehelps to characterize the algorithm\u2019s implicit regularization. Hartman et al. (2023) appear to contradict one of our key negative results by showing that if the iterative algorithm uses an approximate solver in each iteration then its output will always be close to a lexicographic maximum, provided that the approximation error is su\ufb03ciently small (see their Theorem 9). However, this apparent discrepancy with our results is in fact due only to us using a di\ufb00erent and incompatible de\ufb01nition of \u201ccloseness.\u201d See Appendix B for a discussion.",
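To make the two objects contrasted above concrete, namely the sorted lexicographic comparison and the exponential loss L_c(x) = sum_i exp(-c x_i), the following is a minimal Python sketch of our own (not code from the paper) that reproduces the worked example X = {x1, x2, x3}: over a finite set, both the direct sorted comparison and, for a reasonably large c, minimization of the exponential loss select x1.

```python
# Illustrative sketch (not the paper's code): lexicographic maximum of a finite set
# via sorted-tuple comparison, and the exponential loss L_c(x) = sum_i exp(-c * x_i).
import math

def lex_key(x):
    """Sort components in non-decreasing order; Python compares tuples lexicographically."""
    return tuple(sorted(x))

def lexmax(X):
    """A lexicographic maximum of a finite set of vectors."""
    return max(X, key=lex_key)

def exp_loss(x, c):
    return sum(math.exp(-c * xi) for xi in x)

# The example from the text.
X = [(5, 2, 4), (2, 6, 3), (8, 7, 1)]
print(lexmax(X))                                   # (5, 2, 4), i.e. x1
print(min(X, key=lambda x: exp_loss(x, c=10.0)))   # (5, 2, 4) as well, for large c
```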
15
+ "main_content": "For any non-negative integer n let [n] := {1, . . . , n}, and note that [0] is the empty set. Let R\u22650 := {x \u2208R : x \u22650} be the non-negative reals. Let xi be the ith component of the vector x \u2208Rn. Let \u2225X\u2225\u221e:= supx\u2208X maxi |xi| be the largest \u2113\u221enorm of any vector in X \u2286Rn. For each i \u2208[n] let \u03c3i : Rn \u2192R be the ith sorting function, such that \u03c3i(x) is the ith smallest component of x \u2208Rn. For example, if x = (2, 1, 2)\u22a4then \u03c31(x) = 1, \u03c32(x) = 2 and \u03c33(x) = 2. We define a total order \u2265\u03c3 on vectors in Rn as follows: for any points x, x\u2032 \u2208Rn we say that x \u2265\u03c3 x\u2032 if and only if x = x\u2032 or \u03c3i(x) > \u03c3i(x\u2032) for the smallest i \u2208[n] such that \u03c3i(x) \u0338= \u03c3i(x\u2032). Definition 1. A lexicographic maximum of a set X \u2286Rn is a vector x\u2217\u2208X for which x\u2217\u2265\u03c3 x for every x \u2208X. Let lexmax X be the set of all lexicographic maxima of X. While lexmax X can be empty, this can only occur if X is empty or not compact (see Theorem 1). For notation, we always write x\u2217to denote a lexicographic maximum of X. Also, for all c, \u03b3 \u22650 we write xc,\u03b3 to denote an arbitrary vector in X that satisfies Lc(xc,\u03b3) \u2264inf x\u2208X Lc(x) + \u03b3 exp(\u2212c \u2225X\u2225\u221e). In other words, xc,\u03b3 is a near minimizer in X of the exponential loss if \u03b3 is small, and xc,0 is an exact minimizer. The notation x\u2217and xc,\u03b3 suppresses the dependence on X, which will be clear from context. While x\u2217and xc,0 do not exist in every set X, we are only interested in cases where they do, and implicitly make this assumption throughout our analysis. An exception is when we construct a set X to be a counterexample. In these cases we explicitly prove that x\u2217and xc,0 exist. Also, a set may contain multiple vectors that satisfy the definitions of x\u2217or xc,\u03b3, and our results apply no matter how they are chosen. Our goal is to characterize when xc,\u03b3 is \u201cclose\u201d to x\u2217, where we use the following definition of closeness. 3 De\ufb01nition 2. The lexicographic distortion between x, x\u2032 \u2208Rn with respect to I \u2286[n] is dI(x | x\u2032) \u225cmax k\u2208I [max{0, \u03c3k(x) \u2212\u03c3k(x\u2032)}]. If I = [n] we abbreviate d(x | x\u2032) \u225cdI(x | x\u2032). Lexicographic distortion is useful for quantifying the closeness of xc,\u03b3 to x\u2217because d(x\u2217| xc,\u03b3) = 0 if and only if xc,\u03b3 \u2208lexmax X. It is important to note, however, that dI(\u00b7 | \u00b7) is not a symmetric function, and most typically dI(x | x\u2032) \u0338= dI(x\u2032 | x) Problem statement: We want to describe conditions on X \u2286Rn, I \u2286[n], c \u22650 and \u03b3 \u22650 which ensure that dI(x\u2217| xc,\u03b3) is almost equal to zero. We are particularly interested in cases where I = [n], since this implies that xc,\u03b3 is close to a lexicographic maximum. Also, we want to identify counterexamples where dI(x\u2217| xc,\u03b3) is far from zero. Since d(x\u2217| xc,\u03b3) \u2265dI(x\u2217| xc,\u03b3) for all I \u2286[n], this implies that xc,\u03b3 is far from a lexicographic maximum. 3 Basic properties We prove several basic properties of lexicographic maxima that will be useful in our subsequent analysis. 
We \ufb01rst show that conditions which su\ufb03ce to ensure that a subset of R contains a maximum also ensure that a subset of Rn contains a lexicographic maximum. Theorem 1. If X \u2286Rn is non-empty and compact then lexmax X is non-empty. Proof. De\ufb01ne X0, . . . , Xn and \u03a31, . . . , \u03a3n recursively as follows: Let X0 = X, \u03a3i = {\u03c3i(x) : x \u2208Xi\u22121} and Xi = {x \u2208Xi\u22121 : \u03c3i(x) \u2265sup \u03a3i}. We will prove by induction that each Xi is non-empty and compact, which holds for X0 by assumption. Since Xn = lexmax X this will complete the proof. By Theorem 11 in Appendix A.1, each sorting function \u03c3i is continuous. If Xi\u22121 is non-empty and compact then \u03a3i is non-empty and compact, since it is the image of a compact set under a continuous function. Therefore sup \u03a3i \u2208\u03a3i. If Xi\u22121 is non-empty and compact and sup \u03a3i \u2208\u03a3i then Xi is non-empty and closed, since it is the pre-image of a compact set under a continuous function. Also, Xi is bounded, since Xi \u2286Xi\u22121, and therefore Xi is compact. Furthermore, if X is also convex, we are assured that the lexicographic maximum of X is unique. Theorem 2. If X \u2286Rn is non-empty, compact and convex then | lexmax X| = 1. Proof. Suppose X is nonempty, compact and convex. By Theorem 1, | lexmax X| \u22651, so it only remains to show that | lexmax X| \u22641. Suppose, towards a contradiction, there exist distinct points x, y \u2208X that are both lexicographic maxima. We assume without loss of generality that the coordinates of x are sorted in nondecreasing order and, further, that on any segment of \u201cties\u201d on the sorted x, the corresponding segment in y is nondecreasing. Thus, for i, j \u2208[n], if i \u2264j then xi \u2264xj, and in addition, if xi = xj then yi \u2264yj. Since both x and y are lexicographic maxima, it follows that \u03c3i(y) = \u03c3i(x) = xi for i \u2208[n]. Let k be the smallest index on which x and y di\ufb00er (so that xi = yi for i < k and xk \u0338= yk). Let I = [k \u22121]. Then \u03c3i(y) = \u03c3i(x) = xi = yi for i \u2208I. Therefore, \u03c3k(y) is the smallest of the remaining components of y, implying yk \u2265min{yk, . . . , yn} = \u03c3k(y) = \u03c3k(x) = xk, (2) and so that yk > xk. Let z = (x+ y)/2, which is in X since X is convex. We consider the components of z relative to xk. Let i \u2208[n]. If i < k then zi = xi \u2264xk (since xi = yi). If i = k then zk = (yk + xk)/2 > xk since yk > xk. 4 Finally, suppose i > k, implying xi \u2265xk. If xi > xk then zi = (yi + xi)/2 > xk since yi \u2265xk (by Eq. (2)). Otherwise, xi = xk, implying, by how the components are sorted, that yi \u2265yk > xk; thus, again, zi = (yi + xi)/2 > xk. To summarize, zi = yi = xi \u2264xk if i \u2208I, and zi > xk if i \u0338\u2208I. It follows that \u03c3i(z) = \u03c3i(x) = xi for i = 1, . . . , k \u22121, and that \u03c3k(z) = min{zk, . . . , zn} > xk = \u03c3k(x). However, this contradicts that x is a lexicographic maximum. 4 Computing a lexicographic maximum Algorithm 1 below is an iterative procedure for computing a lexicographic maximum of a set X \u2286Rn. In each iteration k, Algorithm 1 \ufb01nds a vector in X with the (approximately) largest kth smallest component, subject to the constraint that its k \u22121 smallest components are at least as large as the k \u22121 smallest components of the vector from the previous iteration. The quality of the approximation is governed by a tolerance parameter \u03b5. 
Since the optimization problem in each iteration can have multiple solutions, the output of Algorithm 1 should be thought of as being selected arbitrarily from a set of possible outputs. We write A(X, \u03b5) for the set of all possible outputs when Algorithm 1 is run on input (X, \u03b5). Algorithm 1 and close variants have been described many times in the literature. In general, the optimization problem in each iteration can be di\ufb03cult to solve, especially due to the presence of sorting functions in the constraints. Consequently, much previous work has focused on tractable special cases where the optimization problem can be reformulated as an equivalent linear or convex program (Luss, 1999; Miltersen and S\u00f8rensen, 2006). Other authors have used Algorithm 1 with tolerance parameter \u03b5 = 0 as the de\ufb01nition of a lexicographic maximum (Van Damme, 1991; Nacson et al., 2019; Diana et al., 2021). Theorem 3 explains the relationship between De\ufb01nition 1 and Algorithm 1 when \u03b5 = 0. Algorithm 1: Compute a lexicographic maximum. Input: Set X \u2286Rn, tolerance \u03b5 \u22650. for k = 1, . . . , n do Find an \u03b5-optimal solution x(k) to the optimization problem: max x\u2208X \u03c3k(x) subject to \u03c3i(x) \u2265\u03c3i(x(k\u22121)) for all i \u2208[k \u22121] end Return: x(n). Theorem 3. Let X \u2286Rn. A vector \u02c6 x \u2208Rn is a possible output of Algorithm 1 on input (X, 0) if and only if \u02c6 x \u2208lexmax X. That is, A(X, 0) = lexmax X. Proof. Let x\u2217\u2208lexmax X. We prove by induction that in each iteration k of Algorithm 1 we have \u03c3i(x(k)) = \u03c3i(x\u2217) for all i \u2208[k]. Setting k = n proves the theorem. For the base case k = 1, observe that the algorithm assigns x(1) \u2208arg maxx\u2208X \u03c31(x). Therefore \u03c31(x(1)) = \u03c31(x\u2217), by the de\ufb01nition of x\u2217. In each iteration k > 1 the algorithm assigns x(k) \u2208arg max x\u2208X \u03c3k(x) subject to \u03c3i(x) \u2265\u03c3i(x\u2217) for all i \u2208[k \u22121], where the constraints are implied by the inductive hypothesis. Therefore \u03c3i(x(k)) = \u03c3i(x\u2217) for all i \u2208[k], again by the de\ufb01nition of x\u2217. 5 While Algorithm 1 can only \ufb01nd a lexicographic maximum if one exists in X, we recall from Theorem 1 that this holds whenever X is non-empty and compact. Diana et al. (2021) and Henzinger et al. (2022) proposed A(X, \u03b5) (or minor variants thereof) as the de\ufb01nition of the \u03b5-approximate lexicographic maxima of X. However, they observed that A(X, \u03b5) may nonetheless contain vectors that are far from any lexicographic maximum, even if \u03b5 is very small (but still non-zero). In the next section we formally characterize this phenomenon, and in the rest of the paper we explore its implications. 5 Lexicographic stability Theorem 3 states that Algorithm 1 outputs a lexicographic maximum (assuming one exists) if the optimization problem in each iteration of the algorithm is solved exactly. In practice, the optimization problems will be solved by a numerical method up to some tolerance \u03b5 > 0, with smaller values of \u03b5 typically requiring longer running times. Ideally, we would like the quality of the output of Algorithm 1 to vary smoothly with \u03b5, and if this happens for a set X then we say that X is lexicographically stable. De\ufb01nition 3. 
A set X \u2286Rn is lexicographically stable if for all \u03b4 > 0 there exists \u03b5 > 0 such that for all \u02c6 x \u2208A(X, \u03b5) and x\u2217\u2208lexmax X we have d(x\u2217| \u02c6 x) < \u03b4. De\ufb01nition 3 says that if a set X is lexicographically stable, then in order to \ufb01nd an arbitrarily good approximation of its lexicographic maximum, it su\ufb03ces to run Algorithm 1 with a su\ufb03ciently small tolerance parameter \u03b5 > 0. On the other hand, if X is not lexicographically stable, then no matter how small we make \u03b5 > 0 the output of Algorithm 1 can be far from a lexicographic maximum. A su\ufb03cient (but not necessary) condition for a set X \u2286Rn to be lexicographically stable is that X is a convex polytope (i.e., the convex hull of \ufb01nitely many points in Rn). Theorem 4. If X \u2286Rn is a convex polytope then X is lexicographically stable. Proof sketch. We outline the main steps of the proof. Detailed justi\ufb01cation for these steps is given in Appendix A.2. We suppose X is a convex polytope and that x\u2217is its unique lexicographic maximum. On each round k, Algorithm 1 approximately solves an optimization problem whose value is given by some function Hk(x(k\u22121)) so that x(k) must satisfy \u03c3k(x(k)) \u2265Hk(x(k\u22121)) \u2212\u03b5. In fact, we show that a point x is a possible output on (X, \u03b5) if and only if it satis\ufb01es \u03c3k(x) \u2265Hk(x) \u2212\u03b5 for all k \u2208[n]. This function Hk can be lower-bounded by another function GI, which, if X is a convex polytope, is concave and lower semicontinuous. If X is not lexicographically stable, then there exists sequences xt and \u03b5t > 0 with \u03b5t \u21920 and xt \u2208A(X, \u03b5t) such that xt \u2192\u02c6 x for some \u02c6 x \u2208X but \u02c6 x \u0338= x\u2217. We can re-index points in X so that the components of \u02c6 x are sorted, and furthermore, that when there are ties, they are additionally sorted according to the components of x\u2217. Let k be the \ufb01rst component for which \u02c6 xk \u0338= x\u2217 k. We show this implies GI(\u02c6 x) > \u02c6 xk. Combining facts, we then have \u02c6 xk < GI(\u02c6 x) \u2264lim inf t\u2192\u221eGI(xt) \u2264lim inf t\u2192\u221eHk(xt) \u2264lim inf t\u2192\u221e[\u03c3k(xt) + \u03b5t] = \u02c6 xk, a contradiction. Combining Theorem 4 with Theorem 5 below recovers results due to Mo and Walrand (1998), Le Boudec (2000) and Rosset et al. (2004) which show that an exponential loss minimizer in X converges to a lexicographic maximum of X when X is a convex polytope. Fairly simple non-convex sets that contain a lexicographic maximum but are not lexicographically stable exist. We provide an example in Theorem 6, one that also shows how, in such cases, minimizing exponential loss might not lead to a lexicographic maximum. 6 Theorem 4 shows that a nonempty set X is lexicographically stable if it is a convex polytope, a condition that implies that X is also convex and compact. The theorem still holds, by the same proof, with the weaker requirement that X is convex, compact and locally simplicial, a property de\ufb01ned in Rockafellar (1970, page 84). However, the theorem is false, in general, if we only require that X be convex and compact. As an example, in R3, let Z = [\u22121, 0] \u00d7 [0, 1] \u00d7 [0, 1], and let X = \b x \u2208Z : x1(x3 \u22121) \u2265x2 2 \t . (3) This set is compact and convex, and has a unique lexicographic maximum, namely, x\u2217= (0, 0, 1)\u22a4. 
For \u03b5 \u2208(0, 1), let x\u03b5 = \u0000\u2212\u03b52, \u03b5/2, 3/4 \u0001\u22a4. It can be shown that x\u03b5 \u2208A(X, \u03b5). It follows that X is not lexicographically stable since \u03c33(x\u03b5) = 3/4 for all \u03b5 > 0 while \u03c33(x\u2217) = 1. (Details are given in Appendix A.3.) 6 Convergence analysis of exponential loss minimization In this section we study conditions under which a near or exact minimizer xc,\u03b3 \u2208X of the exponential loss Lc(x) converges to a lexicographic maximum x\u2217\u2208lexmax X as c \u2192\u221e. To see why convergence should be expected, note that when c is large, the dominant term of Lc(x) corresponds to the smallest component of x, since the function x 7\u2192exp(\u2212cx) decreases very quickly. Therefore minimizing Lc(x) will tend to make this term as large as possible. Further, among vectors x that maximize their smallest component, the dominant term in Lc(x) corresponds to the second smallest component of x, if we ignore terms that are equal for all vectors. In general, when c is large, the magnitudes of the terms in Lc(x) decrease sharply when they are sorted in increasing order of the components of x, and this situation tends to favor a minimizer of Lc(x) that is also a lexicographic maximum. Although such reasoning is intuitive, proving convergence to a lexicographic maximum can be challenging; indeed, convergence need not hold for every set X, as will be seen shortly. 6.1 Asymptotic results Theorem 5 states our main convergence result: If X is lexicographically stable then xc,\u03b3 converges to a lexicographic maximum x\u2217as c \u2192\u221e, provided that \u03b3 \u2208[0, 1). In other words, in the contrapositive, if a near minimizer in X of Lc(x) fails to converge to a lexicographic maximum as c \u2192\u221e, then Algorithm 1 can fail to \ufb01nd a good approximation of a lexicographic maximum for any tolerance \u03b5 > 0. Theorem 5. Let X \u2286Rn and \u03b3 \u2208[0, 1). If X is lexicographically stable then for all x\u2217\u2208lexmax X lim c\u2192\u221ed(x\u2217| xc,\u03b3) = 0. Theorem 5\u2019s requirement that \u03b3 \u2208[0, 1) ensures that xc,\u03b3 is less than exp(\u2212c \u2225X\u2225\u221e) from the minimum value of Lc(x). To see why this condition aids convergence to a lexicographic maximum, consider that the smallest possible value of any term in Lc(x) is exp(\u2212c \u2225X\u2225\u221e). So if the minimization error were larger than this value then xc,\u03b3 may not make every term of Lc(xc,\u03b3) small, which in turn could cause xc,\u03b3 to be far from a lexicographic maximum. Before proving Theorem 5, we introduce some additional notation and a key lemma. For any X \u2286Rn, x \u2208Rn and k \u2208{0} \u222a[n] let Xk(x) = {x\u2032 \u2208X : \u03c3i(x\u2032) \u2265\u03c3i(x) for all i \u2208[k]} (4) be the set of all vectors in X whose k smallest components are at least as large as the k smallest components of x. Note that X0(x) = X. Also, if x(k\u22121) is the vector selected in iteration k \u22121 7 of Algorithm 1, then Xk\u22121(x(k\u22121)) is the set of feasible solutions to the optimization problem in iteration k of the algorithm. Lemma 1 below, which is key to our convergence results and proved in Appendix A.4, states that if xc,\u03b3 is selected in any iteration of Algorithm 1 then it is a good solution for the next iteration when c is large. Lemma 1. 
For all X \u2286Rn, \u03b3 \u2208[0, 1), c > 0 and k \u2208[n] \u03c3k(xc,\u03b3) \u2265 sup x\u2208Xk\u22121(xc,\u03b3) \u03c3k(x) \u22121 c log \u0012n \u2212k + 1 1 \u2212\u03b3 \u0013 . We are now ready to prove Theorem 5. Proof of Theorem 5. Say that a vector x \u2208Rn is (X, \u03b5)-valid if it is a solution to all of the optimization problems in Algorithm 1 when run on input (X, \u03b5). In other words, if Algorithm 1 is run on input (X, \u03b5), then the algorithm can let x(1) = x, x(2) = x, . . . , x(n) = x. For all c > 0 let \u03b5(c) = 1 c log n 1\u2212\u03b3 . By Lemma 1 we have \u03c3k(xc,\u03b3) \u2265 sup x\u2208Xk\u22121(xc,\u03b3) \u03c3k(x) \u2212\u03b5(c) for all k \u2208[n], which immediately implies that xc,\u03b3 is (X, \u03b5(c))-valid. Let {\u03b4t} be a positive sequence with limt\u2192\u221e\u03b4t = 0. Since \u03b5(c) is a continuous function with range (0, \u221e), by De\ufb01nition 3 for each \u03b4t there exists ct > 0 such that for all \u02c6 x \u2208A(X, \u03b5(ct)) and k \u2208[n] we have \u03b4t \u2265\u03c3k(x\u2217) \u2212\u03c3k(\u02c6 x). Also, since A(X, \u03b5) \u2286A(X, \u03b5\u2032) if \u03b5 < \u03b5\u2032, and \u03b5(c) is a decreasing function, we can arrange {ct} to be an increasing sequence with limt\u2192\u221ect = \u221e. Since xc,\u03b3 is (X, \u03b5(c))-valid we have for all k \u2208[n] \u03b4t \u2265\u03c3k(x\u2217) \u2212\u03c3k(xct,\u03b3). By taking the limit superior of both sides we have for all k \u2208[n] 0 \u2265lim sup t\u2192\u221e[\u03c3k(x\u2217) \u2212\u03c3k(xct,\u03b3)] , and therefore lim t\u2192\u221emax k\u2208[n][max{0, \u03c3k(x\u2217) \u2212\u03c3k(xct,\u03b3)}] = 0, which proves the theorem. The lexicographic stability requirement in Theorem 5 cannot be relaxed without risking nonconvergence. Theorem 6 below constructs a lexicographically unstable set for which the exact exponential loss minimizer is bounded away from the lexicographic maximum. The set is a piecewise linear path consisting of two adjoining line segments that is bounded, closed and connected, but not convex. Theorem 6. For all n \u22658 there exists a set X \u2286Rn consisting of two line segments with a shared endpoint and satisfying \u2225X\u2225\u221e= 1 such that for all x\u2217\u2208lexmax X and c \u22652 d(x\u2217| xc,0) \u2265d{n}(x\u2217| xc,0) \u22651 2. Proof sketch. The full construction and proof are given in Appendix A.5. Brie\ufb02y, the lexicographic maximum x\u2217in X is a vector consisting of 0 in the \ufb01rst n\u22121 components and 1 in the nth component. For all \u03b5 > 0, X also includes a vector x(\u03b5) whose \ufb01rst component is \u2212\u03b5 2, next n \u22122 components are \u03b5 4, and nth component is 1 2. We prove that for all n \u22658 and c \u22652 there exists \u03b5 > 0 such that Lc(x(\u03b5)) < Lc(x\u2217), essentially because when n is su\ufb03ciently large the lower loss on the middle n \u22122 components compensates for the higher loss on the \ufb01rst and last components. Observing that \u03c3n(x(\u03b5)) = \u03c3n(x\u2217) \u22121 2 completes the proof. 8 From Theorem 5 it immediately follows that the set constructed in Theorem 6 is not lexicographically stable. We can also give more direct intuition for why the set is unstable. The set X in Theorem 6 consists of a \u201cgood\u201d and a \u201cbad\u201d line segment, and the unique lexicographic maximum is a point on the \u201cgood\u201d line segment. 
If the optimization problem in the \ufb01rst iteration of Algorithm 1 is solved exactly, then the smallest component of the solution will be equal to 0. In this case, every iteration of the algorithm will output a solution on the \u201cgood\u201d line segment, since only points on that segment have a smallest component that is at least 0. However, if the optimization problem in the \ufb01rst iteration is solved with a tolerance \u03b5 > 0, then the smallest component of the solution can be as small as \u2212\u03b5. In this case, every iteration of the algorithm will output a solution on the \u201cbad\u201d line segment, since the 2nd, 3rd, . . . , (n \u22121)th smallest components of the points on that segment are larger than the corresponding components of the points on the \u201cgood\u201d line segment. However, the largest component of each point on the \u201cgood\u201d line segment is equal to 1, while the largest component of each point on the \u201cbad\u201d line segment is equal to 1 2. As a result, the algorithm outputs a vector whose largest component has a value that is far from its lexicographic maximum. Theorem 6 provides a much stronger example of instability than the construction of Diana et al. (2021), who showed that for all \u03b5 > 0 there exists a set X such that A(X, \u03b5), the set of possible outputs on input (X, \u03b5), contains an element that is far from the lexicographic maximum x\u2217of X. By contrast, Theorem 6 reverses the order of the quanti\ufb01ers, and shows that there exists a set X such that for all \u03b5 > 0 the set A(X, \u03b5) contains an element that is far from x\u2217. 6.2 Convergence rates Theorems 7 and 8 below give bounds on the rate at which a near or exact minimizer xc,\u03b3 \u2208X of the exponential loss function Lc(x) converges to a lexicographic maximum x\u2217\u2208lexmax X as c \u2192\u221e. Theorem 7 states that the smallest and second smallest components of xc,\u03b3 are never more than O(1/c) below their lexicographically maximum values, provided that \u03b3 \u2208[0, 1), so that an arbitrarily good approximation is possible by making c large. Note that the theorem makes no assumptions about X, not even that it is lexicographically stable. While the rate of convergence for the smallest component of xc,\u03b3 has been studied previously (Rosset et al., 2004), we believe that we are the \ufb01rst to prove unconditional convergence, even asymptotically, for the second smallest component. Theorem 7. For all n \u22652, X \u2286Rn, x\u2217\u2208lexmax X, c > 0 and \u03b3 \u2208[0, 1) d{1,2}(x\u2217| xc,\u03b3) \u22641 c log \u0012n \u2212k + 1 1 \u2212\u03b3 \u0013 . Proof. By Lemma 1 we only need to show sup x\u2208Xk\u22121(xc,\u03b3) \u03c3k(x) \u2265\u03c3k(x\u2217) for k \u2208{1, 2}. By the de\ufb01nition of x\u2217we have supx\u2208X \u03c31(x) = \u03c31(x\u2217), and since X0(xc,\u03b3) = X this implies supx\u2208X0(xc,\u03b3) \u03c31(x) = \u03c31(x\u2217). We also have \u03c32(x\u2217) = sup x\u2208X:\u03c31(x)\u2265\u03c31(x\u2217) \u03c32(x) \u2235De\ufb01nition of x\u2217 \u2264 sup x\u2208X:\u03c31(x)\u2265\u03c31(xc,\u03b3) \u03c32(x) \u2235\u03c31(x\u2217) \u2265\u03c31(xc,\u03b3) = sup x\u2208X1(xc,\u03b3) \u03c32(x) \u2235Eq. (4) In contrast to Theorem 7, the situation is very di\ufb00erent for the kth smallest component of xc,\u03b3 for all k \u22653. 
Theorem 8 states that this component can remain far below its lexicographically maximum value for arbitrarily large values of c, even if X is a bounded line segment (and thus is lexicographically stable) and \u03b3 = 0 (i.e., the minimization is exact). 9 Theorem 8. For all n \u2265k \u22653 and a \u22651 there exists a line segment X \u2286Rn satisfying \u2225X\u2225\u221e= 1 such that for all x\u2217\u2208lexmax X and c > 0 d(x\u2217| xc,0) \u2265d{k}(x\u2217| xc,0) \u22651 3 min n 1, a c o . Proof sketch. We consider only the case n = 3 here, as the general case n \u22653 proceeds very similarly. The complete proof is provided in Appendix A.6. If n = 3 then we de\ufb01ne X to be the line segment joining the following two points: x\u2217= (\u03b5, \u03b5, 1)\u22a4and x\u2032 = \u0012 0, 2 3, 2 3 \u0013\u22a4 where \u03b5 > 0. Clearly x\u2217is the lexicographic maximum of X. As discussed earlier, if c is large then the dominant term in Lc(x) corresponds to the smallest component of x. However if c is small then several of the largest terms in Lc(x) can have similar magnitude. We show that if c \u2264a = \u2126(log 1 \u03b5) then at least the two largest terms in Lc(x), which correspond to the two smallest components of x, have roughly the same magnitude. In this case the minimizer of Lc(x) will be much closer to x\u2032 than to x\u2217, because the second smallest component of x\u2032 is roughly 2 3 larger than the second smallest component of x\u2217, while the smallest component of x\u2217is only \u03b5 larger than the smallest component of x\u2032. 6.3 A related algorithm using multiplicative weights Next, we discuss a related approach for \ufb01nding a lexicographic maximum using no-regret strategies to solve an associated zero-sum game, as was considered in great detail by Syed (2010). This is another natural approach for \ufb01nding lexicographic maxima since, at least when X is a convex polytope, we can view the lexicographic maximization computational task through the lens of solving a zero-sum game, a problem where no-regret algorithms have found a great deal of use. In the game theory perspective, we are trying to solve the following minimax problem: min p\u2208\u2206m max q\u2208\u2206n p\u22a4Mq where M \u2208Rm\u00d7n and \u2206m, \u2206n are the probability simplices on m, n items, respectively. An equilibrium pair of this minimax problem is a pair of distributions \u02c6 p \u2208\u2206m and \u02c6 q \u2208\u2206n satisfying p\u22a4M\u02c6 q \u2265\u02c6 p\u22a4M\u02c6 q \u2265\u02c6 p\u22a4Mq for all p \u2208\u2206m and q \u2208\u2206n. Von Neumann (1928) showed that such a \u201cminimax-optimal\u201d pair, commonly known as a Nash equilibrium in a zero-sum game, always exists for every M. There has been considerable work on how to compute such a pair, including through the use of no-regret online learning algorithms for sequentially updating p and q. For example, Multiplicative Weights (Algorithm 2) is known to compute an \u01eb-approximate equilibrium of the game given by M, with \u01eb = O \u0010p log(m)/T + p log(n)/T \u0011 (Freund and Schapire, 1999). In this paper, we study how to compute lexmax X for some X \u2286Rm. Let us suppose X is a convex polytope, that is, the convex hull of a \ufb01nite set of points S \u2286Rm. Let M be a matrix whose columns are the points in S, so that X = {Mq : q \u2208\u2206n}. We can then de\ufb01ne a lexmax equilibrium strategy for the column player as any q\u2217for which Mq\u2217\u2208lexmax X. 
A \u201clexmin\u201d equilibrium strategy for the row player can be de\ufb01ned similarly. Thus, a lexmin and lexmax equilibrium pair p\u2217, q\u2217is a \u201cspecial\u201d Nash equilibrium which satis\ufb01es the additional constraints of being lexicographically optimal. In this work, we focus only on computing a lexmax solution q\u2217for the column player. The use of no-regret algorithms (such as Multiplicative Weights) has been very helpful for \ufb01nding Nash equilibria in zero-sum games, among many other applications, and the exponentiation used in the update in Algorithm 2 has an attractive similarity to the minimization of the exponential loss 10 Algorithm 2: Multiplicative weights method for computing a Nash equilibrium Input: M \u2208Rm\u00d7n, num. iter. T Input: \u03b71, \u03b72, . . . > 0 learning parameters p1 \u2190 \u0000 1 n, . . . , 1 n \u0001\u22a4 for t = 1, . . . , T do qt \u2190any element in arg maxq\u2208\u2206n p\u22a4 t Mq pt+1 \u2190 1 Zt+1 exp \u0010 \u2212\u03b7tM Pt s=1 qs \u0011 where exp is applied coordinate wise, and Zt+1 is the normalizer. end Return: \u00af pT := 1 T PT t=1 pt, \u00af qT := 1 T PT t=1 qt (Eq. (1)) considered primarily in this work. A natural question is whether Algorithm 2 is suitable for \ufb01nding not just any equilibrium strategy, but a lexmax equlibrium strategy p\u2217as de\ufb01ned above. Unfortunately, prior work suggests this is not the case when the learning parameter \u03b7t is \ufb01xed to a constant: Theorem 9 (Informal summary of (Syed, 2010, Theorem 3.7)). There is a family of game matrices M \u2208R3\u00d74 such that if Algorithm 2 is run with a constant learning parameter \u03b7t = \u03b7, the output \u00af qT will not converge to a lexmax equilibrium strategy for the row player as T \u2192\u221e. On the positive side, Syed (2010) also gives a result for when Algorithm 2 computes a lexocographically optimal solution, but only in a very speci\ufb01c case where the solution has distinct values. We now aim to rehabilitate Algorithm 2, the work of Syed (2010) notwithstanding. As we will see, the choice of learning parameters \u03b7t is indeed quite important. First, let us de\ufb01ne the function Hc : \u2206n \u2192R as Hc(q) := 1 c log m X i=1 exp(\u2212ce\u22a4 i Mq) ! , (5) where ei is the ith basis vector. This function is strongly related to Lc(\u00b7) as in Eq. (1), except that we have 1 c log(\u00b7) operating on the outside. Notice, however, that the log transformation is monotonically increasing, so any minimizer of the exponential loss also minimizes Hc(\u00b7). Second, we observe that Hc is c-smooth \u2014 that is, it satis\ufb01es \u2225\u2207Hc(q) \u2212\u2207Hc(q\u2032)\u2225\u2264c\u2225q \u2212q\u2032\u2225for any q, q\u2032 \u2208\u2206n. To give our main result in this section, we emphasize that the following leans on a primal-dual perspective on optimization that uses aforementioned tools on game playing. See Wang et al. (2023) for a complete description. Theorem 10. If Algorithm 2 is run with parameter \u03b7t = c t then Hc(\u00af qT ) \u2212min q\u2208\u2206n Hc(q) = O \u0012c log T T \u0013 . Proof. This proof proceeds in three parts. First, we recall the well-known Frank-Wolfe procedure for minimizing smooth convex functions on constrained sets, applied here to Hc(q). Second, we show that, with the appropriate choice of update parameters, Frank-Wolfe applied to Hc(q) is identical Algorithm 2. 
Finally, we appeal to standard convergence guarantees for Frank-Wolfe to obtain the desired convergence rate. The Frank-Wolfe algorithm, applied to Hc(\u00b7), is as follows. Let \u00af q0 \u2208\u2206n be an arbitrary initial point, and let \u03b31, \u03b32, . . . > 0 be a step-size schedule. On each iteration t = 1, . . . , T , we perform the following: \u2207t \u2190\u2207Hc(\u00af qt\u22121) (6) 11 qt \u2190arg min q\u2208\u2206n\u27e8q, \u2207t\u27e9 (7) \u00af qt \u2190(1 \u2212\u03b3t)\u00af qt\u22121 + \u03b3tqt. (8) Ultimately, the algorithm returns \u00af qT . We now show that this implementation of Frank-Wolfe is identical to Algorithm 2, as long as we have \u03b3t = 1 t . To see this, we need to observe that the gradients \u2207t can be written as \u2207t = \u2207Hc(\u00af qt\u22121) \u221d\u2212exp(\u2212cM\u00af qt\u22121) = \u2212exp \u2212 c t \u22121M t\u22121 X s=1 qs ! . In other words, on every round we have that pt = \u2212\u2207t. Furthermore, the vectors qt are chosen in exactly the same way, as the di\ufb00erence in sign is accounted for by the fact that qt is chosen as an arg max in Algorithm 2 as opposed to an arg min in Frank-Wolfe. Let us \ufb01nally recall a result that can be found in Abernethy and Wang (2017), that the FrankWolfe algorithm run with parameters \u03b3t = 1 t , for any function which is \u03b1-smooth on its domain, converges at a rate of O(\u03b1 log(T )/T ). We can now appeal to a well-known result on the convergence of Frank-Wolfe.1 This observation emphasizes that the popular multiplicative update method (Algorithm 2), that appears to fail to solve the desired problem according to the work of Syed (2010), actually succeeds in \ufb01nding a lexicographic maximum insofar as the exponential loss minimization scheme succeeds. The critical \u201cpatch\u201d that \ufb01xes the algorithm is the modi\ufb01ed learning rate of c t in place of \ufb01xed \u03b7. 7 Conclusion We proved a close connection between the two primary methods for computing a lexicographic maximum of a set, and used this connection to show that the method based on exponential loss minimization converges to a correct solution for sets that are lexicographically stable. We believe our results represent the most general convergence criteria for exponential loss minimization that are known. We also undertook the \ufb01rst analysis of the convergence rate of exponential loss minimization, and found that even when convergence is guaranteed, the components of the minimizing vector can converge at vastly di\ufb00erent rates. Finally, we showed that the well-known Multiplicative Weights algorithm can \ufb01nd a lexicographic maximum of a lexicographically stable set if the learning rate is suitably chosen. Acknowledgements We thank Miroslav Dud\u00b4 \u0131k , Saharon Rosset and Matus Telgarsky and for helpful discussions, and the anonymous reviewers for their comments and suggestions."
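The iterative procedure of Algorithm 1 admits a short sketch when X is a finite set, so that each inner maximization can be solved by enumeration; the helper sigma and the rule for selecting an eps-optimal point below are our own illustrative choices, not the paper's implementation. With eps = 0 this returns a lexicographic maximum (Theorem 3), while with eps > 0 the arbitrary choice among eps-optimal feasible points is exactly where lexicographic instability can enter.

```python
# Sketch of Algorithm 1 ("progressive filling") specialized to a finite set X.
def sigma(x, i):
    """i-th smallest component of x (1-indexed, matching the text's sorting functions)."""
    return sorted(x)[i - 1]

def algorithm1(X, eps=0.0):
    n = len(X[0])
    x_prev = None
    for k in range(1, n + 1):
        # Feasible set: points whose k-1 smallest components dominate those of x_prev.
        feasible = [x for x in X
                    if x_prev is None
                    or all(sigma(x, i) >= sigma(x_prev, i) for i in range(1, k))]
        best = max(sigma(x, k) for x in feasible)
        # Select any eps-optimal feasible point; here simply the first one encountered.
        x_prev = next(x for x in feasible if sigma(x, k) >= best - eps)
    return x_prev

X = [(5, 2, 4), (2, 6, 3), (8, 7, 1)]
print(algorithm1(X))   # (5, 2, 4), the lexicographic maximum of this finite set
```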
16
+ }
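The multiplicative-weights scheme of Algorithm 2 with the decaying learning rate eta_t = c/t from Theorem 10 can likewise be sketched in a few lines; the game matrix below is a toy example of our own, chosen so that the polytope X = {Mq : q in Delta_n} is a line segment whose lexicographic maximum is (0.5, 0.5, 0.6).

```python
# Sketch of Algorithm 2 (multiplicative weights) with learning rate eta_t = c / t.
import numpy as np

def multiplicative_weights(M, T, c):
    m, n = M.shape
    q_sum = np.zeros(n)
    q_bar = np.zeros(n)
    for t in range(1, T + 1):
        eta = c / t
        # Row player's multiplicative-weights distribution over the m rows.
        logits = -eta * (M @ q_sum)
        p = np.exp(logits - logits.max())
        p /= p.sum()
        # Column player best-responds with a vertex of the simplex maximizing p^T M q.
        j = int(np.argmax(M.T @ p))
        q_t = np.zeros(n)
        q_t[j] = 1.0
        q_sum += q_t
        q_bar += (q_t - q_bar) / t      # running average of q_1, ..., q_t
    return q_bar

# Columns of M are the two endpoints of a segment; the segment's lexicographic
# maximum is its midpoint (0.5, 0.5, 0.6).
M = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.6, 0.6]])
q_bar = multiplicative_weights(M, T=2000, c=20.0)
print(M @ q_bar)   # approximately (0.5, 0.5, 0.6)
```

Because the column player's best response is always attained at a vertex of Delta_n, each q_t can be taken to be a basis vector, and the averaged strategy q_bar_T supplies the required mixture.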
intro_8K/test_introduction_short_2405.01413v1.json ADDED
@@ -0,0 +1,19 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01413v1",
3
+ "title": "MiniGPT-3D: Efficiently Aligning 3D Point Clouds with Large Language Models using 2D Priors",
4
+ "abstract": "Large 2D vision-language models (2D-LLMs) have gained significant attention\nby bridging Large Language Models (LLMs) with images using a simple projector.\nInspired by their success, large 3D point cloud-language models (3D-LLMs) also\nintegrate point clouds into LLMs. However, directly aligning point clouds with\nLLM requires expensive training costs, typically in hundreds of GPU-hours on\nA100, which hinders the development of 3D-LLMs. In this paper, we introduce\nMiniGPT-3D, an efficient and powerful 3D-LLM that achieves multiple SOTA\nresults while training for only 27 hours on one RTX 3090. Specifically, we\npropose to align 3D point clouds with LLMs using 2D priors from 2D-LLMs, which\ncan leverage the similarity between 2D and 3D visual information. We introduce\na novel four-stage training strategy for modality alignment in a cascaded way,\nand a mixture of query experts module to adaptively aggregate features with\nhigh efficiency. Moreover, we utilize parameter-efficient fine-tuning methods\nLoRA and Norm fine-tuning, resulting in only 47.8M learnable parameters, which\nis up to 260x fewer than existing methods. Extensive experiments show that\nMiniGPT-3D achieves SOTA on 3D object classification and captioning tasks, with\nsignificantly cheaper training costs. Notably, MiniGPT-3D gains an 8.12\nincrease on GPT-4 evaluation score for the challenging object captioning task\ncompared to ShapeLLM-13B, while the latter costs 160 total GPU-hours on 8 A800.\nWe are the first to explore the efficient 3D-LLM, offering new insights to the\ncommunity. Code and weights are available at\nhttps://github.com/TangYuan96/MiniGPT-3D.",
5
+ "authors": "Yuan Tang, Xu Han, Xianzhi Li, Qiao Yu, Yixue Hao, Long Hu, Min Chen",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV",
11
+ "cs.AI",
12
+ "cs.CL",
13
+ "cs.LG"
14
+ ],
15
+ "label": "Original Paper",
16
+ "paper_cat": "Mixture AND of AND Experts",
17
+ "gt": "Large Language Models (LLMs) have recently driven advance- ments in multiple fields [15, 35, 45, 46], benefiting from their world knowledge. Built on LLMs, large 2D vision-language models (2D- LLMs) [4, 27, 62] can align image features with text through an image feature projector, enabling 2D-LLMs to understand visual content. Inspired by 2D-LLMs, large 3D point cloud-language mod- els (3D-LLMs) [39, 40, 51] aim to incorporate 3D point cloud features into LLMs, equipping LLMs with the ability to perceive and reason in 3D space. These 3D-LLMs hold promise for widespread applica- tions in fields like robotics [44, 48] and autonomous driving [10, 15]. However, 3D-LLMs are expensive to train. For example, training PointLLM-13B [51] takes 213 total GPU-hours on 8 A100 GPU, mak- ing research and applications extremely challenging. Here, we aim to find a more efficient way to connect 3D point clouds with LLMs. We observe that existing 3D-LLMs directly align point cloud en- coders with LLMs. Although these encoders can produce somewhat unified features through multimodal pre-training, there is still a significant modality gap between 3D points with LLMs, requiring substantial resources for alignment. Besides, in contrast to resource- intensive alignment between vision and language, 3D point clouds and 2D images are both visual modalities, which makes it easier to align their representations. Thus, we pose a question: Can we use 2D-LLMs as a strong prior to connect LLMs and 3D data, making alignment more efficient? In other words, as shown in Figure 2, leveraging pre-trained 2D-LLMs directly allows for cutting down the cost of vision-language alignment, leaving only the 2D-3D vision alignment, which is significantly cheaper. Following this intuition, we propose MiniGPT-3D, an efficient 3D-LLM that connects 3D point clouds and LLMs using 2D-LLMs as priors. Our MiniGPT-3D achieves multiple state-of-the-art (SOTA) Pre-trained 2D Priors A single RTX 3090 (24G) with 26.8 GPU-Hours is all you need! 3D LLM 2D 3D LLM 8 \u00d7 A100 (80G) GPU 213 Total GPU-Hours Existing Methods Ours Align Figure 2: Existing methods and ours to align 3D with LLMs. results, requiring only 27 hours of training on a single RTX 3090 GPU. Specifically, we propose an efficient four-stage training strat- egy in a cascaded way, gradually allowing the model to learn unified visual-textual representations. This process achieves the smooth transfer of priors from 2D-LLM to the 3D space, thus efficiently constructing a bridge from 3D to LLM. Moreover, we introduce the Mixture of Query Experts (MQE), which comprises multiple query experts and an expert router, enabling the adaptive aggregation of features from multiple experts with only 0.4M parameters. MQE dynamically adjusts the cooperation relationship between experts, thereby aggregating 3D features from multiple perspectives into the semantic space of 2D-LLM. Meanwhile, we employ various parameter-efficient fine-tuning (PEFT) technologies like LoRA [21] and Norm fine-tuning, and utilize an efficient LLM, further reducing the model\u2019s training overhead. As shown in Figure 1, MiniGPT-3D achieves new SOTA perfor- mance on generative 3D object classification and object captioning tasks. Specifically, compared to the powerful baseline ShapeLLM- 13B [39], MiniGPT-3D achieves a 6.77% increase in classification average accuracy and an 8.12 increase in GPT-4 evaluation score. Notably, MiniGPT-3D utilizes extremely cheaper training resources (1\u00d7 RTX 3090 vs. 
8\u00d7 A800), with up to 6\u00d7 acceleration (26.8h on RTX 3090 vs. 160h on A800). Furthermore, our model has signif- icantly fewer trainable parameters, reduced by up to 260\u00d7, with 2.95B model parameters in total, which is decreased by up to 4.6\u00d7. MiniGPT-3D takes the first step in efficient 3D-LLM, we hope that MiniGPT-3D can bring new insights to this community. In summary, our contributions are as follows: \u2022 We present MiniGPT-3D, an efficient and powerful 3D-LLM that aligns 3D points with LLMs using 2D priors, achieving multiple SOTA with only 26.8h of training on one RTX 3090. \u2022 We propose an efficient four-stage training strategy in a cascaded way, gradually transferring the knowledge from 2D-LLMs to 3D while requiring only 47.8M learnable pa- rameters. \u2022 We design the mixture of query experts to aggregate multi- ple features from different experts with only 0.4M parame- ters. \u2022 Extensive experiments show the superior performance of MiniGPT-3D on multiple tasks while reducing the training time and parameters by up to 6x and 260x, respectively. (a) Stage I (c) Stage III Copy Weight Copy Weight Copy Weight PC Encoder Q-Former Point Cloud LLM (Phi-2) Queries Detailed Caption & Conversation Brief Caption Queries PC Encoder Q-Former Normalization & LoRA Point Cloud Instruction Projector LLM (Phi-2) Brief Caption (d) Stage IV Normalization & LoRA Projector LLM (Phi-2) Detailed Caption & Conversation Point Cloud PC Encoder Q-Former MLP Normalization & LoRA Expert Weights Instruction Instruction MLP (b) Stage II PC Encoder Q-Former Point Cloud LLM (Phi-2) Queries Instruction Normalization & LoRA MLP Normalization & LoRA Projector Normalization & LoRA MLP Normalization & LoRA Projector Mixture of Query Experts Query Experts 2D-LLM 3D Trainable Frozen Figure 3: Training framework and strategy. Our MiniGPT-3D utilizes a four-stage training strategy. (a) We solely train the point cloud projection layer (MLP). (b) We train the modality projector while fine-tuning the point cloud projection layer, Q-Former, and LLM backbone. (c) We further enhance the modules trained in the second stage by leveraging a more challenging task. (d) Finally, we only train the mixture of query experts, while freezing the remaining modules.",
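One way to read Figure 3 and Table 1 is as a stage-to-trainable-modules schedule, with the point cloud encoder kept frozen throughout; the sketch below is our own summary with placeholder module names, not the authors' code.

```python
# Rough summary (our reading of Figure 3 / Table 1; module names are placeholders)
# of which modules are trainable in each of the four training stages.
STAGE_TRAINABLE = {
    1: ["pc_projection_mlp"],                                     # ~1.4M trainable params
    2: ["pc_projection_mlp", "qformer_qkv_and_norm_lora",
        "modality_projector", "llm_norm_and_lora"],               # ~47.4M trainable params
    3: ["pc_projection_mlp", "qformer_qkv_and_norm_lora",
        "modality_projector", "llm_norm_and_lora"],               # same modules, harder data
    4: ["mixture_of_query_experts"],                              # ~0.4M trainable params
}

def set_stage(model_params, stage):
    """model_params: dict of module name -> list of parameter-like objects exposing a
    `requires_grad` attribute (e.g. torch parameters); everything else stays frozen."""
    trainable = set(STAGE_TRAINABLE[stage])
    for name, params in model_params.items():
        for p in params:
            p.requires_grad = name in trainable

class P:  # minimal stand-in for a parameter object
    def __init__(self):
        self.requires_grad = False

model = {name: [P()] for name in
         ["point_cloud_encoder", "pc_projection_mlp", "qformer_qkv_and_norm_lora",
          "modality_projector", "llm_norm_and_lora", "mixture_of_query_experts"]}
set_stage(model, 4)
print({k: v[0].requires_grad for k, v in model.items()})  # only the query experts train
```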
18
+ "main_content": "The exceptional instruction-following and generalization capabilities of LLMs [46, 49, 53, 55] have been integrated into vision, leading to the emergence of large 2D vision-language models (2D-LLMs). Early works such as Flamingo [1] and BLIP-2 [27] successfully use projectors to align vision information to LLMs. More recently, most works mainly focus on improving model capabilities through expanding the instruction-tuning dataset [5, 30, 61], increasing resolution of image [2, 31], enhancing image encoders [7, 59]. Meanwhile, some methods [8, 9, 57, 60] have also begun to explore efficient 2D-LLM. Models like TinyLlama [60] and TinyGPT-V [57] use Phi-2 [33], an efficient LLM, to achieve easily deployable 2DLLMs. Among them, TinyGPT-V leverages LoRA [21] technology and pre-trained modules to achieve extremely efficient fine-tuning. However, TinyGPT-V can only handle 2D images, efficient 3D-LLM remains unexplored, and we aim to fill this gap. 2.2 Large 3D Point Cloud-Language Models Large 3D point cloud-language models (3D-LLMs) introduce LLM into the point cloud modality [6, 20, 23, 29, 36, 39, 40, 51, 54]. Early attempt [20] renders 3D objects into 2D multi-view images, then utilizes 2D-LLM to understand 3D. However, the absence of direct perception of raw point cloud data limits its comprehension of 3D geometry. To address this issue, recent works [6, 23, 36, 40] propose to discard the \u201crendering\u201d and encode point cloud directly, followed by modal alignment to fixed LLMs via trainable projectors. PointLLM [51] and ShapeLLM [39] show that models can be enhanced after fully fine-tuning. However, the training of 3D-LLMs is expensive. For instance, PointLLM-13B requires training on 8 A100 GPUs for up to 216 total GPU-hours. We observe that with 2D-LLM as visual prior, we can not only bypass the \u201cpoint cloud rendering\u201d, but also make this hierarchical alignment extremely efficient. Therefore, we propose MiniGPT-3D, different from existing 3D-LLMs which aligns 3D points directly to LLMs, our MiniGPT-3D leverages the powerful priors from 2D-LLM as a linkage between LLM and 3D points, using only a RTX 3090 to train for 27 hours. 2.3 Mixture of Experts Mixture of Experts (MoE) [22, 24] is an ensemble learning technique that adaptively activates selected modules, referred to as experts, based on input. MoE is widely used in various fields [14, 25, 26, 42, 43]. Shazeer et al. introduce MoE into NLP for the first time, where each intervening layer between LSTM layers serves as an expert. Gshard [26] further expands the MoE to Transformer [47], treating each Feed-Forward Neural Network (FNN) as an expert. Recently, with the emergence of LoRA, several works [13, 16, 58] design FFN\u2019s LoRA network as an expert to efficiently fine-tune LLM. Moreover, OneLLM [19] introduces MoE to the learned projector of 2D-LLM, with each projector serving as an expert. In our work, we integrate the MoE concept into the queries of Q-Former [27], treating each set of queries as an expert. These experts adaptively aggregate point cloud features across diverse extraction perspectives. 3 METHOD In this section, we first introduce the architecture of MiniGPT-3D (Sec. 3.1), and then present our four-stage training strategy (Sec. 3.2), and finally elucidate the training loss for MiniGPT-3D (Sec. 3.3). 
3.1 Model Architecture Figure 3 depicts the architecture of MiniGPT-3D, which consists of the six main components: a point cloud encoder, a point cloud projection layer (MLP), a Q-Former, a mixture of query expert (MQE), a modality projector, and a large language model. The MiniGPT-3D framework introduces a two-step projection process, transforming the point cloud from 3D to 2D and then to 1D. Specifically, the point cloud is passed to the point cloud encoder to extract 3D features. Subsequently, features are then projected into a 2D semantic space using the point cloud projection layer. Finally, leveraging the 2D-LLM modules including the Q-Former, modality projector, Norm of LLM, and LoRA of LLM, features in 2D-LLM space are transduced into the 1D-text space of LLM, enabling efficient alignment between 3D and LLM. Additionally, MQE enhances MiniGPT-3D\u2019s comprehensive and accurate perception of 3D objects. Details are presented in the following sections. 3.1.1 3D Features to 2D. During this process, the point cloud is encoded into 3D features and subsequently projected into the 2D semantic space of the 2D-LLM. Point Cloud Encoder. The input point cloud is encoded into 3D features by the point cloud encoder \ud835\udc53\ud835\udc5d\ud835\udc50. Specifically, the point cloud \ud835\udc43\u2208R\ud835\udc5b\u00d7\ud835\udc51is input to \ud835\udc53\ud835\udc5d\ud835\udc50, where \ud835\udc5bis the number of points and \ud835\udc51denotes the feature dimension of each point. Then, \ud835\udc53\ud835\udc5d\ud835\udc50outputs a point feature sequence \ud835\udc4b\u2208R\ud835\udc5a\u00d7\ud835\udc4f, comprising \ud835\udc5afeatures, each with a dimension of\ud835\udc4f. In our experiments, we employ the Point-BERT [56] model, pre-trained on ULIP-2 [52] using the Objaverse [12] dataset, as the point cloud encoder. To maintain pre-training knowledge, we freeze the encoder\u2019s parameters on all training stages. Point Cloud Projection Layer. The point cloud projection layer \ud835\udc53\ud835\udc40\ud835\udc3f\ud835\udc43is an MLP with two linear layers, which embeds point features \ud835\udc4binto the semantic space of the pre-trained 2D Q-Former [27], aligning their dimensions. Concisely, \ud835\udc4c= \ud835\udc53\ud835\udc40\ud835\udc3f\ud835\udc43(\ud835\udc4b), where \ud835\udc4c\u2208 R\ud835\udc5a\u00d7\ud835\udc4f\u2032 and \ud835\udc4f\u2032 is the hidden space dimension of Q-Former. 3.1.2 Features in 2D-LLM space to LLM. This part transduces the point cloud representation in the 2D semantic space of 2D-LLM to the 1D text space of LLM. Q-Former. The Q-Former \ud835\udc53\ud835\udc44\ud835\udc39, with a decoder-based Transformer structure, transforms point features \ud835\udc4cinto point queries \ud835\udc44. This process not only enhances the information extracted from point cloud features but also reduces input size for subsequent LLM, accelerating training and inference. Concisely, \ud835\udc44= \ud835\udc53\ud835\udc44\ud835\udc39(\ud835\udc4c, \ud835\udc44), where \ud835\udc44\u2208R\ud835\udc5c\u00d7\ud835\udc4f\u2032, \ud835\udc44\u2208R\ud835\udc5c\u00d7\ud835\udc4f\u2032. \ud835\udc44is the queries of Q-former and \ud835\udc5cis the number of query. In experiments, we initialize Q-Former with BLIP-2 [27] pre-trained weights. Given Q-Former\u2019s extensive 105M parameters, we employ PEFT technologies to fine-tune its Query, Key, and Value layers, and normalization layers, thus enhancing adaptability to point clouds while preserving 2D knowledge. Mixture of Query Experts. 
Inspired by multi-view image rendering for 3D-to-2D projection, we propose the Mixture of Query Experts (MQE) to achieve a similar effect. In the process of MQE, multiple sets of queries (query expert) are used to transform point features into the semantic space of 2D Q-Former. MQE is the first PC Encoder Q-Former Point Cloud MLP Query Experts SoftMax Routing Weights Trainable Frozen Non-activated Forward Activated Forward Expert Router MLP Figure 4: The framework of the mixture of query experts. First, a point cloud is encoded to features\ud835\udc4band\ud835\udc4c. Feature\ud835\udc4bis then passed through to the expert router, assigning softmaxbased weights to experts. The top \ud835\udc54experts are selected based on these weights. These experts, together with \ud835\udc4c, are then fed into the Q-Former, and their outputs are weighted to produce the final point queries \ud835\udc44. to introduce dynamic routing of MoE into queries, enabling adaptive activation of more suitable query experts to capture richer semantic information across diverse point cloud inputs, as shown in Figure 4. MQE contains \ud835\udc58trainable query experts {\ud835\udc38\ud835\udc58}, each is a set of queries initialized from BLIP-2. To integrate multiple query experts into one set of queries, we use a dynamic routing, expert router \ud835\udc53\ud835\udc45, which regulates each expert\u2019s contribution. The expert router includes an MLP and a softmax operation, which accepts feature \ud835\udc4band assigns softmax-based routing weights to each expert. We employ the sparse routing strategy [43], selecting\ud835\udc54experts with the highest weights. Next, the selected query experts {\ud835\udc38\ud835\udc54} utilize Q-Former to extract high-dimensional semantics {\ud835\udc44\u210e} from the feature \ud835\udc4c. {\ud835\udc44\u210e} are then weighted by the corresponding routing weights to generate the final point queries \ud835\udc44. The process can be formulated as: \ud835\udc44= \u2211\ufe01 \ud835\udc38\ud835\udc5e\u2208{\ud835\udc38\ud835\udc54} \ud835\udc64\ud835\udc5e\u00b7 \ud835\udc53\ud835\udc44\ud835\udc39(\ud835\udc4c, \ud835\udc38\ud835\udc5e), (1) \ud835\udc64\ud835\udc5e= \ud835\udc53\ud835\udc45(\ud835\udc4b) [\ud835\udc5e] = Softmax (MLP(\ud835\udc4b)) [\ud835\udc5e] . (2) To enable query experts to learn knowledge within a stable 3D-LLM semantic context, MQE is only utilized in the final training stage, by which time other modules have completed training. Modality Projector. We use an MLP as the modality projector to bridge the modality gap between point cloud and text, while transforming point queries\ud835\udc44\u2208R\ud835\udc5c\u00d7\ud835\udc4f\u2032 into point tokens\ud835\udc47\ud835\udc5d\ud835\udc50\u2208R\ud835\udc5c\u00d7\ud835\udc50, where \ud835\udc50denotes the shared dimension of both point and text tokens. 3.1.3 Large Lanuguage Model Backbone. To minimize GPU memory usage during training, we utilize Phi-2 [33] with 2.7 billion parameters as the large language model backbone of MiniGPT-3D. Table 1: Each training stage setups and overhead. 
Training Stages Dataset Types Dataset Scale Epochs Init_lr & Min_lr Trainable Parameters Training Time using One RTX 3090 GPU Stage I Brief Caption 660 k 1 3e-5, 1e-5 1.4 M 9.4 h Stage II Brief Caption 660 k 1 3e-5, 1e-5 47.4 M 10.9 h Stage III Detailed Caption & Conversation 70 k 3 1e-5, 1e-6 47.4 M 4.9 h Stage IV Detailed Caption & Conversation 70 k 1 5e-6, 1e-6 0.4 M 1.6 h In MiniGPT-3D, the LLM backbone \ud835\udc53\ud835\udc59\ud835\udc59\ud835\udc5aprocesses a sequence of tokens \ud835\udc47= \u0000\ud835\udc611,\ud835\udc612, . . . ,\ud835\udc61\ud835\udc57 \u0001 \u2208R\ud835\udc57\u00d7\ud835\udc50, where \ud835\udc57is the number of tokens, including point tokens and text tokens. Leveraging the self-attention mechanism, the LLM backbone can comprehend the semantic relationships from different modality tokens and generate responses for given instructions. This process can be expressed as: \u02c6 \ud835\udc47= \ud835\udc53\ud835\udc59\ud835\udc59\ud835\udc5a(\ud835\udc47), (3) where \u02c6 \ud835\udc47= \u0000\u02c6 \ud835\udc611, \u02c6 \ud835\udc612, . . . , \u02c6 \ud835\udc61\ud835\udc57 \u0001 \u2208R\ud835\udc57\u00d7\ud835\udc50, and \u02c6 \ud835\udc61\ud835\udc56denotes the predicted \ud835\udc56-th token, based on the semantics of all previous tokens {\ud835\udc61<\ud835\udc56}. Subsequently, \u02c6 \ud835\udc61\ud835\udc56is passed through a linear layer \ud835\udc53\ud835\udc59\ud835\udc59\ud835\udc5a\u2192\ud835\udc63\ud835\udc5c\ud835\udc50\ud835\udc4e\ud835\udc4fto be mapped into the vocabulary space. A softmax operation is then applied to compute a probability distribution across the vocabulary, with the word of highest probability designated as the prediction \ud835\udc67\ud835\udc56for \u02c6 \ud835\udc61\ud835\udc56. The process can be formulated as: \u02dc \ud835\udc61\ud835\udc56= \ud835\udc53\ud835\udc59\ud835\udc59\ud835\udc5a\u2192\ud835\udc63\ud835\udc5c\ud835\udc50\ud835\udc4e\ud835\udc4f(\u02c6 \ud835\udc61\ud835\udc56), (4) \ud835\udc67\ud835\udc56= \ud835\udc4e\ud835\udc5f\ud835\udc54 max \ud835\udc64\u2208\ud835\udc63\ud835\udc5c\ud835\udc50\ud835\udc4e\ud835\udc4fSoftmax(\u02dc \ud835\udc61\ud835\udc56)[\ud835\udc64]. (5) As LLMs are primarily trained on text, a perception gap arises when processing non-textual information. Therefore, we adapt PEFT technology LoRA [21] to the LLM backbone, and also further fine-tune the normalization layers, preserving learned knowledge and reducing computational overhead. 3.2 Training Stages To gradually transfer the priors of 2D-LLM to point cloud modality and enhance the nascent 3D-LLM\u2019s comprehension, our training process includes four stages, each focusing on a distinct task, as shown in Figure 3. The following subsections will describe them. 3.2.1 Stage I. As shown in Figure 3(a), the first stage aims to bridge the knowledge gap between the 3D point cloud encoder and 2D-LLM modules, facilitating a seamless transition from 3D to 2D. We solely train the point cloud projection layer (MLP), with other modules frozen. Initialization is sourced from ULIP-2 [52] for the encoder, BLIP-2 [27] for Q-Former, and TinyGPT-V [57] for normalization layers of LLM, LoRA of LLM, and the modality projector. Since the frozen Q-Former from BLIP-2 is also used in TinyGPT-V, MiniGPT-3D only owns two knowledge domains from 3D of ULIP-2 and 2D-LLM of TinyGPT-V before training. 
To build a robust bridge between domains, we train the projection layer using 660k caption-point cloud pairs, involving 1.4M parameters, as detailed in Table 1. 3.2.2 Stage II. In the second stage, our objective is to transfer the vision-language knowledge domain to 3D space, establishing the 3Dlanguage knowledge domain. As shown in Figure 3(b), we fine-tune four parts: the point cloud projection layer (MLP), the Q-Former, the modality projector, and the LLM. Utilizing the 3D-2D bridge of the first stage, 2D-LLM modules, via fine-tuning, gain comprehension of 3D point clouds and gradually transfer the powerful priors to be the 3D-language knowledge. During this process, to minimize the impact of the 3D-2D bridge, we employ the identical dataset from the first stage to train 47.4M parameters, as outlined in Table 1. 3.2.3 Stage III. To gain better 3D-language knowledge, we further fine-tune the modules trained in the second stage and utilize a more challenging dataset, including detailed caption-point cloud pairs and conversations, to empower MiniGPT-3D with the capabilities to comprehend and respond to complex instructions. 3.2.4 Stage IV. During the prior stages, using a single set of queries restricts 3D perception perspective, leading to incomplete cognition. To refine MiniGPT-3D\u2019s perception, we introduce MQE to adaptively activate suitable multiple query experts for Q-Former, as shown in Figure 3(d). Distinct from the preceding three stages focusing on rapidly establishing 3D-language knowledge, this stage presents a stable semantic context for query experts to learn knowledge efficiently. Specifically, we only fine-tune 0.4M MQE-related parameters, reusing the dataset from the third stage to minimize the impact of data distribution changes, as outlined in Table 1. 3.3 Training Objective The training objective of MiniGPT-3D aims to minimize the discrepancy between predicted and true probability distributions at each token position. Given a point cloud and corresponding text instruction, MiniGPT-3D outputs a sequence \u02c6 \ud835\udc47. Next, \u02c6 \ud835\udc47is processed by \ud835\udc53\ud835\udc59\ud835\udc59\ud835\udc5a\u2192\ud835\udc63\ud835\udc5c\ud835\udc50\ud835\udc4e\ud835\udc4fand then a softmax operation is applied to obtain the probability distribution over the vocabulary for each output token, denoted as \ud835\udc47. The training loss is formulated as follows: L = CrossEntropy \u0010 \u210e(\ud835\udc3a), \ud835\udc47 \u0011 , (6) where the\u210e(\u00b7) represents the LLM\u2019s tokenizer.\ud835\udc3ais the ground truth text. The \ud835\udc36\ud835\udc5f\ud835\udc5c\ud835\udc60\ud835\udc60\ud835\udc38\ud835\udc5b\ud835\udc61\ud835\udc5f\ud835\udc5c\ud835\udc5d\ud835\udc66(\u00b7) refers to the cross-entropy loss function. Notably, we only compute the loss for the generated text. 4 EXPERIMENTS 4.1 Experimental Settings Utilizing one RTX 3090 GPU with 24GB of RAM, we train MiniGPT3D with only 47.8M trainable parameters in 26.8 hours. We adopt the AdamW optimizer with a weight decay of 0.05 and a cosine decay with linear warm up learning rate schedule. The initial learning rate decreases gradually as the training stage advances, as shown in Table 1. We use the point-text instruction dataset [51], including 660K brief-description instructions and 70K complex instructions. 200 objects are splited as test data, following PointLLM [51] and ShapeLLM [39]. 
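The training objective in Equation (6) above is plain cross-entropy computed only on the generated answer tokens. A generic sketch of that masking is shown below; the ignore-index value of -100 and the shift-by-one next-token convention are common PyTorch practice assumed here, not details taken from the paper.

```python
# Illustrative sketch of the loss in Eq. (6): cross-entropy over the
# vocabulary, computed only on generated (answer) tokens. The -100
# ignore-index masking is an assumed convention, not the authors' code.
import torch
import torch.nn.functional as F

def caption_loss(logits, token_ids, prompt_len):
    # logits:     (B, L, vocab) outputs of f_llm->vocab at each position
    # token_ids:  (B, L) ground-truth token ids h(G) (prompt + answer)
    # prompt_len: (B,) number of prompt tokens per sample (not supervised)
    labels = token_ids.clone()
    for b in range(labels.shape[0]):
        labels[b, :int(prompt_len[b])] = -100     # mask instruction/prompt tokens
    # Standard next-token shift: position i predicts token i+1.
    shifted_logits = logits[:, :-1, :].contiguous()
    shifted_labels = labels[:, 1:].contiguous()
    return F.cross_entropy(shifted_logits.view(-1, shifted_logits.size(-1)),
                           shifted_labels.view(-1),
                           ignore_index=-100)
```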
For each input point cloud \ud835\udc43\u2208R\ud835\udc5b\u00d7\ud835\udc51, the number of point \ud835\udc5bis 8192, and the dimension \ud835\udc51is 6. We default point clouds Table 2: Generative 3D object classification results on the ModelNet40 test split and Objaverse. The accuracy (%) under the Instruction-typed (I) prompt \u201cWhat is this?\u201d and the Completion-type (C) prompt \u201cThis is an object of\u201d are reported. The bold and underline indicate the best and second best results, respectively. ModelNet40 Objaverse Model Reference LLM Size Trainable Params Input (I) (C) Average (I) (C) Average Average InstructBLIP-7B [11] NeurIPS,23 7B 0.20B Single-V. Img. 19.53 31.48 25.51 45.00 42.00 43.50 34.50 InstructBLIP-13B [11] NeurIPS,23 13B 0.20B Single-V. Img. 25.97 31.40 28.69 37.00 31.50 34.25 31.47 LLaVA-7B [32] NeurIPS,23 7B 7.03B Single-V. Img. 39.75 39.67 39.71 49.50 50.50 50.00 44.86 LLaVA-13B [32] NeurIPS,23 13B 13.03B Single-V. Img. 37.12 36.06 36.59 53.00 50.50 51.75 44.17 3D-LLM [20] NeurIPS,23 13B 3D Obj. + Mul.-V. Img. 49.00 41.50 45.25 45.25 Point-Bind LLM [18] arXiv,23.9 7B 3D Point Cloud 51.90 39.71 45.81 6.00 4.50 5.25 25.53 PointLLM-7B [51] arXiv,23.8 7B 7.01B 3D Point Cloud 53.44 51.82 52.63 55.00 51.00 53.00 52.82 PointLLM-13B [51] arXiv,23.8 13B 13.01B 3D Point Cloud 53.00 52.55 52.78 56.50 51.50 54.00 53.39 ShapeLLM-7B [39] arXiv,24.2 7B 7.04B 3D Point Cloud 53.08 54.50 53.79 ShapeLLM-13B [39] arXiv,24.2 13B 13.04B 3D Point Cloud 52.96 54.00 53.48 0.05B 61.75 59.97 60.86 60.00 60.50 60.25 60.56 MiniGPT-3D 2.7B (47.8M) 3D Point Cloud (+8.31) (+7.42) (+7.78) (+3.5) (+9.00) (+5.75) (+6.77) Table 3: 3D object captioning results on Objaverse. The results are from human evaluation, GPT-4 evaluation, and traditional metrics. The bold and underline indicate the best and second best results, respectively. Model Reference LLM Size Trainable Params GPT-4 Sentence-BERT SimCSE Human Evaluation Correctness Hallucination \u2193 Precision InstructBLIP-7B [11] NeurIPS,23 7B 0.20B 45.34 47.41 48.48 2.56 0.77 76.99 InstructBLIP-13B [11] NeurIPS,23 13B 0.20B 44.97 45.90 48.86 2.58 1.13 69.56 LLaVA-7B [32] NeurIPS,23 7B 7.03B 46.71 45.61 47.10 2.76 0.86 76.30 LLaVA-13B [32] NeurIPS,23 13B 13.03B 38.28 46.37 45.90 2.43 0.86 73.97 3D-LLM [20] NeurIPS,23 13B 33.42 44.48 43.68 1.77 1.16 60.39 PointLLM-7B [51] arXiv,23.8 7B 7.01B 44.85 47.47 48.55 3.04 0.66 82.14 PointLLM-13B [51] arXiv,23.8 13B 13.01B 48.15 47.91 49.12 3.10 0.84 78.75 ShapeLLM-7B [39] arXiv,24.2 7B 7.04B 46.92 48.20 49.23 ShapeLLM-13B [39] arXiv,24.2 13B 13.04B 48.94 48.52 49.98 0.05B 57.06 49.54 51.39 3.50 0.71 83.14 MiniGPT-3D 2.7B (47.8M) (+8.12) (+1.02) (+1.41) (+0.40) (+0.05) (+1.00) without color to black. For a fair comparison, we adopt the identical versions models of GPT-4 [35] (\u201cgpt-4-0613\u201d) and ChatGPT [34] (\u201cgpt-3.5-turbo-0613\u201d) as our evaluation tools, like prior works [39, 51]. We choose multiple SOTA 3D-LLMs [18, 20, 39, 51] and two popular open-source 2D-LLMs [11, 32] as our baselines. 4.2 Generative 3D Object Classification We conduct the generative 3D object classification tasks [51] on ModelNet40 [50] and Objaverse [12] datasets to assess MiniGPT3D\u2019s categorical cognitive ability. Settings. For a fair comparison, we utilize the classification evaluation settings similar to prior works [39, 51]. We employ identical prompts: the Instruction-typed (I) prompt \u201cWhat is this?\u201d and the Completion-type (C) prompt \u201cThis is an object of\u201d. 
Point clouds and these prompts are fed into our MiniGPT-3D, outputting textual responses. For close-set zero-shot classification on ModelNet40, ChatGPT processes the text responses of MiniGPT-3D to select predicted categories from 40 ModelNet40 classes. For open-vocabulary classification on Objaverse, GPT-4 is employed as an evaluator to determine whether MiniGPT-3D\u2019s text response refers to the same category as the ground-truth caption. Results. Experimental results are shown in Table 2. We achieve SOTA performance on all classification benchmarks using only one RTX 3090. Specifically, compared to the best baseline, ShapeLLM [39], we achieve significant improvements of 7.78% and 5.75% in average accuracy on ModelNet40 and Objaverse datasets, respectively. Unlike other methods using LLM (7B or 13B) that require fine-tuning on 8 A100 or 8 A800 for hundreds of total GPU-hours, our MiniGPT3D only utilizes a 2.7B LLM and trains 47.8M parameters on a single RTX 3090 GPU in 27 hours. These demonstrate the superiority and efficiency of our MiniGPT-3D, which leverages the powerful priors from 2D-LLMs to build 3D-LLM. Additionally, we observe that MiniGPT-3D exhibits the best performance in recognizing 3D objects of the ModelNet40 dataset unused during training, indicating its stronger generalization ability compared to other methods. Furthermore, even if using different prompts ( I and C) on the Objaverse dataset, MiniGPT-3D demonstrates highly consistent classification performance compared to other 3D-LLMs that have a 4% accuracy gap, showcasing its robustness for diverse prompts. 4.3 3D Object Captioning To assess the model\u2019s understanding of 3D object details, we perform the 3D object captioning task. Settings. For a fair comparison, we follow the evaluation settings of prior works [39, 51]. We use the prompt \u201cCaption this 3D model in detail\u201d. We adopt three distinct evaluation methods: human evaluation, GPT-4 [35] evaluation, and traditional metric evaluation. In Table 4: Qualitative comparisons. The classification and caption results of models on ModelNet40 and Objaverse are presented. Our MiniGPT-3D generates more detailed and insightful responses compared to other baselines. Samples 1, 2 Prompt What is this? Briefly caption this 3D model. Ground Truth Laptop A cartoon black monster like a dragon InstructBLIP-13B [11] symbol letter l a black lizard with a sharp tooth in a dark room LLaVA-13B[32] A small, grainy, black and white letter j. A 3D model of a dark, menacing dragon. 3D-LLM [20] A black and white tiger with long legs, standing on its hind leg. Point-Bind LLM [18] This is a laptop computer. The 3D model features a large, ornate gargoyle with a horned helmet, sitting on top of a building. PointLLM-13B [51] The 3D model represents a notebook computer, typically a laptop. The 3D model depicts a menacing black dragon, with its mouth opened wide revealing a row of sharp teeth. MiniGPT-3D This is a 3D model of a sleek, black laptop. The laptop is rectangular in shape and has a glossy finish, suggesting a high-quality build. It has a flat screen, which is a common feature in modern laptops, and it\u2019s equipped with a keyboard and a touchpad for user interaction. The laptop appears to be in good condition, suggesting it could be used for various tasks such as work, entertainment, or education. This is a 3D model of a fearsome, black-colored dinosaur. The dinosaur possesses an aggressive stance, with sharp claws extending from its arms and legs. 
It has long black fangs that seem ready to attack. Table 5: Ablation on training process. Row No. Stage I Stage II Stage III Stage IV Acc. 1 \u2713 39.10 2 \u2713 \u2713 55.92 3 \u2713 \u2713 \u2713 59.10 4 \u2713 \u2713 \u2713 \u2713 60.56 5 \u2713 \u2713 \u2713 52.81 6 \u2713 \u2713 \u2713 58.46 7 \u2713 \u2713 \u2713 47.93 Table 6: Ablation on 2D priors from 2D-LLM. Modality Projector Norm and LoRA for LLM Acc. 49.04 \u2713 57.44 \u2713 57.86 \u2713 \u2713 58.46 Table 7: Ablation on stages using MQE. Stage I Stage II Stage III Stage IV Acc. \u2713 \u2713 \u2713 \u2713 58.83 \u2713 \u2713 \u2713 60.25 \u2713 \u2713 59.50 \u2713 60.56 human evaluation, volunteers evaluate the model using standardized processes from PointLLM [51]. Specifically, focusing on object attributes (such as type, color, material, etc.), volunteers visually assess objects and assign correctness scores and hallucination scores to captions. Correctness measures model accuracy in describing attributes, while hallucination evaluates fabricated details\u2019 severity. Each attribute, correct or hallucinated, receives a point. Precision is calculated as the ratio of correct information in model-generated content. The Inter-Annotator Agreement score is 0.89 on ICC1k, indicating volunteers\u2019 high consistency in cognitive understanding and scoring criteria. GPT-4 evaluates semantic similarity between our model\u2019s output and manually annotated captions. In traditional metric evaluation, like prior works [39, 51], we use data-driven metrics like Sentence-BERT [41] and SimCSE [17], instead of BLEU1 [37], ROUGEL [28], and METEOR [3], because the latter lack sufficient capabilities in semantic evaluation. Results. As shown in Table 3, our MiniGPT-3D achieves SOTA performance on multiple metrics. Specifically, MiniGPT-3D outperforms ShapeLLM-13B [39], by a large margin of 8.12 on the GPT-4 evaluation score, setting new SOTA with only 2.7B LLM, indicating robust 3D detail comprehension. Also, compared to ShapeLLM-13B, MiniGPT-3D surpasses 1.02 and 1.41 on Sentence-BERT and SimCSE metrics, respectively, achieving new SOTA with its remarkable ability to generate accurate captions matching ground truth. Human evaluation further reveals MiniGPT-3D\u2019s superior correctness and precision scores compared to baselines. Notably, even with a 2.7B LLM, MiniGPT-3D exhibits a hallucination score comparable to SOTA, surpassing larger 13B LLM-based methods. These outstanding results showcase MiniGPT-3D\u2019s fine-grained understanding of 3D objects, inheriting the cognitive capabilities of 2D-LLM. 4.4 Qualitative Results Figure 1(e) qualitatively shows the MiniGPT-3D\u2019s powerful ability to perceive 3D object details. Our MiniGPT-3D precisely extracts information from 3D objects, encompassing categories, colors, shapes, materials, and internal component relationships. Additionally, MiniGPT-3D can perform reasonable reasoning based on object cues, such as potential occurrence periods and locations. Figure 1(f) further demonstrates MiniGPT-3D\u2019s comprehension of 3D object information in open-ended dialogues. MiniGPT-3D accurately outputs 3D object-related world knowledge, showcasing its extensive textual knowledge inherited from LLMs. In sample 1 of Figure 4, our MiniGPT-3D successfully recognizes the shape, screen, and keyboard of a laptop, compared to other methods. Furthermore, it can deduce the potential usage of this 3D object. 
In the more complex sample 2 of Figure 4, our MiniGPT-3D demonstrates superior understanding capabilities of 3D objects by recognizing additional features like the dinosaur\u2019s sharp claws and inferring its potential action intentions, compared to other methods. 4.5 Ablation Studies In this section, we conduct ablation studies to investigate various model design options. Herein, we report the total average accuracy of MiniGPT-3D on the generative classification benchmark. Table 8: Ablation on finetuned modules in Q-Former. LoRA LoRA Norm Acc. Q, K, V Dense 58.18 \u2713 59.85 \u2713 \u2713 59.97 \u2713 \u2713 \u2713 60.14 \u2713 \u2713 60.56 Table 9: Ablation on the number of query experts. Number Acc. 1 59.19 3 59.66 6 59.14 8 60.56 10 59.85 Table 10: Ablation on the point cloud projection layers. Number of layers Acc. 1 57.02 2 60.56 3 59.20 Table 11: Ablation on router type of MQE. Type Acc. Constant Router 60.10 Sparse Router 60.56 Soft Router 60.31 Table 12: Ablation on trained modules in stage IV. MQE Norm. & LoRA for Q-Former Modality Projector Norm. & LoRA for LLM MLP Acc. \u2713 \u2713 \u2713 \u2713 \u2713 58.93 \u2713 \u2713 \u2713 \u2713 59.93 \u2713 \u2713 \u2713 59.02 \u2713 \u2713 59.64 \u2713 60.56 4.5.1 Training process . We conduct ablation study to validate the efficacy of our four-stage training strategy. The results in Table 5 highlight the optimal performance achieved by our approach. Specifically, comparing Row #4 vs. #6, we observe that the first stage bridges knowledge between 2D-LLM and 3D encoder, enabling smoother semantic transitions across different dimensional spaces. Comparing Row #4 vs. #5, we note that the second training stage which involves using easy tasks to adapt the knowledge of the 2D-LLM to the 3D space, allows the model to focus on enhancing cognitive capabilities in subsequent stages. Comparing Row #4 vs. #7, the third training stage utilizes more challenging tasks to reinforce the newborn 3D cognitive abilities, providing a reliable semantic context for the final stage to train MQE. Comparing Row #4 vs. #3, the inclusion of the fourth stage, dedicated to training the MQE, enables each query expert to acquire unique knowledge, further enhancing MiniGPT-3D\u2019s understanding of 3D objects. 4.5.2 2D priors from 2D-LLM. We conduct ablation study to varify the effectiveness of the 2D priors from 2D-LLM, as detailed in Table 6. Since dropping any pre-trained weights of 2D-LLM would make the first training stage infeasible, all cases of this ablation study are just trained through stages II to IV. We find that removing any of 2D-LLM weight degrades performance, and discarding more pre-trained weights of 2D-LLM causes an up to 9.4% accuracy drop. These results highlight the crucial role of 2D-LLM knowledge in boosting 3D-LLM performance. Using 2D-LLM modules facilitates cost-efficient training of 3D-LLM even on consumer GPUs like RTX 3090 GPU, enhancing accessibility for the community. 4.5.3 Training stages using MQE. We further investigate the impact of training MQE in different stages, with detailed results presented in Table 7. Our results indicate that introducing MQE in only stage IV achieves optimal performance. The I-III stages enable the model to learn enough semantic features, paving the way for MQE to adaptively select useful information in stage IV. 4.5.4 Fine-tuned modules in Q-Former. Employing PEFT methods to fine-tune Q-Former can better align point features with LLM, avoiding expensive computation. 
As outlined in Table 8, fine-tuning the Query, Key, and Value layers with LoRA [21], along with normalization layers, maximizes the potential of Q-Former. Notely, we efficiently fine-tune the 105M-parameter Q-Former using only 0.7M parameters, achieving a 2.38% accuracy improvement compared to the frozen Q-Former. 4.5.5 Number of query experts. Within MQE, each query expert holds unique knowledge, facilitating extraction of point cloud features. Our experiments, in Table 9, reveal that 8 query experts yield optimal performance. Insufficient experts may compromise information extraction, while excessive ones may affect cooperation among experts. Notably, single-expert, i.e. without MQE, results in a 1.37% accuracy drop, highlighting the superiority of MQE. 4.5.6 Point cloud projection layer. The point cloud projection layer bridges point cloud features with the 2D semantics of frozen Q-Former, while ensuring dimensional alignment. As shown in Table 10, our experiments demonstrate that two MLP layers offer the optimal setup, as excessive or insufficient layers can result in information loss, compromising overall performance. 4.5.7 Router type of MQE. The routing mechanism in MQE regulates the cooperation among query experts. The constant router [25] assigns static average weights, while the soft router [38] dynamically assigns weights during training. The sparse router [43] selects the top two experts based on the dynamic weights provided by the soft router. We explore these router types in Table 11, finding that the sparse router, which dynamically assigns weights and selects the most promising experts, maximizes the capabilities of MQE. 4.5.8 Trained modules in stage IV. In the training stage IV, only MQE is trained to enable each query expert to learn knowledge within a stable semantic context. Our experiments in Table 12 investigate the integration of various training modules. The results indicate that stage IV is to adaptively aggregate features of different experts, with knowledge gained from I-III stages frozen. Losing any frozen knowledge causes information loss, demonstrating the MQE is specifically designed for information aggregation. 5 CONCLUSION In this paper, we present MiniGPT-3D, a efficient and powerful 3D-LLM, requiring the training of only 47.8M learnable parameters within 26.8 hours on one single NVIDIA RTX 3090 GPU. Specifically, we propose a novel four-stage training strategy that gradually aligns 3D point cloud features with LLM using 2D priors from 2D-LLM. Additionally, we design the mixture of query experts, introducing MoE to queries, to adaptively aggregate multiple features. Extensive experiments show the superiority of MiniGPT-3D in 3D point cloud understanding and question answering. Discussion. MiniGPT-3D\u2019s limitations lie in its training on object-level datasets, preventing it from understanding large-scale point clouds. Moreover, like existing 3D-LLMs, our MiniGPT-3D solely focuses on comprehending static 3D objects, lacking the capacity to recognize the actions of dynamic objects. We will extend our 3D-LLM building approach to autonomous driving scenarios."
19
+ }
intro_8K/test_introduction_short_2405.01418v1.json ADDED
@@ -0,0 +1,17 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01418v1",
3
+ "title": "GTX: A Write-Optimized Latch-free Graph Data System with Transactional Support",
4
+ "abstract": "This paper introduces GTX a standalone main-memory write-optimized graph\nsystem that specializes in structural and graph property updates while\nmaintaining concurrent reads and graph analytics with snapshot isolation-level\ntransactional concurrency. Recent graph libraries target efficient concurrent\nread and write support while guaranteeing transactional consistency. However,\ntheir performance suffers for updates with strong temporal locality over the\nsame vertexes and edges due to vertex-centric lock contentions. GTX introduces\na new delta-chain-centric concurrency-control protocol that eliminates\ntraditional mutually exclusive latches. GTX resolves the conflicts caused by\nvertex-level locking, and adapts to real-life workloads while maintaining\nsequential access to the graph's adjacency lists storage. This combination of\nfeatures has been demonstrated to provide good performance in graph analytical\nqueries. GTX's transactions support fast group commit, novel write-write\nconflict prevention, and lazy garbage collection. Based on extensive\nexperimental and comparative studies, in addition to maintaining competitive\nconcurrent read and analytical performance, GTX demonstrates high throughput\nover state-of-the-art techniques when handling concurrent transaction+analytics\nworkloads. For write-heavy transactional workloads, GTX performs up to 11x\nbetter than the best-performing state-of-the-art systems in transaction\nthroughput. At the same time, GTX does not sacrifice the performance of\nread-heavy analytical workloads, and has competitive performance similar to\nstate-of-the-art systems.",
5
+ "authors": "Libin Zhou, Yeasir Rayhan, Lu Xing, Walid. G. Aref",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "cs.DB",
9
+ "cats": [
10
+ "cs.DB",
11
+ "H.2.4"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "Temporal AND Graph",
15
+ "gt": "Managing and querying dynamic graph datasets have become in- creasingly important in many real-world applications, e.g., in rec- ommendation systems, fraud detection, threat detection, geo-spatial This work is licensed under the Creative Commons BY-NC-ND 4.0 International License. Visit https://creativecommons.org/licenses/by-nc-nd/4.0/ to view a copy of this license. For any use beyond those covered by this license, obtain permission by emailing [email protected]. Copyright is held by the owner/author(s). Publication rights licensed to the VLDB Endowment. Proceedings of the VLDB Endowment, Vol. 18, No. 1 ISSN 2150-8097. doi:XX.XX/XXX.XX navigation, e-commerce, knowledge graph applications, and risk management [8\u201310, 37, 43, 51, 57, 58, 67]. Graphs continuously change [12, 36], where graph write operations can reach millions per second, e.g., as in social networks [1, 4, 18, 51, 56, 59]. Moreover, applications require transactional atomicity when updating multi- ple vertices and edges [51]. For instance, in ByteDance services [7], when a user creates an article, a transaction atomically inserts the 3 edges (user, article), (user, tag), (article, tag). Without transactional support, updates can lead to data corruption [23]. For example, they can violate the reciprocal consistency that requires atomically updating two edges between a pair of vertices [23, 66]. At the same time, dynamic graphs need to support graph analyt- ics, e.g., [24, 26, 35, 51, 73]. ByteDance [7] detects fraud and manages risk via subgraph pattern matching operations that may traverse multiple hops while the graph is being updated concurrently [51]. Without transactional atomicity, consistency, and isolation, graph analytical algorithms, originally designed for static graphs, cannot be run [35], and may yield incorrect results [23, 26], and security vul- nerabilities [23, 65]. In fraud detection, a customer may be flagged wrongly. In computer networks, a suspicious authentication may pass security checks without being noticed [26]. Also, an access control system needs to update access permissions of roles and en- tities atomically [23, 65]. Without transactional guarantees, users may obtain incorrect permissions, and malicious users can exploit race conditions to trigger vulnerabilities. Although applications can build concurrency control protocols to achieve transactional isola- tion, these solutions are inefficient and complicated [23]. Graph sys- tems, e.g., [29, 45, 53] cannot run graph analytics concurrently with updates [35]. Several recent graph data systems, e.g., [26, 35, 73] support mixed transactions and analytics workloads. Experiments [35] show that performance of state-of-the-art trans- actional graph systems [26, 35, 73] suffers significantly given write- heavy workloads when following real-world update patterns. Many real-world scenarios exhibit power-law graphs with vertex degrees following power-law distribution [21, 22, 30, 63, 73], and super- vertices (hub vertices) [35, 51]) having large fanout. Also, real-world graph workloads have hotspots and their edge updates have tempo- ral locality [46]. These update patterns cause not only updates to the same vertex\u2019s adjacency list to be temporally close to each other, but also create congestion of large amounts of concurrent updates at the same vertexes (i.e., hub vertices becoming hotspots [51]). 
Vertex-centric locking, and the lack of lock-free synchronization are believed to cause significant degradation in performance, espe- cially in terms of transaction throughput [35]. Efforts have been dedicated into developing lock/latch-free data structures and indices, e.g., [13, 14, 47, 49, 50, 52, 64, 72] that allow for high concurrency. They rely on atomic hardware primitives, e.g., fetch_add and compare_and_swap (CAS) to serialize updates, arXiv:2405.01418v1 [cs.DB] 2 May 2024 but, by themselves, these primitives do not support the notion of transactions, but serve as building blocks to achieve that [49]. This paper studies how atomic primitives and multiversion con- currency control (\ud835\udc40\ud835\udc49\ud835\udc36\ud835\udc36) [71] can be applied to realize a transac- tional graph data system. We introduce GTX, a latch-free write- optimized main-memory transactional graph system to support highly concurrent update transactions and graph analytics. Latch-free indexes, e.g., Bw-Tree [49], assume that a separate transaction layer handles the transactional concurrency control. Thus, Bw-Tree does not prevent write-write conflicts or provide transaction atomicity and consistency. Extending a latch-free index to be a standalone system requires designing an efficient transaction protocol. One could extend Bw-Tree as a key-value store with locks. However, locks may incur longer blocking, and would reduce the advantage of using non-blocking atomic operations. Having a lock per object induces high memory overhead. Alternatively, one can abort a transaction when it fails to execute a \ud835\udc36\ud835\udc34\ud835\udc46(hence, a conflict). However, this would abort all transactions that write concurrently to the same memory blocks (although not necessarily updating the same object), and hence reducing concurrency and scalability. To address these challenges, GTX has a cache-friendly latch-free \ud835\udc40\ud835\udc49\ud835\udc36\ud835\udc36dynamic graph storage. It is the first to combine pointer- based and sequential delta-chain storage for efficient edge lookup and adjacency list scans. It uses vector-based delta-chains indexes to locate target edges efficiently. GTX\u2019s block allocation protocol is optimized for power-law edge distributions and hub vertices that are prominent in real-world graphs [35, 73]. Also, GTX of- fers an efficient transaction protocol that is well-suited for the underlying latch-free data structure and dynamic graph workload. GTX eliminates vertex- and edge-centric locks, and has an adaptive write-write conflict prevention protocol. GTX adapts to the tempo- ral locality and hotspots of edge updates by increasing concurrency for \u201chot\" adjacency lists with higher memory cost, and reducing concurrency for the \u201ccold\" blocks. To exploit the high concurrency provided by latch-free \ud835\udc40\ud835\udc49\ud835\udc36\ud835\udc36and adaptive concurrency control, GTX has a low-latency hybrid group commit protocol and a cache- friendly latch-free \ud835\udc40\ud835\udc49\ud835\udc36\ud835\udc36garbage collection protocol. GTX is also efficient in computing resources with a cooperative worker thread design. Except for a single commit manager thread, GTX has no other service threads. GTX worker threads perform garbage col- lection, failed transaction aborts, and commit other transactions\u2019 updates while executing the thread\u2019s own transactions. 
Experiments demonstrate that GTX achieves up to 2x higher transaction throughput in random order power-law graph edge insertions, and up to 11x higher transaction throughput in real- world timestamp-ordered power-law graph edge insertions than the best competitor. For concurrent transaction and graph analytics workloads, GTX has up to 4.3x higher throughput in edge update transactions for write-heavy workloads, and 2.8x higher throughput for read-write balanced workloads than competitor systems. The trade-off is that GTX takes between 0.8x to 1.8x longer time to execute most of the concurrent graph analytics, but still performs reasonably well for the majority of the workloads. The contributions of this paper are as follows. We introduce GTX, a write-optimized transactional graph system with up to 11X better write transaction throughput over its best competitor system. GTX has a latch-free graph storage that leverages existing techniques of latch-free indexes, and is optimized for adjacency list storage of dynamic graphs. GTX combines pointer-based delta-chains storage and sequential storage to provide both efficient single edge lookup and adjacency list scans. GTX has a high-throughput transaction protocol suited for latch-free storage. For power-law graphs without temporal localities and hotspot, GTX transactions achieve up to 2.27 higher read-write transaction throughput than its competitors. GTX has a delta chains-based adaptive \ud835\udc40\ud835\udc49\ud835\udc36\ud835\udc36, hybrid commit protocol, and decentralized epoch-based garbage collector. GTX eliminates the need for vertex- or edge-centric locking commonly used in graph systems. GTX dynamically adapts to real-world update pat- terns, temporal localities, and hotspots, where GTX achieves up to 11x better read-write transaction throughput for these work- loads. For mixed-workloads of read-write transactions and graph analytics, GTX achieves up to 4.3x higher read-write transactional throughput than the second best competitor while maintaining competitive graph analytics latency. The rest of this paper proceeds as follows. Section 2 discusses related work for dynamic and transactional graph data systems. Section 3 introduces GTX and its components. Section 4 presents GTX\u2019s storage layer including the delta chains architectures for handling vertex and edge updates. Section 5 presents GTX\u2019s transac- tion protocol. Section 6 discusses GTX\u2019s resource management and garbage collection. Section 7 presents an extensive experimental study. Finally, Section 8 concludes the paper.",
16
+ "main_content": "Many graph data systems have been introduced over the years for graph data management and analytics, e.g., [2, 3, 6, 16, 17, 19, 20, 24, 27\u201329, 31\u201334, 38, 42, 45, 48, 51, 53, 54, 60\u201362, 68\u201370]. Standalone transactional graph systems execute read-write transactions and analytics on a dynamic graph concurrently, e.g., [26, 35, 73]. LiveGraph [73], a main-memory graph data system, supports graph analytics and read-write transactions. LiveGraph\u2019s efficient analytics is due to the purely sequential adjacency list scan. It uses \ud835\udc40\ud835\udc49\ud835\udc36\ud835\udc36to store edge versions of the same source vertex consecutively in the same memory block. Read-write transactions acquire exclusive locks on vertexes to append a new update log to update an edge. Readers do not need locks, and directly scan the memory block utilizing transaction timestamps to read the visible edge versions. LiveGraph\u2019s sequential storage of adjacency lists reduces random memory access, facilitates prefetching, and improves cache performance by eliminating cache misses from pointer chasing [26, 35, 73]. LiveGraph\u2019s linear multi-versioned storage lacks indexes within each edge block, and scans the adjacency list for single edge lookups and updates (that need to \u201cinvalidate\" old edge versions). Also, LiveGraph\u2019s vertex-centric locks block concurrent transactions from writing to the same vertex\u2019s adjacency list even if they may be updating different edges. Teseo [26] stores dynamic large sparse arrays that represent a graph inside leaf nodes of a \u201cfat\" B+-tree to provide both efficient updates and adjacency list reads. Teseo has a clustered sparse index on keys of each sparse array segment to support fast point lookup of vertices and edges, and secondary indexes on vertex locations in the segments to initiate adjacency list scans. Teseo uses a hybrid latch that combines conventional and optimistic latches to support single-writer and multi-reader semantics, and reduces concurrency 2 control overhead in read-intensive workloads. Each read-only or read-write transaction can acquire a latch either conventionally or optimistically according to the desired operation and transaction type. However, when a segment inside a sparse array becomes full, Teseo requires a service thread to determine a rebalance window of multiple segments in the sparse array, lock them, and redistribute entries among these segments. Also, a sparse array resize either creates a new sparse array as the leaf or splits the leaf into two new sparse arrays. These operations require locking multiple memory blocks exclusively. Thus, transactions that do not conflict with normal operations may conflict in locking neighbor segments that may cause threads to block and stall. Also, multiple threads may compete in locking segments for rebalancing. GTX handles this issue by designating one worker thread to conduct garbage collection on each adjacency list via GTX\u2019s state-based block protection protocol. Sortledton [35] is a graph data structure that is optimized for the access patterns popular in graph analytics. It stores sorted adjacency lists in large memory blocks for better read performance at the expense of maintaining the sort during updates. Sortledton uses a concurrent unrolled skiplist [55] to store blocks of edges (versions) in sorted sets, where each skiplist element is an edge block. 
Sortledton supports fast scan and set operations on adjacency lists while concurrently updating edges. It uses a read-write latch per vertex, and restricts that transactions need to know their write sets and acquire all latches in advance. Its transactions reads have to be completed before starting any writes. In contrast, GTX has an adaptive concurrency control protocol with evolving \u201clocking\" granularity based on the workload. Latchfree systems and structures, e.g., [14, 47, 49, 50, 52, 64, 72] focus mostly on atomic operations, e.g., atomic load, \ud835\udc36\ud835\udc34\ud835\udc46, and \ud835\udc53\ud835\udc52\ud835\udc61\ud835\udc50\u210e_\ud835\udc4e\ud835\udc51\ud835\udc51 to serialize concurrent operations. GTX shares similar ideas with the Bw-Tree [49, 64]. GTX uses atomic operations to resolve conflicts, and delta-based multi-versioning to provide lock-free reads. GTX allocates consecutive memory blocks for adjacency lists. It amortizes the cost of single delta allocations, and ensures that deltas are stored sequentially to reduce random access. 3 OVERVIEW OF GTX GTX addresses the problem of managing and querying dynamic labeled property graphs [40], where edges can contain labels, e.g., as in [73]. Graph operations and analytics are executed under readwrite or read-only transactions with snapshot isolation [15, 71]. Transactions support vertex create and update, edge insert, delete and update, single vertex read, single edge read, and adjacency list scan. Graph analytics invoke adjacency list scans iteratively with a read-only transaction. GTX supports graphs with uniform and power-law edge distribution but is optimized for power-law graphs. GTX can run read-write transactions and analytics concurrently. Fuchs et al. [35] have a microbenchmark to evaluate the most important access patterns in graph workloads. There are major takeaways from their experiments. (1) Storing neighbors of all vertices together (sequential vertex access) is beneficial but not necessary. (2) Sequential vertex access only makes up a small fraction of graph memory access. (3) Sequential access to a vertex\u2019s adjacency list enables better performance. (4) A dense vertex identifier domain has better performance for graph analytics than a sparse domain. (5) Some graph algorithms access vertices in random order. Thus, an index to locate the vertex entry is important. Based on these findings, we make several design decisions. GTX uses an adjacency list-like structure and stores edges (versions) of a vertex sequentially in memory blocks. GTX uses a vertex index for fast access, and an atomic integer on a dense domain to manage vertices. For atomicity and consistency, graph updates and analytics workloads are executed as read-write or read-only transactions. GTX\u2019s graph storage has a vertex index. Each index entry has 2 pointers that point to the head of a vertex delta chain, and the first edge label block of this vertex, respectively. Edge label block stores pointers to edge-deltas blocks of different labels. Vertex delta chain is a linked list of vertex versions while an edge-deltas block stores all edge-deltas (versions) of the edges of the same source vertex with the same label. Figures 1a and 1b give GTX\u2019s graph structure, a 2level vertex index-based graph storage, and 1 vertex entry from the 2nd level vector with this vertex\u2019s version and edge storage. This design is inspired by LiveGraph [73]. 
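As a summary of the storage layout just described (a vertex index entry pointing to a vertex delta chain and to per-label edge-deltas blocks, as in Figures 1a/1b), here is a logic-level Python sketch. Field names follow the text where possible; the concrete types and defaults are illustrative assumptions, and the real system manipulates these structures with latch-free atomic operations rather than plain Python objects.

```python
# Logic-level sketch of GTX's per-vertex storage; simplified Python types.
from dataclasses import dataclass, field
from typing import Optional, Dict, List

@dataclass
class VertexDelta:                 # one vertex version (delta chain node)
    creation_ts: int
    properties: bytes
    previous_version: Optional["VertexDelta"] = None

@dataclass
class EdgeDelta:                   # one edge version inside an edge-deltas block
    dst: int
    delta_type: str                # "insert" | "update" | "delete"
    creation_ts: int
    invalidation_ts: int
    previous_offset: int           # previous delta on the same delta chain
    previous_version: int          # previous version of the same edge
    data_offset: int = 0
    data_size: int = 0

@dataclass
class EdgeDeltasBlock:             # all edge versions of (src vertex, label)
    combined_offset: int = 0       # high 32 bits: property data, low 32: deltas
    deltas: List[EdgeDelta] = field(default_factory=list)
    delta_chains_index: List[int] = field(default_factory=list)  # chain heads
    state: str = "normal"          # normal | overflow | consolidation | installation

@dataclass
class VertexEntry:                 # one slot in the 2nd-level vertex index vector
    delta_chain_head: Optional[VertexDelta] = None
    edge_label_block: Dict[str, EdgeDeltasBlock] = field(default_factory=dict)
```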
GTX manages the following tables that aid its transaction manager and garbage collector: The group commit array, the block access table, the timestamp table, and the distributed transaction table. These global tables allocate one entry per worker thread. Each worker thread has its own local garbage queue and all worker threads share a hybrid block manager. 4 GTX STORAGE 4.1 Vertex-centric Index GTX uses a dense vertex domain and provides an efficient vertex index. Figure 1b illustrates GTX\u2019s vertex-centric storage. Having a vector-based vertex table is beneficial in graph storage [35]. GTX has a latch-free two-level vector-based vertex index (Figures 1a 1 and 2). The 1st level stores pointers to the 2nd level in a fixed-size array. The 2nd level has vertex index vectors that are of the same size (but can be extended to be of growing sizes). A transaction atomically fetch_adds a global offset variable, and uses the return value as the ID of a new vertex, say \ud835\udc63. The corresponding 2nd level entry of the vertex index points to \ud835\udc63and its adjacency list versions. The initial vertex index has only 1 pointer in its 1st level and allocates 1 vector. When the vector runs out of entries, the transaction that creates the first vertex that should reside in the next vector allocates the next 2nd level vector, and claims the first entry of this vector. Other concurrent transactions that create vertices that need to be in the next vector will wait until the corresponding pointer in the 1st level becomes non-null. This allocation strategy amortizes the cost of allocating new vectors, and has \ud835\udc42(1) vertex lookup. Sortledton [35] has a similar strategy. To accommodate large numbers of vertices, we manage the reference to the 1st level array by an atomic pointer. When the 1st level array and all 2nd level vectors are full, the transactions that need to create new vertices will allocate a 2X larger 1st level array, copy all pointers to the new array, and allocates a new vector there. Then, it uses CAS to update the pointer to reference the new array. Failed CAS indicates that another transaction has allocated a new array, and thus other transactions should only reload the pointer. For simplicity we assume that the 1st level array is large and accommodates the 2nd level vectors of all vertices. Each vertex index entry stores an atomic pointer to the head of the vertex delta chain. Each vertex version is stored in a delta that points to the previous version, forming a delta chain. A 3 (a) GTX Highlevel Overview (b) Vertex-centric Storage Figure 1: GTX Storage (a) An overview of system structure. 1. 1st level array vertex index. 2. 2nd level vector vertex index (entry). 3. Vertex delta chain. 4. Edge label block. 5. Edge-deltas block. (b) Representation of (a)2-5 for one vertex, its versions (3) and its labeled adjacency list (4,5) Figure 2: Edge-Deltas Block Layout transaction uses CAS to install a new vertex version. While using a pointer-based delta chain is less efficient due to pointer chasing, random memory access and competing CAS [64], vertex updates are less frequent than edge updates [73]. Also, most transactions need access to only the latest version of a vertex, and will access it with a single pointer chase. Concurrent updates to the same vertex are less common. Thus, the negative effects of this design are minimal. In contrast, vertex delta chains is beneficial. Detecting write-write conflicts to the same vertex is simple. 
A failed CAS indicates a concurrent transaction has updated the vertex delta chain. Hence, the current transaction aborts. 4.2 Edge Label Block The 2nd pointer in each vertex index entry points to an edge label block to guide search in the edge-deltas block of a certain label. More fields in the label block and label entries are added to support latch-free operations, delta-chains index and edge-deltas block state protection. Due to space limitation, we omit details of how to handle latch-free label blocks. Each edge-deltas block has a delta-chains index, a version number, and a state variable (Detailed in Section 5). For simplicity, we assume the graph only contains 1 label. 4.3 Edge-Deltas Block Figure 2 illustrates an edge-deltas block and its delta-chains index. GTX\u2019s edge-deltas block has 4 sections. At the lowest address, the header contains the block\u2019s metadata. Then, a property data storage section stores each edge-delta\u2019s variable-sized property data that grows after the header from lower to upper address. Fixed-size edgedeltas grow from the opposite end of the block. Edge-deltas are logically linked in different edge-delta chains (Figure 2). Each block has a delta-chains index whose \ud835\udc56\ud835\udc61\u210eentry stores the offset of the \ud835\udc56\ud835\udc61\u210eedge-delta chain\u2019s head. The middle section is free space. The combined_offset combines the addresses of the latest edge-deltas and property data in a 64-bit integer. Since 64-bit is the largest word that can be atomically updated by the hardware without a latch, the combined_offset enables atomically allocating memory for both the edge-delta and its property in a single operation. Its higher 32 bits store the offset of the property data region and the lower 32 bits store the offset of the edge-deltas region. Each edge-delta takes 1 cache line (64 bytes) to avoid false sharing among concurrent writers. It captures the operation the edge-delta stands for, and the creation and invalidation timestamps for multi-versioning. An edgedelta\u2019s previous_offset is the key of the clustered edge-delta chains storage, and points to the previous edge-delta on the delta chain. At an edge-deltas block\u2019s creation time, GTX determines its edge-delta chain number, where transactions append edge-deltas to different delta chains using a hash function. Each edge-delta also stores the offset of the same edge\u2019s previous edge-delta to support efficient previous version lookup. Given a target edge \ud835\udc52, a reader transaction locates \ud835\udc52\u2019s edge-delta chain, and uses edge-deltas\u2019 previous_offset and previous_version offset to locate \ud835\udc52\u2019s visible version. Finally, GTX uses a hybrid storage scheme of each edge-delta\u2019s variable-sized property, where if the property size is small, e.g., \u226416 bytes, it is stored in the cache line-aligned edge-delta to save space and reduce random memory access, else it is stored at the data region. Figure 3: Transaction Table Partition of Thread 5 5 EPOCH-BASED TRANSACTION HANDLING GTX has a fixed number of worker threads that execute transactions and perform garbage collection. Each thread executes its own transactions while being cooperative, i.e., performing work on behalf of other transactions. For analytical workloads, GTX supports using additional OpenMP [5] threads that collectively execute a single read-only transaction managed by an internal worker thread. 
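The combined_offset of Section 4.3 above packs the property-data offset (high 32 bits) and the edge-delta offset (low 32 bits) into one 64-bit word so that space for a delta and its property can be reserved together. The arithmetic below is a single-threaded Python illustration of that packing; in GTX the update would be a single atomic fetch_add/CAS on the 64-bit word, which the sketch does not attempt to reproduce.

```python
# Illustration of combined_offset (Section 4.3): one 64-bit word tracks both
# regions, so a single addition reserves space for a delta and its property.
MASK32 = (1 << 32) - 1

def pack(data_offset: int, delta_offset: int) -> int:
    return (data_offset << 32) | delta_offset

def unpack(combined: int):
    return combined >> 32, combined & MASK32     # (data_offset, delta_offset)

def allocate(combined: int, delta_size: int, data_size: int):
    """Reserve delta_size bytes in the delta region and data_size bytes in the
    property region with one addition on the packed word (an atomic fetch_add
    in the real system). Returns the new word and the old per-region offsets
    at which the caller writes the new edge-delta and its property bytes."""
    new_combined = combined + (data_size << 32) + delta_size
    old_data, old_delta = unpack(combined)
    return new_combined, old_data, old_delta

combined = pack(0, 0)
combined, data_at, delta_at = allocate(combined, delta_size=64, data_size=32)
assert unpack(combined) == (32, 64)
```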
4 GTX supports \ud835\udc46\ud835\udc5b\ud835\udc4e\ud835\udc5d\ud835\udc60\u210e\ud835\udc5c\ud835\udc61\ud835\udc3c\ud835\udc60\ud835\udc5c\ud835\udc59\ud835\udc4e\ud835\udc61\ud835\udc56\ud835\udc5c\ud835\udc5b[15, 71] based on \ud835\udc40\ud835\udc49\ud835\udc36\ud835\udc36[71], and we find this sufficient for most graph workloads. GTX manages 2 global variables for transactions, a global read and a global write epochs. LiveGraph also manages those variables for \ud835\udc40\ud835\udc49\ud835\udc36\ud835\udc36[73]. To commit transactions, GTX uses a hybrid group commit [39] protocol. At creation, a transaction \ud835\udc61fetches the global read epoch, and uses its value as \ud835\udc61\u2019s local read timestamp. \ud835\udc61does not know its write timestamp: the time \ud835\udc61\u2019s updates logically get committed atomically until \ud835\udc61commits via the group commit manager. We use the following 3 latch-free fix-sized arrays for GTX\u2019s transactions to store objects that can be updated atomically. 1. The Distributed Transaction Table. In GTX, each worker thread executes transactions, and manages a horizontal partition of the transaction table. The worker thread maintains 2 attributes for every transaction, say \ud835\udc61that it manages: \ud835\udc60\ud835\udc61\ud835\udc4e\ud835\udc61\ud835\udc62\ud835\udc60and \ud835\udc5c\ud835\udc5d_\ud835\udc50\ud835\udc5c\ud835\udc62\ud835\udc5b\ud835\udc61. \ud835\udc60\ud835\udc61\ud835\udc4e\ud835\udc61\ud835\udc62\ud835\udc60records \ud835\udc61\u2019s current state: \ud835\udc3c\ud835\udc5b_\ud835\udc43\ud835\udc5f\ud835\udc5c\ud835\udc54\ud835\udc5f\ud835\udc52\ud835\udc60\ud835\udc60, \ud835\udc34\ud835\udc4f\ud835\udc5c\ud835\udc5f\ud835\udc61that are predefined 64-bit integers or a positive integer that represents a commit timestamp. \ud835\udc5c\ud835\udc5d_\ud835\udc50\ud835\udc5c\ud835\udc62\ud835\udc5b\ud835\udc61records the number of deltas created by \ud835\udc61. Figure 3 illustrates the transaction table partition for Worker Thread 5. Thread 5 has created 17 transactions and the table contains its 1517\ud835\udc61\u210etransactions\u2019 information. Before a worker thread \ud835\udc64executes a read-write transaction, \ud835\udc64generates a 64-bit unsigned transaction ID from \ud835\udc64\u2019s local counter and \ud835\udc64\u2019s thread ID. \ud835\udc64inserts \ud835\udc61in the table at an entry determined by the ID with \ud835\udc3c\ud835\udc5b_\ud835\udc43\ud835\udc5f\ud835\udc5c\ud835\udc54\ud835\udc5f\ud835\udc52\ud835\udc60\ud835\udc60\ud835\udc60\ud835\udc61\ud835\udc4e\ud835\udc61\ud835\udc62\ud835\udc60 and 0 \ud835\udc5c\ud835\udc5d_\ud835\udc50\ud835\udc5c\ud835\udc62\ud835\udc5b\ud835\udc61. If \ud835\udc61aborts, its \ud835\udc60\ud835\udc61\ud835\udc4e\ud835\udc61\ud835\udc62\ud835\udc60is set to \ud835\udc34\ud835\udc4f\ud835\udc5c\ud835\udc5f\ud835\udc61. If \ud835\udc61commits, its \ud835\udc60\ud835\udc61\ud835\udc4e\ud835\udc61\ud835\udc62\ud835\udc60is set by the commit manager to the commit timestamp. Thus, any integer value other than the pre-defined \ud835\udc3c\ud835\udc5b_\ud835\udc43\ud835\udc5f\ud835\udc5c\ud835\udc54\ud835\udc5f\ud835\udc52\ud835\udc60\ud835\udc60 and \ud835\udc34\ud835\udc4f\ud835\udc5c\ud835\udc5f\ud835\udc61values reflects \ud835\udc61\u2019s commit timestamp. \ud835\udc5c\ud835\udc5d_\ud835\udc50\ud835\udc5c\ud835\udc62\ud835\udc5b\ud835\udc61captures how many writes (deltas) \ud835\udc61has made and gets incremented as \ud835\udc61 creates deltas. 
When \ud835\udc64commits or aborts a delta, it will decrement the \ud835\udc5c\ud835\udc5d_\ud835\udc50\ud835\udc5c\ud835\udc62\ud835\udc5b\ud835\udc61by 1. Thus, \ud835\udc5c\ud835\udc5d_\ud835\udc50\ud835\udc5c\ud835\udc62\ud835\udc5b\ud835\udc61records how many \ud835\udc3c\ud835\udc5b_\ud835\udc43\ud835\udc5f\ud835\udc5c\ud835\udc54\ud835\udc5f\ud835\udc52\ud835\udc60\ud835\udc60 deltas \ud835\udc61has. More detail about \ud835\udc5c\ud835\udc5d_\ud835\udc50\ud835\udc5c\ud835\udc62\ud835\udc5b\ud835\udc61is explained in Sections 5.6, 5.7. Given GTX\u2019s transaction ID structure, a thread can find out the corresponding transaction table and transaction\u2019s status in constant time. Each transaction table partition functions as a circular array, and an entry can be reused if its \ud835\udc5c\ud835\udc5d_\ud835\udc50\ud835\udc5c\ud835\udc62\ud835\udc5b\ud835\udc61is 0. 2. Block Access and 3. Timestamp Tables. These tables capture information each thread wants the other concurrent threads to observe. Each thread is assigned an entry in both tables. When starting a new transaction, say \ud835\udc61, a thread fetches the global read epoch, and stores it in \ud835\udc61\u2019s timestamp table entry. \ud835\udc61can read all the latest versions of the vertices and edges that are created before and invalidated after \ud835\udc61\u2019s read timestamp. The timestamp table ensures no thread can garbage collect edge or vertex blocks that current transactions can read. When \ud835\udc61accesses an edge-deltas block, \ud835\udc61 calculates a block ID from the source vertex and label, and stores the block ID into \ud835\udc61\u2019s entry in the block access table. This way, concurrent threads can learn that \ud835\udc61is accessing that block, and hence the operations that require mutual exclusion on the block cannot be executed at this time. This design avoids writing to the shared memory object, and reduces cache invalidation across cores that using shared/exclusive locks would incur. 5.1 Block Protection Protocol GTX supports mutual exclusion on edge-deltas blocks. Inspired by [52], GTX has a block version protection protocol that does not use shared locks, and enables customized concurrency levels of the edge-deltas block. Each block has a state variable, and each worker thread registers block access in the block access table. The Block Protection protocol has 2 operations: register_access and change_state. When a transaction \ud835\udc61accesses an edge-deltas block \ud835\udc4f, \ud835\udc61calls register_access(b) to examine \ud835\udc4f\u2019s state. If \ud835\udc61\u2019s operation is compatible with \ud835\udc4f\u2019s state, \ud835\udc61stores \ud835\udc4f\u2019s ID in \ud835\udc61\u2019s entry in the block access table. Then, \ud835\udc61rechecks \ud835\udc4f\u2019s state. If it has not changed, \ud835\udc61executes the operation. If \ud835\udc4f\u2019s state is incompatible with \ud835\udc61\u2019s operation, \ud835\udc61unregisters its block access to \ud835\udc4fby storing 0 in the table, and retries later. After finishing the operation, \ud835\udc61unregisters \ud835\udc61\u2019s block access. A worker thread needing to change a block\u2019s state immediately changes the state to the new value. If the new state does not require mutual exclusion, the thread proceeds. Else, the thread scans the block access table. If it finds no other thread accessing the block, it executes the operations in the new state. 
Else, it continues scanning the block access table until \ud835\udc4fis \u201csafe\". The correctness of this protocol is guaranteed by checking the state twice when a thread registers the access. We define 4 states in GTX, namely, normal, overflow, consolidation, and installation to manage edge-deltas block concurrency. Most of the time, edge-deltas blocks are in State normal that allows almost unlimited concurrent reads and writes. The other 3 states are for consolidating edge-deltas block (Section 5.9). Figure 4: Vertex Delta Chain 5.2 Transaction Vertex Operations GTX supports vertex read, insert, and update. GTX manages a delta chain for every vertex using atomic operations. Figure 4 overviews the vertex delta chain structure. When a transaction \ud835\udc61creates or updates a vertex \ud835\udc63, \ud835\udc61reads the pointer stored at \ud835\udc63\u2019s index entry that points to the head of \ud835\udc63\u2019s delta chain. If the pointer is null or is pointing to a committed vertex delta whose creation timestamp is \u2264 \ud835\udc61\u2019s read timestamp, \ud835\udc61detects no write-write conflict so far, allocates a memory block, say \ud835\udc4f, as a delta, and fills the metadata and \ud835\udc63\u2019s properties into \ud835\udc4f. \ud835\udc61stores its TID as the creation timestamp of \ud835\udc4fand the original delta chain head pointer as \ud835\udc4f\u2019s previous_version pointer. Finally, \ud835\udc61performs \ud835\udc36\ud835\udc34\ud835\udc46to update the pointer in the vertex index to install the new delta chain head. If the current delta chain head has larger creation timestamp at the initial read or if the \ud835\udc36\ud835\udc34\ud835\udc46fails, \ud835\udc61aborts immediately due to the presence of a write-write conflict. Vertex deletions are rare [73]. We leave it for future work. To read a vertex \ud835\udc63, \ud835\udc61uses the pointer in the vertex index to locate the delta chain head.\ud835\udc61compares\ud835\udc61\u2019s read timestamp with each delta\u2019s creation timestamp until it finds the first delta whose creation timestamp is smaller than or equal to \ud835\udc61\u2019s read timestamp. 5.3 Transaction Edge Updates Transactions update edges by appending new edge-deltas in the edge-delta chains within the corresponding edge-deltas blocks. GTX supports edge inserts, deletes, and property updates, and manages 5 concurrency control at the delta chain level. It guarantees only one concurrent transaction can update a specific delta chain at a time. To update an edge \ud835\udc52(\ud835\udc62, \ud835\udc63), a transaction \ud835\udc61, locates \ud835\udc62\u2019s edge-deltas block and registers the access. Assuming the state is compatible, \ud835\udc61identifies the delta chain that \ud835\udc52\u2019s edge-delta belongs to. Then, \ud835\udc61 needs to lock the delta chain. \ud835\udc61fetches the corresponding delta chain\u2019s index value (offset), and checks whether the offset\u2019s most significant bit is 1. The most significant bit of each entry in the deltachains index serves as a locking bit for the delta chain. If the bit is 0, \ud835\udc61checks the current delta chain head according to the offset. If no such head edge-delta exists or its creation timestamp is less than or equal to \ud835\udc61\u2019s read timestamp, \ud835\udc61determines no concurrent writes have taken place. \ud835\udc61invokes \ud835\udc36\ud835\udc34\ud835\udc46to update the index entry to set the locking bit. 
If the \ud835\udc36\ud835\udc34\ud835\udc46succeeds, \ud835\udc61has locked this delta-chain. If any previous checks or \ud835\udc36\ud835\udc34\ud835\udc46fail, \ud835\udc61detects a write-write conflict and aborts. Otherwise, \ud835\udc61checks if \ud835\udc52already exists before appending an edge-delta. \ud835\udc61uses the delta-chains index to lookup if an edge-delta already exists for \ud835\udc52, and returns the edge-delta\u2019s offset. If no offset is found, an update operation for \ud835\udc52is considered an insert, and a delete operation can end immediately. Otherwise, \ud835\udc61caches this offset, and stores \ud835\udc61\u2019s TID as the invalidation timestamp of this previous_version delta. Next, \ud835\udc61calculates the memory size needed for the edge-delta and data storage using the hybrid storage model (Section 4.3), and allocates space in the block by updating the combined_offset. Then, \ud835\udc61installs the edge-delta by writing \ud835\udc52\u2019s destination vertex ID, the delta type, variable-sized property information, previous (version) offsets, and \ud835\udc61\u2019s TID as the creation timestamp in \ud835\udc52\u2019s edge-delta and writing \ud835\udc52\u2019s property in the corresponding address. An insert (resp. update, delete) edge-delta is created for edge insert (resp. property update, delete). Finally, \ud835\udc61records the block ID, the block version number, and the edge and delta-chain \ud835\udc61have updated. 5.4 Adjacency List Scan GTX supports sequential adjacency list scan on a consistent snapshot based on Transaction \ud835\udc61\u2019s read timestamp. After locating the edge-deltas block, say \ud835\udc4f, and registering the access, \ud835\udc61uses \ud835\udc4f\u2019s creation time to determine the block version to read. If\ud835\udc4f\u2019s creation time is greater than \ud835\udc61\u2019s read timestamp, \ud835\udc61will use \ud835\udc4f\u2019s previous pointer to find the visible block version. Block versioning and creation time are further studied in Section 5.9. After locating the visible version, \ud835\udc61loads the combined_offset to locate the latest edge-delta. \ud835\udc61scans from there until exhausting all edge-deltas in the current block. \ud835\udc61 reads all deltas whose creation timestamp is less than or equal to \ud835\udc61\u2019s read timestamp and is not invalidated or is invalidated in the future. \ud835\udc61also reads all deltas created by but not invalidated by itself. When reading an edge property, \ud835\udc61loads the property either in place or using the edge-delta\u2019s data_offset and data_size fields. 5.5 Transaction Edge Lookup The delta-chains index supports fetching a single edge. When Transaction \ud835\udc61needs Edge \ud835\udc52(\ud835\udc62, \ud835\udc63), \ud835\udc61calculates/finds \ud835\udc52\u2019s edge-delta chain. Then, \ud835\udc61searches the delta-chains index to get the edge-delta chain head that \ud835\udc52belongs to. \ud835\udc61examines the head edge-delta, and uses each edge-delta\u2019s previous_offset to locate the previous deltas in the edge-delta chain instead of performing a scan. \ud835\udc61searches until it finds a visible edge-delta for \ud835\udc52, or until the current delta chain is exhausted. If \ud835\udc61finds \ud835\udc52\u2019s insert or update edge-delta, \ud835\udc61returns \ud835\udc52\u2019s corresponding version. If a delete edge-delta or no edge-delta is found, \ud835\udc61reports that \ud835\udc52does not exist. 
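A logic-level sketch of the single-edge lookup of Section 5.5: hash the destination vertex to a delta chain, read the chain head from the delta-chains index, and follow previous_offset links until a version visible at the reader's timestamp is found. The delta_at accessor and the field names are hypothetical; the lock bit in the index entries and the Lazy Update handling of in-progress timestamps are deliberately omitted.

```python
# Sketch of edge lookup over a delta chain (Section 5.5). Visibility follows
# snapshot isolation: a delta is readable if it was created at or before the
# transaction's read timestamp and not yet invalidated at that time.
# Single-threaded illustration with hypothetical accessors and field names.

def chain_of(dst: int, num_chains: int) -> int:
    return dst % num_chains                      # hash the edge to a delta chain

def visible(delta, read_ts: int) -> bool:
    return (delta.creation_ts <= read_ts and
            (delta.invalidation_ts == 0 or delta.invalidation_ts > read_ts))

def lookup_edge(block, dst: int, read_ts: int):
    """Return the visible version of edge (src, dst) in this edge-deltas
    block, or None if the edge does not exist for this reader."""
    chain = chain_of(dst, len(block.delta_chains_index))
    offset = block.delta_chains_index[chain]     # head of the delta chain
    while offset != 0:
        delta = block.delta_at(offset)           # hypothetical offset accessor
        if delta.dst == dst and visible(delta, read_ts):
            return None if delta.delta_type == "delete" else delta
        offset = delta.previous_offset           # previous delta on the chain
    return None                                  # chain exhausted: edge absent
```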
5.6 Lazy Update GTX uses Lazy Update to reduce latency of transaction commits, and enforce Snapshot Isolation [15, 71]. A transaction \ud835\udc61may access an edge-deltas block but encounters another transaction, say \ud835\udc5e, with private In_Progress edge-deltas. As the next section shows, GTX\u2019s group-commit manager commits transactions by updating their status in the transaction table to a commit timestamp without updating the transactions\u2019 deltas. Using the transaction table, \ud835\udc61 checks on\ud835\udc5e\u2019s status, and updates the delta\u2019s creation timestamp (and previous version\u2019s invalidation timestamp) to\ud835\udc5e\u2019s commit timestamp if \ud835\udc5ehas committed. Then, \ud835\udc61subtracts 1 from \ud835\udc5e\u2019s \ud835\udc5c\ud835\udc5d_\ud835\udc50\ud835\udc5c\ud835\udc62\ud835\udc5b\ud835\udc61entry. 5.7 Hybrid Transaction Commit and Abort GTX has a hybrid group-commit protocol combining Lazy Updates and group commits to support fast transaction commit while maintaining Snapshot Isolation consistency [15, 71]. It manages a global commit array. A committing transaction \ud835\udc61makes its edge-deltas the heads of their delta chains, and unlocks those edge-delta chains by updating the associated delta-chains index entries. Thus, future transactions can lookup \ud835\udc61\u2019s edge-deltas using the delta-chains index. Then, \ud835\udc61writes its transaction table entry in its worker thread\u2019s commit array entry. The commit manager fetch_adds 1 to the global write epoch, uses the return value as the group commit timestamp, and scans the commit array. For each filled entry, the commit manager writes the group commit timestamp as the status of \ud835\udc61\u2019s transaction table entry, and nullifies this commit array entry. \ud835\udc61 has committed at this timestamp. After the commit manager scans the whole table, it increases the global read epoch by 1, atomically committing all transaction updates in this commit group. \ud835\udc61 eagerly updates and commits its deltas\u2019 timestamps while concurrent transactions can lazily update these deltas, making up the hybrid commit protocol. Correctness-wise, transactions starting after the commit manager increases the global read epoch see deltas of the just-committed transactions. Performance-wise, it allows a new read-write transaction to logically commit without updating all its deltas, and the new read global epoch to start immediately. This allows the commit manager to start a new commit group of transactions without waiting for the previous group to commit all its deltas. This decreases the amount of synchronization between the commit manager and the commit group, and reduces latency. Also, concurrent transactions cooperate (Lazy Update) to reduce the workload of heavy-write transactions. Transaction Abort: When a read-write transaction \ud835\udc61aborts, \ud835\udc61 iterates through its modified edge-deltas blocks, registers access, and eagerly aborts all of \ud835\udc61\u2019s deltas by writing \ud835\udc34\ud835\udc4f\ud835\udc5c\ud835\udc5f\ud835\udc61to their creation timestamp and unsets any \u201clocking\" bits the delta chains have. 5.8 Transaction Examples Let \ud835\udc61\ud835\udc60, \ud835\udc61\ud835\udc60\ud835\udc50, \ud835\udc61\ud835\udc60\ud835\udc5f, \ud835\udc61\ud835\udc60\ud835\udc50\ud835\udc5f, and \ud835\udc61\ud835\udc60\ud835\udc56be timestamp, commit_timestamp, read_ -timestamp, creation_timestamp, and invalidation_timestamp, respectively. 
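Before the worked examples, the group-commit loop of Section 5.7 can be approximated as follows. This is a simplified sketch, not GTX's implementation: CommitArray, TxnEntry, and commit_one_group are invented names, delta stamping and Lazy Update are left out, and giving each worker a cache-line-aligned slot is an assumption made here to avoid false sharing.

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <vector>

// status holds the TID while in progress and the commit timestamp afterwards;
// op_count mirrors the counter that Lazy Update decrements (unused in the sketch).
struct TxnEntry {
    std::atomic<uint64_t> status{0};
    std::atomic<int64_t>  op_count{0};
};

struct CommitArray {
    struct alignas(64) Slot { std::atomic<TxnEntry*> entry{nullptr}; };
    std::vector<Slot> slots;                   // one slot per worker thread
    explicit CommitArray(size_t workers) : slots(workers) {}
};

std::atomic<uint64_t> global_write_epoch{1};
std::atomic<uint64_t> global_read_epoch{0};   // new transactions take their read timestamp here

// Worker side: publish the transaction-table entry and wait to be stamped.
void request_commit(CommitArray& ca, size_t worker_id, TxnEntry& txn) {
    ca.slots[worker_id].entry.store(&txn, std::memory_order_release);
}

// Commit-manager side: one timestamp for the whole group, then one epoch
// bump that makes every transaction in the group visible atomically.
void commit_one_group(CommitArray& ca) {
    uint64_t commit_ts =
        global_write_epoch.fetch_add(1, std::memory_order_acq_rel);
    for (auto& slot : ca.slots) {
        TxnEntry* txn = slot.entry.load(std::memory_order_acquire);
        if (txn == nullptr) continue;
        txn->status.store(commit_ts, std::memory_order_release);  // logically committed
        slot.entry.store(nullptr, std::memory_order_relaxed);
    }
    global_read_epoch.fetch_add(1, std::memory_order_acq_rel);
}
```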
Figure 5 illustrates how Transactions\ud835\udc471-\ud835\udc473 insert, delete, and update edges. To illustrate, assume that transactions execute sequentially. The edge-deltas block for Vertex 1 stores Edges (1, 4), (1, 3), and (1, 5) using 3 edge-deltas and 2 delta-chains. 6 Figure 5: Transaction Examples Example 1: \ud835\udc471 inserts Edge (1, 10) with 32-byte property data. \ud835\udc471 calculates Edge (1, 10)\u2019s Delta-chain 0 as 10 mod 2 = 0, where 2 is the number of delta-chains, locks the delta-chain, and uses the deltachains index entry\u2019s Offset 64 to locate the edge-delta chain, and to search for a previous version (if exists). Then, it allocates space in both the delta region and the data region by atomically updating combined_offset. It writes the insert edge-delta and its property into the allocated region. The new edge-delta has previous_offset=64 and previous_version=0 as it points to the previous edge-delta on the delta-chain, and Edge (1, 10) has no previous version. The deltachains index stores the new edge-delta, and the commit manager commits\ud835\udc471 at \ud835\udc61\ud835\udc60\ud835\udc50= 10. The edge-delta\u2019s \ud835\udc61\ud835\udc60\ud835\udc50\ud835\udc5fis updated to 10. Then, \ud835\udc472 deletes Edge (1, 4). \ud835\udc472 locks the delta chain, finds the previous version of (1, 4) at Offset 64 using the delta-chains index, and allocates the space for its edge-delta by updating combined_offset. \ud835\udc472 does not allocate space in the data region due to the delete. Instead, it creates a delete edge-delta, and stores Offset 64 as the new delta\u2019s previous version. \ud835\udc472 commits by updating the delta-chains index and registering with the commit manager to commit at \ud835\udc61\ud835\udc60\ud835\udc50= 12. The new edge-delta\u2019s \ud835\udc61\ud835\udc60\ud835\udc50\ud835\udc5fand the previous version edge-delta\u2019s \ud835\udc61\ud835\udc60\ud835\udc56 are both updated to 12. Finally, \ud835\udc473 updates Edge (1, 5) with a new 8 bytes property. The previous version is recorded as in deletion, but the new version\u2019s property takes 8 bytes. \ud835\udc473 directly stores this property within the edge-delta. \ud835\udc473 creates an update edge-delta for (1, 5) and commits. The delta-chains index, the new edge-delta\u2019s \ud835\udc61\ud835\udc60\ud835\udc50\ud835\udc5f, and the previous version\u2019s \ud835\udc61\ud835\udc60\ud835\udc56get updated similarly. Example 2: Figure 5 illustrates how transactions read edges, and scan adjacency lists. \ud835\udc474 with \ud835\udc61\ud835\udc60\ud835\udc5f= 20 tries to read Edge (1, 3). \ud835\udc474 calculates that Edge (1, 3) is in Delta-chain 1, and reads the index entry with Offset 384. \ud835\udc474 starts the search at Delta (1, 5) created at Time 14, and uses each edge-delta\u2019s previous_offset to get the previous edge-delta in the delta-chain.\ud835\udc474 locates Edge-delta (1, 3) of \ud835\udc61\ud835\udc60\ud835\udc50\ud835\udc5f= 5 at Offset 128. Since\ud835\udc474\u2019s \ud835\udc61\ud835\udc60\ud835\udc5f> 5, it reads the edge-delta, and returns the edge\u2019s property data. \ud835\udc475 with \ud835\udc61\ud835\udc60\ud835\udc5f= 7 tries to read Edge (1, 5). It uses the delta-chains index to locate the head of the deltachain, and finds Edge-delta (1, 5) with \ud835\udc61\ud835\udc60\ud835\udc50\ud835\udc5f= 14. 
Although\ud835\udc475 wants to read Edge (1, 5), its \ud835\udc61\ud835\udc60\ud835\udc5fis smaller than the current edge-delta\u2019s \ud835\udc61\ud835\udc60\ud835\udc50\ud835\udc5f= 14. Thus, \ud835\udc475 uses this edge-delta\u2019s previous_version_offset to locate the previous version of Edge (1, 5), and finds the visible version of Edge (1, 5) with \ud835\udc61\ud835\udc60\ud835\udc50\ud835\udc5f= 6 and \ud835\udc61\ud835\udc60\ud835\udc56= 14. \ud835\udc475 reads Weight 32.6 instead of 40.1.\ud835\udc476 and\ud835\udc477 with \ud835\udc61\ud835\udc60\ud835\udc5f= 14 and \ud835\udc61\ud835\udc60\ud835\udc5f= 7, respectively, try to read Edge (1, 4). \ud835\udc476 and \ud835\udc477 use the delta-chains index to locate the first edge-delta at Offset 320, and finds a delete edgedelta for (1, 4) with \ud835\udc61\ud835\udc60\ud835\udc50\ud835\udc5f= 12. \ud835\udc477 determines that this edge-delta represents a future version of Edge (1, 4), and continues the search via previous_version_offset to find the insert edge-delta for (1, 4) with \ud835\udc61\ud835\udc60\ud835\udc50\ud835\udc5f= 5 and \ud835\udc61\ud835\udc60\ud835\udc56= 12 that is visible to \ud835\udc477. It reads this edgedelta\u2019s associated data. \ud835\udc476 sees the delete edge-delta for Edge (1, 4), determines it has been deleted, and returns. Finally,\ud835\udc478 with \ud835\udc61\ud835\udc60\ud835\udc5f= 15 scans the adjacency list. It determines the start offset 384 of the scan using combined_offset. \ud835\udc478 scans all edge-deltas until end of block, and compares its \ud835\udc61\ud835\udc60\ud835\udc5fwith \ud835\udc61\ud835\udc60\ud835\udc50\ud835\udc5fand \ud835\udc61\ud835\udc60\ud835\udc56of each edge-delta. It reads Edges (1, 5) with \ud835\udc61\ud835\udc60_\ud835\udc50\ud835\udc5f= 14, (1, 10), and (1, 3). 5.9 Consolidation of an Edge-Deltas Block An edge-deltas block may overflow due to many edge-delta allocations. Consolidation refers to (1) Recycling the memory used by invalidated edge-deltas in an overflowing block, and (2) Allocating a new block according to workload history. To support better concurrency during consolidation, GTX uses the block protection protocol in Section 5.1. Each edge-deltas block is associated with a state variable that can be atomically read and updated. A normal state constitutes the majority of the lifetime of each delta block, where in this state, transactions\u2019 reads and writes can proceed normally. An edge-deltas block, say \ud835\udc4f, enters the overflow state when a transaction requests to allocate more memory than what \ud835\udc4fhas. Due to the atomic \ud835\udc53\ud835\udc52\ud835\udc61\ud835\udc50\u210e_\ud835\udc4e\ud835\udc51\ud835\udc51, only one transaction \ud835\udc61causes \ud835\udc4fto overflow. \ud835\udc61becomes the consolidation worker, and changes \ud835\udc4f\u2019s state to overflow, and waits till all current threads exit \ud835\udc4f. State Overflow prevents other threads from accessing \ud835\udc4f. \ud835\udc61restores \ud835\udc4f\u2019s combined_offset to the value before the overflow. Then, \ud835\udc61changes \ud835\udc4f\u2019s state to consolidation. State Consolidation allows concurrent readers to access \ud835\udc4fwhile \ud835\udc61is identifying the latest versions of \ud835\udc4f\u2019s edges. 
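The four block states and the overflow-to-consolidation transition just described map naturally onto an atomic state word. The sketch below is illustrative only: the names are made up, and the per-thread block access table is collapsed into a single accessor counter.

```cpp
#include <atomic>
#include <cstdint>

// The four states named in the text; the numeric values are illustrative.
enum class BlockState : uint32_t { Normal, Overflow, Consolidation, Installation };

struct EdgeBlockHeader {
    std::atomic<BlockState> state{BlockState::Normal};
    std::atomic<uint64_t>   combined_offset{0};  // data offset << 32 | delta offset
    std::atomic<uint32_t>   accessors{0};        // stand-in for the block access table
};

// The thread whose allocation overflowed the block becomes the consolidation
// worker: Overflow shuts out new accessors, the worker waits until it is the
// only thread still registered, rolls back the offset, and then switches to
// Consolidation so that readers may re-enter while it copies live deltas out.
void begin_consolidation(EdgeBlockHeader& h, uint64_t offset_before_overflow) {
    h.state.store(BlockState::Overflow, std::memory_order_release);
    while (h.accessors.load(std::memory_order_acquire) > 1) { /* spin or yield */ }
    h.combined_offset.store(offset_before_overflow, std::memory_order_release);
    h.state.store(BlockState::Consolidation, std::memory_order_release);
}
```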
\ud835\udc61scans \ud835\udc4f, performs Lazy Updates, records the last committed delta for each edge, and records all \ud835\udc3c\ud835\udc5b_\ud835\udc43\ud835\udc5f\ud835\udc5c\ud835\udc54\ud835\udc5f\ud835\udc52\ud835\udc60\ud835\udc60edgedeltas and their transactions. \ud835\udc61allocates a new block \ud835\udc4f\u2032 according to the total size of latest committed and \ud835\udc3c\ud835\udc5b_\ud835\udc43\ud835\udc5f\ud835\udc5c\ud835\udc54\ud835\udc5f\ud835\udc52\ud835\udc60\ud835\udc60edge-deltas in \ud835\udc4f. \ud835\udc4f\u2032 stores a previous pointer to \ud835\udc4f, has creation time equal to the largest edge-delta invalidation time in \ud835\udc4f, and creates a new delta-chains index with an updated size. Then, \ud835\udc61inserts the latest versions of all un-deleted edge-deltas into \ud835\udc4f\u2019, and updates the new delta-chains index accordingly. After the installation of the latest edge-deltas, \ud835\udc61changes \ud835\udc4f\u2019s state to installation to synchronize the 7 \ud835\udc3c\ud835\udc5b_\ud835\udc43\ud835\udc5f\ud835\udc5c\ud835\udc54\ud835\udc5f\ud835\udc52\ud835\udc60\ud835\udc60edge-deltas. A block \ud835\udc4f\u2019s installation state synchronizes \ud835\udc3c\ud835\udc5b_\ud835\udc43\ud835\udc5f\ud835\udc5c\ud835\udc54\ud835\udc5f\ud835\udc52\ud835\udc60\ud835\udc60transactions that have deltas in\ud835\udc4f, and ensures the correctness of the transaction protocol. \ud835\udc61checks those \ud835\udc3c\ud835\udc5b_\ud835\udc43\ud835\udc5f\ud835\udc5c\ud835\udc54\ud835\udc5f\ud835\udc52\ud835\udc60\ud835\udc60 transactions. Committing transactions need to update delta-chains indexes (Section 5.7), and consolidation may create a delta-chains index of a different size. To avoid creating inconsistency across edgedeltas chains, State installation stops transactions from updating the edge-delta chains. \ud835\udc61can only proceed after all transactions that update \ud835\udc4f\u2019s delta-chains index have committed or aborted. \ud835\udc61uses the \ud835\udc3c\ud835\udc5b_\ud835\udc43\ud835\udc5f\ud835\udc5c\ud835\udc54\ud835\udc5f\ud835\udc52\ud835\udc60\ud835\udc60edge-deltas observed during the analysis to determine which edge-delta chains may be updated, and checks these delta-chain heads until all their transactions terminate. \ud835\udc61copies the newly committed edge-deltas to \ud835\udc4f\u2032, and updates the delta chains and the index entries. \ud835\udc4f\u2032 may have a different delta-chains layout. Thus, a committed delta may be added into a different delta-chain than the corresponding one that it was assigned to in \ud835\udc4f. For the In_Progress edge-deltas, the consolidation worker copies them to \ud835\udc4f\u2032 without updating the delta-chains index. No \u201clock\" bit is set for the new block\u2019s index. After all these deltas are installed in\ud835\udc4f\u2032, \ud835\udc61updates the block pointer and the delta-chains index, increases the block version by 1, and changes to the state normal in the block label entry. \ud835\udc61marks \ud835\udc4fsafe for garbage collection at \ud835\udc4f\u2032\u2019s create timestamp. Example 3: Figure 6 gives an example block consolidation. Assume that Vertex 1\u2019s edge-deltas block of size 512 bytes is full with a 64-bytes header, 6 edge-deltas, and 64 bytes of property data. It has combined_offset = (64\u00ab32 + 384). Two concurrent transactions \ud835\udc471 and \ud835\udc472 are updating edges. Later, \ud835\udc471 will commit and \ud835\udc472 will abort. 
Another transaction \ud835\udc61wants to delete an edge by creating a delete edge-delta with no associated data. It invokes the delta allocation protocol and updates the combined offset as (64 << 32 + 448). Since 448 + 64 = 512 is greater than the total available space, a consolidation is triggered (Block Overflow). \ud835\udc61becomes the consolidation worker and changes the block state to overflow. It restores combined_offset to the non-overflow value (64 << 32 + 384) (Block State Restoration). Next, \ud835\udc61changes the state to consolidation, and scans the edge-deltas block to find all latest committed versions of edges, i,e., Edges (1, 3), (1, 5), and (1, 4) with \ud835\udc61\ud835\udc60\ud835\udc50\ud835\udc5f= 5, \ud835\udc61\ud835\udc60\ud835\udc50\ud835\udc5f= 6, and \ud835\udc61\ud835\udc60\ud835\udc50\ud835\udc5f= 10, respectively, and records \ud835\udc471 and \ud835\udc472 as \ud835\udc3c\ud835\udc5b_\ud835\udc43\ud835\udc5f\ud835\udc5c\ud835\udc54\ud835\udc5f\ud835\udc52\ud835\udc60\ud835\udc60. \ud835\udc61 allocates a new edge-deltas block and a new delta-chains index according to edge-delta sizes, and copies the latest version edge-deltas there (Block Consolidation). \ud835\udc61changes the state to installation and checks In_Progress transactions. \ud835\udc61finds that \ud835\udc471 has committed at \ud835\udc61\ud835\udc60\ud835\udc50\ud835\udc5f= 18 and\ud835\udc472 has aborted. \ud835\udc61copies the newest version of (1, 5) to the new block, and updates the delta-chains index entry (Block Installation). Finally, \ud835\udc61updates the block pointer and delta-chains index to point to the new block and index, and changes the state back to normal (New Block Version). By now, \ud835\udc61has finished the consolidation, and continues executing its transaction\u2019s work. 5.10 State-aware Transaction Operations Transaction operations need to be modified to accommodate block versioning. The number of edge-delta chains of a block may change after consolidation. Thus, we need to revise the delta-chain-level locking between transactions. GTX introduces a version number for each edge-deltas block that increases by 1 per consolidation. Transaction \ud835\udc61locally records which edge-delta chains (and edges) \ud835\udc61has modified per edge-deltas block \ud835\udc4f. When \ud835\udc61accesses \ud835\udc4fagain, either for a read, write, or commit, \ud835\udc61checks if the block version has changed. If so, \ud835\udc61relocks its edge-deltas chains to avoid causing write-write conflicts. \ud835\udc61uses its updated-edges records to calculate the edge-delta chains that it needs to relock and tries to lock them (Section 5.3). If no concurrent transactions have locked or updated those edge-delta chains, \ud835\udc61locks them, finds where its edge-deltas reside, and continues its normal operations, else \ud835\udc61aborts. 6 RESOURCE MANAGEMENT Memory Allocation. GTX adapts a hybrid memory allocator from LiveGraph [73] to balance concurrent memory allocation and space utilization. GTX allocates a large chunk of memory at system start via \ud835\udc5a\ud835\udc5a\ud835\udc4e\ud835\udc5d, and divides memory into blocks of varying power-of-2 sizes (as in Buddy systems [44]). Each worker thread locally manages a pool of memory blocks smaller than a user-defined threshold. It can allocate memory blocks from this pool without synchronizing with the other threads. Larger memory blocks are managed by a memory manager that handles requests to allocate large memory blocks. 
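A rough sketch of the two-level allocation path described above: requests are rounded to power-of-two size classes, small classes are served from a per-thread pool without synchronization, and only large requests take the shared manager's lock. All names, the threshold value, and the use of aligned_alloc in place of carving an mmap'd arena are assumptions of the sketch.

```cpp
#include <array>
#include <cstddef>
#include <cstdlib>
#include <mutex>
#include <vector>

// Round a request up to the next power-of-two size class (buddy-style).
static size_t size_class(size_t bytes) {
    size_t s = 64;                       // smallest block: one cache line
    while (s < bytes) s <<= 1;
    return s;
}

// Shared manager for large blocks; rarely contended because most requests never reach it.
struct GlobalPool {
    std::mutex m;
    void* allocate(size_t sz) {
        std::lock_guard<std::mutex> g(m);
        return std::aligned_alloc(64, sz);   // placeholder for carving the mmap'd arena
    }
};

// Per-worker pool: recycled small blocks are reused with no synchronization.
struct ThreadLocalPool {
    static constexpr size_t kLargeThreshold = 1u << 20;   // illustrative cutoff
    std::array<std::vector<void*>, 64> free_lists;        // one free list per size class
    GlobalPool* global = nullptr;

    void* allocate(size_t bytes) {
        size_t sz = size_class(bytes);
        if (sz >= kLargeThreshold)
            return global->allocate(sz);                   // rare, synchronized path
        auto& list = free_lists[__builtin_ctzll(sz)];      // GCC builtin: log2 of a power of two
        if (!list.empty()) { void* p = list.back(); list.pop_back(); return p; }
        return std::aligned_alloc(64, sz);                 // refill, also a placeholder
    }
    void recycle(void* p, size_t bytes) {
        size_t sz = size_class(bytes);
        if (sz >= kLargeThreshold) return;   // large blocks go back to the global manager
        free_lists[__builtin_ctzll(sz)].push_back(p);
    }
};
```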
Due to the power-law nature of real-world graphs, most vertices are not hub vertices, i.e., do not have many edges. Thus, most of the vertex delta and edge-deltas block\u2019s memory allocations only require smaller blocks from threads\u2019 private pools. Even when threads need to allocate a larger block, it is unlikely that multiple threads request that concurrently. Thus, GTX\u2019s memory allocation protocol is mostly contention-less, yet at times requires the memory manager to allocate large memory blocks. Garbage Collection. GTX recycles the delta blocks no longer visible to transactions. GTX is cooperative and does not have a garbage collector thread. Instead, worker threads register memory blocks as candidates for garbage collection and periodically recycle the \u201csafe\" blocks. When a transaction commits or Lazy Updates a vertex delta, in effect it invalidates the vertex\u2019s previous version. GTX embeds a local priority queue per worker thread, and the worker thread registers the previous vertex delta block with its invalidation time in the local queue. The same operation takes place when a consolidation worker consolidates an edge-deltas block. It registers the old block invalidated at the new block\u2019s create time. After executing a certain amount of transactions, a worker thread \ud835\udc64scans the timestamp table and finds the \u201csafe\" timestamp, the minimum timestamp in the table. \ud835\udc64continuously checks its local garbage queue, dequeues an entry, and recycles the associated memory block until the queue is empty or the current queue head\u2019s timestamp becomes too large. If a recycled block \ud835\udc4fis small, \ud835\udc64stores \ud835\udc4fin its local pool, else \ud835\udc4fis sent back to the memory manager. It ensures that small blocks are allocated contention-free while large blocks return to the global pool for reuse by other threads. 7 EVALUATION The focus of this study is on supporting high throughput read-write transactions while supporting concurrent graph analytics. Experiment Setup. Experiments run on a dual-socket machine with Intel(R) Xeon(R) Platinum 8368 @ 2.40GHz processors with 156 CPUs. It has 2 NUMA nodes and each node has 38 cores and 96GB RAM. We compile all systems with GCC 11.4.0 and O3 optimization flag and evaluate them over a single NUMA node with best effort, i.e., the evaluated system allocates memory locally, and uses remote memory only if its memory exceeds a single NUMA node. One 8 Figure 6: Consolidation Example exception is for insert experiments of graph500-26 and Twitter with over 1 billion edges. This machine\u2019s memory is not enough to evaluate all the systems. Thus, for these datasets, we evaluate all systems in a setup with Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz processors that has 24 cores and 387GB DRAM per NUMA node. To minimize NUMA effects, we run those experiments in a single NUMA node with up to 20 worker threads to avoid contention. We disable disk logging. We use the same evaluation software based on \ud835\udc3f\ud835\udc37\ud835\udc35\ud835\udc36Graphalytics benchmark [11, 41] as used in Teseo [26] and Sortledton [35], and add additional experiments. Experiments include few instances for uniform graphs but the main focus is on power-law graphs as many real-world graphs exhibit such patterns [21, 22, 30, 63]. Figure 7: Bars Color Code in Experimental Results. 
7.1 Insert Performance Testing includes inserting uniform and power-law real-world and synthetic graphs with edges shuffled or according to timestamp order. We simulate undirected edge inserts by inserting 2 directed edges in a transaction. They execute \u201cchecked\" operations so that every edge insert involves a read to check if the edge exists. Thus, each transaction internally executes 2 edge lookups and 2 edge writes atomically. Thus, having efficient edge reads can improve transactions\u2019 write performance. Figure 7 gives the color codes we use in displaying results. Refer to Figure 8 for results. Random-order Inserts. In this study, the evaluated systems insert edges from uniform and power-law real-world and synthetic graphs. The results include graph500-24, 26, uniform-24, dota-league and twitter. graph500-26 and twitter have over 1 billion edges. Thus, we evaluate them on the machine with larger DRAM up to 20 worker threads. We evaluate other power-law and uniform graph datasets but their results are similar to the ones presented, and are omitted for brevity. GTX has over 10x better throughput over LiveGraph and around 1.34x to 2x better throughput over the second-best Sortledton. We attribute GTX\u2019s performance to GTX\u2019s low-overhead concurrency control and its adaptive delta-chains index. GTX has only an exclusive lock that is embedded in the index with minimal overhead. GTX adjusts the number of delta-chains of each edgedeltas block as edge updates arrive, and enables concurrent writes to the same vertex. While Teseo can reduce the conflict in updating the hub vertices by storing their neighborhoods into multiple segments, its sparse array segment-based storage has drawbacks More discussion on this issue below. LiveGraph\u2019s performance suffers due to vertex-centric locking and the absence of edge indexes. For each edge check, a LiveGraph transaction scans the edge block to check if the edge exists which is costly. For uniform graphs, Sortledton outperforms GTX for low parallelism degrees, but GTX\u2019s performance catches up as the number of concurrent worker threads increases. Since the graph is uniform, each vertex has about the same number of edges. Concurrent transactions are not likely updating the same adjacency lists. Thus, GTX\u2019s delta-chains index becomes underutilized, and the drawbacks of vertex-centric locking are mitigated because transactions are less likely to conflict in this case. Timestamp-ordered Inserts. In this experiment, we insert edges based on their real-world create timestamps. Real-life power-law graphs not only have hub vertices (dataset edit-wiki has vertices with millions of adjacent edges), but also have hotpots. Transactional graph systems have significant performance degradation when inserting edges in timestamp order [35]. Thus, we run the experiments in both random and timestamp order. Refer to Figures 8\u2019s yahoo and edit-wiki. As baseline, first, we evaluate inserting edges in random order. For yahoo-songs , GTX has the best scalability with up to 70% better throughput over the second best system (Sortledton) at higher parallelism degree. For the edit-wiki workload, GTX overall ranks the second among the evaluated systems and performs the best with medium parallelism degree (1.2x better than the second best). GTX does not scale well here at high worker thread numbers because its adaptive delta-chains adjustment heuristic does not catch up well with large neighborhoods. 
GTX could have used a more aggressive heuristic to increase delta chain numbers eagerly when consolidations take place, so more concurrent writers would be allowed per edge-deltas block. However, this would increase the size of allocated blocks and memory used by indexes, and may incur wasted spaces for low-fanout vertices. We plan to explore more sophisticated heuristics in future work. In contrast, when inserting in the real timestamp order, GTX performs the best over all competitor systems. The second-best Sortledton\u2019s throughput degrades to less than 11% of its random order insertion throughput. In contrast, GTX\u2019s throughput only drops by around 30%. In the yahoo-songs workload, GTX has up to 11x better throughput than Sortledton and in edit-wiki workload, GTX has up to 9x better 9 Figure 8: Insertion Throughput throughput. GTX achieves high performance in timestamp-ordered workload due to its better write concurrency provided by adaptive delta-chains. Sortledton\u2019s and LiveGraph\u2019s worker threads block on concurrent updates during a hotspot. Ideally, Teseo should mitigate the high conflict when concurrently updating hub vertices by storing large neighborhoods in multiple segments. However, its sparse array segment rebalance exclusively locks multiple adjacent segments but segments of the same neighborhood are always adjacent. Thus, concurrent rebalance threads may conflict when locking the same adjacent segment in their rebalance window, or conflict with other concurrent transactions. Teseo\u2019s segment locking may incur false positives among low-degree vertices by locking vertices in the same segment that the transaction does not intend to update. LiveGraph, Sortledton, and Teseo suffer from temporal localities in edge update hotspot and power-law distributions. In contrast, GTX allows concurrent transactions to update the same vertex and adjusts the vertex\u2019s concurrency level adaptively. This experiment demonstrates that GTX is the only transactional graph system that can handle these real-life workloads efficiently. 7.2 Concurrent Updates and Analytics We study how systems support concurrent update transactions and graph analytics simultaneously. We adopt the graph update experiments from Teseo [26] and Sortledton [35], but have additional workloads. We generate log files based on the edges of the powerlaw graph graph500-24 using graphlog [25]. The source graph has around 260 million edges with 2.6 billion edge update logs. The first 10% of the edge logs are purely edge inserts that build the original graph. The next 90% of the logs are edge inserts and deletes. If a vertex has high fanout in the source graph, it has proportionally more edge update logs. The experiment ensures that the graph size during the experiment stays about the same as the initial size. As in the previous experiments, all systems execute check operations, edge inserts (deletes) only happen after checking the edge existence. We try to use one NUMA node. If one NUMA node\u2019s memory is not enough, we allocate in a remote node. LiveGraph\u2019s memory usage has been overly large, and could only process 20% of the logs. Also, we have evaluated each system\u2019s transaction throughput when running only updates. The results are mostly similar to the transaction throughput results of this experiment and are omitted for brevity (One exception is Teseo\u2019s throughput increase due to its in-place updates and became the second best behind GTX). 
We design the mixed-workload experiment as follows: We allocate a fixed number of 50 thread resources, and control the read/write ratio by assigning the threads either as write or OpenMP [5] read threads, e.g., 25 write and 25 read threads form a 50% write workload. Write threads continuously run update transactions while read threads run graph analytics. We could not report the results of Teseo as it ran into deadlocks (the same problem is reported in Sortledton\u2019s evaluation [35]). Sortledton has segmentation fault errors for only a few of the workloads. but we are still able to produce some results. For analytics, we evaluated 1-hop (1-HN) and 2-hop neighbors (2-HN) that find 1-hop and 1-2 hop neighbors of a set of vertices, and concurrent breadth first search (BFS), PageRank (PR), and single source shortest paths (SSSP) from Graphalytics [11, 41]. Similar experiments can be found in [35] but they use only BFS and PR under certain threads configurations. The results below use the same color code in Figure 7. Normal Update Distribution. We generate edge update logs and place them randomly in the edge log stream. Write threads run transactions executing edge logs, and concurrent graph analytics start at 10% of the update workload (after the source graph is loaded). The results are in Figures 9 and 10. Sortledton crashed for several workloads as discussed above. We redo those workloads many times and record the results for the successful runs, but this was not always possible. For 1-HN, PR, and SSSP, when the workload is read-heavy, e.g., 30% write, GTX\u2019s concurrent analytics can take at most 2x longer time than the best performing Sortledton. GTX and Sortledton also have similar transaction throughput under those workloads. As the workload becomes more write-heavy, GTX\u2019s graph analytics performance catches up, and it has the highest transaction throughput. Under write-heavy workloads, e.g., 90% write, GTX has 1.3x to 1.48x higher transaction throughput than the second best competitor Sortledton, and performs the best and second best in SSSP and PR, respectively. Also, it has slightly worse 10 Figure 9: Concurrent Updates and Analytics: Transaction Throughput Figure 10: Concurrent Updates and Analytics: Graph Analytics Latency performance in 1-HN than the other systems. However, GTX has better write throughput when executing BFS, but Sortledton has significantly BFS lower latency under all workloads. GTX is 1.12x to 1.52x faster than Sortledton in 2-HN under all workloads while achieving better transaction throughput for most of the workloads (0.96x to 1.28x). LiveGraph exhibits strong performance for graph analytics (around 0.77x to 1.51x faster than GTX), but its transaction throughput is significantly lower than the other 2 systems (GTX has 4x to 9x higher transaction throughput than LiveGraph). Temporal Locality and Hotspot Update Distribution. We generate another set of edge logs, and create temporal localities and hotspots to mimic real-world scenarios. Transactions are more likely updating the same vertex\u2019s adjacency list concurrently. This experiment has the same setting as the previous mixed-workload experiment except for having different edge logs. We evaluate performance under concurrent reads and writes, and temporal localities. Sortledton\u2019s segmentation error persists for this hotspots workload. Thus, we only report available results in Figures 11 and 12. 
GTX, Sortledton, and LiveGraph graph analytics performance are not affected much by update hotspots. However, Sortledton and LiveGraph transaction throughputs have up to 64.3% and 41.2% drop in throughput, respectively, for workloads with temporal localities and hotspots in update patterns. Only GTX maintains its high transaction throughput and stays competitive in graph analytics. Performance Analysis. GTX has the best transaction throughput while maintaining competitive performance for graph analytics across different workloads. We attribute this to the following reasons. GTX supports delta-based MVCC that prevents read transactions from conflicting with write transactions. Thus, the side effects of concurrent updates are mitigated for graph analytics, and vice versa. It outperforms vertex-centric locking and adapts to workload temporal localities and hotspots. GTX is the only system that can maintain high transaction throughput across all experiments. Also, it stores and manages edge-delta chains of each adjacency list sequentially in a memory block. This preserves cache locality, eliminates pointer chasing, reduces random memory access and cache misses, and facilitates prefetch. Previous works [26, 35, 73] show that sequential adjacency list storage is beneficial to adjacency list scan; the backbone for graph analytics. GTX implements graph algorithms (e.g., BFS, PR, and SSSP) directly accessing edge delta blocks, and is compiled within the system. It saves the cost of using adjacency list iterators and associated functional calls. Our experiment (not shown due to space) shows that using an iterator is around 1.15x slower in graph analytics. Finally, GTX has several techniques in reducing the overhead of concurrent readwrite transactions and graph analytics. Distributing the transaction table over worker threads enables latch-free \ud835\udc42(1) access to transaction status for Lazy Updates, and the Block Access Table allows each reader and writer thread to register block-level access in its own cache line-aligned entry without invalidating caches of other worker threads or modifying a shared lock. This greatly handles concurrency control overheads. Sortledton exhibits different results across experiments. Sortledton has the second best transaction throughput. However, its performance degrades for workloads with temporal localities and hotspots. For analytics, it has the best performance in BFS. Sortledton uses exclusive lightweight latches for each adjacency list. BFS touches only a smaller portion of the graph. Thus, update and read transactions are less likely to conflict, bypassing the negative locking effects. BFS requires each vertex\u2019s neighborhood size to initialize the algorithm. GTX and LiveGraph scan the vertex adjacency list to calculate this value. This dominates the graph analytics latency. Sortledton supports vertex neighborhood size versioning that allows transactions to get each vertex\u2019s neighborhood size efficiently, thus supporting efficient BFS. PR and SSSP require finding each vertex\u2019s neighborhood size, but they need to scan vertices\u2019 adjacency list multiple times. The algorithm\u2019s execution phase dominates graph analytics, and thus Sortledton\u2019s advantages drop. 
As read threads scan the graph heavily, they are more likely to conflict with write 11 Figure 11: Concurrent Update with Hotspot and Temporal Localities and Analytics: Transaction Throughput Figure 12: Concurrent Update with Hotspot and Temporal Localities and Analytics: Graph Analytics Latency Figure 13: Memory Consumption transactions, and hinder the performance. 1-HN and 2-HN only requires adjacency list scans. In 2-HN, each vertex\u2019s 1-hop neighbors are not known a priori, causing Sortledton to have large locking overhead, and hence has similar or worse performance. LiveGraph shows strong disparity between its transaction throughput and graph analytics. Its transaction throughput is several times worse than both GTX and Sortledton but its graph analytics perform better. LiveGraph uses a similar-styled \ud835\udc40\ud835\udc49\ud835\udc36\ud835\udc36as GTX\u2019s but only optimizes for sequential adjacency list scans [73]. It provides the same level of guarantees that each vertex\u2019s adjacency list is stored sequentially in memory and that transaction reads and writes never conflict. Moreover, LiveGraph has smaller edge log entries compared to GTX\u2019s edge-deltas. The compact sequential adjacency list storage enables good read performance. But LiveGraph suffers from its simple vertex-centric locking and lack of edge lookup indexes (Section 7.1), thus having the worst transaction throughput. 7.3 Memory Consumption GTX uses 64-byte edge-deltas, so it uses more memory to load and store a graph. For insert experiments, GTX uses around 2x to 3x more memory than Sortledton, 0.9x to 2.1x more than Teseo and 1.2x to 1.9x more memory than LiveGraph (Figures omitted for brevity). However, for workload with updates, concurrent analytics, and hotspots, GTX does not incur big memory overhead than storing a static graph (pure inserts), and memory consumption stays stable across workloads (Figure 13). For these workloads, GTX is efficient in memory allocation and garbage collection. LiveGraph uses more memory if the workload involves updates and analytics, while Sortledton\u2019s memory usage increases at most 6x in workloads with temporal localities and hotspots. GTX has the lowest memory consumption for these experiments. 8 CONCLUSIONS GTX is a latch-free write-optimized transactional graph system that supports concurrent read-write transactions and graph analytics. It adopts an adjacency list format and a delta chain-based storage with delta-chains index. Its concurrency control combines both the traditional linked list-based delta store [49] and append-only delta updates [64, 73] to enable high update throughput, low-latency single edge lookup, and sequential adjacency list scan. GTX has high read and write concurrency using transactions, \ud835\udc40\ud835\udc49\ud835\udc36\ud835\udc36, delta updates, and Lazy Update hybrid commit under snapshot isolation. Extensive evaluation of GTX against state-of-the-art transactional graph systems shows that GTX has better performance for checkedwrite workloads, and is several times better in handling real-world write workload with temporal localities. GTX performs well under mixed workloads when the workload is at least 50% writes, in which case, GTX has better update throughput while its graph analytics latency stays competitive. For write-heavy workloads, e.g., 90%, GTX can outperform its competitors in graph analytics while significantly exceeds them in write throughput. 
In conclusion, GTX demonstrates that using latch-free techniques, avoiding vertex-centric locking, maintaining adjacency list-level edge indexes, and sequential adjacency list storage, and being adaptive to the workload can enable high transaction throughput and better read-write concurrency in transactional graph systems. 9 ACKNOWLEDGEMENTS Walid G. Aref acknowledges the support of the National Science Foundation under Grant Number IIS-1910216. 12"
17
+ }
intro_8K/test_introduction_short_2405.01448v1.json ADDED
@@ -0,0 +1,17 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01448v1",
3
+ "title": "GTX: A Transactional Graph Data System For HTAP Workloads",
4
+ "abstract": "Processing, managing, and analyzing dynamic graphs are the cornerstone in\nmultiple application domains including fraud detection, recommendation system,\ngraph neural network training, etc. This demo presents GTX, a latch-free\nwrite-optimized transactional graph data system that supports high throughput\nread-write transactions while maintaining competitive graph analytics. GTX has\na unique latch-free graph storage and a transaction and concurrency control\nprotocol for dynamic power-law graphs. GTX leverages atomic operations to\neliminate latches, proposes a delta-based multi-version storage, and designs a\nhybrid transaction commit protocol to reduce interference between concurrent\noperations. To further improve its throughput, we design a delta-chains index\nto support efficient edge lookups. GTX manages concurrency control at\ndelta-chain level, and provides adaptive concurrency according to the workload.\nReal-world graph access and updates exhibit temporal localities and hotspots.\nUnlike other transactional graph systems that experience significant\nperformance degradation, GTX is the only system that can adapt to temporal\nlocalities and hotspots in graph updates and maintain\nmillion-transactions-per-second throughput. GTX is prototyped as a graph\nlibrary and is evaluated using a graph library evaluation tool using real and\nsynthetic datasets.",
5
+ "authors": "Libin Zhou, Walid Aref",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "cs.DB",
9
+ "cats": [
10
+ "cs.DB",
11
+ "H.2.4"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "Temporal AND Graph",
15
+ "gt": "Dynamic graph management is an important part of many application domains including risk management, knowledge graph applications, recommendation services, etc. [6, 9]. Real-world graphs can reach hundreds of millions of vertices and billions of edges [6, 8] while hundreds of thousands to millions of updates take place per second [9]. These applications need to run concurrent graph analytics while ingesting graph updates. Moreover, to concurrently run graph analytics and updates while preserving the graph\u2019s consistency and avoiding anomalies, graph systems need to support read-write transactions [3, 5, 6]. Besides having a high arrival rate, graph updates also exhibit temporal localities and hotspots [6, 8]. Namely, updates arriving in the same time frame likely belong to the same vertex (neighborhood), and a large number of such updates may arrive simultaneously. For example, at a certain time, many users are liking the same post on social media, creating a large number of edges to the post simultaneously. In the rest of the paper, we use \"temporal localities\" to collectively refer to graph updates with both temporal localities and hotspots. State-of-the-art transactional graph systems use coarse-grained concurrency control (e.g., vertex-centric locking [6, 12]) and suffer from these update patterns [6]. To address these challenges and requirements, a graph system needs to support transactions, adapt to the temporal localities, and reduce the interference between concurrent transactions and analytics. This demo presents GTX, a main-memory latch-free transactional graph data system that is optimized for read-write transactions while maintaining competitive graph analytics. It targets the open problem of maintaining high throughput in workloads involving temporal localities [6]. Many real-world graphs are power-law graphs with hub vertices (vertices with large degree) [6, 12]. For example, one of the real-world graphs, edit-wiki, has vertices with millions of adjacent edges, and the edges\u2019 creation follows temporal localities and contains hotspots [6, 8]. Most of the existing transactional graph systems only support transactions, but are unaware of temporal localities. Moreover, they suffer from concurrently running updates and graph analytics. Two of the three state-of-the-art systems that we have evaluated have bugs under such concurrent workloads. Therefore, we design GTX with the following features to resolve these challenges: 1. GTX has a latch-free graph storage that uses atomic operations to update vertices and edges. It eliminates latching overheads and reduces thread idling. 2. GTX combines chain-based delta storage and linear delta storage to benefit from delta-chains\u2019 efficient lookup and linear storage\u2019s cache performance. 3. GTX has an efficient transaction management and concurrency control protocol by managing concurrency control at the delta-chain level and adapting to the workload history. 4. GTX has a hybrid group commit protocol that improves transaction commit throughput.
In this demo, we will present how GTX handles vertex and edge creation transactions on real-world power-law graphs with and without temporal localities, and how GTX executes read-write transactions while concurrently running graph analytics using synthetic graph logs in the Graph Framework Evaluation (GFE) [5, 6].",
16
+ "main_content": "GTX addresses the problem of managing and analyzing dynamic labeled property graphs. It consists of a latch-free adjacency list-based graph store and a transaction manager and a concurrency control protocol. GTX can manage both uniform and power-law graphs but is optimized for power-law graphs as many real-world graphs are power-law graphs [12]. In this demonstration, we focus on managing power-law graphs. GTX\u2019s graph store is a multi-version delta store, where each delta captures an operation (insert, update) of a vertex or (insert, delete, update) of an edge. An overview of GTX\u2019s arXiv:2405.01448v1 [cs.DB] 2 May 2024 Figure 1: GTX Storage: 1.Vector-based vertex index (entry). 2.Vertex delta chain. 3.Edge-deltas block storage is given in Figure 1. Each edge-delta takes 1 cache line to avoid false sharing and is optionally associated with a variablesized property that either resides inside the delta or at the other end of the block depending on the property size. GTX stores vertex versions as vertex delta-chains, and each vertex\u2019s adjacency list consecutively in a single edge-deltas block. Within each block, GTX organizes edge-deltas as delta-chains. Delta-chains support efficient single edge lookup and concurrency control and the linear edgedelta storage reduces random memory access, and improves cache performance during adjacency list scan. To support efficiently locating a vertex\u2019s delta-chain and edge-deltas block, GTX has a vertex index providing \ud835\udc42(1) lookup given vertex ID. Note that GTX supports edge labels, and stores edge-deltas (edges) of the same label in an edge-deltas block, but for simplicity of this demonstration, we assume all edges have the same label. GTX manages a delta-chains index of each edge-deltas block. Each edge-deltas block determines its delta-chains number at its allocation time, and allocates a vector where each vector entry \ud835\udc56stores the offset of the \ud835\udc56\ud835\udc61\u210edelta-chain within the block. GTX adopts multi-version concurrency control [11], and uses atomic operations to access and update vertex and edge versions, e.g. \ud835\udc53\ud835\udc52\ud835\udc61\ud835\udc50\u210e_\ud835\udc4e\ud835\udc51\ud835\udc51, \ud835\udc59\ud835\udc5c\ud835\udc4e\ud835\udc51, and \ud835\udc50\ud835\udc5c\ud835\udc5a\ud835\udc5d\ud835\udc4e\ud835\udc5f\ud835\udc52_\ud835\udc4e\ud835\udc5b\ud835\udc51_\ud835\udc60\ud835\udc64\ud835\udc4e\ud835\udc5d(\ud835\udc36\ud835\udc34\ud835\udc46). Vertex versions are updated by a \ud835\udc36\ud835\udc34\ud835\udc46on the vertex delta-chains pointer. Each edgedeltas block maintains a combined_offset that combines the offsets of the edge data region and edge-delta region (each 32 bits) into a 64 bits integer. Edge-delta allocation is done by executing a fetch_add to the combined_offset and write the edge-delta and its associated property data at the offsets returned by the operation. GTX allocates a fixed number of worker threads that execute read-write transactions while additionally supports collectively running graph analytics using OpenMP [1]. Besides a commit manager, GTX has no other server thread. Transactions\u2019 isolation and consistency ensures the correctness of concurrent operations. GTX read-write transactions support vertex inserts and updates, edge inserts, deletes, updates and lookups, and adjacency list scans. GTX guarantees atomicity of each transaction\u2019s updates. GTX implements all graph analytics under read-only transactions. 
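The combined_offset trick described above can be illustrated with a few lines of C++. The struct and method names are invented for the sketch, overflow checking against the block capacity is omitted, and which end of the block each region grows from is glossed over.

```cpp
#include <atomic>
#include <cstdint>
#include <utility>

// combined_offset packs the data-region offset (high 32 bits) and the
// edge-delta-region offset (low 32 bits) of one edge-deltas block into a
// single 64-bit word, so one fetch_add reserves space in both regions at once.
struct CombinedOffset {
    std::atomic<uint64_t> value{0};

    // Returns {data_offset, delta_offset} as they were before this reservation;
    // the caller writes its 64-byte edge-delta and its property bytes there.
    std::pair<uint32_t, uint32_t> reserve(uint32_t data_bytes, uint32_t delta_bytes) {
        uint64_t add = (uint64_t(data_bytes) << 32) | uint64_t(delta_bytes);
        uint64_t old = value.fetch_add(add, std::memory_order_acq_rel);
        return { uint32_t(old >> 32), uint32_t(old & 0xffffffffu) };
    }
};
```

With this encoding, a delete that needs no property bytes would simply reserve (0, 64): nothing in the data region and one cache-line delta.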
While each read-write transaction is executed by its creator worker thread, a read-only transaction can be executed by several OpenMP threads concurrently. GTX further implements a state protection protocol to support edge-deltas block consolidation when it becomes overflow. GTX adopts the block manager from [12] and manages garbage collection lazily. Memory blocks are recycled when no concurrent and future transactions can see them. 3 GTX TRANSACTION OPERATIONS Each GTX transaction is assigned a read timestamp (\ud835\udc5f\ud835\udc61\ud835\udc60) at its creation time from a global read epoch, and it does not know its write timestamp (\ud835\udc64\ud835\udc61\ud835\udc60) until it gets committed. GTX guarantees Snapshot Isolation [2] of its transactions. Let \ud835\udc61refer to both the transaction and its transaction ID interchangeably. 3.1 Vertex Operations GTX transactions can insert new vertices and update them to new versions. Transaction \ud835\udc61reads a vertex \ud835\udc63by using an atomic \ud835\udc59\ud835\udc5c\ud835\udc4e\ud835\udc51on \ud835\udc63\u2019s delta-chains pointer after locating the \ud835\udc63\u2019s entry in the index. \ud835\udc61 checks \ud835\udc63\u2019s delta-chain head, and compares its \ud835\udc5f\ud835\udc61\ud835\udc60against the vertexdelta\u2019s creation time. A transaction can see all deltas created before its \ud835\udc5f\ud835\udc61\ud835\udc60. If the current vertex-delta represents the future version, \ud835\udc61 uses the \ud835\udc5d\ud835\udc5f\ud835\udc52\ud835\udc63\ud835\udc56\ud835\udc5c\ud835\udc62\ud835\udc60_\ud835\udc5d\ud835\udc61\ud835\udc5fto read the previous vertex-delta (previous version) on the delta-chain. Vertex writes (insertion and updates) are handled by creating a new vertex-delta (new version), and invoking a \ud835\udc36\ud835\udc34\ud835\udc46on \ud835\udc63\u2019s delta-chain pointer. If the \ud835\udc36\ud835\udc34\ud835\udc46fails, \ud835\udc61observes writewrite conflict and immediately abort. 3.2 Edge Write Operations GTX supports edge inserts, update and deletes, and they are handled similarly except having different delta types. GTX executes checked operations. An edge delete only takes place after determining the edge\u2019s existence. An edge insert only takes place after checking that the edge does not exist. Otherwise, the write operation is executed as an update. \ud835\udc61inserts/updates/deletes an edge, say \ud835\udc52(\ud835\udc62, \ud835\udc63), by creating an insert/update/delete edge-delta of \ud835\udc52(\ud835\udc62, \ud835\udc63). 2 Edge-deltas blocks manage concurrency at the delta-chain level. \ud835\udc61 calculates the delta-chain \ud835\udc52(\ud835\udc62, \ud835\udc63) belongs to, say the \ud835\udc56\ud835\udc61\u210edelta-chain, and locks it in the delta-chains index. After locking the delta-chain, \ud835\udc61searches the latest edge-delta(version) of \ud835\udc52(\ud835\udc62, \ud835\udc63) using each edgedelta\u2019s \ud835\udc5d\ud835\udc5f\ud835\udc52\ud835\udc63\ud835\udc56\ud835\udc5c\ud835\udc62\ud835\udc60_\ud835\udc5c\ud835\udc53\ud835\udc53\ud835\udc60\ud835\udc52\ud835\udc61, and writes \ud835\udc61as its invalidation timestamp if the edge-delta exists. Then, \ud835\udc61allocates memory by updating the combined_offset, and write the edge-delta in the corresponding locations. The edge-delta stores the offset of the current \ud835\udc56\ud835\udc61\u210edelta-chain head, and stores the offset of the previous edge-delta of \ud835\udc52(\ud835\udc62, \ud835\udc63). 
Moreover, it stores \ud835\udc61as the new edge-delta\u2019s creation timestamp. If \ud835\udc61fails to lock the \ud835\udc56\ud835\udc61\u210edelta-chain, \ud835\udc61will abort. 3.3 Edge Read Operation GTX supports two types of edge read operations: Single-edge lookup and adjacency list scan. Single-edge lookup uses the delta-chains index and is identical as checking whether an edge exists in an edge write operation. Adjacency list scan is implemented as scanning the whole edge-deltas block. After locating the edge-deltas block, \ud835\udc61loads the combined_offset. The combined_offset determines where in the edge-deltas block the scan starts. \ud835\udc61scans all edge-deltas starting from there, and can see all edge-deltas created before and invalidated after (or not invalidated at all) \ud835\udc61\u2019s \ud835\udc5f\ud835\udc61\ud835\udc60. 3.4 Hybrid Commit GTX supports a hybrid group commit protocol. GTX manages a transaction table that provides \ud835\udc42(1) access time of looking up a transaction\u2019s status. A transaction registers a commit request by informing the commit manager. The commit manager keeps a global write epoch number, and assigns its value as the \ud835\udc64\ud835\udc61\ud835\udc60to the commit group of transactions by updating their status in the transaction table. After committing each group, the commit manager increments the global read epoch and global write epoch by 1. The committing transactions eagerly find and update their deltas\u2019 timestamps with their commit timestamp. Concurrent transactions may observe a delta\u2019s timestamp is a committed transaction (ID) by querying the transaction table, and update the delta\u2019s timestamp for the committing transaction. We call this cooperative commit protocol hybrid commit. It reduces group commit latency, and saves cost in synchronizing committing transactions and commit manager. 3.5 Block Consolidation and Garbage Collection Edge-deltas blocks may become overflowed and require allocating a new block for the most recent edge-deltas while recycling the old block with the outdated edge-deltas. GTX uses a latch-free state protection protocol inspired by [10] for each edge-deltas block that eliminates latches in shared memory and preserves cache coherence. Its core is analyzing the edge-deltas block, allocating a new block according to workload history, and migrating the latest version edge-deltas to the new block while concurrent transactions can still read the original edge-deltas block. After consolidating an edgedeltas block, the old deltas block is considered outdated and is placed in a queue. GTX tracks timestamps of current running transactions, and periodically has its worker threads free the memory blocks no transactions can access. 3.6 Transaction Example We present how GTX transactions lookup and update edges. We use \ud835\udc61\ud835\udc60, \ud835\udc61\ud835\udc60\ud835\udc50, \ud835\udc61\ud835\udc60\ud835\udc5f, \ud835\udc61\ud835\udc60\ud835\udc50\ud835\udc5f, and \ud835\udc61\ud835\udc60\ud835\udc56to indicate timestamp, commit_timestamp, read_timestamp, creation_timestamp, and invalidation_timestamp, respectively.Figure 2 illustrates how transactions execute edge insert, delete, and update. Transactions and their modification to the edge-deltas block and delta-chains index are in the same color. The edge-deltas block for Vertex 1 originally stores the 3 edges (1, 12), (1, 3), and (1, 5) using 3 edge-deltas and 2 delta-chains. 
Delta-chains Index Entry 0 stores \ud835\udc5c\ud835\udc53\ud835\udc53\ud835\udc60\ud835\udc52\ud835\udc61= 64 that points to (1, 12), and Entry 1 stores \ud835\udc5c\ud835\udc53\ud835\udc53\ud835\udc60\ud835\udc52\ud835\udc61= 192 pointing to (1, 5). Transaction \ud835\udc471 inserts a new edge (1, 8) with 32 bytes of property data. \ud835\udc471 calculates that Edge (1, 8) belongs to Delta-Chain 0 = 8 mod 2 where 2 is the number of delta-chains, and uses the 0\ud835\udc61\u210edelta-chains index entry 64 to locate the edge-deltas chain, and searches for a previous version of (1, 8). Then, it allocates space in both the delta region and the data region by updating combined_offset atomically. Next, it writes the new insert edge-delta and its property into the allocated region. Note that the new edge-delta has previous_offset=64 and previous_version=0 because it needs to point to the previous edge-delta on the deltachain, and Edge (1, 8) has no previous version. At commit phase, \ud835\udc471 updates delta-chains index entry to point to the new edge-delta, and the commit manager commits \ud835\udc471 at \ud835\udc61\ud835\udc60\ud835\udc50= 14. The edge-delta\u2019s \ud835\udc61\ud835\udc60\ud835\udc50\ud835\udc5fgets updated to 14. Next, Transaction \ud835\udc472 deletes Edge (1, 5). \ud835\udc472 uses the delta-chains index entry 1 to find the previous version of (1, 5) at offset 196. Then, it allocates the space for its edge-delta by updating the combined_offset. Since \ud835\udc472 is deleting (1, 5), it does not allocate space in the data region. \ud835\udc472 creates its delete edge-delta, and stores offset 192 as the new delta\u2019s previous version. \ud835\udc472 commits in a similar way by updating the delta-chains index Entry 1 and registers with the commit manager. After the commit manager commits \ud835\udc472 at \ud835\udc61\ud835\udc60\ud835\udc50= 20, the delete edge-delta\u2019s \ud835\udc61\ud835\udc60\ud835\udc50\ud835\udc5fand the previous version edge-delta\u2019s \ud835\udc61\ud835\udc60\ud835\udc56are both updated to 20. Transaction \ud835\udc473 updates Edge (1, 12) with a new 32 bytes property. \ud835\udc473 finds the previous version of (1, 12) at offset 64 using the delta-chain similarly. \ud835\udc473 allocates space and creates an update edge-delta for (1, 12) and commits, and the delta-chains index, the new edge-delta\u2019s \ud835\udc61\ud835\udc60\ud835\udc50\ud835\udc5f, and the previous version\u2019s \ud835\udc61\ud835\udc60\ud835\udc56get updated. Dataset Category Vertex Count Edge Count yahoo-songs real-world 1,625,951 256,804,235 edit-wiki real-world 50,757,442 572,591,272 graph500-24 synthetic 8,870,942 260,379,520 Table 1: Dataset Statistics 4 DEMONSTRATION AND EVALUATION We demonstrate GTX\u2019s abilities to ingest real-world graphs with hotspots and to efficiently execute concurrent read-write transactions and graph analytics. We evaluate GTX\u2019s performance against other state-of-the-art transactional graph systems, illustrating GTX\u2019s high throughput in transactions, and robustness across workloads while maintaining competitive graph analytics performance. 4.1 Demonstration Scenario We showcase GTX\u2019s high transaction throughput for workloads with temporal localities, while maintaining competitive graph analytics performance. 
We compare GTX's transaction throughput and graph analytics latency against state-of-the-art transactional graph systems. We adopt the Graph Framework Evaluation (GFE) from Teseo [5] and Sortledton [6] for evaluation and demonstration. We use two real-world power-law graph datasets, yahoo-songs (y-s) and edit-wiki (e-w) [8], and two sets of synthetic graph update logs (2.6 billion entries each) generated by [4] on graph500-24 [7]. One graph log shuffles edge updates and the other contains temporal localities. The statistics of the datasets can be found in Table 1. We execute the demonstration on a dual-socket machine with Intel(R) Xeon(R) Platinum 8368 CPU @ 2.40GHz processors with 156 CPUs and 192GB of DRAM. We run all evaluations in a single NUMA node if the memory size fits. All systems execute checked operations as transactions, checking an edge's existence before performing the write. This demonstration only considers undirected graphs, and each system treats an undirected edge as two directed edges. Therefore, for each edge e(u, v), each system creates a transaction that checks whether e(u, v) and e(v, u) exist, and inserts both edges. Table 2 (Graph Construction Throughput, txns/s; columns: y-s shuffled / y-s ordered / e-w shuffled / e-w ordered): GTX 6727920 / 4864010 / 3909976 / 3946243; Sortledton 4118089 / 443473 / 3686403 / 432638; Teseo 3534455 / 105234 / 2609071 / 24640; LiveGraph 723299 / 151309 / 603040 / 102398. Table 3 (Mixed-Workload Throughput, txns/s; columns: PR / SSSP / PR hotspot / SSSP hotspot): GTX 4365903 / 4265851 / 4959926 / 4265851; Sortledton 3568179 / 3490729 / 1364880 / N/A; LiveGraph 640138 / 630250 / 418388 / 405232. Table 4 (Mixed-Workload Latency, microseconds; columns: PR / SSSP / PR hotspot / SSSP hotspot): GTX 15993070 / 22575448 / 14020837 / 18414454; Sortledton 10525814 / 15560585 / 10391524 / N/A; LiveGraph 13720317 / 21072157 / 10633277 / 18571507. 4.2 Evaluation We run two sets of experiments to show GTX's high performance. The first experiment constructs real-world power-law graphs with (ordered) and without (shuffled) temporal localities to evaluate each system's transaction throughput (shown in Table 2). Then, we make each system process update logs with and without temporal localities and concurrently run graph analytics (PageRank (PR) and single-source shortest path (SSSP)). We report the transaction throughput and graph analytics latency in Tables 3 and 4. The results demonstrate a significant performance advantage of GTX over the other systems w.r.t. transaction throughput and analytics workloads. ACKNOWLEDGMENTS Walid G. Aref acknowledges the support of the National Science Foundation under Grant Number IIS-1910216."
17
+ }
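The delta-chain bookkeeping walked through in the GTX entry above (a delta-chain chosen as destination vertex mod number of delta-chains, new deltas linked to the previous chain head via previous_offset, and the delta-chains index redirected at commit) can be summarized in a small sketch. The sketch below is an illustrative Python model only, not GTX's actual C++ implementation; the class names, fields, and the dictionary standing in for the delta region are invented for illustration.

class EdgeDelta:
    def __init__(self, dst, kind, prev_offset, prev_version, creation_ts=None):
        self.dst = dst                    # destination vertex of the edge
        self.kind = kind                  # "insert" | "delete" | "update"
        self.prev_offset = prev_offset    # previous delta on the same delta-chain
        self.prev_version = prev_version  # offset of this edge's previous version (0 if none)
        self.creation_ts = creation_ts    # tscr, set to the commit timestamp at commit
        self.invalidation_ts = None       # tsi, set when a newer version commits

class EdgeDeltaBlock:
    def __init__(self, num_chains):
        self.deltas = {}                     # offset -> EdgeDelta; stands in for the delta region
        self.chain_index = [0] * num_chains  # delta-chains index: chain id -> head offset (0 = empty)
        self.next_offset = 64                # stands in for the atomic combined_offset

    def chain_of(self, dst):
        # An edge (u, dst) is assigned to delta-chain dst mod number-of-delta-chains.
        return dst % len(self.chain_index)

    def find_previous_version(self, dst):
        # Walk the delta-chain from the index head, following prev_offset links.
        offset = self.chain_index[self.chain_of(dst)]
        while offset:
            delta = self.deltas[offset]
            if delta.dst == dst:
                return offset
            offset = delta.prev_offset
        return 0

    def insert_edge(self, dst, commit_ts):
        chain = self.chain_of(dst)
        prev_version = self.find_previous_version(dst)
        offset = self.next_offset      # "allocate" space (an atomic fetch-and-add in the real system)
        self.next_offset += 64
        self.deltas[offset] = EdgeDelta(dst, "insert",
                                        prev_offset=self.chain_index[chain],
                                        prev_version=prev_version,
                                        creation_ts=commit_ts)
        # At commit, redirect the delta-chains index entry to the new delta and
        # invalidate the previous version, mirroring the T1/T2/T3 walk-through above.
        self.chain_index[chain] = offset
        if prev_version:
            self.deltas[prev_version].invalidation_ts = commit_ts
        return offset

block = EdgeDeltaBlock(num_chains=2)
block.insert_edge(dst=8, commit_ts=14)   # loosely mirrors T1 inserting edge (1, 8) committed at tsc = 14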
intro_8K/test_introduction_short_2405.01481v1.json ADDED
@@ -0,0 +1,18 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01481v1",
3
+ "title": "NeMo-Aligner: Scalable Toolkit for Efficient Model Alignment",
4
+ "abstract": "Aligning Large Language Models (LLMs) with human values and preferences is\nessential for making them helpful and safe. However, building efficient tools\nto perform alignment can be challenging, especially for the largest and most\ncompetent LLMs which often contain tens or hundreds of billions of parameters.\nWe create NeMo-Aligner, a toolkit for model alignment that can efficiently\nscale to using hundreds of GPUs for training. NeMo-Aligner comes with highly\noptimized and scalable implementations for major paradigms of model alignment\nsuch as: Reinforcement Learning from Human Feedback (RLHF), Direct Preference\nOptimization (DPO), SteerLM, and Self-Play Fine-Tuning (SPIN). Additionally,\nour toolkit supports running most of the alignment techniques in a Parameter\nEfficient Fine-Tuning (PEFT) setting. NeMo-Aligner is designed for\nextensibility, allowing support for other alignment techniques with minimal\neffort. It is open-sourced with Apache 2.0 License and we invite community\ncontributions at https://github.com/NVIDIA/NeMo-Aligner",
5
+ "authors": "Gerald Shen, Zhilin Wang, Olivier Delalleau, Jiaqi Zeng, Yi Dong, Daniel Egert, Shengyang Sun, Jimmy Zhang, Sahil Jain, Ali Taghibakhshi, Markel Sanz Ausin, Ashwath Aithal, Oleksii Kuchaiev",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "cs.CL",
9
+ "cats": [
10
+ "cs.CL",
11
+ "cs.AI",
12
+ "cs.LG"
13
+ ],
14
+ "label": "Original Paper",
15
+ "paper_cat": "Parameter AND Efficient AND Fine AND Tuning",
16
+ "gt": "Pre-training large language models on tremendous amounts of unlabelled text has show- cased promising capabilities (Brown et al., 2020; Zhang et al., 2022). While such unsuper- vised pre-trained models have achieved impressive results, subsequently aligning models to follow user instructions is a critical step to tap the capabilities of LLMs for practical use cases (Sanh et al., 2022; Wei et al., 2022). Attempts based on Supervised Finetuning (Conover et al., 2023; K\u00a8 opf et al., 2023; Taori et al., 2023) proved less effective compared to techniques that also made use of feedback to tune models towards responses that are more helpful and away from responses that are less so (Bai et al., 2022a; Ouyang et al., 2022; Touvron et al., 2023; Dong et al., 2023). Despite the benefits of training models using feedback, these pipelines are notoriously chal- lenging to get right (Lambert & Calandra, 2023; Zheng et al., 2023b), deterring widespread, productive adoption outside of select well-resourced organizations. For example, the popu- lar Proximal Policy Optimization (PPO) variant of Reinforcement Learning from Human Feedback (RLHF) approach (Ouyang et al., 2022) requires running a complicated pipeline with four large language models interacting in a complex manner during training. Such alignment algorithms introduce new system challenges for efficient training that require re-thinking various aspects of the software stack including model scalability, coordination among models, and text generation (so-called \u201drollout\u201d) within the training loop. There are existing open source tools for model alignment, most notably HuggingFace TRL (von Werra et al., 2020), CarperAI trlX (Havrilla et al., 2023) and Microsoft DeepSpeed- Chat (Yao et al., 2023). These tools provide an excellent starting point with respect to usability and feature set. However, with NeMo-Aligner we aim to vastly improve performance and scalability of PPO and other methods especially when it comes to the largest and most competent models such as Llama 2 70B (Touvron et al., 2023) and beyond. 1 arXiv:2405.01481v1 [cs.CL] 2 May 2024 Preprint. Under review. NeMo-Aligner addresses scalability challenges by (I) building upon Megatron-LM (Shoeybi et al., 2020) with 3D (data, tensor, and pipeline)-parallelism training, (II) having a distributed approach to Proximal Policy Optimization (PPO) training in RLHF and (III) integrating PPO inference optimizations based on TensorRT-LLM (NVIDIA, 2023b) during rollout stage. Combined, these optimizations allow users to efficiently train the largest models over hundreds of GPUs reducing research iteration time. NeMo-Aligner optimizes popular alignment techniques including Supervised Finetuning (SFT), PPO-based RLHF (Ouyang et al., 2022), Direct Preference Optimization (Rafailov et al., 2023), SteerLM (Dong et al., 2023) and Self-Play Fine-Tuning (Chen et al., 2024). We briefly outline the background for these techniques in Section 2, followed by an in-depth exploration of training with each of the techniques in Sections 3, 4, 5, and 6. Finally, we demonstrate the extensible design of NeMo-Aligner in Section 7.",
17
+ "main_content": "SFT Model Pretrained Model Annotated + retrained Model Annotated Data + Data 1. Supervised Fine-Tuning 2. Reward Model Training + SFT Model + Preference Data Data Reward Model 3. Proximal Policy Optimization Policy Network (Actor) e 1: Tra SFT Model Reward Model Model Policy Network (Actor) Value Network (Critic) + Prompts + Trained Policy Model Trainable Weights Frozen Weights Dataset Produce AND + Initialize (Actor) Figure 1: Training Recipe for RLHF based on Ouyang et al. (2022). Step 1: Annotated PromptResponse Data is used to perform Supervised Fine-Tuning on the pre-trained (base) Model. Step 2: The resulting SFT model is trained with Preference Data to produce a Reward Model. Step 3: The SFT Model is used to initialize the Policy Network, and the Reward Model is used to initialize the Value Network \u2013 together with input prompts, all four models are used to train a Policy Model. The SFT model is also used to compute the KL divergence penalty in Step 3 (not illustrated). 2.1 Supervised Fine Tuning Given a pre-trained (also referred to as \u201dbase\u201d) model, supervised fine-tuning (SFT) updates the base model\u2019s parameters on prompts with expected responses, where the expected responses might come from expert human annotations (K\u00a8 opf et al., 2023) or other language models (Ding et al., 2023). The model is trained to mimic the expected responses given prompts using the token-level cross-entropy loss. SFT is an important prerequisite step in Reinforcement Learning from Human Feedback (Ouyang et al., 2022) and Direct Preference Optimization (Rafailov et al., 2023) because without it, the base model is very unlikely to generate responses which follow user\u2019s instructions. This step is also sometimes called behaviour cloning because the model is expected to mimic responses of a human or another model. 2.2 Reinforcement Learning from Human Feedback Reinforcement Learning from Human Feedback (RLHF) was introduced by Christiano et al. (2017) as a way to avoid manually defined reward functions in Reinforcement Learning. 2 Preprint. Under review. Instead, a reward model is trained from a dataset of human preferences consisting of pairs of \u201cchosen\u201d and \u201crejected\u201d trajectories. The reward model\u2019s loss, derived from the BradleyTerry model (Bradley & Terry, 1952), tries to maximize the likelihood that rchosen > rrejected (i.e., that the predicted rewards are consistent with human preferences). Once the reward model is trained, it may be used to compute rewards for RL algorithm. Two most common methods used in RLHF are REINFORCE (Williams, 1992) and Proximal Policy Optimization (PPO) (Schulman et al., 2017). In NeMo-Aligner we focus on PPO, specifically as described by Ouyang et al. (2022). RLHF has been shown to bring significant benefits for model alignment (Ouyang et al., 2022; Bai et al., 2022a; Touvron et al., 2023) with the typical training recipe being as follows, also illustrated in Figure 1: 1. From a pre-trained base model, train an initial SFT model as described in Section 2.1. 2. From the SFT model, train a reward model using a dataset of human preferences made of pairs of \u201cchosen\u201d and \u201crejected\u201d responses to a set of prompts, following Christiano et al. (2017). Typically, we initialize a linear reward model head on top of the SFT model before training. 3. 
From the SFT model, train a policy with the online Proximal Policy Optimization algorithm (PPO, Schulman et al., 2017), with rewards provided by the trained reward model. Input prompts may not necessarily be the same as those used for reward model training. A regularization term based on the KL divergence w.r.t. the SFT model helps prevent the policy from straying too far away from its starting point and exploiting the \"blind spots\" of the reward model (Stiennon et al., 2020; Ouyang et al., 2022). The PPO critic is typically initialized from the reward model. 2.3 Direct Preference Optimization Direct Preference Optimization (Rafailov et al., 2023) is an offline, off-policy algorithm that makes use of preference data to directly train an optimal policy without an explicit reward model. Rather than use a reward model, a reference policy is used to implicitly derive the reward between a chosen and rejected pair via the Bradley-Terry model. This is accomplished via the difference in the log probabilities between the chosen and rejected responses, which is calculated for the optimal and reference policies. This difference is scaled and then transformed by the sigmoid function to derive the loss. The reference policy is frozen during training and represents the policy used to generate the chosen/rejected responses to the given prompts. If the reference policy used to generate the preference data is not available, it can be approximated by supervised fine-tuning on the prompts and preferred responses of the preference data. 2.4 SteerLM SteerLM (Dong et al., 2023) is a model alignment algorithm based on supervised fine-tuning that avoids the use of complex RL methods, similarly to DPO. SteerLM involves three steps. The first step is to train an Attribute Prediction Model that learns to predict the values (between 0 and 4, where higher is more) for various semantic aspects of a response that make responses helpful and safe, such as its correctness and toxicity (K\u00f6pf et al., 2023; Wang et al., 2023). Next, the Attribute Prediction Model can be used to annotate the various attributes contributing to helpfulness and safety in a diverse set of prompt-response datasets. Finally, these annotated datasets can be used to perform Attribute-Conditioned Supervised Fine-Tuning where the model learns to generate the response conditioned on the prompt as well as the annotated attributes formatted into a string, such as helpfulness:4,correctness:4,toxicity:0. This step teaches the model to discriminate between responses that are more helpful/safe and those that are less, in a fine-grained manner for each semantic aspect. At inference time, the prompt can be appended with the optimal attribute values, as above, to generate the most helpful response. 2.5 Self-Play Fine-Tuning Self-Play Fine-Tuning (SPIN) (Chen et al., 2024) is a self-play-based algorithm, where a strong model is developed from a weaker model by playing against previous instances of itself. Starting from an SFT dataset of prompt/response pairs, new responses are generated from previous iterations of the model. Its policy is then improved by discriminating between these self-generated responses and the ground truth human-generated SFT responses. This is accomplished through a preference loss function which is identical to the one used by DPO (Section 2.3). When SPIN training first starts, we use a copy of the initial policy as the reference policy in the DPO loss. 
The self-play \"game\" is then played for a number of iterations during which we train the policy as in DPO whilst keeping the reference policy frozen, and at the end of each iteration we update the reference policy's weights with those from the trained policy. During each iteration, we iterate over our SFT training dataset and use the reference policy to generate responses for each prompt, building a preference tuple between the ground truth SFT human \"chosen\" response and the generated \"rejected\" response. Once we have these preference tuples for the entire epoch, we update the model weights via the DPO loss function from these tuples of \"(chosen, rejected)\" preference pairs. The model thus implicitly learns to prefer the ground truth SFT responses to those generated by the previous iteration of itself, which forms the self-play mechanism. 3 RLHF (PPO) Training NeMo-Aligner is designed to support numerous alignment techniques efficiently at extremely large scales. It does so by building upon Megatron-LM (Shoeybi et al., 2020) and NeMo (Kuchaiev et al., 2019) to include features such as optimized kernels from Transformer Engine (NVIDIA, 2022), a distributed fused Adam optimizer, and 3D parallelism support. NeMo-Aligner supports the entire RLHF pipeline as introduced by Ouyang et al. (2022) and described in Section 2.2. The training pipeline is separated into three distinct stages as illustrated in Figure 1: Supervised Fine-Tuning, Reward Model Training, and Proximal Policy Optimization. The challenges with pipeline efficiency come primarily from the Proximal Policy Optimization stage, and this section describes our approach to tackling these challenges, as summarized in Figure 2. Figure 2: Optimizations for RLHF training. Optimizations for PPO training and inference are detailed in Sections 3.1 and 3.2 respectively. 3.1 Distributed Approach to PPO training The PPO stage requires running training and/or inference on four different models, as illustrated in Figure 3: 1. PPO Actor (training and inference, initialized from the SFT model): The model we want to fine-tune with PPO. 2. Reference Policy (inference only, set to the SFT model): The model to compute the KL penalty against. 3. PPO Critic (training and inference, initialized from the reward model): Used in PPO to compute value estimates. 4. Reward Model (inference only): Provides RL rewards on generated rollout data. All of these models can be extremely large (e.g. Llama 2 70B), so NeMo-Aligner takes a distributed approach to PPO training. We allow users to set up PyTriton (NVIDIA, 2022) servers and clients to communicate across the different models during PPO. These PyTriton servers make it possible to run the models on different compute clusters, removing the requirement of having both the critic and actor on the same compute allocation. Naively, four different servers (i.e. one for each model) would be launched. However, we note that the reference policy and PPO actor are the same model but with different weights. Therefore, we combine them into one job and offload the reference policy's weights to CPU, swapping them with the actor's weights for the reference policy inference step. We deploy the same strategy for the reward model and critic. All communications are done asynchronously, permitting pipelined critic inference/training with policy inference/training. 
We scale compute allocation sizes such that [reward model inference + critic inference] \u2248 [actor sampling + reference policy inference] and [critic train] \u2264 [actor train + actor inference initialization]. This ensures that the pipeline can use available compute capacity most efficiently. Figure 3: NeMo-Aligner PPO System Architecture. The PPO Actor is a PyTriton (NVIDIA, 2022) client that sends async requests to the server (PPO critic and reward model) to obtain the rewards and values of generated rollouts, and to send the training data for the critic. 3.2 Optimizations for PPO rollout Response generation during the rollout step is the main bottleneck during PPO training. The generation stage of the actor is composed of multiple forward passes, with one token generated per forward pass. Therefore, generation stage kernels are generally launch-latency and memory-bandwidth bound, meaning that directly reusing the compute-optimized forward pass implementation of the training stage results in very poor performance. To address these bottlenecks, we implement the generation stage using TensorRT-LLM (NVIDIA, 2023b), a high-performance LLM deployment framework. TensorRT-LLM integrates inference-optimized kernels and automatic kernel fusion into a TensorRT-based runtime to achieve better performance. At the start of RLHF, the model is passed to TensorRT-LLM, which takes the model and compiles it into a TensorRT engine; TensorRT-LLM loads the engine into its runtime and performs generation. The engine contains a copy of the model weights, while the runtime allocates the memory required by the KV-Cache and activations. Before and after generation, the engine and runtime are loaded and offloaded respectively. Because generation has lower memory requirements than training, we reshard the model to only use tensor parallelism. On subsequent training steps, the engine must be synced with updated parameter weights from training. Because generation must wait until the weights are synced, we update the engine in-place using the TensorRT Refitter (NVIDIA, 2023c) instead of recompiling the engine, which would incur a large overhead. In addition, we observe that large discrepancies of generation time occur between the fastest and the slowest data parallel worker during generation due to the differences in response lengths. To mitigate this, we set up a worker pool to dynamically load balance among data parallel workers to give workers with shorter generations correspondingly more work. 3.3 Model Training Details and Quality As a demonstration of practical large-scale RLHF training with NeMo-Aligner, we first perform SFT on a Llama 2 70B model using a mixture of public and proprietary data, then train a Llama 2 13B reward model using the Anthropic HH-RLHF dataset (Bai et al., 2022a), selecting the checkpoint with the lowest validation loss. Subsequently, we perform PPO using the same dataset. For each PPO iteration, we set the rollout batch size to 512, the optimizer batch size to 64, the KL penalty (\u03b2) to 0.005 and the temperature to 1. We use a constant learning rate of 9e-7 after 10 warmup steps with the AdamW optimizer. The model was trained for 100 rollout steps, based on validation performance on a held-out validation set evaluated every 4 steps. Following Jiang et al. (2023); Tunstall et al. (2023), we use MT-Bench (Zheng et al., 2023a) to evaluate the performance of the trained RLHF Model. 
The resulting model achieves a performance of 7.59 on MT-Bench, which is substantially higher than the 6.86 achieved by Llama 2 70B Chat (Touvron et al., 2023). We have openly released this model alongside its training resources on Huggingface for reproducibility but avoid mentioning links here to maintain anonymity. For reproducibility, we release the SFT blend [1], reward model [2], and RLHF-ed model [3] publicly. 3.4 Scalability Table 1 (columns: 8 + 2 / 16 + 4 / 32 + 8 actor + critic nodes). Time per step in seconds (std.), lower is better: Overall 187.6 (13.770) / 111.4 (8.904) / 69.9 (2.583); Train 32.8 (0.161) / 19.4 (0.040) / 13.1 (0.038); Rollout 154.9 (13.917) / 91.9 (8.912) / 56.8 (2.554); Response generation 110.2 (14.107) / 61.9 (8.511) / 33.5 (2.447); Log-probs calculation 28.8 (0.006) / 14.9 (0.021) / 8.2 (0.009); TensorRT Refit 12.1 (0.131) / 11.8 (0.203) / 11.8 (0.151); Critic wait 0.1 (0.002) / 0.1 (0.001) / 0.1 (0.001). Relative speed up vs. the 8 + 2 node setup, higher is better: Overall 1x / 1.68x / 2.68x; Train 1x / 1.69x / 2.50x; Rollout 1x / 1.69x / 2.73x; Response generation 1x / 1.78x / 3.29x; Log-probs calculation 1x / 1.93x / 3.51x. Table 1 caption: Effects of scaling training across different numbers of compute nodes for Llama 2 70B actor and Llama 2 13B critic with a rollout batch size of 512 and BF16 precision following Section 3.3. Nodes are 8*H100-80GB-SXM connected with intra-node NVLink (NVIDIA, 2023a) and inter-node Infiniband (NVIDIA, 2024) interconnects. Time per step is calculated based on the mean of 5 steps after the first step, as the first step incurs substantial time for TRT-LLM engine building. Further training configuration details are in Table 5. To demonstrate the scaling efficiency of NeMo-Aligner, we repeat identical training setups from Section 3.3, with (i) 8 actor nodes + 2 critic nodes, (ii) 16 actor nodes + 4 critic nodes and (iii) 32 actor nodes + 8 critic nodes, respectively. As shown in Table 1, the overall time per step reduces correspondingly, achieving a 1.68x speed up between 8+2 nodes and 16+4 nodes as well as a 2.68x speed up when further scaling to 32+8 nodes. The speed up in overall time per step comes from speed ups in both the Train and Rollout stages, demonstrating the effective optimization that NeMo-Aligner has done for both stages. [1] https://huggingface.co/datasets/nvidia/sft datablend v1 [2] https://huggingface.co/nvidia/NV-Llama2-13B-RLHF-RM [3] https://huggingface.co/nvidia/NV-Llama2-70B-RLHF-Chat The scaling of the Train stage is sublinear (1.69x and 2.50x respectively when nodes double and quadruple) due to the number of micro-batches per data parallel rank decreasing as node count increases. Because all pipeline stages must complete before the optimizer is called in pipeline-parallel models, we incur an overhead to fill and drain the pipeline that is independent of the number of micro-batches (Shoeybi et al., 2020). Therefore, decreasing the number of micro-batches per data parallel rank increases the proportion of the train step spent in filling and draining the pipeline, where GPU utilization is poor. Within the Rollout stage, the Response generation and Log-probs calculation steps scale well with the number of nodes, achieving a 1.78x/3.29x and 1.93x/3.51x speed up when scaling to twice and four times the number of nodes respectively. This is because scaling up the number of actor nodes proportionally increases the number of data parallel workers for each step, which can evenly share the work.
On the other hand, the TensorRT Refit step within the Rollout stage takes a constant amount of time (11.8-12.1 seconds) irrespective of the number of compute nodes, as the in-place engine sync time is dominated by time spent reloading the engine. As a result, as the number of nodes increases from 8+2 to 32+8, the time taken by TensorRT Refit as a proportion of the Rollout stage increases from 7.81% to 20.8%. Finally, async communications between the Actor and the Critic models result in the additional time spent waiting for the Critic model being inconsequential (0.1 seconds), suggesting the effectiveness of having async non-blocking calls between actor and critic models in the PPO pipeline. Table 2 (columns: 32 + 16 / 64 + 32 actor + critic nodes). Time per step in seconds (std.), lower is better: Overall 147.0 (26.720) / 90.1 (14.167); Train 21.2 (0.865) / 15.1 (0.299); Rollout 125.8 (26.010) / 75.0 (13.916); Response generation 95.8 (26.274) / 51.9 (14.182); Log-probs calculation 14.8 (0.010) / 8.1 (0.008); TensorRT Refit 12.3 (0.281) / 11.7 (0.162); Critic wait 0.1 (0.002) / 0.1 (0.014). Relative speed up vs. the 32 + 16 node setup, higher is better: Overall 1x / 1.63x; Train 1x / 1.40x; Rollout 1x / 1.68x; Response generation 1x / 1.85x; Log-probs calculation 1x / 1.83x. Table 2 caption: Effects of scaling training on 32 actor + 16 critic nodes and 64 actor + 32 critic nodes for Llama 2 70B actor and Llama 2 70B critic with a rollout batch size of 1024 and BF16 precision. Nodes are 8*H100-80GB-SXM connected with intra-node NVLink (NVIDIA, 2023a) and inter-node Infiniband (NVIDIA, 2024) interconnects. Time per step is calculated based on the mean of 5 steps after the first step, as the first step incurs substantial additional time for TRT-LLM engine building. Further training configuration details are in Table 5. System scalability also needs to be considered in the context of the problem requirements. The training setup in Section 3.3 has a 70B Llama 2 Actor, a 13B Llama 2 Critic, and a rollout batch size of 512. Such a setup limits the effective demonstration of our system scaling beyond 32 + 8 nodes as there is not enough work to be meaningfully shared across more data parallel workers. Therefore, we modify the setup slightly to use a 70B Llama 2 Critic and a rollout batch size of 1024 in Table 2 in order to measure the system performance when the requirements are higher. We use more nodes for the critic with the corresponding number of actor nodes (i.e. 32 + 16 instead of 32 + 8) because the critic is larger (70B vs. 13B) and hence requires more compute to run effectively. Table 2 shows that the increased requirements of the training job allow it to meaningfully scale to 64 + 32 nodes (with 768 H100 GPUs total) for various stages within PPO. 3.5 What contributes to system performance? To better understand the importance of each aspect of NeMo-Aligner's PPO system design, we conduct ablation studies by removing one aspect at a time and measuring the overall time per step as shown in Table 3. We find that TensorRT-LLM Integration is the most critical component for high system performance, without which PPO will take nearly seven times as long for each step. This is followed by using TensorRT Refit to avoid TensorRT-LLM engine recompiling (1.46x), the use of async requests between actor and critic models (1.29x) and finally load-balancing of data parallel workers during generation using a worker pool (1.08x). 
Table 3 (columns: time per step in seconds (std.), lower is better; time relative to the Optimal RLHF Setup, lower is better): Optimal RLHF Setup 187.6 (13.8), 1x; removing TensorRT-LLM Integration (i.e. using NeMo Generate) 1305.9 (160.9), 6.96x; removing TensorRT Refit 278.6 (7.8), 1.46x; removing Async Requests 241.4 (16.6), 1.29x; removing Worker Pool 203.1 (20.6), 1.08x. Table 3 caption: Ablation studies on training Llama 2 70B actor and Llama 2 13B critic with a rollout batch size of 512 on 8 actor nodes and 2 critic nodes. Nodes are 8*H100-80GB-SXM connected with intra-node NVLink (NVIDIA, 2023a) and inter-node Infiniband (NVIDIA, 2024) interconnects. Time per step is calculated based on the mean of 5 steps after the first step, as the first step incurs substantial additional time for TRT-LLM engine building. 4 DPO Training We follow the Zephyr-7B-Beta (Tunstall et al., 2023) training recipe, a model trained with SFT and DPO. Briefly, SFT was first performed on Mistral-7B (Jiang et al., 2023) using the Ultrachat dataset (Ding et al., 2023). The model was then further trained with DPO using the Ultrafeedback dataset (Cui et al., 2023). For SFT, we used a constant learning rate of 2e-5, a global batch size of 512, and trained the model for 3 epochs. For DPO training, we used a KL regularization coefficient of 3e-4, a global batch size of 512 and a cosine learning rate schedule with a peak LR of 1e-7, a minimum LR of 1e-8, 50 warmup steps, and max. 300 steps. We obtain slightly better MT-Bench scores than those reported by Tunstall et al. (2023) for both the final model (7.60 vs 7.34) and the SFT-only initial model (6.77 vs 6.64). 5 SteerLM Training with LoRA Low Rank Adaptation (Hu et al., 2021) enables fine-tuning large language models in a more efficient and cost-effective manner. Supported for various alignment techniques within NeMo-Aligner, LoRA is applied to SteerLM training following the training recipe by Wang et al. (2023) using the Llama 2 70B model as well as the HelpSteer (Wang et al., 2023) and Open Assistant datasets (K\u00f6pf et al., 2023). Specifically, we applied LoRA to all attention layers, with a rank of 32. We used a global batch size of 128, a constant learning rate of 1e-5 after 10 warmup steps with the AdamW optimizer, and trained for 3 epochs. As shown in Table 4, applying LoRA to SteerLM training with BF16 can reduce the minimum number of 80GB GPUs required from 32 to 8. With the same number of GPUs, LoRA achieves a 5\u00d7 speedup compared to full-parameter fine-tuning, while maintaining comparable model performance: MT-Bench 7.43 vs. 7.54, which is within the noise level for this benchmark (Jiang et al., 2023). As we increase the number of GPUs used for LoRA training, the relative throughput (measured in samples per second) improves almost proportionally, as shown in Figure 4. This shows that NeMo-Aligner can effectively distribute and parallelize the workload across a large number of GPUs with minimal overhead and diminishing returns. Table 4 (Full-Param vs. LoRA): # trainable params 70B vs. 89M; min # 80GB GPUs required 32 vs. 8; relative speed (sample/GPU/s) 1\u00d7 vs. 5\u00d7; MT-Bench 7.54 vs. 7.43. Table 4 caption: Comparison of Full-Parameter and LoRA SteerLM following the training recipe by Wang et al. (2023). Figure 4 (x-axis: number of GPUs; y-axis: relative throughput in samples/second; curves: ours vs. a linear-scaling upper bound): Relative throughput of LoRA applied to SteerLM training as the number of GPUs increases. 6 SPIN Training We recreate the Zephyr-7B-Beta (Tunstall et al., 2023) SFT model via SPIN instead of SFT as formulated by Chen et al. (2024).
We start with the Mistral-7B base model (Jiang et al., 2023) and perform SPIN training following Chen et al. (2024). However, we make a few departures from their methodology, in that we do not inject generations from the previous iteration into the current iteration (which would double the dataset size every epoch), and we only train for a single iteration, with 1 epoch per iteration. Additionally, we use a random subset of only 50k samples from Ultrachat200k (Ding et al., 2023) rather than the entire dataset, and use AdamW instead of RMSProp. Our learning rate is 5e-7 with 400 total steps and 40 warmup steps, and this LR is then decayed to 1e-7 for the last 100 steps using cosine annealing. The global batch size is 64, weight decay is 0.0, and the KL regularization coefficient is 0.1, as per Chen et al. (2024). Using this approach, we achieve an MT-Bench score of 7.04, which exceeds the 6.64 of Zephyr-7B-Beta using SFT (Tunstall et al., 2023), as well as the 6.78 of the 3-iteration SPIN model (Chen et al., 2024). 7 Framework Extensibility We design NeMo-Aligner with extensibility in mind, allowing users to easily modify algorithms in spite of the complexities of distributed training. We do so using the trainer abstraction, which encourages re-use of existing trainer methods across various steps and approaches. The extensibility of NeMo-Aligner allows variants of DPO to be integrated with minimal code changes, including Identity Preference Optimization (Azar et al., 2023), Conservative DPO (Mitchell, 2023), and Kahneman-Tversky Optimization (Ethayarajh et al., 2023). Furthermore, other model alignment techniques such as Constitutional AI (Bai et al., 2022b), Rejection Sampling (Touvron et al., 2023), and Self-Rewarding Language Models (Yuan et al., 2024) are also being incorporated into NeMo-Aligner, facilitated by the framework design. 8 Conclusion Modern model alignment techniques, especially those based on Reinforcement Learning, pose complex optimization challenges with respect to system implementation. We create and open-source NeMo-Aligner to allow AI researchers and practitioners to efficiently experiment with LLM alignment by utilizing all available compute in a scalable way. Our framework consistently scales well when training large models with more compute. As this is our initial release, we expect this scaling to only improve with future versions. Additionally, we support SFT, PPO, DPO, and SteerLM in a parameter-efficient manner using LoRA for compute-limited settings. As an Apache 2.0 licensed open-source codebase, NeMo-Aligner can make alignment research more efficient and accessible. Acknowledgements We would like to thank many teams at NVIDIA who contributed towards enabling NeMo-Aligner, especially the NeMo, TRT-LLM and TensorRT teams."
18
+ }
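The DPO objective summarized in Section 2.3 of the entry above (an implicit reward derived from the log-probability ratio between the trained policy and a frozen reference policy, pushed through a Bradley-Terry sigmoid) can be written out in a few lines. The following is a minimal PyTorch sketch, not NeMo-Aligner's implementation; the tensor names are illustrative, the log-probabilities are assumed to already be summed over response tokens, and the default beta of 3e-4 simply echoes the KL regularization coefficient quoted in the DPO recipe above.

import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=3e-4):
    # Implicit reward of each response: scaled log-probability ratio vs. the frozen reference policy.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Bradley-Terry preference likelihood: negative log-sigmoid of the reward margin.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy usage: random numbers stand in for per-response log-probabilities summed over tokens.
p_c, p_r = torch.randn(4), torch.randn(4)
r_c, r_r = torch.randn(4), torch.randn(4)
print(dpo_loss(p_c, p_r, r_c, r_r))

Because the reference policy only contributes fixed log-probabilities, it can be evaluated once per batch and kept frozen, which is what makes DPO an offline, off-policy method compared to the four-model PPO pipeline described in Section 3.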
intro_8K/test_introduction_short_2405.01496v1.json ADDED
@@ -0,0 +1,16 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01496v1",
3
+ "title": "LocInv: Localization-aware Inversion for Text-Guided Image Editing",
4
+ "abstract": "Large-scale Text-to-Image (T2I) diffusion models demonstrate significant\ngeneration capabilities based on textual prompts. Based on the T2I diffusion\nmodels, text-guided image editing research aims to empower users to manipulate\ngenerated images by altering the text prompts. However, existing image editing\ntechniques are prone to editing over unintentional regions that are beyond the\nintended target area, primarily due to inaccuracies in cross-attention maps. To\naddress this problem, we propose Localization-aware Inversion (LocInv), which\nexploits segmentation maps or bounding boxes as extra localization priors to\nrefine the cross-attention maps in the denoising phases of the diffusion\nprocess. Through the dynamic updating of tokens corresponding to noun words in\nthe textual input, we are compelling the cross-attention maps to closely align\nwith the correct noun and adjective words in the text prompt. Based on this\ntechnique, we achieve fine-grained image editing over particular objects while\npreventing undesired changes to other regions. Our method LocInv, based on the\npublicly available Stable Diffusion, is extensively evaluated on a subset of\nthe COCO dataset, and consistently obtains superior results both quantitatively\nand qualitatively.The code will be released at\nhttps://github.com/wangkai930418/DPL",
5
+ "authors": "Chuanming Tang, Kai Wang, Fei Yang, Joost van de Weijer",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Diffusion AND Model",
14
+ "gt": "Text-to-Image (T2I) models have made remarkable progress and demonstrated an unprecedented ability to gen- erate diverse and realistic images [35, 42, 47, 48, 63]. T2I models are trained on gigantic language-image datasets, ne- *Equal contributions \u2020Corresponding Author cessitating significant computational resources. However, despite their impressive capabilities, they do not directly support real image editing, and they typically lack the ca- pability to precisely control specific regions in the image. Recent research on text-guided image editing allows users to manipulate an image using only text prompts [8, 17, 54, 60, 62]. In this paper, we focus on text-guided editing, where we aim to change the visual appearance of a specific source object in the image. Several of the existing meth- ods [29, 37, 39, 52] use DDIM inversion [49] to attain the initial latent code of an image and then apply their proposed editing techniques along the denoising phase. Nonetheless, present text-guided editing methods are susceptible to inad- vertent alterations of image regions. This arises from the heavy reliance of existing editing techniques on the preci- sion of cross-attention maps. DPL [53] observes the phe- nomenon that the cross-attention maps from DDIM [49] and NTI [37] do not only correlate with the corresponding ob- jects. This phenomenon is attributed to cross-attention leak- age, which is the main factor impeding these image edit- ing methods to work for complex multi-object images. To address this, DPL enhances the cross-attention by incorpo- rating additional attention losses. However, DPL relies on a relatively weak connection between the noun and its as- sociated object. This connection occasionally tends to be weak and results in unsatisfactory performance. Further- more, given recent advancements in text-based segmenta- tion and detection foundation models [27, 31, 58, 65], it is now straightforward to automatically obtain strong localiza- tion priors into general applications. In this paper, we include localization priors to offer en- hanced mitigation against cross-attention leakage. With the introduction of localization priors, our approach, named Localization-aware Inversion (LocInv), involves updating the token representations associated with objects at each arXiv:2405.01496v1 [cs.CV] 2 May 2024 Adjective binding DDIM\u00a0 inversion LocInv\u00a0 inversion 1. Provided 2. Model Localization prior guidance Adjective binding segmentation maps or bounding boxes Cross-attention\u00a0 enhancement Figure 1. Compared with the naive DDIM inversion, our method LocInv aims at enhancing the cross-attention maps by applying localization priors (segmentation maps or detection bounding boxes provided by the datasets or foundation models) to guide the inversion processes. Furthermore, to force strong bindings between adjective and noun words, we constrain the cross-attention similarity between them. timestep, a technique akin to dynamic prompt learning [53]. In both segmentation and detection scenarios, we opti- mize two losses\u2014namely, the similarity loss and overlap- ping loss\u2014to ensure that the cross-attention maps align closely with the provided priors. Moreover, to accommo- date situations in which adjectives describe their associ- ated noun words, we incorporate an additional similarity loss to reinforce the binding between them. In the ex- periments, we quantitatively evaluate the quality of cross- attention maps on a dataset COCO-edit collected from MS- COCO [30]. 
We further combine LocInv with P2P [18] to compare with other image editing methods. LocInv shows superior evaluation metrics and improved user evaluation. Furthermore, we qualitatively show prompt-editing results for Word-Swap and Attribute-Edit.",
15
+ "main_content": "Inversion based editing is mainly relying on the DDIM inversion [49], which shows potential in editing tasks by deterministically calculating and encoding context information into a latent space and then reconstructing the original image using this latent representation. However, DDIM is found lacking for text-guided diffusion models when classifier-free guidance (CFG) [20] is applied, which is necessary for meaningful editing. Leveraging optimization on null-text embedding, Null-Text Inversion (NTI) [37] further improved the image reconstruction quality when CFG is applied and retained the rich text-guided editing capabilities of the Stable Diffusion model [45]. Negative-prompt inversion (NPI) [36] and ProxNPI [16] reduces the computation cost for the inversion step while generating similarly competitive reconstruction results as Null-text inversion. Direct Inversion [24] further enhances the inversion technique by adjusting the editing direction in each timestep to offer essential content preservation and edit fidelity. IterInv [50] generalizes the inversion to the DeepFloyd-IF T2I model [48]. Text-guided editing methods [3, 10, 26, 28, 29] of recent researches [23, 34, 40] in this topic adopt the large pretrained text-to-image(T2I) models [7, 13, 21, 43, 44, 47] for controllable image editing. Among them, Imagic [25] and P2P [18] attempt structure-preserving editing via Stable Diffusion (SD) models. However, Imagic [25] requires fine-tuning the entire model for each image. P2P [18] has no need to fine-tune the model and retrains the image structure by assigning cross-attention maps from the original image to the edited one in the corresponding text token. InstructPix2Pix [4] is an extension of P2P by allowing human-like instructions for image editing. NTI [37] further makes the P2P capable of handling real images. Recently, pix2pix-zero [39] propose noise regularization and crossattention guidance to retrain the structure of a given image. DiffEdit [11] automatically generates a mask highlighting regions of the input image by contrasting predictions conditioned on different text prompts. PnP [52] demonstrated that the image structure can be preserved by manipulating spatial features and self-attention maps in the T2I models. There are also text-guided inpainting methods [15, 33, 38, 45] to achieve the editing purposes given user-specific There are also text-guided inpainting methods [15, 33, 38, 45] to achieve the editing purposes given user-specific masks. For example, Blended diffusion [1] adapts from a pre-trained unconditional diffusion model and encourages the output to align with the text prompt using the CLIP score. Blended latent diffusion (BLD) [2] further extend to the LDM [45]. Nonetheless, inpainting methods primarily concentrate on filling arbitrary objects in specified regions while ensuring visual coherence with the surrounding areas. These methods do not inherently preserve semantic similarity between the source and target objects, as is required for image translation effects. Text-based segmentation and detection models aim at segmenting or detecting arbitrary classes with the help of language generalization property after pretraining. One of Editing Image\u00a0 Diffusion model Diffusion model Reconstruction Inversion Denoise Text (1) overlapping loss\u00a0 A woman\u00a0and a brown dog\u00a0on a boat. Tokenizer Embedding lookup Text Transformer ? ? ? ? ? ? ? ? ? ? ? ? 
the most representative prompt-based segmentation models is SAM [27]. (Figure 2: Illustration of our proposed method LocInv. The image I comes with its localization prior denoted as S (segmentation maps or detection boxes). For each timestamp t, the noun (and optionally adjective) words in the text prompt are transformed into dynamic tokens, as introduced in Sec 3.2. In each denoising step \u00af z_{t-1} \u2192 \u00af z_t, we update the dynamic token set V_t with our proposed overlapping loss, similarity loss and adjective binding loss, in order to ensure high-quality cross-attention maps.) Given an image and a visual prompt (box, points, text, or mask), SAM encodes image and prompt embeddings using an image and prompt encoder, respectively, which are then combined in a lightweight mask decoder that predicts segmentation masks. Similar works include CLIPSeg [32], OpenSeg [14], GroupViT [58], etc. For prompt-based object detection, GroundingDINO [31] stands out as the state-of-the-art method by grounding the DINO [6] detector with language pre-training for open-set generalization. Beyond that, MaskCLIP [12], Xdecoder [64], and UniDetector [56] also offer prompt-based detectors. Leveraging these foundational models, we can acquire localization information as a valuable semantic prior to enhance image inversion. This, in turn, contributes to an overall improvement in image editing performance. 3. Methodology In this section, we provide the description of our method LocInv. An illustration of our method is shown in Fig. 2 and the pseudo-code in Algorithm 1. 3.1. Preliminary Latent Diffusion Models. We use Stable Diffusion v1.4, which is a Latent Diffusion Model (LDM) [45]. The model is composed of two main components: an autoencoder and a diffusion model. The encoder E from the autoencoder component of the LDMs maps an image I into a latent code z_0 = E(I) and the decoder reverses the latent code back to the original image as D(E(I)) \u2248 I. The diffusion model can be conditioned on class labels, segmentation masks or textual input. Let \u03c4_\u03b8(y) be the conditioning mechanism which maps a condition y into a conditional vector for LDMs; the LDM model is updated by the loss: L_ldm = E_{z0\u223cE(x), y, \u03f5\u223cN(0,1)} [ \u2225\u03f5 \u2212 \u03f5_\u03b8(z_t, t, \u03c4_\u03b8(y))\u2225_2^2 ] (1) The neural backbone \u03f5_\u03b8 is typically a conditional UNet [46] which predicts the added noise. More specifically, text-guided diffusion models aim to generate an image from a random noise z_T and a conditional input prompt P. To distinguish from the general conditional notation in LDMs, we itemize the textual condition as C = \u03c4_\u03b8(P). DDIM inversion. Inversion aims to find an initial noise z_T reconstructing the input latent code z_0 upon sampling. Since we aim at accurate reconstruction of a given image for image editing, we employ the deterministic DDIM sampler: z_{t+1} = \u221a\u1fb1_{t+1} f_\u03b8(z_t, t, C) + \u221a(1 \u2212 \u1fb1_{t+1}) \u03f5_\u03b8(z_t, t, C) (2) where \u1fb1_{t+1} is the noise scaling factor defined in DDIM [49] and f_\u03b8(z_t, t, C) predicts the final denoised latent code z_0 as f_\u03b8(z_t, t, C) = [ z_t \u2212 \u221a(1 \u2212 \u1fb1_t) \u03f5_\u03b8(z_t, t, C) ] / \u221a\u1fb1_t. Null-Text inversion (NTI). 
To amplify the effect of conditional textual prompts, classifier-free guidance (CFG) [20] is proposed to extrapolate the conditional noise with an unconditional noise prediction. Let \u2205 = \u03c4_\u03b8(\"\") denote the null-text embedding; the CFG prediction is defined as: \u02dc\u03f5_\u03b8(z_t, t, C, \u2205) = w \u00b7 \u03f5_\u03b8(z_t, t, C) + (1 \u2212 w) \u00b7 \u03f5_\u03b8(z_t, t, \u2205) (3) where we set the guidance scale w = 7.5 as is standard for LDM [20, 37, 45]. However, the introduction of CFG complicates the inversion, and the generated image from the found initial noise z_T deviates from the input image. NTI [37] proposes a novel optimization which updates the null-text embedding \u2205_t for each DDIM step t \u2208 [1, T] to approximate the DDIM trajectory {z_t}_{t=0}^{T} according to: min_{\u2205_t} \u2225\u00af z_{t\u22121} \u2212 \u02dc\u03f5_\u03b8(\u00af z_t, t, C, \u2205_t)\u2225_2^2 (4) where {\u00af z_t}_{t=0}^{T} is the backward trace from NTI. This allows editing real images starting from the initial noise \u00af z_T = z_T using the learned null-text \u2205_t in combination with P2P [18]. 3.2. Dynamic Prompt Learning Text-based image editing takes an image I described by an initial prompt P, and aims to modify it according to an altered prompt P* in which the user indicates desired changes. The initial prompt is used to compute the cross-attention maps. As discussed in Sec. 1, cross-attention leakage [53] is a challenge for existing text-based editing methods when facing complex scenarios. DPL [53] introduces three losses to enhance the alignment between attention maps and nouns, which rely on the inherent connection between image and prompt, a connection that is not always reliable in real-world scenarios. In this section, we present our method, denoted as LocInv, which leverages localization priors derived from existing segmentation maps (Segment-Prior) or detection boxes (Detection-Prior). This information can be readily acquired with the assistance of recent advancements in foundation models [27, 65] and has the potential to dramatically strengthen the quality of the cross-attention maps. To simplify, we denote the segmentation map and detection boxes uniformly as S. The cross-attention maps in the Diffusion Model UNet are obtained from \u03f5_\u03b8(z_t, t, C), which is the first component in Eq. 3. They are computed from the deep features of the noisy image \u03c8(z_t), which are projected to a query matrix Q_t = l_Q(\u03c8(z_t)), and the textual embedding, which is projected to a key matrix K = l_K(C). Then the attention map is computed as A_t = softmax(Q_t \u00b7 K^T / \u221ad), where d is the latent dimension, and the cell [A_t]_{ij} defines the weight of the j-th token on the pixel i. We optimize the word embeddings v corresponding to the initial prompt P in such a way that the resulting cross-attention A_t does not suffer from the above-mentioned cross-attention leakage. The initial prompt P contains K noun words and their corresponding learnable tokens at each timestamp V_t = {v_t^1, ..., v_t^k, ..., v_t^K}. Similar to DPL, LocInv updates each specified word in V_t for each step t. The final sentence embedding C_t now varies for each timestamp t and is computed by applying the text encoder on the text embeddings. 3.3. LocInv: Localization-aware Inversion To update the token representations in each timestep, we propose several losses to optimize the embedding vectors V_t: we develop one loss to address the similarity, and another to ensure high overlap, between the cross-attention map and its corresponding location prior S.
Similarity loss. The similarity is defined as the cosine distance between the attention map and the location prior: L_sim = \u2211_{i=1}^{K} [ 1 \u2212 cos(A_t^{v_t^i}, S_t^{v_t^i}) ] (5) Nonetheless, our experiments reveal that solely employing the similarity loss leads to lower Intersection over Union (IoU) curves. Given that attention maps are continuous functions, we have additionally introduced an overlapping loss to gently restrict the cross-attention. Overlapping loss. This loss is defined as the fraction of the attention map located within the localization prior: L_ovl = 1 \u2212 (\u2211_{i=1}^{K} A_t^{v_t^i} \u00b7 S_t^{v_t^i}) / (\u2211_{i=1}^{K} A_t^{v_t^i}) (6) By incorporating both losses, our method effectively aligns the cross-attention maps with the localization priors. We update the learnable token v_t^k according to: argmin_{V_t} L = \u03bb_sim \u00b7 L_sim + \u03bb_ovl \u00b7 L_ovl (7) Gradual Optimization for Token Updates. So far, we introduced the losses to learn new dynamic tokens at each timestamp. However, the cross-attention leakage gradually accumulates in the denoising phase. Hence, we enforce all losses to reach a pre-defined threshold at each timestamp t to avoid overfitting the cross-attention maps [53]. We express the gradual threshold by an exponential function. For the losses proposed above, the corresponding thresholds at time t are defined as TH_t = \u03b2 \u00b7 exp(\u2212t/\u03b1). For each loss we have a group of hyperparameters (\u03b2_sim, \u03b1_sim), (\u03b2_ovl, \u03b1_ovl), (\u03b2_adj, \u03b1_adj). We verify the effectiveness of this mechanism in our ablation experiments. Null-Text embeddings. The above-described token updating ensures that the cross-attention maps are highly related to the noun words in the text prompt and minimizes cross-attention leakage. To reconstruct the original image, we additionally use NTI [37] to learn a set of null embeddings \u2205_t for each timestamp t. Then we have a set of learnable word embeddings V_t and null-text embeddings \u2205_t which can accurately localize the objects and also reconstruct the original image. 3.4. Adjective binding Existing text-guided image editing methods have focused on translating a source object to a target one. However, often users would like to change the appearance of objects. Typically, in text-guided image editing, this would be done by changing the objects' attributes described by adjectives. However, existing methods fail when editing attributes of the source objects (as shown in Fig. 5). We ascribe this case Algorithm 1 (Localization-aware Inversion): 1 Input: a source prompt P, an input image I, the localization prior S, T = 50; 2 Output: a noise vector \u00af z_T, a set of updated tokens {V_t}_{t=1}^{T} and null-text embeddings {\u2205_t}_{t=1}^{T}; 3 {z_t}_{t=0}^{T} \u2190 DDIM-inv(I); 4 set the guidance scale w = 7.5; 5 initialize V_T with the original noun tokens; 6 initialize \u00af z_T = z_T, P_T = P, \u2205_T = \u03c4_\u03b8(\"\"); 7 for t = T, T\u22121, . . .
, 1 do 8 Initialize Pt by Vt, then Ct = \u03c4\u03b8(Pt); 9 Compute Lsim, Lovl, Ladj by Eq.5-8; 10 while Lsim \u2265THsim t or Lovl \u2265THovl t or Ladj \u2265THadj t do 11 L = \u03bbsim \u00b7 Lsim + \u03bbovl \u00b7 Lovl + \u03bbadj \u00b7 Ladj 12 Vt \u2190Vt \u2212\u2207VtL 13 end 14 \u02dc zt = \u02dc \u03f5\u03b8(\u00af zt, t, Ct, \u2205t) 15 \u00af zt\u22121, \u2205t \u2190NTI(\u02dc zt, \u2205t) 16 Initialize \u2205t\u22121 \u2190\u2205t, Vt\u22121 \u2190Vt 17 end 18 Return \u00af zT , {Vt}T 1 , {\u2205t}T 1 to the disagreement in cross-attention between the adjective and its corresponding noun word (as evidenced in Fig. 1). To empower the T2I model with attribute editing capability, we propose to bind the adjective words with their corresponding nouns. To achieve this, we use the Spacy parser [22] to detect the object noun and adjective words, as so called the adjective-noun pairs (vi t, ai t). Given these pairs, the adjective binding loss is defined as the similarity between the attention maps of the adjective and noun words. Ladj = K X i=1 \u0002 1 \u2212cos(Avi t t , Aai t t ) \u0003 (8) This loss ensures maximum overlap between the adjectivenoun pairs and is only applied when Adjective-Edit is demanded, and we simply add \u03bbadj \u00b7 Ladj to Eq. 7. 4. Experiments We demonstrate LocInv in various experiments based on the open-source Stable Diffusion [45] following previous methods [37, 39, 52]. All experiments are done on R6000 GPUs. Datasets. For the quantitative ablation study of hyperparameters and partially for the qualitative editing comparison, we select 315 images as a subset COCO-edit out of MS-COCO dataset [30]. We compose this subset from various searching prompts (including concepts as airplane, apple, banana, bear, bench, etc.) and store the groundtruth segUser Study (%) method LocInv (Seg) DiffEdit MasaCtrl NTI pix2pix PnP Edit quality 40.0 27.0 3.5 25.5 0.75 3.25 Background 25.8 3.7 4.5 20.0 22.7 23.3 Table 1. User study compared with methods freezing the Stable Diffusion [45]. We request the respondents to evaluate methods in both editing quality and background preservations. mentation/detection images for experimental usage. Overall, there are 7 search prompts with a single object (noun) in the sentence and 6 with multiple objects. More detailed information is shown in the supplementary material. Compared methods. We organize two groups of methods for qualitative and quantitative comparison. The first group of methods, which are freezing the Stable Diffusion models, include NTI [37], DPL [53], pix2pix-zero [39], PnP [52], DiffEdit [11] and MasaCtrl [5]. The second group of methods is finetuning the large pretrained T2I model as specific models for image editing, such as SD-inpaint [45], InstructPix2Pix [4] and Imagic [25], or taking masks as locations for inpainting, including SD-inpaint [45] and BLD [2]. Evaluation metrics. To quantitatively assess our method\u2019s performance, we employ well-established metrics, including LPIPS [61], SSIM [55], PSNR, CLIP-Score [19] and DINO-Sim [51], to evaluate the edited full image. Additionally, to illustrate the quality of background preservation, we follow DirectInversion [24] to compute LPIPS, SSIM, PSNR, and MSE metrics for regions outside the mask. 4.1. Ablation study For the ablation study, we experiment on the COCO-edit dataset. To quantitatively assess the localization performance of LocInv, we vary the threshold from 0.0 to 1.0 to obtain the segmentation mask from the cross-attention maps. 
We then calculated the Intersection over Union (IoU) metric using the segmentation ground truth for comparison. Our method can operate with both segmentation maps and detection bounding boxes as localization priors. Here, we consider the hyperparameters for both these cases. In Fig. 3, we conduct ablation studies over the similarity loss and the overlapping loss. From Fig. 3-(c)(g), we observe that only applying one of these losses does not ensure a satisfactory performance. Empirically, we find the optimal hyperparameters for the segmentation and detection prior as \u03bbsim = 1.0, \u03b1sim = 50.0, \u03b2sim = 0.7, \u03bbovl = 1.0, \u03b1ovl = 10.0, \u03b2ovl = 0.7 and \u03bbsim = 0.1, \u03b1sim = 25.0, \u03b2sim = 0.5, \u03bbovl = 1.0, \u03b1ovl = 25.0, \u03b2ovl = 0.3, respectively. For the adjective binding loss, since there are no sufficient image-text pairs for the ablation study, we empirically set the hyperparameters to be \u03bbadj = 2.0, \u03b1adj = 50.0, \u03b2adj = 0.1. All results in this paper are generated with this hyperparameter setting. (a) Segment-Prior: ablate over the similarity loss (b) Segment-Prior: ablate over the overlapping loss (c) Segment-Prior: ablate over the trade-off parameters (d) Segment-Prior: comparison with NTI and DPL (e)\u00a0Detection-Prior: ablate over the similarity loss (f)\u00a0Detection-Prior: ablate over the overlapping loss (g)\u00a0Detection-Prior: ablate over the trade-off parameters (h) Detection-Prior: comparison with NTI and DPL Figure 3. Ablation study over hyperparameters given the Segment-Prior (first row) or Detection-Prior (second row). For the first and second columns, we ablate hyperparameters for the similarity loss and overlapping loss, respectively. Then we illustrate how the trade-off parameters influence in the third column. Lastly, we show the IoU curves of LocInv together with NTI and DPL as baseline comparisons. 
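The hyperparameter values above plug into the thresholded token update described earlier (Eq. 5-7 with the gradual thresholds TH_t = beta * exp(-t / alpha), Algorithm 1 lines 10-13). A minimal sketch is given below; the attn_fn interface, the learning rate, and the iteration cap are illustrative assumptions, not the authors' released implementation, and the defaults follow the Segment-Prior setting reported above.

import math
import torch
import torch.nn.functional as F

def update_dynamic_tokens(attn_fn, prior, v_t, t,
                          lam_sim=1.0, lam_ovl=1.0,
                          alpha_sim=50.0, beta_sim=0.7,
                          alpha_ovl=10.0, beta_ovl=0.7,
                          lr=1e-2, max_iters=20):
    # attn_fn(v_t) -> cross-attention maps for the K noun tokens, shape (K, H*W),
    #                 obtained from a UNet forward pass at timestep t (hypothetical interface)
    # prior        -> localization priors (segmentation masks or boxes), shape (K, H*W)
    th_sim = beta_sim * math.exp(-t / alpha_sim)   # gradual threshold TH_t = beta * exp(-t / alpha)
    th_ovl = beta_ovl * math.exp(-t / alpha_ovl)
    for _ in range(max_iters):
        attn = attn_fn(v_t)
        l_sim = (1.0 - F.cosine_similarity(attn, prior, dim=-1)).sum()   # similarity loss, Eq. 5
        l_ovl = 1.0 - (attn * prior).sum() / attn.sum()                  # overlapping loss, Eq. 6
        if l_sim.item() < th_sim and l_ovl.item() < th_ovl:
            break                                                        # both losses reached their thresholds
        loss = lam_sim * l_sim + lam_ovl * l_ovl                         # combined objective, Eq. 7
        grad = torch.autograd.grad(loss, v_t)[0]
        v_t = (v_t - lr * grad).detach().requires_grad_(True)
    return v_t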
Method Full Image Evaluation Background Preservation CLIP-Score(\u2191) LPIPS(\u2193) SSIM(\u2191) PSNR(\u2191) DINO-Sim(\u2193) MSE(\u2193) LPIPS(\u2193) SSIM(\u2191) PSNR(\u2191) Edited Original Multi-Object Image Editing (6 tasks) Freezing SD DiffEdit [11] 0.2821 0.5910 18.5804 0.0219 0.0066 0.1981 0.6888 22.7726 21.4439 22.05 pix2pix [39] 0.4047 0.5492 19.7397 0.0549 0.0152 0.3347 0.6262 20.9542 21.8384 NTI [37] 0.2936 0.5919 21.7963 0.0433 0.0118 0.2413 0.6644 23.1352 21.7216 DPL [53] 0.2686 0.6121 21.3193 0.0223 0.0071 0.2299 0.6601 22.2695 21.5982 PnP [52] 0.3960 0.5568 18.8198 0.0384 0.0113 0.3331 0.6243 19.8573 21.8470 MasaCtrl [5] 0.4406 0.4671 17.3948 0.0611 0.0198 0.3784 0.5309 18.2041 21.7835 LocInv (Det) 0.2935 0.5956 21.3116 0.0272 0.0065 0.2458 0.6532 22.5126 21.6615 LocInv (Seg) 0.2523 0.6161 22.3027 0.0181 0.0054 0.1970 0.6905 24.3783 21.7757 Finetuning SD Imagic [25] 0.7347 0.2098 9.9586 0.1217 0.0935 0.6166 0.3280 10.7490 21.7566 InstructP2P [4] 0.3330 0.5428 17.4007 0.0274 0.0150 0.2462 0.6407 20.2072 21.6666 Inpaint (Det) [45] 0.3710 0.4853 16.9441 0.1242 0.0398 0.2755 0.6075 21.5161 21.8475 Inpaint (Seg) [45] 0.2703 0.6040 19.2707 0.0299 0.0061 0.1620 0.7233 26.3149 21.8315 BLD (Det) [2] 0.3412 0.5604 17.0294 0.0405 0.0112 0.2424 0.6814 21.0436 21.7218 BLD (Seg) [2] 0.2924 0.6257 18.9036 0.0258 0.0031 0.1845 0.7426 26.5964 21.7806 Single-Object Image Editing (7 tasks) Freezing SD DiffEdit [11] 0.2990 0.5701 17.7486 0.0278 0.0057 0.1873 0.7136 23.4800 21.4608 21.77 pix2pix [39] 0.4398 0.4824 17.2601 0.0645 0.0171 0.3180 0.6332 20.1122 21.8336 NTI [37] 0.2758 0.5864 21.4369 0.0280 0.0092 0.1936 0.7001 24.3014 21.7665 DPL [53] 0.2743 0.5906 21.1188 0.0212 0.0061 0.1791 0.7133 25.1545 21.7944 PnP [52] 0.3983 0.5379 18.0061 0.0338 0.0111 0.2893 0.6688 20.0125 21.7214 MasaCtrl [5] 0.4004 0.4472 17.2875 0.0430 0.0138 0.2879 0.5957 19.4637 22.0493 LocInv (Det) 0.2756 0.5867 21.1920 0.0196 0.0049 0.1810 0.7118 25.1956 21.7308 LocInv (Seg) 0.2662 0.5952 21.2287 0.0180 0.0047 0.1730 0.7193 25.2118 21.8069 Finetuning SD Imagic [25] 0.6657 0.2429 10.8554 0.1418 0.0694 0.5107 0.4313 12.3759 21.8340 InstructP2P [4] 0.3684 0.4925 16.3615 0.0421 0.0177 0.2519 0.6477 19.7768 21.7257 Inpaint (Det) [45] 0.3034 0.5458 17.4352 0.0301 0.0039 0.1570 0.7300 26.4033 21.7324 Inpaint (Seg) [45] 0.3080 0.5380 17.3868 0.0304 0.0038 0.1599 0.7241 26.4464 21.7967 BLD (Det) [2] 0.4112 0.4729 15.3787 0.0861 0.0175 0.2676 0.6649 18.9651 21.6654 BLD (Seg) [2] 0.3423 0.5571 17.4153 0.0455 0.0040 0.1895 0.7478 25.6214 21.6880 Table 2. Comparison with various text-based image editing methods based on the evaluation metrics over the COCO-edit dataset. We evaluate on single-object and multi-object images editing tasks separately. The comparison methods are organized into two groups as we stated in Sec. 4 The \u201cSeg\u201d and \u201cDet\u201d in the bracket represent the Segment-prior and Detection-Prior, respectively. 
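For reference, the IoU curves used in the ablation (Fig. 3) come from sweeping a binarization threshold from 0.0 to 1.0 over the cross-attention map and comparing against the ground-truth mask, as described in Sec. 4.1. A minimal sketch of that evaluation protocol, with hypothetical array names and the attention map assumed to be normalized to [0, 1]:

import numpy as np

def iou_curve(attn_map, gt_mask, thresholds=np.linspace(0.0, 1.0, 21)):
    # attn_map: cross-attention map for one noun token, values in [0, 1], shape (H, W)
    # gt_mask : ground-truth segmentation mask (boolean), shape (H, W)
    ious = []
    for th in thresholds:
        pred = attn_map >= th
        inter = np.logical_and(pred, gt_mask).sum()
        union = np.logical_or(pred, gt_mask).sum()
        ious.append(inter / union if union > 0 else 0.0)
    return thresholds, np.array(ious)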
Ours NTI DiffEdit Imagic SD-inpaint InstructP2P pix2pix-zero Input A ripe <banana> with a red <apple> sitting on a wood table an <apple> a <banana> a cup of orange juice a white <cat> a wooden <bench> and some pumpkins A <cat> sitting on a wooden <bench> outside in a garden area A brown <bear> walking across a forest surrounded by trees a <zebra> standing on some dirt and bushes and trees A little <dog> is smiling yet panting with a <frisbee> at his side there is a <dog> and a <cat> sitting in a basket Caption apple peach apple tomato cat tiger cat cougar bear panda zebra horse dog cat dog cat MasaCtrl PnP BLD Freezing Stable Diffusion Finetuning Stable Diffusion dog wolf a <dog> and a <cat> laying beside each other on a lonely street Figure 4. Comparison over the local object Word-Swap editing given the Segment-Prior. All examples are from the COCO-edit dataset. We distinguish these comparison methods by (1) freezing the SD [45] models; (2) fine-tuning the SD models or mask-based inpainting. a <dog> laying on the grass with a plastic <frisbee> in its mouth a red <apple> and a green <pear> on a table Ours DiffEdit MasaCtrl NTI pix2pix-zero InstructP2P Imagic SD-inpaint BLD PnP Input plastic frisbee wooden frisbee Ours DiffEdit MasaCtrl NTI pix2pix-zero InstructP2P Imagic SD-inpaint BLD PnP Input red apple purple apple Freezing Stable Diffusion Freezing Stable Diffusion Finetuning Stable Diffusion Finetuning Stable Diffusion Figure 5. Attribute-Edit by swapping the adjectives given the Segment-Prior. By forcing the binding between the cross-attention from the adjective words and corresponding noun words, LocInv successfully edits the color or material attribute. 4.2. Image editing evaluation For image editing, we combine LocInv with the P2P [18] image editing method. In this paper, we mainly focus on local editing tasks including Word-Swap and Attribute-Edit. Word-Swap. As shown in Fig. 4, we compare LocInv with various methods by swapping one object from the original image given the segmentation maps as localization priors. Our method, LocInv, more successfully translates the source object into the target object while maintaining semantic similarities. In Table 2, we designed one editing task for each group of images collected in the COCOEdit dataset (details in the supplementary material). In both single-object and multi-object editing tasks, LocInv achieves better full image evaluation and only performs worse than the inpainting-based methods [2, 45] in terms of background preservation (since these methods do not change background pixels). It is worth noting that LocInv does not require fine-tuning the base model, resulting in better time complexity and no forgetting in the T2I models. In Table 1, we question twenty participants to evaluate the image editing performance from two aspects: the editing quality and the background preservation. In both cases, LocInv stands out of these six methods of freezing the SD models. Details on the user study are shown in the supplementary. Attribute-Edit. Furthermore, LocInv excels in another editing aspect that other methods tend to overlook, which is attribute editing. This capability is illustrated in Fig. 5. By force the binding between the adjective words and their corresponding noun objects, we achieve the capacity to accurately modify their attributes (color, material, etc.). 5. 
Conclusion In this paper, we presented Localization-aware Inversion (LocInv) to solve the cross-attention leakage problem in image editing using text-to-image diffusion models. We propose to update the dynamic tokens for each noun word in the prompt with the segmentation or detection as the prior. The resulting cross-attention maps suffer less from crossattention leakage. Consequently, these greatly improved cross-attention maps result in considerably better results for text-guided image editing. The experimental results, confirm that LocInv obtains superior results, especially on complex multi-object scenes. Finally, we show that our method can also bind the adjective words to their corresponding nouns, leading to accurate cross-attention maps for the adjectives, and allowing for attribute editing which has not been well explored before for text-guided image editing. Acknowledgments We acknowledge projects TED2021132513B-I00 and PID2022-143257NB-I00, financed by MCIN/AEI/10.13039/501100011033 and FSE+ by the European Union NextGenerationEU/PRTR, and by ERDF A Way of Making Europa, and the Generalitat de Catalunya CERCA Program. Chuanming Tang acknowledges the Chinese Scholarship Council (CSC) No.202204910331."
16
+ }
intro_8K/test_introduction_short_2405.01502v1.json ADDED
@@ -0,0 +1,18 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01502v1",
3
+ "title": "Analyzing the Role of Semantic Representations in the Era of Large Language Models",
4
+ "abstract": "Traditionally, natural language processing (NLP) models often use a rich set\nof features created by linguistic expertise, such as semantic representations.\nHowever, in the era of large language models (LLMs), more and more tasks are\nturned into generic, end-to-end sequence generation problems. In this paper, we\ninvestigate the question: what is the role of semantic representations in the\nera of LLMs? Specifically, we investigate the effect of Abstract Meaning\nRepresentation (AMR) across five diverse NLP tasks. We propose an AMR-driven\nchain-of-thought prompting method, which we call AMRCoT, and find that it\ngenerally hurts performance more than it helps. To investigate what AMR may\nhave to offer on these tasks, we conduct a series of analysis experiments. We\nfind that it is difficult to predict which input examples AMR may help or hurt\non, but errors tend to arise with multi-word expressions, named entities, and\nin the final inference step where the LLM must connect its reasoning over the\nAMR to its prediction. We recommend focusing on these areas for future work in\nsemantic representations for LLMs. Our code:\nhttps://github.com/causalNLP/amr_llm.",
5
+ "authors": "Zhijing Jin, Yuen Chen, Fernando Gonzalez, Jiarui Liu, Jiayi Zhang, Julian Michael, Bernhard Sch\u00f6lkopf, Mona Diab",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "cs.CL",
9
+ "cats": [
10
+ "cs.CL",
11
+ "cs.AI",
12
+ "cs.LG"
13
+ ],
14
+ "label": "Original Paper",
15
+ "paper_cat": "LLM AND Reasoning",
16
+ "gt": "Formal representations of linguistic structure and meaning have long held an important role in the construction and evaluation of NLP systems. Se- mantic representations such as Abstract Meaning Representation (AMR; Banarescu et al., 2013) are designed to distill the semantic information of text to a graph consisting of the entities, events, and states mentioned in a text and the relations between them. Existing studies have shown the benefits of representations like AMR in a variety of NLP tasks, such as paraphrase detection (Issa et al., 2018), machine translation (Song et al., 2019), event ex- traction (Garg et al., 2015; Huang et al., 2018), \u2217Equal contribution. 1Our code: https://github.com/causalNLP/amr_llm. AMR Are the two sentences paraphrases of each other? Easier computation 388+12 Three hundred eighty-eight plus twelve Four hundred Input Output Better representation 400 Paul of Rochester played against Gary of Philadelphia. Gary of Rochester played against Paul of Philadelphia. 1 2 AMR Difference \"Gary\" is associated with \"Philadelphia\" in AMR1, but with \"Rochester\" in AMR2. [...] Not paraphrases. Input Output Paul Philadelphia :loc 1 2 Gary Play Rochester :ARG0 Gary Philadelphia Paul Play Rochester :loc :loc :loc :ARG1 :ARG1 :ARG0 Traditional Test Setup Practical Setup in the Era of LLMs Text Output AMR Output Text Output AMR Output To test the representation power: Pretrained LLMs Trainable Model Trainable Model In the era of LLMs, which one have better representation power: text or AMR? Pursuit of Representation Power In math: In language: Figure 1: The role of representation power in different fields. Analogous to Arabic numbers for math, AMR is designed to efficiently and explicitly represent the semantic features of text. Existing work using AMR is concerned with trainable models, whereas we investigate the use of AMR in the modern practical setup of pre-trained LLMs. code generation (Yin and Neubig, 2017), and oth- ers (Dohare and Karnick, 2017; Jangra et al., 2022; Wolfson et al., 2020; Kapanipathi et al., 2021). By explicitly representing the propositional structure of sentences, AMR removes much of the informa- tion from text that is irrelevant to semantic tasks, while surfacing the most important information (entities, relations, etc.), rendering them easier to operate on. In theory, this implies that using AMR as an intermediate representation should make it easier for a model to learn to perform such tasks, in the same way that a representation like Arabic numerals aids with arithmetic (see Figure 1). arXiv:2405.01502v1 [cs.CL] 2 May 2024 However, learning to produce and operate over representations like AMR is nontrivial, especially since AMR data is limited. In contrast, mod- ern NLP systems based on large language models (LLMs) learn to directly manipulate text very ef- fectively (Ignat et al., 2024), not only achieving high performance on a variety of tasks without using intermediate formal representations (Brown et al., 2020), but also achieving gains by directly us- ing informal textual intermediates in methods such as chain-of-thought (CoT) prompting (Wei et al., 2022). Due to economic concerns (Zhao et al., 2022; Samsi et al., 2023; Patterson et al., 2021; Sharir et al., 2020), there is a growing trend to uti- lize readily available pre-trained LLMs in various application scenarios, without allocating additional resources for training or fine-tuning the models. 
These trends raise the question: What is the role of semantic representa- tions in the era of LLMs, when no train- ing or finetuning is involved? Motivated by this question, we propose a theoreti- cal formulation of representation power, and what it means to have an ideal representation for text, using ideas from Kolmogorov complexity (Solomonoff, 1964; Kolmogorov, 1965). Our key observation is that making use of even a very strong intermediate representation requires optimizing the model with regard to that representation; however, when us- ing (out of the box) pretrained LLMs, the optimal representation will be the one which the LLM can most effectively use on the basis of its pretraining, which might shift away from the optimal represen- tation for a learnable mapping to the output space. In short, the a priori ideal representation for a task is not necessarily the ideal representation for an LLM to use. Given this, we empirically study how good AMR is as an intermediate representation for LLMs. Specif- ically, we answer the following three questions: (1) Does AMR help LLM performance? (2) When does AMR help, and when doesn\u2019t it? (3) Why does it help or not help in these cases? On a diverse set of five NLP tasks, our experiments show that the contribution of AMR in the era of LLMs is not as great as that in the traditional setup where we can optimize the model for the represen- tation. AMR causes a slight fluctuation of perfor- mance by -3 to +1 percentage points. However, we find that AMR is helpful for a subset of samples. We also find that the next step for using AMR to improve performance is likely not improving AMR parser performance, but improving the LLM\u2019s abil- ity to map AMR representations into the output space. In summary, the contributions of our work are as follows: 1. We are the first to investigate how semantic representations such as AMR can be leveraged to help LLM performance in the practical sit- uation where no training is involved; 2. We propose a formalization of representa- tion power for intermediate representations of language, and comprehensive experimen- tal studies investigating whether, when, and why AMR can help when performing seman- tic tasks with LLMs; 3. We present thorough analysis experiments and results reflecting on the contribution of tradi- tional linguistic structures such as semantic representations in the current wave of LLMs, and point out potential areas for improvement.",
17
+ "main_content": "In this section, we propose a framework to formulate representation power in both the pre-LLM era, where we do not outsource the training of the models, and the LLM era, where a lot of practical settings are to optimize the representation with regard to given fixed LLMs. 2.1 Notation Suppose we have a dataset D := {(xi, yi)}N i=1 consisting of N pairs of input xi and corresponding output yi. Given the task to learn the x \ufffd\u2192y mapping, we can consider it as a two-stage modeling process: the first step is to convert the raw input x into a good representation r by the representation model g : x \ufffd\u2192r, and the second step is to perform the computation that takes the representation r and predicts the output y by a computation model h : r \ufffd\u2192y. In this way, we decompose the resulting overall x \ufffd\u2192y modeling process into p(y|x) = pg(r|x)ph(y|r) , (1) where the overall probabilistic model p(y|x) is turned into first the representation step pg(r|x) to generate the representation r given the input x, and then the computation step ph(y|r) to perform operations on the intermediate representation r to derive the output y. 2.2 Problem Formulation Let us draw some intuition from the math example in Figure 1. To represent numbers, the first representation choice r1 is the English expression, such as \u201cThree hundred eighty-eight plus twelve,\u201d and the second, intuitively stronger representation r2 is the same calculation in Arabic numbers, \u201c388+12\u201d. For our research question of whether AMR demonstrates stronger representation power than raw text, we formulate the question as follows: \u2022 Representation choices: R = {text, amr} (which is an instance of text and its semantic representation, a common question of interest in linguistics); \u2022 Representation power: some properties of the function h : r 7\u2192y. The next question becomes theorizing what properties of the computation model h : r 7\u2192y we are optimizing for, for which we will introduce two formulations, one in the pre-LLM era and the other in the LLM era. 2.3 Ideal Representations in the Pre-LLM Era Continuing on with the Arabic number example: Our claim is that the representation r2 (i.e., the Arabic number) is better than r1 (i.e., the English expression) because the computation for h2 : r2 7\u2192y is simpler than h1 : r1 7\u2192y, as measured by Kolmogorov complexity, or algorithmic entropy (Solomonoff, 1964; Kolmogorov, 1965), which is a theoretical construct of the complexity of an algorithm in bits.2 Intuitively, the shortest program specifying an algorithm to take English expressions like \u201cThree hundred eighty-eight plus twelve\u201d as input and produce \u201cFour hundred\u201d as output should be longer than for the one taking \u201c388+12\u201d as input and \u201c400\u201d as output, since the former requires more complicated string manipulation to achieve the same effect. We also use this notion of Kolmogorov complexity to quantify the power of representations for language. The intuition is that powerful representations are those that significantly simplify the complexity of the computation model h. Hence, the 2Formally, Kolmogorov complexity is the length of the shortest program which produces a string. optimal representation function g\u2217\u2208G from the set G of possible functions should satisfy g\u2217= argmin g\u2208G min h\u2208H K(h) , (2) where h : g(x) 7\u2192y , (3) and the optimal representation r\u2217is r\u2217= g\u2217(x) . 
(4) Here, note that given each representation function, we optimize over all possible computation models h from the hypothesis space H to achieve the minimal Kolmogorov complexity K(h). If the representation enables the computation model h to have a low Kolmogorov complexity, it usually results in several good properties, such as that learning h has smaller generalization risks, requires fewer data samples, has smaller empirical risks, and results in more robustness, as introduced in Jin et al. (2021). Various studies explore the theoretical foundations for the above claims by connecting Kolmogorov complexity with statistical learning theory. For example, Vapnik (1995, \u00a74.6.1) shows that an upper bound of Kolmogorov complexity, called \u201ccompression coefficient,\u201d can bound the generalization error in classification problems; Shalev-Shwartz and Ben-David (2014, Eq. 3) and Goldblum et al. (2023) show that generalization error is upper bounded by training error plus a term involving the Kolmogorov complexity of the hypothesis space. This is, we argue, the implicit framework behind many previous studies showing AMR as a better representation than the raw text sequence by demonstrating its better performance (Turian et al., 2010), data efficiency (Liu et al., 2021), and robustness and domain transferability (Li et al., 2016; Jin et al., 2021). A crucial element of these studies is that they train models customized explicitly for the AMR representation, optimizing h over the hypothesis space H. 2.4 Representation Power in the LLM Era As mentioned previously, in the era of LLMs, we are moving towards the paradigm where the model training is usually outsourced, and during the inference stage, i.e., for most use cases, the model weights are fixed. Formally, this means two differences from the previous setting: (1) the hypothesis space H is collapsed to a size of one, containing only the fixed function hLLM, (2) the optimization constraint in Eq. (3) that h can map the representation to the ground truth y is not necessarily guaranteed, namely that h could lead to \u02c6 y, with certain estimation error. Therefore, the key measure of representation power in the LLM era naturally shifts from simplicity of the computation model h\u2014which aids optimization towards low estimation error\u2014to low estimation error itself, i.e., E[\u03b4(\u02c6 y, y)], where \u03b4 is the error function. This change results in a shift from the double optimization over both r and h to the optimization only of r with regard to the fixed hLLM: g\u2217 LLM = argmin g\u2208G E[\u03b4(\u02c6 y, y)] , (5) = argmin g\u2208G E[\u03b4(hLLM(g(x)), y)] , (6) where the optimal representation r\u2217 LLM becomes r\u2217 LLM = g\u2217 LLM(x) . (7) This framework can also be used to explain the success, for example, of CoT prompting (Wei et al., 2022; Nye et al., 2021) in terms of how the intermediate representation generated by CoT better unlocks the power of LLMs. Comparing Eqs. (2) to (4) with Eqs. (5) to (7), we can see that the ideal best representation r\u2217is not necessarily equal to the representation r\u2217 LLM that works well with LLMs, so there remains a need for experiments to fill in this knowledge gap. It is also worth noting that for any learned representation function g, errors in pg(r|x) relative to pg\u2217(r|x) may cascade into the computation step p(y|r), harming the final output. We investigate this concern in Section 6.1. 
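A minimal sketch of what Eq. (5)-(7) amount to in practice: with a frozen LLM h_LLM, the preferred representation is simply the one whose outputs incur the lowest average error on labelled data. The callables below (the representation functions, llm, error) are placeholders for illustration, not a specific implementation.

def pick_representation(representations, llm, error, data):
    # representations: dict mapping a name (e.g. "text", "amr") to a function g: x -> r
    # llm            : frozen model h_LLM mapping a representation r to a prediction y_hat
    # error          : delta(y_hat, y), e.g. 0/1 loss for classification or 1 - BLEU
    # data           : list of (x, y) pairs
    risks = {name: sum(error(llm(g(x)), y) for x, y in data) / len(data)
             for name, g in representations.items()}
    best = min(risks, key=risks.get)   # g*_LLM in Eq. (5)-(6)
    return best, risks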
3 Designing the AMRCOT Experiments We introduce an AMR-driven prompting method which we call AMRCOT, and investigate its performance on five datasets with five LLMs. 3.1 Dataset Setup We test AMRCOT on paraphrase detection (Zhang et al., 2019), machine translation (Bojar et al., 2016), logical fallacy detection (Jin et al., 2022a), event extraction (Garg et al., 2015), and text-toSQL generation (Yu et al., 2018). We select these tasks as they hinge on complex sentence structures and most of them are reported to have benefited Dataset Task Test Size PAWS Paraphrase Detection 8,000 WMT16 Translation 5,999 Logic Logical Fallacy Detection 2,449 Pubmed45 Event Extraction 5,000 SPIDER Text2SQL Code Generation 8,034 Table 1: Tasks and datasets used. from AMR in the pre-LLM era (Issa et al., 2018; Song et al., 2019; Garg et al., 2015; Yin and Neubig, 2017). For each dataset, we first take the entire original test set, and if it has fewer than 5,000 examples, we also include the development or training set. Data statistics are in Table 1 and details on test set construction are in Appendix A.1. 3.2 AMRCOT Prompt Design To test the utility of AMR with LLMs, we draw inspiration from the CoT prompt design (Wei et al., 2022; Nye et al., 2021), together with CoT variants on causal (Jin et al., 2023) and moral reasoning tasks (Jin et al., 2022b), which enables models to answer an initially difficult question with the help of assistive intermediate steps to render the task easier. We propose AMRCOT, in which we supplement the input text with an automatically-generated AMR and condition the LLM on the input text and AMR when generating the answer. If AMR has a stronger representation power than the raw text, then providing AMR as an assistive intermediate step should improve the performance of LLMs. We compare AMRCOT to directly querying the LLMs, denoted BASE. An example prompt pair is shown in Table 2, and all prompts for all datasets BASE Please translate the following text from English to German. Text: {sentence1} Translation: AMRCOT You are given a text and its abstract meaning representation (AMR). Text: {sentence1} AMR: {amr1} Please translate the text from English to German. You can refer to the provided AMR if it helps you in creating the translation. Translation: Table 2: Example BASE and AMRCOT prompt (for the translation task). We serialize AMRs with the commonly used Penman notation (Patten, 1993). are in Appendix A.3. 3.3 Language Models Since our experiments require models that can reasonably understand and reason over the symbols in AMRs, we find that only the instruction-tuned GPT models, from text-davinci-001 to GPT-4 are capable of processing it, but not the open-sourced models such as LLaMa and Alpaca, at the time we conducted our research. For reproducibility, we set the text generation temperature to 0 for all models, and we use the model checkpoints from June 13, 2023 for GPT-3.5 and GPT-4, namely gpt-3.5turbo-0613 and gpt-4-0613. 3.4 Addressing Research Questions 4 Q1: Does AMR Help LLMs? First, we are interested in the utility of AMR as an intermediate representation for LLMs. Specifically, we answer the following subquestions: what is the overall effect of AMR as a representation on LLMs\u2019 performance (Section 4.1)? Does the effect vary case by case (Section 4.2)? And how does the effect change with using various LLMs with different levels of capabilities (Section 4.3)? 4.1 Overall Effect of AMR We first evaluate the overall effect of AMR as a representation to assist LLMs. 
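Before turning to results, a small sketch of how the two prompts of Table 2 (Sec. 3.2) could be assembled for the translation task. The AMR string is assumed to be produced offline by a parser and serialized in Penman notation; this mirrors the templates shown above rather than any released prompting code.

def base_prompt(sentence):
    return ("Please translate the following text from English to German.\n"
            f"Text: {sentence}\n"
            "Translation:")

def amrcot_prompt(sentence, amr):
    # amr: Penman-serialized AMR of `sentence`, e.g. from an off-the-shelf AMR parser
    return ("You are given a text and its abstract meaning representation (AMR).\n"
            f"Text: {sentence}\n"
            f"AMR: {amr}\n"
            "Please translate the text from English to German. "
            "You can refer to the provided AMR if it helps you in creating the translation.\n"
            "Translation:")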
Following the setup in Section 3, Table 3 shows performance on our five NLP tasks. Comparing AMRCOT to the BASE method which directly queries LLMs, AMR does not have an overall positive impact on performance. The performance fluctuates between a slight drop (-1 to -3 in most tasks) and a slight increase (+0.61 in the case of Text-to-SQL code generation). Dataset Task BASE \u2206AMRCOT PAWS Paraphrase Detection 78.25 -3.04 WMT Translation 27.52 -0.83 Logic Fallacy Detection 55.61 -0.49 Pubmed45 Event Extraction 39.65 -3.87 SPIDER Text2SQL 43.78 +0.61 Table 3: Across the five tasks, we report the baseline performance (BASE), and the additional impact of AMRCOT (\u2206AMRCOT), using GPT-4. See statistical significance tests in Appendix D.1. 4.2 Helpfulness of AMR in Some Cases Using AMR hardly changes overall performance, but this could be either because it does not change model predictions or because it helps in roughly as many cases as it hurts. To explore which is Dataset % Helped % Hurt % Unchanged PAWS 16.48 20.16 63.36 WMT 16.45 21.17 62.38 Logic 1.96 2.45 95.59 Pubmed45 4.84 11.66 83.5 SPIDER 4.94 4.33 90.72 Table 4: Percentage of test samples that are helped (% Helped), hurt (% Hurt), or unchanged (% Unchanged) when we change from BASE to AMRCOT using GPT-4. the case, we calculate the percentage of examples which are helped and hurt by AMRCOT, shown in Table 4. We count a sample as helped by AMR if its prediction improves (i.e., the output changes from incorrect to correct in classification tasks, or its score increases in text generation tasks), and hurt by AMR if its prediction degrades; the rest of the examples are considered unchanged. As shown in Table 4, AMR can change a significant proportion of answers, with 36.64% changed on PAWS, and 37.62% changed on WMT. On its face, the lack of overall improvement from AMR supports the current concern in the NLP community that traditional linguistics might have little role to play in improving the performance of NLP systems in the era of LLMs (Ignat et al., 2024). However, as there is a substantial subset of the data where AMR helps, if these improvements come from certain systematically identifiable subsets of the data, then this could provide clues for how structures such as AMR may potentially be leveraged to improve overall performance. We investigate this question further in Sections 5 and 6. 4.3 AMR\u2019s Effect on Models with Different Capabilities Figure 2 shows the results of our experiments on models of varying capability, from text-davinci001, -002, -003, to GPT-3.5 and GPT-4. Overall, AMRCOT hurts performance for most tasks and models, again with Text-to-SQL being the exception, at least for text-davinci-003 and GPT-4. In some cases, less capable models degrade more when using AMR, which might be due to their limited ability to comprehend AMR and reason over its special symbols. This is consistent with our preliminary observations that none of the non\u2013 instruction-tuned earlier GPT models, or the less capable models such as LLaMa and Alpaca, comprehend AMR or reason over them well. 
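The per-example bookkeeping behind Table 4 reduces to comparing scores under the two prompting conditions. A minimal sketch, assuming per-example scores are already available (0/1 correctness for classification tasks, BLEU or exact match for generation tasks):

def helped_hurt_unchanged(base_scores, amrcot_scores):
    helped = sum(a > b for b, a in zip(base_scores, amrcot_scores))
    hurt = sum(a < b for b, a in zip(base_scores, amrcot_scores))
    n = len(base_scores)
    unchanged = n - helped - hurt
    return {"% Helped": 100 * helped / n,
            "% Hurt": 100 * hurt / n,
            "% Unchanged": 100 * unchanged / n}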
text-d.-001 text-d.-002 text-d.-003 gpt-3.5 gpt-4 0 20 40 60 80 F1 PAWS text-d.-001 text-d.-002 text-d.-003 gpt-3.5 gpt-4 0 10 20 30 40 BLEU WMT text-d.-001 text-d.-002 text-d.-003 gpt-3.5 gpt-4 0 20 40 60 F1 Logic text-d.-001 text-d.-002 text-d.-003 gpt-3.5 gpt-4 0 20 40 60 F1 Pubmed45 text-d.-001 text-d.-002 text-d.-003 gpt-3.5 gpt-4 0 20 40 60 Exact Match SPIDER Figure 2: Performance of BASE (in purple) and AMRCOT (in red) on 5 datasets across 5 model versions: text-davinci-001|-002|003, GPT-3.5 and GPT-4. 5 Q2: When Does AMR Help/Hurt? The previous section shows that AMR is helpful or harmful for different samples. Now we investigate the conditions under which it helps or harms performance, in particular whether this can be predicted from features of the input text. We first illustrate a case study in Section 5.1, where AMR\u2019s lack of ability to capture the semantic equivalence of multi-word expressions (MWEs) hinders paraphrase detection. Then, we perform two systematic interpretability studies: First, we treat linguistic features as our hypotheses, and extract features with high correlation with AMR helpfulness (Section 5.2); second, we directly train classifiers to learn AMR helpfulness (Section 5.3). 5.1 Case Study: AMR\u2019s Shortcomings on MWEs AMR has its unique advantages and limitations, from which we can interpret what cases it can help, and what cases not. One such limitation of AMR is its lack of ability to capture MWEs such as idiomatic expressions, which makes it overlook certain semantic equivalences for paraphrase detection. Consider the example in Figure 3. Here, the proper paraphrase for the MWE swan song is not \u201cbird song,\u201d but \u201cfinal performance.\u201d However, the AMRs for the three sentences do not reflect this; the AMR for the \u201cswan song\u201d sentence is structurally and lexically more similar to the \u201cbird song\u201d AMR than the one for the \u201cfinal performance\u201d variant. Given this intuition, we qualitatively study whether AMR systematically fails on texts that contain more MWEs. We run AMRCOT on a selfcomposed dataset of paraphrase detection involving slang, assuming slang has more MWEs. Since our experiments need annotations for both slang paraphrase pairs and AMRs, we compose two datasets, GoldSlang-ComposedAMR, Original Sentence with MWE Her swan song disappointed her fans. (z0 / disappoint-01 :ARG0 (z1 / song :mod (z2 / swan) :poss (z3 / she)) :ARG1 (z4 / fan :poss z3)) Paraphrase Candidate 1 Paraphrase Candidate 2 (\u2717Not a paraphrase.) (\u2713A paraphrase.) Her bird song disappointed her fans. Her final performance disappointed her fans. (z0 / disappoint-01 :ARG0 (z1 / song :mod (z2 / bird) :poss (z3 / she)) :ARG1 (z4 / fan :poss z3)) (z0 / disappoint-01 :ARG0 (z1 / perform-02 :ARG0 (z2 / she) :mod (z3 / final)) :ARG1 (z4 / fan :poss z2)) Figure 3: An example showing the failure of AMR for paraphrase detection when the original sentence involves a MWE. This example is from our GoldSlang-ComposedAMR dataset. and GoldAMR-ComposedSlang. For GoldSlangComposedAMR, we use the curated slang paraphrase pairs by Tayyar Madabushi et al. (2021) and generate their AMRs with an off-the-shelf parser (Drozdov et al., 2022). For GoldAMRComposedSlang, we use gold AMRs from the LDC AMR 3.0 corpus (Banarescu et al., 2013), and compose slang paraphrases using a combination of manual annotation and assistance from GPT-4. The data curation steps and data statistics are in Appendix B.1. 
Table 5 shows evaluation results, where AMRCOT causes a large drop in performance compared to BASE, more substantial than the slight fluctuation of -3 to +1 percentage points shown previously in Dataset BASE \u2206AMRCOT GoldSlang-ComposedAMR 86.83 -6.63 GoldAMR-ComposedSlang 77.69 -8.78 Table 5: AMRCOT results in a large drop in performance on slang-comprising paraphrase detection data. Top 5 Positive Features Pearson Correlation Adj POS Tag Frequency 0.0393 Avg. Word Complexity 0.0343 # Adjuncts 0.0337 Max Word Complexity 0.0316 Avg. Word Frequency 0.0271 Table 6: The top five features with the highest positive correlation coefficients to AMR helpfulness: the frequency of adjectives among all the words (Adj POS Tag Frequency), average word complexity level by the age of acquisition (Kuperman et al., 2012), number of adjuncts, maximum word complexity level by the age of acquisition, and average word frequency. Table 3. It is very likely that, due to the shortcomings of AMR on MWEs, AMRCOT mostly distracts the model, yielding worse performance. 5.2 Large-Scale Text Feature Analysis The case study above provides a precise insight into a special case when AMR does not work. To systematically explore a larger set of hypotheses, we perform a feature analysis over the input texts. We formulate the contribution of AMR as the AMR helpfulness score, which is the per-example performance difference between AMRCOT and BASE, ranging between -100% and 100%, where a negative value means that AMR hurts performance on the example, and a positive value means that AMR improves performance. For each input, we compute a comprehensive set of linguistic features, including 139 features on the text representation, and 4 features derived from the AMR. Specifically, we obtain 55 features using the Text Characterization Toolkit (TCT) (Simig et al., 2022), which is specifically designed to facilitate the analysis of text dataset properties, 17 different part-of-speech (POS) tags, 44 dependency tags, and 61 other hand-crafted features, which characterize the semantic and syntactic complexity of the input text, such as the number of arguments vs. adjuncts (Haspelmath, 2014). Tables 6 and 7 show the Pearson correlation between each linguistic feature and the AMR helpfulness score. Overall, the correlation of each individual feature to the AMR helpfulness score is not strong, either because these features do not explain much about AMR helpfulness, or because it requires a combination of multiple features. Though the correlations are weak, the top correlated features in Table 6 align with our intuition that AMR should be helpful for semantically complex senTop 5 Negative Features Pearson Correlation # Named Entities -0.0630 % of Tokens Containing Digits -0.0281 # Proper Nouns -0.0258 # Third Person Singular Pronouns -0.0236 # Quantifier Phrase Modifiers -0.0222 Table 7: The top five features with the highest negative correlation coefficients to AMR helpfulness: the number of named entities, percentage of tokens containing digits, number of proper nouns (e.g., London), number of third person singular pronouns (e.g., he), and number of quantifier phrase modifiers. See detailed explanations of features in Appendix C. tences: AMR is most helpful for samples with more adjectives, complex words, and adjuncts. 
In Table 7, the top negative feature, the number of named entities, echoes the finding in our previous MWE case study in Section 5.1, and we systematically show that AMR is most harmful on samples with many named entities, tokens containing digits, and proper nouns. 5.3 AMR Helpfulness Prediction as a Learning Task Now we analyze the upper-bound predictability of AMR helpfulness from the input, both on the basis of our linguistic features and text input itself. Specifically, we train models to predict AMR helpfulness as a binary classification task where the positive class is the case where AMR helps, and the negative class is the rest. Merging all five datasets together, we have a binary classification dataset of 19,405 training samples, 4,267 development samples, and 5,766 test samples, with positive labels composing 10.38% of the dataset. As shown in Table 8, classifiers based on linguistic features achieve an F1 score of up to 32.67%. BERT-based deep learning models improve by up to 1.16 F1 scores, with substantial increases in recall. For interpretability, we run Shapley feature attribution method (Fryer et al., 2021) and find that Model F1 Acc P R Random Baseline 16.14 49.95 9.65 49.16 Using Linguistic Features Random Forest 32.67 81.93 25.72 44.75 XGBoost 30.08 78.47 22.06 47.27 Ensemble 30.42 77.59 21.85 50.00 Using the Free-Form Text Input BERT 33.83 79.70 25.00 52.28 RoBERTa 33.29 80.36 25.11 49.38 Table 8: Classification performance of various models on AMR helpfulness. We report the F1, precision (P), and recall (R) of the positive class, as well as the accuracy (Acc). See the implementation details of the models in Appendix A.5. words that signal the existence of clauses tend to have high importance for the classifier, such as \u201cwhat,\u201d \u201chow,\u201d \u201csaid,\u201d and \u201csays.\u201d These results do not provide a clear explanation of when AMR can help, but give a starting point, and we welcome future research to continue exploring the potential benefits of AMR. The fact that AMR helpfulness is challenging to predict even for BERT models may indicate either that we need more data to learn the features that predict this, or that a substantial portion of the changes that AMR makes to model predictions correspond to noise (i.e., help or hurt in unpredictable ways). 6 Q3: Why Does AMR Help/Hurt? To understand why AMR helps or hurts when it does, we look into the following subquestions: (1) how does parser-generated AMR work compared with gold AMR (Section 6.1)? (2) what is the representation power of AMR versus text when the other is ablated (Section 6.2)? And (3) how does AMR help in each step of the reasoning process (Section 6.3)? 6.1 Gold vs. Parser-Generated AMR First, we investigate whether there are cascading errors before the CoT process, due to mistakes in the parser-generated AMR. For example, the reported performance of Drozdov et al. (2022) is 83% on AMR 3.0 (Banarescu et al., 2013). To assess this, we compare AMRCOT performance when using predicted versus gold AMRs. Testing this requires data with gold AMR annotations as well as gold labels for some downstream NLP task we can evaluate the models on. To this end, we take the intersection of the AMR 3.0 dataset (Banarescu et al., 2013) with Ontonotes 5.0 (Pradhan et al., 2011), which contains 131 sentences that have both gold AMR and named entity recognition (NER) annotations. We list the intuition of why AMR can be helpful for NER in Appendix B.2. 
Using this AMR-NER dataset, we compare the performance of AMRCOT with gold AMR versus Dataset BASE AMR \u2206AMRCOT AMR-NER 60.51 Gold +0.03 Parser +1.91 Table 9: Model performance on the AMR-NER data using the gold AMR (Gold) and parser-generated AMR (Parser). We report the BASE performance, and the change of performance by AMRCOT (\u2206AMRCOT) in terms of F1 scores. parser-generated AMR on NER, shown in Table 9. Both lead to similar results, with a difference of less than two percentage points (which is not statistically significant, with p = 0.627 by t-test). The test set is unfortunately too small to reliably detect an effect of reasonable size, due to the lack of available data with both gold AMR and NLP task annotations; this result is also specific to NER, which may not have all of the relevant features for understanding the effect of gold versus automatically produced AMRs. However, the fact that the observed effect size is very small constitutes some evidence that improving the predicted AMRs would likely not play a huge role in increasing downstream performance with current models. 6.2 Ablating the AMR/Text Representation As discussed in Section 2, AMR and text representations are two different surface forms for expressing sentence semantics, but one representation may be more useful to the LLM than the other. To test this, we conduct an ablation study removing either the original text or the AMR and measuring performance (see Appendix A.6). To avoid the potential for cascading errors from the parsing process, we use the AMR-NER dataset with the gold AMR. 0 20 40 60 80 100 30 40 50 60 70 AMR/Text Tokens (%) Ablated in the Prompt Task Performance (%) % AMR Ablated % Text Ablated Figure 4: Ablation studies of AMR and text representations in the prompt on the AMR-NER dataset using GPT-4. Starting from the AMRCOT prompt with the complete text and AMR, we randomly drop out a certain portion of tokens in the text/AMR, and see the effect on the task performance. Results In addition to previous results contrasting AMRCOT, which provides both the text and AMR in the prompt, and BASE with the text-only input, we show the results of a more granular analysis in Figure 4, where we randomly drop out text and AMR tokens and measure the effect on task performance. Similar to the above, we find that dropping AMR marginally decreases performance, and dropping text much more drastically degrades LLM performance, showing the greater utility of text as a representation for LLMs. We also conduct the same ablation study on 1,000 random samples from the WMT dataset using predicted AMRs in Appendix D.3, where the observations are similar. 6.3 Checking the Step-By-Step Reasoning To better understand how LLMs use AMR, we directly examine the step-by-step reasoning process produced by AMRCOT with GPT-4. We randomly select 50 samples from the PAWS dataset and manually annotate the correctness of each step in the reasoning process. For paraphrasing on PAWS, the steps (and our evaluations) are as follows: 1. Produce the AMR for the input sentences using Drozdov et al. (2022)\u2019s structured BART model. Instead of manually annotating correctness of these AMRs, we defer to their reported performance of 82.6 SMATCH scores on the AMR 3.0 dataset. 2. Provide the AMRs to GPT-4 in the paraphrasing task prompt using AMRCOT, and then instruct it to list all the commonalities and differences of the AMRs. Our manual check finds that GPT-4 achieves a 97% F1 score (with 95% precision, 98% recall) at listing these. 3. 
GPT-4 then outputs a final decision on whether the sentences are paraphrases. We evaluate that its judgment in this step is consistent with the reasoning in the prior step 80% of the time. Even though GPT-4 was able to correctly enumerate the relevant features of the AMRs, it still had trouble synthesizing this information into a correct paraphrasing judgment. These mistakes as well as the potential for cascading errors may explain why AMRCOT achieves a performance of 75.21% on PAWS, which is a slight drop from the BASE performance of 78.25%. Overall, this provides further evidence of the advantages that free-form text itself has as a representation for LLMs to operate on. 7 Related Work Semantic Representations Traditionally, NLP models often represent text by features developed on the basis of linguistic expertise, among which semantic representations such as AMR (Banarescu et al., 2013) are used to abstract away the surface form of the text and distill the most important elements of its meaning. In the past, such representations have helped with a variety of NLP tasks, such as semantic parsing (Kuhn and De Mori, 1995), machine translation (Wu and Fung, 2009; Wong and Mooney, 2006), and text summarization (Liu et al., 2018). Recent research also looks into whether LLMs already incorporate a good understanding of semantic representations (e.g., Stali\u00af unait\u02d9 e and Iacobacci, 2020; Blevins et al., 2023). Chain-of-Thought Prompting Rapid advancement in LLMs has led to a new paradigm of performing NLP tasks by eliciting model behavior via instructions and examples using prompting (Brown et al., 2020; Raffel et al., 2020). Chain-of-thought (CoT) prompting (Wei et al., 2022; Nye et al., 2021), which pairs input examples with step-bystep explanations of how to produce their respective outputs, has been shown to improve LLMs\u2019 performance at various reasoning tasks (Yu et al., 2023), and its variants have also shown success in various scenarios, such as CAUSALCOT for causal reasoning (Jin et al., 2023), and MORALCOT for moral reasoning (Jin et al., 2022b). Our work proposes a way to bridge linguistic representations with text by AMRCOT, providing an AMR as an intermediate representation for the LLM to reason over. Our results are mixed, demonstrating the relative advantage that unstructured, free-text representations have for language models pretrained on large amounts of natural language data. 8 Conclusion In this work, we analyze the role of semantic representations in the era of LLMs. In response to the ongoing paradigm shift in the NLP community, we show that AMR in general is not yet a representation immediately fit for pre-trained LLMs. However, our study shows that AMR still helps on some samples. We also suggest that a potential direction to enhance AMR\u2019s contribution to LLMs is to improve the understanding of LLMs over the schemes and symbols of AMR, and map it to the reasoning of the respective NLP task. This work presents an effort to bridge the traditionally rich linguistic structures with the strength of LLMs. Limitations and Future Work This work explores one form of linguistic representation of text. In the future, we welcome more exploration on various other linguistic representations using the methodology presented in this work. Moreover, we explore one intuitive way of prompting the model. Future work is welcome to explore different ways of prompting to make the AMR information more accessible and useful to the model. 
In addition, some of our analyses are limited by a lack of annotated resources, so we were only able to show experimental results on hundreds of examples in some cases where gold AMR annotation is needed. This is a commonly known issue for AMR, which is expensive and requires a high level of linguistic expertise to annotate. This limitation makes the results less statistically significant than what we could have if there are more annotated AMRs available. In this work, we hope to strike a balance to still show some meaningful trends while trying to get the largest size of annotated data we can. Moreover, if any future work has the resources to train an LLM specifically optimized for AMR as a representation, this would be the ideal setting to check out the upper bound of the power of AMR in the era of LLMs. As for the limitations for specific parts of the paper, for example for the notion of gold AMRs in Section 6, although we use the AMR annotated by humans in the official Banarescu et al. (2013) dataset, it should be noted that such AMRs are not necessarily \u201cperfect\u201d, as humans might also have a non-perfect inter-annotator agreement over some AMRs. And while SMATCH scores can be predictive, they may not perfectly reflect the quality of parser-generated AMRs (Opitz and Frank, 2022). These are both open research questions, and we use the AMRs released by the official source (Banarescu et al., 2013) as a proxy for ground-truth AMR. Ethical Considerations The datasets used in this paper are existing public datasets on general NLP tasks without any usersensitive information. We are not aware of specific ethical concerns with the analysis in this study, which is a neutral investigation to understand the role of traditional linguistic structures such as semantic representations in the era of LLMs. Acknowledgments We thank Juri Opitz for his insightful suggestions on our AMR experiments based on profound domain expertise. We also appreciate Wendong Liang for insightful discussions on Komolgorov complexity, which is a foundation of the theoretical framework in this work. We thank Nils Heil for extracting the SQL schemes of the SPIDER dataset so that we can incorporate them in the prompt to improve our performance. This material is based in part upon works supported by the German Federal Ministry of Education and Research (BMBF): T\u00fcbingen AI Center, FKZ: 01IS18039B; and by the Machine Learning Cluster of Excellence, EXC number 2064/1 \u2013 Project number 390727645; Zhijing Jin is supported by PhD fellowships from the Future of Life Institute and Open Philanthropy. Author Contributions Mona Diab initiated the project idea based on her strong expertise in traditional linguistics, and an intuition that the semantic representations should help model efficiency, robustness, and interpretability. During the course of exploration by Zhijing Jin and Mona Diab together for over a year, they find that the AMR representations does not always help LLMs over multiple experimental setups and model implementations. Zhijing further explores the theoretical formulation of representation power to provide the explanations behind the observed performance, together with the expertise of Bernhard Schoelkopf in causal representation learning. Julian Michael provided valuable insights and overview of the field of semantic representations, which brings the depth of the project to another level. 
Julian also provided constructive suggestions for improving the experiments and structuring the paper, and substantially improved the writing. Yuen Chen and Fernando Gonzalez contributed substantially to scaling up all the experiments across multiple datasets and multiple model versions, and analyzing the results. Jiarui Liu and Jiayi Zhang helped with the training the BERT-based classifiers, and analyzing the Shapley values. Jiarui Liu conducted several important experiments for the camera-ready version of the paper, especially on checking the ceiling performance of AMRCOT with various prompt improvements and data setups."
18
+ }
intro_8K/test_introduction_short_2405.01580v1.json ADDED
@@ -0,0 +1,17 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01580v1",
3
+ "title": "On the Limitations of Embedding Based Methods for Measuring Functional Correctness for Code Generation",
4
+ "abstract": "The task of code generation from natural language (NL2Code) has become\nextremely popular, especially with the advent of Large Language Models (LLMs).\nHowever, efforts to quantify and track this progress have suffered due to a\nlack of reliable metrics for functional correctness. While popular benchmarks\nlike HumanEval have test cases to enable reliable evaluation of correctness, it\nis time-consuming and requires human effort to collect test cases. As an\nalternative several reference-based evaluation metrics have been proposed, with\nembedding-based metrics like CodeBERTScore being touted as having a high\ncorrelation with human preferences and functional correctness. In our work, we\nanalyze the ability of embedding-based metrics like CodeBERTScore to measure\nfunctional correctness and other helpful constructs like editing effort by\nanalyzing outputs of ten models over two popular code generation benchmarks.\nOur results show that while they have a weak correlation with functional\ncorrectness (0.16), they are strongly correlated (0.72) with editing effort.",
5
+ "authors": "Atharva Naik",
6
+ "published": "2024-04-26",
7
+ "updated": "2024-04-26",
8
+ "primary_cat": "cs.SE",
9
+ "cats": [
10
+ "cs.SE",
11
+ "cs.AI"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "LLM Fairness",
15
+ "gt": "Code generation sometimes also called program synthesis is the task of generating a program or code from a natural language (NL) intent or speci- fication (NL2Code) (Sun et al., 2024). Code gen- eration has a lot of applications for both indus- try and academia due to the potential to increase the productivity of programmers (Sobania et al., 2022; Barke et al., 2023), connections with pars- ing (Shin et al., 2019; Sun et al., 2019; Ben-Nun et al., 2018), machine reasoning (Gao et al., 2023), planning (Singh et al., 2023), mathematical capa- bilities (Shao et al., 2024), and creating structured representations for NLP tasks (Li et al., 2023b). Modeling approaches for code generation can be grouped into three categories (Sun et al., 2024): neural models of code, pre-trained mod- els (CodePTMs), and Large Language Models for code (CodeLLMs). Neural methods included re- current and convolutional networks possibly incor- porating abstract syntax tree (AST) structure (Mou et al., 2016; Zhang et al., 2019a). CodePTMs like CodeBERT (Feng et al., 2020), CodeT5 (Wang et al., 2021b), and PLBART (Ahmad et al., 2021) were pre-trained with code-oriented self-supervised training objectives followed by task-specific fine- tuning. Some CodePTMs also incorporate struc- tures like AST, dataflow, and program dependency graphs (PDG) (Guo et al., 2020, 2022; Wang et al., 2021a). Among CodeLLMs, the earliest models were the Codex (Chen et al., 2021) and CodeGen (Nijkamp et al., 2022) series. They were followed by BigCode project models like StarCoder (Li et al., 2023c) and SantaCoder (Allal et al.) that adopted infilling ((Fried et al., 2022)) in their train- ing. Subsequently, CodeT5+ (Wang et al., 2023a) and CodeLLaMA (Roziere et al., 2023) became the first CodeLLMs to have an encoder-decoder architecture and to be trained from an existing model (LLaMA-2 (Touvron et al., 2023)) instead of from scratch respectively. The DeepSeekCoder (Guo et al., 2024) model series added a focus on repository-level and cross-file code completion and the Lemur (Xu et al., 2023) series also based on LLaMA-2 added a focus on agents with robust cod- ing abilities and grounding in their environment. Evaluation metrics in this space usually tar- get the construct of functional correctness and can be categorized broadly into reference-based, reference-free, hybrid, and human evaluation. Reference-based methods like BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), METEOR (Banerjee and Lavie, 2005) and chrF (Popovi\u00b4 c, 2015) utilize n-gram matching techniques. Met- rics like CodeBLEU (Ren et al., 2020), RUBY (Tran et al., 2019), and CrystalBLEU (Eghbali and Pradel, 2023) capture code-specific properties by arXiv:2405.01580v1 [cs.SE] 26 Apr 2024 adding dataflow, PDG, AST information or filter- ing out frequent n-grams. However, prior work has shown that metrics like CodeBLEU and BLEU scores are not reliable indicators of functional correctness or human preferences (Chen et al., 2021; Evtikhiev et al., 2023). Additionally, func- tionally non-equivalent programs can have higher BLEU scores than functionally equivalent ones (Chen et al., 2021). Embedding-based methods like BERTScore (Zhang et al., 2019b), COMET (Rei et al., 2020), and CodeBERTScore (Zhou et al.) try to capture more semantics. Reference-free methods include executing the generated code on test cases (Chen et al., 2021), using LLMs with evaluation instructions like ICE-Score (Zhou et al., 2023) and round trip evaluation (Allamanis et al., 2024). 
An example of a hybrid method is CodeScore (Dong et al., 2023), which trains a language model to capture execution semantics and can utilize both the NL context and the reference code. Finally, despite higher costs, human evaluation approaches like RealHumanEval (Mozannar et al., 2024) can capture more human-centered notions of productivity, such as the time to complete coding tasks, beyond functional correctness. Despite the vast array of available approaches, we choose to study embedding-based methods like CodeBERTScore, since among reference-based metrics they are the best at capturing semantic information, human preferences, and functional correctness (Zhou et al.), while being faster than execution-based evaluation and not requiring test cases or test-case training data like CodeScore. They also require fewer computational resources, such as GPUs, compared to LLM-based evaluation approaches like ICE-Score. However, claims about the ability of embedding-based metrics to capture semantics and functional correctness warrant further investigation, since the underlying models (like CodeBERT) are known to fail in capturing semantic equivalence of code (Troshin and Chirkova, 2022; Naik et al., 2022). In this work, we audit the ability of embedding-based metrics like CodeBERTScore to capture functional correctness for ten models over two popular code generation benchmarks. We also analyze their effectiveness in measuring other, more human-aligned constructs like editing effort (Dibia et al., 2023). Our results show that embedding-based metrics indeed fail to capture functional correctness as measured by execution success, but can potentially function as metrics for editing effort. Our results motivate the importance of exploring more reference-free evaluation methods like CodeScore for code generation.",
16
+ "main_content": "Before we begin our analysis it is important to formally define the task of code generation. Context Space: The context space X for code generation is the natural language instruction (NL) which the set of all strings can represent. X = \u03a3\u2217 Decision Space: The decision space Y for code X\u2217 Decision Space: The decision space Y for code generation is the space of all programs (PL) which can also be represented by the set of all strings, i.e. Y = \u03a3\u2217A more specific or constrained space can be defined for specific programming languages guided by the syntax and grammar (Ben-Nun et al., 2018; Sun et al., 2019; Shin et al., 2019). Let\u2019s say the grammar G, with non-terminals N, a start symbol S, the set of terminals \u03a3, a finite set of production rules P or G = (N, \u03a3, P, S). We can define the decision space Y as the set of all possible programs that can be produced with the grammar or Y = {w|S \u21d2\u2217 P w} However, most state-of-theart approaches like LLMs use the unconstrained decision space Y = \u03a3\u2217. Construct: The main construct of interest is functional correctness, i.e. whether the generated code captures the NL intent accurately. It\u2019s important to measure because it can be challenging for humans to manually validate the correctness of generated code by eyeballing it which can lead to subtle bugs when using AI programming assistants (Bar-Zik, 2023). Since NL is an ambiguous and incomplete specification, practical efforts to measure functional correctness involve creating perfect specifications in the form of test cases on which generated code can be executed to verify the correctness, or reference programs that can be used as proxies to verify correctness. Test case evaluation is more likely to be comprehensive (even though test suites can be limited (Liu et al., 2024)) but is more expensive due to the human effort required to collect test cases, the time taken to execute code, and the need for sandboxing and environment setup to execute code (Yang et al., 2024). Execution-based functional correctness is often measured with test cases using the pass@k (Chen et al., 2021) metric which is defined as: pass@k := Eproblems \ufffd 1 \u2212 \ufffdn\u2212c k \ufffd \ufffdn k \ufffd \u2212 k \ufffdn k \ufffd \ufffd Here k is the number of programs sampled from the model, while c is the number of correct programs that pass all test cases. A high pass rate means a high probability that the generated code is functionally correct. Another interesting construct is the editing effort (or syntactic similarity) highlighted by (Dibia et al., 2023). They show that functional correctness can underestimate productivity gains as outputs/system decisions that fail test cases can still be useful if the overall effort for coding can be reduced. They propose EDIT-SIM or the normalized edit similarity defined below as a proxy for effort: EDIT-SIM = 1 \u2212 lev(gen, ref) max(len(gen), len(ref)) where gen is the code generated or system decision, ref is a reference solution to the problem, and lev is the character Levenshtein edit distance (Levenshtein, 1966). A high EDIT \u2212SIM is more desirable and means low editing effort. Similar to functional correctness, distinguishability (d) (Eghbali and Pradel, 2023) measures the ability of a metric to detect the similarity of semantically similar code over semantically different code. 
It can be mathematically formulated as: d = m(Pairsintra) m(Pairsinter) Where m is the metric, where Pairsintra represents the intra-class similarity between pairs of semantically similar code (belonging to an equivalence class or partition based on functional properties) while Pairsinter represents the similarity between pairs of code belonging to different equivalence classes. A high distinguishability is ideal as it means the metric can cluster semantically similar codes together. However (Zhou et al.) shows that since distinguishability uses absolute metric values it can be easily gamed by simply exponentiating metric values making it a bad proxy or alternative for functional correctness. Additionally, they speculate that any meta-metric or construct of correctness that compares exact scores is likely to be gameable compared to ranking-based approaches. Hence we don\u2019t use it for our analysis in this study. Metric of study: The CodeBERTScore metric proposed by (Zhou et al.) as a code-specific variant of BERTScore (Zhang et al., 2019b) utilizes embeddings of a pre-trained CodeBERT model (Feng et al., 2020) to compute the similarity of a reference code (y\u2217) with a candidate (\u02c6 y). They compute all pairs of token similarity between the candidate and reference while masking out certain tokens (according to masks m\u2217and \u02c6 m) like punctuation to compute the precision (P) (matching candidate tokens \u02c6 yj to most similar reference tokens y\u2217 i ) (Eq. 1) and recall (R) (Eq. 2) (matching reference tokens to most similar candidate tokens). The F1 (Eq. 3) and F3 (Eq. 4) scores are computed using the precision and recall scores with the F3 score weighing the recall more heavily. CodeBERTScoreP = 1 |\u02c6 y[ \u02c6 m]| X \u02c6 yj\u2208\u02c6 y[ \u02c6 m] max y\u2217 i \u2208y\u2217[m\u2217] sim(y\u2217 i , \u02c6 yj) (1) CodeBERTScoreR = 1 |y\u2217[m]| X y\u2217 i \u2208y\u2217[m\u2217] max \u02c6 yj\u2208\u02c6 y[ \u02c6 m] sim(y\u2217 i , \u02c6 yj) (2) CodeBERTScoreF1 = 2 \u00b7 CodeBERTScoreP \u00b7 CodeBERTScoreR CodeBERTScoreP + CodeBERTScoreR (3) CodeBERTScoreF3 = 10 \u00b7 CodeBERTScoreP \u00b7 CodeBERTScoreR 9 \u00b7 CodeBERTScoreP + CodeBERTScoreR (4) 3 Related Work 3.1 Datasets Due to the popularity of code generation as a task, many datasets exist out there. Early datasets like CoNaLa (Yin et al., 2018) and its multilingual version mCoNaLa (Wang et al., 2023b) targeted the task of generating small code snippets. This was followed by the development of some of the most popular function-generation benchmarks like APPS (Hendrycks et al., 2021), MBPP (Austin et al., 2021), and HumanEval (Chen et al., 2021) with problems that span basic \u201cclosed-domain\u201d algorithmic challenges in Python with test cases to support evaluation. HumanEval+ is an extended version of the HumanEval dataset which has better test case coverage (Liu et al., 2024), while HumanEval-X (Zheng et al., 2023) and MultiPLE (Cassano et al., 2022) (extends HumanEval and MBPP) contain additional programming languages other than Python. Code generation datasets for more complex contexts like Jupyter notebooks (ARCADE (Yin et al., 2023), ExeDS (Huang et al., 2022) and JuICe (Agashe et al., 2019)) and classes (CoderEval (Yu et al., 2023) and ClassEval (Du et al., 2023)) have been proposed to move beyond function level code generation. Other datasets aim for specialized domains like data-science DS-1000 (Lai et al., 2023) or open domain code generation (Wang et al., 2022). 
Some recent datasets have also focused on interactive and multi-turn coding datasets like InterCode (Yang et al., 2024) and CodeClarQA (Li et al., 2023a) (clarification question generation). Finally, some of the latest code generation datasets target repository level and code agent tasks like SWEBench (Jimenez et al., 2023) RepoEval (Zhang et al., 2023), CodeAgentBench (Zhang et al., 2024), RepoBench (Liu et al., 2023) and Stack-Repo (Shrivastava et al., 2023). 3.2 Metrics For this study, we mainly focus on reference-based evaluation metrics, as the embedding-based methods belong to that category. Each metric is described below: Exact match: EM := Eproblems [1cand=ref] Here the 1c\u2208T is an indicator variable which is 1 when the candidate is the same as the reference. This essentially plays the role of a lower limit on the performance as in every instance where the exact match is 1 all other metrics will also attain their highest value. BLEU score: BLEU = BP \u00d7 exp N X n=1 wn \u00b7 log \u0012 Cclip,n Ctotal,n \u0013! Where: BP is the Brevity Penalty N is the maximum n-gram order considered (typically N = 4) wn is the weight assigned to the n-gram precision Cclip,n is the clipped count of n-grams in the generated code Ctotal,n is the total count of n-grams in the generated code. The Brevity Penalty (BP) is defined as: BP = \uf8f1 \uf8f2 \uf8f3 1 if lengen > lenref e (1\u2212lenref lengen ) if lengen \u2264lenref CrystalBLEU Score: CrystalBLEU = BP\u00d7exp N X n=1 wn \u00b7 log \u0012 Cclip,Sk,n Ctotal,Sk,n \u0013! The Crystal BLEU metric is defined almost identically to BLEU but with one major difference in the clipped n-gram precision computation. The modified precision now also takes in Sk or the top-k most frequent n-grams (k = 50 for our study) and removes them from the n-gram counts from both the numerator and denominator in the precision. CodeBLEU: CodeBLEU = \u03b1\u00b7BLEU+\u03b2\u00b7BLEUweight+\u03b3\u00b7Matchast+\u03b4\u00b7Matchdf The CodeBELU score adds a weighted BLEU score, syntax match score, and semantic dataflow match score to the BLEU score. Here BLEUweight is the weighted n-gram BELU score: BLEUweight = BP \u00b7 exp N X n=1 wn log pn ! Here pn is the weighted n-gram precision which is defined as: pn = P C\u2208Candidates Pl i=1 \u00b5i n \u00b7 Countclip(C(i, i + n)) P C\u2032\u2208Candidates Pl i=1 \u00b5i n \u00b7 Countclip(C\u2032(i, i + n)) Where: C(i, i + n) is the n-gram from the position i to the position i + n Countclip(C(i, i + n)) is the maximum number of n-grams co-occurring in a candidate code and a set of reference codes. \u00b5i n denotes the weights of different keywords or n-grams. The weight of the keywords is usually set to 5 times the weight of other tokens. BP is the brevity penalty as defined for BLEU. The syntactic match score Matchast is defined below: Matchast = Countclip(Tcand)/Count(Tref) Where: Count(Tref) is the total number of the reference subtrees. Countclip(Tcand) is the number of the candidate subtrees that are matched to the reference. Finally, the data match score Matchdf is defined below: Matchdf = Countclip(DFcand)/Count(DFref) Where: Count(DFdf) is the total number of the reference data-flows. Countclip(DFcand) is the number of matched candidate data-flows. chrF: CHRF = max \u0012 1 \u2212 2 \u00b7 unigram_errors reference_length + hypothsis_length, 0 \u0013 \u00b7 \u0012 1 \u2212abs(r \u2212h) r + h \u0013\u03b2 BERTScore: The BERTScore metric is computed similarly to CodeBERTScore and we use the F1 measure (Eq. 
3) throughout the study. 4 Methods To measure the capacity of code generation metrics to capture functional correctness we use the datasets and systems described below: 4.1 Datasets We choose the HumanEval (Chen et al., 2021) and MBBP (Austin et al., 2021) benchmarks because of their popularity in the research community as standard benchmarks, relatively simpler coding tasks that don\u2019t involve external libraries, and the availability of test cases for directly measuring functional correctness. The HumanEval dataset has 164 examples and an average of 7.7 test cases. The MBPP (sanitized) test set has 257 examples with 3 test case on average per instance. 4.2 Systems To analyze the effectiveness of the evaluation metrics we pick some of the most popular opensource language models (StarCoder2 (Lozhkov et al., 2024), CodeLLaMA (Roziere et al., 2023), WizardCoder (Luo et al., 2023), DeepSeekCoder (Guo et al., 2024), Magicoder (Wei et al., 2023), CodeQwen (Bai et al., 2023) CodeGemma (Team et al., 2024), LLaMA-3 (AI@Meta, 2024)) in the 6.7B-15B parameter range as well as some popular closed source models (GPT-3.5-Turbo (Ouyang et al., 2022) and GPT-4 (Achiam et al., 2023)). The results obtained by the models on all the metrics and the constructs of interest (functional correctness and editing effort) for HumanEval and MBPP are shown in Table 1 and Table 2 respectively. The CodeBERTScore metrics like CodeBERTScore-P etc. are abbreviated as CB-P and so on. 5 Metric Properties We analyze and compare various properties of CodeBERTScore with respect to other metrics. 5.1 Boundedness The CodeBERTScore metric theoretically spans values from \u22121 to 1 similar to BERTScore (Zhang et al.), but empirically it only spans from 0 to 1 empirically. The bounds for all metrics are shown in Table 3. 5.2 Score Distribution We also analyze the score distribution of all the metrics in Figure 1 and summarize the centrality and shape measures in Table 4. The score distributions show that the embedding-based metrics BERTScore and CodeBERTScore overly reward models while the other metrics, especially exact match overly penalize the models. The centrality measures like mean, median, and mid-hinge also point towards that. The Kurtosis (excess) measures for all metrics except chrF are Leptokurtic, meaning they have fatter tails than a normal distribution. 
The chrF metric exhibits the least skewness and excess Kurtosis while the exact match metric is most skewed with a fat tail, as most of the values Model Metrics Constructs EM chrF BLEU CodeBLEU CrystalBLEU BERTScore CB-P CB-R CB-F1 CB-F3 Functional Correctness Editing Effort StarChat2-15B 0.000 0.238 0.111 0.188 0.030 0.570 0.479 0.525 0.500 0.519 0.366 0.191 CodeLLaMA-13B-Hf 0.000 0.156 0.032 0.063 0.008 0.835 0.743 0.673 0.705 0.679 0.018 0.221 WizardCoder-13B 0.000 0.191 0.058 0.105 0.029 0.858 0.783 0.700 0.737 0.707 0.024 0.255 Magicoder-S-DS-6.7B 0.030 0.398 0.251 0.248 0.070 0.897 0.816 0.803 0.809 0.804 0.543 0.440 CodeGemma-7B-It 0.018 0.381 0.213 0.287 0.054 0.893 0.774 0.832 0.799 0.824 0.500 0.369 DeepSeekCoder-6.7B-Instruct 0.030 0.427 0.260 0.267 0.084 0.906 0.838 0.829 0.832 0.829 0.598 0.459 LLaMA-3-8B-Instruct 0.012 0.368 0.199 0.269 0.053 0.890 0.785 0.821 0.800 0.816 0.488 0.377 CodeQwen1.5-7B-Chat 0.043 0.433 0.267 0.311 0.101 0.904 0.820 0.846 0.830 0.843 0.652 0.459 GPT-3.5-Turbo 0.024 0.433 0.266 0.284 0.080 0.904 0.840 0.836 0.838 0.837 0.567 0.456 GPT-4 0.055 0.482 0.325 0.323 0.107 0.914 0.861 0.855 0.857 0.855 0.665 0.499 Table 1: Results for all models over all metrics and constructs for the HumanEval dataset. The CodeQwen and DeepSeekCoder seem to have the best performance over the embedding-based metrics and n-gram-based metrics respectively and the functional correctness and editing effort constructs respectively. Among the close source models, GPT-4 is better on all metrics and constructs and better overall among all models. Model Metrics Constructs EM chrF BLEU CodeBLEU CrystalBLEU BERTScore CB-P CB-R CB-F1 CB-F3 Functional Correctness Editing Effort StarChat2-15B 0.000 0.353 0.151 0.250 0.016 0.893 0.798 0.820 0.807 0.817 0.533 0.353 CodeLLaMA-13B-Hf 0.000 0.201 0.032 0.044 0.000 0.868 0.806 0.712 0.755 0.720 0.027 0.239 WizardCoder-13B 0.000 0.207 0.036 0.043 0.000 0.861 0.802 0.709 0.752 0.717 0.012 0.237 Magicoder-S-DS-6.7B 0.000 0.360 0.160 0.224 0.018 0.898 0.838 0.815 0.825 0.817 0.529 0.383 CodeGemma-7B-It 0.000 0.299 0.085 0.241 0.009 0.861 0.718 0.785 0.749 0.778 0.494 0.229 DeepSeekCoder-6.7B-Instruct 0.000 0.317 0.128 0.180 0.008 0.889 0.814 0.792 0.802 0.794 0.366 0.352 LLaMA-3-8B-Instruct 0.000 0.349 0.140 0.221 0.015 0.887 0.810 0.810 0.809 0.809 0.451 0.357 CodeQwen1.5-7B-Chat 0.004 0.487 0.317 0.306 0.066 0.910 0.854 0.863 0.857 0.862 0.619 0.474 GPT-3.5-Turbo 0.000 0.445 0.246 0.249 0.025 0.904 0.845 0.845 0.844 0.845 0.786 0.426 GPT-4 0.004 0.469 0.301 0.267 0.051 0.911 0.873 0.854 0.862 0.856 0.802 0.458 Table 2: Results for all models over all metrics and constructs for the MBPP dataset. The CodeQwen model has the best performance over all metrics among the open-source models and the constructs. Among the close source models, GPT-4 is better on all metrics and best for most metrics overall (except the BLEU metrics, chrF and CodeBERTScore-F3). Metric Theoretical Bounds Empirical Bounds Max Min Max Min Exact Match 0 1 0 1 chrF 0 1 0 1 BLEU 0 1 0 1 CodeBLEU 0 1 0 1 CrystalBLEU 0 1 0 1 BERTScore-F1 -1 1 0 1 CodeBERTScore-P -1 1 0 1 CodeBERTScore-R -1 1 0 1 CodeBERTScore-F1 -1 1 0 1 CodeBERTScore-F3 -1 1 0 1 Table 3: The theoretical bounds and empirically attained bounds for all metrics are zero (high peak) and then there is a fait tail of all the non-zero values. 5.3 Number of ties We compute the percentage of ties across a given context for all pairs of model comparisons and show the results in Table 5. 
For continuous metrics that can attain any real values between zero to one on an instance level, we count it as a tie if the metric values differ by a small value \u03f5 = 10\u22126. The exact match metric has the most ties because of sparsity and being 0 for most cases while, BERTScore and CodeBERTScore metrics have fewest ties. 5.4 Convergent and Discriminant Validity We analyze the Kendall Tau (\u03c4) correlation between all the metrics (EM, chrF, BLEU, CodeBLEU, CrystalBLEU, and BERTScore-F1) and all the CodeBERTScore metrics (P, R, F1, F3) and visualize the results via a heatmap in Figure 2. All the correlations are statistically significant with very low p-values. We observe that except for exact match, CodeBERTScore exhibits moderate correlation with all metrics. 5.5 Discriminative Power We plot the discriminative power of all the metrics compared using the student t-test with the hypothesis that the mean of the underlying distribution of the first model is less than the mean of the distribution of the second model with Bonferroni correction (Armstrong, 2014) applied to the acceptance threshold \u03b1 = 0.05 n (n is the number of hypotheses) to account for multiple hypotheses. Since there are n = 2 \u00d7 \u000010 2 \u0001 = 90 comparisons, the acceptance threshold \u03b1 = 0.05 90 = 0.00055. We plot the achieved significance level (Sakai, 2014) for each metric (sorted p-values on the y-axis and run comparison pairs on the x-axis) in Figure 3. However, since the acceptance threshold is very low, to make the plots easier to visualize we also show a clipped version where all the p-values are clipped if they are above 0.008 in Figure 4. We note that CodeBERTScore-F1 has the most discriminative power being able to achieve statistical significance for 58/90 pairs, and the exact match being the worst, not being able to distinguish any pairs. Additionally, the code-specific BLEU scores (CodeBLEU, CrystalBLEU) are worse than regular BLEU which is comparable to BERTScore and chrF. 6 Empirical Analysis 6.1 Hypotheses Since the primary focus of the work is to test the reliability of CodeBERTScore as a measure of the primary construct of functional correctness and the secondary measure of editing effort, we can formulate the following hypotheses: H1. CodeBERTScore is moderate to strongly correlated with execution results (pass@1). H2. CodeBERTScore is robust to surface-level perturbation, i.e. it exhibits strong auto-correlation under function-name and variable input perturbations (test of criterion validity with invariant transformations) H3. CodeBERTScore is moderate to strongly correlated with editing effort (EDIT-SIM) 6.2 Hypothesis Testing To test H1 we measure the correlation between the pass@1 metric and CodeBERTScore-F1 using point-biserial correlation rbp which is suitable for measuring the correlation between a binary or dichotomous variable like whether an instance passes all test cases and a continuous variable like CodeBERTScore. We use F1 for all subsequent hypotheses as it is the most sensitive metric according to the discriminative power and combines both precision and recall without any specific bias to either. In addition to this correlation, we also compute the point-biserial correlation between editing effort (EDIT-SIM) and pass@1 and BERTScore and pass@1 to see if the two constructs are correlated and compare the reliability of BERTScore and CodeBERTScore respectively. 
We find an rbp of 0.162 with p-value of 3.28 \u00d7 1o\u221226 between CodeBERTScore and pass@1 and an rbp of 0.186 between EDIT-SIM and pass@1 with a p-value of 5.83 \u00d7 10\u221234 and an rbp of 0.103 with a p-value of 2.11 \u00d7 10\u221211 for BERTScore. To test H2 we measure the auto-correlation between the results obtained with CodeBERTScore (and additionally BERTScore) before and after semantics preserving input perturbations, using Kendall Tau (\u03c4) and Spearman (\u03c1) correlations. The perturbations are described below: 1. Var: Cand only The variable names in the candidate are replaced with generic variable names (e.g. if sum is the third variable in the code, then it will be replaced with var2). We expect this transformation to lower the metric Figure 1: The distribution of metric values. The x-axis shows the range of values while the y-axis shows the number of model decisions with values in that range. The distributions clearly show that the embedding metrics tend to over-reward decisions while the other metrics tend to over-penalize them. The chrF metric has the least skew value 2. Var: Ref only The variable names in the reference are replaced with generic variable names. We expect this transformation to lower the metric value 3. Var: Cand and Ref The variable names in both reference and candidate are replaced with generic variable names. We expect this transformation to increase the metric value 4. Func: same The function names of both candidate and reference functions are replaced with the same name (candidate_function). We expect this transformation to increase the metric value 5. Func: different The function names of both candidate and reference functions are replaced with different names (candidate_function and reference_function respectively). We expect this transformation to decrease the metric value We show the correlations and p-values in Table 6. Note that all the p-values are 0 and all autocorrelation values are in the strong range, even though the metric values themselves are affected slightly by the perturbations (all the trends of increasing/decreasing values are as expected). To test H3 we compute the Kendall Tau (\u03c4) and Spearman (\u03c1) correlations between EDIT-SIM and CodeBERTScore getting \u03c4 = 0.72 and \u03c1 = 0.89 with p-values of 0 in both cases. 7 Discussion The results for H1 show that while there is a statistically significant correlation between functional correctness pass@1 and CodeBERTScore, it is rather weak and the same is the case for BERTScore. It is less correlated with it than the construct of editing effort EDIT-SIM, so we can discard H1. The results for H2 show that while the input perturMetric Median Midhinge Mean Std. Dev. Skewness Kurtosis Exact Match 0 0 0.009 0.093 10.526 108.793 chrF 0.323 0.330 0.350 0.205 0.813 0.645 BLEU 0.108 0.132 0.175 0.212 1.611 2.642 CodeBLEU 0.186 0.189 0.215 0.177 1.314 2.302 CrystalBLEU 0 0 0.037 0.117 4.772 27.645 BERTScore-F1 0.892 0.889 0.876 0.118 -6.035 42.250 CodeBERTScore-P 0.817 0.815 0.800 0.135 -3.072 15.600 CodeBERTScore-R 0.806 0.804 0.789 0.137 -2.829 13.819 CodeBERTScore-F1 0.804 0.804 0.792 0.131 -3.187 17.127 CodeBERTScore-F3 0.805 0.803 0.790 0.135 -2.937 14.796 Table 4: The centrality and shape measures for all the metrics. The metrics highlighted in red are heavily skewed towards the left (i.e. they over-penalize the models) while the ones highlighted in red are heavily skewed towards the right (i.e. they over-reward the models). 
Additionally almost all of the metrics are Leptokurtic, i.e. they have fatter tails than a normal distribution, especially Exact Match and BERT-Score. Metric Rate of Ties Exact Match 98.8 chrF 3.73 BLEU 16.84 CodeBLEU 7.7 CrystalBLEU 47.77 BERTScore-F1 3.17 CodeBERTScore-P 3.17 CodeBERTScore-R 3.17 CodeBERTScore-F1 3.18 CodeBERTScore-F3 3.17 Table 5: Percentage of ties for each metric. For realvalued metrics, we count it as a tie if the metric values differ by a very small value \u03f5 = 10\u22126. The exact match metric has the most ties because of sparsity and being 0 for most cases while BERTScore and CodeBERTScore metrics have the fewest ties. Figure 2: Correlations of the CodeBERTScore metrics with the other metrics. Except for exact match, most of the metrics have a moderate correlation with CodeBERTScore bations can affect the metric values in expected ways, the metrics remain strongly and statistically significantly auto-correlated so we can accept H2. The results for H3 show a strong and statistically significant correlation so we can accept it. From these results, we can conclude that despite Figure 3: The achieved significance level of the metric values. x-axis shows pairs of runs while y-axis shows the p-values. The dotted line shows the acceptance threshold claims made by (Zhou et al.), CodeBERTScore (or BERTScore) is not a reliable proxy for functional correctness. However, it is a relatively robust metric with the highest sensitivity (discriminative power) among reference-based metrics and could be used as a measure of syntactic similarity or editing effort for a reference/goal output. In fact, the results indicate that embedding-based metrics might be able to capture the syntax level structure of code (suggested by the robustness to input perturbations) but fail to penetrate to the level of code semantic equivalence as supported by previous studies that probe code models (Troshin and Chirkova, 2022; Naik et al., 2022). Future Work: While this study extensively explores many state-of-the-art LLMs over two of the most popular benchmarks, it would be interesting to explore the results over more datasets like ODEX (Wang et al., 2022) that cover more challenging and open-domain code generation but still support test case evaluation to further validate the findings. It would also be interesting to explore these results for models trained on the evaluation benchmarks for the MBPP dataset and benchmark the performance of iterative/multi-step code generators like code agents (Huang et al., 2024; Zhong et al., 2024; Hong et al., 2023; Zhou et al., 2023). Finally, it would be interesting to explore the effectiveness of metrics that learn execution behavior like CodeScore (Dong et al., 2023) and explore ways to improve embedding-based metrics by learning execution behavior. Online Experiments: While static benchmarks like HumanEval (Chen et al., 2021) were useful at the time of their introduction for evaluating the functional correctness of LLM-generated code, they fail to completely align with the primary use case of LLMs as programming assistants and their effect on constructs like programmer productivity. As a result, benchmarking efforts like RealHumanEval (Mozannar et al., 2024) use an online web interface, with an editor and chat window to measure the ability of LLMs to Figure 4: The achieved significance level of the metric values. x-axis shows pairs of runs while y-axis shows the p-values. The dotted line shows the acceptance threshold. 
All p-values above 0.008 are clipped to zoom into the portion close to the acceptance threshold. assist programmers through auto-complete or chat support, while using proxy behavioral signals like code acceptance and copy rates. Experiments by (Mozannar et al., 2024) show that while increased performance on static benchmarks leads to increased productivity, the gaps between benchmark and human performance are not always proportional. They also found that programmer preferences do not correlate with actual LLM performance, motivating more human-centered evaluation. Important sub-populations for such experiments include novice programmers and experts. Experts are likely to reject more code or take longer until they accept code, as they might be stricter about their requirements and other aspects of code quality like clarity and efficiency. Copy rates could be higher for either population: while experts might be stricter about accepting output, they might be more likely than novice users to simply copy code and prefer to edit it themselves. Figuring out these differences for certain would require further experimentation."
17
+ }
intro_8K/test_introduction_short_2405.01591v1.json ADDED
@@ -0,0 +1,18 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01591v1",
3
+ "title": "Simplifying Multimodality: Unimodal Approach to Multimodal Challenges in Radiology with General-Domain Large Language Model",
4
+ "abstract": "Recent advancements in Large Multimodal Models (LMMs) have attracted interest\nin their generalization capability with only a few samples in the prompt. This\nprogress is particularly relevant to the medical domain, where the quality and\nsensitivity of data pose unique challenges for model training and application.\nHowever, the dependency on high-quality data for effective in-context learning\nraises questions about the feasibility of these models when encountering with\nthe inevitable variations and errors inherent in real-world medical data. In\nthis paper, we introduce MID-M, a novel framework that leverages the in-context\nlearning capabilities of a general-domain Large Language Model (LLM) to process\nmultimodal data via image descriptions. MID-M achieves a comparable or superior\nperformance to task-specific fine-tuned LMMs and other general-domain ones,\nwithout the extensive domain-specific training or pre-training on multimodal\ndata, with significantly fewer parameters. This highlights the potential of\nleveraging general-domain LLMs for domain-specific tasks and offers a\nsustainable and cost-effective alternative to traditional LMM developments.\nMoreover, the robustness of MID-M against data quality issues demonstrates its\npractical utility in real-world medical domain applications.",
5
+ "authors": "Seonhee Cho, Choonghan Kim, Jiho Lee, Chetan Chilkunda, Sujin Choi, Joo Heung Yoon",
6
+ "published": "2024-04-29",
7
+ "updated": "2024-04-29",
8
+ "primary_cat": "cs.CL",
9
+ "cats": [
10
+ "cs.CL",
11
+ "cs.AI",
12
+ "eess.IV"
13
+ ],
14
+ "label": "Original Paper",
15
+ "paper_cat": "LLM Fairness",
16
+ "gt": "Recently, large multimodal models (LMMs) have made significant progress, becoming capa- ble of interpreting and integrating information across different data types, such as text and images (Achiam et al., 2023; Driess et al., 2023). This progress closely aligns to advance- ments in Large Language Models (LLMs), since many LMMs are built upon the structural foundation of pre-trained LLMs (Touvron et al., 2023; Chiang et al., 2023). Consequently, attempts to use LMM\u2019s in-context learning capability without fine-tuning on specific tasks are increasing (Wei et al., 2022; Zhang et al., 2023b). This tendency also appears in the medical domain, where the data collection is challenging due to the concerns on using sen- sitive clinical information containing re-identifiable patient data (Benitez and Malin, 2010) and the expensive labeling process which requires expert knowledge. However, the performance of in-context learning approach is highly susceptible to the data quality (Liu et al., 2023). One primary cause of degrading data quality is data loss occurring during the data collection and curation process, even when guided by expert input. For instance, the analysis of radiographic images has reported an error rate of 3 to 5% (Robinson et al., 1999) and the problem appears challenging to resolve until now (Brady, 2017). Similarly, a basic chart review showed electronic health record (EHR) errors could reach 9 to 10% (Feng et al., 2020). Additionally, variations in data interpretation by medical professionals are quite common (Whiting et al., 2015) and this can further exacerbate these issues by adding another layer of complexity to ensuring the quality and consistency of data. In this paper, we introduce MID-M, a Multimodal framework with Image Description for Medical domain. It is an in-context learning framework that demonstrates robust perfor- mance even with low-quality data. Our framework uses general-domain LLM and leverages multimodal data by converting images to textual descriptions. Notably, it achieves compa- rable performance to other general-domain and fine-tuned LMMs without pre-training on multimodality and extensive fine-tuning for the medical domain. In addition, by processing the image as a text description, it has the advantage of representing the image in an acces- sible and interpretable way, compared to traditional embedded vector representations. Our framework is illustrated in Figure 1. To evaluate the model\u2019s performance in the scenario with low-quality data, we system- atically mask the texts in a medical dataset and make the new adversarial evaluation data. We compare the performance of general domain pre-trained and medically fine-tuned LMM models to our framework in a few-shot setting, focusing on their accuracy and semantic comprehensibility. Through this experiment, we aim to assess the effectiveness of LMMs in generalizing to domain-specific tasks and managing incomplete source data. 2 Unimodal Approach to Multimodal Challenges in Radiology Figure 1: Overview of MID-M framework. Generalizable Insights about Machine Learning in the Context of Healthcare In this study, we analyze the generalization capabilities of LLMs and LMMs across various dimensions. First, we investigate the ability of models trained in general domains to effec- tively generalize to medical texts through few-shot learning. 
This aligns with the recent trend towards adopting eco-friendly approaches in AI model training by reducing the need for massive pretraining and extensive fine-tuning. It also contributes to the validation of models\u2019 potential to generalize well across any domain or task, as aimed for in general-domain research. Second, we verify the models\u2019 ability to generalize in situations where data quality deteriorates. The challenge that in-context learning models are easily influenced by the number of few-shot samples and the quality of the training data remains unresolved. Therefore, concrete evaluation in this regard is crucial for the advancement of models. Furthermore, our research reveals that text-only models can efficiently handle multimodal healthcare tasks with less computational demand. We demonstrate that smaller language models can achieve results comparable to larger models, indicating a path for wider adoption of AI in healthcare, especially in settings with limited computational resources or lower-resourced healthcare environments. This is in line with the move towards developing globally accessible AI solutions by the National Institutes of Health1, highlighting the potential utility of the proposed research in expanding AI\u2019s reach.",
17
+ "main_content": "In this section, we review the relevant literature on large language or multimodal models in the general domain as well as those applied specifically to the tasks in healthcare. 2.1. Large Language Models for Vision and Language Tasks Multimodal pretraining Large multimodal models have often leveraged language models as the backbone architecture and incorporate methods to connect the image encoder with the language model (Awadalla et al., 2023; Alayrac et al., 2022; Li et al., 2023a,b; Xu et al., 2024; Lauren\u00b8 con et al., 2024; Liu et al., 2024; Dai et al., 2024). Notably, LLaVA (Liu 1. https://commonfund.nih.gov/bridge2ai 3 Unimodal Approach to Multimodal Challenges in Radiology et al., 2024) employs a direct projection layer, BLIP-2 (Li et al., 2023b) uses a lightweight Q-Former, and Flamingo (Alayrac et al., 2022) uses gated cross-attention, to name a few examples. However, this paradigm suffers from two critical drawbacks. First, pretraining requires a high computational burden. Jointly aligning vision backbones and large language models requires large computational resources, despite considerably fewer trainable parameters compared to the total parameters. For example, Flamingo takes two weeks with 1,536 TPUs for the largest run, and BLIP-2 takes around 10 days with 16 A100 GPUs. Hence, it becomes prohibitively expensive to switch to a different large language model. The second limitation is the lack of modularity and flexibility. Even though they integrate pretrained image encoder and frozen LLM, updating this integrated system demands training the network with substantial data instances and time (Lauren\u00b8 con et al., 2024), hindering iterative research and development. Multimodal Reasoning with Language There have also been approaches that attempt to tackle multimodal tasks using language models alone (Yang et al., 2022; Guo et al., 2022; Hu et al., 2022; Xie et al., 2022; Sur\u00b4 \u0131s et al., 2023; Berrios et al., 2023). Among these, LENS (Berrios et al., 2023) is particularly relevant to our work, as this framework extracts generic information such as tags, attributes, and captions with three different vision modules, and provides them to the language model. While this method can be beneficial for general domain tasks, such an approach does not scale well to tasks that require expert knowledge. In contrast, our framework facilitates LLM\u2019s multimodal reasoning within the medical domain. We introduce a more streamlined framework with a single \u2018vision module\u2019 and tailor a general domain LLM specifically for medical tasks, by leveraging the adaptability of the LLM in this knowledge-intensive domain. 2.2. Large Language Models for Healthcare Domain Several efforts have been made to specialize these general-purpose foundation models through fine-tuning on healthcare domain-specific data (Li et al., 2024; Moor et al., 2023; Zhang et al., 2023a; Wu et al.; Tu et al., 2024; Sun et al., 2023). However, task-specific fine-tuning is often inefficient, requiring large amounts of high-quality medical data and extensive computing resources. Consequently, efficient adaptation methods are also explored, which primarily follow two categories: lightweight fine-tuning techniques (Hu et al., 2021; Houlsby et al., 2019), and leveraging the models\u2019 in-context learning capabilities in a few-shot setting. 
While finetuned models generally outperform their counterparts in few-shot setting(Van Veen et al., 2023b,a; Lewis et al., 2020), there is a growing interest in exploring domain adaptation techniques, mainly in-context learning, which do not require additional training (Agrawal et al., 2022; Li\u00b4 evin et al., 2023; Byra et al., 2023; Yan et al., 2023; Nori et al., 2023; Van Veen et al., 2023a). In-context learning has many potential advantages for the clinical domain because there is often a limited set of labeled data due to the high level of expertise needed for annotation. Our work also aligns with this in-context learning approach. By leveraging the capabilities of general-purpose LLMs without additional fine-tuning, our approach aims to overcome the limitations of task-specific fine-tuning while maintaining the models\u2019 broad knowledge and generalization abilities. 4 Unimodal Approach to Multimodal Challenges in Radiology 3. Methods This section introduces MID-M, an efficient in-context learning framework that performs multimodal tasks using only small-sized language models. MID-M consists of two key components: a domain-specific image converter and a general-domain language model. Image Conversion with Domain-Specific Image Classifier To reformulate visionlanguage tasks to text-to-text task, we first extract key information from images and convert it to text. Taking inspiration from Classification-via-Description (Menon and Vondrick, 2022) which showed good generalization performance, we employ a Description-viaClassification strategy. Classification-via-description involves generating textual features representing each class with a language model, then analyzing whether the input exhibits those features to make the final classification decision. Conversely, we use the classification result to generate an image description. Note that using a general-domain image module trained only on generic data, as in Berrios et al. (2023), cannot yield meaningful classification results since chest X-rays require domain knowledge. Therefore, we utilize a SOTA model (Chong et al., 2023) for CheXpert, a publicly available large chest X-ray classification dataset. This classifer computes a probability of 14 major diseases for chest radiology images. For diseases that the classifier predicts to have a probability of presence greater than 20%, we use the sentence \u201cIt seems there is {disease} in the image.\u201d Conversely, for diseases predicted as absent, we use \u201cIt seems there is no {disease} in the image.\u201d These sentences describing the presence or absence of each disease are all concatenated and used as an image description. This method can be extended to other domains without domain expertise, by simply replacing the classifier with a domain-specific one. We provide an example of our prompt in Appendix A. General-domain LLM We utilize a general-domain language model for multimodal reasoning. While chains-of-thought (Wei et al., 2022) are generally advantageous for domain adaptation and complex reasoning, their benefits are not consistent across generation tasks (Kim et al., 2023). Furthermore, it requires domain knowledge to generate rationales. Thus, we utilize strategies known to be helpful for domain adaptation (Van Veen et al., 2023a) instead. Specifically, we prepend a sentence assigning the role of an AI assistant as a domain expert: \u201cYou are an expert medical professional.\u201d This is followed by a detailed task description crafted for the radiology domain. 
For the few-shot examples, we provide the image description, corresponding finding, and impression in sequence. During inference, we omit the impression from the examples to be completed by the model. We use a BM25 retriever, known for its capability to compute token overlap quickly and retrieve relevant examples effectively. 4. Dataset We utilized the MIMIC-CXR dataset (Johnson et al., 2019), a de-identified and publicly available chest radiograph database collected from 2011 to 2016. It has a total number of 227,835 images and 128,032 paired reports from 65,379 patients. Paired reports include finding and impression. A finding is a detailed observation made from the X-ray images written by radiologists, and an impression is a concise summary of the finding which becomes the primary mode of communication between medical professionals. 5 Unimodal Approach to Multimodal Challenges in Radiology Table 1: Example of MIMIC-CXR findings with different masking rate. Full Corrupted 0.1 Corrupted 0.3 Corrupted 0.5 PA and lateral views of the chest provided. AICD projects over the left chest wall with lead tip extending to the region of the right ventricle. The heart is mildly enlarged. There is no evidence of pneumonia or CHF. No effusion or pneumothorax seen. Bony structures are intact. PA and lateral views the chest provided. AICD projects over the chest wall with lead tip extending to the region of the right ventric . heart is mildly enlarged. there is no of pneumonia or CHF. effusion or pneumothorax seen. Bony structures are intact. PA and lateral of chest provided. CD projects over the left wall with lead tip to the region of the ventric the heart is . of pneumonia or CH . no effusion or pneum rax seen. intact. PA and lateral views of provided. AI the chest with tip to the vent heart is enlarged. there no F no usion or pneum rax structures are intact Preliminary Analysis Our examination of the MIMIC-CXR dataset revealed 3,146 samples (2.46% of the total) that were unsuitable for research due to one or more of the following reasons: the finding label contained fewer than three words, was shorter than the impression label, or had more than three words masked. This level of incompleteness aligns with error rates reported in previous studies (Robinson et al., 1999). Pre-preprocessing To prepare the data for our study, we first merged the original training, validation, and test sets. We then removed samples with the findings at both extremes of length, removing the shortest and longest 25% of findings to ensure a meaningful level of detail remained after applying masking techniques. This process resulted in a dataset of 64,613 images and reports, which we then randomly split into training, validation, and test sets containing 62,613, 1,000, and 1,000 samples, respectively. Masking To further evaluate the robustness of LLMs against incomplete data, we employed a masking technique akin to that used by BERT (Devlin et al., 2018), randomly deleting words or phrases at subword levels. We conducted masking at three different rates to the original test set and yielded four distinct test sets: original (full), corrupted at 10% (corrupted 0.1), corrupted at 30% (corrupted 0.3), and corrupted at 50% (corrupted 0.5). We evaluated the effectiveness of our corruption strategy using the F1CheXbert (Zhang et al., 2020) metric. CheXbert (Smit et al., 2020) is designed to identify 14 medical observations in chest X-ray images. 
The F1CheXbert score is calculated by comparing CheXbert\u2019s predictions on the generated text against the corresponding reference text. We compare the findings from full against the findings from each corrupted test set. We discovered a strong correlation between the extent of text corruption and the precision of disease mentions. The micro-average F1CheXbert scores were 91.2% for the 10% corrupted set, 69.3% for the 30% corrupted set, and 51.1% for the 50% corrupted set. Examples of corrupted text are illustrated in Table 1. 6 Unimodal Approach to Multimodal Challenges in Radiology 5. Experimental Setup 5.1. Task In this study, the task requires the model to generate an impression based on an X-ray image and its corresponding findings. For all experiments, we employed a 2-shot learning approach, utilizing the BM25 algorithm to retrieve two training samples with findings similar to the findings of the test sample. When experimenting with corrupted test sets, the samples are retrieved based on the corrupted findings. This task is designed to comprehensively evaluate the model\u2019s ability to understand medical images and clinical data, as well as to identify salient information even when incomplete data is given. 5.2. Baselines We have two groups of baselines in our experiments: LMMs that are pre-trained in the general domain and those further fine-tuned with medical domain data. However, considering the constraints of a few-shot learning setting, we found a limited number of LMMs are capable of processing multiple independent images in a single prompt. Notably, Flamingo (Alayrac et al., 2022), is trained on interleaved text and image pairs and supports multiple images. We used its open-sourced version, OpenFlamingo (Awadalla et al., 2023). This functionality is also applied to the models that are based on Flamingo, IDEFICS (Lauren\u00b8 con et al., 2024) and OTTER (Li et al., 2023a). For models trained on data from the medical domain, we selected MedFlamingo (Moor et al., 2023) and RadFM (Wu et al., 2023). MedFlamingo is trained with medical publications and textbooks and is based on OpenFlamingo. RadFM is a specialized model for the radiology domain and pre-trained with large-scale, high-quality multimodal dataset named MedMD. It\u2019s important to note that although MedMD includes MIMIC-CXR, we still chose to include it in our baseline set. Despite data leakage concerns in our full test setting, we believe that evaluating the model\u2019s performance on corrupted versions of MIMIC-CXR would provide valuable insights into its robustness and generalization capabilities. A detailed comparison is presented in Table 2. Note that all the baseline models have 9 billion parameters or more, whereas our backbone model, Flan-T5-xl, has only 3 billion parameters. For the baselines, we adhered to the example code in their public GitHub repositories when available; otherwise, we utilized a consistent prompt with the example code from other baselines. 5.3. Evaluation The performance of the models are compared using two different LLM evaluation methods, ROUGE-L (Lin, 2004) and F1RadGraph (Delbrouck et al., 2022). ROUGE, the RecallOriented Understudy for Gisting Evaluation, is a metric widely used in the text summarization task. This score evaluates the similarity between the generated text and the reference text based on token overlap. Specifically, we used ROUGE-L which measures the longest common subsequence (LCS) to assess similarity at the sentence level. 
F1RadGraph is a metric that evaluates medical entities and relations within the generated reports. It utilizes a PubMedBERT (Gu et al., 2021) fine-tuned on the RadGraph (Jain et al., 2021), a dataset with chest X-ray radiology reports annotated with medical entities and their relations. This 7 Unimodal Approach to Multimodal Challenges in Radiology Table 2: Descriptions of baseline models. The first four models are general-domain LMMs that can handle multiple images in one input. The two models in the middle are LMMs that are fine-tuned with medical domain data. Model Size Domain Image Encoder Language Model OtterM 9B General ViT-L/14 MPT-7b OtterL 9B General ViT-L/14 LLaMA-7b OpenFlamingo 9B General ViT-L/14 MPT-7b IDEFICS 9B General ViT-H/14 LLaMA-7b MedFlamingo 9B Medical ViT-L/14 LLaMA-7b RadFM 14B Radiology ViT-B/32 MedLLaMA-13b MID-M 3B General Classifier(7M) Flan-T5-xl fine-tuned PubMedBERT model identifies the entities and relations from the generated reports and the reference reports. When the generated report correctly matches the entities and their relation to that in the reference, they receive a reward of 1, and otherwise, 0. The final F1 score is computed based on this reward score. Both metrics are implemented using the Python Package Index (PyPI)2 6. Experiment 6.1. Result The experimental results are presented in Table 3. MID-M achieves performance comparable to that of other baseline models in all settings, despite using only a third of the parameters. It even surpasses other models in experiments with corrupted data, highlighting its exceptional generalization capabilities. RadFM, which previously trained on MIMIC-CXR, achieves the highest score on the full test set, which is not a surprising result. IDEFICS and OpenFlamingo demonstrate strong generalization ability, achieving scores around 0.4 across both ROUGE and F1RadGraph metrics. However, we observed that some models are highly sensitive to even a minor level of data corruption (i.e. 0.1). For instance, RadFM suffers from a significant performance drop in corrupted settings and shows performance similar to or even worse than large-scale general domain models. This sensitivity to text corruption is likely due to its training on only high-quality radiology images and associated texts. It is critical problem in clinical applications where even small decreases in performance can significantly impact patient care. Otter and OpenFlamingo also shows considerable performance declines in corrupted settings. Conversely, IDEFICS, MedFlamingo and our framework maintain robust performance on low-quality data. As the text masking probability increases (i.e. 0.3, 0.5), our model begins to demonstrate its full potential. It achieves the highest scores under the ROUGE metric and exhibits either comparable or superior performance on the F1RadGraph metric. 2. https://pypi.org/project/radgraph/ 8 Unimodal Approach to Multimodal Challenges in Radiology Table 3: Model performances in each test set with regard to the accuracy. The highest score is bolded and the second highest is underlined. 
Method Full Corrupted 0.1 Corrupted 0.3 Corrupted 0.5 ROUGE OtterM 0.3140 0.2258 0.2116 0.1894 OtterL 0.2655 0.2089 0.1876 0.1461 OpenFlamingo 0.4310 0.3458 0.3420 0.2888 IDEFICS 0.3951 0.3776 0.3321 0.2807 MedFlamingo 0.3804 0.3622 0.3270 0.2784 RadFM 0.5222 0.3952 0.3299 0.2534 MID-M(ours) 0.4064 0.3719 0.3438 0.2977 F1RadGraph OtterM 0.2801 0.2114 0.1704 0.1327 OtterL 0.2587 0.1994 0.1720 0.1237 OpenFlamingo 0.4030 0.3427 0.3151 0.2541 IDEFICS 0.4089 0.3688 0.3064 0.2543 MedFlamingo 0.3973 0.3510 0.3046 0.2566 RadFM 0.5054 0.3839 0.3094 0.2363 MID-M(ours) 0.3732 0.3379 0.2973 0.2622 It is also surprising that MedFlamingo, which is further fine-tuned from OpenFlamingo with medical domain text, does not outperform OpenFlamingo. This may be because MedFlamingo is primarily fine-tuned for medical visual question-answering tasks rather than various medical generation tasks. These experimental results shows the importance of task setting during pre-training and fine-tuning. 6.2. Ablation Studies To comprehend the the effect of each component in our framework to the overall performance, we systematically remove each element textual input (findings), image input (descriptions), and both. In scenarios where both text and images are excluded, the model must generate impressions based solely on two impressions retrieved based on the similarity of findings. The results of these ablation studies are presented in Table 4. The results indicate that the multimodal framework exhibits the strongest generalization capability by leveraging all available information. Since the impressions are primarily derived from findings, the text component alone proves to be most effective when the text data encompasses sufficient information. Still, using both image and text information achieves comparable scores and outperforms in the corrupted test sets. When text corruption exceeds a 10% probability, incorporating image descriptions improves the model\u2019s accuracy compared to using only the findings or only the image descriptions in most cases. One interesting point is, at the highest level of text corruption, the corrupted text acts as noise 9 Unimodal Approach to Multimodal Challenges in Radiology Table 4: Model performances in different input settings. Method Full Corrupted 0.1 Corrupted 0.3 Corrupted 0.5 ROUGE MID-M 0.4064 0.3719 0.3438 0.2977 w/o text 0.3794 0.3563 0.3440 0.2942 w/o image 0.4119 0.3683 0.3218 0.2550 w/o text and image 0.3649 0.3309 0.3129 0.2670 F1RadGraph MID-M 0.3732 0.3379 0.2973 0.2622 w/o text 0.3354 0.3185 0.2948 0.2575 w/o image 0.3918 0.3323 0.2784 0.2247 w/o text and image 0.3298 0.2976 0.2783 0.2378 and severely impairs the model\u2019s performance, leading to worse outcomes than when both text and images were excluded. This trend remains consistent across evaluation metrics of token overlap, measured by ROUGE, and semantic coherence, measured by F1RadGraph. 6.3. Disease Identification Ability In this section, we further analyze the model\u2019s ability to identify the major diseases in X-rays and corresponding findings. The evaluation in Section 6.1 is based on a broad comparison of impressions and predictions at the word and semantic level. Additionally in this section, we use the F1CheXbert to assess whether the model identifies all the key diseases noted in the gold impressions in its predictions as well. We report both the micro-average F1 score and the individual F1 scores for five main diseases, as shown in Table 5. 
Table 5: Comparison of MID-M\u2019s performance on the identification of major diseases under different levels of data corruption. Abbreviations: CMG, cardiomegaly; Consol, consolidation; Atelect, atelectasis; PE, pleural effusion. CMG Edema Consol Atelect PE Micro Avg. Full 0.4200 0.5714 0.4444 0.4916 0.6335 0.5302 Corrupted 0.1 0.3204 0.4923 0.1538 0.4067 0.5543 0.4379 Corrupted 0.3 0.3010 0.3884 0.1176 0.3972 0.4250 0.3724 Corrupted 0.5 0.2631 0.3606 0.1538 0.2704 0.4467 0.3327 Support 87 148 25 151 138 549 We found that the model\u2019s performance declines with increased data corruption, but the rate of decline varies across diseases. For instance, the model\u2019s ability to identify pleural effusion (PE) remains relatively robust against corruption, even under conditions of high data corruption, compared to other diseases. In contrast, our model experiences a significant drop in consolidation (Consol) even at the lowest level of corruption (0.1). This suggests that the model manages to recognize some diseases in a manner less susceptible to corruption. This might be attributed to the feasibility of retrieving similar samples. When there are many samples with a certain disease in the training set, it becomes easier to retrieve relevant samples for the disease, and vice versa. In addition, the significant drop in consolidation might be attributed to a unique linguistic challenge. When some of the subwords from consolidation are obscured, the remaining segments (such as \u2018solid\u2019 or \u2018tion\u2019) could be mistakenly associated with other terms frequently used in radiology reports, complicating its identification. 6.4. In-Context Learning Ability In this section, we evaluate the model\u2019s in-context learning capabilities by varying the number of examples (shots) included in the prompt. Previous experiments, as discussed in Section 6.1, uniformly used a two-shot setting. In addition to this, we explore the model\u2019s abilities in zero-shot and one-shot settings. The results are presented in Table 6. Table 6: Model performances with different numbers of shots in the input prompt. Method Full Corrupted 0.1 Corrupted 0.3 Corrupted 0.5 ROUGE 0-shot 0.1673 0.1270 0.1127 0.1091 1-shot 0.3818 0.3438 0.3172 0.2180 OURS (2-shot) 0.4064 0.3719 0.3438 0.2977 F1RadGraph 0-shot 0.1573 0.0994 0.072 0.0439 1-shot 0.3536 0.3091 0.2797 0.2386 OURS (2-shot) 0.3732 0.3379 0.2973 0.2622 We observe a decline in performance in both zero-shot and one-shot settings compared to the two-shot scenario. However, the disparity in performance across these settings is noteworthy. In the zero-shot setting, the model struggles significantly, achieving a ROUGE score of only about 0.1673 on the full test set and an F1RadGraph score approaching zero across all corrupted test sets. In contrast, introducing just one relevant example enhances the model\u2019s performance substantially, achieving results that are comparatively close to those in the two-shot setting in many cases. This underscores the impressive in-context learning capabilities of pretrained large models, highlighting their potential for easy adaptation to different domains without the need for fine-tuning. 7. Discussion In this paper, we introduce MID-M, a novel multimodal framework designed for the medical domain, which leverages the in-context learning capabilities of a general-domain LLM. 
By transforming images into textual descriptions, MID-M facilitates an interpretable representation of medical images, enabling complex information to be more accessible to engineers and, potentially, to practitioners. Notably, MID-M demonstrates superior generalization capability, outperforming models with substantially more parameters that are extensively trained for multimodality and medical applications. This emphasizes the potential of leveraging general-domain models for specialized tasks, offering a sustainable and cost-effective approach without massive pretraining and extensive fine-tuning. This approach aligns with the ongoing shift towards developing powerful AI tools in healthcare. Furthermore, it highlights the importance of robustness and generalization in such systems, particularly in healthcare settings where data quality can vary significantly. The framework\u2019s ability to maintain robust performance even with degraded data quality presents a compelling case for its application in real-world medical scenarios, even with limited resources. We hope our work can contribute to the development of globally accessible AI systems. Limitations Despite the contributions of our research, there are a few limitations to our work. First, while MID-M demonstrates superior in-context learning capability, it still underperforms compared to task-specific fine-tuned models. This is observed from the performance of RadFM on the full test set. However, it is important to note that models like RadFM demand high computational resources for their training and show a significant drop in generalization ability when faced with incomplete data. In future work, we could focus on developing in-context learning approaches that achieve performance comparable to that of task-specific models, even under ideal conditions. The second limitation is the artificial nature of our masking approach. In real-world applications, masking is likely to occur at the word level and to target sensitive information. Our method, however, employed subword-level random masking. Despite this fundamental difference, we believe our test set includes samples with realistically plausible masking, since subword masking cumulatively approximates word-level masking and sensitive information can also be obscured by randomly chosen masks. Our statistical analysis, which shows that the main diseases are masked in proportion to the corruption rate, also supports our hypothesis. Moreover, this approach allows for the systematic masking with which we produced four different test sets. We believe the insights derived from our experiments can provide valuable guidance for future research. Acknowledgments This work was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (RS-2022-00143911, AI Excellence Global Innovative Leader Education Program). Yoon JH has received grant support from NIH K23GM138984."
18
+ }
intro_8K/test_introduction_short_2405.01593v1.json ADDED
@@ -0,0 +1,18 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01593v1",
3
+ "title": "Large Language Model Agent for Fake News Detection",
4
+ "abstract": "In the current digital era, the rapid spread of misinformation on online\nplatforms presents significant challenges to societal well-being, public trust,\nand democratic processes, influencing critical decision making and public\nopinion. To address these challenges, there is a growing need for automated\nfake news detection mechanisms. Pre-trained large language models (LLMs) have\ndemonstrated exceptional capabilities across various natural language\nprocessing (NLP) tasks, prompting exploration into their potential for\nverifying news claims. Instead of employing LLMs in a non-agentic way, where\nLLMs generate responses based on direct prompts in a single shot, our work\nintroduces FactAgent, an agentic approach of utilizing LLMs for fake news\ndetection. FactAgent enables LLMs to emulate human expert behavior in verifying\nnews claims without any model training, following a structured workflow. This\nworkflow breaks down the complex task of news veracity checking into multiple\nsub-steps, where LLMs complete simple tasks using their internal knowledge or\nexternal tools. At the final step of the workflow, LLMs integrate all findings\nthroughout the workflow to determine the news claim's veracity. Compared to\nmanual human verification, FactAgent offers enhanced efficiency. Experimental\nstudies demonstrate the effectiveness of FactAgent in verifying claims without\nthe need for any training process. Moreover, FactAgent provides transparent\nexplanations at each step of the workflow and during final decision-making,\noffering insights into the reasoning process of fake news detection for end\nusers. FactAgent is highly adaptable, allowing for straightforward updates to\nits tools that LLMs can leverage within the workflow, as well as updates to the\nworkflow itself using domain knowledge. This adaptability enables FactAgent's\napplication to news verification across various domains.",
5
+ "authors": "Xinyi Li, Yongfeng Zhang, Edward C. Malthouse",
6
+ "published": "2024-04-30",
7
+ "updated": "2024-04-30",
8
+ "primary_cat": "cs.CL",
9
+ "cats": [
10
+ "cs.CL",
11
+ "cs.AI",
12
+ "cs.IR"
13
+ ],
14
+ "label": "Original Paper",
15
+ "paper_cat": "LLM Fairness",
16
+ "gt": "The pervasive nature of social media and online platforms in the modern digital era has exacerbated the spread of fake news, char- acterized by false or misleading information disguised as credible news. The proliferation of fake news poses critical challenges to societal well-being, public trust, and democratic processes [1], with the potential to incite fear, sway public opinion, and influence criti- cal decision-making [11]. To mitigate the ramifications of fake news dissemination, it is imperative to detect fake news, especially in its early stages before it spreads widely on social platforms. While fact-checking sites such as PolitiFact, and Snopes employ professionals for manual fact-checking,1 the rapid pace of misin- formation in the digital age makes laborious manual efforts time- consuming and unscalable [5]. Automated solutions are therefore essential, and in recent years, deep neural network-based models for fact-checking have been developed [10, 15]. Detecting fake news 1https://www.politifact.com, https://www.snopes.com is a multifaceted challenge that entails evaluating aspects like au- thenticity, author intention, and writing style. Various viewpoints can be taken, such as a knowledge-based approach that compares textual information from news articles against a factual knowledge graph, a style-based approach that examines differences in writing style between fake and real news, and a credibility perspective that examines relationships between news articles and entities like pub- lishers [23]. Additionally, propagation-based methods leverage in- formation provided in news dissemination [23]. Existing supervised learning approaches for fake news detection have demonstrated effectiveness in identifying misinformation. However, these models often require human-annotated data for training. This requirement can pose challenges as annotated datasets may not always be readily available or could be costly to collect in practice. LLMs have demonstrated impressive performance in various NLP tasks [2, 17], motivating us to explore their potential in fake news detection. The fact-checking process for professionals often in- volves assembling information from multiple, sometimes conflicting sources into a coherent narrative [5], highlighting the importance of verifying details before publication. In this work, we introduce FactAgent, an innovative agentic approach that harnesses LLMs for fake news detection. The distinction between using LLMs in an agentic versus non-agentic way lies in its operational mode: in a non-agentic approach, the LLM responds to prompts or learns in context to generate responses. In contrast, FactAgent integrates LLMs into its decision-making process by breaking down complex problems into manageable sub-steps within a structured workflow, leveraging not only the LLM\u2019s internal knowledge but also exter- nal tools to complete each component and collaboratively address the overall task. Our primary contributions can be summarized as follows: \u2022 We propose FactAgent, an agentic approach that utilizes LLMs for fact-checking and fake news detection. FactA- gent emulates human expert behavior through a structured workflow where LLMs can integrate both internal knowl- edge and external tools for news verification throughout sub-steps within the workflow. Unlike human experts, Fac- tAgent achieves enhanced efficiency, and unlike supervised models, it operates without the need for annotated data for training. 
Moreover, FactAgent is highly adaptable, allowing for easy modification for diverse news domains by adjusting the tools in the workflow. \u2022 FactAgent is capable of identifying potential fake news early in its dissemination process without relying on social context information. It provides explicit reasoning for the authenticity of news at each step of the workflow, enhancing interpretability and facilitating user understanding. \u2022 We conduct experiments on three real-world datasets, demonstrating FactAgent\u2019s effectiveness in achieving high performance. We compare FactAgent\u2019s performance following a structured expert workflow and an automatically self-designed workflow. Our experiments underscore the critical role of expert workflow design based on domain knowledge for FactAgent.",
17
+ "main_content": "Fake News Detection Existing approaches to fake news detection that do not use social context can be categorized into two main groups: content-based and evidence-based. The content-based approaches focus more on the text pattern within the news articles themselves, including writing style and article stance [13]. These approaches can leverage NLP techniques for analysis [16] such that LSTM [6] and BERT [3]. The evidence-based approaches verify news veracity by examining semantic similarity or conflicts in claimevidence pairs, often retrieving evidence from knowledge graphs or websites. For example, Popat et al. [15] introduced DeClarE, utilizing BiLSTM and attention network to model claim-evidence semantic relations. Xu et al. [21] developed GET, a unified graphbased model for capturing long-distance semantic dependency. Liao et al. [10] introduced MUSER, a multi-step evidence retrieval enhancement framework for fake news detection. With the recent development of LLMs, researchers have explored whether LLMs can effectively detect fake news using internal knowledge. For instance, Wei et al. [20] explored their potential by using techniques such as zero-shot prompt, zero-shot Chain-of-Thought (CoT) prompting, few-shot and few-shot CoT prompting. They also utilized LLMgenerated rationales to enhance the performance of a supervised model like BERT in fake news detection tasks. However, the aforementioned approaches still require annotated data for model training, which limits their ability to handle news requiring knowledge not present in the training data. In contrast, our proposed FactAgent eliminates the need for model training by integrating LLM\u2019s semantic understanding with external search engine for evidence retrieval. Zhang et al. [22] proposed HiSS, a hierarchical step-by-step prompting approach that integrates LLMs to decompose a news claim into subclaims and uses an external search engine to answer queries when LLMs lack confidence. Unlike HiSS, which employs LLMs in a non-agentic manner, our proposed FactAgent utilizes LLMs in an agentic manner, allowing LLMs to rigorously follow a workflow to collect evidence at each sub-step of the process. FactAgent leverages both LLM\u2019s internal knowledge and external search engines to examine the veracity of a news claim. LLM Agent The development of LLMs has led to the creation of LLM agents with diverse applications across various domains. For instance, Part et al. [12] designed a sandbox environment featuring virtual entities endowed with character descriptions and memory systems to simulate human behaviors. Liang et al. [9] explored a multi-agent debate framework, demonstrating the collaborative problem-solving capabilities of LLM agents. LLM-based approaches offer several benefits, including the ability to provide rationales based on emotions, content, textual descriptions, commonsense, and factual information. Leveraging these advantages, FactAgent is designed to interpret diverse clues and real-world contexts for fake news detection. Unlike existing approaches that allow LLMs to autonomously design their own problem-solving plans [4], FactAgent enables LLMs to adhere to a structured workflow, emulating human fact-checkers to complete the fake news detection task using the LLM\u2019s internal knowledge and external tools. 
3 METHODOLOGY FactAgent is designed to utilize LLMs in an agentic manner, emulating human expert behavior for fact-checking by decomposing the task into multiple sub-steps within a workflow and collecting evidence from various perspectives using both the LLM\u2019s internal knowledge and external tools. Our primary focus is to enable LLMs to follow a structured expert workflow, designed using domain knowledge, in this agentic manner. Given the multifaceted nature of fake news detection and the need for a nuanced understanding of diverse clues and real-world context, such as writing style and common sense [23], coupled with the strong textual understanding capabilities of LLMs and the potential for hallucinations [8], we categorize the tools designed for the LLMs to use within the structured expert workflow into two groups: one that utilizes only the LLM\u2019s internal knowledge (i.e. the Phrase, Language, Commonsense, and Standing tools) and another that integrates external knowledge (i.e. the URL and Search tools). Each tool is tailored with specific assumptions about fake news, as described below. Phrase_tool: This tool is tailored to scrutinize news claims by examining the presence of sensational teasers, provocative or emotionally charged language, or exaggerated claims. It operates under the assumption that fake news often employs these tactics to attract attention from readers. Language_tool: This tool is designed to identify grammar errors, wording errors, misuse of quotation marks, or words in all caps within news claims. It assumes that fake news often includes such errors to overemphasize credibility or attract readers. Commonsense_tool: This tool utilizes the LLM\u2019s internal knowledge to assess the reasonableness of news claims and to identify any contradictions with common sense. It operates under the assumption that fake news may resemble gossip rather than factual reporting and could contain elements that contradict common knowledge. Standing_tool: This tool is specifically crafted for news that is relevant to politics and aims to detect whether the news promotes a particular viewpoint rather than presenting objective facts. It operates under the assumption that fake political news often reinforces existing beliefs or biases held by target audiences. Additionally, it may contribute to polarization by portraying political opponents in a negative light or demonizing certain groups. Search_tool: This tool utilizes SerpApi (https://serpapi.com) to search for any conflicting information reported by other media resources. It assumes that fake news often contains unconfirmed information with little evidence to support the claims being made. Leveraging SerpApi can also mitigate the hallucination issue of the LLM by using external knowledge to cross-reference and verify the news claim. URL_tool: This tool integrates the LLM\u2019s internal knowledge and external knowledge to assess whether the news claim originates from a domain URL that lacks credibility. It first utilizes the LLM\u2019s internal knowledge to gain an overview of a domain URL. Subsequently, it leverages external knowledge, such as past experiences stored in a database containing URLs verified for real and fake news, to augment the understanding of the domain URL. The assumption underlying this tool is that fake news often originates from domains that are not credible. 
The external knowledge database can be updated whenever a news article is verified, ensuring its timely accuracy and reliability. Figure 1 depicts the structured expert workflow, which utilizes the aforementioned tools step by step to gather evidence for verifying the news claim. Upon receiving a news claim, FactAgent first enables the LLM to utilize its contextual capability to determine if the article concerns politics. If so then the news claim will be analyzed using all provided tools within the structured expert workflow; otherwise the Standing_tool will be skipped. In the final step, all evidence is collected and compared against a expert checklist to summarize and predict the veracity of the news claim. 4 EXPERIMENTS AND RESULTS We conduct experimental studies to answer the following research questions: \u2022 RQ1: How does FactAgent with a structured expert workflow perform compared to other fake news detection baselines? \u2022 RQ2: How does the domain knowledge influences the FactAgent\u2019s performance in fake news detection? \u2022 RQ3: What is the significance of the external search engine on FactAgent\u2019s performance? \u2022 RQ4: How does the decision-making strategy influence the performance of FactAgent? 4.1 Experimental Setup Dataset We evaluate FactAgent\u2019s performance using a structured expert workflow with three English-language datasets, Snopes [14], PolitiFact and GossipCop [19]. PolitiFact and GossipCop are chosen because they not only provide news titles but also source URLs for each news claim. Given that one tool in the expert workflow searches for relevant articles online and identifies conflicting reports, we ensure that the original source URLs of the sampled testing data contain their publication dates. This information is utilized to set constraints for the SerpAPI to avoid the online data leakage problem, wherein events occurring at the current time step are mistakenly included in the search results even though they had not occurred yet when the article was published. For the Snopes dataset, where source URLs are unavailable for news claims, we showcase the flexibility of FactAgent by adjusting the tools used in the expert workflow based on the available information. We randomly select 100 news articles from each dataset for evaluation, ensuring that the ratio of real news to fake news in the testing data is less than 1:2. Baselines To test the effectiveness of FactAgent using a structured expert workflow, we compare it with the following methods: \u2022 LSTM [6]: Applies LSTM to encode textual information from news claims. \u2022 TextCNN [7]: Utilizes CNN to capture local patterns and representations of news claims. \u2022 BERT [3]: Implements a transformer architecture to capture contextual relationships and understand news claims. \u2022 HiSS [22]: Introduces a hierarchical prompting method directing LLMs to break down claims into sub-claims and verifies them via question-answering steps, leveraging a search engine for external information. \u2022 Zero-shot Standard Prompt: Utilizes a prompt containing only the task description and the provided news claim. \u2022 Zero-shot CoT: Applies the CoT [20] prompting approach for zero-shot inference. \u2022 Zero-shot (Tool): Leverages individually designed tools mentioned in Section 3, excluding the Standing_tool since not all news concern politics. 
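To make this agentic use of the LLM concrete before turning to the implementation details, the sketch below shows how one internal-knowledge tool and the final checklist step could be chained together. It is a minimal illustration under stated assumptions, not the authors\u2019 code: the prompt wording, the run_tool helper, and the two-tool loop are hypothetical, and the OpenAI client is called directly for brevity even though the paper builds on LangChain.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def run_tool(prompt):
    # temperature=0 mirrors the deterministic setting reported below
    resp = client.chat.completions.create(
        model='gpt-3.5-turbo',
        temperature=0,
        messages=[{'role': 'user', 'content': prompt}],
    )
    return resp.choices[0].message.content

# Hypothetical prompt templates for two of the internal-knowledge tools.
TOOL_PROMPTS = {
    'phrase_tool': 'Does the following news claim use sensational teasers, provocative or '
                   'emotionally charged language, or exaggerated claims? Explain briefly. Claim: {claim}',
    'commonsense_tool': 'Does the following news claim contradict common sense or read more like '
                        'gossip than factual reporting? Explain briefly. Claim: {claim}',
}

def fact_agent(claim):
    # Run each tool in sequence and collect its observation.
    observations = [name + ': ' + template.format(claim=claim) and name + ': ' + run_tool(template.format(claim=claim))
                    for name, template in TOOL_PROMPTS.items()]
    # Final step: compare all observations against the checklist and summarize a verdict.
    final_prompt = ('Claim: ' + claim + '\n' + '\n'.join(observations) +
                    '\nCompare each observation with its checklist item and conclude '
                    'whether the claim is real or fake, with a brief justification.')
    return run_tool(final_prompt)

In a full version, the Search_tool and URL_tool observations would be appended to the same list before the final checklist prompt.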
Implementation Details We employ the LangChain framework and gpt-3.5-turbo model as the underlying LLM for all tools employed in the analysis engine.3 The temperature parameter is set to 0 to ensure the reproducibility. Each article is represented using its title, domain URL, and publication date for PolitiFact and GossipCop dataset. For the Snopes dataset only the title information is used due to the unavailability of URLs. Other baselines in our comparison used only the news title for analysis. The statistics of the baseline training data are summarized in Table 1. For the baseline HiSS [22], we directly utilize the same prompt provided by the researchers with the modification that \u2018pants-fire\u2019, \u2018barely-true\u2019, and \u2018false\u2019 labels are grouped together as \u2018false\u2019, while \u2018half-true\u2019, \u2018mostly-true\u2019, and \u2018true\u2019 labels are grouped as \u2018true\u2019 following Rashkin et al. [18]. To ensure an equitable comparison of LLMs using individually designed tools and FactAgent following the designed expert workflow, we retain the analysis results at each step from each tool throughout the workflow process and then apply the same bullet point for each tool in Figure 1 methodology to derive the final prediction veracity results for LLM using each tool in a zero-shot manner. To assess performance, we employ various metrics including accuracy, F1 score, as well as F1 scores specifically for real and fake news classifications to get a comprehensive assessment of the model\u2019s performance in fake news detection. Table 1: Statistics of datasets for supervised baselines training. GossipCop PolitiFact Snopes #Real News 3586 456 1050 #Fake News 2884 327 1500 5 EXPERIMENTAL RESULTS 5.1 Fake News Detection Performance (RQ1) The performance comparisons of different models are summarized in Table 2. Our observations reveal that FactAgent, following the expert workflow, achieves superior performance compared to other baselines on all datasets. Unlike supervised baselines that implicitly learn contextual patterns or writing styles of fake news from labeled data, the designed 3https://www.langchain.com Xinyi Li, Yongfeng Zhang, and Edward C. Malthouse Figure 1: The structured expert workflow for fake news detection is depicted in this diagram. The \"Standing_tool\" is highlighted with a dashed frame, and the fifth bullet point is shaded grey to indicate that the \"Standing_tool\" and its corresponding checklist item are skipped if the news is not relevant to politics. The Observations section comprises a list of observations collected from each tool sequentially. The News is represented using its title, domain URL, and publish date, formatted as \u2018Title: Riverdale Set to Recast a Major Character Ahead of Season 2, Domain URL: tvline.com, Publish Date: 04/25/2017\u2019. If the domain URL and publish date are unavailable, only the title information is used. tools explicitly leverage the LLM\u2019s internal knowledge and contextual understanding capabilities to assess the specific existence of common phrasing or language styles indicative of fake news claims. FactAgent also benefits from external tools that extend the LLMs capabilities beyond what supervised baselines can achieve. For example, FactAgent enables LLMs to search for related news claims online and detect conflicting reports, leveraging external sources to enhance their veracity assessment. 
Additionally, FactAgent allows LLMs to check the credibility of domain URLs using both internal knowledge of domain URLs and external databases containing recent domain URLs associated with verified fake news. These capabilities enable FactAgent with a structured expert workflow to have enhanced performances without requiring model training and hyper-parameter tuning processes typically associated with supervised learning models. Comparing the performances of LLMs using different prompting techniques (i.e., standard prompt, CoT prompt), or making decisions based on findings from each designed tool, we observe that LLMs utilizing CoT prompting do not consistently outperform standard prompting techniques. This observation is consistent with previous research [22], which also highlighted similar conclusions. After analyzing errors associated with the CoT prompting approach, we also observe the omission of necessary thought, echoing the finding of Zhang et al. [22]. The inferior performances of standard prompting and CoT prompting compared to LLMs making decisions based on commonsense and phrase observations across all three datasets have underscore the importance of explicitly guiding the LLM reasoning process from a specific perspective to effectively leverage its textual understanding capability. Furthermore, the superior performance of the FactAgent with an expert workflow compared to the LLM using individual tools highlights the importance of examining a news claim from various perspectives. Our approach differs from HiSS, which utilizes an LLM and external search engines primarily as a prompting technique. In contrast, FactAgent decomposes the fake news detection problem into simple tasks, with each task relying on LLMs to provide answers. The superiority of FactAgent over HiSS is attributed to the rigorous utilization of external tools integrated into the structured expert workflow, along with the examination of the LLM\u2019s internal commonsense. In contrast, HiSS relies on external search engines only when the LLM itself lack confidence to answer specific questions, potentially limiting its scope and depth of analysis compared to FactAgent. Overall, the superior performance of FactAgent over baselines demonstrates the advantage of utilizing LLMs in an agentic way to emulate human expert behaviors, rigorously examining a news claim from multiple perspectives, and integrating an external search process following an expert workflow to verify its veracity. Large Language Model Agent for Fake News Detection Table 2: Performance comparison among different models for PolitiFact, GossipCop, and Snopes. The superior outcomes are indicated in bold, with statistical significance indicated by a p-value < 0.05. Model PolitiFact GossipCop Snopes F1 Acc. F1_real F1_fake F1 Acc. F1_real F1_fake F1 Acc. 
F1_real F1_fake LSTM 0.79 0.79 0.79 0.79 0.77 0.77 0.76 0.77 0.65 0.66 0.64 0.67 TextCNN 0.80 0.80 0.79 0.82 0.79 0.79 0.78 0.79 0.62 0.64 0.55 0.69 BERT 0.85 0.85 0.85 0.85 0.79 0.79 0.78 0.80 0.63 0.63 0.59 0.67 HiSS 0.62 0.62 0.58 0.65 0.66 0.66 0.69 0.63 0.58 0.60 0.47 0.68 Zero-shot Standard Prompt 0.73 0.73 0.70 0.76 0.61 0.61 0.65 0.56 0.61 0.62 0.55 0.67 Zero-shot CoT Prompt 0.64 0.63 0.56 0.69 0.64 0.64 0.63 0.65 0.59 0.63 0.46 0.72 Zero-shot Language Tool 0.73 0.72 0.77 0.67 0.53 0.48 0.64 0.32 0.60 0.62 0.53 0.68 Zero-shot Phrase Tool 0.83 0.83 0.85 0.80 0.69 0.69 0.71 0.67 0.66 0.68 0.75 0.57 Zero-shot URL Tool 0.81 0.81 0.83 0.79 0.63 0.63 0.64 0.62 \u2014\u2014 \u2014\u2014 \u2014\u2014 \u2014\u2014 Zero-shot Search Tool 0.78 0.78 0.77 0.79 0.66 0.66 0.67 0.65 0.72 0.73 0.67 0.77 Zero-shot Commonsense Tool 0.80 0.80 0.80 0.80 0.76 0.75 0.71 0.79 0.66 0.66 0.61 0.70 FactAgent with Expert Workflow 0.88 0.88 0.89 0.88 0.83 0.83 0.83 0.83 0.75 0.75 0.75 0.75 5.2 Importance of Domain Knowledge (RQ2) This subsection conducts two experiments to evaluate the importance of domain knowledge in creating the expert workflow for FactAgent to verify the veracity of news claims. Expert Workflow vs. Automatically Self-designed Workflow Figure 2 depicts the instructions for the LLM to automatically generate a self-designed workflow to verify a news claim using the provided tools. The agent then executes the chosen tools step by step to collect findings related to the news claim. This process is similar to the workflow shown in Figure 1. At the final step, the LLM compares its findings with the checklist items corresponding to the selected tools to determine the veracity of the news claim. Figure 3 compares the performance of FactAgent with an expert workflow against that of an automatically self-designed workflow. It is observed that allowing FactAgent with an LLM to automatically design its own workflow for fake news detection results in inferior performance compared to instructing the agent to adhere to an expert workflow for the GossipCop dataset. The performance is similar for PolitiFact dataset, and shows slightly better performance for Snopes dataset. We further analyze the ratio of tool usage among testing samples for each dataset when the LLM designs its own workflow in Figure 4. From Figure 4, we observe that when the LLM designs the workflow using provided tools, it tends to prefer tools that focus more on news textual content while neglecting other important factors such as domain URL, despite the availability of the domain URL in the news description. Integration of Standing_tool for Political News The Standing_tool is specifically designed to check news claims that the LLM identifies as relevant to politics. In Figure 5, we conduct an experiment where we remove the Standing_tool from the expert workflow shown in Figure 1. As a result, all news articles, regardless of their relevance to politics, are processed using the remaining general tools (excluding the URL_tool for the Snopes dataset). For the PolitiFact and GossipCop datasets, all remaining tools are applied. Figure 5 shows the results of this experiment, which indicate a decline in performance for the PolitiFact and GossipCop datasets, and slightly better performance for the Snopes dataset. Combining the above observations from Figure 3, Figure 4 and Figure 5 in the context of PolitiFact data, we deduce that the Standing_tool, which analyzes political views, holds greater importance than the URL_tool. 
Conversely, for GossipCop data, we infer that overusing the Standing_tool and underutilizing the URL_tool produces worse results compared to scenarios where the URL_tool is used but the Standing_tool is omitted. Given that GossipCop data primarily focuses on entertainment, celebrity gossip, and rumors rather than politics, the emphasis on political bias detection with the Standing_tool in the LLM self-designed workflow may not be as relevant for this dataset. For the Snopes dataset, both using and not using the Standing_tool show improvements in performance. This suggests that identifying whether news is relevant to politics is less critical for the Snopes dataset. The external Search_tool appears to play a significant role in achieving good performance in Table 2. Combining the Search_tool with textual pattern analysis tools seems sufficient for the LLM to summarize the overall findings when combined with external search results. These observations underscore the importance of structuring an expert workflow that incorporates domain knowledge to design appropriate tools specific to the dataset\u2019s domain, rather than allowing the LLM to automatically design its own workflow for FactAgent. This approach ensures that the designed workflow is more tailored and suitable for the characteristics of the news domain being analyzed. One of the key benefits of FactAgent is the flexibility and ease of adding, deleting, or modifying certain tools within the workflow based on domain-specific requirements. This flexibility allows adaptations for different datasets and contexts, enhancing the effectiveness and applicability of FactAgent in various scenarios. Figure 2: Instructions for the LLM to automatically generate a self-designed workflow for fake news detection. The News is represented using its title, domain URL, and publish date if available. 5.3 Importance of External Search Engine (RQ3) Table 2 shows that solely relying on external searches to detect conflicting reports as a means of scrutinizing news claims does not yield optimal performance, particularly for the GossipCop dataset. This outcome could be attributed to the fact that while online searches can offer evidence, the same rumor or misinformation may also be reported across multiple online resources, leading to a dilution of credibility rather than a clear verification. Despite the suboptimal performance of relying solely on external searches, this subsection evaluates the impact of incorporating the external search tool within the expert workflow for FactAgent. To test the importance of the Search_tool, we exclude it from the expert workflow, leaving the LLM to assess veracity based solely on the remaining tools, which primarily utilize the LLM\u2019s internal knowledge. Figure 6 indicates that performance deteriorates without the Search_tool, suggesting that relying solely on the LLM\u2019s internal knowledge is insufficient for effective fake news detection. Figure 3: Performance comparison of FactAgent following an automatically self-designed workflow and an expert workflow. Figure 4: The frequency of each tool\u2019s usage among the testing samples when the LLM creates a self-designed workflow to evaluate news veracity. Figure 5: Performance comparison of FactAgent with and without utilizing the Standing_tool within the expert workflow. 
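As a concrete illustration of the external search step discussed above, the snippet below sketches how a Search_tool query might be issued through SerpApi with the google-search-results package. The helper name and the way results are condensed for the LLM are our own assumptions; only the basic q / num / api_key parameters of the GoogleSearch interface are relied on, and the publication-date restriction described in Section 4.1 would be layered on top of this.

import os
from serpapi import GoogleSearch  # provided by the google-search-results package

def search_tool(claim, max_results=5):
    # Retrieve related coverage; conflicting reports become evidence for the final checklist step.
    params = {
        'q': claim,
        'num': max_results,
        'api_key': os.environ['SERPAPI_API_KEY'],
    }
    results = GoogleSearch(params).get_dict().get('organic_results', [])
    lines = [r.get('title', '') + ' - ' + r.get('snippet', '') for r in results]
    return '\n'.join(lines) if lines else 'No related coverage found.'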
5.4 Ablation Study on Decision Making Strategy (RQ4) This subsection explores the final decision-making strategy employed by the LLM at the final step of FactAgent with an expert Large Language Model Agent for Fake News Detection Figure 6: Performance comparison of FactAgent with and without utilizing the Search_tool within the expert workflow. workflow. Currently, the LLM autonomously compares the collected information from various tools and makes the final decision without predefined decision rules, such as relying on each tool\u2019s prediction and using a majority vote strategy. To investigate the impact of using a majority vote strategy during the final verification process, we manually apply a majority vote to all tools\u2019 decisions used by the LLM. Figure 7 shows that using majority voting results in inferior performance across all three datasets compared to instructing the LLM to compare with a checklist for the final prediction. This performance discrepancy highlights that when the LLM compares observations with the checklist at the final step of FactAgent, it does not blindly rely on a majority vote. Instead, it suggests that giving the LLM the flexibility to summarize the overall prediction based on its reasoning and insights may yield better outcomes than imposing rigid decision rules like a majority vote. Figure 7: Performance comparison of different decisionmaking strategies. 6 CASE STUDY In the context of fake news detection, interpretation is critical for providing clarity and understanding to end users. Figure 8 illustrates the reasoning process of the LLM at each step of the workflow within FactAgent. The LLM outputs explicit observations from each tool in natural language and provides reasoning at the final step by comparing each observation with the corresponding checklist item to reach a conclusion. This stands in contrast to conventional supervised models that may lack explicit transparency in their decision pathways. The checking and reasoning process can also inform end users about what aspects to scrutinize to avoid falling for fake news. Figure 8: A case demonstrating the FactAgent\u2019s reasoning process at each step of the expert workflow, leading to the final conclusion. The LLM does not identify this news as relevant to politics; therefore, the Standing_tool is not used. Xinyi Li, Yongfeng Zhang, and Edward C. Malthouse 7 CONCLUSION Motivated by the rich internal knowledge embedded within LLMs, our work proposes FactAgent, which uses LLMs in an agentic manner to emulate human expert behavior in assessing news claims via a structured workflow. Our results demonstrate that FactAgent with an expert workflow outperforms supervised learning models, standard prompting, CoT prompting techniques, and single-aspect analyses. By rigorously integrating the external search and LLM\u2019s commonsense within the workflow, FactAgent also outperforms HiSS, which also utilizes LLM and external search engine for fact checking. Furthermore, our experiments underscore the importance of leveraging domain expert knowledge to design FactAgent\u2019s workflow and highlight the flexibility of modifying the workflow and the final decision-making strategy. The benefits of our approach over existing ones are many. Unlike supervised learning models that require annotated data for training and time-consuming hyper-parameter tuning, our method does not require any training or tuning. 
This makes our approach highly efficient and accessible, eliminating the need for extensive manually labeled datasets. Moreover, our experiments demonstrate that FactAgent\u2019s performance relies on the expert domain knowledge used to design the workflow. While the expert workflow designed in this work may not be optimal, a key advantage of FactAgent is its flexibility in incorporating new tools into the workflow. When experts detect new indicators of fake news in a specific domain, they can easily integrate this knowledge into a tool and seamlessly merge it with the existing workflow. FactAgent can then instruct the LLM to simulate their fact-checking process. These benefits have significant implications, such as using FactAgent with a carefully designed workflow to assist humans in data annotation for training supervised models for fake news detection. Additionally, FactAgent provides explicit reasoning for each step in the workflow, enhancing the interpretability of the fact-checking process in natural language. Some limitations in our work suggest potential directions for future research. For instance, our approach currently relies on news titles and domain URLs if available, but considering social context such as retweet relationships could be important for detecting fake news during the dissemination process [24]. Additionally, incorporating a multi-modal approach by analyzing web design elements may also enhance fake news detection capabilities. Moreover, some news titles alone may not directly determine the veracity of the news claim. It could be beneficial to integrate the full content of news articles and assess whether the title accurately reflects the content or if it is misleading to attract user engagement (i.e. clickbait). Lastly, exploring the integration of expert decision-making strategies to improve performance represents another potential avenue for future investigation."
18
+ }
intro_8K/test_introduction_short_2405.01610v1.json ADDED
@@ -0,0 +1,17 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01610v1",
3
+ "title": "Automating the Analysis of Public Saliency and Attitudes towards Biodiversity from Digital Media",
4
+ "abstract": "Measuring public attitudes toward wildlife provides crucial insights into our\nrelationship with nature and helps monitor progress toward Global Biodiversity\nFramework targets. Yet, conducting such assessments at a global scale is\nchallenging. Manually curating search terms for querying news and social media\nis tedious, costly, and can lead to biased results. Raw news and social media\ndata returned from queries are often cluttered with irrelevant content and\nsyndicated articles. We aim to overcome these challenges by leveraging modern\nNatural Language Processing (NLP) tools. We introduce a folk taxonomy approach\nfor improved search term generation and employ cosine similarity on Term\nFrequency-Inverse Document Frequency vectors to filter syndicated articles. We\nalso introduce an extensible relevance filtering pipeline which uses\nunsupervised learning to reveal common topics, followed by an open-source\nzero-shot Large Language Model (LLM) to assign topics to news article titles,\nwhich are then used to assign relevance. Finally, we conduct sentiment, topic,\nand volume analyses on resulting data. We illustrate our methodology with a\ncase study of news and X (formerly Twitter) data before and during the COVID-19\npandemic for various mammal taxa, including bats, pangolins, elephants, and\ngorillas. During the data collection period, up to 62% of articles including\nkeywords pertaining to bats were deemed irrelevant to biodiversity,\nunderscoring the importance of relevance filtering. At the pandemic's onset, we\nobserved increased volume and a significant sentiment shift toward horseshoe\nbats, which were implicated in the pandemic, but not for other focal taxa. The\nproposed methods open the door to conservation practitioners applying modern\nand emerging NLP tools, including LLMs \"out of the box,\" to analyze public\nperceptions of biodiversity during current events or campaigns.",
5
+ "authors": "Noah Giebink, Amrita Gupta, Diogo Ver\u00ecssimo, Charlotte H. Chang, Tony Chang, Angela Brennan, Brett Dickson, Alex Bowmer, Jonathan Baillie",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "cs.CL",
9
+ "cats": [
10
+ "cs.CL",
11
+ "cs.IR"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "Sentiment AND Analysis",
15
+ "gt": "Public interest in biodiversity is pivotal to the success of conservation efforts, but varies significantly across species, geographies, and time. While targeted conservation campaigns can amplify public engagement around focal species and * Equal contribution. Correspondence to: Amrita Gupta <[email protected]>, Diogo Ver\u00edssimo <[email protected]> Charlotte Chang <[email protected]>, arXiv:2405.01610v1 [cs.CL] 2 May 2024 Broadening conservation culturomics with NLP A PREPRINT catalyze policy change (Thaler et al., 2017), the systemic change needed to halt biodiversity loss requires cultivating public awareness and support for nature and biodiversity as a whole (D\u00edaz et al., 2019; Convention on Biological Diversity, 2022). Monitoring public attitudes towards species comprehensively and at scale is a formidable challenge, but conserva- tion culturomics\u2013analyzing digital data to examine societal relationships with nature\u2013holds great promise for this purpose (Correia et al., 2021; Ladle et al., 2016). Digital data sources offer global reach and cost-efficiency over conventional opinion-based surveys, and can reveal information-seeking behavior rather than behavioral intent (Cooper et al., 2019). Although recent work has developed attention metrics based on Wikipedia page views (Millard et al., 2021; Vardi et al., 2021) and Google Trends (Cooper et al., 2019; Burivalova et al., 2018; Vardi et al., 2021), news and social media offer additional insights into the context of public attention on species (Roberge, 2014). News media narratives shape public perceptions (G. King et al., 2017) while social media have become a dominant platform for sharing news and viewpoints toward issues including biodiversity conservation (Chang et al., 2022; Ver\u00edssimo, 2021; Papworth et al., 2015). However, unlike Google Trends and Wikipedia page views, news and social media yield unstructured text data, requiring careful search and filtering for relevant content. Selecting effective search terms for species in keyword-based search application programming interfaces (APIs) is a nuanced task. This partly stems from the mismatch between the specialized biological nomenclatures conservation experts use, such as Latin (e.g., Rhinolophus affinis) or specific common names (e.g., \u201cIntermediate horseshoe bat\u201d), and the broader folk taxonomic terms the public favors (e.g., \u201cbat\u201d or \u201chorseshoe bat\u201d) that may encompass multiple related species (Beaudreau et al., 2011). This highlights a trade-off between specificity and volume of relevant content when assessing public views on species groups. Using Latin (Jari\u00b4 c et al., 2020; Ladle et al., 2019) or full common names (Roberge, 2014; Kulkarni & Di Minin, 2021) as keywords enhances specificity but risks overlooking general references to species within folk taxonomies, potentially biasing search results towards scientific content, especially for species lacking well-known common names. Conversely, common names for folk taxa are challenging to infer. Past efforts hand-curated common names for target taxa (Fink et al., 2020), but extending this approach to thousands of species is both arduous and subjective. Additionally, some common names (e.g., \u201celephant\u201d) appear as substrings within unrelated species names (e.g., \u201celephant seal\u201d), requiring careful consideration when constructing search queries. 
Another challenge in conservation culturomics is the use of species common names in non-biological contexts, such as sports teams (e.g., Clemson Tigers), individuals (e.g., Tiger Woods), and other entities. Machine learning and natural language processing (NLP) approaches can be used to develop text classification models for filtering out such irrelevant results. These models predict whether or not a sample of text pertains to biodiversity conservation (Kulkarni & Di Minin, 2021), target species or conservation topics (Keh et al., 2023; Hunter et al., 2023; Roll et al., 2018; Egri et al., 2022). However, they require extensive manually annotated data for training, are susceptible to biases in data labeling, and may not generalize well to examples not seen during training. To address these challenges, we develop a pipeline for retrieving online news and X (formerly Twitter) posts about biological taxa of conservation interest. We introduce a novel method for deriving a folk taxonomy from English common names via substring matching, simplifying the identification of names used in everyday language to refer to animals. This approach facilitates analysis of less well-known species by grouping them into more broadly recognized taxa, overcoming the limitations posed by using only Latin or full common names for these species. It also reveals spurious groupings of unrelated species, corrected by incorporating negative search terms into API queries to enhance search specificity. Furthermore, we use a zero-shot text classification model to filter out irrelevant content, a cutting-edge machine learning approach that obviates the need for data annotation by generalizing to new tasks without additional training. We illustrate the utility of our pipeline in an example analysis of public discourse on several mammal taxa from 2019 to 2021, encompassing periods both before and after the United Nations World Health Organization officially declared the COVID-19 pandemic on March 11, 2020. Early in the outbreak, interest in wildlife increased, particularly in potential zoonotic coronavirus sources like bats or pangolins (Vijay et al., 2021; Petrovan et al., 2021; Zhou et al., 2020). We explore changes in public perceptions toward bats and pangolins (versus elephants and gorillas, which were not implicated in the pandemic) by examining discourse volume and sentiment shifts over time.",
16
+ "main_content": "Our pipeline for collecting online news articles and social media posts about biological taxa of interest is illustrated in Figure 1 and summarized below: 1. Query taxa and search term selection: We begin by selecting the taxa for analysis, focusing on either individual species or broader categories based on public visibility. This sets the foundation for our data 2 Broadening conservation culturomics with NLP A PREPRINT Figure 1: A diagram of the data pipeline, starting from constructing a folk taxonomy to derive search terms; retrieving news and tweets by querying each data source; performing zero-shot relevance modeling and scraping to obtain full-text for the news media articles; filtering out syndicated news and identifying specific references to queried taxa within news articles; and finally conducting analyses on shifts in volume, sentiment, and topics in the tweets and news articles through time and over space. collection by specifying which species are encompassed in each targeted search. Details can be found in Section 2.1. 2. Information retrieval: Following the identification of target taxa and corresponding search terms in the previous step, we use keyword search APIs to retrieve online news articles and social media posts containing search terms related to each query taxon (refer Section 2.2). 3. Full text extraction: For news articles, where our initial retrieval yields only titles and URLs, we first classify these article titles by topic to determine their relevance to conservation (see Section 2.3.1). Only articles deemed relevant undergo full text scraping (Section 2.3.2), ensuring efficiency by avoiding the extraction of text from irrelevant articles. 4. Data post-processing: We apply text similarity techniques to identify and filter out syndicated articles, which are near-duplicates of original content and could introduce redundancy into our text corpus. Further, we extract specifically those sections of text with original articles that directly reference the target taxa, thus enhancing the specificity of our analysis. 5. Data and text analysis: We leverage the collected data for a range of analyses aimed at uncovering insights into the public discourse surrounding the target taxa. We explore the volume of online content about different target taxa and how that varies geographically and over time. Sentiment analysis can help track shifts in the tone of these discussions, while topic analysis sheds light on underlying themes in these discussions. These examples illustrate the versatility of our dataset in facilitating diverse analytical approaches to deepen our understanding of the discourse dynamics related to the target taxa. 2.1 Search term selection Identifying salient folk taxa\u2013groups of species as referenced in everyday language\u2013is a fundamental step in monitoring public perceptions of these taxa in conservation contexts. We accomplished this through a human-in-the-loop approach, using English-language common names for species and their simplified forms as the basis for identifying these taxa in our analysis. First, we gathered the comprehensive list of mammalian species and their English common names from the International Union for Conservation of Nature and Natural Resources (\u201cIUCN\u201d) Red List (IUCNredlist.org), encompassing a total of 5,650 species and 9,150 common names. 
We leveraged an efficient dynamic programming algorithm to extract shared trailing substrings from the common names (such as \"sea lion\" from \"South American sea lion\" and \"Californian sea lion\", see Fig. 2), yielding prospective folk taxa. We then constructed an undirected graph representation of the connections between species, their common names, and the identified shared substrings. We clustered this graph into connected components, each of which represents a candidate taxon comprised of a group of species and a simplified set of names for them. Each cluster was manually inspected to ensure that the species formed a coherent group. Otherwise, nodes or edges in the graph were modified before repeating the clustering and inspection. In some cases, our method grouped several taxa into a broader taxon that might be considered too coarse. For instance, the \u201cAndean bear\u201d, \u201cblack bear\u201d, \u201cbrown bear\u201d, \u201cpolar bear\u201d, \u201csloth bear\u201d, and \u201csun bear\u201d were initially grouped under the \u201cbear\u201d taxon. Given the widespread recognition of distinct bear species, one could consider eliminating the node associated with the shared substring \u201cbear\u201d to separate these species into distinct clusters. A more complex issue arises, however, when shared substrings are found between common names of unrelated species. For instance, the substring \u201clion\u201d appears in the common names for Panthera leo (\u201clion\u201d), Puma concolor (\u201cmountain lion\u201d), Leontopithecus spp. (\u201clion tamarins\u201d), Zalophus wollebaeki (\u201cGal\u00e1pagos sea lion\u201d), and other unrelated species (Fig. 2). Conducting a search using the term \u201clion\u201d could potentially yield results encompassing all these taxa. To avoid this, we incorporated negative keywords (e.g. \u201clion\u201d AND NOT \u201cmountain lion\u201d AND NOT \u201csea lion\u201d AND NOT \u201clion tamarin\u201d) to improve differentiation among these species during searches, a strategy not documented in prior work. For each folk taxonomic entity, we compiled a set of positive keywords, at least one of which must be present in a search result, and an optional set of negative keywords, all of which must be absent from a search result. Additional details about the graph construction can be found in Supplementary Information Section A.1. Figure 2: Example of an initial connected component in the folk taxonomy graph for species in Order Carnivora based on their IUCN Red List common names. Solid lines represent edges between species and their listed common names; dashed lines represent edges between names and simplified names; and dotted lines represent connections that would be pruned on inspection to separate conceptually distinct taxa. 2.2 News and social media information retrieval We collected online news articles from the Global Database of Events, Language, and Tone (GDELT), a live database capturing global news media offering full-text search via the GDELT 2.0 DOC API. Using positive and, where applicable, negative search keywords for each target taxon, we requested English-language articles published between January 1, 2019 and December 31, 2021. Each query returned JSON-formatted article metadata that included the article\u2019s title, URL, domain, date, and country of publication. 
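As an illustration of this retrieval step, the sketch below shows what a single GDELT 2.0 DOC API request for one folk taxon and one short time slice might look like. The helper and keyword handling are ours, and the parameter names (query, mode, format, maxrecords, startdatetime, enddatetime) follow the public DOC API documentation as we understand it, so treat the details as an assumption rather than the project\u2019s actual query code.

import requests

GDELT_DOC_API = 'https://api.gdeltproject.org/api/v2/doc/doc'

def query_gdelt(positive_terms, negative_terms, start, end, max_records=250):
    # Any positive term may match; each negative term is excluded with a leading minus.
    # Multi-word common names would additionally need to be wrapped in double quotes.
    query = '(' + ' OR '.join(positive_terms) + ')'
    query += ''.join(' -' + term for term in negative_terms)
    params = {
        'query': query,
        'mode': 'ArtList',
        'format': 'json',
        'maxrecords': max_records,
        'startdatetime': start,  # e.g. '20190101000000'
        'enddatetime': end,      # e.g. '20190115000000'
    }
    resp = requests.get(GDELT_DOC_API, params=params, timeout=60)
    resp.raise_for_status()
    return resp.json().get('articles', [])

# One two-week slice of the 2019-2021 window for the 'pangolin' taxon (illustrative only).
articles = query_gdelt(['pangolin'], [], '20190101000000', '20190115000000')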
To work within the limit of 250 results per query imposed by the API, we divided the three year period into shorter intervals, aggregating results from each interval to form our final dataset. Similarly, for social media analysis, we utilized the Twitter Academic Access v2 API to access Twitter\u2019s full archive of public tweets. We queried this API with the positive and negative keywords for each target folk taxonomic entity, 4 Broadening conservation culturomics with NLP A PREPRINT requesting only tweets written in English and including geolocation data to support analyses on geographic differences in species media portrayals. Twitter data collection concluded before February 9, 2023, ahead of potential deprecation notices for the Academic Access API by Twitter. 2.3 News full-text extraction 2.3.1 Relevance filtering The keyword-based search described above often retrieves a mixture of relevant and irrelevant results for wildlife conservation (Kulkarni & Di Minin, 2021). For instance, a query using the search term \u201ctiger\u201d might fetch articles mentioning sports teams (e.g. the Clemson Tigers), people (e.g. Tiger Woods or Tiger Shroff), companies (e.g. Tiger Global Management, LLC), places (e.g. Tiger Hill), or even events (e.g. Year of the Tiger). Articles in which the search keywords refer to non-animal entities should be excluded from the corpus of wildlife-focused news articles. However, GDELT queries return only metadata such as titles and URLs, not full texts, requiring us to decide whether an article likely uses the search keywords in the intended sense from these relatively limited metadata. We make the simplifying assumption that articles about topics related to nature and conservation are more likely to use keywords in the intended context. Our goal, then, is to classify the title of a news article as relevant or irrelevant to wildlife or conservation. We developed a topic classification-based approach, in which an online news article is predicted as belonging to one or more predefined topics, a subset of which are considered relevant. We derived the set of predefined article topics in a two-stage approach. In the first stage, we randomly sampled 10,000 articles from GDELT query results for news articles from 2019, stratified such that at least one taxon from each of 14 Mammalian Orders was represented. The resulting sample contained mentions of 154 mammalian taxa. The full-text of these articles was obtained via webscraping (see Section 2.3.2) and text snippets containing animal search terms were extracted, with each snippet being 7 sentences long (for context, 3 sentences before and after the sentence mentioning the taxon). We used Latent Dirichlet Allocation (LDA) to perform unsupervised topic modeling on these text snippets, obtaining 40 initial topics. LDA models texts as mixtures of topics, which are themselves mixtures of words, allowing for the discovery of underlying thematic structures in large text corpora. The choice of 40 topics was numerous enough to glean many informative topics without exceeding the model\u2019s capacity to reliably converge within 150 iterations. 
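A compact sketch of this topic-modeling stage, using scikit-learn's LDA implementation with the settings quoted above (40 topics, up to 150 iterations); the toy snippet list and variable names are placeholders, not the authors' code or data.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

snippets = [
    "the pangolin was rescued by wildlife rangers near the reserve",   # placeholders;
    "the star batter hit the ball straight off the bat in the final",  # in practice these are the
    "bats roost in the cave system inside the national park",          # 7-sentence windows around each mention
]
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(snippets)

lda = LatentDirichletAllocation(n_components=40, max_iter=150, random_state=0)
lda.fit(doc_term)

# Print the 20 highest-weight words per topic to support manual labeling.
vocab = vectorizer.get_feature_names_out()
for topic_id, weights in enumerate(lda.components_):
    top_words = [vocab[i] for i in weights.argsort()[::-1][:20]]
    print(topic_id, top_words)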
We reviewed the 20 words scored most important by the model for each topic to assign a semantically meaningful label to each one, yielding 23 topic labels which we then grouped into relevant versus irrelevant topics as follows: Relevant: agriculture, climate change, conservation, energy, health, infrastructure, natural disasters, nature, outdoor recreation, science and technology, tourism, wildlife, habitat loss, invasive species, pollution Irrelevant: business, crime, education, entertainment, food, holidays, politics, sports We defined relevant topics as those that discussed species in a biological, conservation, or real-world context, whereas irrelevant topics were instead focused on non-biological issues. Given these predefined topics, our next challenge was to classify GDELT query results among these predefined topics, keeping in mind that we have access to only the article title at this stage in the GDELT data collection pipeline. We used Facebook\u2019s Bidirectional and Autoregressive Transformers (\u201cBART\u201d) model to perform multi-label \u201czero-shot\u201d topic classification for the GDELT article data using the topics identified through our LDA analysis of the full-text subset dataset (Lewis et al., 2019). Each article title received a unit-sum vector of topics with probabilities across the 23 topics enumerated above. If an article title was predicted as having any of the relevant topics with a model score greater than 0.5, the article was considered relevant and was flagged for webscraping. The zero-shot BART model is capable of predicting topics on new data, given the extensive scale of the data that were used to train these models and their watershed advance in creating numeric representations (also known as \u201cembeddings\u201d) that can capture the semantic structure of the English language. The major advantage of these models is that they enable conservation practitioners to now filter text corpora that would simply be impossible to manually review. 2.3.2 Scraping the full text of articles To obtain the full text of news articles flagged as relevant, we first submitted an HTTP request for the HTML content of each relevant GDELT news article URL. If the request was successful, the HTML content was parsed using one of three Python libraries (trafilatura, newsplease, or boilerpy3) to extract the article body. Often, however, the HTML request or the text extraction was unsuccessful due to broken URLs. As a method of recourse in these cases, we searched for a snapshot of the article on the Internet Archive. If a snapshot was found, we requested the HTML content of this snapshot and attempted to extract the article body text using the same combination of Python libraries as before. 5 Broadening conservation culturomics with NLP A PREPRINT 2.4 News data post-processing In mainstream media, news articles are often syndicated across multiple outlets with minimal changes to the text (Kulkarni & Di Minin, 2021). To prevent bias in downstream models and avoid redundant analyses on near-identical content, we implemented a process to identify duplicates. We measured the similarity between articles by first using Term Frequency-Inverse Document Frequency to create a vector representation of each article\u2019s text based on its most distinctive words, and then computing the cosine similarity between pairs of article vectors. We compared all pairs of articles published within two months of each other, as syndicated articles are typically released soon after their originals. 
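A sketch of this similarity computation follows (scikit-learn; the helper functions are illustrative, and the decision rule actually applied to these scores is described next).

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def pairwise_article_similarity(texts):
    # texts: full article bodies (in practice only pairs published within
    # two months of each other are compared).
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(texts)
    return cosine_similarity(tfidf)

def flag_syndicated(texts, threshold=0.95):
    # texts ordered by publication date (earliest first); a later article that is a
    # near-duplicate of an earlier one is flagged as a syndicated copy.
    sims = pairwise_article_similarity(texts)
    return [any(sims[i, j] > threshold for i in range(j)) for j in range(len(texts))]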
If the cosine similarity exceeded 0.95, indicating a high degree of similarity, we classified the later-published article as a syndicate of the earlier one. Conversely, if an article\u2019s cosine similarity score with every other article published within the preceding two months was below 0.95, we classified it as an original. Next, we isolated sentences within articles that directly reference the target taxa, a step we call \u201centity mention detection\u201d. This step enables us to precisely apply NLP tasks like sentiment analysis and topic modeling to text segments containing the entity of interest. This is especially useful for longer bodies of text like articles, which can discuss many different things and have shifts in tone. We scanned each article for the positive search terms for a target taxon. Upon finding a mention, we extract the sentence that contains this reference along with the sentence immediately preceding it. Including the preceding sentence is helpful as it often frames the mention with additional context. 2.5 Analyzing public discourse about species For each taxon, we had a set of articles and tweets from 2019 through the end of 2021. To determine the overall volume of discourse toward each taxon, we aggregated the number of articles mentioning each taxon by month and country. We examined the sentiment of media and public discussion of species using a lexicon sentiment model. Specifically, we used the \u201cValence Aware Dictionary and sEntiment Reasoner\u201d (abbreviated to VADER, (Hutto & Gilbert, 2014)). This yielded a sentiment score for each article ranging from -1 (negative) to 0 (neutral) to 1 (positive). We also aggregated these article-level sentiment scores by month and country to examine patterns in public discourse regarding species. We illustrate how conservation social science researchers and practitioners interested in messaging or marketing to conserve biodiversity can perform different analyses using the outputs of our data pipeline. Using information on the country where each news article is published, we show how one can create choropleth maps of the volume of public discourse toward different taxa. We use chord diagrams to visualize the distribution and co-occurrence of different topics associated with news media coverage of each taxa. Finally, we use breakpoint analyses (Killick & Eckley, 2014) to evaluate whether or not there were significant changes in the mean volume or sentiment through time for different taxa. 3 Results 3.1 Creating a folk taxonomy and collecting data on target taxa Using the approach described in Section 2.1, we derived folk taxonomic terms for the species within each of the 26 Orders of mammals in the IUCN Red List. Figure 2 illustrates one of the connected components for the Order Carnivora before inspection, conveniently grouping several species under the folk taxon \u201csea lion\u201d\u2013a term the public is more likely to use than any of those species\u2019 full common names. However, it also reveals links between \u2018lion\u2019 and both \u2018sea lion\u2019 and \u2018mountain lion\u2019 due to the shared substring \u2018lion.\u2019 These links, which could lead to mixed search results when searching for \u2019lion,\u2019 highlighted the need for negative search terms to improve search specificity. We conducted comprehensive analysis using our proposed pipeline for 10 taxa, which are listed along with their corresponding scientific taxa in Table A1. 
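For the zero-shot relevance filter of Sec. 2.3.1, one common implementation uses the Hugging Face pipeline with a BART model fine-tuned on MNLI; the facebook/bart-large-mnli checkpoint, the reduced label list, and the helper below are assumptions for illustration, not necessarily the exact configuration used in this work.

from transformers import pipeline

TOPICS = ["wildlife", "nature", "conservation", "agriculture", "sports",
          "business", "entertainment", "politics"]            # subset of the 23 labels
RELEVANT = {"wildlife", "nature", "conservation", "agriculture"}

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

def title_is_relevant(title, threshold=0.5):
    # Multi-label zero-shot classification: each topic is scored independently,
    # and a title is kept if any relevant topic exceeds the threshold.
    result = classifier(title, candidate_labels=TOPICS, multi_label=True)
    scores = dict(zip(result["labels"], result["scores"]))
    return any(scores[topic] > threshold for topic in RELEVANT)

print(title_is_relevant("Rescued pangolins released back into the wild"))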
These ten taxa range from the Genus level (gorilla) to the Order level (bats). We also considered more specific yet popularly recognized taxa like \u201cflying fox\u201d and \u201cvampire bat\u201d and lesser-known but still distinct taxa like \u201cpipistrelle\u201d and \u201chorseshoe bat\u201d. Figure 3 shows the outcome at each stage of our GDELT data collection pipeline applied to the 10 case study taxa. Raw article counts varied from 588,077 results for \u201cbat\u201d to 311 for \u201clong-tongued bat\u201d. Notably, approximately 54% of articles were predicted to be irrelevant to wildlife, with \u201cbat\u201d (62.6%), \u201cgorilla\u201d (48.4%), \u201celephant\u201d (45.3%) and \u201cvampire bat\u201d (36.4%) yielding high proportions of unrelated content due to homonymy (e.g. \u201cbat\u201d as a piece of sports equipment), idiomatic expressions (e.g. \u201celephant in the room\u201d, \u201c800 pound gorilla\u201d, and \u201coff the bat\u201d), and popular culture depictions of these animals. Full-text scraping was attempted for all articles that were predicted to be relevant based on their title, yet about a third were inaccessible due to broken links. We also found that 41% of articles across these taxa were syndicated, indicating significant potential for computational efficiency by limiting analyses to only relevant, original articles (Table A2). Figure 3: Number of news articles obtained at each stage of the GDELT data collection pipeline run fully on ten query taxa, from querying, to relevance filtering, webscraping, and deduplication. Table 1: The counts across taxa for Twitter posts from 2019-2021. Elephant: 171,059; Gorilla: 52,650; Pangolin: 3,667; Bat: 325,319; Flying fox: 1,889; Myotis: 166; Horseshoe bat: 140; Pipistrelle: 354; Vampire bat: 462; Long-tongued bat: 80. Ultimately, taxa with widespread popular appeal (elephants, gorillas) had more wildlife news articles than lesser-known taxa (pangolins), and generic taxa had more articles than specific ones. Pivoting to social media, we observed that the public made anywhere from several hundred to nearly 300,000 posts about different taxa from 2019 to 2022 (Table 1). In our subsequent analyses, we now illustrate different use cases of the data generated by our data pipeline. 3.2 How does discourse vary around the world? Figure 4 shows spatial variations in media coverage across different taxa. Globally recognized animals like gorillas receive widespread attention online, while less well-known taxa such as pangolins and pipistrelle bats see more geographically concentrated coverage. Pangolins are primarily featured in Southeast Asia, whereas pipistrelle bats, despite their prevalence in the British Isles and widespread distribution in Asia, attract less media attention outside the UK. Figure 4: Volume of relevant news articles from GDELT (left) and Tweets (right) between January 1, 2019 and December 31, 2022 for gorillas (4a), pangolins (4b), and pipistrelles (4c). Insets show top 10 countries by volume. 
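The volume aggregations behind these maps and the time series used later can be sketched as follows (pandas; the toy dataframe and column names are hypothetical).

import pandas as pd

articles = pd.DataFrame({
    "taxon":   ["pangolin", "pangolin", "gorilla"],
    "date":    pd.to_datetime(["2020-02-10", "2020-02-20", "2020-03-01"]),
    "country": ["Vietnam", "China", "United States"],
})

# Monthly article counts per taxon and country of publication.
monthly = (articles
           .groupby(["taxon", pd.Grouper(key="date", freq="MS"), "country"])
           .size()
           .rename("n_articles")
           .reset_index())
print(monthly)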
These findings show that media exposure to different animal taxa varies by geography, potentially influencing levels of awareness and familiarity. Figure 5: Chord diagrams depicting the co-occurrence of relevant topics for two focal taxa, Horseshoe bat (family Rhinolophidae; panel A) and Long-tongued bat (genera Glossophaga, Craseonycteris, and Leptonycteris; panel B). A wider chord indicates that more articles contain both topics, and each chord is a band colored by one of the two topic nodes that it connects. The circular perimeter of each diagram displays the proportional occurrence of each topic in the dataset, and the colors correspond to different groups of topics. 3.3 Which topics are associated with different taxa? The advent of large language models trained on Internet-scale text corpora offers major advances for zero-shot learning, where practitioners can use the predictions of a model on their own datasets to help sift through volumes of data that simply defy manual review. Figure 5 displays the co-occurrence of topics predicted by Facebook\u2019s BART (Bidirectional and Auto-Regressive Transformers) model (Lewis et al., 2019). Each line in one of the chord diagrams represents a topic that is co-occurring in an article with another topic (for the full set of chord diagrams for all folk taxonomic entities, please refer to Figure 8). Practitioners could use approaches such as these to evaluate how wildlife is framed in the news media. Comparing horseshoe bats, a known reservoir of SARS-CoV, versus long-tongued bats, which are not regarded as a coronavirus reservoir, we observed that the distribution and co-occurrence of topics was quite different between these two groups of species. Long-tongued bats had much more news coverage devoted to nature-based topics such as conservation or wildlife. Moreover, the nature or conservation threats topics (e.g. nature, wildlife, climate change, habitat loss, etc.) tend to occur with one another in articles. In contrast, horseshoe bat media coverage exhibited comparatively more discourse on the topics of conservation threats (e.g. habitat loss, natural disasters, climate change) or socio-economic issues (e.g. business, health, education). Health and food were more prevalent topics for horseshoe bats compared to long-tongued bats. For both types of bat, however, the chord diagram indicates that there is substantial co-occurrence of different topics at the level of individual articles. 3.4 How has the salience of taxa changed through time? Figure 6 displays changes in the volume of mass media articles referencing different taxa, shown using normalized counts of articles or Twitter posts for each taxon, aggregated over a two-week sliding window. 
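The breakpoint analyses referenced in Sec. 2.5 follow Killick & Eckley (2014); a rough Python analogue applied to such a biweekly series can be written with the ruptures package (a stand-in used here for illustration, not necessarily the implementation behind these results).

import numpy as np
import ruptures as rpt

# Hypothetical biweekly article counts for one taxon.
volume = np.array([3, 2, 4, 3, 2, 3, 19, 22, 18, 21, 20, 19], dtype=float)

# PELT search for shifts in the mean level; the penalty controls how many
# breakpoints are accepted and should be tuned (and corrected for multiple tests).
algo = rpt.Pelt(model="l2").fit(volume)
breakpoints = algo.predict(pen=10)
print(breakpoints)  # indices where the mean shifts, with the series length appended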
Figure 6: Changes in volume through time. The solid vertical magenta line denotes March 11, 2020, the date when the UN WHO declared COVID-19 a pandemic. The dashed vertical orange or blue lines correspond to any significant breakpoints in the trend for GDELT or Twitter, respectively, after conducting Bonferroni family-wise error correction. Comparing taxa implicated as coronavirus hosts or as potential spillover hosts (pangolin or horseshoe bat) versus species of conservation concern that are not clearly associated with COVID-19 (e.g. elephant), we observed differences in the salience of these taxa. A breakpoint analysis indicated that there were significant changes in the volume of news media articles published on horseshoe bats. Specifically, an average of 3 articles was published every two weeks on horseshoe bats before January 10, 2020, and this volume jumped to an average of 20 articles every two weeks after this breakpoint. We did not observe any other significant breakpoints in the count of Twitter posts or the count of news media articles for any of the other focal taxa. However, in the full set of taxa (Figure 9), we found breakpoints in news media coverage for flying fox, gorilla, and myotis between November 2019 and September 2020. We did not find any breakpoints for any taxa in terms of the volume of Twitter posts in our sample. However, unlike the horseshoe bat, all of these taxa exhibited either no change or a reduction in average volume of news media content. 3.5 How has the sentiment of discourse about taxa changed through time? Practitioners and researchers may also seek to monitor changes in public sentiment toward different taxa. We illustrate how such analyses can be conducted with the outputs of our pipeline. Focusing on the same focal taxa of elephant, horseshoe bat, and pangolin, we compare and contrast changes in mean article or Twitter post sentiment, measured through a unidimensional value ranging from -1 (very negative) to 0 (neutral) to 1 (very positive) (Figure 7). We saw that the mean sentiment toward taxa was lowest for pangolins in the news, with a mean of -0.01, and highest for long-tongued bats on Twitter, with a mean of 0.28. Of the three focal species, pangolins had the lowest average sentiment in the news but higher sentiment on Twitter (0.14), elephants had an average sentiment of 0.12 (news) or 0.13 (Twitter), and horseshoe bats had the highest average sentiment across the board (0.18 in the news and 0.26 on Twitter). Across all of the taxa in our pipeline, we only observed a significant breakpoint in sentiment for horseshoe bat (Figure 10). Horseshoe bat discourse showed a change in sentiment on October 6, 2020 across both news media coverage and Twitter posts. The mean sentiment of horseshoe bat coverage remained the same (0.2 on a scale from -1 to +1) in the news media; however, Twitter horseshoe bat sentiment changed from an average of 0.2 to 0.4, becoming more positive through time. 4 Discussion Our data pipeline permits practitioners and researchers to monitor public perceptions of biodiversity globally and with geographic or temporal disaggregation. This project builds on several recent advances using NLP machine learning approaches to process and analyze large, unstructured text data about biodiversity. 
For instance, Kulkarni and Di Minin (2021) created a pipeline to detect and extract news articles mentioning more than 500 CITES Appendix 1 species in the news media. Egri et al. (2022) analyzed articles from the Times of India for instances of human-wildlife conflict in West Bengal with 15 species, such as the Asian elephant (Elephas maximas). In our project, we extended these bodies of work by simultaneously scraping data from the news media and social media, by creating a folk taxonomy to broaden the data sampled, by leveraging cutting-edge large language models to filter our data in an efficient, performant, and replicable fashion, and by scraping the Internet Archive, which permits us to mitigate issues such as the ephemerality of digital media. One advance of our approach is using approaches drawn from string algorithms and graph theory to generate a folk taxonomy. We showed examples of how researchers and practitioners can develop a folk taxonomy to identify unique common names for sets of species. Such a folk taxonomy can permit future monitoring efforts to capture more of the potentially relevant discourse toward biodiversity (Beaudreau et al., 2011). At the same time, using a folk taxonomy increases the volume of results at the expense of introducing some quantity of non-relevant content; we demonstrate how zero-shot LLMs can deal with this problem. Zero-shot approaches allow conservationists to judiciously and efficiently filter public content about biodiversity using cutting-edge machine learning models \u201cout of the box\u201d. Therefore, conservation practitioners and researchers do not need to invest resources in creating labelled training and test data, a necessity for training a model from the ground up or fine-tuning an existing model using transfer learning, which may not always be feasible or advisable, especially for complex models with millions of parameters. In our case, we found that up to 62% of the articles about bats were irrelevant, showing the importance of filtering out results that are not related to public perceptions of nature. The choropleth maps gesture toward the relative popularity of different folk taxa; we observed that there was much broader coverage and higher volume in general for prominent taxa such as elephants or more generalized entities such as bat across Twitter and news media. Using the data generated from our pipeline, we saw that taxa differed in the distribution of topics in news media articles, and in terms of their volume and sentiment through time in the news media or on Twitter. We saw that the volume of news articles about horseshoe bats increased during the early days of the COVID19 outbreak, and that this contrasted with the other bat taxa in our pipeline, which largely did not exhibit any significant changes in volume. In contrast, while sentiment toward horseshoe bats was generally positive, our results indicated that there was a significant breakpoint in sentiment for both Twitter and news media horseshoe bat discourse in late 2020. These types of analyses can be extended in future monitoring and research efforts to evaluate the impact of public campaigns to conserve biodiversity or monitor human-nature perceptions in general (Fern\u00e1ndez-Bellon & Kane, 2020; Hammond et al., 2022; Millard et al., 2021; Wright et al., 2020; Correia et al., 2021). Changes in the volume of discourse about species can herald problems such as the societal extinction of rare species (Jari\u00b4 c et al., 2022). 
Calculating metrics such as volume and sentiment from automated data tracking public perceptions of biodiversity offers new, standardized ways to monitor public interest in biodiversity more broadly (de Oliveira Caetano et al., 2022). 11 Broadening conservation culturomics with NLP A PREPRINT Figure 7: Changes in sentiment through time. The solid vertical magenta line denotes March 11, 2020, which was the date when the UN WHO declared COVID19 a pandemic. The dashed vertical orange or blue lines correspond to any significant breakpoints in the trend for GDELT or Twitter respectively, after conducting Bonferroni family-wise error correction. 12 Broadening conservation culturomics with NLP A PREPRINT Our approach can serve as the foundation for an automated \u201cnature tracker\u201d, which would permit practitioners and researchers to track public perceptions of biodiversity. Monitoring human-nature perceptions is critical to evaluating progress toward the targets of the Global Biodiversity Framework, particularly the targets focused on human-wildlife conflict and sustainable use. By scraping and processing data from the news and social media, we can provide real-time, cost-effective insight that is global in scale. Therefore, digital approaches open new avenues for assessing compliance with the Global Biodiversity Framework, which is particularly pressing given that even as late as 2024 most of these targets still show significant gaps in their evaluation mechanisms. Without effective resolution that enables tracking over substantial time periods, these monitoring deficiencies could render the targets politically irrelevant, as it would be impossible to evaluate the progress\u2013or lack thereof\u2013made by different countries. In considering the future development of our data pipeline, we have identified several key areas for future exploration and enhancement. A primary aspect to address revolves around the linguistic scope of our approach, which currently centers solely on English-language data. It will be key for future work to broaden this scope to include other languages spoken in megadiverse countries, such as Spanish, Chinese, Portuguese or Bahasa Indonesia. Furthermore, it is evident that our conservation social science monitoring must adapt to dynamic shifts in platform governance and data accessibility. Recent transitions in the ownership and management of platforms like Twitter have underscored the urgency of this need. These transitions have coincided with the proliferation of misinformation regarding climate change (J. King, 2023) and wildlife in the context of the COVID-19 pandemic and a marked decline in active users, particularly environmentally-focused users (Stokel-Walker, 2022; Chang et al., 2023), both of which pose increasing challenges to monitoring approaches using online data. Overall, this study highlights the potential benefits of combining machine learning with the automated tracking of different data platforms to monitor public perceptions of biodiversity. We anticipate that methods such as ours or building on our approach can enhance applied conservation by creating new ways to examine human-nature perceptions at a global scale. 5 Acknowledgments We express our gratitude to the institutions that supported the work presented in this paper, which was conducted while the authors were affiliated as listed in the manuscript. 
We note that several authors have since moved to new affiliations: Tony Chang is now with Vibrant Planet, Amrita Gupta is at the Microsoft AI for Good Lab, Diogo Verissimo is currently at the University of Oxford, and Noah Giebink is with the Spatial Informatics Group. These changes in affiliation are mentioned for the sake of accuracy regarding the authors\u2019 current positions."
17
+ }
intro_8K/test_introduction_short_2405.01625v1.json ADDED
@@ -0,0 +1,18 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01625v1",
3
+ "title": "SAUNAS I: Searching for Low Surface Brightness X-ray Emission with Chandra/ACIS",
4
+ "abstract": "We present SAUNAS (Selective Amplification of Ultra Noisy Astronomical\nSignal), a pipeline designed for detecting diffuse X-ray emission in the data\nobtained with the Advanced CCD Imaging Spectrometer (ACIS) of the Chandra X-ray\nObservatory. SAUNAS queries the available observations in the Chandra archive,\nperforms photometric calibration, PSF (point spread function) modeling, and\ndeconvolution, point-source removal, adaptive smoothing, and background\ncorrection. This pipeline builds on existing and well-tested software including\nCIAO, VorBin, and LIRA. We characterize the performance of SAUNAS through\nseveral quality performance tests, and demonstrate the broad applications and\ncapabilities of SAUNAS using two galaxies already known to show X-ray emitting\nstructures. SAUNAS successfully detects the 30 kpc X-ray super-wind of NGC 3079\nusing Chandra/ACIS datasets, matching the spatial distribution detected with\nmore sensitive XMM-Newton observations. The analysis performed by SAUNAS\nreveals an extended low surface brightness source in the field of UGC 5101 in\nthe 0.3-1.0 keV and 1.0-2.0 keV bands. This source is potentially a background\ngalaxy cluster or a hot gas plume associated with UGC 5101. SAUNAS demonstrates\nits ability to recover previously undetected structures in archival data,\nexpanding exploration into the low surface brightness X-ray universe with\nChandra/ACIS.",
5
+ "authors": "Alejandro S. Borlaff, Pamela M. Marcum, Mehmet Alpaslan, Pasquale Temi, Nushkia Chamba, Drew S. Chojnowski, Michael N. Fanelli, Anton M. Koekemoer, Seppo Laine, Enrique Lopez-Rodriguez, Aneta Siemiginowska",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "astro-ph.GA",
9
+ "cats": [
10
+ "astro-ph.GA",
11
+ "astro-ph.HE",
12
+ "astro-ph.IM"
13
+ ],
14
+ "label": "Original Paper",
15
+ "paper_cat": "Diffusion AND Model",
16
+ "gt": "1.",
17
+ "main_content": "gle12, necessitating an elaborate deconvolution scheme and hampering the ability to exploit the full capabilities of the archive. Consequently, Chandra observations are under-explored to date in studies advancing the low X-ray surface brightness (SB) domain. Future studies of low X-ray SB emission (\u223c10\u22128 to 10\u221211 s\u22121 cm\u22122 arcsec\u22122 and beyond) enabled by data processed to enhance detection of low-count regions could advance progress in several currently open questions relevant to galaxy evolution, including the origins 1 Understanding the Chandra PSF https://cxc.cfa.harvard.edu/ ciao/PSFs/psf central.html 2 Chandra/CIAO PSF presentation from 233rd AAS meeting: https://cxc.harvard.edu/ciao/workshop/nov14/02-Jerius.pdf arXiv:2405.01625v1 [astro-ph.GA] 2 May 2024 2 of diffuse soft X-ray emission in galaxies and feedback involvement (Kelly et al. 2021; Henley et al. 2010). Lambda Cold Dark Matter (LCDM) cosmology predicts filaments of diffuse gas from the cosmic web to accrete during their infall onto proto-galactic dark matter (DM) halos (White & Rees 1978; White & Frenk 1991; Benson & Devereux 2010) where gas is heated to approximately the halo virial temperature (T > 106 K). This plasma, further shaped by energy injection from active galactic nuclei (AGN, Diehl & Statler 2008), supernovae (SN) and stellar winds (Hopkins et al. 2012), is detected as diffuse soft X-ray band emission around galaxies (Mulchaey 2000; O\u2019Sullivan et al. 2001; Sato et al. 2000; Aguerri et al. 2017). The origins and evolution of hot gas halos are important open questions in astrophysics, as halos are both the aftermath and active players of gas feedback processes, which modulate the star formation efficiency in galaxies (Rees & Ostriker 1977; Silk 1977; Binney 1977; White & Rees 1978; White & Frenk 1991). The largely-unexplored realm of extreme diffuse gas emission, likely associated with large departures from equilibrium (Strickland et al. 2004), is likely to preserve a unique historical record of these events. Such emission is also likely to be disregarded in studies using standard pipelines that are not optimized for preservation of statistically significant but low SB detections. This project is the first in a series that will study the hot gas halos around galaxies using X-ray observations from the Chandra X-ray observatory. The first step is to test the pipeline to reduce the Chandra/ACIS data products, named SAUNAS (Selective Amplification of Ultra Noisy Astronomical Signal). This paper describes the SAUNAS pipeline processing of data from the Chandra Data Archive3 and benchmarks it to previous works. In particular, we focus on the comparison of results between our analyses and those from other investigations for two well-detected X-ray sources characterized in the literature: NGC 3079 and UGC 5101. The latter has complex and extended X-ray emission, previously unexplored and only revealed by the current work. This paper is organized as follows. The SAUNAS pipeline is described in Sec. 2. The selection of published results for SAUNAS performance comparison is discussed in Sec. 3.1. The benchmark analysis is presented in Secs. 3.2, and 3.3. The discussion and conclusions are presented in Secs. 4 and 5, respectively. We assume a concordance cosmology (\u2126M = 0.3, \u2126\u039b = 0.7, H0 = 70 3 Chandra Data Archive: https://cxc.harvard.edu/cda/ km s\u22121 Mpc\u22121, see Spergel et al. 2007). 
All magnitudes are in the AB system (Oke 1971) unless otherwise noted. 2. METHODOLOGY 2.1. Observational challenges From an observational perspective, measuring diffuse X-ray halo properties in galaxies involves at least four technical challenges: 1. Detection: The outskirts of X-ray halos are extremely faint (\u227210\u22128 \u2013 10\u221211 s\u22121 cm\u22122 arcsec\u22122). Separating the faint emission associated with sources from that of the X-ray background (Anderson & Bregman 2011) within such low count regimes is an extraordinarily challenging task. Statistical methods that assume a normal (Gaussian) distribution may not produce accurate results. 2. Deblending: AGNs and XRBs are typically unresolved point sources that may contribute to the same X-ray bands where the hot gas halos are expected to emit (from \u223c0.3 \u22120.5 to 1.2 \u22122 keV). While in principle the detection of hot gas halos in nearby galaxies may not require very high spatial resolution observations or spectral capabilities, the separation of such emission from that of point sources does require them. High spatial resolution observations reduce systematic contamination in low surface brightness regimes. 3. Point spread function (PSF) contamination: The distribution of diffuse emission is easily confused with the scattered, extended emission of the unresolved bright cores that contaminate the outskirts of the target through the extended wings of the PSF of the detector (Sandin 2014, 2015). Most studies do not correct for this type of scattering effect, although a few works, such as Anderson et al. (2013), have explored the combined stacked hot gas halo emission of 2165 galaxies observed with ROSAT (0.5\u20132.0 keV), convolving the combined surface brightness profiles by the PSF model to take into account the dispersion of light. 4. Reproducibility & Accessibility: The methodologies for calibration, detection, and characterization of X-ray emission have substantial differences between studies. Due to the Poissonian nature of the X-ray emission, most studies employ different types of adaptive smoothing in their analysis. These software methods tend to be custom-made and infrequently made publicly available. Like3 wise, the final data products (final science frames) are seldom offered to the community. The SAUNAS methodology presented in the current paper attempts to address most of these points by 1) correcting the PSF in the images, 2) separating the emission of point sources from that of diffuse extended ones, and 3) providing a quantitative metric to determine if a detection is real or not. These two points implemented in SAUNAS are the major difference with other existing codes for detection of extended X-ray emission, such as vtpdetect (Ebeling & Wiedenmann 1993) or EXSdetect (Liu et al. 2013), as they do not attempt to deconvolve the observations using dedicated PSF models or to separate diffuse emission from point sources. 2.2. SAUNAS pipeline SAUNAS generates two main products: a) PSFdeconvolved X-ray adaptively smoothed surface brightness maps and b) signal-to-noise ratio (SNR) detection maps. The X-ray adaptively smoothed surface brightness maps provide the flux and luminosity of the hot gas X-ray halos, while the SNR detection maps provide the probability that the flux associated with each region on those maps is statistically higher than the local X-ray background noise. SAUNAS creates these products in four major steps (see Fig. 
1): 1) pre-processing of the archival Chandra Xray observations using the Chandra Interactive Analysis of Observations4 software, (Fruscione et al. 2006, CIAO, hereafter, see Sec. 2.2.1), 2) statistical resampling of the X-ray detection events by bootstrapping5, 3) PSF deconvolution of the event maps using the Bayesian Markov Chain Monte Carlo (MCMC) LIRA tool (Lowcounts Image Reconstruction and Analysis, Donath et al. (2022b); see Sec. 2.2.3)6, and 4) adaptive smoothing using VorBin (see Sec. 2.2.4)7. SAUNAS requires a few userinput parameters, including the location of the target (\u03b1, \u03b4), field-of-view (FOV), and energy band. The main steps of the pipeline are described in the following subsections. 2.2.1. CIAO pre-processing First, the data is pre-processed using CIAO in the following way: 4 CIAO: Chandra Interactive Analysis of Observations https:// cxc.cfa.harvard.edu/ciao/ 5 Bootstrapping: https://link.springer.com/chapter/10.1007/ 978-1-4612-4380-9 41 6 LIRA: Low-counts Image Reconstruction and Analysis https: //pypi.org/project/pylira/ 7 VorBin: Adaptive Voronoi Binning of Two Dimensional Data https://pypi.org/project/vorbin/ 1. All available Chandra/ACIS observations containing the user-supplied sky coordinates are identified using find chandra obsid. The datasets and their best available calibration files are automatically downloaded using (download chandra obsid and download obsid caldb). 2. The raw observations are reprocessed using chandra repro (v4.16). To avoid over-subtraction of both the source and background counts necessary for the statistical analysis, the particle background cleaning subprocess is set (check vf pha) to \u201cno\u201d. See the main CIAO manual8 for more information on this step. 3. All the available ACIS datasets are merged into a single events file (merge obs). This product serves as the phase 1 (first pass) observation file and is used to identify emission regions and to determine the source spectra needed for PSF construction. 4. The phase 1 merged observation file is used to define the angular extent of detected emission sufficient for basic spectral characterization. The spectral information is used in the step following this one. The VorBin (Cappellari & Copin 2003) library generates a map of Voronoi bins, from which and a surface brightness profile is constructed. The preliminary detection radius (Rlim,0), defined as the radial limit having a surface brightness equal to 10% of the surface brightness at the central coordinates, is computed. If Rlim,0 is undefined due to a low central surface brightness, the presence of detectable emission is unlikely. For such cases, Rlim,0 is arbitrarily set to 1/4 of the FOV defined by the user. The events inside this detection radius are used to construct a spectrum employed in the next step to define the deconvolution kernel (e.g., PSF) appropriate for this target. The choice of a 10% limit is an optimal compromise based on the analysis of Chandra/ACIS observations: including as much emission as possible from the source enhances the spectra used to generate the PSF. However, including a region too large reduces computational efficiency. Note that the spectrum derived in this step serves the sole purpose of informing PSF construction and is not intended for physical characterization of the gas. 8 ACIS VFAINT Background Cleaning: https://cxc.harvard. edu/ciao/why/aciscleanvf.html 4 Figure 1. SAUNAS pipeline flowchart. 
From left to right: SAUNAS pre-calibrates the Chandra observations by first using Chandra X-ray Center (CXC)/CIAO, which generates the event files, extended source masks, and the PSFs. The events in each individual visit are first resampled via bootstrapping and then deconvolved using LIRA. Voronoi binning is applied to each deconvolved observation, and merged into a single flux map after sky background correction. 5. CIAO\u2019s task simulate psf, in combination with the spectral information provided by the previous step, is used to generate a PSF representative of each observing visit to the target. The PSF modeling is dependent on the spectra of both the source and the background region, as well as the target position within the detector (off-axis angle). The latter is unique to each visit. The preliminary detection radius defines both the circular (R < Rlim,0) and annular (Rlim,0 < R < 2 Rlim,0) apertures used to measure the source and background spectra, respectively (specextract). The aspectblur9 is set to 0.25, and the number of iterations to 1000 per dataset. 6. Finally, the individual event files and PSFs corresponding to each visit are cropped to a cutout, with the preferred energy range selected. The outputs from the pre-processing procedure with CIAO described above, are: 1) the detected event maps (named obsid Elow-upper flt evt.fits, where low and upper refer to the energy range limits and obsid is the observation ID identification in the Chandra archive), 2) the exposure time maps (obsid Elow-upper flt expmap.fits), 3) the flux maps (obsid Elow-upper flt flux.fits), and 4) the PSF (obsid Elow-upper flt psf.fits). This set of intermediate files is used in the remaining steps of the SAUNAS pipeline to generate the final maps. 2.2.2. X-ray event resampling: bootstrapping The X-ray sky background is a very low count regime. Bartalucci et al. (2014) obtained a flux of 10.2+0.5 \u22120.4 \u00d7 10\u221213 erg cm\u22122 deg\u22122 s\u22121 for the 1\u20132 keV band 9 Aspectblur in CIAO: https://cxc.cfa.harvard.edu/ciao/why/ aspectblur.html and 3.8\u00b10.2 \u00d7 10\u221212 erg cm\u22122 deg\u22122 s\u22121 for the 2\u2013 8 keV band. This flux is equivalent to \u223c0.03\u20130.003 photons arcsec\u22122 (1\u20132 keV) and \u223c0.01\u20130.001 photons arcsec\u22122 for a typical t = 104\u2013105 s exposure (She et al. 2017). As a consequence, the signal from spurious groups of a few counts can dominate the shape of the Voronoi bins used for adaptive smoothing in each simulation if appropriate statistical methods are not implemented. To enhance the robustness of the adaptively smoothed mosaics and to reduce contamination from nonsignificant signal in the background, the X-ray events are re-sampled via replacement (bootstrapping) as an additional (and user-optional) step before deconvolution. Bootstrapping is especially well-suited for inferring the uncertainties associated with an estimand \u2013 such as the median surface brightness in cases for which the Gaussian standard deviation regime does not apply or parametric solutions are too complicated or otherwise unknown. Bootstrapping effectively reduces the leverage that single events or very low count sources may have in the background of the final mosaics by accounting for the photon-noise uncertainties in the PSF deconvolution and Voronoi binning steps through a non-parametric approach, allowing for a better assessment of the uncertainties in the final simulations. 
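A minimal sketch of this event-resampling step (NumPy; the event array layout is a placeholder, not the exact SAUNAS data structure):

import numpy as np

def bootstrap_events(events, n_boot=100, seed=None):
    # events: array-like of detected X-ray events, e.g. rows of (x, y, energy).
    # Returns n_boot resampled event lists of the same size, drawn with replacement,
    # so that the total number of counts (flux) is preserved in each realization.
    rng = np.random.default_rng(seed)
    events = np.asarray(events)
    n = len(events)
    return [events[rng.integers(0, n, size=n)] for _ in range(n_boot)]

resampled = bootstrap_events([(10, 12, 0.7), (11, 13, 1.4), (40, 41, 0.9)], n_boot=100)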
In our application, bootstrapping generates N\u223c100 (hereafter, Nboot) new X-ray event samples from the observed sample, preserving size (flux) and permitting events to be repeated. While the number of bootstrapping simulations is set to 100 by default as a compromise between precision and computational resources, Nboot can be defined by the user in SAUNAS. Each resampled list of events is translated into an image, which is fed into the next step, PSF deconvolution (Sec. 2.2.3). 2.2.3. LIRA PSF deconvolution The LIRA (Connors et al. 2011; Donath et al. 2022b) package deconvolves the emission from sources in X-ray data. Figure 2. SAUNAS analysis test on a synthetic dataset (panels: Simulated source, PSF, PSF convolved source, Simulated observation, LIRA deconvolved, Final smoothed mosaic). Top left: Underlying distribution of the simulated test source. Top right: Point spread function (PSF) of the simulated observation. Central left: Simulated underlying distribution of the test source convolved by the PSF. Central right: Simulated observed events based on the PSF-convolved distribution. Bottom left: LIRA PSF deconvolved average posterior image. Bottom right: Adaptively smoothed final mosaic. Dashed contours represent the 3\u03c3 and dotted contours the 2\u03c3 detection level of X-ray emission. The equivalent exposure time for this test is \u03c4exp = 5 \u00d7 106 s cm2. Through the use of LIRA, SAUNAS removes the contamination from active galactic nuclei (AGNs) and X-ray binary stars (XRBs), which can be significantly extended and easily confused with a diffuse halo if the PSF is not accurately corrected. LIRA uses a Bayesian framework to obtain the best-fit PSF-convolved model to the observations, allowing the user to evaluate the probability that a detection is statistically significant. LIRA was designed to provide robust statistics in the low-count Poissonian regimes representative of faint extended halos, the primary science focus of our project. As detailed in Sec. 2.2.1, the PSF models are generated specifically for each target, taking into account their location in the detector and their spectral energy distributions, on a per-visit basis. SAUNAS deconvolves data from individual visits, using these PSF models as input into LIRA. Discrete hard-band emission is produced primarily by point sources, including AGNs (Fabbiano et al. 1989; Fabbiano 2019), young stellar objects, and mass transfer onto the compact stellar object within XRB pairs (Wang 2012). Because these point sources contaminate the soft band emission, they are excised from the data. They are identified using the Chandra Source Catalog (Evans et al. 2010), and then removed from the event file by deleting events that lay within the cataloged positional uncertainty ellipse of the source. The Python implementation of LIRA is used to deconvolve the X-ray event files, thus minimizing the effects of the off-axis dependency associated with Chandra\u2019s PSF, such that data from different visits can be combined in a later stage. LIRA accepts five input arrays: a) counts (number of events), b) flux (in s\u22121 cm\u22122 px\u22121), c) exposure (s cm2), d) PSF, e) a first approximation to the background (counts). The first four inputs are generated by the CIAO pipeline (Sec. 2.2.1), while the initial baseline background is set to one. The number of LIRA simulations is set to 1000 (n_iter_max), in addition to 100 initial burn-in simulations (num_burn_in). 
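A sketch of the per-visit deconvolution call, following the pylira interface described by Donath et al. (2022); the array shapes and values are placeholders, and the exact keys and attributes for retrieving the posterior should be checked against the installed pylira version.

import numpy as np
from pylira import LIRADeconvolver

shape = (64, 64)
data = {
    "counts":     np.random.poisson(1.0, shape).astype(float),  # events per pixel
    "flux":       np.ones(shape),                               # flux map from CIAO
    "exposure":   np.full(shape, 1e4),                          # exposure map (s cm2)
    "psf":        np.full((9, 9), 1.0 / 81.0),                  # per-visit PSF model
    "background": np.ones(shape),                               # flat first guess
}
data["flux_init"] = data["flux"]   # initial guess for the deconvolved flux

deconvolver = LIRADeconvolver(n_iter_max=1000, num_burn_in=100)
result = deconvolver.run(data)     # posterior samples of the deconvolved image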
To speed up the process10, SAUNAS splits the LIRA simulations in parallel processing blocks (defined by the number of bootstrapping simulations), to be combined after the deconvolution process has finished. While 1000 LIRA simulations are run on each of the N\u223c100 bootstrappingresampled images described in Section 2.2.2, only the last LIRA realizations (those produced after the deconvolution process has stabilized) for each resampled im10 Even in parallel processing mode, PSF deconvolution takes the largest fraction of time of the SAUNAS pipeline. As a reference, in an Apple M1 Max 2021 laptop (32 Gb of RAM, 10 cores), the computation of a 1024 \u00d7 1024 mosaic typically takes two hours, with \u223c90% of the time spent in deconvolution. age are used (hereafter, Nstable), which typically is equal to \u223c100. To save computational resources, Nstable is adapted based on the number of bootstrapping simulations so that the deconvolved dataset consists of a maximum of N = Nboot \u00d7 Nstable = 1000 deconvolved images (posterior samples). 2.2.4. Adaptive Voronoi smoothing The deconvolved datacubes, hereafter referred to as \u201dBootstrapping-LIRA\u201d realizations, serve as a proxy of the probability density distribution of the true Xray emission on a pixel-per-pixel basis, at the Chandra/ACIS spatial resolution (a minimum of 0.492\u201d px\u22121, depending on the binning set by the user). To facilitate the detection of extended, low surface brightness structures such as hot gas halos \u2013 with apparent sizes substantially larger than the spatial resolution limit for the galaxies \u2013 the use of spatial binning enhances the detection of regions with very low signal-to-noise ratio. Voronoi binning (VorBin, Cappellari & Copin 2003) is applied to each of the N posterior samples in the deconvolved datacube. This process generates N Voronoi tesselation maps, each one differing from the other because they were calculated from the Bootstrapping-LIRA realizations. This dataset is a Voronoi map datacube representing the probability density distribution of the surface brightness of the target. A consequence of this binning approach is the loss of spatial resolution in the faintest regions of the image (halos, background) compared to the brightest regions (i.e., the galactic cores). This loss is caused by the fact that the Voronoi technique varies the bin size in order to achieve a fixed signal-to-noise ratio in the resulting map. As we are primarily interested in mapping the large scale halo structures, this loss in spatial resolution does not significantly impact our science goals. A surface brightness map is created by calculating the median across one of the axes of the Voronoi datacube. To prevent background emission from contaminating the final image, the scalar background level is determined individually for each realization of the BootstrappingLIRA datacube. All sources, both resolved and unresolved, must be meticulously masked prior to measuring the background level, to prevent systematically oversubtracting the background in the final mosaics. The source masking and background correction process are conducted iteratively: 1. After the LIRA deconvolution process and before the Voronoi binning is performed, point sources 7 from the Chandra Source Catalog (CSC 2.0;11 likely X-ray binaries, SNe, AGNs) that lay in the image footprint are removed from the associated event file. 
Point source removal prevents the associated emission from impacting the adaptive Voronoi maps and resulting in diffuse contamination that could be confused with a gas halo component. 2. A secondary mask is generated using CIAO\u2019s routine vtpdetect12. This mask identifies the regions with detectable extended X-ray emission that are removed from the maps before measuring the background level. A mask is generated for each CCD of each visit through independent analysis. The masks are then combined into a single master extended source mask. 3. If a source was detected in the preliminary surface brightness profile generated a part of the CIAO preprocessing step (see Sec. 2.2.1, step 4), then those pixels with R < Rlim,0 are also masked before the background assessment. 4. After removing all the masked pixels using the masks from the three previous steps, the first approximation of the background level (B0) is made by measuring the median value of the unmasked sigma clipped (\u03c3 = 3) pixels. The background value is then subtracted from the voronoi binned maps. Once the individual observations have been background corrected, all the flux maps are combined using mean-weighting by the respective exposure times. Finally, a refined background value (B1) is calculated using the combined observations by repeating the process described above. The noise level is then estimated from the background distribution as the ratio between the median background level and the lower limit of the 1\u03c3 error bar (equivalent to the 15.8% percentile). The final background-subtracted, PSF-corrected, and Voronoi binned surface brightness maps are derived by using a median of the background-corrected bootstrapping-LIRA realizations. The final mosaics and the noise level are used to generate three different frames to be stored in the final products: 1) an average adaptive X-ray surface brightness map, 2) a noise level map, and 3) an SNR map. 11 Chandra Source Catalog Release 2.0: https://cxc.harvard. edu/csc2/ 12 CIAO/vtpdetect: https://cxc.cfa.harvard.edu/ciao/ahelp/ vtpdetect.html 2.3. Quality tests This section presents the results from a series of quality tests designed to evaluate specific aspects of the output mosaics generated with SAUNAS: 1. Identify the fraction of false positives and false negative detections (Sec. 2.3.1). 2. Estimate the flux conservation of the deconvolution / Voronoi binning process (Sec. 2.3.2) 3. Quantify the quality of SAUNAS performance as compared to that of other methods (arestore, Sec. 2.3.3). 2.3.1. False positive / False negative ratio For quality assessment, SAUNAS is tested using two different models varying the exposure time to reduce the photon flux and the detectability conditions: 1. A model of an idealized edge-on galaxy with two lobes emerging from a jet (double jet model). 2. A shell-like structure with a central bright source (cavity model). The models are created as combinations of Gaussian 2D probability distributions (astropy.convolution.Gaussian2DKernel) with different ellipticities and rotations as described in Table 1. Following PSF convolution, a synthetic observed events map is generated using a random Poisson distribution (numpy.random.poisson). The double jet model includes the emission from three sources: the galactic disk, a bright core, and the lobes. 
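A minimal sketch of how such a mock observation can be generated with the calls named above (astropy and NumPy); the component sizes loosely follow Table 1, while the amplitudes, offsets, and the PSF stand-in are placeholders.

import numpy as np
from astropy.convolution import Gaussian2DKernel
from scipy.signal import fftconvolve

def gauss(x_stddev, y_stddev, theta_deg=0.0, size=257):
    # Normalized elliptical Gaussian image (astropy expects theta in radians).
    return Gaussian2DKernel(x_stddev, y_stddev=y_stddev, theta=np.deg2rad(theta_deg),
                            x_size=size, y_size=size).array

truth = (1e4 * gauss(1, 1)              # bright core
         + 50.0 * gauss(15, 3, 135)     # inclined disk
         + 20.0 * gauss(7, 7))          # lobes (spatial offsets omitted for brevity)

psf = gauss(3, 6, 20)                   # stand-in for the off-axis ACIS PSF model
expected = fftconvolve(truth, psf, mode="same") + 5e-3   # add a flat background level

observed_events = np.random.poisson(expected)            # simulated events map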
The range of surface brightnesses is \u223c10\u22126\u201310\u22128 s\u22121 cm\u22122 arcsec\u22122, excluding the considerably brighter (three to five orders of magnitude brighter) peak surface brightness of the core. Its morphology mimics the predominant structure observed in double jet radio galaxies such as Centaurus A (Hardcastle et al. 2007). The other test simulation, a cavity model, contains a hollow shell with a central bright source. This model provides an important pipeline test for the reconstruction of cavities found in the intergalactic medium. The detection of cavity rims seen in projection against the diffuse emission from the hot intracluster and/or intergalactic medium is challenging. These large bubbles potentially provide a useful record of interactions between AGNs and the intergalactic medium, in which the expansion of the associated radio lobes excavate the surrounding medium (Pandge et al. 2021). Our test model is designed to be particularly challenging: an X-ray cavity with a dominant central source representing an AGN 8 Figure 3. Fraction of false positives and false negatives in the SAUNAS detection maps derived from two truth models as a function of the equivalent exposure time (cm2 s). Blue symbols and lines represent the fraction of false negatives, while red represents the fraction of false positive detections in the mock maps. Cross symbols correspond to the double jet model and filled circles represent the cavity model (see Table 1). Vertical dashed lines indicate the median equivalent exposure times for the analyzed real observations in their respective bands. (Blanton et al. 2001; Dunn et al. 2010). The surface brightness background level of both models is fixed at 5\u00d710\u22129 s\u22121 cm\u22122 arcsec\u22122, and the equivalent exposure time is assumed to be flat an varying from \u03c4exp = 108 to \u03c4exp = 104 s cm2. For reference, \u03c4exp = 5 \u00d7 105 s cm2, equals \u223c10 ks at 0.3 \u22121.0 keV band13. The synthetic data are generated using the real PSF associated with the Chandra/ACIS datasets of NGC 3656 (Arp 155, PID:10610775, Fabiano, G., Smith et al. 2012). This PSF, which displays the characteristic ellipsoid pattern of off-axis ACIS observations, is selected as a worst-case scenario, given its extreme ellipticity due to its off-axis position in the detector array. The readout streak14 is visible as a spike departing from the center of the PSF at a position angle of -70\u25e6approximately (North = 0\u25e6, positive counter-clockwise). The simulated observed events are passed to the SAUNAS pipeline for processing, followed by a comparison between the detected (3\u03c3) maps and truth models. The quantitative quality test includes identification of 13 Chandra variation of effective area with energy https://cxc. cfa.harvard.edu/proposer/POG/html/INTRO.html 14 Chandra/ACIS PSF: https://cxc.cfa.harvard.edu/ciao/PSFs/ psf central.html the fraction of pixels that were incorrectly identified as false negatives (FN) and false positives (FP). Fig. 2 demonstrates the deconvolution and smoothing process for a mock galaxy with \u03c4exp = 5 \u00d7 106 s, having both diffuse X-ray emission and an extended PSF. The position angle selected for the model galaxy (Table 1) is selected specifically to offer a nontrivial test for the PSF deconvolution method. By using a position angle of 45\u25e6, the resulting convolved image displays two elongated features with apparently similar intensity (central left panel in Fig. 
2, PSF convolved source): one real, and one created by the PSF. If the PSF elongated feature is removed in the final images, we can conclude that the image reconstruction was successful. After Poisson sampling (see Simulated observation panel in Fig. 2), the resulting events map is equivalent to the processed CIAO event files. The events map shows broad emission for the core of the galaxy model in which the disk is indistinguishable. The two lobes are still present, but considerably blended with the emission from the inner regions. The events are then processed using SAUNAS (LIRA deconvolution, Bootstrapping, and vorbin steps). The results from the PSF deconvolution (LIRA deconvolved panel in Fig. 2) show a removal of most of the PSF emission, recovering the signal from the disk of the galaxy and removing the PSF spike emission. However, 9 Mock model Component Size \u00b5 q PA (1) (2) (3) (4) (5) (6) (\u03c3x, \u03c3y, pixels) [s\u22121 cm\u22122 arcsec\u22122] [\u25e6] Double jet Core 1,1 1.2\u00d710\u22124 1 0 Disk 15,3 \u223c10\u22126\u201310\u22128 0.2 135 Lobes 7,7 \u223c10\u22126\u201310\u22128 1 0 Jet 25,2 5\u00d710\u22127 0.08 45 Background \u2013 5\u00d710\u22129 \u2013 \u2013 Cavity Core 1,1 1.2\u00d710\u22124 1 0 Shell [30\u201345] \u223c10\u22127 1 0 Background \u2013 5\u00d710\u22129 \u2013 \u2013 Table 1. Photometric and structural properties of the synthetic test models. Columns: 1) Name, 2) Component, 3) Size, 4) Surface brightness, 5) Eccentricity, 6) Position angle. a significant amount of noise is still visible, and the background level is difficult to estimate (lower left panel of Fig. 2). After applying the bootstrapping and Voronoi binning methods, the resulting final corrected mosaic (final smoothed mosaic panel, Fig. 2) clearly shows the signal from the X-ray lobes, the disk, and the central bright core over the background. The 2\u03c3 and 3\u03c3 contours show the detected features following the calibration procedures described in Sec. 2, demonstrating complete removal of the PSF streak in the final mosaics (at a 99.7% of confidence level). The original shape and orientation of the disk is recovered, with the flux correctly deconvolved into the bright core of the model galaxy. Due to its dim brightness, the jet that connects the lobes with the main disk is notably distorted in the final mosaic, but still visible at a 2\u03c3 confidence level. For this test, the fraction of pixels unrecovered by the pipeline that were part of the model sources (false negatives, FN) is FN = 3.2%. On the other hand, the fraction of misidentified pixels that were part of the background (false positives, FP) is FP = 4.0%. The maps of false positives and false negatives for this test are available in Appendix B. The test for the cavity model is repeated, sampling different equivalent exposure times. The results are shown in Appendix B. Fig. 3 presents a comparison of the false positive and false negative fraction as a function of the equivalent exposure time and model. For equivalent exposure times higher than \u03c4exp = 106 s cm2, the FP and FN are lower than 5\u201310%. These fractions increase towards shorter exposures as expected, showing a notable increase to 20% of false negatives (true source emission that is unrecovered by SAUNAS) at approximately \u03c4exp = 5 \u00b7 105 s cm2. 
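As a small, hedged illustration of the bookkeeping behind these fractions: the quantities can be computed from a boolean detection map and the truth model as below. The text does not spell out the exact normalization, so the denominators chosen here (all true source pixels for FN, all background pixels for FP) are an assumption, and the array names are illustrative.

# Sketch of the false-positive / false-negative accounting of Sec. 2.3.1.
# `detected` is a boolean 3-sigma detection map from the pipeline; `truth` marks
# the pixels that belong to the injected model sources.
import numpy as np

def fp_fn_fractions(detected, truth):
    detected = np.asarray(detected, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    # False negatives: true source pixels that were not recovered,
    # as a fraction of all true source pixels.
    fn = np.sum(truth & ~detected) / np.sum(truth)
    # False positives: background pixels flagged as detections,
    # as a fraction of all background pixels (assumed normalization).
    fp = np.sum(~truth & detected) / np.sum(~truth)
    return fp, fn

# e.g. fp, fn = fp_fn_fractions(snr_map >= 3.0, model_image > background_level)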
The reason for this increase is the lack of detection of the dimmer outer regions in contrast with the brighter core (the lobes in the case of the double jet model, and the outer shell in the cavity model). Interestingly, the fraction of false positives does not increase substantially even at extremely low equivalent exposure times, remaining stable at \u223c10% down to \u03c4exp < 104 s cm2. This result demonstrates that even in cases of extremely short exposure times, SAUNAS is not expected to generate false positive detections, which is a critical requirement for our study. 2.3.2. Flux conservation In an ideal scenario, the total flux of the events processed by SAUNAS should be equal to the total flux in the pre-processed frames by CIAO. In practice, the baseline model assumptions during the deconvolution process may affect the total flux in the resulting frames. LIRA assumes a flat background model that\u2013combined with the counts in the source\u2013tries to fit all the events in the image. However, deviations from this ideal scenario (non-uniform background, regions with different exposure time) generate differences between the input and output flux. In order to understand the impact of flux conservation in LIRA deconvolved images, we must 1) analyze the relative difference of flux before and after deconvolution, and 2) determine if the residuals of the deconvolution process generate any systematic artificial structure (i.e., photons may be preferentially lost around bright sources, generating holes in the image or erasing the dim signal from halos). Total flux conservation is tested by measuring the ratio between the total flux in the input frames (those obtained at the end of the CIAO pre-processing, see Sec. 2.2.1) divided by the total flux in the final, SAUNAS 10 Figure 4. Flux conservation in SAUNAS frames. The histogram represents the probability distribution of the ratio between the recovered flux after SAUNAS processing and the total flux of the input, pre-processed frames. processed frames. We perform this test on real (UGC 5101, see Sec. 3.3) and synthetic observations (Sec. 2.3.1). The results are shown in Fig. 4. A total flux loss of \u223c5% is detected in the SAUNAS processed frames when compared with the pre-processed event maps by CIAO. The results are consistent in real observations (recovered flux ratio of 95.0 \u00b1 1.7%) and in synthetic observations (95.4+2.7 \u22122.4%). Using different simulations, we determined that this small flux loss is independent of the size of the FOV (in pixels), remaining stable at \u223c5%. For the total area of the images analyzed, a 5% of lost flux is negligible and well within the stochastic uncertainty of typical photometry (see the error bars in the profiles described in Fig. 5). We consider a flux conservation ratio lower than 100% (i.e., 90% \u2013 99%) as erring on the side of caution from a statistical perspective: the bias of LIRA to lose flux implies that SAUNAS will not generate false positive detections of hot gas halos. 2.3.3. Quality PSF deconvolution test While Sec. 2.3.2 reported on the conservation of total flux in the image as a whole, this section discusses whether SAUNAS introduces unwanted artificial structures (fake halos, or oversubtracted regions) in the processed maps. For this test, two additional types of test sources are used: 1) a point source, and 2) a circular extended source. Both of these sources have been previously combined with a Chandra/ACIS PSF. 
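A minimal sketch of the flux-conservation check of Sec. 2.3.2 described above, assuming the CIAO pre-processed flux image and a stack of background-corrected LIRA/bootstrap realizations are available as arrays; the function and variable names are illustrative, not the pipeline's actual interface.

# Sketch: ratio of recovered to input total flux, with 1-sigma bounds taken from
# the 15.8 / 84.2 percentiles of the bootstrapped realizations, as in the text.
import numpy as np

def flux_conservation_ratio(input_flux, realizations):
    # input_flux: 2D array from the CIAO pre-processing step.
    # realizations: (N, ny, nx) stack of SAUNAS/LIRA bootstrap outputs.
    ratios = realizations.sum(axis=(1, 2)) / input_flux.sum()
    lo, med, hi = np.percentile(ratios, [15.8, 50.0, 84.2])
    return med, med - lo, hi - med

# A median ratio of ~0.95 would correspond to the ~5% flux loss reported above.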
To provide context, the results of LIRA are compared with those from CIAO/arestore15. The results are displayed in Fig. 10 (point source) and Fig. 11 (circular extended source) and detailed in Appendices A and B. To quantify the quality of the different deconvolution methods, radial surface brightness profiles of the truth (non convolved) model, the convolved simulated observations, and the resulting deconvolved maps are constructed. The profiles show that arestore tends to oversubtract the PSF, generating regions of negative flux around the simulated source. In the point source case scenario, arestore oversubtracts the background by more than 5 \u00d7 10\u22128 s\u22121 cm\u22122 px\u22121, while LIRA recovers the background level with five times less residuals. The superiority of LIRA over arestore to recover diffuse structures is even more obvious in the extended source scenario (Fig. 11): arestore shows a clear oversubtraction ringlike region around the source, dipping the background level to 10\u22127.8 s\u22121 cm\u22122 px\u22121 as compared to the real (truth model) level of 10\u22127 s\u22121 cm\u22122 px\u22121. LIRA fits the background level significantly more faithfully, at a level of \u223c10\u22127.2 s\u22121 cm\u22122 px\u22121. We conclude that LIRA deconvolution results are better suited for the detection of diffuse X-ray emission, such as extended hot gas halos, compared to other PSF correction techniques, such as CIAO\u2019s arestore. Despite the model limitations described in Sec. 2.3.2, SAUNAS suppresses false positive extended emission detections without over-fitting the PSF, while recovering the true morphologies of X-ray hot gas distributions. Thanks to the modularity of SAUNAS, future updates in the LIRA deconvolution software will be automatically implemented in our pipeline, improving the quality of the processed frames. 3. APPLICATION TO REAL OBSERVATIONS 3.1. Sample selection We identified two astrophysical targets of interest for testing the pipeline: 1. NGC 3079, a highly inclined barred spiral galaxy with a prominent Fermi bubble (Hodges-Kluck et al. 2020, the primary benchmarking target, see Sec. 3.2). 2. UGC 5101, an ultra-luminous IR galaxy that is undergoing a galactic merger (Sanders et al. 1988; Imanishi et al. 2001, the secondary benchmarking target, see Sec. 3.3). 15 arestore: https://cxc.cfa.harvard.edu/ciao/ahelp/arestore. html 11 The targets used to demonstrate SAUNAS capabilities were selected because they were known apriori to have extended soft X-ray emission detected by telescopes other than Chandra (NGC 3079), and the characterization of the extended emission was well-documented with a detailed methodology that could be replicated in the published research. Insisting that the data come from a different platform provides a truth model independent of systematic effects inherently associated with Chandra. Finally, these specific targets were selected in order to test SAUNAS against simple and complex emission structures associated with the different morphologies (a disk galaxy and an interacting system). 3.2. NGC 3079 Large-scale bipolar winds, Fermi and radio bubbles, are examples of extended structures observed around the center of the Milky Way in multi-wavelength observations, including radio (MeerKAT, S-PASS), microwave (WMAP), mid-infrared (MSX ), UV (XMM), X-rays (Chandra, XMM-Newton, ROSAT) and gamma rays (Fermi-LAT) (Sofue 1995; Bland-Hawthorn & Cohen 2003; Su et al. 2010; Finkbeiner 2004; Carretti et al. 2013; Heywood et al. 2019). 
While the presence of these structures is well-known in our own galaxy, Li et al. (2019) reported the first non-thermal hard X-ray detection of a Fermi bubble in an external galaxy, NGC 3079 (\u03b1 = 150.491\u25e6, \u03b4 = +55.680\u25e6, D = 18.68 \u00b1 1.32 Mpc, 11.04 arcsec kpc\u22121 Springob et al. 2005), using Chandra observations. Further works in X-ray and UV using XMM-Newton and GALEX revealed a 30 kpc long Xray Galactic Wind Cone in NGC 3079 (up to 60 kpc in FUV, Hodges-Kluck et al. 2020), potentially associated with material that has been shocked by Type II supernovae. The length of the X-ray wind cone of NGC 3079 (R \u223c 3 arcmin, 16.3 kpc) contrasts with that of the bubble found by Li et al. (2019) using Chandra observations (R \u22720.75 arcmin, 4.1 kpc). Hodges-Kluck et al. (2020) argued that the sensitivity of the longest Chandra observations in the soft X-ray band (E < 1 keV) is affected by the molecular contaminant buildup on the detector window, and as a consequence, these Chandra/ACIS observations were only used for point source identification on NGC 3079 and subsequent masking for XMM-Newton. Additionally, the available Chandra observations were much shallower (124.2 ks, with only 26.6 ks of usable exposure time due to contamination) than those of XMM (300.6 ks). Despite Fig. 6 in Hodges-Kluck et al. (2020) showing signs of faint extended emission in the Chandra/ACIS datasets, the authors did not attempt to characterize it. Because ancillary X-ray observations from XMM-Newton are available for this object, NGC 3079 is an ideal case for benchmarking the low surface brightness recovery capabilities of the SAUNAS pipeline. To detect the X-ray galactic wind in NGC 3079, the same bandpass (0.3\u20132.0 keV) as in Hodges-Kluck et al. (2020) is used. The available Chandra/ACIS observations of NGC 3079 are detailed in Table 2. Each visit was reprocessed with independent PSF deconvolution, and then the visits were combined for Voronoi binning. Observations 19307 and 20947 were processed but discarded due to the presence of very large-scale gradients and unusually high background levels in the detectors where the main emission from NGC 3079 is located. After processing the remaining observations (2038 and 7851) with SAUNAS, extended emission observed by Chandra is compared to the results from XMM-Newton. The PSFs of the 2038 and 7851 observations and their unprocessed events are available in Figs. 16 and 18 in Appendices C and D respectively. Following the results from Fig. 2 in Hodges-Kluck et al. (2020), four angular cone regions display diffuse emission: north-east (\u03b8 = 40\u25e6), south-east (\u03b8 = 110\u25e6), south-west (\u03b8 = \u2212140\u25e6), and north-west (\u03b8 = \u221260\u25e6), (\u03b8 is measured counter-clockwise, north corresponds to 0\u25e6, see Fig. 5). Mimicking the methodology in the original article, an amplitude of \u00b120\u25e6is set for all the cones around their central axis. Surface brightness profiles are generated from the reprocessed Chandra observations, providing a direct comparison with previous results. The results show that the extended X-ray wind emission is detectable using Chandra observations, up to a limit of Rlim \u223c40 kpc from the center of NGC 3079 (Rlim = 39.9+4.5 \u22125.1 kpc on average, extending up to Rlim = 37.9+4.1 \u22124.4 kpc in the North-East filament) at a confidence level of 95% (2\u03c3). The filament in the south-west of the galaxy is shortest at R \u223c16\u201320 kpc. 
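For illustration, a hedged sketch of the cone-profile measurement described above (average surface brightness in radial bins inside a +/-20 degree sector, with the detection limit taken as the last radius above the 2-sigma noise level). The position-angle convention, bin width, and array names are assumptions rather than the pipeline's actual implementation.

# Sketch of an angular-cone surface brightness profile and its 2-sigma radial limit.
import numpy as np

def cone_profile(sb_map, noise_map, center, pa_deg, half_width_deg=20.0, rbin_px=10.0):
    # sb_map / noise_map: SAUNAS surface brightness and noise-level maps (same shape).
    # center: (x, y) pixel position of the galaxy; pa_deg: cone axis, 0 = +y,
    # counter-clockwise positive (a schematic stand-in for the on-sky convention).
    ny, nx = sb_map.shape
    y, x = np.indices((ny, nx))
    dx, dy = x - center[0], y - center[1]
    r = np.hypot(dx, dy)
    pa = np.degrees(np.arctan2(-dx, dy))            # angle measured from the +y axis
    dpa = (pa - pa_deg + 180.0) % 360.0 - 180.0     # wrapped offset from the cone axis
    in_cone = np.abs(dpa) <= half_width_deg

    edges = np.arange(0.0, r[in_cone].max() + rbin_px, rbin_px)
    prof, sig = [], []
    for r0, r1 in zip(edges[:-1], edges[1:]):
        sel = in_cone & (r >= r0) & (r < r1)
        prof.append(np.nanmean(sb_map[sel]) if sel.any() else np.nan)
        sig.append(np.nanmean(noise_map[sel]) if sel.any() else np.nan)
    prof, sig = np.array(prof), np.array(sig)

    detected = prof > 2.0 * sig                     # 2-sigma detection criterion, as in the text
    r_lim = edges[1:][detected].max() if detected.any() else 0.0
    return edges[1:], prof, r_lim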
Interestingly, the XMM observations reveal a slightly larger extent in the X-ray emission on the west side (40 kpc) compared to the east side (30\u201335 kpc) according to Hodges-Kluck et al. (2020).16. The average limiting surface brightness (95% confidence level) is \u00b5 = 1.66+0.5 \u22120.5 \u00d710\u221210 s\u22121 cm\u22122 arcsec\u22122. Limiting surface brightness reaches its lowest limit when combining all the filaments, suggesting that the observations are limited by noise and not by systematic effects (if dominated by systematic gradients, a lower SNR would result from combining all the regions). 16 Note that the authors do not specify the details of their methodology for measuring the radial limits in their X-ray observations, but rather infer the dimensions of the X-ray filaments by visual inspection of their Fig. 1b. In this work, we adopt a 95% confidence level (p = 0.05) to claim statistical significance. 12 a) b) c) Figure 5. Extended X-ray wind cones in NGC 3079, recovered in the Chandra/ACIS observations using SAUNAS. a) Broad-band (Chandra: 0.3\u20132.0 keV) surface brightness profiles of the four filaments identified by Hodges-Kluck et al. (2020) using XMMNewton and GALEX observations. Top to bottom: All filaments, north-east, south-east, south-west, and north-west. Radial detection limits are in the panels (95% confidence level). b) SAUNAS processed image showing 2\u03c3 contours (black, shown in white in panel c)) with filament sectors in yellow. The radial detection limit indicated in panel a) for each of the four filaments is shown as solid yellow sectors, while that of \u201call filaments\u201d is shown as dashed yellow, following the methodology found in Hodges-Kluck et al. (2020). The thick magenta circle in b) shows the maximum detection limit found with XMM-Newton, compatible with our results. c) Comparison of the optical morphology (Pan-STARRS gri) of NGC 3079 with the extended X-ray emission. 13 Obs ID Instrument Exposure time Mode Count rate Start date (1) (2) (3) (4) (5) (6) [ks] [s\u22121] NGC 3079 2038 ACIS-S 26.58 FAINT 10.27 2001-03-07 7851 ACIS-S 5.00 FAINT 14.88 2006-12-27 19307 ACIS-S 53.16 FAINT 6.14 2018-01-30 20947 ACIS-S 44.48 FAINT 6.10 2018-02-01 UGC 5101 2033 ACIS-S 49.32 FAINT 9.53 2001-05-31 Table 2. Chandra/ACIS observations available within 10 arcmins of NGC 3079 and UGC 5101, retrieved from the Chandra Data Archive, as of February 2024 . Columns: 1) Observation ID, 2) Chandra instrument, 3) total exposure time per observation, 4) observation mode, 5) average count rate, 6) exposure start date. 3.3. UGC 5101 UGC 5101 (z = 0.039, D = 161.8 Mpc, 0.784 kpc arcsec\u22121, Rothberg & Joseph 2006) is an irregular galaxy that is undergoing a potential major merger. This object has previously been identified as a Seyfert 1.5 (Sanders et al. 1988), a LINER (lowionization nuclear emission-line region) galaxy (Veilleux et al. 1995), and a Seyfert 2 galaxy (Yuan et al. 2010). UGC 5101 has a very extended optical tidal tail (\u223c 40 kpc) to the west from the nucleus, with a second semicircular tidal tail that surrounds the bright core of the galaxy with a radius of 17 kpc (Surace et al. 2000). Radio, (Lonsdale et al. 2003), IR (Genzel et al. 1998; Soifer et al. 2000; Armus et al. 2007; Imanishi et al. 2001), and X-ray observations with Chandra and XMMNewton (Ptak et al. 2003; Gonz\u00b4 alez-Mart\u00b4 \u0131n et al. 2009) suggest the presence of a heavily dust-obscured AGN in the nucleus of this galaxy. 
The total exposure time and other information relevant to the Chandra/ACIS observations of UGC 5101 are provided in Table 2. The diffuse X-ray emission of UGC 5101 has been previously analyzed in the literature. Huo et al. (2004) found evidence for an inner hot gas halo of 8.7 kpc (10.4\u2032\u2032) and an outer halo of 14.3 kpc (17.0\u2032\u2032). Grimes et al. (2005) found that 95% of the 0.3\u20131.0 keV emission is enclosed in the inner 8.75 kpc galactocentric radius (10.5\u2032\u2032). Smith et al. (2018, 2019) analyzed the Chandra/ACIS observations, finding that the 0.3\u20131.0 keV emission has a size of 24.0\u2032\u2032 \u00d7 14.2\u2032\u2032 (\u223c19.1 \u00d7 11.3 kpc, position angle of 90\u25e6), and a total X-ray luminosity of log LX = 41.6 erg s\u22121. Given these known robust detections, we employ SAUNAS in the characterization of the low surface brightness emission from UGC 5101. Three bandpasses are used, to ensure a direct comparison to the analyses by Smith et al. (2019): soft (0.3\u20131.0 keV), medium (1.0\u2013 2.0 keV), and hard (2.0\u20138.0 keV). The flux conservation ratio after PSF deconvolution in this exposure is 96.0 \u00b1 0.02% in the three bands. The processed X-ray emission maps are presented in Fig. 6, in comparison with the optical/NIR observations from HST, as well as ancillary radio observations for reference. The PSFs and unprocessed events of the UGC 5101 observations in the three bands analyzed are available in Figs. 17 and 19 in Appendices C and D, respectively. The results are summarized in Fig. 6. The analysis of the Chandra/ACIS observations with SAUNAS reveal that even after PSF deconvolution, the soft Xray emission of UGC 5101 still shows extended emission around its core. The 0.3\u20131.0 and 1.0\u20132.0 keV bands present X-ray emission with an elongated morphology, with a characteristic bright plume-like structure in the core, oriented in the north-south direction (\u00b5soft = 1\u20132\u00d710\u22128 s\u22121 cm\u22122 arcsec\u22122), very similar to the results of Smith et al. (2018). In contrast, the hard band only shows a bright core in the center, compatible with an unresolved source. In the soft band, the diffuse X-ray emission is detectable down to levels of \u00b5soft = 1.23+1.02 \u22120.66\u00d710\u22129 s\u22121 cm\u22122 arcsec\u22122 (2\u03c3), compared to the medium band level of \u00b5medium = 1.25+1.38 \u22120.73\u00d710\u22129 s\u22121 cm\u22122 arcsec\u22122 (2\u03c3). Both soft and medium band emissions are centered over the main core of UGC 5101, showing the same orientation as observed by Smith et al. (2019). The soft band emission extends up to 25 arcsec (20 kpc) to the north and 17 arcsec (13.5 kpc) to the south (3\u03c3). The spatial distribution of X-ray emission around UGC 5101 is generally comparable to that detected in previous works (Smith et al. 2019). However, at approximately 40\u201360 arcsec radius to the north-east (\u03b1, \u03b4 = 143.980\u25e6, +61.363\u25e6), the SAUNAS map reveals a diffuse bridge connecting with UGC 5101, at a \u223c2\u03c3 level (\u00b5soft \u223c6.2 \u00d7 10\u221210 s\u22121 cm\u22122 arcsec\u22122 in the soft band). For clarity, we will refer to this extended emission as X1. Fig. 7 displays surface brightness profile analysis results and associated comparisons with X1. 
The central surface brightness of X1 is \u00b5soft = 4.2+1.5 \u22121.3 \u00d7 10\u22129 s\u22121 cm\u22122 arcsec\u22122 in the soft band and \u00b5medium = 1.54+0.76 \u22120.66 \u00d7 10\u22129 s\u22121 cm\u22122 arcsec\u22122 in the medium band. The emission of X1 is detectable at a 14 HST/ACS F435W F435W + F814W F814W Chandra/ACIS UGC 5101 X1 0.3-1.0 keV GMRT 150 MHz Chandra/ACIS 1.0-2.0 keV Chandra/ACIS 2.0-8.0 keV Figure 6. Diffuse X-ray emission of UGC 5101 as detected with SAUNAS/Chandra in the 0.3\u20131.0 keV band (top), 1.0\u20132.0 keV band (central), and 2.0\u20138.0 keV band (bottom). Left: HST/ACS color image (red: F814W, green: F435W + F814W, blue: F435W ). Upper Right: SAUNAS map of the diffuse X-ray emission, corrected for PSF effect, point-sources, and background. Solid contours represent 3\u03c3 detections and dotted contours the 2\u03c3 detection level of X-ray emission, represented in white (left panel) and black (right panel) for contrast. Solid red contours show GMRT 150 MHz data. White dashed ellipse represents the previous detection limits reported by Smith et al. (2019) of UGC 5101 in the same band. 15 3\u03c3 confidence level with a comparable angular area to UGC 5101, but with a maximum surface brightness 20\u2013 30 times lower than the main object (see Fig. 7). Fig. 27 in Smith et al. (2019) shows a hint of what might be emission jutting to the North-East of UGC 5101 where we see X1, but at a considerably lower detectability. The X1 feature has not been discussed previously in the literature as part of the UGC 5101 system, but rather as a potential higher-z galaxy cluster (Clerc et al. 2012; Koulouridis et al. 2021) in need of spectroscopic confirmation. Observations of the Giant Metrewave Radio Telescope (GMRT) 150 MHz all-sky radio survey17 (Intema et al. 2017, see bottom left panel in Fig. 6) confirm the detection of an adjacent source centered over the recovered X-ray emission, with a surface brightness of \u00b5 = 10\u22124 Jy arcsec\u22122. The GMRT flux maps are shown as contours in Fig. 6, revealing a peak of radio emission over the center of X1 in addition to UGC 5101. GALEX UV observations provide a near-ultraviolet (NUV) flux of 5.14\u00b10.15\u00d710\u22126 Jy (Seibert et al. 2012) but only upper limits in the far-ultraviolet (FUV) band (9.8 \u00b7 10\u22126 Jy). Recent JWST observations (GO 1717, PI: Vivian U., MIRI) of UGC 5101 were inspected for this work, but they suffer from extreme saturation of the bright core of the galaxy, and the outer X-ray emitting region lies outside the footprint, so they were discarded for this study. While investigating the nature of this extended X-ray emission is beyond the scope of this paper focused on the presentation of the SAUNAS pipeline, we briefly discuss the main hypotheses (hot gas plume or high-z galaxy cluster) in Sec. 4. 4. DISCUSSION 4.1. Limitations We have demonstrated the SAUNAS methodology to be successful in recovering dim, extended surface brightness X-ray features under low signal-to-noise ratio conditions through performance tests using both synthetic (Section 2.3) and real (Section 3) X-ray datasets. There are, however, several limitations of SAUNAS in its current form that will be addressed in future versions of the pipeline. Among them, SAUNAS does not attempt to provide a quantitative separation between extended sources, such as a segmentation map. Deblending of extended X-ray sources is one of the main objectives of a complementary code, EXSdetect (Liu et al. 
2013), using a friend-of-friends algorithm. Other specialized 17 TIFR GMRT Sky Survey (TGSS) Archive: https://vo.astron. nl/tgssadr/q fits/imgs/form pipelines for X-ray observations, such as CADET, based on machine-learning algorithms, allow for the identification of specific source morphologies, such as X-ray cavities (Pl\u02c7 sek et al. 2024). The potential combination of SAUNAS for generating low surface brightness detection maps with existing morphological identification and segmentation software will be explored in the future. Another limitation of the SAUNAS pipeline is the precision of the PSF. The generation of the Chandra/ACIS PSFs depends on multiple factors, including, but not limited to, the position of the source on the detector, the SED of the source, or the specific parameters fed into the MARX simulation software (like the aspect blur). For example, LIRA deconvolution software only accepts one PSF for the whole image, and as a consequence, the shape of sources at high distances from the center of the image might be inaccurate. This phenomena can cause residuals if observations present bright sources at high angular distances from the center of the source, since the deconvolution will be based on the PSF at the center of the observation, but not at the location of the secondary contaminating source. As an attempt to quantify this effect, we estimate in Fig. 8 the variation of the PSF size (R90%, radius that contains 90% of the flux of a point source) vs. angular separation to the source using CIAO psfsize srcs18, based on the Chandra/ACIS observations on UGC 5101. The results show that the PSF increases a factor of \u00d72 in \u223c2 arcmin (\u00d710 in \u223c10 arcmin). In our science cases, no bright object was observed in the environment of the main sources (NGC 3079, UGC 5101), so the main contributors to the scattered light are the sources for which the PSF was calculated. However, observers must be wary of strong residual PSF wings from nearby sources at \u223c2 arcmin and longer distances. While a complete analysis of the uncertainties of the PSF in Chandra is out of the scope of the current paper, we refer to the Appendix in Ma et al. (2023) for a review in the field. 4.2. NGC 3079 The analysis of the Chandra/ACIS observations in the field of NGC 3079 revealed signs of X-ray wind out to galactocentric distances R \u223c30 kpc, compatible with previous observations using XMM\u2013Newton (Hodges-Kluck et al. 2020). While XMM\u2013Newton is able to trace the extended X-ray emission out to larger distances (\u223c40 kpc) in some directions, some considerations must be made in order to compare XMM\u2013Newton results with the benchmark study provided here: 18 CIAO/psfsize srcs: https://cxc.cfa.harvard.edu/ciao/ahelp/ psfsize srcs.html 16 Figure 7. Surface brightness profiles of the diffuse X-ray emission of UGC 5101 and the extended diffuse north-east source (X1) detected with SAUNAS/Chandra in the 0.3\u20131.0 and 1.0\u20132.0 keV bands. Radially averaged surface brightness profile (blue upward triangles: 0.3\u20131.0 keV band, purple downward triangles: 1.0\u20132.0 keV band). Shaded areas represent the 1\u03c3 and 2\u03c3 error bars. Solid blue and dashed purple vertical lines represent the 2\u03c3 detection limits for the 0.3\u20131.0 keV and the 1.0\u20132.0 keV bands. Blue and purple stars show the average surface brightness of the north-east extended emission X1, represented at the measured galactocentric distance from UGC 5101. Figure 8. 
Variation of the Chandra/ACIS PSF size as a function of the angular separation to the center of the FOV. Vertical axis: Radius enclosing 90% of the flux from the PSF at 1.0 keV, based on the observations of UGC 5101. Horizontal axis: Angular separation to the center of the source, approximately the center optical axis. The horizontal dotted lines mark the PSF sizes that correspond to \u00d72, \u00d75, \u00d710, and \u00d720 the PSF size at its center (\u00d71). 1. XMM\u2013Newton observations of NGC 3079 combine an \u223c11 times longer exposure time (300.6 ks) than the usable time in Chandra/ACIS (26.6 ks) observations. 2. XMM-Newton has a larger effective area (4650 cm2 at 1 keV) than Chandra (555 cm2), at the expense of a lower spatial resolution19 (XMM-Newton/FWHM = 6 arcsec vs. Chandra/FWHM = 0.2 arcsec). While the aperture is smaller, proper masking of point sources improves detectability of dim structures by reducing the background noise. 3. The analysis of the X-ray emission by HodgesKluck et al. (2020) is based on the inspection of the quadrant stacked images with a certain signal and radial threshold (see their Fig. 4, central panel). The methodology they use to calculate the limiting radius of the diffuse X-ray emission is not clearly stated in their analysis, making a direct and accurate comparison of results difficult. Despite the differences of the detection methods, we conclude that SAUNAS is able to recover extended, low surface brightness X-ray emission using Chandra/ACIS X-ray observations of NGC 3079, in excellent agreement with the deeper exposure taken by XMM\u2013Newton. 4.3. UGC 5101 19 https://xmm-tools.cosmos.esa.int/external/xmm user support/documentation/uhb/xmmcomp.html 17 Figure 9. Hubble Space Telescope ACS imaging over the diffuse extended emission X1 found adjacent to UGC 5101. Pseudo-RGB color combination: Blue: F435W, Green: F435W + F814W. Red: F814W. Grey contours represent the (3, 5, 7, and 10) \u03c3 detection levels obtained in the 0.3\u2013 1.0 keV band from Chandra/ACIS observations, processed with SAUNAS. Notice the merger shell structure of UGC 5101 at the bottom right corner. Section 3.3 described evidence for extended low surface brightness emission (X1, \u00b5soft = 4.2+1.5 \u22121.3 \u00d7 10\u22129 s\u22121 cm\u22122 arcsec\u22122, 0.3\u20131.0 keV) located in the north-east of the UGC 5101 merging galaxy. X1 has been previously detected in X-ray by Smith et al. (2019) but its emission was not discussed nor treated as part of UGC 5101\u2019s outskirts. Other works (Clerc et al. 2012; Koulouridis et al. 2021) tentatively classified X1 as a potential background galaxy cluster, but this feature remained unconfirmed as spectroscopic observations are unavailable. X1 is detected also in GMRT 150 Mhz observations as a secondary source adjacent to UGC 5101, confirming the existence of a feature at this location. Two main hypotheses regarding the nature of X1 are: 1. X1 is part of the extended X-ray emitting envelope of UGC 5101. 2. X1 is a background source, potentially the extended envelope of a higher-z object, such as a massive early-type galaxy or a cluster. Although the X-ray emission in the soft and medium bands of X1 is adjacent to that of UGC 5101, and both objects have a dominant emission in the soft band compared to the medium and hard (see Figs. 6 and 7, the emission could still be part of a hot gas halo at higher-z. In fact, the center of the Chandra/ACIS Xray emission overlaps remarkably well with that of a background galaxy. Fig. 
9 shows the HST/ACS imaging (bands) centered over X1, with the soft-band X-ray emission contours overlapped for reference. The peak of X-ray emission is coincident with the position of a background galaxy (WISE J093555.43+612148.0). Unfortunately, WISE J093555.43+612148.0 does not have spectroscopic or photometric redshifts available. While resolving the nature of X1 is beyond the scope of this paper, we conclude that the test performed with the Chandra/ACIS observations of UGC 5101 using SAUNAS demonstrates the pipeline\u2019s capabilities in successfully producing adaptively smoothed, PSFdeconvolved X-ray images in different bands. The image reduction process presented here allows for a better calibration of the background to recover details at both high resolution and surface brightness (inner core structure of the merging galaxy) as well as extended ultra-low surface brightness regions, such as the previously unknown extended emission around UGC 5101. 5. CONCLUSIONS In this paper we have presented SAUNAS: a pipeline to detect extended, low surface brightness structures on Chandra X-ray Observations. SAUNAS automatically queries the Chandra Archive, reduces the observations through the CIAO pipeline, generates PSF models and deconvolves the images, identifing and masking point sources, and generating adaptative smoothed surface brightness and detection SNR maps for the sources in the final mosaics. We have demonstrated through tests on simulated data and comparisons to published results that the SAUNAS pipeline distinguishes itself from other existing X-ray pipelines by meeting the following main objectives: 1. Generate X-ray detection maps for extended sources in a consistent, statistically reproducible way. 2. Provide a modular framework for reduction of Chandra/ACIS observations focusing on the detection of faint extended sources, simplifying the access to X-ray archival observations for multiwavelength studies. Our approach to meeting these objectives is to assess the statistical probability that signal in low-count areas is real. This strategy can both produce detections of previously-overlooked diffuse emission as well as minimize false positive detections of extended hot gas emission. In Sec. 3, we compare SAUNAS-processed archival Chandra/ACIS data to published results. This section 18 demonstrates that the proposed methodology succeeds in recovering the extended emission detected in a selection of local Universe targets. While the CIAO pipeline provides a canonical and highly efficient procedure to reduce the Chandra observations, the secondary analysis of the resulting event files is usually performed in an independent way by the observers. Such a situation results in two suboptimal consequences: 1) Most X-ray studies are focused on single objects, or very small samples (three or four objects), and 2) most studies develop their own procedure to correct the PSF effects (if considered), to generate smoothed maps, and to determine the significance of emission over the background. Planned future work includes an analysis of the extended emission of nearby galaxies using Chandra/ACIS archival data, and releasing the tools to the astronomical community. In this first article, we made the processed maps available20 for the community through the Zenodo open repository. A benefit of the automated functionality provided by this tool is its provision of straightforward access to high-level archival Chandra products and facilitation of their use in multi-wavelength studies. 
In future works of this series (Borlaff et al. in prep.) we will explore the X-ray emission of a sample of targets using the SAUNAS pipeline, focusing on the evolution of lenticular galaxies based on Chandra/ACIS data in combination with Hubble and Spitzer observations. The serendipitous discovery presented in this work in one of the galaxies studied; UGC 5101, an on-going merger galaxy, demonstrate that the combination of multi-wavelength legacy archives, such as those of Chandra, GMRT, and Hubble, may already hold the information to disentangle the impact of the different evolutionary drivers in galaxies. The authors thank to the anonymous referee for the provided input that helped to improve this publication significantly. The SAUNAS X-ray surface brightness and signal-to-noise maps of NGC 3079 and UGC 5101 are publicly available in Zenodo in FITS format: DOI: 10.5281/zenodo.10892485. The list of Chandra datasets, obtained by the Chandra X-ray Observatory, are contained in DOI: 10.25574/cdc.225. A.B. acknowledges the tireless support from Nicholas Lee, Tara Gokas, Kenny Glotfelty, Catherine Cranmer, and the rest of the CXC Helpdesk team. Without your dedication, this project would have not been possible. This research has made use of data obtained from the Chandra Data Archive and the Chandra Source Catalog, and software provided by the Chandra X-ray Center (CXC) in the application packages CIAO (Fruscione et al. 2006) and Sherpa (Freeman et al. 2001). A.S. acknowledge support from NASA contract to the Chandra X-ray Center NAS8-03060. The work conducted at NASA Ames Research Center was funded through NASA\u2019s NNH22ZDA001N Astrophysics Data and Analysis Program under Award 22-ADAP22-0118. This work was authored by an employee of Caltech/IPAC under Contract No. 80GSFC21R0032 with the National Aeronautics and Space Administration. This paper represents the views of the authors only and should not be interpreted as representing the views of ConstructConnect, Inc. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 Facilities: Chandra, HST Software: Matplotlib (Hunter 2007), CIAO, astropy (Astropy Collaboration et al. 2018, 2013, 2022), LIRA (Donath et al. 2022a)21, VorBin (Cappellari & Copin 2003) APPENDIX A. PSF DECONVOLUTION EFFICIENCY TEST In this section, a set of synthetic observations generated with CIAO/MARX22 are used to evaluate the reliability of the SAUNAS algorithm when applied to a simple point source. SAUNAS\u2019s ability to accurately recover diffuse emission is significantly governed by limitations imposed by LIRA, the associated deconvolution tool. SAUNAS could have instead utilized the widely-used and proven arestore tool, which can restore emission structures down to scales comparable to the Chandra/ACIS resolution (0.492\u2032\u2032). Here we benchmark these two PSF deconvolution methodologies using simulated observations of an unresolved object constructed by convolving a point source with a highly off-axis PSF 20 The SAUNAS X-ray surface brightness maps of NGC 3079 and UGC 5101 are publicly available in Zenodo: https://zenodo.org/ records/10892485. 21 pyLIRA: https://github.com/astrostat/pylira 22 Using MARX to Simulate an Existing Observation: https:// cxc.cfa.harvard.edu/ciao/threads/marx sim/ 19 from Chandra/ACIS, associated with the observations of 3C 264 (NGC 3862, \u03b1 = 176.2709\u25e6, \u03b4 = +19.6063\u25e6, Obs. ID: 514). The simulated observations processed with SAUNAS (Sec. 
2.2) are compared to the results produced by standard application of arestore. Both methods use the same number of iterations (Niter = 1000). For each method, surface brightness profiles are constructed from Voronoi binning of the deconvolved data and compared to that of the model point source. The results are shown in Fig. 10. The PSF convolved point source shows the characteristic elliptical shape of the off-axis PSF from Chandra/ACIS. The surface brightness profiles obtained from the images show that CIAO/arestore provides output images with more flux at their core than SAUNAS. However, CIAO/arestore\u2019s deconvolved image has a higher noise in the surroundings of the center (R = [0 \u221210] px) than SAUNAS, including some clear signs of oversubtraction (see the Voronoi bins at the bottom right image) around the center of the object. In addition, CIAO/arestore leave a characteristic residual at larger distances (R = [10 \u221220] px) that could easily be confused with a shell of extended X-ray emission. In contrast, SAUNAS provides a deconvolved image with less central flux but a smoother transition to the background level and without the presence of residual halos of emission or oversubtraction. We conclude that CIAO/arestore concentrates more signal into a single point source at the expense of higher noise in the resulting images when compared to the methodology utilized by SAUNAS described in Sec. 2.2. B. SAUNAS EXTENDED TEST MODELS Appendix A demonstrated that the combination of LIRA + Bootstrapping methods adopted in the SAUNAS pipeline provides a more accurate representation of the real distribution of light compared to CIAO/arestore, including avoiding arestore\u2019s PSF over-subtraction. Given that the main aim of SAUNAS is the detection of extended sources, we extend the analysis from Appendix A to SAUNAS processing of an extended source model. Figure 11 shows the result from this analysis. A simulated source with a central surface brightness of \u00b5 =10\u22123 s\u22121 px\u22121 and a background level of \u00b5 =10\u22127 s\u22121 px\u22121 is convolved with the same PSF used by the point source tests described in Appendix A. The resulting event file of convolved data is then processed by SAUNAS and deconvolved by a standard application of CIAO/arestore. A comparison of the associated surface brightness profiles provides both quantitative and qualitative assessments of the different light reconstruction methods. The top right panel of Fig. 11 shows that the methodology adopted in SAUNAS produces a result that is more closely aligned with our science-driven requirements. Proper treatment of the fainter regions surrounding objects is a critical factor for the detection of faint extended emission, such as hot gas X-ray halos around galaxies. While SAUNAS produces a well-behaved profile that smoothly transitions to the background level at large radii, CIAO/arestore manufactures an over-subtracted background region surrounding the object, similar to its treatment of point sources (Appendix A). Figures 12 and 13 show the results of the false positive / false negative quality test described in Sec. 2.3.1 for the double jet model. In Figs. 14 and 15 the equivalent results are shown for the cavity model. Each row represents different equivalent exposure times, from \u03c4exp = 5 \u00d7 107 s cm2to \u03c4exp = 5 \u00d7 104 s cm2. We refer to the caption in the figures for details. C. 
NGC 3079 AND UGC 5101 POINT SPREAD FUNCTION This section presents the PSFs generated for the NGC 3079 (see Sec. 3.2) and UGC 5101 (see Sec. 3.3), Chandra/ACIS observations. The PSFs were generated using MARX as described in Sec. 2.2. The panels in Figs. 16 and Fig. 17 show the different PSFs obtained for the three bands (0.3\u20131.0 keV, 1.0\u20132.0 kev, and 2.0\u20138.0 kev) in UGC 5101, and for the two datasets analyzed in the broadband (0.3\u20132.0 keV) for NGC 3079. D. NGC 3079 AND UGC 5101 EVENT MAPS This section presents the event maps as observed by Chandra/ACIS and processed by CIAO for the NGC 3079 (see Sec. 3.2) and UGC 5101 (see Sec. 3.3) observations. Note that the events in the panels represent the raw event counts without any SAUNAS processing, and thus they include contamination by sky background, gradients generated by the different equivalent exposure time across the field of view, and point source contamination. The panels in Fig. 18 show the events obtained in the Chandra 2038 and 7851 visits to NGC 3079 in the 0.3\u20132.0 keV broadband, and Fig. 19 show the events obtained for the three bands (0.3\u20131.0 keV, 1.0\u20132.0 kev, and 2.0\u20138.0 kev) in UGC 5101."
18
+ }
intro_8K/test_introduction_short_2405.01649v1.json ADDED
@@ -0,0 +1,16 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01649v1",
3
+ "title": "Improving Complex Reasoning over Knowledge Graph with Logic-Aware Curriculum Tuning",
4
+ "abstract": "Answering complex logical queries over incomplete knowledge graphs (KGs) is\nchallenging. Most previous works have focused on learning entity/relation\nembeddings and simulating first-order logic operators with various neural\nnetworks. However, they are bottlenecked by the inability to share world\nknowledge to improve logical reasoning, thus resulting in suboptimal\nperformance. In this paper, we propose a complex logical reasoning schema over\nknowledge graphs upon large language models (LLMs), containing a\ncurriculum-based logical-aware instruction tuning framework, named LACT.\nSpecifically, we augment the arbitrary first-order logical queries via binary\ntree decomposition, to stimulate the reasoning capability of LLMs. To address\nthe difficulty gap among different types of complex queries, we design a simple\nand flexible logic-aware curriculum learning framework. Experiments across\nwidely used datasets demonstrate that LACT has substantial improvements~(brings\nan average +5.5% MRR score) over advanced methods, achieving the new\nstate-of-the-art. Our code and model will be released at GitHub and huggingface\nsoon.",
5
+ "authors": "Tianle Xia, Liang Ding, Guojia Wan, Yibing Zhan, Bo Du, Dacheng Tao",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "cs.CL",
9
+ "cats": [
10
+ "cs.CL"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Knowledge AND Graph",
14
+ "gt": "Large-scale knowledge graphs (KGs) such as FreeBase (Bollacker et al. 2008) stores structural knowledge in a col- lection of fact triplets and are widely adopted by many domains. Unfortunately, KGs are often incomplete, leav- ing many missing triplets undiscovered. Thus, complex logical reasoning over such KGs (Hamilton et al. 2018; Ren and Leskovec 2020) is challenging and has attracted much attention in the recent years. A complex logical query can be represented with First-Order Logic (FOL) that in- cludes logical operators such as conjunction (\u2227), disjunc- tion (\u2228), negation (\u00ac), and existential quantifier (\u2203), etc. A more direct approach involves the representation of com- putation graphs as Directed Acyclic Graphs (DAGs), which can be resolved through the systematic traversal of Knowl- edge Graphs (KG). This process entails the allocation of suitable entities to intermediate variables based on their structural attributes (Dalvi and Suciu 2007). Inspired by the success of knowledge graph embedding (KGE) (Bordes et al. 2013; Bai et al. 2021), a line of re- search proposes to answer complex logical queries by learn- ing query embedding and simulating logical operators with well-designed neural networks (Chen, Hu, and Sun 2022; Zhu et al. 2022; Zhang et al. 2021; Arakelyan et al. 2020) Current research based on embeddings primarily focuses on the creation of diverse latent space geometries, such as vec- tors (Hamilton et al. 2018), boxes (Ren, Hu, and Leskovec 2019), hyperboloids (Choudhary et al. 2021), and proba- bilistic distributions (Ren, Hu, and Leskovec 2019), to ef- fectively capture the semantic position and logical coverage of knowledge graph entities. However, these approaches are limited in their perfor- mance due to the following. (1) Limited information: The information contained in a knowledge graph is usually in- complete and limited. When only the information from the knowledge graph can be used, it is difficult to answer some complex reasoning that lacks relevant information. (2) High complexity of logical queries: The intricacies of world knowledge determine the complexity of reasoning in prac- tical applications, which determines that it is difficult to model the relationship of world knowledge through simple space geometries figures that may lose potentially complex relationship information (Choudhary et al. 2021), thus lim- iting the effect of complex logical reasoning. (3) General- izability: KGE method for a particular KG can not general- ize to other KGs which limits the applicability of these ap- proaches in real-world scenarios where KGs can vary widely in terms of their structure and content. Recently, large language models (LLMs) (Achiam et al. 2023; Touvron et al. 2023a; Zeng et al. 2022) show outper- forming capabilities to a wide range of tasks (Zhao et al. 2023b; Ouyang et al. 2022; Zhong et al. 2023b; Peng et al. 2023; Lu et al. 2023). (Choudhary and Reddy 2023; Liu et al. 2024a) construct prompt templates and apply LLMs as text-generators to answer complex queries. How- ever, LLM without fine-tuning suffers from hallucination problem (Zhang et al. 2023c), especially when faced with such a knowledge-intensive task that generates answers on an incomplete KG rather than simple retrieval. Besides, pre- vious tasks relied on manual classification of queries to im- prove performance, which is unrealistic in large-scale practi- cal applications and also limits the types of queries. 
Finally, previous methods generally decompose the problem into sub-problems, which greatly increases the cost of reasoning, especially considering that previous methods are generally based on closed-source models such as GPT. Therefore, compared to pure prompt engineering, we prefer to fine-tune our own model to solve the above problems. (Figure 1: Schematic illustration. a) Answering a logical query over a KG with LACT. b) The framework of Logic-Aware Curriculum Tuning over LLaMA: a binary tree decomposition strategy (see Section 4.2) constructs a logic-rich FT corpus and a curriculum learning strategy (see Section 4.3) fine-tunes a base LLM. c) Performing reasoning using well-designed prompts.) In this paper, we propose Logic-Aware Curriculum Tuning (LACT), a novel fine-tuning framework for answering complex logical queries, which stimulates the ability of LLMs to perform complex reasoning on knowledge graphs. We propose a strategy to incorporate the knowledge contained in the KGs into our training corpus, to activate the corresponding knowledge of the LLMs and to supplement the missing relevant knowledge of the LLMs during the fine-tuning process. At the same time, we have shown that data augmentation by binary tree decomposition can elicit the corresponding capabilities of LLMs and effectively improve their reasoning performance. 
Lastly, we show that curriculum learning (Bengio et al. 2009) can effectively smooth the difficulty differences between different types of queries and greatly improve the results on difficult queries. In summary, our contribution is three-fold: \u2022 We propose a logic-aware curriculum fine-tuning (LACT) paradigm for complex logical reasoning over KGs. \u2022 LACT achieves state-of-the-art performance beyond embedding-based and PLM-based methods, using only a 7B LLaMA-2 model. \u2022 Through extensive experimentation, we found that a fine-tuning corpus constructed with rigorous logical context over KGs, combined with curriculum learning, can significantly enhance LLM logical reasoning ability.",
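To make the binary tree decomposition referenced here (and detailed in Section 4.2 of the main content below) concrete, a hedged Python sketch follows. It is not the authors' code: the node representation, the placeholder naming ([PP1], [PP2], ...), and the post-order traversal (which, like the paper's reverse level traversal, emits every child subquery before its parent) are illustrative choices.

# Sketch: decompose a query computation tree into a chain of simple sub-questions.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    op: str                              # "entity", "projection", "intersection", "union", "negation"
    relation: Optional[str] = None
    entity: Optional[str] = None
    children: List["Node"] = field(default_factory=list)

def decompose(root):
    steps, counter = [], [0]

    def visit(node):
        if node.op == "entity":
            return node.entity
        child_vars = [visit(c) for c in node.children]   # children first, then the parent
        counter[0] += 1
        var = "[PP%d]" % counter[0]
        if node.op == "projection":
            steps.append("Which entities are connected to %s by relation %s? "
                         "The entity set of the answer is represented by %s."
                         % (child_vars[0], node.relation, var))
        elif node.op in ("intersection", "union"):
            steps.append("What are the entities in the %s of entity sets %s? "
                         "The entity set of the answer is represented by %s."
                         % (node.op, " and ".join(child_vars), var))
        else:  # negation
            steps.append("What are the entities NOT in entity set %s? "
                         "The entity set of the answer is represented by %s."
                         % (child_vars[0], var))
        return var

    visit(root)
    return steps

# Example: the intersection part of the Figure 1 query (entities affiliated with
# Major League Soccer that are also owned by Beckham).
query = Node("intersection", children=[
    Node("projection", relation="affiliated", children=[Node("entity", entity="Major League Soccer")]),
    Node("projection", relation="owner", children=[Node("entity", entity="Beckham")]),
])
for step in decompose(query):
    print(step)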
15
+ "main_content": "Miller, and Liang 2015; Hamilton et al. 2018). Most of current approaches concentrated on learning meaningful query embeddings (Chen, Hu, and Sun 2022; Zhu et al. 2022; Zhang et al. 2021; Arakelyan et al. 2020; Wang et al. 2023b). Neuralizing logical operators through a specific embedding space, thereby embedding FOL queries into a vector space(Hamilton et al. 2018; Ren, Hu, and Leskovec 2019; Choudhary et al. 2021), or probabilistic distribution(Ren, Hu, and Leskovec 2019), and predict answers by locating nearest neighbours to answer set representation. Additionally, approaches such as CQD(Arakelyan et al. 2020) have focused on improving the performance of complex reasoning tasks through the answer composition of simple intermediate queries, and QTO (Bai et al. 2023) proposes query computation tree optimization that can efficiently find the exact optimal solutions. However, embedding-based methods usually lack interpretability as there is no explicit mapping between the embedding and the set of entities, and this limits their generalization ability to more complicated query structures. Moreover, PLM-based methods consider complex logical reasoning as text-based tasks by modeling pre-trained language models. The short textual descriptions are organized as an input sequence and encoded by the PLMs (Wang et al. 2023c; Xu et al. 2023; Wang et al. 2023c). However, limited by the performance limitations of the base model, the PLM method has always been deficient in understanding world knowledge and reasoning capabilities, limiting its performance in complex reasoning. 2.2 LLMs for KG Reasoning In recent years, substantial advancements have been witnessed in the domain of LLMs (Achiam et al. 2023; Touvron et al. 2023b; Peng et al. 2023; Zhong et al. 2023b). Among these, instruction tuning (IT) (Ouyang et al. 2022) and the alignment of the model (Wang et al. 2023e) with human preferences stand out. Within the realm of LLM, the integration of LLMs with Knowledge Graphs (KG) (Pan et al. 2024b; Wang et al. 2023a; Luo et al. 2024) constitutes a prominent and consequential research avenue. Leveraging its potent generative capabilities, LLMs prove invaluable in addressing Knowledge Graph-related tasks, including but not limited to Knowledge Graph Completion (KGC) (Zhu et al. 2023; Zhang et al. 2023b), entity alignment (Zhang et al. 2023a), Knowledge Graph Question Answering (KGQA) (Luo et al. 2024), and others (Luo et al. 2023). Consequently, the synergy between Knowledge Graphs for LLMs (KG4LLM) and LLMs for Knowledge Graphs (LLM4KG) emerges as an essential focal point, bearing significance in advancing the collective capabilities of both entities. We focus on applying LLMs in the Complex Logical Reasoning task, which has not been carefully studied yet. (Choudhary and Reddy 2023) made the initial attempt by prompt engineer but it lacks in-depth research and simply uses LLMs as text generators. 2.3 Curriculum Learning The idea of training a neural network in an easy-to-difficult fashion can be traced back to (Elman 1993; Krueger and Dayan 2009).Based on these works, Curriculum Learning is first proposed in (Bengio et al. 2009). They designed several toy experiments to demonstrate the benefits of curriculum strategy both in image classification and language modeling. When focusing on the field of NLP, by experimenting with several heuristics, (Sachan and Xing 2016; Xu et al. 2020) migrated the success of CL to NLU tasks. (Ding et al. 2021; Zhou et al. 
2021) improved machine translation modeling by carefully designing different curricula. Recently, with the rise of LLMs, (Liu et al. 2024b) discovered the huge potential of CL in in-context learning, while (Wang et al. 2023d) focus on the improvement of CL for LLM pre-training. In contrast, we are committed to exploring the potential of CL in fine-tuning LLMs. 3 Preliminary 3.1 Knowledge Graph In our work, a knowledge graph is G = (E, R, T ), where E and R are the sets of entities and relations, respectively. Without loss of generality, a KG can be organized as a set of triplets {T = (vs, r, vt) | vs, vt \u2208 E, r \u2208 R}, where vs/vt denotes the head/tail entity. 3.2 Complex logical query A complex logical query is used for retrieving and manipulating data stored in knowledge graphs and is grounded in a subset of FOL. The process of answering a complex logical query involves matching suitable results using the composition of queries: q[v?] = \u2203v : q1 \u2227 q2 \u2227 \u00b7\u00b7\u00b7 \u2227 qn, (1) or q[v?] = \u2203v : q\u20321 \u2228 q\u20322 \u2228 \u00b7\u00b7\u00b7 \u2228 q\u2032n, (2) where each qi denotes an FOL query. Note that Eq. (1) is the conjunctive normal form (CNF) and Eq. (2) is the disjunctive normal form (DNF). The two can be equivalently converted to each other via De Morgan\u2019s law. Following previous works (Ren, Hu, and Leskovec 2019), we focus on modeling the operations: projection r(\u00b7), conjunction (\u2227), disjunction (\u2228), and negation (\u00ac). Additionally, note that existential positive first-order (EPFO) queries only include projection, conjunction (\u2227), and disjunction (\u2228). 4 Methodology 4.1 Instruction Tuning on LLMs In this section, we introduce how to incorporate the KG information into the text-based prompt. When applying LLMs to complex logical reasoning, we denote an LLM as M, a text decoder that generates the corresponding output. Starting from the above definition, this task can be modelled as a text generation task. However, triplet generation is different from vanilla text generation because the entities and the relation in the triplet prompt have complex semantic information defined by the given KG. In fact, we want the generated answers to be entities that exist in the KG itself. Without this knowledge, the predicted answers are unreliable and unstable. Thus, incorporating the KG information into the prompt to provide more auxiliary information is the key to engaging LLMs in complex logical reasoning. In particular, when we fine-tune M, we can treat the training corpus as a set of question-answer pairs (S, A). For the task of complex logical reasoning over a knowledge graph, the input textual sequence S consists of the description of the question D, the knowledge graph neighbourhood information (i.e., related triplets) X, and the logical query. In our work, we use a simple but effective greedy depth traversal algorithm to search for neighbourhood information, and we simply discard all samples that exceed the token limit (the detailed algorithm and token distribution can be found in Appendix A). The logical query contains the textual information about the query q\u03c4 with the query structure \u03c4 and the specific query content Q\u03c4 that needs to be processed, which can be denoted as f1(q\u03c4). Likewise, we can denote the output answer A as f2(V\u03c4), where f2 indicates the textualization of V\u03c4 here. 
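A minimal sketch of assembling one fine-tuning pair (S, A) in the spirit of this construction (the corpus form and training objective are formalized just below), with wording borrowed from the Prompt 1 / Prompt 2 templates shown later in the section; the helper name and exact strings are illustrative assumptions.

# Sketch: build one supervised fine-tuning sample from the retrieved triplets (X),
# the textualized query (f1) and the decomposed, textualized answer (f2).
def build_sample(triplets, question_text, reasoning_steps, answer_entities):
    context = "\n".join("(%s,%s,%s)" % (h, r, t) for h, r, t in triplets)
    S = ("Given the following (h,r,t) triplets where entity h is related to entity t "
         "by relation r:\n" + context + "\nAnswer the question: " + question_text)
    A = ("Let's think step by step. The question can be split into %d questions.\n"
         % len(reasoning_steps)
         + "\n".join(reasoning_steps)
         + "\nWith reference to the relevant triplet above, the final answer is "
         + ", ".join(answer_entities) + ".")
    return {"input": S, "output": A}

# During supervised fine-tuning, the objective of Eqs. (4)-(5) below is then the
# standard next-token cross-entropy on the tokens of "output", conditioned on "input".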
In summary, the fine-tune training corpus C can be expressed in the following form: C = (S, A) = (D \u2295X \u2295f1(q\u03c4), f2(V\u03c4)). (3) The model M(parameterized by \u03b8) is fine-tuned by the next token prediction task. We fine-tune M to obtain the final model by maximizing the log-likelihood of the next token. The training objective can be formulated as L = \u22121 |C| |C| X i=1 log PM (ci | c<i) , (4) where ci(i = 1, 2, ..., |C|) represents the textual tokens of the training corpus C. For our task, the training objective can be transferred as L = \u22121 |C| |C| X i=1 log PM (A | S) . (5) 4.2 Data Augmentation by Binary Tree Decomposition This section introduces how to build fine-tuning corpora that make LLMs (Large Language Models) logic-aware based on instruction tuning. Chain of thought (COT) enables models to decompose multi-step problems into intermediate steps, subsequently improving the reasoning abilities of LLMs (Wei et al. 2022). However, pure-prompt based reasoning needs more incontext memory to perform complex logical reasoning. Considering complex logical queries, which query structure can be transferred into the form of a DAG and its hierarchical structure becomes a natural fit for decomposition into a series of sub-problems. So we propose a method for data augment based on Binary Tree Decomposition Mechanism to stimulate LLMs with the potential to decompose a complex query into a chain of simple queries. Binary Tree Decomposition Mechanism. The Binary Tree Decomposition Mechanism is divided into the following three steps: Query Computation Tree. For a complex FOL query, like the example shown in Figure 1, its computation graph that is a directed acyclic graph can be converted into a tree where the root node is v?. query computation tree. The answer variable and the constant entities in the query correspond to root and leaf nodes in the query computation tree. Each edge in the query computation tree points from the child node to the parent node. It can be recursively deduced that the subtree rooted at any non-leaf node in the tree corresponds to a subquery. In Appendix B, we provide a systematical procedure for transforming an FOL query to its query computation tree. Binary Tree Decomposition. For the one-to-many intersection/union structures in the tree, we separate each parent node into two child nodes. Note that the union branches merging step may create one-to-many structures that consist of both intersection and union edges, take Figure 1 for an example. This can be taken care of by first separating v? into an intersection structure (v\u2032 3 and v\u2032 5 in the example), and then separating the child node into an intersection structure (v\u2032 1 and v\u2032 2 for v\u2032 3, v\u2032 4, v3 for v\u2032 5), where v\u2032 denotes an intermediate entity retrieved by Neighborhood Retrieval Algorithm (Seen in Appendix A). Reverse Level Traversal. Finally, we decompose the binary computation tree into independent branches. Since the root node of the calculation tree is the answer entity, we perform a hierarchical traversal of all non-leaf nodes of the binary tree in reverse. As shown in Figure 1, the complex FOL query is decomposed into a sequence: [(v1, r, v\u2032 1), (v2, r, v\u2032 2), (v3, r, v\u2032 5), (v4, r, v\u2032 4), (v\u2032 1, r, v\u2032 3), \u2227, (v\u2032 2, r, v\u2032 3), \u00ac, (v\u2032 4, r, v\u2032 5), \u2227, (v3, r, v\u2032 5), (v\u2032 3, r, v?), \u2227, (v\u2032 5, r, v?)]. Data Augmentation. 
Now we can turn any loopless FOL query into a series of separate subqueries. We use a defined template to integrate the decomposition process into the answers to the training corpus. So, the training corpus C can be transferred into the following form: C = (S, A) = (D \u2295X \u2295f1(q\u03c4), f2(V\u03c4,Decomposed)), (6) where V\u03c4,Decomposed indicates the answer corresponding to the logical query with the decomposition reasoning path. 4.3 Fine-tuning Enhanced by Curriculum Learning As mentioned in previous sections, though decomposing into chain responses, complex queries still vary greatly in difficulty and complexity due to differences between query structure. Naturally, we believe that these different types of samples should not be simply lumped together. Intuitively, we incorporate curriculum learning into our training. To be specific, In view of the particularity of complex reasoning data, when we decompose it into logical chains, naturally, we can use the number of decomposed sub-logical queries as a natural difficulty discriminator to select different types of queries, e.g., a 1p query would be defined as difficulty-1, while a 2p query, which can be decomposed into two projection queries and an intersection query, would be defined as difficulty-3. The detailed difficulty discriminating process will be shown in Appendix (Table S1). Finally, we divided samples into three parts: easy samples, medium samples and difficult samples according to the difficulty level. Correspondingly, our training process is also divided into three stages. After we did some exploratory experiments, we did not simply train three data sets in the order of easy-medium-difficult. On the contrary, we decided to first use 80% easy samples, 10% medium samples, and 10% difficult samples for the first stage of training and the subsequent two-stage training process is a Leto, and experimental results in the next few sections also proved that this is effective. 4.4 Reasoning Module We use the final LACT LLM as the answer generator, as shown in Figure 1(c). We retrieve relevant information and textualize the FOL query, and finally we populate it into the template in Prompt 1 to generate responses. We use the LLM to do a simple text generation task to get the answer. After fine-tune, LACT LLM can follow the output mode in training stage in Figure 1, so we can extract final answers through simple regular expressions with the template in Prompt 2. Prompt 1: Query Prompt Template of LACT. Query Prompt Template Given the following (h,r,t) triplets where entity h is related to entity t by relation r. <Related Triplets> Answer the question: <question> Prompt 2: Answer Template of LACT. Expected Answer Template Let's think step by step. The question can be split into <k> question. <k decomposed subqueries> With reference to the relevant triplet above, the final answer is <answer entities>. 
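To make the two templates above concrete, the following is a minimal sketch (not the authors' released code; the helper names are hypothetical) of how a Prompt 1 input can be assembled from retrieved triplets and how the final answer entities can be pulled out of a Prompt 2-style response with a simple regular expression:

```python
import re

def build_query_prompt(triplets, question):
    """Fill the Prompt 1 template with retrieved (h, r, t) triplets and a textualized question."""
    triplet_text = ",".join(f"({h},{r},{t})" for h, r, t in triplets)
    return (
        "Given the following (h,r,t) triplets where entity h is related to "
        f"entity t by relation r. {triplet_text} Answer the question: {question}"
    )

def extract_answers(response):
    """Pull the answer entities out of a Prompt 2-style response."""
    match = re.search(r"the final answer is\s*(.+)", response, flags=re.IGNORECASE)
    if match is None:
        return []
    return [entity.strip() for entity in match.group(1).rstrip(".").split(",")]

# Example usage with a single retrieved triplet.
prompt = build_query_prompt(
    [("Tom Hardy", "award", "MTV Movie Award for Best Villain")],
    "Which entities are connected to Tom Hardy by relation award?",
)
answers = extract_answers("... the final answer is MTV Movie Award for Best Villain.")
```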
Method avgp avgood avgn 1p 2p 3p 2i 3i pi ip 2u up 2in 3in inp pin pni FB15K GQE 28.0 20.1 54.6 15.3 10.8 39.7 51.4 27.6 19.1 22.1 11.6 Query2Box 38.0 29.3 68.0 21.0 14.2 55.1 66.5 39.4 26.1 35.1 16.7 BetaE 41.6 34.3 11.8 65.1 25.7 24.7 55.8 66.5 43.9 28.1 40.1 25.2 14.3 14.7 11.5 6.5 12.4 CQD-CO 46.9 35.3 89.2 25.3 13.4 74.4 78.3 44.1 33.2 41.8 21.9 CQD-Beam 58.2 49.8 89.2 54.3 28.6 74.4 78.3 58.2 67.7 42.4 30.9 ConE 49.8 43.4 14.8 73.3 33.8 29.2 64.4 73.7 50.9 35.7 55.7 31.4 17.9 18.7 12.5 9.8 15.1 GNN-QE 72.8 68.9 38.6 88.5 69.3 58.7 79.7 83.5 69.9 70.4 74.1 61.0 44.7 41.7 42.0 30.1 34.3 QTO 74.0 71.8 49.2 89.5 67.4 58.8 80.3 83.6 75.2 74.0 76.7 61.3 61.1 61.2 47.6 48.9 27.5 LARK 56.1 43.1 18.4 72.8 50.7 36.2 66.9 60.4 56.1 23.5 52.4 40.6 16.2 5.7 33.7 26.1 10.0 LACT 82.6 71.9 56.9 93.5 73.5 59.6 92.3 82.3 76.8 75.9 74.6 60.4 81.2 61.6 52.0 43.5 41.7 FB15K-237 GQE 16.3 10.3 35.0 7.2 5.3 23.3 34.6 16.5 10.7 8.2 5.7 Query2Box 20.1 15.7 40.6 9.4 6.8 29.5 42.3 21.2 12.6 11.3 7.6 BetaE 20.9 14.3 5.5 39.0 10.9 10.0 28.8 42.5 22.4 12.6 12.4 9.7 5.1 7.9 7.4 3.5 3.4 CQD-CO 21.8 15.6 46.7 9.5 6.3 31.2 40.6 23.6 16.0 14.5 8.2 CQD-Beam 22.3 15.7 46.7 11.6 8.0 31.2 40.6 21.2 18.7 14.6 8.4 FuzzQE 24.0 17.4 7.8 42.8 12.9 10.3 33.3 46.9 26.9 17.8 14.6 10.3 8.5 11.6 7.8 5.2 5.8 ConE 23.4 16.2 5.9 41.8 12.8 11.0 32.6 47.3 25.5 14.0 14.5 10.8 5.4 8.6 7.8 4.0 3.6 GNN-QE 26.8 19.9 10.2 42.8 14.7 11.8 38.3 54.1 31.1 18.9 16.2 13.4 10.0 16.8 9.3 7.2 7.8 QTO 33.5 27.6 15.5 49.0 21.4 21.2 43.1 56.8 38.1 28.0 22.7 21.4 16.8 26.7 15.1 13.6 5.4 LARK 50.7 41.0 10.6 73.6 40.5 26.8 46.1 43.1 49.9 22.9 62.8 28.3 6.5 3.4 23.2 16.5 3.2 LACT 57.0 44.4 21.9 76.5 54.3 30.3 56.0 54.5 54.6 36.9 56.5 29.7 17.6 33.1 27.1 19.8 11.2 NELL995 GQE 18.6 12.5 32.8 11.9 9.6 27.5 35.2 18.4 14.4 8.5 8.8 Query2Box 22.9 15.2 42.2 14.0 11.2 33.3 44.5 22.4 16.8 11.3 10.3 BetaE 24.6 14.8 5.9 53.0 13.0 11.4 37.6 47.5 24.1 14.3 12.2 8.5 5.1 7.8 10.0 3.1 3.5 CQD-CO 28.8 20.7 60.4 17.8 12.7 39.3 46.6 30.1 22.0 17.3 13.2 CQD-Beam 28.6 19.8 60.4 20.6 11.6 39.3 46.6 25.4 23.9 17.5 12.2 FuzzQE 27.0 18.4 7.8 47.4 17.2 14.6 39.5 49.2 26.2 20.6 15.3 12.6 7.8 9.8 11.1 4.9 5.5 ConE 27.2 17.6 6.4 53.1 16.1 13.9 40.0 50.8 26.3 17.5 15.3 11.3 5.7 8.1 10.8 3.5 3.9 GNN-QE 28.9 19.6 9.7 53.3 18.9 14.9 42.4 52.5 30.8 18.9 15.9 12.6 9.9 14.6 11.4 6.3 6.3 QTO 32.9 24.0 12.9 60.7 24.1 21.6 42.5 50.6 31.3 26.5 20.4 17.9 13.8 17.9 16.9 9.9 5.9 LARK 52.9 26.9 12.4 87.8 45.7 33.5 51.3 48.7 23.1 22.2 20.6 41.1 9.9 5.9 24.5 13.3 7.3 LACT 60.1 32.0 17.2 91.4 53.6 40.6 62.2 54.9 31.4 34.8 27.0 34.0 16.0 21.2 21.0 16.3 11.6 Table 1: Test MRR results (%) on complex query answering across all query types. avgp is the average on EPFO queries; avgood is the average on out-of-distribution (OOD) queries, which consist pi/ip/2u/ip queries; avgn is the average on queries with negation. 5 Experiments 5.1 Training Settings Training datasets We opt for the the most popular datasets: FB15K, FB15K-237, NELL995. Detailed information about dataset is listed in D.1. We used the training set of the above dataset as the original training data. Training Details We use open-source model LLaMA-2base, including two different parameter sizes: 7B and 13B, as the base model for fine-tuning. All LLaMA-2-7B and LLaMA-2-13B models are trained by fully fine-tuning. For the fully fine-tuning setting, we use the AdamW optimizer to train the model with 1 epoch and the batch size is 128. We use 8 NVIDIA A100 GPUS for training with the learning rate of 3e-6. 
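As an illustration of the reported configuration (fully fine-tuning, one epoch, batch size 128, AdamW, learning rate 3e-6), a minimal Hugging Face Trainer sketch might look as follows. The checkpoint name, the bf16 flag, and the per-device batch / gradient-accumulation split across 8 GPUs are assumptions, and `corpus` is only a placeholder for the (S, A) pairs of Eq. (3):

```python
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "meta-llama/Llama-2-7b-hf"   # assumption: HF id of the LLaMA-2 base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

corpus = ["<question text> <decomposed answer text>"]   # placeholder for the training corpus C
dataset = Dataset.from_dict({"text": corpus}).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=4096),
    remove_columns=["text"],
)

args = TrainingArguments(
    output_dir="lact-llama2",
    num_train_epochs=1,
    per_device_train_batch_size=2,        # assumption: 8 GPUs x 2 x 8 accumulation steps = 128
    gradient_accumulation_steps=8,
    learning_rate=3e-6,
    optim="adamw_torch",
    bf16=True,                            # assumption, common for A100 training
)

Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal LM objective
).train()
```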
Training Cost We trained for a total of approximately 10 hours using 8*A100 during three training stages. 5.2 Experimental Settings Baseline Methods For comparing with KGE, we chose the following representative methods as baselines: GQE (Hamilton et al. 2018), Query2Box(Q2B) (Choudhary and Reddy 2023), BetaE (Ren and Leskovec 2020), CQD (Arakelyan et al. 2020), ConE (Zhang et al. 2021), GNN-QE (Zhu et al. 2022), QTO (Bai et al. 2023). We also compared our method to an LLM-based method, LARK (Choudhary and Reddy 2023). Evaluation Protocol Following previous works (Ren, Hu, and Leskovec 2019), we use mean reciprocal rank (MRR) as standard evaluation protocols for evaluating complex reasoning over knowledge graph. In the filtered setting, all easy and hard answers are filtered out during ranking. The detail could be found in Appendix D.3. 5.3 Main Results The main experiment results of three datasets are shown in Table 1. The baselines were trained on 1p/2p/3p/2i/3i queries, hence other than these 4 types of EPFO queries serve as OOD queries, and we report the average result on these queries in avgood. We observe that LACT significantly outperforms baseline methods across all datasets. Notably, LACT yields an averag gain of 7.3%, 2.9%, and 6.3% on avgp, avgodd, and avgn, compared to the previous SOTA method, especially more challenging datasets like FB15K237 and NELL995. This suggests that our method has better reasoning capability and captures a broad range of relations \u0394 = -0.29 \u0394 = + 0. 08 \u0394 = + 0. 41 20 40 60 80 100 0.5 Step 3.0 2.5 2.0 1.5 1.0 Stage 1 Stage 2 Stage 3 0.0 0.5 2.5 2.0 1.5 1.0 3.0 a) Effect of BTD b) Effect of CL Figure 2: Results of Ablation Studies.(a) Comparison PPL and train loss results of whether to use BTD based on FB15k.(b) Comparison PPL results of whether use CL based on FB15K highdifficulty queries. to effectively utilize this capability for enhancing the performance of complex queries. K FB p avg 15 , K FB o avg 15 , K FB n avg 15 , 237 15 , p _ K FB avg 237 15 , _ K FB o avg 237 15 , _ K FB n avg NELL p avg , NELL avg , o NELL avg , n Figure 3: The results of the main experiment. We evaluate the performance of three current state-of-the-art methods on three datasets. 5.4 Ablation Studies To verify the effectiveness of the LACT design, we conduct a two-part ablation study. The first part is designed to verify the effectiveness of logical chain decomposition and the second part is designed to verify the effectiveness of curriculum learning. Effect of Binary Tree Decomposition (BTD). As shown in Figure 2, logical chain decomposition can stimulate LLM\u2019s ability of logical decomposition, thereby greatly improving the performance of difficult queries. From a training perspective, as shown in Figure 2, although perplexity (PPL) and training loss of decomposed queries before training was slightly higher than that of ordinary queries, we found that as training progresses, the loss and PPL of decomposed queries will quickly decrease to levels much lower than ordinary queries, proving that chain decomposition is effective to reduce the difficulty of learning complex queries. Effect of Curriculum Learning. Curriculum learning, as illustrated in Table 3, greatly alleviates the gap between difficult training samples and the understanding ability of LLMs. We can observe from Figure 2 that compared with random shuffle sequence training, difficult training samples under curriculum learning gradually become easier to understand. 
It is worth mentioning that we found that the gain of curriculum learning on training corpus that has not been MRR (%) MRR (%) 0 10 20 30 40 50 60 0 10 50 40 30 20 0.25 0.75 0.50 1.00 7B 13B a) Model scaling b) Data scaling AvgP AvgOOD Avgn AvgP AvgOOD Avgn Figure 4: Performance of scaling LACT on FB15K-237 with different a) model and b) data scales. decomposed by logical chains is very small, which supports our theory from the side. It is difficult for LLMs to understand the difficulty difference between undecomposed samples, so curriculum learning is also difficult to take effect. 5.5 Transferability Study Considering the diversity of complex reasoning tasks, we can divide transferability into two levels, task-level and dataset-level transferability. Task-level transferability. The results in Table 1 show that our method achieves a relative gain of 9.9% on the OOD task, which demonstrates the strong generalization of our fine-tuning framework. Even in the OOD queries, as shown in Table 5, more than 95% of test samples can still follow logical chain reasoning. These phenomena indicate strong generalization ability of LACT. Dataset-level transferability. In fact, almost all KGE methods, even if some of the optimization methods claim not to require training, require a KGE base model adapted to a specific dataset, which leads to the inherent defect of extremely poor Transferability of the KGE method. However, as previous research has shown, fine-tuning of LLMs is mainly to stimulate the knowledge and capabilities of potentially relevant downstream tasks contained in LLM pre-training. This has also become the theoretical basis for the transferability of fine-tuning methods for LLMs. The results in Figure 5 show that the reasoning ability stimulated by one dataset can still be demonstrated in another dataset, which reflects well in the query performance which only dropped less than 5%. Model-level transferability. We tried analytical experiments with different base models to determine whether our Figure 5: Ablation experimental results of Accuracy (%) trained on FB15k and tested on FB15K-237, compared to models trained on all mixed training data. Method 1p 2p 3p 2i 3i pi ip 2u up 2in 3in inp pin pni Llama2 67.2 42.3 38.3 61.6 44.8 34.1 36.9 44.2 28.4 44.7 38.5 36.9 32.1 30.0 + IT 94.6 68.8 60.2 84.5 66.7 56.0 60.2 69.5 42.3 66.5 54.4 41.0 38.8 36.9 +BTD 91.5 72.3 65.6 89.7 75.2 60.1 65.4 72.5 49.9 74.3 66.4 48.9 46.5 42.5 Table 2: Accuracy results (%) of whether to use BTD on hard complex query answering across all query types, evaluated on FB15k. BTD CL 1p 2p 3p 2i 3i pi ip 2u up 2in 3in inp pin pni Llama2 w/IT 94.6 68.8 60.2 84.5 66.7 56.0 60.2 69.5 42.3 66.5 54.4 41.0 38.8 36.9 \u2713 94.2 72.3 65.6 89.7 75.2 60.1 65.4 72.5 49.9 74.3 66.4 48.9 46.5 42.5 \u2713 94.6 70.9 61.3 84.4 72.7 58.2 63.2 69.5 47.3 68.5 64.0 42.3 40.9 37.1 \u2713 \u2713 94.8 78.7 69.2 94.8 88.1 79.3 80.5 80.7 67.1 90.6 70.4 59.3 53.6 46.7 Table 3: Accuracy results (%) of whether to use CL on hard complex query answering across all query types, evaluated on FB15k. +3.9 +7.1 +13.9 Low Medium High MRR (%) 0 90 80 70 60 50 40 30 20 10 Previous best LACT 0.0 0.8 0.6 0.4 0.2 1.0 30 90 80 70 60 50 40 Accuracy (%) Completeness of relevant information Various difficulty Figure 6: (left) MRR performance of LACT and previous SOTA methods at different difficulties based on FB15K. 
(right) The correlation between relevant information completeness and accuracy, evaluated on FB15K-237; we selected the 3p and pi queries, which have the same inference path length, as the task types. We assume that the completeness of all simple queries is 1. LACT was universal. Obviously, all types of queries are improved to a certain extent by the progress of the base model. Experimental results show that LACT is suitable for different base models and has strong generalization. 5.6 Scalability Study To verify the scalability of LACT, we scale LACT to different model sizes and data volumes. Performance on different model sizes. We scaled the model size to see whether LACT still has an impact when operating at a larger scale. As Figure 4 shows, the performance of our method improves as the model size increases. Performance on different data sizes. We conducted experiments on different ratios of training data to verify the robustness of LACT. 6 Discussion 6.1 When and Where Does LACT Work? The performance of LACT is related to the following two aspects: I. The completeness of the relevant information extracted from the KG. II. The sophistication of the complex reasoning. LACT performs consistently better with more complete information. We measure completeness a posteriori: the completeness of the relevant triplets is the proportion of the triplets on the inference path of the complex query that appear in the provided context. (For example, if the inference path of a 2i query is (Turing Award, winner, Yoshua Bengio), (Canada, citizen, Yoshua Bengio), and we can only retrieve (Turing Award, winner, Yoshua Bengio) from the incomplete KG, then the completeness of the relevant triplets is 1/2.) We set the completeness of simple queries that can be answered directly to 1, and thereby obtain the relation between accuracy and relevant information completeness. As seen in Figure 6, LACT obtains a significant gain when the completeness of the relevant information increases, though with zero relevant information it still retains a certain amount of complex reasoning ability. Query: Given the following (h,r,t) triplets where entity h is related to entity t by relation r: (Spider-Man 3,nominated for,MTV Movie Award for Best Villain),(Spider-Man 3,nominated for,BAFTA Award for Best Special Visual Effects),(Tom Hardy,award,MTV Movie Award for Best Fight),(Tom Hardy,award,MTV Movie Award for Best Villain). Answer the question: Let us assume that the set of entities E is connected to entity Tom Hardy by relation award and the set of entities F is connected to entity Spider-Man 3 by relation nominated for. Then, what are the entities in the intersection of set E and F, i.e., entities present in both F and G? LACT: Let's think step by step. The question can be split into 3 question. 1. Which entities are connected to Tom Hardy by relation award? The entity set of the answer is represented by [PP1]. 2. Which entities are connected to Spider-Man 3 by relation nominated for? The entity set of the answer is represented by [PP2]. 3. What are the entities in the intersection of entity sets [PP1] and [PP2]? With reference to the relevant triplet above, the final answer is MTV Movie Award for Best Villain, MTV Movie Award for Best Fight. Regular (ChatGPT): The entities in the intersection of set E and F, i.e., entities present in both F and G, are \"MTV Movie Award for Best Villain\" and \"BAFTA Award for Best Special Visual Effects.\" Figure 7: Inference results of ChatGPT and LACT on a 2i query case, respectively.
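A minimal sketch of the completeness measure described in Section 6.1 above (the helper is hypothetical, not taken from the paper):

```python
def completeness(inference_path_triplets, retrieved_triplets):
    """Fraction of the inference-path triplets that appear in the retrieved context."""
    path = set(inference_path_triplets)
    if not path:
        return 1.0  # simple queries that can be answered directly are assigned completeness 1
    return len(path & set(retrieved_triplets)) / len(path)

# Example from the 2i query above: only one of the two path triplets is retrievable.
path = [("Turing Award", "winner", "Yoshua Bengio"), ("Canada", "citizen", "Yoshua Bengio")]
retrieved = [("Turing Award", "winner", "Yoshua Bengio")]
assert completeness(path, retrieved) == 0.5
```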
LACT performs consistently better on higher difficulties As mentioned before, we simply divide the difficulty of the query by the number of hops in the query. The results in Figure 6 show that our model yields more gain in tasks of higher-level difficulty and complexity, which benefits from our unique and sophisticated fine-tuning framework. 6.2 Case Study To have a close look, we perform the case studies by analyzing the results of LACT and ChatGPT (GPT-3.5-turbo0613). As shown in Figure 7, ChatGPT cannot make good use of incomplete knowledge graphs for reasoning in some cases. Conversely, LACT performs reasoning through a complete logical chain, making maximum use of the relevant information provided and deducing the correct answer, which greatly improves the reasoning ability. Method 1p 2p 3p 2i 3i pi ip 2u up 2in 3in inp pin pni Llama2-7B 76.5 54.3 30.3 56.0 54.5 54.6 36.9 56.5 29.7 17.6 33.1 27.1 19.8 11.2 Mistral-7B 82.6 59.5 34.2 59.6 58.5 59.1 39.1 59.2 33.7 20.4 39.2 31.4 22.4 13.5 Qwen1.5-7B 81.3 57.2 36.8 62.9 58.1 63.9 38.6 60.4 31.8 20.7 41.9 28.7 20.8 11.6 Table 4: MRR results (%) based on differnet base models across all query types, evaluated on FB15k-237. Metric pi ip 2u up Prodecomposed 98.7 100.0 97.8 100.0 Protrue,decomposed 98.6 99.9 97.8 99.6 Table 5: In OOD queries, the proportion of queries that can be decomposed and the proportion of queries that can be decomposed correctly on fb15k. 7 Conclusion In this paper, we present a simple and effective fine-tuning framework LACT to boost the complex logical reasoning ability over KGs. LACT is a two-stage method that consists of both Binary Tree Decomposition and Curriculum Learning and can be applied to various size LLMs with different data sizes. We empirically demonstrate the effectiveness and universality of the LACT on a series of widely-used knowledge graph datasets. Further analyses reveal the underlying mechanism of our method, and investigate When and Why Does LACT Work. We hope that our work can inspire more research on combining knowledge graphs and LLMs. Appendix A Neighborhood Retrieval Algorithm A.1 Retrieval Algorithm To strike a balance between the completeness of relevant information and the token number limit of LLMs, we search for as many relevant triplets as possible along the possible paths. Particularly, for the 1p query, we simply find all the triplets containing the entity or the relation. For another query, as shown in Figure S1 for each leaf node in DAG, we do depth traversal on the graph. For each step in the traversal process, if this step is a projection, we search for all the possible triplets. Otherwise, we perform corresponding operations on intersection and union respectively to filter out the corresponding entities. We continue this traversal until the obtained entity is empty or reaches the root node. All triplets during the traversal are related to triplets. A.2 Over-limit Solutions and Token distribution In all experimental setups, we used the Max Seq Length of 4096, and we simply discarded all out-of-bounds samples and recorded them as 0 at the time of evaluation. In fact, after using our information retrieval algorithm, most of our samples were controlled below 4096. As shown in Figure S2, compared with the algorithm of searching all possible related triples, our retrieval algorithm greatly reduces the consumption of tokens. 
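A simplified sketch of this retrieval step is given below; it only follows chains of projection edges and uses a crude triplet cap in place of the token limit, so it illustrates the idea rather than reproducing the authors' exact algorithm from Appendix A:

```python
from collections import defaultdict

def retrieve_neighbourhood(kg_triplets, anchors, relation_path, max_triplets=200):
    """Greedy depth traversal: from each anchor entity, follow the projection
    relations of the query and collect every triplet seen, stopping when the
    frontier is empty or the triplet budget (stand-in for the token limit) is hit."""
    by_head = defaultdict(list)
    for h, r, t in kg_triplets:
        by_head[(h, r)].append((h, r, t))

    related, frontier = [], set(anchors)
    for relation in relation_path:                 # one projection step per hop
        next_frontier = set()
        for entity in frontier:
            for triplet in by_head[(entity, relation)]:
                related.append(triplet)
                next_frontier.add(triplet[2])
                if len(related) >= max_triplets:
                    return related
        frontier = next_frontier
        if not frontier:                           # obtained entity set is empty
            break
    return related
```

Intersection and union filtering of the intermediate entity sets, as described above, is omitted here for brevity.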
intermediate variables related but wrong variables easy answers hard answers Neighborhood Retrieval Algorithm Figure S1: A case on Neighborhood Retrieval Algorithm 0 250 500 750 1000 1250 1500 1750 2000 counts 0.00% 5.00% 10.00% 15.00% 20.00% 25.00% 30.00% 35.00% number fb15k_all ours ours_mean all_context all_mean Figure S2: Probability distribution of the number of tokens in FB15K and all-context Retrieval Algorithm. B Conversion Between FOL Expression and Query Computation Tree The conversion from an FOL expression (disjunctive normal form) to its query computation tree involves two steps: dependency graph generation, and union branches duplication. Dependency Graph Generation. Upon encountering a First-Order Logic (FOL) expression, our primary procedure entails the allocation of distinct nodes to individual variables, while assigning a unique node to the constant entity within each one-hop atom. It is important to acknowledge that multiple nodes may represent the same constant entity, given its occurrence in various one-hop atoms. Subsequently, undirected edges are employed to establish connections between nodes in accordance with the defined one-hop atoms. Specifically, if ei j = r(v\u2032, v) (or r(c, v)), then we connect the nodes of v\u2032 (or c) and v by an edge ri. Similarly, if ei j = \u00acr(v\u2032, v) (or \u00acr(c, v)), then we connect the nodes of v\u2032 (or c) and v by an edge \u00acri. The variable i serves as a distinguishing label for edges emanating from distinct conjunctions. The formulated undirected dependency multigraph must conform to a tree structure, signifying a connected and acyclic graph. Choosing the node v? as the root, 1p 2p 3p 2i 3i pi ip 2u up 2in 3in inp pin pni Number of subqueries 1 2 3 3 5 4 4 3 4 3 5 4 4 4 Difficulty 1 1 2 2 3 3 3 2 3 2 3 3 3 3 Table S1: Difficulty of different query types where 1 means easy, 2 means medium, 3 means hard. 1p 2p 3p 2i ip pi 2u up u u 3i 1p 2p 3p 2i ip pi 2u up 3in 2in inp pni pin n n n n n Figure S3: Query structures, illustrated in their query computation graph representations we establish edge directions ensuring that they uniformly point from child nodes to their respective parent nodes, with due consideration to handling inverse relations. Notably, constant entities inherently function as leaf nodes within the tree, given that each entity node exclusively connects to a single variable node. Union Branches Duplication Then we handle the duplication branches in the query computation tree. On the path \u03c4 from root to every leaf node, if exists, we find the first node vi such that the edges between vi and its child node vj are all of the same relations, but in different conjunctions: rt1, rt2, . . . , rtp. We merge these edges into a single edge rt1,t2,...tp, since they all correspond to the same one-hop atom but in different conjunctions, they can be merged by the distributive law: (A \u2227B) \u2228(A \u2227C) \u21d4A \u2227(B \u2228C) (7) We assert that there is a subpath from vi to some vk within the path \u03c4 that only consists of edges rt1,t2,...tp, and vk is connected to different child nodes by relations from conjunctions t1, t2, . . . tp. These edges are annotated as unions, while the remaining one-to-many structures are designated as intersections. Consequently, the multigraph transforms into a simple graph, devoid of multiple edges. C Difficulty We divide the difficulty by the number of decomposed subqueries. Query types and their corresponding difficulties are shown in Table S1. 
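The difficulty assignment of Table S1 can also be expressed as a small lookup; the snippet below is an illustrative sketch (the bucket names are ours):

```python
# Number of decomposed sub-queries per query type, as listed in Table S1.
SUBQUERY_COUNT = {"1p": 1, "2p": 2, "3p": 3, "2i": 3, "3i": 5, "pi": 4, "ip": 4,
                  "2u": 3, "up": 4, "2in": 3, "3in": 5, "inp": 4, "pin": 4, "pni": 4}

def difficulty(query_type):
    """Map a query type to its curriculum bucket (difficulty 1/2/3 in Table S1)."""
    n = SUBQUERY_COUNT[query_type]
    if n <= 2:
        return "easy"      # difficulty 1
    if n == 3:
        return "medium"    # difficulty 2
    return "hard"          # difficulty 3
```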
D Experiment Details D.1 Dataset Details \u2022 FB15K is based on Freebase, a large collaborative knowledge graph project that was created by Google. FB15k contains about 15,000 entities, 1,345 relations, and 592,213 triplets (statements that assert a fact about an entity). \u2022 FB15K-237 is a subset of FB15k, containing 14,541 entities, 237 relations, and 310,116 triplets. The relations in FB15k-237 are a subset of the relations in FB15k, and were created to address some of the limitations of FB15k, such as the presence of many irrelevant or ambiguous relations, and to provide a more challenging benchmark for knowledge graph completion models. \u2022 NELL995 was created using the Never-Ending Language Learning (NELL) system, which is a machine learning system that automatically extracts knowledge from the web by reading text and inferring new facts. NELL995 contains 9,959 entities, 200 relations, and 114,934 triplets. The relations in NELL995 cover a wide range of domains, including geography, sports, and politics. D.2 Query Structure For a fair comparison, we use the 14 types of complex queries generated by the same rules in (Ren and Leskovec 2020). The query structure of each type is shown in Figure S3. For each complex query, answers are divided into easy answers and hard answers, based on whether answer can be derived by existing edges in the graph directly. Specifically, in the valid/test set, the easy answers are the entities that can be inferred by edges in the training/valid graph, while hard answers are those that need to be inferred by predicting missing edges in the valid/test graph. Referring to previous work, we calculate standard evaluation metrics including mean reciprocal rank (MRR), in the filtered setting where all easy and hard answers are filtered out during ranking. D.3 Evaluation Protocol Detail For each complex query, its answers are divided into easy answers and hard answers, based on whether the answer can be derived by existing edges in the graph directly. Specifically, in the valid/test set, the easy answers are the entities that can be inferred by edges in the training/valid graph, while hard answers are those that need to be inferred by predicting missing edges in the valid/test graph. Referring to previous work (Ren and Leskovec 2020), we calculate standard evaluation metrics including mean reciprocal rank (MRR), in the filtered setting where all easy and hard answers are filtered out during ranking."
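For reference, a schematic sketch of MRR in the filtered setting described above (not the authors' evaluation code; it assumes every candidate entity has been assigned a score):

```python
def filtered_mrr(scores, easy_answers, hard_answers):
    """scores: dict entity -> model score. For each hard answer, all other
    correct answers (easy and hard) are filtered out before ranking."""
    reciprocal_ranks = []
    for answer in hard_answers:
        filtered = {e: s for e, s in scores.items()
                    if e == answer or (e not in easy_answers and e not in hard_answers)}
        ranking = sorted(filtered, key=filtered.get, reverse=True)
        reciprocal_ranks.append(1.0 / (ranking.index(answer) + 1))
    return sum(reciprocal_ranks) / len(reciprocal_ranks)
```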
16
+ }
intro_8K/test_introduction_short_2405.01660v1.json ADDED
@@ -0,0 +1,17 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01660v1",
3
+ "title": "Investigating Wit, Creativity, and Detectability of Large Language Models in Domain-Specific Writing Style Adaptation of Reddit's Showerthoughts",
4
+ "abstract": "Recent Large Language Models (LLMs) have shown the ability to generate\ncontent that is difficult or impossible to distinguish from human writing. We\ninvestigate the ability of differently-sized LLMs to replicate human writing\nstyle in short, creative texts in the domain of Showerthoughts, thoughts that\nmay occur during mundane activities. We compare GPT-2 and GPT-Neo fine-tuned on\nReddit data as well as GPT-3.5 invoked in a zero-shot manner, against\nhuman-authored texts. We measure human preference on the texts across the\nspecific dimensions that account for the quality of creative, witty texts.\nAdditionally, we compare the ability of humans versus fine-tuned RoBERTa\nclassifiers to detect AI-generated texts. We conclude that human evaluators\nrate the generated texts slightly worse on average regarding their creative\nquality, but they are unable to reliably distinguish between human-written and\nAI-generated texts. We further provide a dataset for creative, witty text\ngeneration based on Reddit Showerthoughts posts.",
5
+ "authors": "Tolga Buz, Benjamin Frost, Nikola Genchev, Moritz Schneider, Lucie-Aim\u00e9e Kaffee, Gerard de Melo",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "cs.CL",
9
+ "cats": [
10
+ "cs.CL",
11
+ "cs.AI"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "LLM Fairness",
15
+ "gt": "As Large Language Models (LLMs) continue to advance, it becomes increasingly challenging for humans to distinguish AI-generated and human- written text. Generated text may appear surpris- ingly convincing, inciting debates whether new forms of evaluating models are necessary (Se- jnowski, 2023). The high quality of LLM outputs can benefit diverse use cases, while also increasing the risk of enabling more sophisticated spam, mis- information, and hate speech bots (Manduchi et al., 2024). LLMs are known to master various aspects of grammar and basic semantics. Yet, one goal that still has proven non-trivial using LLMs is that of generating creative text (Chakrabarty et al., 2023a), especially in the realm of humour (Jentzsch and Kersting, 2023). *Equal contribution We seek to understand the ability of differently- sized LLMs to replicate human writing style in short and creative texts as shared in the Show- erthoughts community on Reddit, which exhibits humour, cleverness, and creativity \u2013 often in a single sentence. The Showerthoughts community (Reddit\u2019s 11th largest) provides a unique dataset of short texts with a characteristic writing style drawing from general creative qualities. To un- derstand how well models of different sizes can replicate such witty Reddit posts, we fine-tuned two LLMs, GPT-2 (Medium) and GPT-Neo, on posts from this online community. Additionally, we used GPT-3.5-turbo as a zero-shot model, i.e., without additional fine-tuning for our specific task. We evaluated how well the AI-generated texts em- ulate the style of Showerthoughts. To this end, we employed a mixed-method approach: We com- pare genuine, human-authored posts with generated Showerthoughts based on various lexical charac- teristics as well as in their similarity in sentence embeddings. Furthermore, we conducted a human evaluation study to assess the human evaluators\u2019 perception of the creative quality (specifically, logi- cal validity, creativity, humour, and cleverness) and to measure how easily AI-generated texts can be detected. We find that participants cannot reliably detect AI-generated texts, as the LLMs come close to human-level quality. Generating humour remains a challenging task, but shows a promising future for the generation of short, witty, and creative state- ments. We find that a machine learning (ML) classifier, trained on Showerthoughts, succeeds at robustly distinguishing human-authored from AI- written text. Thus, there remains potential for cur- rent AI-generated content to be identified, even in the ambiguous realm of humour and creative text. We summarize our contributions in this paper as follows: (1) A new dataset for creative, witty text generation based on Reddit Showerthoughts arXiv:2405.01660v1 [cs.CL] 2 May 2024 posts.1 (2) Experiments with three different models for the generation of creative, witty text. (3) Eval- uation of human perception of creative language generation through a survey. (4) Experiments on automated authorship identification of the text as human-written or AI-generated.",
16
+ "main_content": "Reddit and Showerthoughts Reddit is a social media platform that is organized in communities called subreddits, which exist for a plethora of topics \u2013 all written, curated, voted, and commented on by the community. This provides a diverse and valuable research subject; each subreddit is characterized by a distinct writing style and type of content (Agrawal et al., 2022; Buz et al., 2024). Our work is centered on the r/Showerthoughts subreddit2, which defines Showerthought as \u201ca loose term that applies to the types of thoughts you might have while carrying out a routine task like showering, driving, or daydreaming. At their best, Showerthoughts are universally relatable and find the amusing/interesting within the mundane.\u201d In general, popular Showerthoughts exhibit wit (or cleverness), creativity, and sometimes humour, which come from the realization of matters that lie in everyday life\u2019s banality, which are well thought out but tend to go unnoticed. They condense various intellectual qualities into short texts that often allude to a deeper context \u2013 these qualities can be facilitators of a text\u2019s success in various other settings, including posting on social media or copywriting for marketing purposes. One of the community\u2019s most successful post goes as follows: \u201cWhen you\u2019re a kid, you don\u2019t realize you\u2019re also watching your mom and dad grow up.\u201d3 To the best of our knowledge, there is only one related paper focused on Showerthoughts, which covers a neuro-scientific perspective (Crawford, 2020). Limited research exists that uses Showerthoughts data among other subreddits, but on completely different topics, e.g., detection of suicidal thoughts (Alada\u02d8 g et al., 2018), predicting conversations (Kim et al., 2023), or changes of the community (Lin et al., 2017). Our work is the first to analyse the texts that are shared in this community from a perspective of computational linguistics and the first to publish a Showerthoughts dataset. 1Dataset accessible via our GitHub repository. 2www.reddit.com/r/Showerthoughts 3Accessible via https://www.reddit.com/awd10u/ Creative Quality in Natural Language Generation Early work on computational creativity found that while computers can aid in the creative process, it has long remained difficult to achieve novelty and quality with such systems (Gerv\u00e1s, 2009). More recent LLMs possess a remarkable ability to produce entirely novel content, but Chakrabarty et al. (2022a) find that they have limited capabilities w.r.t. figurative language, and that full stories generated by LLMs seem to be of far inferior quality compared to those written by professional authors (Chakrabarty et al., 2023a). Further, popular LLMs such as ChatGPT have been found to be subpar at writing creative and humourous content such as jokes (Jentzsch and Kersting, 2023). For many creative tasks, such as writing convincing poems, human intervention may be needed to create high-quality text (Chakrabarty et al., 2022b), and the temperature hyperparameter may have a significant impact on the creativity of LLM-generated texts (Davis et al., 2024). AI-assisted writing may lead to improved results (Roemmele, 2021) and LLMs have been perceived as writing collaborators by professional writers (Chakrabarty et al., 2023b). However, it is yet to be seen how the generation of creative, witty text without human intervention can be improved to agree with human preferences. 
Authorship Identification There have been significant advancements in LLMs generating grammatically correct sentences adhering to semantic rules, even purportedly attaining human levels (K\u00f6bis and Mossink, 2021; Clark et al., 2021). This presents opportunities in areas such as accessibility of information and education, and enhanced productivity (Dwivedi et al., 2023; Noy and Zhang, 2023). However, it also poses a threat to the credibility of information (Kreps et al., 2022; Kumar and Shah, 2018), especially as social media users often fail to detect bots (Kenny et al., 2022), while such bots continue to evolve and spread misinformation (Abokhodair et al., 2015; Shao et al., 2018). Indeed, Ippolito et al. (2020) found that even trained participants struggle to identify AIgenerated texts. K\u00f6bis and Mossink (2021) further found that while completely random texts could be detected, cherry-picked texts could not be distinguished by humans. The model size used to generate texts affects participants\u2019 performance \u2013 both studies used smaller models (GPT-2 with 355M, 774M, and 1.5B parameters, respectively), whereas participants confronted with never models such as GPT-3 performed significantly worse in a similar study (Clark et al., 2021; Brown et al., 2020). With larger model sizes, humans require more time to decide, and their accuracy declines (Brown et al., 2020). In very recent work, Chen and Shu (2024) find that LLM-generated misinformation can be more deceptive than when written by human authors, and Mu\u00f1oz-Ortiz et al. (2023) identify measurable differences between AI-generated and human-written texts. As LLMs advance rapidly, it becomes crucial to understand what type of generated content humans can detect and how to detect generated content automatically. For automatic authorship identification, Wani and Jabin (2017) use ML classifiers to detect bots. Ippolito et al. (2020) use a finetuned BERT-based binary classifier to label texts as human-written or AI-generated. However, their model lacks generalizability \u2013 when trained on topk samples and evaluated on non-truncated random samples, the model only achieves 43.8% accuracy. The sharp increase in discussions about misuse and plagiarism using tools such as ChatGPT has shifted researchers\u2019 focus on this area, e.g., Mitchell et al. (2023) proposed DetectGPT, a zero-shot model for detecting AI-generated text, and Deng et al. (2023) proposed a Bayesian Surrogate Model, claiming to outperform DetectGPT. Tang et al. (2024) provide an overview of further detection techniques. 3 Data Compilation To create the Showerthoughts dataset, we used the publicly available Pushshift API (Clark et al., 2021; Brown et al., 2020) to extract submissions from the Showerthoughts subreddit from April 2020 to November 2022, resulting in an initial collection of 1.3 million posts.4 We discard posts that have been deleted or removed (often due to rule violation) as well as those that contain images or additional explanations in their body text (as the community\u2019s rules require the full Showerthought to be contained in the title). Accordingly, we only use each post\u2019s title for our experiments, resulting in a dataset of 411,189 Showerthoughts. An analysis of the most frequent choices of words reveals that they are often about people, life, common objects, and the world in general. A frequent word analysis 4In mid 2023, Reddit changed their API guidelines, forcing Pushshift to restrict its access to Reddit moderators only. 
Our datasets were collected before this change occurred. indicates that they often compare things using, e.g., \u201cmore\u201d, \u201cother\u201d, \u201cold\u201d, \u201cgood\u201d. In order to obtain a ground truth about the lexical characteristics of the dataset and later compare them with the generated texts, we conducted several tests on 5,000 randomly selected examples, focusing on sentence complexity, length, grammar, and vocabulary, the results of which are summarized in Table 1 (in the first row \u2018Genuine\u2019). The complexity score is based on the Flesch-Kincaid grade level, which quantifies a text\u2019s complexity based on the number of words per sentence and syllables per word (Kincaid et al., 1975). For example, a score of 7.0 indicates that a 7th-grade student (or a person with at least seven years of education) would typically be able to read and understand the respective text.5 4 Experimental Setup In the following, we detail our experimental setup for addressing our three research questions. We explain our process for generating Reddit Showerthoughts-like texts with differently sized selected LLMs. These texts are subsequently evaluated through a survey, assessing several textual aspects. Additionally, we compare the ability of humans and fine-tuned BERT-based classifiers in detecting originality. An overview of this experimental setup is given in Figure 1. Figure 1: Overview of our experimental setup 5Tests are conducted with the textstat, language_tool_python, and nltk libraries. 4.1 LLM Fine-Tuning and Prompting We consider two setups for the generation of Showerthoughts; (1) two models of different sizes are fine-tuned; (2) ChatGPT (based on GPT-3.5-turbo) is invoked to generate Showerthoughts in a zeroshot setting. Fine-tuning GPT-2 and GPT-Neo For the finetuned models, we select GPT-2 Medium (355M parameters) and GPT-Neo (2.7B parameters) and finetune them on the aforementioned Showerthoughts dataset. To later be able to induce the models to generate Showerthoughts, each instance is wrapped around two previously unseen tokens, <|showerthought|> and <|endoftext|>. These serve as prompt and end-of-text markers, respectively, during generation. We use the standard parameters for text generation for both models, including a temperature value of 0.9. GPT-2 Medium6 is a unidirectional causal language model that generates text sequences, using 355 million parameters. This was the smallest LLM still able to generate sensible results in our initial evaluation during LLM selection. We use AdamW for optimization, the GPT2Tokenizer, a maximum learning rate of 3 \u00d7 10\u22125 with 5,000 warm-up steps, a batch size of 16, and train the model for five epochs on the task of next token prediction. GPT-Neo is an architecturally upgraded model compared to GPT-2 that closely resembles GPT3, with 2.7 billion parameters and trained on the Pile dataset (Gao et al., 2020). We selected the same hyperparameters as for GPT-2 besides using Adafactor optimization, which provides manual control over the learning rate and has better memory efficiency (Shazeer and Stern, 2018). We used a learning rate of 2 \u00d7 10\u22125, which is reduced to 7 \u00d7 10\u22126 over five epochs, and a batch size of 32. Zero-shot Text Generation with ChatGPT In initial experiments, we found that a basic prompt (\u201cPlease generate 10 Showerthoughts\") results in repetition of content and structure in generated texts, in accordance with the findings of Jentzsch and Kersting (2023). 
We therefore extended the prompt by including a definition of Showerthoughts, alongside instructions for enhancing wit, creativity, and humour, and varying sentence structure. This resulted in the following 6Accessible via https://huggingface.co/ gpt2-medium prompt: \"Please generate 100 Showerthoughts, which are inspired by the Reddit community r/Showerthoughts. Vary the sentence structure between the different sentences, and try to be clever, creative, and funny. The Showerthoughts should be relatable and connected to things that people might encounter during mundane tasks.\" This process was repeated 50 times to sample a total of 5,000 Showerthoughts. We use the standard settings for text generation, including a temperature value of 0.7. 4.2 Survey of Human Preferences We evaluated the results of the text generation models by means of a survey. The participants were randomly split into two groups to evaluate a larger number of Showerthoughts while ensuring an adequate number of responses per Showerthought and a reasonable completion time (around 25 minutes). Each group evaluated 15 human-written and ten AIgenerated Showerthoughts, each from GPT-2, GPTNeo, and ChatGPT. Participants were not informed about the distribution of the sources and received the texts in a random order to prevent evaluation bias. The Showerthoughts were selected randomly and manually filtered to exclude posts harboring vulgarity or a \u201cnot safe for work\u201d (NSFW) topic. The survey starts with a briefing on Reddit and r/Showerthoughts, and we informed participants that they will evaluate 45 Showerthoughts, some of which are written by humans and some generated by LLMs. We further ask demographic questions, including age group and the level of experience with Reddit, Showerthoughts, and Machine Learning on a five-point scale. Then, participants were asked to evaluate a series of 45 Showerthoughts by rating along six dimensions (each on a six-point Likert scale): (1) \u201cI like this Showerthought\u201d, (2) \u201cIt makes a true/valid/logical statement\u201d, (3) \u201cIt is creative\u201d, (4) \u201cIt is funny\u201d, (5) \u201cIt is clever\u201d, and (6) \u201cI believe this Showerthought has been written by a real person\u201d. These criteria were selected to capture the quality of a Showerthought from diverse angles, and are also applicable to comparable short texts such as social media posts and marketing texts. For evaluation, we consider the average scores of the selected Likert scale from 1 (lowest) to 6 (highest). This method is widely used, e.g., in Tang et al. (2021). Finally, the participants could optionally provide a free-text explanation or reasoning on how they decided. 4.3 Authorship Identification As a counterpart to the human evaluators on the task of authorship identification, we fine-tuned a total of four RoBERTa-based models7 (Liu et al., 2019) for binary classification of each input Showerthought as either human-written or AI-generated. For the training and testing of the three LLMspecific RoBERTa classifiers, we used 10,000 randomly selected Showerthoughts per class (i.e., genuine, generated) for GPT-2 and GPT-Neo, and 5,000 examples for the ChatGPT version (due to the smaller generated dataset size). In addition, we trained and tested another RoBERTa classifier on a combined set of 15,000 examples per class (i.e., 5,000 per LLM source). All datasets were randomly split at a 80\u201320 ratio for training and testing. 
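A minimal sketch of such a binary authorship classifier is shown below; it is an assumed setup based on the components reported in the paper (RoBERTa-base, AdamW) rather than the authors' code, and the single example post stands in for the 80-20 training split:

```python
import torch
from transformers import RobertaForSequenceClassification, RobertaTokenizerFast

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# One illustrative training step on a toy batch (label 0 = genuine, 1 = generated).
batch = tokenizer(
    ["When you're a kid, you don't realize you're also watching your mom and dad grow up."],
    return_tensors="pt", padding=True, truncation=True,
)
labels = torch.tensor([0])
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()
```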
We assessed the classifiers in three setups; (1) evaluating the three LLMs\u2019 outputs compared to human-written (genuine) text separately; (2) evaluating all three LLMs\u2019 outputs combined compared to human-written text; (3) training the classifier on one LLM\u2019s outputs (GPT-Neo) and evaluating it on another LLM\u2019s outputs (GPT-2, ChatGPT, and all combined). All versions of the classifier were trained with the tokenizer of RoBERTa-Base, AdamW optimization, a learning rate of 2 \u00d7 10\u22125, batch size of 32, and a linear scheduler with 300 warm-up steps. To compute the loss for a given prediction, the model receives the tokenized Showerthought and the corresponding label indicating whether the Showerthought was genuine or generated. 5 Results This section presents our experimental results. Section 5.1 compares lexical characteristics, showing that the LLMs come close to human quality. Next, Section 5.2 explores the survey results, providing insights into crucial Showerthought attributes such as logical validity and creativity. Lastly, Section 5.3 reports on our authorship identification, including patterns to distinguish between human-written and AI-generated Showerthoughts. 5.1 Characteristics of Generated Showerthoughts To assess the quality and similarity of generated to original Showerthoughts, we apply the linguistic metrics described in Section 3 to the AI-generated Showerthoughts utilizing 5,000 random samples 7Specifically: RoBERTaForSequenceClassification. per source (for ChatGPT we use all 5,000 texts generated). Table 1 shows that human-written (genuine) Showerthoughts have a larger vocabulary, are slightly more complex, and contain more difficult words and grammar mistakes. Based on these metrics, GPT-Neo\u2019s generated texts are closer to genuine texts compared to the significantly smaller GPT-2. ChatGPT ranks closest to the human reference regarding average complexity and length, slightly behind GPT-Neo regarding vocabulary size, but farthest away from the reference in terms of difficult words and grammar mistakes. We find that the models produce a negligible amount of duplicate Showerthoughts (GPT-2: 13 of 10,000, GPT-Neo: 162 of 10,000, ChatGPT: 6 of 5,000). Source Genuine GPT-Neo GPT-2 ChatGPT Compl.1 7.4 \u00b1 3.4 6.8 \u00b1 3.0 6.3 \u00b1 2.7 6.9 \u00b1 2.4 Length1 81 \u00b1 38 88 \u00b1 39 87 \u00b1 33 81 \u00b1 21 Vocab.2 13,000 8,700 4,900 7,200 Diffic.3 1.2 0.7 0.4 0.36 Errors3 0.3 0.2 0.1 0.05 1 Mean linguistic complexity (Flesch-Kincaid grade level) and length with standard deviation. 2 Vocabulary size in number of unique words. 3 Number of difficult words and grammatical errors per sentence. Table 1: Comparison of common lexical characteristics (based on 5,000 random samples per source) Comparison of Sentence Embeddings How semantically diverse are Showerthoughts and are our LLMs able to match this diversity? To answer this, we employ sentence embeddings8 for comparing the similarity between human-written and AI-generated content, and to measure the linguistic distance to texts from other subreddits. We have reviewed the embeddings of 1,000 randomly sampled Showerthoughts per source visualized with the t-SNE algorithm (Van der Maaten and Hinton, 2008); GPT-2 and GPT-Neo produce more diverse texts than zero-shot ChatGPT, which matches human-written Showerthoughts based on their output distributing across the same semantic clusters as the human-written texts (Figure 2, in Appendix). 
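A short sketch of this embedding comparison (the texts are placeholders; in the study, 1,000 samples per source are encoded and projected):

```python
from sentence_transformers import SentenceTransformer
from sklearn.manifold import TSNE

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # pre-trained SBERT model named above

genuine = ["Placeholder genuine Showerthought one.", "Placeholder genuine Showerthought two."]
generated = ["Placeholder generated Showerthought one.", "Placeholder generated Showerthought two."]

embeddings = encoder.encode(genuine + generated)     # one 384-dim vector per text
coords = TSNE(n_components=2, perplexity=2.0).fit_transform(embeddings)  # 2D projection for plotting
```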
When comparing these embeddings to 1,000 randomly selected titles from different, similarly large and popular subreddits, we find that every subreddit has a distinct focus, and the generated and genuine Showerthoughts being in the same cluster indicates 8SBERT embeddings in their default, pre-trained configuration (all-MiniLM-L6-v2) that the models are successful in replicating the distinct writing of each subreddit (Figure 3, in Appendix). 5.2 Survey Results A total of 56 human evaluators took our survey (25 participants in Group A and 31 in Group B), resulting in an accumulated 2,520 ratings for the full set of 90 Showerthoughts and an average of 28 ratings per item, as each group reviewed a completely different set of 45 texts. Demographics of Survey Participants The participants\u2019 demographics are influenced by the channels the survey was shared in: The majority of the participants are younger than 30 years old, with 8.5% above 30 years. 89.4% of respondents have some degree of machine learning (ML) experience, 42.6% have trained an ML model at least once, and some of these even work with ML models daily. Only 10.6% indicated little to no experience with ML. 53.1% of participants rarely or never visit Reddit, while the rest visit monthly (8.5%), weekly (38.3%), or daily (27.7%). 31.2% had never heard of r/Showerthoughts before, while 68.7% visited the community at least once in the past \u2013 16.6% are subscribed and follow it regularly, with 6.2% even occasionally engaging in the community. It is clear that this demographic distribution is not representative for the broader population, but a result of the distribution channels used for the survey: the professional and university networks of the authors. From a statistical perspective, this is likely to introduce a bias \u2013 however, we find it highly interesting to study this group of individuals nonetheless, as many are experienced with ML and approximately half are familiar with Reddit, which we hypothesize to potentially improve their abilities. Source Genuine GPT-2 GPT-Neo ChatGPT Score 3.71 2.42 3.40 3.23 Log. Val. 4.20 3.10 3.96 3.55 Creativity 3.63 2.42 3.23 3.45 Humour 3.18 2.10 2.74 2.85 Cleverness 3.41 2.19 3.15 3.07 Table 2: Mean score (on a six-point scale) for the Showerthought quality criteria (Log. Val. = Logical Validity); best score bold, best model underlined Overview of Showerthought Ratings Table 2 displays the average response scores for the first five evaluation criteria. None of the LLMs is able to beat or match the scores of human-written Showerthoughts, but some of them get remarkably close. Among the models, GPT-Neo achieves the best ratings for general score, logical validity, and cleverness, while ChatGPT (based on GPT-3.5-turbo) performs better on creativity and humour. It appears that the general ability to write a convincing, logical, and clever Showerthought can be learned in fine-tuning, but more abstract abilities like creativity and humour improve with model size. The smallest model, GPT-2, performs the worst, consistently short of human-written Showerthoughts, exhibiting an approximately 30% worse performance. GPT-Neo and ChatGPT achieve a much smaller margin with an overall average disparity of 6% and 7%, respectively. The evaluators consistently prefer human-written texts \u2013 however, the margins are small and this does not necessarily have implications for the task of authorship identification, as we show below. 
Manual Authorship Identification From the survey responses regarding authorship of a text, we consider answers between 1 and 3 as a vote for AI-generated, and answers between 4 and 6 as a vote for human-written text. Table 3 displays the average accuracy of the survey\u2019s participants in correctly identifying each Showerthought\u2019s source. For a more granular evaluation, we additionally display the responses by the participants\u2019 experience in Reddit, machine learning (ML), and Showerthoughts.9 We find that the survey participants were not able to consistently identify whether a Showerthought was human-written or AI-generated; Between all human-written (genuine) and GPT-2, GPT-Neo, and ChatGPT generated Showerthoughts the survey participants were only able to correctly identify 63.8%, 73.1%, 48.1%, and 46.2%, respectively. For GPT-Neo and ChatGPT, this is worse than (balanced) random guessing, i.e., a strategy that would choose one of the two classes in 50% of cases. This indicates that GPT-Neo and ChatGPT already generate Showerthoughts sufficiently convincing to mislead human evaluators. Experience with Reddit and Showerthoughts improves the participants\u2019 ability to identify human-written Show9The participants were considered \u2018experienced\u2019 in one of the given categories if they chose one of the top two answers (e.g., visiting Reddit \u2018Weekly\u2019 or \u2018Daily\u2019) and \u2018unexperienced\u2019 if they chose one of the bottom two answers (e.g., visiting Reddit \u2018Never\u2019 or \u2018Rarely\u2019). Overall Reddit Experience ML Experience Showerthoughts Experience Model Yes No Yes No Yes No Genuine 63.8 % 71.3 % 60.2 % 63.2 % 62.3 % 81.6 % 62.0 % GPT-2 73.1 % 71.3 % 72.2 % 74.0 % 74.0 % 60.0 % 72.4 % GPT-Neo 48.1 % 49.0 % 46.6 % 48.0 % 53.5 % 55.0 % 45.7 % ChatGPT 46.2 % 43.9 % 45.1 % 46.8 % 44.0 % 42.5 % 44.3 % No. Participants 56 21 30 25 7 5 45 Table 3: Survey participants\u2019 accuracy in correctly identifying the Showerthought\u2019s source Prec. Rec. F1 Support GPT-2 Generated 0.91 1.00 0.95 2,000 Genuine 1.00 0.90 0.95 2,000 Accuracy 0.95 4,000 Average 0.96 0.95 0.95 4,000 GPT-Neo Generated 0.84 0.99 0.91 2,000 Genuine 0.99 0.82 0.90 2,000 Accuracy 0.90 4,000 Average 0.92 0.90 0.90 4,000 ChatGPT Generated 0.91 0.99 0.95 400 Genuine 0.99 0.91 0.94 400 Accuracy 0.95 800 Average 0.95 0.95 0.95 800 Combined Generated 0.82 0.95 0.88 3,000 Genuine 0.94 0.79 0.86 3,000 Accuracy 0.87 6,000 Average 0.88 0.87 0.87 6,000 Table 4: Precision, Recall, F1, and Support of the RoBERTa models trained for Showerthoughts authorship identification (LLM-specific models and one combined model for all LLMs) erthoughts, but does not improve their ability to detect AI-generated texts consistently. To investigate whether evaluators are more accurate with higher confidence, we evaluated highconfidence answers only (i.e., 1 \u2013 2 and 4 \u2013 6). However, detection accuracy did not improve. In these cases GPT-2 was detected with an accuracy of 79.6%, while there were only small improvements in detecting the other sources. The detection accuracy regarding GPT-Neo and ChatGPT remained below the random-guess baseline. Similar to the overall results, experience with Reddit or Showerthoughts only helped in identifying genuine texts. This shows that independent of their size GPT-Neo and ChatGPT are able to mislead evaluators with the quality of their generated texts. 
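The vote mapping and per-source accuracy described above can be sketched as follows (illustrative helpers, not the authors' analysis code):

```python
def vote(likert_answer):
    """Answers 1-3 count as a vote for AI-generated, 4-6 for human-written."""
    return "generated" if likert_answer <= 3 else "human"

def detection_accuracy(responses, true_source):
    """responses: list of 1-6 answers for texts from one source ('human' or 'generated')."""
    correct = sum(vote(answer) == true_source for answer in responses)
    return correct / len(responses)

# Example: ratings collected for a set of genuine (human-written) posts.
print(detection_accuracy([5, 6, 2, 4, 3], "human"))  # -> 0.6
```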
Participants\u2019 Reasoning for Detecting AIGenerated Texts At the end of the survey, participants could add explanations for their evaluation. Within the 42 responses, the primary factors were: illogical statements, common sense, good grammar, lack of humour / depth / creativity, and repetitive word or syntax usage. Endowing machines with commonsense knowledge has been a long-standing goal in AI (Tandon et al., 2017), which LLMs address to a significant degree. The finding that \u2018good grammar\u2019 was frequently mentioned is noteworthy, as many participants believed that machines excel at grammar while errors indicate human authorship. These findings are consistent with prior research by Dugan et al. (2022), who identified similar factors as the most commonly cited indicators of AIgenerated content. 5.3 Automated Authorship Identification This section presents the evaluation results of the four different RoBERTa classifiers introduced in Section 4.3 \u2013 three LLM-specific classifiers and one trained on the combined texts of all three models. The classification reports presented in Table 4 show that the classifiers trained per model achieve an overall accuracy ranging from 90% to 95%, with the single model trained for all LLMs scoring an accuracy of 87% (Table 4). Across all classifiers, recall for LLM-generated instances approaches 100% with lower precision, while precision for the genuine human-authored class is nearly perfect but with lower recall. These findings indicate the following: (1) These classifiers outperform human evaluators on authorship identification.10, (2) The 10Note: While human evaluators receive a more general instruction at the beginning of the survey, the classification models are fine-tuned for the task. Nonetheless, we consider this a realistic setup, as almost 70% of the evaluators have responded to have prior experience with the Showerthoughts community. For future work, human evaluators could be presented with human-written and AI-generated examples at the beginning of the survey. classifiers consistently misclassify a portion of genuine Showerthoughts as generated, which are either lower-quality examples or similar to generated texts in some regard. (3) The models perform well in detecting the AI-generated texts, with the combined RoBERTa model achieving an average F1 score of 0.87. (4) Current (GPT-based) language models, independent of their size, appear to utilize similarly transparent techniques for language generation and are therefore similarly easy to detect for an ML classifier, even when trained on a different GPT-based model. In an additional experiment, we trained a classifier to distinguish texts of GPT-Neo from genuine ones but evaluate its performance on texts of the other LLMs. The results in Table 5 show that the classifier\u2019s average performance on the texts of other models can achieve a relatively high value of 0.86 when a single model\u2019s texts are utilized for evaluation. However, the results are significantly worse when texts of various models, of which most were not part of the training, are included for evaluation, suggesting fine-tuning with texts from multiple LLMs for better detection performance. Our evaluation of the fine-tuned RoBERTa models shows that none of the classifiers attain 100% accuracy, emphasizing caution when using detection tools, particularly in cases with serious consequences such as academic failure or job loss. 
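A classifier of the kind evaluated in Table 4 can be fine-tuned with standard tooling; the sketch below is a minimal, illustrative setup (toy data, default hyperparameters), not the exact configuration used for the reported numbers.

# Minimal sketch: fine-tune RoBERTa as a binary generated-vs-genuine classifier.
# The two example texts and all hyperparameters are illustrative placeholders.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

tok = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

data = Dataset.from_dict({"text": ["a generated showerthought", "a genuine showerthought"],
                          "label": [1, 0]})          # 1 = generated, 0 = genuine
data = data.map(lambda b: tok(b["text"], truncation=True, padding="max_length", max_length=64),
                batched=True)

trainer = Trainer(model=model,
                  args=TrainingArguments(output_dir="authorship-clf",
                                         num_train_epochs=1,
                                         per_device_train_batch_size=8),
                  train_dataset=data)
trainer.train()   # precision/recall/F1 would then be computed on a held-out split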
In a real-world setting, the specific LLM invoked to generate and spread texts will likely be unknown, and, therefore, cannot provide training samples, which requires robust, generalizable classifiers and the study of non-GPT-based LLMs \u2013 important questions requiring investigation in future work. Nonetheless, our results suggest that the models have learned patterns that strongly indicate whether a given Showerthought is AI-generated, which proves valuable for evaluating the tokens and patterns that contribute the most to the classification results, which we do in the following section.
Table 5: Evaluation of the RoBERTa model trained on GPT-Neo\u2019s generated texts when evaluated on texts from other sources
                        Prec.  Rec.  F1    Support
GPT-2       Generated   0.83   0.90  0.86  2,000
            Genuine     0.89   0.82  0.85  2,000
            Accuracy                 0.86  4,000
            Average     0.86   0.86  0.86  4,000
ChatGPT     Generated   0.99   0.74  0.85  400
            Genuine     0.79   0.99  0.88  400
            Accuracy                 0.86  800
            Average     0.89   0.86  0.86  800
All models  Generated   0.75   0.54  0.62  3,000
            Genuine     0.64   0.82  0.72  3,000
            Accuracy                 0.68  6,000
            Average     0.69   0.68  0.67  6,000
Tokens with Greatest Contribution towards Class Prediction We use the LLM explainability library transformers-interpret to identify the most influential tokens per RoBERTa model. For evaluating correctly and falsely classified texts, we select the top four contributing tokens to each Showerthought\u2019s predicted class, then aggregate and normalize each token\u2019s significance relative to the dataset. The results for the three LLMs are similar \u2013 significant contributors are (1) tokens at the beginning of a sentence, as they start with a capitalized first letter (\u2018If\u2019, \u2018The\u2019, \u2018You\u2019 and \u2018We\u2019 seem to be frequent in generated texts) and (2) punctuation (\u2018.\u2019 and \u2018,\u2019 specifically). Punctuation and specific stop words (e.g., \u2018you\u2019, \u2018the\u2019) seem to be tokens with high attribution scores for the genuine class, indicating that a critical difference between the two classes is the placement of these tokens. ChatGPT shows slightly different top contributors, especially \u2018Why\u2019 and \u2018?\u2019 \u2013 this model seems to generate questions more frequently and seems to have a unique usage of the word \u2018is\u2019. Differences between ChatGPT and the other models may result from ChatGPT\u2019s pre-training data including a different subset of Reddit data and the model\u2019s much larger size. Furthermore, our results indicate that those human-written Showerthoughts falsely classified as AI-generated by GPT-2 and GPT-Neo share the characteristics identified for generated texts, e.g., starting sentences with \u2018You\u2019, \u2018The\u2019, and \u2018We\u2019. ChatGPT shows fewer distinct patterns in contributor variety and overlap between correct and incorrect human classifications. Showerthoughts mistaken as human-written ones use punctuation and blank spaces in a similar way as the genuine texts, while the misclassified human-written texts use words that may occur rarely, or seem to originate from another language. We provide more detailed results in the Appendix. In summary, RoBERTa classifiers have difficulties in cases where the characteristic writing styles of the classes overlap (especially for GPT-2 and GPT-Neo) or the misclassified Showerthought contains rarely-used or foreign words. 
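The attribution analysis above can be reproduced with transformers-interpret; a minimal sketch, where the model path and the example sentence are placeholders:

# Minimal sketch: per-token attributions for a fine-tuned authorship classifier.
# "path/to/finetuned-roberta" and the example text are placeholders.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers_interpret import SequenceClassificationExplainer

tok = AutoTokenizer.from_pretrained("path/to/finetuned-roberta")
model = AutoModelForSequenceClassification.from_pretrained("path/to/finetuned-roberta")

explainer = SequenceClassificationExplainer(model, tok)
attributions = explainer("You can never read the same book twice.")   # list of (token, score)

# Keep the four tokens contributing most strongly to the predicted class.
top4 = sorted(attributions, key=lambda pair: abs(pair[1]), reverse=True)[:4]
print(explainer.predicted_class_name, top4)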
6 Conclusion In this study, we demonstrate that relatively small, GPT-based LLMs can be fine-tuned to replicate the writing style of short texts of high creative quality, using the Showerthoughts subreddit as an example. While it remains to be investigated to what extent the creativity stems from observations encountered in the pretraining corpus as opposed to novel creations, we find that large numbers of diverse texts can be produced with great ease. Human raters confirm that the generated texts exhibit wit, creativity, and humour. This paves the way for diverse applications in productivity, creative work, and entertainment, and is relevant for practitioners deploying small LLMs to be cost-efficient. We find that human evaluators rate the generated texts on average slightly lower regarding creativity, humour, cleverness. This does not seem to aid in authorship detection (\u201cI believe this Showerthought has been written by a real person\u201d), as we find that evaluators could not reliably distinguish AI-generated texts from human-written ones. Additionally, the quality of human-written Showerthoughts varies, with bad ones often being mislabeled as AI-generated. Nonetheless, the possibility to abuse these models to produce spam, misinformation, or other harmful content is a growing concern. Our RoBERTabased authorship identification classifiers performs well after fine-tuning, revealing interesting hidden patterns that help in detecting the texts generated by specific LLMs. While ML classifiers can currently detect AI-generated texts (when fine-tuned for the task), we can assume that the text generation quality of LLMs will further improve, making this task more difficult. Additionally, differently designed models may pursue other strategies for generating texts, necessitating their inclusion when training general-purpose classifiers. Our work extends existing work that LLMs can learn to generate specific types of texts (when fine-tuned on high-quality data) to the domain of creative and witty texts, as exhibited by Showerthoughts, but not limited to those. For example, practitioners who would like to utilize such a LLM for marketing or copy-writing, could not only prompt it for general Showerthoughts about a random topic, but also add the start of a text or topic to their prompt for the LLM to complete. Alternatively, generated texts can be clustered by topic to identify the right topics for a specific use case. Simultaneously, we strongly recommend further research on detection mechanisms \u2013 while training detection models using generated texts of known LLMs and those fine-tuned on known datasets seems feasible, the task becomes more difficult when there is an exceedingly high number of LLMs to consider and even more so if the authorLLM\u2019s architecture or the training dataset is not known. Ethics Statement As the dataset proposed in this paper (see Section 3) is based on real user-submitted data from the Reddit Showerthoughts community, it is important to handle it with care. It should not be used to identify individuals and might contain offensive text or wrong information. This should be considered in future use of the dataset. For the survey (see Section 4.2), we manually removed inappropriate content to make it appropriate for the context of where the survey was distributed, e.g., university mailing lists. 
The type of survey conducted here is exempt from an ethics board review at our institution, as we have carefully designed it to be transparently described and to avoid collection of personal data."
17
+ }
intro_8K/test_introduction_short_2405.01668v1.json ADDED
@@ -0,0 +1,17 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01668v1",
3
+ "title": "WitheredLeaf: Finding Entity-Inconsistency Bugs with LLMs",
4
+ "abstract": "Originating from semantic bugs, Entity-Inconsistency Bugs (EIBs) involve\nmisuse of syntactically valid yet incorrect program entities, such as variable\nidentifiers and function names, which often have security implications. Unlike\nstraightforward syntactic vulnerabilities, EIBs are subtle and can remain\nundetected for years. Traditional detection methods, such as static analysis\nand dynamic testing, often fall short due to the versatile and\ncontext-dependent nature of EIBs. However, with advancements in Large Language\nModels (LLMs) like GPT-4, we believe LLM-powered automatic EIB detection\nbecomes increasingly feasible through these models' semantics understanding\nabilities. This research first undertakes a systematic measurement of LLMs'\ncapabilities in detecting EIBs, revealing that GPT-4, while promising, shows\nlimited recall and precision that hinder its practical application. The primary\nproblem lies in the model's tendency to focus on irrelevant code snippets\ndevoid of EIBs. To address this, we introduce a novel, cascaded EIB detection\nsystem named WitheredLeaf, which leverages smaller, code-specific language\nmodels to filter out most negative cases and mitigate the problem, thereby\nsignificantly enhancing the overall precision and recall. We evaluated\nWitheredLeaf on 154 Python and C GitHub repositories, each with over 1,000\nstars, identifying 123 new flaws, 45% of which can be exploited to disrupt the\nprogram's normal operations. Out of 69 submitted fixes, 27 have been\nsuccessfully merged.",
5
+ "authors": "Hongbo Chen, Yifan Zhang, Xing Han, Huanyao Rong, Yuheng Zhang, Tianhao Mao, Hang Zhang, XiaoFeng Wang, Luyi Xing, Xun Chen",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "cs.CR",
9
+ "cats": [
10
+ "cs.CR",
11
+ "cs.SE"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "LLM Fairness",
15
+ "gt": "Entity-inconsistency bug. Among the program bugs with security implications, the hardest ones to detect are logic flaws that are closely related to a program\u2019s functionalities and its context of operations. Figure 1 shows an example, in which quote() at line 991 should take the argument params instead of query. The type of flaws in the figure is characterized by misuses of the wrong yet valid program entities \u2013 a leaf node in the program\u2019s abstract syntax tree such as a variable identifier, a function name, a control string, etc., which we call Entity-Inconsistency Bug (EIB) in our research. EIBs have been reported in prior studies as unique logic flaws introduced by the human developer who mistakenly chooses \u2217All three authors contributed equally to this research. Prompt: Find bugs in this Python code snippet. 987 def quote\u02d9url(url\u02d9text): 988 (scheme , netloc , path , params , query , fragment) = six.moves.urllib.parse.urlparse(url\u02d9text) 989 # netloc\u02d9quoted = quote(netloc) 990 path\u02d9quoted = quote(path) 991 params\u02d9quoted = quote( query ) 992 query\u02d9quoted = quote\u02d9plus(query) 993 fragment\u02d9quoted = quote(fragment) 994 url\u02d9quoted = six.moves.urllib.parse.urlunparse (( scheme , netloc , path\u02d9quoted , params\u02d9quoted , query\u02d9quoted , fragment\u02d9quoted)) 995 return url\u02d9quoted Output by ChatGPT-4: The code snippet you provided has a couple of issues that could be causing problems: 1. Incorrect usage of the quote function for params: In the code, params quoted = quote(query). This line mistakenly passes the query to the quote function instead of params. It should be params quoted = quote(params). 2. Potential Misuse of quote vs. quote plus: Using quote plus for the query is appropriate, but if any other component requires handling spaces as +, then quote plus should be used accordingly. Conversely, if spaces should be encoded as %20 in the query, then quote should be used instead of quote plus Another false alarm and a corrected code snippet are truncated. Figure 1: An Example of ChatGPT detecting an existing bug. The variable query in the red rectangular is buggy. a different or even a random, meaningless string for a program entity [54]. They often come with security implications, exposing the program to such threats as denial of service, violation of control and data flow integrity, etc. [29, 33], and have also been recorded by the CVE database, e.g., CVE-2022-30780, CVE-2022-23574, CVE- 2020-8889, CVE-2020-10574, and CVE-2019-11463. Like other logic flaws, EIBs are extremely hard to detect in an automatic and scalable way. For the example in Figure 1, although a human reviewer might be able to identify the out-of-place pair 1 arXiv:2405.01668v1 [cs.CR] 2 May 2024 Trovato and Tobin, et al. (query and params quoted) from the context of the program (path and path quoted, query and query quoted), the buggy code does not carry any clear-cut patterns, as those for syntactic vulnerabil- ities like use after a free, insecure API invocation, etc., making it difficult for a traditional static analyzer to locate the flaws. While dynamic testing (e.g., fuzzing) might detect some of such errors, it tends to miss most of them due to its low code coverage. 
Ad- ditionally, EIBs may not be effectively captured by the sanitizers (e.g., AddressSanitizer [26]) indicating the existence of bugs (e.g., transform a silent memory corruption into an explicit crash) for fuzzers (e.g., EIBs may cause only logical errors without memory corruption). As evidence, the EIB shown in Figure 1 has remained undetected for approximately seven years until January 2024 [18] in a popular GitHub repository with 368 stars. EIB detection with LLM. A key observation is that the human reviewer can recognize the presence of an EIB thanks to her capa- bility to \u201cpredict\u201d the correct code entity (e.g., a variable name) at a specific location, according to the context of the program (e.g., the correlations between the argument names and function names from Line 990 and 992-993 in Figure 1). This capability to predict most likely tokens to follow a sequence of tokens (i.e., the context) has also been provided by language models, a Natural Language Processing (NLP) technology. Particularly, with the recent advance in Large Language Models (LLMs) such as ChatGPT, we believe that LLM-powered automatic EIB detection becomes increasingly feasible. So in our research, we performed the first study on the LLM\u2019s potential to automate the discovery of EIBs, focusing on OpenAI\u2019s GPT-4 [4]. Our study shows that indeed the LLM offers powerful semantic analysis capabilities, promising to move us a significant step closer to scalable identification of EIBs. On a dataset of synthesized EIBs (\u00a72.3), GPT-4 successfully captured 60% of them with proper prompt engineering. Particularly, we found that when running open-sourced LLMs such as Code Llama for an infilling task [48] (that is, predicting missing tokens from a program), the LLMs tend to produce tokens consistent with the valid program entities and different ones when a true EIB is encountered (\u00a72.4). However, we found that GPT-4 alone is not ready for supporting a comprehensive, highly scalable, and reasonably accurate EIB detec- tion due to the following limitations. First, it is widely known that LLMs have an inherent tendency to generate inaccurate or fabricated information in their responses, commonly referred to as \u201challucination\u201d [4], sometimes due to the model being distracted to the code context unrelated to the targeted flaws[52]. This weakness leads to a high false positive rate if we directly apply LLMs like GPT-4 to EIB detection, especially on large codebases with a large number of suspicious EIBs. For example, in Figure 1, GPT-4 issues a false alarm on quote plus, probably due to it being distracted by words like plus. In our measurement, on a sampled dataset (with 66 functions randomly selected from the GitHub projects modified after the cut-off date for the GPT-4 training [4]), GPT-4 reports EIBs for nearly half of the functions, most of which turns out to be false positives. Second, GPT-4 tends to miss many EIBs, incurring a high false negative rate, in the presence of a large context window (that is, a large token length) [4]. OpenAI has reported that the context window size is a limitation to applying GPT-4 to cybersecurity [4]. A large context involves many program entities, most unrelated to the EIB, which however distract GPT-4 away from those indeed containing the flaws, as observed in the prior research [52]. Such problems were also observed in our measurement study, rendering it less practical to directly use GPT-4 for EIB detection. Our solution. 
The key to addressing the limitations of GPT-4 is to reduce the distraction of the model, focusing its attention on truly suspicious entities. To identify the locations of these entities, our idea is to leverage a language model\u2019s capability to \u201cfill in the blank\u201d, that is, predicting the vacated program entity through an infilling task [48]. Note that this cannot be done directly using GPT-4: given the large number of entities in a program, directly running the infilling task through the LLM could incur an unbearable monetary cost. So our solution is to utilize a set of lightweight, open-source LLMs to first go through these tokens, identifying suspicious ones before handing them over to GPT-4 for an in-depth analysis. Specifically, this analysis pipeline, which we call WitheredLeaf (\u00a73), works as follows. Given the source code of a program, our approach first performs a static analysis to identify all its entities, including variable names and function names. WitheredLeaf uti- lizes CodeBERT [25], an extremely lightweight language model, to execute the first infilling run: for each occurrence of a given entity, our approach removes it from the program location, runs CodeBERT on the program to predict it, and records all failed pre- dictions. In this way, our approach quickly excludes the bug-free code fragments. The program locations related to the failed predic- tions, which are considered to be suspicious, are further examined by Code Llama [48], a larger, more capable yet slower model. The output of the LLM, including all program locations involving the en- tities inconsistent with the predictions made by both local models, is uploaded to GPT-4 for an in-depth EIB analysis. WitheredLeaf applies a novel prompt engineering technique to focus GPT-4\u2019s attention to these suspicious locations to reduce both false postives and false negatives. The design of WitheredLeaf strikes a balance between the ac- curacy of an EIB detection and the cost of detection, both in terms of execution time and monetary expense. Our evaluation shows that WitheredLeaf achieves a coverage of 62.1% and a reasonable precision of 23.57% (a significant improvement from less than 1% for direct application of GPT-4). Running the approach on 80 Python and 74 C repositories with over 1,000 stars on GitHub, we discovered 93 new bugs from the former and 30 new bugs from the latter. Contributions. We summarize our contributions as follows. \u2022 Understanding LLMs\u2019 Capabilities in EIB Detection. We report the first comprehensive measurement study on various LLMs\u2019 capabili- ties to detect EIBs, shedding light on their potentials and limitations in performing this task. \u2022 Design and implementation of WitheredLeaf. Based upon the understanding from our measurement study, WitheredLeaf, a cas- caded pipeline to overcome the limitations of LLMs, is developed, turning out to be both effective and efficient on various codebases. We plan to make WitheredLeaf publicly available. \u2022 New flaws discovered and fixed. WitheredLeaf discovered 123 unknown EIBs, in Python and C Github repositories with more 2 WitheredLeaf: Finding Entity-Inconsistency Bugs with LLMs than 1,000 stars, 45% of which can be exploited to disrupt the pro- gram\u2019s normal operations. We have submitted 69 pull requests to the relevant developers, out of which 27 have already been merged. \u2022 New EIB dataset. 
As a part of this work, we curated a comprehensive dataset for the EIBs from historical bug fixes, new bugs discovered by WitheredLeaf, and our synthetic dataset. We will release this dataset to facilitate future research in this direction.",
16
+ "main_content": "In order to take the best advantage of LLMs for EIB detection, it is crucial to first understand the performance of LLMs for this specific task. In this section, we aim to comprehensively measure the performance of different language models, which not only includes the powerful state-of-the-art GPT-4, but also other smaller open-sourced models tailored for program code. We start with an overview of different language models and show how they can be utilized for EIB detection with a motivating example, then present the detailed measurement of each model\u2019s performance. We will also discuss our findings and insights drawn from this study. 2.1 Language-Model based EIB Detection Instruct Large Language Models. Represented by OpenAI\u2019s GPT4 [4], these models are trained with enormous textual data crawled from a wide range of sources, including program code, and finetuned to follow the instructions in the prompt. Consequently, these models can assist many different tasks (i.e., general-purpose). However, many such models are commercial and close-sourced (e.g., GPT-4), making the large-scale use and research expensive. Moreover, these models also suffer from the well-known \u201challucination\u201d problem [4], leading to inaccurate responses. Example of EIB Detection. As shown in Figure 1, upon receiving the buggy code and the straightforward prompt, ChatGPT-4 can promptly identify the EIB in its response. However, it is notable that ChatGPT-4\u2019s response also includes two false alarm EIB cases (e.g., that related to quote plus), despite the short length of the input code. This suggests that directly applying GPT-4 on large codebases for EIB detection could incur excessive false alarms, largely due to the distraction problems [52]. Code-specific Language Models. Many language models have been specifically trained for coding tasks (e.g., code generation and completion) recently. These models share a similar transformerbased model generating the probabilities of the tokens based on the context. CodeBERT [25] is one early model trained as a bidirectional encoder [20] and pre-trained on masked language modeling (MLM) tasks, thus capable of code infilling (i.e., predict the code entity such as a variable name at a certain location). More recent models like Code Llama [48], StarCoder [35], Stable Code [44], and DeepSeekCoder [27], are unidirectional, decoder-only transformers, meaning that they take sequential context and output the probabilities of only \u201cthe next token\u201d step by step. Nevertheless, benefiting from the fill-in-the-middle (FIM) training method [10], these models are also able to perform code-infilling tasks. Though not as powerful as general-purpose LLMs, all aforementioned code language models are open-source. Example of EIB Detection. The code-infilling capability of the above models can also support EIB detection. Specifically, we can compare a specific code entity in the original code with a model\u2019s \u201cpredicted\u201d entity at the same location (based on the context), if the two align, then likely that code entity is correct, otherwise, an EIB may exist. For instance, when we input the buggy code snippet in Figure 1 as a prompt into the Code Llama 7B [48] model, masking out the query variable at line 911 and executing it to fill in a variable at the same location, the model promptly suggested params as the output. Such inconsistency successfully indicates the EIB. Conclusion. 
We conclude that different language models, including both smaller models dedicated for code and powerful generalpurpose models, are all capable of detecting EIB issues with different methods. In the remainder of this section, we conduct an in-depth and comprehensive measurement study to better understand the capabilities of different language models. 2.2 Dataset Preparation To our best knowledge, there is no available EIB dataset. We then curate a dataset ourselves, with data collected after the model\u2019s knowledge cut-off date to minimize memorization concerns. We gather data from popular GitHub repositories (more than 1000 stars), to ensure a high code quality. We specifically extract functions from these repositories that have updates post the LLMs\u2019 knowledge cut-off date1. With these functions, we develop two datasets: (1) \ud835\udc37\ud835\udc60\ud835\udc4e\ud835\udc5a\ud835\udc5d\ud835\udc59\ud835\udc52\ud835\udc51: includes functions of varying sizes measured in lines of code (LOC), representing a broad spectrum of real-world code scenarios. (2) \ud835\udc37\ud835\udc60\ud835\udc66\ud835\udc5b\ud835\udc61\u210e\ud835\udc52\ud835\udc60\ud835\udc56\ud835\udc67\ud835\udc52\ud835\udc51: consists of functions intentionally modified to include synthesized bugs, specifically through entity mutations. To generate \ud835\udc37\ud835\udc60\ud835\udc66\ud835\udc5b\ud835\udc61\u210e\ud835\udc52\ud835\udc60\ud835\udc56\ud835\udc67\ud835\udc52\ud835\udc51, we implemented a process similar to previous identifier mutation testing methodologies [47]. This process begins with a comprehensive enumeration of all permissible entities that could feasibly replace a specific entity within a given code snippet. We then select pairs of entities for substitution based on two main criteria: those with the minimal edit distance, to reflect subtle typographical errors a developer might inadvertently make, and those with the smallest semantic discrepancy, to mirror potential confusion a developer might encounter when differentiating between two similar-meaning entities. Although one function could contain multiple such pairs satisfying the criteria, we always only mutate one entity and thus only introduce one EIB at each function. A subset of synthesized bugs in \ud835\udc37\ud835\udc60\ud835\udc66\ud835\udc5b\ud835\udc61\u210e\ud835\udc52\ud835\udc60\ud835\udc56\ud835\udc67\ud835\udc52\ud835\udc51is introduced on functions in \ud835\udc37\ud835\udc60\ud835\udc4e\ud835\udc5a\ud835\udc5d\ud835\udc59\ud835\udc52\ud835\udc51. 2.3 GPT-4\u2019s Performance on EIB Detection To understand general-purpose LLMs\u2019 EIB detection performance, we choose GPT-4 as the test subject in our measurement due to its superior performance [51, 52, 57] and comprehensive support for JSON-formatted output by OpenAI\u2019s API, which plays an important role in automating bug detection tasks. Prompt Design. Research indicates that in-context examples do not significantly improve GPT-4\u2019s accuracy in detecting vulnerabilities [51]. Therefore, given the diversity of EIB patterns, we opted for zero-shot prompts to avoid the influence of irrelevant examples [49]. 1We use gpt-4-0125-preview with training data up to Dec. 2023. 3 Trovato and Tobin, et al. Specifically, we leverage a template configuration approach to construct a system prompt and multiple rounds of interactive prompts to identify the EIB with associated information about the EIB, represented as a set of properties summarized in Table 1. 
Instead of querying all properties (i.e., information) in a single prompt, our template configuration enables us to distribute the queries for these properties across different rounds in interactive sessions. This approach can enhance effectiveness: for example, some properties allow GPT-4 to reflect about its previous results so that potential false alarms can be filtered out (i.e., cross-examination). Based on the properties required in each round, a prompt that queries these properties can be automatically generated. We divide the properties into two categories: (1) mandatory properties must occur in specific rounds, and (2) selectable properties could occur in certain rounds depending on the template configuration. Note that our multi-round template design and the selection of various EIB-related properties are based on multiple effective prompting techniques (e.g., Chain-ofThought, or CoT [56] and cross-examination [19]) to boost GPT-4\u2019s bug detection performance, also detailed in Table 1. We showcase one example of template configuration in Figure 2, which is very close to what we eventually adopt in WitheredLeaf. The system prompt asks GPT-4 to act as a programming language expert (i.e., persona adoption [41]) to help improve its performance. In the first round, we ask GPT-4 to locate the potential buggy code lines and provide explanations. The second round further crossvalidates and scrutinizes the identified potential EIBs to reduce false alarms. We additionally incorporate three selectable properties in this round: self-evaluated categorization, fix proposal, and merely fixable predicate. The former asks GPT-4 to classify the bug for a better confirmation and understanding, the latter properties prompt it to generate a bug fix (typically an identifier replacement at the AST leaf node due to the nature of EIB), which serves as a step in CoT that helps LLM determine if the bug can be merely fixed and helps researchers develop bug patches. As shown in our later measurement results, this well-rounded property combination enhances GPT-4\u2019s performance in EIB detection. Please refer to Appendix \u00a7A for details of the prompt templates in Table 2. Measurement Results. To understand the performance of different prompt templates in terms of recall, specificity, and monetary cost, we first configure a wide range of representative templates featuring different numbers of prompt rounds and sets of selectable properties. Measured templates are listed in Table 2, where the digits represent the order of the prompt round and the alphabet abbreviation (see Table 1) stands for selectable properties involved in certain rounds. For instance, the template in Figure 2 is denoted as 1/2FMCa. Although each function in \ud835\udc37\ud835\udc60\ud835\udc66\ud835\udc5b\ud835\udc61\u210e\ud835\udc52\ud835\udc60\ud835\udc56\ud835\udc67\ud835\udc52\ud835\udc51contains only one known EIB, GPT-4 is capable of identifying multiple EIBs within a single function. For this reason, we employ two statistical approaches: 1) regarding each function as a single report (i.e., function-wise), and 2) regarding each bug as a single report (i.e., bug-wise). We then test all template configurations on subsets of \ud835\udc37\ud835\udc60\ud835\udc66\ud835\udc5b\ud835\udc61\u210e\ud835\udc52\ud835\udc60\ud835\udc56\ud835\udc67\ud835\udc52\ud835\udc51 and \ud835\udc37\ud835\udc60\ud835\udc4e\ud835\udc5a\ud835\udc5d\ud835\udc59\ud835\udc52\ud835\udc51. 
Due to a limited budget, we randomly sampled 66 functions from each of these datasets. With a simple single-round vanilla template (labeled \u201c1\u201d), we note a high FP rate on \ud835\udc37\ud835\udc60\ud835\udc66\ud835\udc5b\ud835\udc61\u210e\ud835\udc52\ud835\udc60\ud835\udc56\ud835\udc67\ud835\udc52\ud835\udc51and high FP number on \ud835\udc37\ud835\udc60\ud835\udc4e\ud835\udc5a\ud835\udc5d\ud835\udc59\ud835\udc52\ud835\udc51, System: You\u2019re a Python expert. Your job is to inspect if the code contains any semantic bugs. Semantic bugs, also known as semantic inconsistencies or logical errors, are a type of bug that occurs when the code is syntactically correct but does not behave as intended or produces incorrect results. These bugs arise from mismatches between the programmer\u2019s intended logic and the actual implementation in the code, where incorrect variable/method name usage or assignment can lead to bugs or vulnerabilities in the program. Here are some requirements: \u2022 Assume the code is syntactically correct, and input parameters to the functions are well-formed and valid. \u2022 Focus solely on detecting semantic bugs, and ignore other problems (e.g., undefined symbols). \u2022 Output in valid JSON format. Round 1 Properties: code line + explanation Round 1 Prompt: {code} Output exact lines of semantic bugs and concise explanations of the bugs. Round 2 Properties: code line* + explanation* + fixed line + merely fixable + category Round 2 Prompt: Inspect these bugs, excluding 1. incorrect or unlikely bugs; 2. bugs caused by undefined symbols; 3. non-semantic bugs. The left bugs only break the intended functionality or lead to vulnerabilities. Answer if the bugs can be fixed by merely changing a variable/method name and fix them. Also, classify this bug into these categories using the code and previous contexts: Security Vulnerability, Logic Bug, Enhancement, Unexpected Behavior, Symbol Not Defined, Module Not Imported, Bad Smell, Not a Bug, or Others. Note that the snippet is from popular repositories and runs, so correct symbols not defined in the snippet are defined at other places, which is not Logic Bug. If you assign Others category, explicitly name the category. Figure 2: A two-round prompt template with system prompt. predominantly attributed to hallucination, as verified through selective manual inspections. First, we want to understand the effectiveness of different properties used as filters (see Table 1). We find that the merely fixable property is effective in filtering out nonEIB cases, which cannot be simply fixed by changing a single code entity. This filtering increases the specificity by 25%-72%, at the cost of a slight increase in FNs. Exclude candidates whose category \u2209{Logic Bug, Security Vulnerability, Unexpected behavior, Bad Smell} also effectively increases the specificity. However, using priority != high as a filtering criterion, although reducing FPs, also leads to missing of TP cases, which is considered worse than the category filter. This might be caused by GPT-4\u2019s inadequate grasp of the bug severity and consequences. On the other hand, we analyze the effect of additional prompting rounds. Introducing the second round prompt increases the recall by approximately 4% from 1FM to 1/2FM and specificity by 32%-46%, demonstrating cross-examination can eliminate many FP cases in EIB detection. 
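As an aside, the two-round template of Figure 2 can be driven programmatically. The sketch below uses the OpenAI chat API with JSON-mode output as a stand-in for the JSON-formatted tool calling mentioned in Table 1; the prompt strings are abbreviated placeholders for the Figure 2 texts.

# Minimal sketch: two-round EIB query against GPT-4 with JSON answers (cf. Figure 2).
# SYSTEM_PROMPT / ROUND1 / ROUND2 stand for the Figure 2 wording (which asks for JSON output).
import json
from openai import OpenAI

client = OpenAI()
SYSTEM_PROMPT, ROUND1, ROUND2 = "...", "...", "..."   # abbreviated; see Figure 2

def ask(messages):
    rsp = client.chat.completions.create(model="gpt-4-0125-preview",
                                         temperature=0,
                                         response_format={"type": "json_object"},
                                         messages=messages)
    return rsp.choices[0].message.content

history = [{"role": "system", "content": SYSTEM_PROMPT},
           {"role": "user", "content": ROUND1}]                 # round 1: code line + explanation
round1_report = ask(history)

history += [{"role": "assistant", "content": round1_report},
            {"role": "user", "content": ROUND2}]                # round 2: cross-examination
round2_report = json.loads(ask(history))                        # fixed line, merely fixable, category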
Notably, adding the third prompt round turns out to be less effective in both specificity, recall, and monetary cost (i.e., extra rounds lead to more tokens). This can be attributed to unnecessary cross-examination, as the ground truth bugs are detected but filtered out. The root cause might be distraction [52], 4 WitheredLeaf: Finding Entity-Inconsistency Bugs with LLMs Table 1: Mandatory and selectable requested properties in JSON-formatted tool calling. Property Ab. Cat. Description Purpose Prompt Tec. code line Co 1M. the exact line of code with semantic bug locating the bug / explanation E 1M. a concise explanation of this bug reasoning about of the bug / code line* Co* 23M. the original (unfixed) code line with semantic bug, same as in previous round locating the bug in follow-up rounds / explanation* E* 23M. a concise explanation of this bug and your reasoning reasoning about of the bug in follow-up rounds / merely fixable M 12S. the bug can be fixed by merely changing a variable/method name to another one filtering out bugs that does not occur on a single AST node Cross-exam. fixed line F 12S. the line with the bug fixed CoT for merely fixable and helping fix CoT category Ca 23S. the category of this bug; be cautious when answering Security Vulnerability and Logic Bug filtering out non-semantic bugs Cross-exam. priority P 23S. the priority level of this bug: high/medium/low filtering out less interesting bugs Cross-exam. Ab. represents the abbreviation of the property; Cat. represents the category of the property: in round X, it is mandatory (M.) or selectable (S.) in which some irrelevant information in the first and second rounds distracts the LLM. Thus, two-round prompting is enough. Conclusion. Based on our measurement results, we draw two major conclusions. (1) Template Choice. We decide that the template denoted by 1/2FMCa (detailed in Figure 2) has the best overall performance, which is eventually adopted by WitheredLeaf with suspicious lines highlighted. We elaborate on the details of the highlighting mechanism in \u00a73.4 (2) Limitations of GPT-4 on EIB Detection. From our study, GPT-4 shows some potential in EIB detection, however, its performance is still far from practical. A significant portion of the EIBs (26 out of 66) is missed even by the most performant \u201c1/2FMCa\u201d template. Moreover, EIBs in real-world code can be much rarer than EIBs in \ud835\udc37\ud835\udc60\ud835\udc66\ud835\udc5b\ud835\udc61\u210e\ud835\udc52\ud835\udc60\ud835\udc56\ud835\udc67\ud835\udc52\ud835\udc51. Demonstrated via specificity evaluated on \ud835\udc37\ud835\udc60\ud835\udc4e\ud835\udc5a\ud835\udc5d\ud835\udc59\ud835\udc52\ud835\udc51, 19 out of 66 FP reports remain after filtering, resulting in a high false positive rate if we naively let GPT-4 scan the whole codebase. 2.4 Code Language Models on EIB Detection As explained in \u00a72.1, the code infilling capability of smaller code language models can also be utilized for EIB detection (e.g., discrepancies between \u201cpredicted\u201d and actual code entities indicate potential EIBs). In this part, we measure various code language models\u2019 EIB detection performance. Although general-purpose LLMs like GPT-4 are also capable of code infilling, the associated monetary cost could be very high for large codebases. For example, infilling every entity for a function of 40 LOC costs roughly $2.1 for GPT-4, and the cost increases quadratically with code size. We thus focus on open-source code language models in this measurement. 
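The infilling-based check sketched in \u00a72.1 and measured here can be expressed with Code Llama\u2019s fill-in-the-middle interface in the transformers library; the snippet, decoding settings, and comparison below are simplified illustrations rather than the measurement harness itself.

# Minimal sketch: flag an entity as suspicious when Code Llama's infill disagrees with it.
# Uses the <FILL_ME> infilling interface of codellama/CodeLlama-7b-hf; greedy decoding.
from transformers import AutoTokenizer, AutoModelForCausalLM

name = "codellama/CodeLlama-7b-hf"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, device_map="auto")

def infill_is_consistent(code_with_hole: str, original_entity: str) -> bool:
    """Infill the <FILL_ME> hole and compare the generated entity with the original one."""
    ids = tok(code_with_hole, return_tensors="pt").input_ids.to(model.device)
    out = model.generate(ids, max_new_tokens=8, do_sample=False)
    filled = tok.decode(out[0, ids.shape[1]:], skip_special_tokens=True).strip()
    return filled.startswith(original_entity)     # consistent -> unlikely to be an EIB

# A disagreement here would mark `query` (cf. Figure 1) as a suspicious EIB site.
snippet = "params_quoted = quote(<FILL_ME>)\nquery_quoted = quote_plus(query)"
print(infill_is_consistent(snippet, "query"))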
Hardware and Parameters. We conduct the experiments on a Linux server with two AMD EPYC 9124 CPUs and one NVIDIA H100 GPU, with the temperature set to 0 for all models. Among our tested models, CodeBERT operates on the CPU, while the others utilize the GPU. Consistency and Efficiency of Code Infilling. Code infilling is the backbone capability of code language models enabling them to detect EIBs, we thus first measure this basic ability of various code models. At a high level, we mask each variable and method call name entity, one at a time, in the code from \ud835\udc37\ud835\udc60\ud835\udc4e\ud835\udc5a\ud835\udc5d\ud835\udc59\ud835\udc52\ud835\udc51and then let various code models fill in the \u201cblank spaces\u201d, gauging whether they can correctly \u201crecover\u201d the masked code entities (i.e., consistency) and how fast the infilling can be performed (i.e., efficiency). To explore the impact of context length (e.g., the amount of code surrounding the \u201cblank space\u201d available to code models) on code infilling performance, our measurement tests 6 different buckets of context lengths (listed in Table 3). Our results are detailed in Table 3. Overall, open-source code models consistently fill in the correct code entities in more than 80% cases, with Deepseek-Coder 1.3B and CodeBERT being exceptions. Regarding the efficiency, most models take longer to process extended contexts, except for CodeBERT, as its token limitation is 512 and the input is truncated. For the impact of context length on consistency, in general, we only observe a marginal influence for most models. However, for Code Llama and Stablecode, minimal context (0-20 LOC) results in consistency deterioration. Upon further analysis, we find these models struggle to generate consistent tokens when the masked original tokens appear only once in the context a situation more prevalent with smaller contexts. We also note a slight decline in consistency in context length of 40-80 LOC. Longer functions might provide more context or redundancy, helping the model to better understand the overall purpose and functionality of the code. In contrast, mid-length functions might provide insufficient context for the model to accurately infer the missing information, while shorter functions might be simple enough that each line carries more predictive weight. Importantly, Code Llama 13B encounters memory limitations for large context lengths of 320-640 LOC, suggesting that models with over 13B parameters may not be suitable for single GPU setups on extensive contexts the higher hardware requirement makes it more difficult to deploy. Recall of Code Infilling. Models with over 2 billion parameters display a promising capability to generate consistent tokens according to our previous measurement (Table 3). We then proceed to test their capability of recalling known EIBs, following the method explained in \u00a72.1. For this purpose, we use a randomly selected subset of \ud835\udc37\ud835\udc60\ud835\udc66\ud835\udc5b\ud835\udc61\u210e\ud835\udc52\ud835\udc60\ud835\udc56\ud835\udc67\ud835\udc52\ud835\udc51, which includes 1,383 artificially introduced EIBs in the functions contained in \ud835\udc37\ud835\udc60\ud835\udc4e\ud835\udc5a\ud835\udc5d\ud835\udc59\ud835\udc52\ud835\udc51. Specifically, we let the code models to infilling: (1) the locations of the buggy code entities, seeing whether there are any discrepancies (e.g., indicating EIBs), and (2) the other locations on the same line as the buggy 5 Trovato and Tobin, et al. 
Table 2: Bug detection on GPT-4 with different prompt templates. Ground Truth Synthesized Dataset; Contains One Bug Sampled Dataset Template Function-wise Bug-wise Total Cost Function-wise Bug-wise Total Cost TP F. FN U. FN Rec. TP FP FN TN FP TN Spe. FP TN Spe. 1 38 0 28 57.6 43 228 0 0 1.33$ 66 0 0.0 323 0 0.0 1.43$ 1FM 37 1 28 56.1 37 47 3 62 1.28$ 64 2 3.0 143 49 25.5 1.41$ 1/2FM 40 2 24 60.6 42 38 9 177 3.24$ 45 21 31.8 91 234 72.0 3.80$ 1/2FM/3P 18 22 26 27.3 18 14 34 206 5.38$ 28 38 57.6 50 276 84.7 6.11$ 1/2FM/3Ca 22 16 28 33.3 24 25 35 219 5.47$ 31 35 53.0 54 302 84.8 6.47$ 1/2FMCa 39 1 26 59.1 43 18 3 183 3.23$ 19 47 71.2 40 268 87.0 3.77$ 1/2FMCa w/ HL 55 9 2 83.3 58 11 14 116 3.11$ 32 34 51.5 47 195 80.6 3.36$ TP = True Positives; TN = True Negative; FP = False Positives; FN = False Negatives; F.FN = Filtered FN; U.FN = Unfound FN; Rec. = Recall Percentage; Spe. = Specificity Percentage. We set the temperature parameter to 0 in this experiment. We assume all functions are free of EIB in the sampled dataset, as the density of EIB is very low in real-world code. The abbreviations used in the templates can be found in Table 1. The setting 1/2FMCa w/ HL is used in WitheredLeaf, highlighting the line of EIB with other three randomly selected lines in the code (detailed in \u00a73.4). Table 3: Language models\u2019 infilling consistency on the sampled dataset of various context lengths. Models Size 0-20 LOC 20-40 LOC 40-80 LOC 80-160 LOC 160-320 LOC 320-640 LOC Avg. Cons. Cons. Speed Cons. Speed Cons. Speed Cons. Speed Cons. Speed Cons. Speed Code Llama 7B 83.4 243 88.0 431 83.7 647 87.4 1362 85.6 1689 86.0 6236 85.7 Code Llama 13B 85.2 360 92.1 685 87.6 966 88.5 1710 86.7 2750 OOM 88.2 Deepseek 1.3B 56.7 311 42.6 341 36.8 333 61.2 333 65.8 506 74.8 2066 56.3 Deepseek 6.7B 83.8 214 88.0 332 76.9 524 84.9 896 80.5 1550 76.3 2736 81.8 Stablecode 3B 83.9 148 88.6 127 81.6 176 86.5 168 84.3 216 83.9 530 84.8 CodeBERT 125M 31.1 202 31.2 242 23.2 216 23.1 207 24.4 202 17.0 261 25.0 Consistency is abbreviated as Cons., and written in percentage. Speed represents the average generation speed for each identifier in milliseconds. 100 functions are sampled for LOC sizes of 0-20, 20-40, and 40-80 LOC with 1703, 4713, and 9278 infilling tasks, respectively; 50 functions for 80-160 LOC with 7499 FIM tasks; 20 functions for 160-320 LOC with 5287 FIM tasks; 10 functions for 320-640 LOC with 7319 FIM tasks. Table 4: Lanugage models\u2019 infilling recall and consistency on the synthesized dataset. Model Size Recall on EIB Cons. of Others Code Llama 7B 99.0 79.3 Code Llama 13B 99.4 81.4 Deepseek 1.3B 97.8 46.4 Deepseek 6.7B 99.0 76.3 Stablecode 3B 98.8 78.3 CodeBERT 125M 90.0 34.0 location, to assess the impact of EIB on code infilling for nearby code locations. Our measurement results are depicted in Table 4, as seen, nearly all models, except CodeBERT, demonstrate a high recall for known EIBs, suggesting the feasibility of utilizing code models to hunt for EIBs. For nearby code locations on the same line of the EIB sites, interestingly, we observe 6%-7% drops in the consistency (the \u201cCons. of Others\u201d column in Table 4) of infilling tasks for all models except CodeBERT. This phenomenon indicates that EIB can \u201cinterfere\u201d with the infilling task, which has been utilized to enhance the robustness of EIB detection, as discrepancies are observable not only at the exact locations of the bugs but also in nearby areas. The related design details are presented in \u00a73.4. 
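The consistency and recall figures in Tables 3 and 4 reduce to comparing each infilled entity with the original one; a minimal sketch of that bookkeeping, with the prediction records as an assumed structure:

# Minimal sketch: consistency is measured on clean sites, recall on synthesized EIB sites.
# Each record is (original_entity, infilled_entity, is_synthesized_bug) -- placeholders below.
records = [("params", "params", False),
           ("fragment", "fragment", False),
           ("query", "params", True)]

clean = [(o, p) for o, p, bug in records if not bug]
buggy = [(o, p) for o, p, bug in records if bug]

consistency = sum(o == p for o, p in clean) / max(len(clean), 1)    # infill matches original
recall = sum(o != p for o, p in buggy) / max(len(buggy), 1)         # infill flags the planted bug
print(f"consistency={consistency:.1%}, recall={recall:.1%}")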
A potential explanation for this phenomenon is that EIBs increase the perplexity, leading to larger uncertainty in token prediction [55]. Conclusion. We highlight major conclusions from our study: (1) Effective True Negative Filtering. As mentioned, if the code models\u2019 predicted code entity is consistent with the original one, we deem the non-existence of EIBs for that entity. The high consistency (higher than 80%) on the basically EIB-free \ud835\udc37\ud835\udc60\ud835\udc4e\ud835\udc5a\ud835\udc5d\ud835\udc59\ud835\udc52\ud835\udc51of various code models (Table 3) thus suggests that they can effectively filter out more than 80% EIB-free code entities (i.e., True Negatives). Moreover, this filtering only introduces very few false negatives as shown in Table 4, the vast majority (e.g., at least 97.8% for all models except CodeBERT) of true EIB cases will still trigger inconsistencies. These results highlight the values of code models as effective true negative filters. (2) Limitations of Code Models on EIB Detection. Despite that code models can filter out a large number of true negatives, the filtered results could still contain lots of false alarms. For instance, there are 10% 20% inconsistencies (e.g., suspicious EIBs) in Table 3, but the real EIBs might only take less than 1%, yielding a high false positive rate. The primary reason is that code models\u2019 token generation is largely uncontrolled, as they are not exclusively trained to generate specific entities in the FIM task. Instead, they inconsistently produce characters like pound signs (to start a Python comment), spaces, and parentheses, even when asked to infill a variable name. These issues make it difficult to detect EIBs solely relying on code models. 6 WitheredLeaf: Finding Entity-Inconsistency Bugs with LLMs Figure 3: Pipeline of WitheredLeaf. 3 DESIGN AND IMPLEMENTATION Inspired by our measurement in \u00a72, we present WitheredLeaf, an automatic system to effectively and efficiently detect EIBs in various codebases. In this section, we detail its design and implementation. 3.1 Overview Our measurement study (\u00a72) reveals that, though both generalpurpose LLMs (e.g., GPT-4) and smaller code language models show some potentials in EIB detection, each of them has its own weaknesses. Fortunately, these weaknesses can be largely mitigated with the synergy of different models. For example, the high expenses and false positive rate of GPT-4 can be alleviated if we use a smaller and cheaper model to filter out most EIB-free code locations in advance and provide precise locations of suspicious EIBs. Similarly, the excessive inconsistencies identified by code language models can also be further verified by the powerful GPT-4. Based on this core idea, WitheredLeaf employs a cascaded approach that integrates different models in a pipeline to achieve an optimal result. WitheredLeaf\u2019s Pipeline. We depict WitheredLeaf\u2019s pipeline in Figure 3. Initially, the code undergoes processing by a static analyzer to compile a comprehensive list of all code entities subject to EIB detection. Then, WitheredLeaf transforms this list into code-infilling tasks, undertaken by locally deployed small models. These models sequentially filter out most EIB-free code locations. WitheredLeaf then automatically prompts GPT-4 to investigate and verify the remaining suspicious EIB sites. GPT-4\u2019s responses will undergo a secondary filtration process and then be compiled into bug reports for further manual inspection. 
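The pipeline just described can be summarized as a short cascade; the callables below are placeholders for the stages in Figure 3 (static analysis, CodeBERT filter, Code Llama filter, GPT-4 check), not WitheredLeaf\u2019s actual interfaces.

# Minimal sketch of the Figure 3 cascade: each stage prunes candidates for the next one.
def detect_eibs(source_code, extract_entities, codebert_consistent,
                codellama_consistent, gpt4_two_round_check):
    entities = extract_entities(source_code)                                         # static analysis: AST leaf entities
    survivors = [e for e in entities if not codebert_consistent(source_code, e)]     # stage 1 filter
    survivors = [e for e in survivors if not codellama_consistent(source_code, e)]   # stage 2 filter
    reports = gpt4_two_round_check(source_code, survivors)                           # two-round template, highlighted lines
    keep = {"Logic Bug", "Security Vulnerability", "Unexpected Behavior", "Bad Smell"}
    return [r for r in reports if r.get("merely_fixable") and r.get("category") in keep]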
This cascaded design eliminates most true negative cases while keeping most true positive cases (as discussed in \u00a72), thereby preventing GPT-4 from being overwhelmed and significantly reducing the false positive rate (\u00a73.2). For local models, we develop schemes to control token generation (\u00a73.3), which enhances WitheredLeaf\u2019s specificity and true negative filtering. Finally, WitheredLeaf precisely identifies suspicious EIB locations (with the help of code models) and feeds them to GPT-4, which compels GPT-4 to thoroughly reason about suspicious code lines (i.e., highlight), thereby reducing the false negative rate (\u00a73.4). 3.2 Cascaded Detection As indicated by the measurement results in \u00a72, achieving high coverage, precision, and speed simultaneously is challenging. Our basic idea is to integrate different models in a cascaded design to achieve a desirable tradeoff among these aspects; the problem then becomes which exact models we should choose for each pipeline stage. We reason about our choices in this section. Problem Formalization. We assume a pipeline of n stages, indexed by i, each of which is equipped with a language model. We use p_i to denote the true negative rate (i.e., specificity) of the model at stage i, q_i the true positive rate (i.e., recall), and t_i the throughput (i.e., execution speed). Let N_i represent the number of potential EIB cases that need to be processed at stage i, and ε_i the ratio of actual EIB instances relative to N_i. We can derive the number of true/false positives/negatives at each stage:
N_TN,i = N_i p_i (1 - ε_i),    N_TP,i = N_i q_i ε_i,
N_FP,i = N_i (1 - p_i)(1 - ε_i),    N_FN,i = N_i (1 - q_i) ε_i.
After stage i, the true negatives N_TN,i are correctly filtered out, while the false negatives N_FN,i are filtered out incorrectly. True and false positives are escalated to the next stage (N_{i+1} = N_TP,i + N_FP,i).
We can then define the maximum number of missed bugs M of the entire pipeline and the total execution time T (assuming n stages in total):
M = Σ_{i=0}^{n} N_i (1 - q_i) ε_i,    T = Σ_{i=0}^{n} N_i t_i.
Additionally, the true positive proportion at stage i+1 can be derived from the metrics of the previous stage i:
ε_{i+1} = N_TP,i / (N_TP,i + N_FP,i) = q_i ε_i / (1 - p_i - ε_i + (p_i + q_i) ε_i).
For an n-stage pipeline, the aggregated precision of all models before the final stage is then ε_n. For the final stage n, WitheredLeaf adopts the powerful, state-of-the-art GPT-4, which will process N_n EIB candidates, with specificity and recall p_n and q_n, respectively. Next, we formulate the total cost of the detection pipeline, which includes the cost of computing resources C_comp per time unit, the cost per API invocation C_api of commercial LLMs like GPT-4, and the penalties associated with missed bugs C_miss (e.g., missed vulnerabilities can cause significant financial loss) and false alarms C_check (e.g., extra manual reviewing effort):
C = C_api N_n + C_comp T + C_miss M' + C_check N'_FP.
Here, M' represents the final count of missed bugs, and N'_FP denotes the final count of false positive cases for the entire pipeline. While the values of C_api and C_comp can be obtained from service providers, C_miss and C_check are user-configurable variables, depending on different needs and priorities. Given this cost formalization, the model selection problem for pipeline stages can be transformed into an optimization problem aiming to minimize the total cost. WitheredLeaf\u2019s Model Selection. As an experimental project, we weigh bug coverage (i.e., fewer false negatives) more heavily, so we would set a higher C_miss and focus on reducing M'. As indicated by the formula for M, it increases as more pipeline stages are added. In other words, additional pipeline stages can enhance the precision, but at the cost of more missed bugs and longer execution time. 
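A small numeric sketch of these recurrences follows; with the illustrative values used in the next sentence (ε_0 = 0.002, p_i = 0.8, q_i = 0.9) it reproduces the quoted escape rates up to rounding.

# Minimal sketch: propagate the stage recurrences and report the share of EIBs missed so far.
def cascade(n_stages, eps0=0.002, p=0.8, q=0.9, n0=1_000_000):
    n, eps, missed = float(n0), eps0, 0.0
    for _ in range(n_stages):
        tp = n * q * eps                      # N_TP,i
        fp = n * (1 - p) * (1 - eps)          # N_FP,i
        missed += n * (1 - q) * eps           # N_FN,i dropped at this stage
        n, eps = tp + fp, tp / (tp + fp)      # N_{i+1} and eps_{i+1}
    return missed / (n0 * eps0), eps          # fraction of all EIBs missed, precision fed forward

for k in range(1, 5):
    frac, precision = cascade(k)
    print(f"{k} filter stage(s): ~{frac:.1%} of EIBs missed, candidate precision {precision:.1%}")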
For example, assuming \ud835\udf160 = 0.002, \ud835\udc5d\ud835\udc56= 0.8 and \ud835\udc5e\ud835\udc56= 0.9, maximally 9.9% of EIBs evades after the first local LLM, 7 Trovato and Tobin, et al. 18.9% evades after the second, 27.1% after the third, while reaching to 34.4% with the introduction of the fourth stage. Therefore, to maximize WitheredLeaf\u2019s capability to capture EIBs, we fix the number of cascaded local models to two, with one GPT-4 stage. We expect roughly 20% precision on the final report, which aligns with previous work on inconsistency bug detection [7] and is acceptable by developer community [8]. However, calculating with the metrics measured previously in \u00a72, such a goal is not practical without improvements on the specificity of current models. To address this problem, we develop techniques to control the token generation of code langauge models and subsequently improve their specificity, with neglectable loss of recall (\u00a73.3). Finally, considering the execution time and above metrics, we select CodeBERT, as the first local model to pre-filter negative EIBs and Code Llama 7B as the second local model. More details on their metrics and calculations are presented in \u00a74.1. 3.3 Controlled Token Generation As measured in \u00a72.4, local LLMs are not generating tokens in a controlled manner, leading to a large number of inconsistencies in infilling tasks for non-EIB entities. The major causes of such inconsistencies are different for different models. Specifically, the encoder-based model (i.e., CodeBERT) is unaware of the actual length of the entity in infilling, while decoder-only models (e.g., Code Llama) are unaware of what kind of entity should be generated. The high-level idea is to inspect the probabilities of generated tokens, and we propose two algorithms for these models. Static Analysis for Infilling Task Preparation. An initial step involved the construction of input infilling tasks to the models. Each infilling task is constituted by an altered function from the code snippet for detection, in which one of the entities in the function is replaced by a designated token indicating the infilling position. Such entities are derived from abstract syntax trees (AST) leaf nodes. Associated information, such as substituted original tokens and the entity type, will be used to guide token generation. A function may contain a number of entities, introducing unnecessary overhead to subsequent tasks. In order to expedite the detection process without detriment to precision, certain heuristic approaches were employed. Currently, WitheredLeaf targets EIBs on variable and function names. To exclude entities with a minimal likelihood of containing bugs, it skips: 1) function names in their declarations; 2) declarations of variables, including formal arguments; and 3) imported library names (e.g., torch in Python). These entities typically cannot incur EIBs, as related errors are mostly caught in compilers or interpreters. CodeBERT. By design, the CodeBERT model aims to generate a fixed number of tokens in the code. An entity name usually contains more than one token. For example, \u201dparams quoted\u201d contains four tokens: params, , qu, and oted. By evaluating the \ud835\udc37\ud835\udc60\ud835\udc66\ud835\udc5b\ud835\udc61\u210e\ud835\udc52\ud835\udc60\ud835\udc56\ud835\udc67\ud835\udc52\ud835\udc51, we find a feasible search range that can cover 91.2% cases. 
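The infilling-task preparation described above relies on tree-sitter in the actual system; as a simplified, single-language illustration, the sketch below enumerates maskable name entities with Python's built-in ast module instead, and only loosely mirrors the skip heuristics (declarations, formal arguments, imports). The placeholder token name is an assumption.

    import ast

    MASK = "<ENTITY_MASK>"   # placeholder; the real infilling token is model-specific

    def infilling_tasks(source: str):
        """Yield (masked_code, original_name) pairs for name entities in `source`."""
        tree = ast.parse(source)
        skip = set()
        for node in ast.walk(tree):
            if isinstance(node, ast.FunctionDef):
                skip.add(node.name)                       # function name in its declaration
                skip.update(a.arg for a in node.args.args)  # formal arguments
            elif isinstance(node, (ast.Import, ast.ImportFrom)):
                skip.update(a.asname or a.name.split(".")[0] for a in node.names)
        lines = source.splitlines()
        for node in ast.walk(tree):
            if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Load) and node.id not in skip:
                line = lines[node.lineno - 1]
                masked = line[:node.col_offset] + MASK + line[node.col_offset + len(node.id):]
                yield "\n".join(lines[:node.lineno - 1] + [masked] + lines[node.lineno:]), node.id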
Given the entity\u2019s potential token lengths range, we can use CodeBERT to check the consistency of the original tokens, as shown in the Algorithm 1. For each possible token length, CodeBERT will generate an entity candidate and its corresponding score. Since CodeBERT serves as the pre-filter, it is crucial to minimize the Algorithm 1: CodeBERT-based Consistency Check Input: maskedCode, originalTokens, searchRange, tokenThreshold, consistencyThreshold Output: consistencyFlag Initialize candidateScores as empty map ; originalLength \u2190originalTokens.tokensLength; tokensLengthsRange \u2190lengthRange(originalLength, searchRange); foreach tokensLength in tokensLengthsRange do entity, score \u2190CodeBERTPredict(maskedCode, tokensLength) ; /* entity.tokensLength equals to tokensLength */ if score <tokenThreshold then skip to the next iteration; candidateScores[entity] \u2190score ; if originalTokens in candidateScores then foreach entity, score in candidateScores do if score > consistencyThreshold and entity \u2260originalTokens then return False; return True; return False; false negative rate. Therefore, we select a \ud835\udc61\ud835\udc5c\ud835\udc58\ud835\udc52\ud835\udc5b\ud835\udc47\u210e\ud835\udc5f\ud835\udc52\ud835\udc60\u210e\ud835\udc5c\ud835\udc59\ud835\udc51; if the returned entity candidate\u2019s score falls below this threshold, we skip this token length and proceed to the next iteration. In the end, if all of the possible context lengths are skipped, we consider it as unpredictable for CodeBERT and escalate it to the next component. For the inconsistency check, if the original entity name is the only entity name generated by CodeBERT, we consider this as semantic consistency, and will not escalate any report to the next component. If there are multiple entity names in the candidates list and the original entity name is included, we check the score of each candidate. Only if no candidate other than the original entity name has a confidence score higher than consistencyThreshold, we consider this as semantic consistency. Besides, if the original entity name does not occur in the entity candidates list, we consider this as semantic inconsistency. Decoder-only Models. Token generation for decoder-only models like Code Llama is different from CodeBERT, as these models are not aware of the number of tokens to generate, and the generation stops till it generates a special end-of-sequence token. Intuitively, one way to enhance the specificity is to inspect the probabilities of generated tokens and regard a negative case if the original tokens are highly probable, if not the most probable. However, these models generate tokens sequentially, conditioned on the previous generation step, which renders inspection of the probabilities of a token meaningless if the model has already generated tokens inconsistent with the original ones. Thus, if the first generated token is \u201cinvalid\u201d, the whole generation process will follow this invalid direction. For example, when a variable name is masked at the beginning of the function body, the model usually begins to generate a documentation string of the function (i.e.,, the token \u201d\u201d\u201d), leading subsequent generation following a documentation string rather than the variable name. Therefore, WitheredLeaf needs to harness the generation, ensuring the generated tokens constitute a syntactically valid entity. 
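Algorithm 1 above can be transcribed almost directly into Python. In this sketch, codebert_predict is a stand-in for the actual CodeBERT infilling call (it must return the best candidate of a given token length together with its score); the default thresholds are the values the evaluation in §4.1 eventually selects.

    def codebert_consistency(masked_code, original_tokens, codebert_predict,
                             search_range=2, token_threshold=0.1, consistency_threshold=0.8):
        """Return True if the original entity is judged consistent (Algorithm 1).
        codebert_predict(masked_code, k) -> (entity_tokens, score) is a placeholder."""
        candidates = {}
        orig_len = len(original_tokens)
        for k in range(max(1, orig_len - search_range), orig_len + search_range + 1):
            entity, score = codebert_predict(masked_code, k)
            if score < token_threshold:
                continue                                   # unpredictable at this length
            candidates[tuple(entity)] = score
        if tuple(original_tokens) in candidates:
            for entity, score in candidates.items():
                if score > consistency_threshold and entity != tuple(original_tokens):
                    return False                           # a confident competing candidate exists
            return True                                    # original entity is consistent
        return False                                       # original never generated -> inconsistent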
For example, variable names in Python can only contain capital 8 WitheredLeaf: Finding Entity-Inconsistency Bugs with LLMs Algorithm 2: Consistency Check for Decoder-only Models Input: maskedCode, originalTokens, probThresh, rankThresh, Output: consistencyFlag rankSum \u21900; generatedTokens, leftTokens \u2190[], originalTokens; while leftTokens.length >0 do nextTokenProbList \u2190top k(CodeLlamaPredict(maskedCode, generatedTokens)); foreach token, prob in nextTokenProbList do if validateToken(token) then if leftTokens.startswith(token) then leftTokens.remove(token) generatedTokens.append(Token) break to generate the next token; else if prob >probThresh then return False; rankSum \u2190rankSum + 1; else continue to the next iteration; if rankSum >rankThresh then return False; return True; and lower English letters, underscore, and non-beginning digits. The high-level idea of our approach is to select the tokens guided by the original tokens, and inspect the probabilities to judge the consistency. The detailed algorithm is demonstrated in Algorithm 2. WitheredLeaf generates tokens one by one, and fixes the token from the original tokens before generating the next one to ensure the generation strictly follows the original tokens. It inspects the probabilities of the generated token candidates in each generation step. For each generated token, WitheredLeaf first checks if the token is syntactically valid via validateToken(). Then, if the leftTokens starts with the generated token, meaning the generation follows the original tokens, it updates the variable and continues to generate the next token. Otherwise, the generated token is penalized for not being consistent with the original ones. In a case where the probability prob is greater than a threshold \ud835\udc5d\ud835\udc5f\ud835\udc5c\ud835\udc4f\ud835\udc47\u210e\ud835\udc5f\ud835\udc52\ud835\udc60\u210e, meaning that there is another more probably valid token different from the original tokens, the check directly returns False. It adds rankSum to 1, penalizing original tokens not being the most probable generated token. By comparing rankSum with a threshold rankThresh, the algorithm determines the consistency. Unlike CodeBERT, which is operated to lower the false negative rate, Code Llama is used as a post-filter and it is optimized to reduce false positive rate. Therefore, the thresholds are tuned to return Ture in more cases. 3.4 Inconsistent Position Highlighting As measured in \u00a72.3, GPT-4 can produce a number of false negatives, especially in a longer context. A previous study also notice the pattern and attributed it to distraction in long context [52]. Thus, we design a prompt engineering technique, forcing GPT-4 to attend back on suspicious positions. Such positions are collected from the inconsistent positions determined by local models. In addition to the existing prompt, WitheredLeaf instructs GPT-4: \u201cAlso, pay additional attention to these lines: {suspicious lines}\u201d. The whole line, rather than the suspicious entity, is highlighted since the observation in \u00a72.4 indicates the consistency of the infilling task conducted on the same line is negatively affected. Thus, the inconsistency detected may imply an EIB at another entity located on the same line. 3.5 Implementation WitheredLeaf relies on tree-sitter [3] to parse the source code, since it supports multiple programming languages. CodeBERT2. 
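The controlled generation in Algorithm 2 can likewise be sketched as follows. Here next_token_topk stands in for a Code Llama forward pass that returns the top-k (token, probability) candidates given the prefix, and validate_token for the syntactic-validity check; neither is an API from the paper's artifact, the handling of an exhausted candidate list is our own assumption, and the default thresholds are the values chosen in §4.1.

    def decoder_consistency(masked_code, original_tokens, next_token_topk, validate_token,
                            prob_thresh=0.90, rank_thresh=3):
        """Consistency check for decoder-only models (Algorithm 2).
        next_token_topk(masked_code, generated) -> [(token, prob), ...] sorted by prob."""
        rank_sum = 0
        generated, left = [], list(original_tokens)
        while left:
            for token, prob in next_token_topk(masked_code, generated):
                if not validate_token(token):
                    continue                       # skip syntactically invalid candidates
                if left[0] == token:               # generation follows the original entity
                    left.pop(0)
                    generated.append(token)
                    break
                if prob > prob_thresh:
                    return False                   # a more probable valid alternative exists
                rank_sum += 1                      # penalize: original token is not top-ranked
            else:
                return False                       # assumption: no valid continuation found
        return rank_sum <= rank_thresh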
CodeBERT is employed as a pre-filter in WitheredLeaf due to its rapid processing and compact size, allowing it to run on the CPU concurrently with other components utilizing the GPU and enhancing overall system efficiency. One limitation is its max token length is 512, so the masked code is truncated to fit into this limitation. Since CodeBERT fill each masked token independently, in CodeBERTPredict, we concatenates tokens in each token position to generate the predicted entity name. Every confidence score is a float number ranging from zero to one. The generated concatenated entity name\u2019s score is the product of the scores of each token. For the parameter selection, after evaluating the \ud835\udc37\ud835\udc60\ud835\udc66\ud835\udc5b\ud835\udc61\u210e\ud835\udc52\ud835\udc60\ud835\udc56\ud835\udc67\ud835\udc52\ud835\udc51, we selected searchRange, tokenThreshold and consistencyThreshold that effectively achieve a high recall rate, while maintaining a reasonably low false positive rate (see Table 5). StableCode. StableCode is employed as an alternative pre-filter local model. It was released after our experiment with CodeBERT on Python code was finished, so we used it experimentally on C. Code Llama. Code Llama is employed as the post-filter local model. Code Llama 7B base model is selected owing to its better performance and stability, as the 13B model occasionally results in out-ofmemory errors. Although Meta provides 34B and 70B models, they are not shipped with FIM capability. We implement Algorithm 2 using PyTorch library [9]. We selected rankThresh and probThresh based on the evaluation of \ud835\udc37\ud835\udc60\ud835\udc66\ud835\udc5b\ud835\udc61\u210e\ud835\udc52\ud835\udc60\ud835\udc56\ud835\udc67\ud835\udc52\ud835\udc51to achieve high specificity, with a minor increase of false negative rate (see \u00a74.1). GPT-4. GPT-4 is used as the state-of-the-art LLM to detect EIBs following the prompts. We use the prompt template shown in Figure 2 with suspicious line highlighted. New models such as Claude 3 and Llama 3 have been recently released, claiming to be new state-ofthe-art. However, their APIs currently only offer limited support to JSON output, which can hardly be integrated into an automated detection pipeline. We leave the measurement and integration of these models as future work. 4 EVALUATION In this section, we first conduct controlled experiments on our prepared datasets (\u00a72.2). This is to (1) evaluate the effectiveness of individual components in our design, and (2) compare WitheredLeaf with existing detectors. After that, we evaluate WitheredLeaf on real-world code repositories and demonstrate its ability to discover new bugs. The experiments are conducted on a Ubuntu 22.4 Linux server with two AMD EPYC 9124 CPUs, one NVIDIA H100 GPU, and 384GB memory. 2We employ CodeBERT (MLM) on to verify consistency. 9 Trovato and Tobin, et al. 4.1 Controlled Experiments 4.1.1 Individual Components. First, to access the effectiveness of our design, we evaluate the performance of individual components in the pipeline. Similar to the experiments in \u00a72.4 with \ud835\udc37\ud835\udc60\ud835\udc4e\ud835\udc5a\ud835\udc5d\ud835\udc59\ud835\udc52\ud835\udc51and \ud835\udc37\ud835\udc60\ud835\udc66\ud835\udc5b\ud835\udc61\u210e\ud835\udc52\ud835\udc60\ud835\udc56\ud835\udc67\ud835\udc52\ud835\udc51, we assess the recall and consistency of code models, but with controlled token generation algorithms as detailed in \u00a73.3. 
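One way the CodeBERT scoring described above (independent per-position filling, entity score as the product of token scores) could be realized with the Hugging Face transformers library is sketched below; this is our own minimal illustration under the stated assumptions, not the paper's released implementation.

    import torch
    from transformers import AutoTokenizer, AutoModelForMaskedLM

    tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base-mlm")
    model = AutoModelForMaskedLM.from_pretrained("microsoft/codebert-base-mlm")

    def codebert_predict(code_with_placeholder: str, k: int, placeholder="<ENTITY_MASK>"):
        """Fill k mask positions independently; return (entity_string, score)."""
        text = code_with_placeholder.replace(placeholder, tokenizer.mask_token * k, 1)
        inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits[0]
        probs = logits.softmax(dim=-1)
        mask_positions = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
        token_ids, score = [], 1.0
        for pos in mask_positions:
            p, tid = probs[pos].max(dim=-1)
            token_ids.append(tid.item())
            score *= p.item()            # entity score = product of per-token scores
        return tokenizer.decode(token_ids).strip(), score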
The evaluation also helps us to establish the algorithm parameters, which will be used in real-world EIB detection tasks in \u00a74.2. CodeBERT. We explored various configurations of the parameters searchRange, tokenThreshold and consistencyThreshold in Algorithm 1 to evaluate CodeBERT\u2019s performance, as shown in Table 5. A higher searchRange enhances recall but increases the running time. Similarly, tokenThreshold is positively correlated with recall; however, higher values significantly decrease the consistency rate. Conversely, consistencyThreshold is inversely proportional to recall and directly affects the consistency rate. Considering the computational costs, and the recall, we selected 2 as the searchRange and 0.8 as consistencyThreshold to optimally balance recall with a satisfactory consistency rate. Additionally, we fixed the searchRange and consistencyThreshold to evaluates tokenThreshold as shown in Table 6, and finally, we selected 0.1 as the tokenThreshold. Code Llama. To evaluate the effectiveness of controlled token generation proposed in Algorithm 2, we test the infilling task with different threshold parameters on Code Llama 7B model. The scores are presented in Table 7 in percentage. Our algorithm can roughly enhance the consistency by 5%, with the cost of 9% reduction on recall. This is expected, as tuning towards enhancing consistency results in reduced recall. We believe this is a worthy tradeoff, as EIBs are very rare in real-world code a higher consistency will help eliminate more true negative cases and improve the overall precision. Therefore, we select the parameters of probThresh = 0.90 and rankThresh = 3. GPT-4. For the additional prompt template highlighting suspicious bugs, the experiment is identical to \u00a72.3, and the results are listed in Table 2 along with the measurement results. In the template, the ground truth EIB line, with three additional randomly selected lines are highlighted in the prompt. This mechanism significantly enhances the number of EIBs being detected by 24% on \ud835\udc37\ud835\udc60\ud835\udc66\ud835\udc5b\ud835\udc61\u210e\ud835\udc52\ud835\udc60\ud835\udc56\ud835\udc67\ud835\udc52\ud835\udc51 compared to the best prompt template without highlighting, successfully reducing the FNs potentially caused by distraction. However, such a mechanism is not perfect, as it can incur more FPs (by 20% function-wise and by 7% bug-wise) if the line with EIB is not properly highlighted, as accessed on \ud835\udc37\ud835\udc60\ud835\udc4e\ud835\udc5a\ud835\udc5d\ud835\udc59\ud835\udc52\ud835\udc51. We believe this is a worthy tradeoff, as the density of EIBs reaches a high level at this stage. Also, please note that highlighting is only made possible with the local models. 4.1.2 Whole System. To assess the performance of our bug detection system in a controlled setting, we evaluate it on our synthesized dataset, \ud835\udc37\ud835\udc60\ud835\udc66\ud835\udc5b\ud835\udc61\u210e\ud835\udc52\ud835\udc60\ud835\udc56\ud835\udc67\ud835\udc52and sampled dataset, \ud835\udc37\ud835\udc60\ud835\udc4e\ud835\udc5a\ud835\udc5d\ud835\udc59\ud835\udc52\ud835\udc51. The purpose of this evaluation is to measure our system\u2019s ability to detect these artificially generated bugs, which resemble real-world bugs caused by incorrect identifier usage. Therefore, we can access the recall and specificity of our system. 
In \ud835\udc37\ud835\udc60\ud835\udc4e\ud835\udc5a\ud835\udc5d\ud835\udc59\ud835\udc52\ud835\udc51, WitheredLeaf reported 57 out of 100 functions have EIBs, resulting in 43% specificity. In \ud835\udc37\ud835\udc60\ud835\udc66\ud835\udc5b\ud835\udc61\u210e\ud835\udc52\ud835\udc60\ud835\udc56\ud835\udc67\ud835\udc52, WitheredLeaf identified 41 bugs from the 66 samples. Moreover, our system demonstrates superior monetary efficiency: 730K tokens are transmitted in total, incurring a cost of 9.92$ on GPT-4 API. Using the formulas in \u00a73.2 and the metrics of recall and consistency in bold in Table 5, Table 7, and Table 2, we could theoretically estimate the metrics of the whole system. As consistency is a lowerbound estimation of specificity, we derive the lower-bound recall and precision as 67.8% and 14.5%, respectively, when assuming \ud835\udf160 = 0.002. Such theoretical values are on par with the experimental results on our datasets and in real-world detection (\u00a74.2). Notably, when solely using GPT-4 as an EIB detector, the precision is as low as 0.8%. Besides, WitheredLeaf can effectively amplify the density of EIB by at least 70 times, making them detectable. 4.1.3 Comparison with Previous Work. Mansour et al. [45] introduced DeepBugs, a machine learning-based approach to namebased bug detection. While DeepBugs relies on an extensive dataset to train its classifier model, it inherently fails to identify bug types not present in the dataset. This limitation confines DeepBugs to detect only the three types of name-based bugs detailed in their paper. In a similar vein, Michael et al. [7] introduced FICS, a method clustering AST of all functions in a repository and reporting the outliers as potential bugs. As they noted, FICS eliminates all variable names when processing the AST, which precludes its ability to detect the EIB discussed in this paper. Moreover, their methodology implies a basic assumption: to detect a bug, there must be at least another \u201ccorrect\u201d function with a similar AST. Such an assumption is a bit too strong, especially for small code repositories, as similar ASTs may not exist for most functions. Unlike these systems, WitheredLeaf requires no training or fine-tuning. The method presented by Baleegh et al. in [6] relates closely to our study. They propose using LLM for bug detection by comparing original code against code generated by ChatGPT. Our evaluation, however, reveals two major concerns with this approach: 1) it fails to detect a significant number of bugs and it generates a high rate of false positives; and 2) it is much less economically viable compared to ours. We experiment with subsets of datasets comprising 66 functions from \ud835\udc37\ud835\udc46\ud835\udc66\ud835\udc5b\ud835\udc61\u210e\ud835\udc52\ud835\udc60\ud835\udc56\ud835\udc67\ud835\udc52\ud835\udc51, each with an EIB, and 100 functions from \ud835\udc37\ud835\udc46\ud835\udc4e\ud835\udc5a\ud835\udc5d\ud835\udc59\ud835\udc52\ud835\udc51. In this dataset, FLAG used approximately 12.33 million tokens, costing $246.89 in GPT-4 API. For benign codes without bugs, FLAG erroneously highlighted 95 out of 100 samples, identifying totally 1751 lines as buggy out of 7062. For the buggy functions, it successfully detected issues in 32 out of 66 cases, highlighting 2454 buggy lines out of 7531. In contrast, our system demonstrates less false positive reports and superior monetary efficiency as mentioned previously. 
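The theoretical lower-bound estimate discussed above can be reproduced with the stage-propagation formulas from §3.2. In the sketch below, the CodeBERT and Code Llama numbers are the bolded table values (consistency used as a lower bound on specificity), while the GPT-4 stage metrics are placeholders chosen only for illustration, since Table 2 is not reproduced here.

    # Sketch: lower-bound end-to-end recall/precision from per-stage metrics.
    stages = [
        ("CodeBERT",   0.541, 0.912),   # (specificity lower bound = consistency, recall)
        ("Code Llama", 0.910, 0.892),
        ("GPT-4",      0.90,  0.85),    # placeholder values, for illustration only
    ]
    eps = 0.002                          # assumed prevalence of EIBs among candidates
    recall = 1.0
    for name, p, q in stages:
        recall *= q                                        # EIBs surviving every stage
        eps = q * eps / (q * eps + (1 - p) * (1 - eps))    # eps_{i+1}: precision so far
    print(f"lower-bound recall ~ {recall:.3f}, precision ~ {eps:.3f}")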
4.2 Real-world EIB Detection To assess the effectiveness of WitheredLeaf on real-world code repositories, we test it on Python and C repositories collected from GitHub with more than 1000 starts that are actively maintained. The summarized results are presented in Table 8. It\u2019s worth mentioning that our result exceeds our expectation of 20% precision on the final report, which aligns with previous work on inconsistency bug detection [7] and is acceptable by the developer community [8]. 10 WitheredLeaf: Finding Entity-Inconsistency Bugs with LLMs Table 5: Recall and consistency of CodeBERT with different threshold parameters in the infilling task. consistencyThreshold searchRange=1 searchRange=2 searchRange=3 searchRange=4 searchRange=5 Recall Cons. Recall Cons. Recall Cons. Recall Cons. Recall Cons. 0.98 72.9 65.5 75.2 68.2 75.8 65.5 75.3 59.7 75.1 61.5 0.90 77.7 61.7 87.4 61.6 87.5 59.4 87.7 53.3 86.8 54.4 0.80 78.0 58.4 91.2 54.1 90.5 52.5 90.5 43.6 90.0 44.5 0.70 79.3 55.7 92.6 47.7 91.3 45.7 91.3 36.4 90.9 37.2 0.60 80.3 53.1 93.6 43.0 93.3 38.8 92.9 31.0 92.5 31.7 Avg. Time 506 640 689 933 1103 Avg. Time represents the average time in milliseconds spent to accomplish an infilling task on CPU. Table 6: Recall and consistency of CodeBERT with different tokenThreshold. tokenThreshold Recall Cons. 0.1 91.2 54.1 0.2 92.3 40.6 0.3 94.1 35.8 0.4 94.6 32.6 0.5 95.0 32.0 Evaluated with the specified parameters of searchRange=2 and consistencyThreshold=0.80. Table 7: Recall and consistency of Code Llama 7B of different threshold parameters in the infilling task. probThresh rankThresh=1 rankThresh=2 rankThresh=3 Recall Cons. Recall Cons. Recall Cons. 0.98 92.7 89.0 88.4 90.8 85.8 91.8 0.90 94.6 88.6 91.1 90.1 89.2 91.0 0.80 95.7 88.2 93.1 89.5 91.5 90.2 0.70 96.4 87.7 95.2 88.9 93.9 89.6 0.60 97.3 87.1 96.4 88.1 95.4 88.8 Detecting EIBs on Python Repositories. For Python repositories, WitheredLeaf uses CodeBERT and Code Llama 7B as local models to process code from 80 collected repositories. Initially, CodeBERT deals with \u02dc80K infilling tasks, derived from static analysis. Then we drop the consistent ones (\u02dc48K) and tasks associated with very long code snippets that are usually configuration or data files. After this pre-filter, \u02dc21K infilling tasks are scheduled for Code Llama 7B (i.e., the post-filter), which finally discovers 2,679 inconsistency entities. Escalating the functions containing these inconsistencies to GPT-4 for EIB detection, we received 1,082 reports, of which 314 are left after filtering based on the properties mentioned in \u00a72.3. Three security experts spend about 10 hours in total to inspect these reports. Finally, we confirm that 74 of them are true positive, resulting in a precision of 23.57% on Python repositories. We believe the detection cost is well acceptable, as it spends less than 20 machine hours in total and less than 40$ for querying GPT-4 APIs. With a server cost of 52$3 , the total cost to find these EIBs is less than 100$, which we believe economically efficient, especially when considering the potential severe security consequences of the vulnerabilities. 3A 1xH100 server costs 2.49$/hr on Lambda GPU cloud. Detecting EIBs on C Repositories. We also conduct a experiment on C repositories at a smaller scale, where the entities for infilling tasks are generated according to an AST similarity strategy [11, 30] following previous research [7]. 
With this approach, AST subtrees with similar types and structures within a function were identified, and only entities from these corresponding AST subtrees were analyzed. The pre-filter local model used for C repositories is StableCode, as CodeBERT is not trained on C code. We observe a enhanced precision of WitheredLeaf on C repositories. This is because Code Llama performs significantly better on C than Python, filtering out 97.1% consistent entities. A potential explanation is that C language induces fewer defects than Python [46], making the code more well-formed and entities predictable for Code Llama.Nevertheless, the inclusion of both Python and C repositories in our real-world evaluation allows us to assess the generalizability of WitheredLeaf across different programming languages and projects, which is crucial for understanding its potential adaptability and weaknesses in large-scale deployment. Repository Exclusion. It is worth noting that we intentionally exclude 10 repositories in this evaluation due to several reasons. Some repositories, particularly those written in Python 2 using outdated syntax and APIs, can deteriorate the performance of the models. This is because the models are trained from the corpus collected in recent years, resulting in a lack of training material in Python 2. We also observe the consistency of infilling drops on repositories related to AI, security, and system-level projects, such as chat-langchain [1] and hosts [2]. For AI-related projects, the technologies are evolving very fast, thus low consistency can be attributed to a lack of up-to-date corpus. However, for security and low-level system code, we think the low consistency is caused by the unusual nature of their functionalities. For example, exploit generation tools include insecure statements in their code, which can be regarded as inconsistent by local models in infilling, or reported to contain vulnerabilities by GPT-4. However, these tools are meant to be \u201cinsecure\u201d. We believe such weakness could be solved by fine-tuning or more fine-grained prompt engineering, which we leave as future work. 4.3 Study of the Discovered EIBs WitheredLeaf has identified 123 previously unknown EIBs in realworld codebases (\u00a74.2). We submitted 69 pull requests to address these bugs, of which 27 have already been confirmed and merged by developers. The distribution of bugs across different categories 11 Trovato and Tobin, et al. Table 8: Summary of real-world detection results. Python Repos C Repos # Repositories 80 74 # Submitted PR 55 14 # Merged PR 26 1 # Functionality Bugs 37 7 # Security Bugs 8 3 # Bad Smell 48 20 Pre-filter Execution Time 9.0 CPU Hrs 15.2 GPU Hrs Post-filter Execution Time 20.9 GPU Hrs 17.1 GPU Hrs Cost on GPT-4 API 39.96$ 16.21$ # Infilling Tasks for Pre-filter 79,724 75,695 # Inconsis. Entities after Pre-filter 31,284 25,214 # Infilling Tasks for Post-filter 20,616 25,214 # Inconsis. Entity after Post-filter 2,679 733 # Reports from GPT-4 1,082 473 # Reports after Filtering 314 77 # Confirmed Reports 74 28 Precision (%) 23.57% 36.36% is detailed in Table 8. The security bugs identified in Python repositories can lead to denial-of-service (DoS) attacks, while those in C repositories may cause memory corruption, potentially leading to severe consequences such as privilege escalation. Remarkably, our discovered EIBs impact prominent repositories. 
For example, a director from wolfSSL [58] scheduled a meeting with us and mentioned that they were actively investigating the bugs we reported. In the remainder of this section, we showcase some of our discovered EIBs to further demonstrate WitheredLeaf\u2019s capabilities. Case Study. We present some of our found EIBs, with the corresponding fixes (the code entity highlighted in red is replaced with that in green ). We demonstrate the consequences of these bugs and analyze the potential cause of them. Variable Misuse In the case Listing 1, the last branch of the switch statement incorrectly returns the value of threadFrameStackSize, which is the same as the previous branch. However, it should return threadExceptionFlowSize. The sizes are used in buffer initialization and could be adjusted via the APIs provided in this library. This bug can potentially lead to buffer overflow, causing more severe security consequences. We suspect the developer forgot to correct the variable name when copy and paste within the code. 1 switch (optionId) \u2013 2 ... 3 case InterpreterThreadFrameStackSize: 4 return s\u02d9threadFrameStackSize; 5 case InterpreterThreadExceptionFlowSize: 6 return s threadFrameStackSize s threadExceptionFlowSize ;\u02dd Listing 1: Example of Variable Misuse Error in Function Call This bug in Listing 2 is found in WolfSSL, a security-sensitive repository. The sizeof method is used to determine the size (in bytes) of a data type or a variable. However, in this case, sizeof method is invoked on xSize variable, which is already declared as the size of freertos sockaddr invoking sizeof on it will result in an unexpected size, likely leading to an overflow or underflow. After a deeper investigation, we find that the third size parameter of the function FreeRTOS bind is not currently in use, so this bug does not manifest security consequences at this point. However, such a bug is like a time bomb, which will cause trouble when the third parameter takes effect in the future. We think this bug might be caused by naming confusion of the variable xSize. 1 socklen\u02d9t xSize = sizeof(struct freertos\u02d9sockaddr); 2 xSocket\u02d9t xClientSocket = NULL; 3 struct freertos\u02d9sockaddr xRemoteAddress; 4 ... 5 FreeRTOS\u02d9bind(xClientSocket , &xRemoteAddress , sizeof(xSize) xSize ); Listing 2: Example of Error in Function Call Denial of Service In the code snippet in Listing 3, if has aux is False, aux will be a tuple object with zero length. Attempting to access the last element of aux using an index of -1 will throw index out of range exception, causing Denial of Service (DoS). 1 if has\u02d9aux: 2 example , *aux = example 3 else: 4 aux = tuple () 5 rand\u02d9state = random.Random( aux[-1] aux[-1] if aux else 0 ) Listing 3: Example of DoS 5 RELATED WORK LLM\u2019s Application On Software Security. Recent advancements have made LLMs designed for coding widely accessible, either through public API [12] or open-source sharing of code and models [25, 27, 48]. LLMs have demonstrated significant contributions to areas like fuzzing [5, 21, 39, 59], code repair [22, 31, 32, 43, 60], exploit generation [23], etc. Despite these developments, the application of LLMs in bug detection, specifically entity inconsistency bug (EIB), remains underexplored and largely experimental. Semantic Bug Detection. Semantic bug detection is an active research area with various methodologies addressing bugs that exhibit both well-defined and ambiguous patterns. Li et al. 
pioneered the application of LLMs to target Use Before Initialization bugs, demonstrating LLMs\u2019 potential to enhance conventional static analysis techniques [34]. Further advancements by researchers like Sun et al. and Wei et al. have expanded LLM applications to address a wider range of logic bugs, including API misuse [52, 53, 57]. Traditional methods such as static analysis and formal verification also play significant roles in identifying and resolving semantic bugs with fixed patterns [24, 28, 36, 37, 40]. Recently, Natural Language Processing (NLP) has been leveraged to analyze documentation and pinpoint potential semantic inconsistencies or discrepancies between implementation and documentation [13\u201316, 42]. However, these NLP-based approaches heavily rely on the quality of the documentation, facing substantial challenges with ambiguous or outdated materials prevalent in open-source environments. FLAG [6] is a recent initiative that utilizes LLMs for bug detection by masking every line of code and comparing the code line generated by ChatGPT with the original line. As we demonstrated in \u00a74.1.3, such methods exhibit limitations when applied to large-scale, real-world analysis. Beyond LLM-based methods, other approaches 12 WitheredLeaf: Finding Entity-Inconsistency Bugs with LLMs such as [7, 45] utilize learning based methods to pinpoint potential bugs. However, these methods require specific code characteristics or depend on ideal, comprehensive training datasets. 6 DISCUSSION Limitations. Currently, decoder-only models like Code Llama are limited to infilling at a single position each time. Although WitheredLeaf can independently verify the consistency of each entity, it may not detect EIBs associated with multiple positions in the context. Future work. Currently, we are using existing infilling models to provide GPT-4 with highlights of potentially buggy lines. It may be beneficial to fine-tune or train a model that exhibits improved consistency and recall rates. Fine-tuned model demonstrates better capability on auditing smart contracts [38]. Replaced Tokens Detection [17], although simple, fits our scenario with new model trained. Researchers currently consider this task as a form of pretraining, which enhances the model\u2019s capability to comprehend semantics more effectively. As mentioned in \u00a72.4, EIB can negatively affect the consistency of infilling tasks on surrounding positions, which might be qualitatively explainable with a confidence measure [50]. Enhancing this observation and proposing a method providing a security guarantee to EIB detection is a direction for future investigation. 7 CONCLUSION We conducted a systematic measurement study on LLMs\u2019 capabilities to detect EIBs and identified their strengths and weaknesses. Based on the measurement results, we design WitheredLeaf, an EIB detection pipeline based on LLMs. WitheredLeaf successfully discovered 123 unknown bugs on Python and C code repositories, demonstrating the effectiveness of our design and implementation. ACKNOWLEDGMENTS To Robert, for the bagels and explaining CMYK and color spaces."
17
+ }
intro_8K/test_introduction_short_2405.01686v1.json ADDED
@@ -0,0 +1,17 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01686v1",
3
+ "title": "Automatically Extracting Numerical Results from Randomized Controlled Trials with Large Language Models",
4
+ "abstract": "Meta-analyses statistically aggregate the findings of different randomized\ncontrolled trials (RCTs) to assess treatment effectiveness. Because this yields\nrobust estimates of treatment effectiveness, results from meta-analyses are\nconsidered the strongest form of evidence. However, rigorous evidence syntheses\nare time-consuming and labor-intensive, requiring manual extraction of data\nfrom individual trials to be synthesized. Ideally, language technologies would\npermit fully automatic meta-analysis, on demand. This requires accurately\nextracting numerical results from individual trials, which has been beyond the\ncapabilities of natural language processing (NLP) models to date. In this work,\nwe evaluate whether modern large language models (LLMs) can reliably perform\nthis task. We annotate (and release) a modest but granular evaluation dataset\nof clinical trial reports with numerical findings attached to interventions,\ncomparators, and outcomes. Using this dataset, we evaluate the performance of\nseven LLMs applied zero-shot for the task of conditionally extracting numerical\nfindings from trial reports. We find that massive LLMs that can accommodate\nlengthy inputs are tantalizingly close to realizing fully automatic\nmeta-analysis, especially for dichotomous (binary) outcomes (e.g., mortality).\nHowever, LLMs -- including ones trained on biomedical texts -- perform poorly\nwhen the outcome measures are complex and tallying the results requires\ninference. This work charts a path toward fully automatic meta-analysis of RCTs\nvia LLMs, while also highlighting the limitations of existing models for this\naim.",
5
+ "authors": "Hye Sun Yun, David Pogrebitskiy, Iain J. Marshall, Byron C. Wallace",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "cs.CL",
9
+ "cats": [
10
+ "cs.CL",
11
+ "cs.AI"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "LLM Fairness",
15
+ "gt": "Quantitative measures of comparative treatment effectiveness are reported primarily in unstructured (natural language) published articles that describe the design, protocol, and results of randomized controlled trials (RCTs). Individual trial results are noisy and often biased, motivating the need for rigorous statistical meta-analysis of all trials of a particular \u00a9 2024 H.S. Yun, D. Pogrebitskiy, I.J. Marshall & B.C. Wallace. arXiv:2405.01686v1 [cs.CL] 2 May 2024 Automatically Extracting Numerical Results from RCTs with LLMs treatment to produce a robust estimate of efficacy (Lau et al., 1995; Borenstein et al., 2021).1 Estimates from meta-analyses of primary findings are considered one of the highest forms of evidence in medicine (Murad et al., 2016). However, conducting a meta-analysis requires manually extracting from individual ar- ticles the data elements necessary for synthesis, i.e., numbers required to calculate metrics of interest\u2014for example, odds ratios or mean differences for dichotomous and continuous outcomes, respectively\u2014and associated variances. This time-consuming but critical step still must be manually performed, which precludes a fully automated \u201con-demand\u201d meta- analysis of all evidence relevant to a given clinical question. Prior work (Yun et al., 2023) has reported that domain experts who conduct systematic reviews and meta-analyses view data extraction as a key part of the process which may be amenable to automation via LLMs. In this work, we ask: Are modern LLMs sufficiently capable of data extraction to permit accurate, fully automated meta-analysis? To answer this question empiri- cally, we annotate modest but granular validation and test sets of clinical trial reports with numerical findings attached to interventions, comparators, and outcomes (ICO triplets). We use these annotations to evaluate a wide range of LLMs\u2014including accessible, smaller models and closed, massive models\u2014 for their ability to infer structured numerical findings for specific comparisons of interest. That is, we assess whether LLMs can reliably extract from trial reports the data necessary for statistical meta-analysis. We find that LLMs can permit accurate automated meta-analyses with some success. Unsurprisingly, massive LLMs with large input context windows such as GPT-4 outperform smaller, open-source models at extracting binary (dichotomous) outcomes. For extracting continuous outcomes, LLMs perform comparatively poorly, especially in settings where there are multiple similar outcome measures reported in a trial report. The takeaway from this work is that modern LLMs offer a promising path toward fully automatic meta-analysis, but further improvements are needed before this will be reliable. We hope the data we release with this effort will support future work in this direction. Generalizable Insights about Machine Learning in the Context of Healthcare LLMs have realized remarkable performance across a diverse range of NLP tasks in recent years, motivating several efforts to investigate the use of LLMs for healthcare specifically (Singhal et al., 2023; Lehman et al., 2023; Naik et al., 2023; Wadhwa et al., 2023b). Here we provide empirical insights concerning the use of such models to fully automate meta- analysis of clinical trial results; the results also have implications for related numerical data extraction tasks in healthcare, and how well LLMs are likely to be able to perform them. 
More concretely, the main generalizable contributions this work offers are: 1. We release an evaluation dataset containing extensive annotations for the task of extracting numerical clinical findings for a given intervention, comparator, and outcome (ICO triplets) necessary for conducting meta-analyses. 2. We report the quantitative and qualitative results evaluating modern LLMs on extracting numerical findings from RCTs using the annotated dataset. Furthermore, we provide an example of a fully automated meta-analysis to show the suitability of using LLMs for the end-to-end process. (Footnote 1: These are essentially weighted averages of comparative effect estimates reported in trials, where weights are inverse to reported variances.) [Figure 1: Overview of this work. (1.) We use an LLM to infer a given outcome type based on its description, then (2.) prompt the LLM to extract raw data (e.g., number of participants who died in each treatment group). Finally, we (3.) transform this into a suitable metric (e.g., an odds ratio) and associated variance; this is the data necessary for statistical meta-analysis. The figure shows an example trial report (group hypnosis vs. relaxation for smoking cessation) annotated with the inferred binary outcome type, the extracted two-by-two table, and the resulting odds ratio; the full text of the embedded article is omitted here.] We hope that this effort highlights challenges that must be addressed to fully automate medical evidence synthesis, and ultimately brings us closer to that vision.",
16
+ "main_content": "Previous work has explored the use of NLP technologies to assist the process of metaanalyses and evidence synthesis. For example, there have been efforts to automate screening to identify all studies relevant to a clinical question (Wallace et al., 2010; Kusa et al., 2024). Our focus, however, is on the data extraction. In early work on automatic data extraction, Kiritchenko et al. (2010) presented an automatic information extraction system called ExaCT that located and extracted key trial information such as sample size, drug dosage, and primary outcomes from full-text RCT articles. Elsewhere, Summerscales et al. (2011) proposed an automated way to create summaries from abstracts of RCTs by extracting numerical quantities of treatment groups and outcomes. Both of these efforts used statistical NLP models based on bag-of-words, as they pre-dated LLMs. Consequently, they achieved somewhat mediocre performance on this challenging task. Do current-generation LLMs allow us to do much better? A few recent efforts have partially investigated this question. Mutinda et al. (2022) proposed a BERT (Devlin et al., 2018) based named entity recognition (NER) model to identify relevant trial information 3 Automatically Extracting Numerical Results from RCTs with LLMs from research abstracts and parsed the numeric outcomes for statistical analysis. However, this work used an encoder-only model (BERT) which is considered small by modern standards and required explicit supervision for the task. Other recent work (Shamsabadi et al., 2024) has shown that LLMs can be fine-tuned to perform information extraction tasks from scientific and biomedical literature in general, though these efforts were not focused on supporting meta-analysis. More recently, several works have explored extracting experimental findings of clinical trials using LLMs in zero-shot (a setup in which a model observes tasks or data that it has not explicitly seen during training) (Khraisha et al., 2023; Kartchner et al., 2023). However, these works often evaluated a couple of massive, closed LLMs and did not focus on numerical findings. Naik et al. (2023) investigated numerical information extraction using several LLMs. However, theirs was a general extraction scheme intended to support a range of downstream functionalities; by contrast, we focus more narrowly on how well LLMs can extract the numerical data necessary for meta-analysis, conditioned on a specific intervention, comparator, and outcome of interest. The few focused efforts that have evaluated LLMs for data extraction from full-text articles for synthesis (Gartlehner et al., 2023; Sun et al., 2024; Reason et al., 2024) have used very small convenience samples of articles and evaluated only closed models, providing a somewhat limited view of current model capabilities generally. In our work, we construct an evaluation dataset of several hundred samples with both abstracts and results sections from full-texts and evaluate numerical data extraction for meta-analyses across a diverse range of LLMs in zero-shot settings. In addition to massive, closed LLMs we evaluate smaller, accessible models to provide a more complete view of the capabilities of modern LLMs for numerical data extraction of RCTs. 3. Extracting Numerical Results from Clinical Trial Reports We are interested in the task of inferring structured findings reported in clinical trial reports that correspond to a specific intervention, comparator, and outcome (ICO). 
Past work (Lehman et al., 2019; Wadhwa et al., 2023b) has treated this as a classification task, categorizing articles as reporting that the given intervention (e.g., aspirin) induced a significant increase, significant decrease, or no significant difference with respect to the outcome of interest (e.g., duration of headache), relative to the specified comparator (e.g., placebo). In this work, we consider a more challenging variant of this task in which the aim is to infer numerical data associated with each ICO. Specifically, we investigate the ability of LLMs to extract the numbers required to derive point estimates (which capture the sign and magnitude of relative treatment effects) and corresponding variances. If we could reliably infer such results from individual articles describing trials, this would permit automatic statistical meta-analysis on demand. Briefly, a standard fixed-effects meta-analysis\u2014which assumes studies measure a common underlying effect (Hedges and Vevea, 1998)\u2014aggregates point estimates of interest (e.g., odds ratios) \u02c6 \u03b8i across k studies (indexed by i) by taking a weighted sum, where weights wi are inverse to the variance associated with \u02c6 \u03b8i: \u02c6 \u03b8FE = Pk i=1 wi\u02c6 \u03b8i Pk i=1 wi (1) 4 Automatically Extracting Numerical Results from RCTs with LLMs Meta-analysis of RCT evidence is critical because individual trials are inherently noisy and may reflect statistical biases. Aggregating evidence from independent trials in this way to estimate an overall treatment effect permits a robust effectiveness estimate (Egger et al., 1997). For this reason, meta-analytic results provide reliable clinical evidence, and meta-analyses often inform healthcare policy and patient care guidelines. A downside of meta-analyses is that they are laborious to produce and keep up to date with new findings, especially given the rate at which new evidence accumulates (Bastian et al., 2010). This has motivated work on automating or semi-automating aspects of evidence synthesis, but reliably extracting the numerical values associated with ICO triplets has been a task too difficult for prior language technologies to reliably execute. Given the rapid progress in NLP\u2014and specifically the capabilities of LLMs\u2014we investigate, in this work, the degree to which modern LLMs can accurately extract numerical results from clinical trials sufficient for meta-analysis, fully automatically. 4. Annotation To evaluate the feasibility of using LLMs to extract numerical data from RCTs for metaanalysis, we annotated a dataset of 699 abstracts and result sections from randomized controlled trials indexed in PubMed Central. Our dataset contains extensive annotations for the task of extracting numerical clinical findings for a given intervention, comparator, and outcome (ICO triplets) necessary for conducting meta-analyses (Equation 1). Our dataset is derived from the Evidence Inference corpus (Lehman et al., 2019; DeYoung et al., 2020). This comprises articles describing RCTs.2 For all of these RCT reports, medical experts have assessed the directionality of the evidence reported concerning given ICO triplets. Specifically, these annotations take the form of ternary classifications indicating whether the Intervention significantly increased, significantly decreased, or had no significant effect on the Outcome, as compared to the Comparator. 
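A minimal sketch of the inverse-variance fixed-effect aggregation in Equation (1) is shown below; the per-study estimates and variances are hypothetical, and in practice this step would be delegated to packages such as metafor or statsmodels, so this only serves to make the weighting explicit.

    import math

    def fixed_effect(thetas, variances):
        """Fixed-effect pooled estimate and its standard error (inverse-variance weights)."""
        weights = [1.0 / v for v in variances]          # w_i = 1 / var_i
        theta_fe = sum(w * t for w, t in zip(weights, thetas)) / sum(weights)
        se_fe = math.sqrt(1.0 / sum(weights))
        return theta_fe, se_fe

    # Hypothetical log odds ratios and variances from three trials:
    pooled, se = fixed_effect([-0.21, 0.05, -0.33], [0.04, 0.09, 0.06])
    print(f"pooled log OR = {pooled:.3f} (95% CI {pooled - 1.96*se:.3f} to {pooled + 1.96*se:.3f})")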
We enrich this dataset by attaching numerical results to ICO triplets for a subset of the articles; these are at the level necessary to permit meta-analysis and so (much) more granular than the existing categorical annotations. Annotation Details We randomly sampled 120 RCTs from the Evidence Inference dataset (DeYoung et al., 2020). Full-text papers are quite lengthy. Therefore, as a practical matter, we focus on only abstracts and the results sections of the full-text papers, where numerical results are most often reported. We compress the XML-formatted sections of these papers to reduce input length.3 We further pre-process the XML by removing all the attributes mostly about style rather than content. Finally, we convert each processed XML to markdown, further compressing the content while maintaining a vital organizational structure. We release the post-processed version of the data used in this work to ensure reproducibility. During the annotation process, we made some changes to the Intervention, Comparator, and Outcomes fields for certain cases. For Outcome fields that included multiple parts, we divided the parts into separate individual Outcomes and associated them with the same Intervention and Comparator values. For example, if the original Outcome label was \u201cweight 2. Since it is constructed from articles in the open access subset, this dataset includes full-texts of papers in XML format. 3. We use https://onlinexmltools.com/minify-xml. 5 Automatically Extracting Numerical Results from RCTs with LLMs reduction at 12 months, BMI reduction at 12 months,\u201d then we created two separate Outcome values\u2014one as \u201cweight reduction at 12 months\u201d and the other as \u201cBMI reduction at 12 months\u201d\u2014and associated these two new Outcomes with the same Intervention and Comparator. This results in having more ICO instances in the dataset than the original. In addition, we found cases when one Intervention field would have multiple interventions and the RCT report did not report numerical data for the combined, multiple interventions. In this case, we took a similar approach above and separated the parts into individual Interventions and associated them with the same Outcome and Comparator values. Finally for cases when the Outcome is associated with multiple numerical measurements in the RCT report, we transformed these Outcomes into multiple Outcomes with the specific measurement included in the value. For example, the Outcome of \u201cmotion range\u201d had several measurements including shoulder internal rotation, shoulder external rotation, and shoulder extension. In this particular example, we created three separate records in the annotated dataset each with different Outcomes: \u201cmotion range shoulder internal rotation,\u201d \u201cmotion range shoulder external rotation,\u201d and \u201cmotion range shoulder extension.\u201d These derived Outcomes had the same Intervention and Comparator as the original Outcome. We annotated each ICO from the dataset by following the annotation schema. The annotation schema included the following: 1. Type of outcome: binary or continuous 2. Data extraction for binary outcomes: intervention event number, intervention group size, comparator event number, and comparator group size (2x2 contingency table) 3. 
Data extraction for continuous outcomes: intervention mean, intervention standard deviation, intervention group size, comparator mean, comparator standard deviation, and comparator group size If a specific part of the numerical data is not found in the abstract or results section, the annotators left the field blank. We recorded metadata concerning whether the numerical information could be located within a figure or table and documented whether all of the desired numerical data was successfully retrieved. To keep the task fairly straightforward, we decided to leave instances of RCTs that reported medians instead of means blank. If there are multiple time points for a given Outcome and the time point is not explicitly stated, we use the primary time point based on the RCT. If this is not provided, we use the last time point. Additionally, we only include the standard deviations if they were explicitly reported, or if the 95% confidence intervals were provided, by back-calculating the standard deviations using the formula outlined in Higgins et al. (2018). We also report the token counts for each RCT abstract and results text (under OpenAI\u2019s tokenizer) to provide a sense of the input document length. We set aside 10 random RCTs as a development set for prompt engineering and used the remaining 120 RCTs as a test or evaluation set. We release this dataset via a GitHub repository.4 Annotator backgrounds Two co-authors of the paper with backgrounds in computer science but familiarity with standard meta-analysis metrics annotated each record independently. The annotators met regularly to agree on the final annotations. We also solicited 4. https://github.com/hyesunyun/llm-meta-analysis 6 Automatically Extracting Numerical Results from RCTs with LLMs Table 1: Statistics for the annotated dataset. Metric Dev Test Total # PMC Articles 10 110 120 # Prompts (ICOs) 43 656 699 # Binary Outcomes 11 172 183 # Continuous Outcomes 32 484 516 % With Enough Data for Point Estimates 62.79 58.84 59.08 Mean Articles Tokens 3331 3603 3581 feedback from a clinical researcher with expertise in evidence synthesis to discuss edge cases and ensure annotation quality. Dataset statistics Table 1 gives an overview of statistics for our final dataset. Our dataset includes 120 RCTs (abstracts and results sections) and 699 total records. The development set has 10 RCTs with 43 records and the test set has 110 RCTs with 656 records. A total of 183 outcomes are categorized as binary while 516 as continuous. We found that a total of 413 records (59.08%) have sufficient data to calculate the point estimates of the clinical trials. The numerical data extracted during annotations were mostly found in the tables of the RCTs as 471 instances fell under this case. The average token number for each RCT was 3,581. 5. Can LLMs Accurately Extract Numerical Results from Trial Reports? We evaluate whether modern LLMs are capable of extracting the numerical data necessary for meta-analysis from RCT reports. We assume a zero-shot setting here, i.e., we instruct models to extract the data elements of interest, without additional supervision. Few-shot learning may realize better performance for this task. However, RCT reports are often quite lengthy, even when considering only abstracts and results sections. This makes it practically difficult (and expensive) to include full examples in context. Therefore, we evaluate only the zero-shot case in this work. 
Ideally, one might ask directly for the point estimate of interest, for example, the odds ratio with respect to a (dichotomous) outcome of interest between the treatment and control groups. However, deriving this often requires intermediate steps to transform raw data (e.g., a two-by-two table tallying the number of participants in each group who did and did not experience a specific outcome) into point estimates (e.g., odds ratios) and standard errors. Given that LLMs are middling at math (Hong et al., 2024; Satpute et al., 2024; Urrutia and Araya, 2024)\u2014and the fact that statistical meta-analysis packages like metafor (Viechtbauer, 2010) will readily compute such quantities from raw data\u2014we instead adopt a stepwise approach intended to extract raw data. Specifically, we first prompt the model to infer whether a given outcome is binary or continuous based on a (natural language) description of the outcome, e.g, elevation of glucose after 1 hour; these descriptions are part of the annotations provided in the evidence inference dataset (DeYoung et al., 2020). Note that this assessment is made independently of the input article, and in some cases, the model may respond that it is unable to make this inference. The output from this step informs whether we prompt the model to extract raw data as would be reported in 7 Automatically Extracting Numerical Results from RCTs with LLMs Table 2: Full list of instruction-tuned LLMs used for the evaluation experiments. Model Type Parameters Sequence Length Training Data GPT-4 0125 General 1.7T 128000 Unknown GPT-3.5 Turbo 0125 General 175B 16385 Unknown Alpaca General 13B 4096 instruction-following demonstrations generated from OpenAI\u2019s text-davinci-003 Mistral Instruct v2 General 7B 32768 Unknown Gemma Instruct General 7B 8192 Web Documents, Code, Mathematical text, Instructions OLMo Instruct Science 7B 2048 Tulu 2 SFT Mix, Ultrafeedback Cleaned PMC LLaMA Biomedicine 13B 2048 Semantic Scholar Open Research Corpus, PMC LLaMA instructions BioMistral Biomedicine 7B 2048 PubMed Central Open Access Subset the dichotomous case (i.e., a two-by-two table tallying numbers of events in the treatment and control groups, respectively) and continuous instance (measurements and variances associated with the outcome of interest in both groups). Due to the limited sequence lengths of many smaller, open-source models, it was necessary to divide texts into segments sufficiently small to conform to a maximum length. This chunking process involves breaking down texts into smaller sections. Our approach includes pre-processing the text, converting number words to digits, and removing sentences without numerical values. We then iteratively concatenate segments until we have a set of chunks that are together at the token limit. We pass the raw data extracted from this step through specialized statistical software (statsmodels; Seabold and Perktold 2010 or metafor; Viechtbauer 2010) to derive point estimates and standard errors, i.e., the inputs necessary for meta-analysis. This sequential approach provides a form of transparency: Users can inspect the \u201craw\u201d outputs from LLMs to verify their correctness. We release the code and prompts used for the evaluation: https: //github.com/hyesunyun/llm-meta-analysis. 5.1. Large Language Models (LLMs) Recent work has demonstrated that LLMs are strong entity and relation extractors. Wadhwa et al. 
(2023a) showed that LLMs can achieve comparable results to fully supervised models for relation extraction, zero-shot. Related efforts have demonstrated the capabilities of zero-shot LLMs in healthcare specifically (Wei et al., 2021; Agrawal et al., 2022; Singhal et al., 2023; Sivarajkumar et al., 2024). In this work, we evaluate the following LLMs with respect to their ability to extract the data from RCT reports necessary for meta-analysis: \u2022 GPT-4 (Achiam et al., 2023): The latest version of the GPT-family of models from OpenAI. This is a large multimodal (text and image) model. We use gpt-4-0125 which has a maximum context length of 128k. Context length is the amount of text that an LLM can process and retain in memory at any given time. \u2022 GPT-3.55: Also from OpenAI, the model behind ChatGPT. We use gpt-3.5-turbo-0125 which can handle contexts of up to 16,385 tokens. 5. https://platform.openai.com/docs/models/gpt-3-5-turbo 8 Automatically Extracting Numerical Results from RCTs with LLMs \u2022 Alpaca 13B (Taori et al., 2023): An instruction fine-tuned version of the LLaMA-13B (Touvron et al., 2023) model with a max context length of 4,096 tokens. \u2022 Mistral 7B Instruct v26: An instruction fine-tuned version of the Mistral-7B-v0.1 (Jiang et al., 2023) model. Allows for a context length of 32,768 tokens. \u2022 Gemma 7B Instruct (Team et al., 2024): An open-weights LLM from Google DeepMind. Max context length: 8,192 tokens. \u2022 OLMo 7B Instruct (Groeneveld et al., 2024): Open Language Model (OLMo) from AI2. Features an input sequence length upper bound of 2,048 tokens. \u2022 PMC LLaMa (Wu et al., 2023): A 13B parameter model which was initialized to LLaMA-13B (Touvron et al., 2023) and then further pretrained over medical corpora. The model was then instruction-tuned. It offers a max input and output sequence length of 2,048. \u2022 BioMistral (Labrak et al., 2024): A Mistral-based model which was further pre-trained over data from PubMed Central Open Access. It has an input upper bound of 2,048 tokens (even though the base model has a significantly greater sequence length; presumably this was done due to resource constraints). 5.2. Metrics To measure the performance of LLMs in terms of their ability to extract numerical data from trial reports, we first consider their accuracy with respect to categorizing outcome types (as binary or continuous), and we then evaluate data extraction accuracy both in terms of exact and partial matches with reference (manually extracted) data. We consider data extracted from a trial report partially correct where a subset of the extracted numerical data matches the reference. For example, for a binary outcome with 4 numerical values (two-by-two table entries), correctly extracting 1, 2, or 3 of these may be viewed as a partial match. We calculate and report partial accuracy for all possible partial matches. Articles sometimes report results ambiguously, or not at all. In these cases, we wish the model to abstain from providing data. We also report the number of times the model mistakenly outputs \u201cunknown\u201d for a specific label. This measure is useful for understanding how conservative the model is, as it captures instances where humans do not find the results ambiguous but the models do. In addition to considering whether extracted data matches reference data (one binary indicator per ICO), we consider the magnitude of difference between extracted values and references (in standardized units). 
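As a sketch of how this magnitude-based comparison can be computed (our own illustration; the paper does not publish this helper, and taking absolute differences is our assumption), consider:

import math
from statistics import mean, stdev

def mean_standardized_error(model_estimates, reference_estimates):
    # Differences between standardized effect estimates derived from model outputs
    # and from the reference annotations, summarized with a standard error and a 95% CI.
    diffs = [abs(m - r) for m, r in zip(model_estimates, reference_estimates)]
    mse = mean(diffs)
    se = stdev(diffs) / math.sqrt(len(diffs))
    return mse, se, (mse - 1.96 * se, mse + 1.96 * se)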
This is informative as it indicates whether errors are likely to have an important impact on downstream meta-analysis (large magnitude differences would affect the overall treatment efficacy estimate \u02c6 \u03b8FE more). We calculate mean standardized errors (average of all the differences between the efficacy estimates from numerical data of LLMs and reference) and their variance (standard error and 95% confidence interval). 6. https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2 9 Automatically Extracting Numerical Results from RCTs with LLMs Table 3: LLM performances for inferring outcome type (binary or continuous). # Unknowns refers to instances when the model outputs the unknown token \u201cx\u201d; these are effectively incorrect (here the type should be inferable). GPT-4 GPT-3.5 Alpaca Mistral Gemma OLMo PMC LLaMA BioMistral Accuracy 0.713 0.607 0.739 0.201 0.665 0.290 0.732 0.133 F1 Binary 0.735 0.680 0.000 0.576 0.590 0.424 0.124 0.275 F1 Continuous 0.836 0.690 0.851 0.183 0.716 0.079 0.848 0.135 # Unknowns 155 152 1 489 0 5 15 409 6. Results We evaluated the performance of LLMs on three different tasks that are aimed to assist in automating meta-analyses in zero-shot. The three tasks are inferring outcome type, extracting numerical results for binary outcomes, and extracting numerical results for continuous outcomes. These tasks were performed independently of each other. LLMs were tasked to output the answers in a structured format (categorical answer for inferring outcome type and YAML format for data extraction). Lastly, we show the promise of using LLMs endto-end for automating meta-analyses with a case study where we reproduce a meta-analysis with fully automated data. 6.1. Extracting Data from Trial Reports Inferring outcome types Results for inferring outcome types are reported in Table 3. Alpaca-13B realizes the highest exact match accuracy of 0.739, followed by PMC LLaMA with 0.732. Although Alpaca-13B and PMC LLaMA have high accuracy, their F1 scores for the binary label are poor because the model mostly predicts the outcome type to be \u201ccontinuous\u201d, which is the majority class (74.78% of the test set). GPT-4 and GPT-3.5 achieve high accuracy and F1 scores, besting the open-source models. The open, smaller models we evaluated appear less stable, i.e., the F1 score varies considerably. Among these models, Gemma offers the best performance for exact match accuracy, F1 scores, and the number of instances where the model designates the outcome type as \u201cunknown\u201d. Binary outcomes Performance metrics for binary outcome data extraction are available in Table 4. Based on the accuracy metrics and number of complete data, GPT-4 significantly outperforms all the other models. GPT-4 has the highest overall exact match accuracy of 0.655, followed by GPT-3.5 with 0.298. Similar to the task of inferring outcome types, the open, smaller models vary substantially in terms of their match accuracies. Especially, the models trained on biomedical text have one of the worst performances with both PMC LLaMA and BioMistral achieving a mere 0.035 in total exact accuracy and producing the most number of \u201cunknowns\u201d leading to the lowest percentage of complete data. The mean standardized errors of the log odds ratio calculated from the extracted data are lowest for the best performing models and highest for the worst performing models. 
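The quantities involved follow directly from the extracted 2x2 counts; a small illustration of the per-study log odds ratio (with its standard error) and the inverse-variance fixed-effect pooling behind the overall estimate, written as plain Python rather than the statsmodels/metafor calls actually used in the pipeline, is:

import math

def log_or_and_se(ie, igs, ce, cgs):
    # ie/ce: event counts; igs/cgs: group sizes (intervention, comparator)
    a, b = ie, igs - ie
    c, d = ce, cgs - ce
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return log_or, se

def fixed_effect_estimate(tables):
    # Inverse-variance pooling of per-study log odds ratios (the overall effect referred to above)
    num = den = 0.0
    for ie, igs, ce, cgs in tables:
        log_or, se = log_or_and_se(ie, igs, ce, cgs)
        w = 1.0 / se ** 2
        num += w * log_or
        den += w
    return num / den, math.sqrt(1.0 / den)

print(log_or_and_se(22, 158, 10, 78))  # example 2x2 table: 22/158 events vs 10/78 events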
GPT10 Automatically Extracting Numerical Results from RCTs with LLMs Table 4: Performance in terms of extracting numerical values for binary outcomes. IE: intervention events; IGS: intervention group size; CE: comparator events, and; CGS comparator group size. Partial match numbers reflect leniency in the number of matches required for an instance to count as (partially) \u201ccorrect\u201d. The MSE is the mean standardized error of the log odds ratios calculated with data extracted from the model. The number of unknowns refers to the number of times the model produced the unknown token \u201cx\u201d when the reference was not unknown; in this case, the reference data contained unknowns 72 times. The percentage of complete data shows how many model outputs had enough data to calculate the point estimate (and variance), compared to the reference data. GPT-4 GPT-3.5 Alpaca Mistral Gemma OLMo PMC LLaMA BioMistral Total 0.655 0.298 0.035 0.164 0.135 0.012 0.035 0.035 IE 0.749 0.462 0.129 0.345 0.275 0.076 0.146 0.158 Exact Match IGS 0.842 0.655 0.094 0.515 0.509 0.170 0.088 0.053 CE 0.737 0.392 0.129 0.333 0.275 0.123 0.158 0.158 CGS 0.830 0.649 0.094 0.567 0.556 0.140 0.058 0.053 3 0.690 0.415 0.035 0.251 0.216 0.035 0.035 0.035 Partial Match 2 0.901 0.696 0.181 0.637 0.585 0.187 0.164 0.175 1 0.912 0.749 0.193 0.708 0.678 0.275 0.216 0.175 MSE 0.101 0.441 0.485 0.657 0.913 1.253 1.523 # Unknowns 41 145 490 28 90 319 524 612 % Complete 87.94 61.70 9.22 87.23 58.87 24.11 7.09 0.00 4\u2019s MSE is 0.101 with a standard error of 0.043 (95% CI 0.017 to 0.186) while BioMistral did not extract any computable data comparable with our reference data. Continuous outcomes Table 5 presents the performance metrics for continuous outcome data extraction. Due to this task involving more values to extract, LLM performance was quite low with even GPT-4 achieving exact match accuracy of 0.487. The second best model is GPT-3.5 with a total exact match accuracy dropping to 0.280. Similarly to binary outcome data extraction, the smaller, open-source models performed poorly. The mean standardized errors show the comparison between the effect estimates calculated from model outputs and the reference data. GPT-4 has the lowest value of 0.290 (SE: 0.112; 95% CI: 0.071 to 0.510), followed by GPT-3.5 with 0.951 (SE: 0.420, 95% CI: 0.127 to 1.775). LLMs further trained on biomedical texts consistently performed the worst with BioMistral producing no complete data to calculate the standardized mean difference. Error Analysis To characterize the types of errors that LLMs make when extracting numerical values from trial reports, we qualitatively evaluated mistakes observed on validation data. For this analysis, we focused on outputs from GPT-4 and Mistral as representative models; these were the best-performing closed and open LLMs evaluated, respectively. For outcome type inference, GPT-4 produced an \u201cunknown\u201d a total of 155 times, inferred an outcome to be continuous when it was in fact binary 25 times, and predicted a continuous 11 Automatically Extracting Numerical Results from RCTs with LLMs Table 5: Performance of extracting numerical values for continuous outcomes. IM: intervention mean, ISD: intervention standard deviation, IGS: intervention group size, CM: comparator mean, CSD: comparator standard deviation, and CGS: comparator group size. Partial match numbers refer to how many parts need to match the reference for the instance to count as correct. 
The MSE is the mean standardized error of the standardized mean differences calculated with data extracted from the model. The number of unknowns refers to the number of times the model produced the unknown token \u201cx\u201d when the reference was not unknown; in this case, the reference data contained unknowns 925 times. The percentage of complete data shows how many of the model outputs had enough data to calculate the point estimate (and variance), compared to the reference data. GPT-4 GPT-3.5 Alpaca Mistral Gemma OLMo PMC LLaMA BioMistral Total 0.487 0.280 0.039 0.095 0.087 0.035 0.039 0.041 IM 0.720 0.538 0.309 0.348 0.328 0.221 0.369 0.390 ISD 0.751 0.606 0.334 0.375 0.412 0.311 0.447 0.470 Exact Match IGS 0.734 0.641 0.216 0.507 0.534 0.190 0.107 0.087 CM 0.720 0.526 0.330 0.361 0.324 0.227 0.390 0.402 CSD 0.738 0.584 0.338 0.390 0.404 0.282 0.456 0.472 CGS 0.691 0.608 0.181 0.427 0.447 0.184 0.109 0.087 5 0.542 0.336 0.045 0.115 0.103 0.060 0.058 0.054 4 0.724 0.555 0.293 0.293 0.342 0.173 0.375 0.402 Partial Match 3 0.765 0.645 0.311 0.421 0.408 0.231 0.392 0.408 2 0.913 0.814 0.470 0.691 0.699 0.408 0.497 0.501 1 0.922 0.872 0.551 0.794 0.810 0.507 0.518 0.501 MSE 0.290 0.951 6.257 1.138 3.466 1.738 # Unknowns 422 437 1169 483 775 1213 1778 1985 % Complete 63.64 62.40 31.82 62.81 40.08 11.98 4.96 0.00 outcome as binary 8 times. Mistral 7B Instruct had 480, 34, and 1 errors, respectively. In addition, Mistral produced 9 badly formatted YAML outputs. For binary outcome extraction, GPT-4 errors broke down as follows: Producing incorrect numbers (34); Producing \u201cunknown\u201d for inferable values (20); and Outputting actual numbers when the reference is \u201cunknown\u201d (12). Mistral 7B Instruct yielded invalid output formats 3 times. In addition, Mistral produced incorrect numerical values (123); \u201cunknown\u201d values when they should not have (7); and confabulated numerical values for \u201cunknown\u201d reference values (25). For continuous outcome extraction, the majority of errors involved producing the wrong numerical value (GPT-4: 113; Mistral: 226); producing \u201cunknown\u201d values inappropriately (GPT-4: 142; Mistral: 171); and generating numerical values for \u201cunknown\u201d reference values (GPT-4: 38; Mistral: 171). Mistral also produced 5 poorly formatted outputs in this case. Appendix A visualizes the types and number of errors from GPT-4 and Mistral. We sampled 136 errors from GPT-4 and 148 errors from Mistral. We sampled up to 20 errors from each general error category mentioned above. Then, we conducted a qualitative error analysis to better characterize the mistakes made by these LLMs. We inductively 12 Automatically Extracting Numerical Results from RCTs with LLMs annotated possible sources and reasons for each sampled error and aggregated commonly found reasons. Selected samples of errors are available in Appendix B. In addition to LLMs producing the wrong labels for clearly defined outcomes, our analysis revealed that errors in inferring outcome type are often attributable to ambiguous phrasing of the outcome values. Since the additional context of the trial reports is not provided as input, some outcome values can be either reported as a binary outcome or continuous. Examples of this include words such as \u201crate\u201d and \u201clevel\u201d even in cases where they are treated as dichotomous, and \u201cfailure\u201d and \u201cdiscontinuation\u201d even when they are continuous. 
This suggests that providing additional context from the article when making this classification may improve performance. For outcome extraction tasks, we found that smaller models such as Mistral sometimes produced formats that do not conform to the expected outputs whereas GPT-4 did not have this problem. These bad formats were often the contributing factor for having more \u201cunknowns\u201d since badly formatted outputs were treated as \u201cunknown\u201d numerical data. We observed that incorrect outputs were often due to \u201challucinations\u201d (outputting numerical data not found anywhere in the trial report) or extracting values from the wrong intervention, comparator, or outcome measures from the wrong timepoint. Also, models often produced \u201cunknown\u201d as an answer if the target numerical value was a negative value. One possible explanation can be that these negative numbers are often reported in RCTs with two hyphens (\u201c--\u201d) rather than a single one preceding a number which can lead to the model ignoring them as proper numerical values. When RCTs report preand posttreatment measures, LLMs sometimes extract the values only from the pre-treatment and ignore the post-treatment numbers (whereas we need both for meta-analysis; typically one is interested in the comparative differences). One common apparent source of error occurs when the LLMs need to perform simple math such as division or subtraction to get the correct total group sizes. For example, an abstract might report that 40 participants were randomly assigned to two study groups. In this case, most humans would assume (absent additional information) that each group was assigned 20 participants. However, LLMs sometimes fail to make such inferences and output totally irrelevant numbers or \u201cunknown.\u201d Another common group size mistake appears to be due to the trials containing several different group sizes which correspond to different ICO triplets. Some outcome measures are reported based on the sample that successfully finished the study, for example. Finally, while apparently rare, LLMs sometimes confuse medians and means, or standard errors and standard deviations. 6.2. Towards Fully Automatic Meta-analysis: An Example To demonstrate the feasibility of conducting a standard (fixed effects) meta-analysis with LLMs, we provide a case study based on an existing meta-analysis published in the Cochrane Library on remdesivir for treatment of COVID-19 (Grundeis et al., 2023). For this exercise, we focus on one outcome: \u201cAll-cause mortality at up to day 28\u201d. This outcome is a dichotomous (binary) outcome, and the comparison is between remdesivir as the intervention and a comparator of standard care. The original meta-analysis includes four trial reports and reports all important numerical values extracted from each trial. 13 Automatically Extracting Numerical Results from RCTs with LLMs We followed the same pre-processing approach for preparing the trial reports after downloading their full text in XML via Bio.Entrez7 Python package (Cock et al., 2009). We prompted both GPT-4 and Mistral Instruct 7B to extract the numerical values. We start by inferring the outcome type, and then we use the outputs from this step to prompt the model to output the numerical values relevant to the given ICO (here: remdesivir, standard care, all-cause mortality at up to day 28). 
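Retrieving the full texts in XML through Bio.Entrez can be done roughly as follows (the contact e-mail and PMC identifier below are placeholders, not values from the paper):

from Bio import Entrez

Entrez.email = "researcher@example.org"  # NCBI requires a contact address; placeholder
handle = Entrez.efetch(db="pmc", id="PMC0000000", retmode="xml")  # placeholder PMC id
article_xml = handle.read()
handle.close()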
Figure 2 (forest-plot data). (A) Cochrane meta-analysis (reference): Wang, 2020, 22/158 vs 10/78, OR 1.10 [0.49, 2.45], weight 3.6%; Beigel, 2020, 59/541 vs 77/521, OR 0.71 [0.49, 1.01], weight 17.7%; Spinner, 2020, 3/193 vs 4/200, OR 0.77 [0.17, 3.50], weight 1.0%; WHO STC, 2021, 285/2743 vs 289/2708, OR 0.97 [0.82, 1.15], weight 77.7%; total 369/3635 vs 380/3507, pooled OR 0.92 [0.79, 1.07]. (B) meta-analysis from GPT-4 outputs: Wang, 2020, 22/158 vs 10/78, OR 1.10 [0.49, 2.45], weight 3.5%; Beigel, 2020, 59/541 vs 77/521, OR 0.71 [0.49, 1.01], weight 17.0%; Spinner, 2020, 5/396 vs 4/200, OR 0.63 [0.17, 2.36], weight 1.3%; WHO STC, 2021, 301/2743 vs 303/2708, OR 0.98 [0.83, 1.16], weight 78.2%; total 387/3838 vs 394/3507, pooled OR 0.92 [0.80, 1.07]. (C) meta-analysis from Mistral Instruct 7B outputs: Wang, 2020, 22/158 vs 10/78, OR 1.10 [0.49, 2.45], weight 3.5%; Beigel, 2020, 59/541 vs 77/521, OR 0.71 [0.49, 1.01], weight 17.1%; Spinner, 2020, 2/197 vs 4/200, OR 0.50 [0.09, 2.78], weight 0.8%; WHO STC, 2021, 301/2743 vs 303/2708, OR 0.98 [0.83, 1.16], weight 78.6%; total 384/3639 vs 394/3507, pooled OR 0.92 [0.80, 1.07]. Figure 2: Forest plots showing fixed-effect meta-analyses from (A) Cochrane review (reference; performed with manually extracted data), (B) GPT-4 extracted data, (C) Mistral Instruct 7B extracted data. For this particular case, both LLMs permit highly accurate meta-analytic estimates, fully automatically. 7. https://biopython.org/docs/1.75/api/Bio.Entrez.html
Both GPT-4 and Mistral correctly categorize the outcome type as binary. GPT-4 had an exact match accuracy of 0.500 and was able to match at least 2 values of the 2x2 outcome table correctly for all trials. Mistral performed comparably to GPT-4 in this case. The exact extracted data from the Cochrane reference and the two models are shown in Figure 2 as forest plots for comparison.8 Both models made mistakes in extracting data from two studies (Consortium, 2021; Spinner et al., 2020). The errors for Consortium (2021) were related to the correct numbers not being readily available in the trial text but requiring extra calculations. For Spinner et al. (2020), GPT-4 hallucinated the events and the total numbers for the intervention, while Mistral extracted the wrong number due to choosing from the wrong timepoint for the outcome measure. These errors were (very) minor when put into a fixed-effect meta-analysis; using outputs from both models resulted in the correct total log odds ratio and the 95% CI was off by only 0.1 (Figure 2). 7. Discussion In this work, we aim to assess whether modern LLMs are sufficiently capable of extracting data to produce accurate, fully automated meta-analyses. To this end, we comprehensively annotated a modest evaluation set of clinical trial reports with numerical findings attached to interventions, comparators, and outcomes. We then used this dataset to evaluate a wide range of LLMs in terms of their ability to infer structured numerical findings. Our results indicate that modern LLMs can perform this task with some accuracy, but difficulties remain. Massive general models like GPT-4 and GPT-3.5 perform fairly well, especially for tasks inferring outcome types and extracting binary outcomes.
Smaller, opensource models also showed some capability but fared comparatively worse. For some of the smaller models with minimal context windows, this may be because we had to pass chunks of the input trial reports through the network, increasing noise. Mistral 7B Instruct performed the best of the open-source models considered. Interestingly, despite being further trained on domain-specific text, PMC LLaMA and BioMistral performed considerably worse than other similar-sized general-purpose LLMs by producing the most number of \u201cunknowns\u201d and the lowest percentage of complete data. For extracting continuous outcomes, all the LLMs we evaluated performed below 50% exact match accuracy. We further demonstrated the use of LLMs in a case study in which we conducted a metaanalysis with automatically extracted results. This showed good results in almost replicating the exact total log odds ratio and confidence interval, suggesting a future in which metaanalysis is fully automatable. However, LLMs performed fairly well in this example probably because the constituent RCT reports have a clear structure in the outcome measure reports with limited ambiguity (such as different time points). Also, the example only contains 4 RCTs, which helps in reducing the accumulation of errors that can often happen with the inclusion of more data. Our work shows that modern LLMs such as GPT-4 and Mistral are promising for extracting numerical findings from trial reports to produce automated meta-analyses. However, they fail to extract findings that require more specialized inference when multiple similar outcome measures are reported or the outcome measures are ambiguous. For now, 8. Forest plots show point estimates and associated confidence intervals corresponding to individual studies, as well as the aggregated (overall) estimate and interval. 15 Automatically Extracting Numerical Results from RCTs with LLMs these should remain an assistive technology, although it seems plausible that near-term improvements in LLMs may permit robust automatic meta-analysis. Limitations There are several limitations to this work. Due to the time required to perform detailed annotations, the evaluation dataset we have introduced is a small sample. In addition, while the annotations were performed carefully by individuals familiar with meta-analysis, they did not have clinical background (that said, this work was done in consultation with a clinician to ensure accuracy). We only investigated zero-shot applications of LLMs, mostly because of the context window that would be required to include examples of data extraction from full-texts. We did not attempt few-shot or fine-tuning. Furthermore, we did only minimal and informal \u201cprompt engineering\u201d; it is certainly possible that alternative instructions may yield better (or worse) performance. A related limitation is that we tasked LLMs with inferring outcome types from only descriptions; providing additional context from articles may improve the performance on this first subtask. Acknowledgments This research was partially supported by National Science Foundation (NSF) grants RI2211954 and IIS-1750978, and by the National Institutes of Health (NIH) under the National Library of Medicine (NLM) grant 2R01LM012086."
17
+ }
intro_8K/test_introduction_short_2405.01693v1.json ADDED
@@ -0,0 +1,16 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01693v1",
3
+ "title": "Adversarial Attacks on Reinforcement Learning Agents for Command and Control",
4
+ "abstract": "Given the recent impact of Deep Reinforcement Learning in training agents to\nwin complex games like StarCraft and DoTA(Defense Of The Ancients) - there has\nbeen a surge in research for exploiting learning based techniques for\nprofessional wargaming, battlefield simulation and modeling. Real time strategy\ngames and simulators have become a valuable resource for operational planning\nand military research. However, recent work has shown that such learning based\napproaches are highly susceptible to adversarial perturbations. In this paper,\nwe investigate the robustness of an agent trained for a Command and Control\ntask in an environment that is controlled by an active adversary. The C2 agent\nis trained on custom StarCraft II maps using the state of the art RL algorithms\n- A3C and PPO. We empirically show that an agent trained using these algorithms\nis highly susceptible to noise injected by the adversary and investigate the\neffects these perturbations have on the performance of the trained agent. Our\nwork highlights the urgent need to develop more robust training algorithms\nespecially for critical arenas like the battlefield.",
5
+ "authors": "Ahaan Dabholkar, James Z. Hare, Mark Mittrick, John Richardson, Nicholas Waytowich, Priya Narayanan, Saurabh Bagchi",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "cs.CR",
9
+ "cats": [
10
+ "cs.CR"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Model AND Based AND Reinforcement AND Learning",
14
+ "gt": "Deep Reinforcement Learning (DRL) has been successfully used to train agents in several tactical and real-time strat- egy games such as StarCraft [1] and DoTA [2], which in- volve complex planning and decision-making. These agents have demonstrated proficiency in coming up with winning strategies comparable to that of experienced human players (AlphaStar [3], OpenAI Five [4]) through techniques like self\u2013 play, imitation learning, etc. As a result, in recent years, there has been mounting interest in the military research community in applying these RL techniques to tasks such as operational planning and command and control (C2). Simultaneously, traditional game engines have been repurposed to facilitate automated learning (pySC2 [5], SMAC [6,7], pyDoTA [8]) and new ones developed for battlefield simulation [9\u201313], cre- ating what are effectively digital wargames. The driving force ... ... suboptimal action Trained\u00a0 C2 Agent ... ... WIN LOSS Figure 1: Robustness Evaluation Methodology: The figure shows the difference between a benign (top) and malicious (bottom) environment at timestep t. Observations at t \u22121 are input to a C2 agent that has been pretrained in a benign environment. The agent samples a suboptimal action as a result of the injected adversarial perturbations (orange) in the input which eventually leads to a loss for the BlueForce. behind this research has been to improve and augment strate- gies used on the battlefields of the future, which are expected to be more complex and unconventional, possibly beyond the cognitive abilities of a human commander. Recent works [14] have had considerable success in win- ning simulated wargames using C2 agents that have been trained through reinforcement learning techniques and syn- thetic data. This has been possible partly due to the scalability of RL training which has proved to be a massive advantage for exploring and exploiting different strategies when faced with difficult or complicated scenarios and only partial infor- mation about the environment. However, these evaluations are done in benign environments where information available to the C2 agent is assumed to be uncorrupted. Realistically, this is unlikely in battlefield situations as information there may have inherent noise because of the mode of collection (from sensors or other input source) or may be tampered with by enemy forces. In this work, we evaluate the robustness of such a trained agent when subject to potentially adversarial 1 arXiv:2405.01693v1 [cs.CR] 2 May 2024 inputs in the context of C2. In order to do so, first, we use the StarCraft II Learning Environment (SCLE [15]) to model conflict between two teams, the BlueForce and the RedForce. The C2 agent directs the BlueForce to win the battle by eliminating the RedForce troops. Next, we assume the an attacker present in the en- vironment tampers with the observations collected from the battlefield before they are made available to the C2 agent. The added perturbations termed adversarial perturbations are constructed to be highly imperceptible to evade detection while maximally subverting the C2 agent\u2019s policy to some- thing deleterious (Figure 1). We then evaluate the resulting drop in performance of the agent on several metrics as well as analyze the deviation in the course of action from a military perspective. 
Our key contributions are summarized as follows: \u2022 We empirically show the vulnerability of the trained C2 agent to even small adversarial perturbations in the input observations. Our studies quantify some expected trends and bring out some non-obvious trends. For example, our studies reveal that partially trained agents appear to be more resistant to noise than fully trained agents. \u2022 For generality, we evaluate the effectiveness of the attacks on two different scenarios which correspond to the C2 agent\u2019s task to attack and defend respectively. \u2022 We also evaluate agents trained using two state-of-the-art RL algorithms, A3C and PPO, and comment on their robust- ness to injected noise. \u2022 We provide interpretability to the model\u2019s outputs by pro- filing the shift in action distributions predicted by the policy network caused by the attacker\u2019s perturbations. Our evaluations demonstrate the susceptibility of vanilla RL training to adversarial perturbations and the need for robust training mechanisms and sophisticated detection and preven- tion techniques especially for such critical scenarios. The structure of the paper is as follows. First we provide brief backgrounds on the use of RL for C2 followed by a description of the StarCraft Environment and two custom scenarios \u2013 TigerClaw and NTC in Section 2.1, which we use for training our agent. In Section 4, we describe the state and action space of our custom scenarios and the details of the RL agent. Section 4.2 and Section 5 contain the attack methodology and the evaluations respectively. Finally we include a discussion on the need for utilizing adversarially robust training techniques and directions for future work.",
15
+ "main_content": "2.1 RL for Command and Control (C2) Mission success in military C2 requires disseminating control actions to designated forces based on real-time Intelligence, Surveillance, and Reconnaissance (ISR) information of the operational environment, such as kinetic and non-kinetic (e.g., weather, political, economic, cultural) variables, and terrain information. Before the battle begins, the commander and their specialized staffing officers must develop a detailed mission plan encapsulated as a Course Of Action (COA). The development of a COA requires a detailed analysis of the operational environment, predictions of the opposing force\u2019s COA (strategy), and wargaming to identify a friendly force COA that is finely tuned to meet the mission requirements [16]. Typical military planning is solely based on the commander and staffing officers, and their allocated time before the battle dictates the number of possible COAs that can be considered. Additionally, each potential COA must be wargamed and fine-tuned against a small set of opposing force COAs to identify strengths and weakness. This can result in suboptimal (heuristic-based) COA [17]. To circumvent this limitation, future military planning is envisioned to incorporate an Artificial Intelligent (AI) commander\u2019s assistant that can generate and recommend COAs to aid in the military planning process. Recent developments in deep RL for strategy games provides a promising direction to develop control policies for C2 [3]. The algorithms developed allow an AI agent to learn the best control policy that optimizes a predefined reward function by playing millions of simulated games through the exploration of many environmental state and action pairs. To extend these formulations for military C2 requires modeling, simulating, and wargaming a large number of battles faster than real-time in a virtual environment that emulates realistic combat characteristics. Furthermore, the existing RL algorithms must be adapted to handle a large number of heterogeneous actors, doctrine based control strategies, complex state and action space, and uncertain information. Previous work on RL for C2 found that the StarCraft II gaming engine, developed by Blizzard Entertainment [1], provides a simulation environment that can be militarized and used for prototyping an AI commander\u2019s assistant [14, 18]. The following subsections provide details of the StarCraft II environment and the scenarios considered in this paper. 2.2 StarCraft II C2 Environment StarCraft II is a multi-agent real-time strategy game developed by Blizzard Entertainment [1] that consists of multiple players competing for influence and resources with the ultimate goal of defeating the other players. As a collaboration between Deepmind and Blizzard Entertainment, the StarCraft II Learning Environment (SC2LE) Machine Learning API was developed to allow researcher to study many difficult challenges associated with RL [5]. For example, controlling heterogeneous assets in a complex state and action space with uncertainties. Furthermore, the StarCraft II Editor allows developers to construct custom scenarios, making it possible to 2 develop RL agents for C2. Previous work extended this framework to militarize the SC2LE and develop a C2 simulation and experimentation capability that interfaces with deep RL algorithms via RLlib [18], an open-source industry-standard RL library [19]. 
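While the militarized SC2 wrapper itself is not included here, the interface it exposes to an RL library can be sketched as a Gym-style environment; everything below (class name, dummy dynamics, exact space definitions) is our illustration of the state/action layout described in Section 3.1, not the authors' code:

import numpy as np
import gym
from gym import spaces

class SC2C2EnvSketch(gym.Env):
    # Stand-in for the militarized StarCraft II wrapper: a 256x256 screen, a length-287
    # nonspatial vector, and a (NO_OP/ATTACK, x-quadrant, y-quadrant) action per control group.
    def __init__(self):
        self.observation_space = spaces.Dict({
            "screen": spaces.Box(0, 255, shape=(256, 256), dtype=np.uint8),
            "nonspatial": spaces.Box(-np.inf, np.inf, shape=(287,), dtype=np.float32),
        })
        self.action_space = spaces.MultiDiscrete([2, 3, 3])

    def reset(self):
        return self.observation_space.sample()

    def step(self, action):
        obs, reward, done, info = self.observation_space.sample(), 0.0, False, {}
        return obs, reward, done, info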
The icons were re-skinned to portray standard military symbology, new StarCraft II maps were designed to emulate realistic combat characteristics, such as environment terrain and asset attributes (e.g., visibility, weapons, weapons ranges, and damage), and a custom wrapper was designed to train RL agents. For a detailed description of this framework, see [18]. In our work, we use this framework to develop baseline RL agents for C2 and study the effects of adversarial attacks on the learned policies in the following two scenarios. 2.2.1 Custom StarCraft II Scenarios Figure 2: TigerClaw Scenario Figure 3: TigerClaw: (Right) The geographical map of the scenario and (Left) the correspondingly designed map in StarCraft TigerClaw: The TigerClaw melee map (Figure 3) is a highlevel recreation of the TigerClaw combat scenario (Figure 2) developed using the StarCraft II map editor. The scenario was developed by Army subject-matter experts (SMEs) at the Captain\u2019s Career Course, Fort Moore, Georgia. The BlueForce is an Armored Task Force (TF) which consists of combat armor with M1A2 Abrams, mechanized infantry with Bradley Fighting Vehicles (BFV), mortar, armored recon cavalry with BFV, and combat aviation. The RedForce is a Battalion Tactical Group (BTG) with attached artillery battery and consists of mechanized infantry with BMP, mobile artillery, armored recon cavalry, combat aviation, anti-armor with anti-tank guided missiles (ATGM), and combat infantry. As seen in Figure 2, the BlueForce is a supporting effort with a mission to cross a dry riverbed (Wadi) and defeat the defending RedForce in preparation for a forward passage of lines by the main effort. The terrain is challenging in this scenario because there are only two viable wadi crossing points (Figure 3). Figure 4: NTC: (Right) The geographical map of the scenario and (Left) the correspondingly designed map in StarCraft National Training Center (NTC): The NTC map is a representation of the Ft. Irwin and Bicycle Lake area as seen in Figure 4. The Blue and RedForce units are the same as TigerClaw, but there is also a capability to randomize the starting forces. RedForce initially begins in the Bicycle Lake region, while BlueForce is set up in defense of Fort Irwin. Thus the RedForce will maneuver from the Bicycle Lake area and attempt to destroy the BlueForce in the Ft. Irwin area. The goal of this map was to investigate the impact of new terrain and to determine if the BlueForce would adopt a defensive strategy. In both SC2 maps, each side has been represented down to the platoon echelon. The speed, attack range, and damage attributes of SC2 units have been scaled to estimate the capabilities of the relevant platforms in order to facilitate reinforcement learning. 2.2.2 Reinforcement Learning Algorithms Asynchronous Advantage Actor-Critic (A3C) [20]: A3C is an asynchronous version of the Advantage Actor-Critic (A2C) algorithm for RL that uses multiple agents to learn a policy and an estimate of the value function in parallel. At each timestep, the agent takes an action based on the current policy and receives a reward while transitioning to a new state. It then calculates the advantage function and updates its local copies of the actor-critic networks. This process takes place independently and asynchronously for each agent. A central parameter server that stores the global parameters of 3 the networks is updated periodically by each agent and is used to initialize the local parameters of the agents\u2019 local networks. 
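A compact sketch of the asynchronous update just described, assuming a local network that returns (logits, values) and a shared optimizer built over the global network (none of this is the authors' code), is:

import torch
import torch.nn.functional as F

def a3c_worker_update(local_net, global_net, global_opt, trajectory, gamma=0.99):
    # trajectory: batched states, integer actions, per-step rewards, and a bootstrap value
    states, actions, rewards, bootstrap_value = trajectory
    returns, R = [], bootstrap_value
    for r in reversed(rewards):              # n-step discounted returns
        R = r + gamma * R
        returns.append(R)
    returns = torch.tensor(list(reversed(returns)), dtype=torch.float32)

    logits, values = local_net(states)
    advantage = returns - values.squeeze(-1)
    log_probs = F.log_softmax(logits, dim=-1).gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = -(log_probs * advantage.detach()).mean() + 0.5 * advantage.pow(2).mean()

    local_net.zero_grad()
    global_opt.zero_grad()
    loss.backward()
    for lp, gp in zip(local_net.parameters(), global_net.parameters()):
        gp._grad = lp.grad                   # push local gradients to the shared parameters
    global_opt.step()
    local_net.load_state_dict(global_net.state_dict())  # pull the updated global weights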
The asynchronous nature of the updates leads to more efficient exploration of the environment and reduces the correlation between the updates leading to more stable and efficient learning. Typically, the policy and value functions are parameterized by a shared neural network with a softmax output for the policy and a linear output for the value function. Proximal Policy Optimization (PPO) [21]: PPO is a policy gradient method that aims to improve the performance and stability of trust-region [22] methods by introducing a clipped surrogate objective function. This objective function effectively restricts policy change to a small range thus reducing the variability in training of the actor. During training, the PPO algorithm iteratively updates the actor as well as the action value function and state value function using the temporal-difference (TD) method. Further, this objective function enables PPO to guarantee monotonic improvements in the objective. This allows for faster convergence without strict constraints leading to more accurate and stable performance of the agent. 3 RL Environment In this section we provide technical details of the RL environment used to train the C2 agent. 3.1 State and Action Space The state space observed from the StarCraft II C2 environment consists of both a screen or visual representation and a nonspatial representation. The screen representation is a size 256\u00d7256 image of the the minimap that depicts the current environmental state. The value of each pixel provides the agent with an understanding of the BlueForce and RedForce units\u2019 positions along with terrain information. The nonspatial representation consists of a vector of nonspatial features of size 287 that encodes all of the game and unit information, such as unit type, health, position, game scores etc. The action space within the StarCraft II environment is large since it is a combination of the following three components: the number of units available to the commander, the number of possible actions that each unit can execute, and the (x,y) pixel location within the size 256 \u00d7 256 minimap where the action will be executed. To reduce the action space, the StarCraft II C2 environment first restricts the number of units by defining control groups, which lumps common units together to reduce the overall number of units needed to be controlled. In our custom scenarios, the control groups are defined as \u2013 1 BlueForce: \u201cAVIATION\", \u201cMECH_INF\", \u201cMORTAR\", \u201cSCOUT\" and \u201cTANK\" 2 RedForce: \u201cANTI_ARMOR\", \u201cARTILLERY\", \u201cAVIATION\", \u201cINFANTRY\" and \u201cMECH_INF\" Within each control group, we restrict the possible actions to be either \u201cNO_OP\" or \u201cATTACK(x,y)\", where the function of the former is to essentially do nothing, while the latter moves the control group (i.e., all associated units) to a desired (x,y) location and attacks any enemy within its firing range along the way. Additionally, the number of (x,y) pixel locations is reduced by segmenting the minimap into nine disjoint quadrants with locations defined as \u201cLEFT\", \u201cCENTER\", and \u201cRIGHT\" for the x-axis, and \u201cTOP\", \u201cCENTER\", and \u201cBOTTOM\" for the y-axis, where the exact pixel location is the center of the quadrant. 3.1.1 Reward Structure TigerClaw: The reward function for the TigerClaw map consists of \u2022 Terrain Reward: +10 points for each BlueForce unit crossing the Wadi (a dry river bed) and \u221210 points for retreating back. 
\u2022 Attrition Reward: +10 points for destroying a RedForce unit and \u221210 points if a BlueForce unit is destroyed. The terrain reward is meant to reward an offensive strategy for the BlueForce. It is meant to reinforce crossing of the Wadi to initiate conflict with the RedForce in order to take over desired locations. To drive this scenario, the RedForce has been scripted to defend the desired locations. NTC: The NTC map shares the same attrition reward function as TigerClaw, but the terrain rewards are not included. That is, \u2022 Attrition Reward: +10 points for destroying a RedForce unit and \u221210 points if a BlueForce unit is destroyed. The lack of terrain reward is meant to encourage the BlueForce to defend and focus on maximizing RedForce losses. To drive this scenario the RedForce has been scripted to seek and destroy the BlueForce. 3.2 RL Agent Description 3.2.1 The Policy Network The C2 agent in this paper uses the same policy network as our previous work [14]. The network takes as inputs, three kinds of observations and the control group \u2022 screen The screen representation discussed in Section 3.1 consisting of a vector of an image of size 256\u00d7256. \u2022 nonspatial The nonspatial representation discussed in Section 3.1 consisting of a vector of size 287. \u2022 action mask The action mask is used to restrict the action space to the allowed actions as described in Section 3.1. 4 Conv2D (conv1) Conv2D (conv2) Conv2D (conv3) screen \u00a0features screen nonspatial + control group FC (fc1) nonspatial features FC (fcv) FC (fc2) action logits FC (fcx) x logits value FC (fcy) y logits masked logits action mask Figure 5: C2 Policy Network: Compuational graph of the policy network of our C2 agent. The inputs and outputs are shown in blue and yellow respectively. Shaded rectangles represent the concatenate operation. Conv2D and FC layers are ReLU activated. \u2022 control group A one-hot encoding of the selected control group (specified in Section 3.1) that will be the focus of action prediction by the policy network. The encoding is then concatenated to the nonspatial input vector. At each time step, the control group is sequentially selected to identify their next actions. The output of the network is an approximation of the value function and an 8 element vector arranged as (action-logits, x-logits, y-logits). The action-logits is a 2 element vector that determines the action. The x,y-logits are 3 dimensional entries each with the logit values corresponding to the positions (LEFT,CENTER,RIGHT) and (TOP,CENTER,BOTTOM) on the map, respectively. This output vector is used to create a probability distribution over the action space from which the next action is sampled by the agent. 4 Adversarial Attacks on RL agents Prior works on the robustness of RL training have focused on evaluating the algorithms from a perspective of sensitivity to environment dynamics [23,24] or the ability to train adaptive adversarial policies against them [25, 26]. In this work, we focus on the former approach. A lot of previous research has shown that neural network predictions are highly sensitive to perturbations in their input space [27\u201329]. As DRL approaches typically rely on parameterizing policies with neural networks, they suffer from the same vulnerabilities. 4.1 Adversarial Attacks on Image Classifiers An adversarial perturbation is a small perturbation that is added to a benign input to fool a trained network into predicting an incorrect output. 
Typically the perturbation is constrained to be small enough to avoid detection. For example, in the case of image classifiers, the ideal adversarial perturbations in pixel space would be imperceptible to a human observer but would cause the neural network classifier to predict an incorrect class with high confidence. Formally, for a trained classifier f(w;.) and an input image x, computing an adversarial sample x\u2032 = x+\u03c3 involves computing \u03c3 such that f(w,x\u2032) \u0338= f(w,x) while minimizing d(x\u2032,x), where d is a distance metric such as Euclidean distance. 4.1.1 Fast Gradient Sign Method (FGSM) [28] FGSM provides an efficient method for generating adversarial samples given whitebox access to the model. Given a trained classifier model f(w;.), a first order attacker can generate adversarial perturbations for the benign sample x by first computing the gradient of the classifier loss (L) with respect to the input. The weighted perturbation when added to x, creates the adversarial sample x\u2032. x\u2032 = x+\u03b5\u00b7sgn(\u2207xL( f(w;x),yt)) where yt is the ground truth label of the sample x and \u03b5 is the perturbation budget that controls the amount of distortion to the original sample. In the case of FGSM, it bounds the l\u221enorm of the perturbation added to the original image. Intuitively, the attack moves the sample x in the direction of \u2207xL(f(w;x),yt) which maximizes the classifier loss L. In this work, we prefer FGSM to more powerful attacks such as PGD [29] and C&W [27] because of its lower computational cost, which leads to a more efficient attack. 4.2 Inference Time Attacks on Policies Inference time attacks are used against pre-trained agents which are deployed in the wild. The goal of the attack is to induce the trained policy network into predicting a suboptimal action distribution by surreptitiously perturbing the input observations. This inevitably leads to the agent losing its expected reward. In the wild, such an attack could be realized as a cyberattack where sensors collecting data from the battlefield could be compromised and transmit corrupted data. A high level overview of the attack is presented in Figure 1. 4.2.1 Threat Model First we assume a C2 agent that has been trained in a benign environment and has learned an optimum policy. This agent is subsequently deployed in an unsafe setting to direct BlueForce troops in a battle against the RedForce. Next, we consider an attacker that has the ability to intercept and modify observations coming from the environment before they are received by the C2 agent to select the next actions. Through 5 (a) PPO/TigerClaw: Episode Rewards over 100 episodes (b) PPO/TigerClaw: Reward trend w.r.t perturbation budget (\u03b5). (c) PPO/NTC: Episode Rewards over 100 episodes (d) PPO/NTC: Reward trend w.r.t perturbation budget (\u03b5). Figure 6: Inference time attack on an agent in the TigerClaw (a),(b) and NTC (c),(d) scenario trained using PPO. (a) A3C/TigerClaw: Episode Rewards over 100 episodes (b) A3C/TigerClaw: Reward trend w.r.t perturbation budget (\u03b5). (c) A3C/NTC: Episode Rewards over 100 episodes (d) A3C/NTC: Reward trend w.r.t perturbation budget (\u03b5). Figure 7: Inference time attack on an agent in the TigerClaw (a),(b) and NTC (c),(d) scenario trained using A3C. such modifications, the attacker hopes to influence the agent to sample an incorrect action that leads to a low reward for the BlueForce. 
To generate such input perturbations efficiently, we also assume that the attacker has white-box access to the policy network used by the C2 agent. 4.2.2 Attack Methodology We use FGSM as a basis for generating perturbations at inference time and use the modification presented in [23] to target policies instead of classifiers. Unlike supervised learning, we do not know the \u201cground truth\" action at any given timestep in RL and we assume that the action predicted by the policy network with the highest likelihood is optimal. It should be noted that since the output of our policy network has three different components (action logits, x-logits, y-logits), ideally the ground truth vector should be computed for each component separately. With this in mind we construct the ground truth vector with a degenerate distribution. That is, for an input observation x, for each component of the output, we take the element with the highest value in the vector y = [y(1),y(2),y(3)] = f(w;x) and assign weight 1 to it\u2019s logit value and 0\u2019s everywhere else. In other words, y\u2032(j) i = \u001a 1, if y(j) i = max(y(j)) 0, otherwise where yi is the ith element of vector y. However, we find that treating the components as a single unit and calculating the degenerate distribution on the entire output is more efficient while making for an effective attack. After making this relaxation, the malicious perturbation is then calculated for x using the gradient \u2207xL( f(w;x),y\u2032), where L is the cross-entropy (CE) loss function. We use this technique to craft perturbations for two of the three input components namely \u2013 the screen and nonspatial components. We do not perturb the action mask as it is simply used to mask out invalid actions predicted by the network. As a result, perturbing this component does not conform to any realistic setting. 5 Evaluations 5.1 Experimental Setup Our experiments were conducted on a cluster node with two AMD Epyc 7763 \u201cMilan\" CPUs 128 cores with 256GBs of main memory. We used the SC2 framework described in Section 2.2 to train our RL agents. Each training run was performed using 90 workers, each occupying a single core (90 parallel instances of the game) and consumed about 170GBs of memory. The PPO and A3C agents were trained for 5K/25K iterations respectively corresponding to around 40M/100M timesteps or 134K/376K episodes respectively. The rollouts were visualized using a custom pygameinterface. 6 5.2 Evaluation of Inference Time Attacks on Agent Reward For evaluating effectiveness of inference time attacks, first we train our C2 agent in a benign environment until it learns a policy that consistently achieves a high reward on the given scenario. The trend of the attained reward when the agent is deployed in the presence of an attacker is then studied over the course of a 100 episodes or rollouts. For a comprehensive evaluation, we consider two scenarios or tasks \u2013 1 TigerClaw (Attacking BlueForce) and 2 NTC (Defending BlueForce). We also consider two different state-of-the-art training algorithms, A3C and PPO. In both scenarios the C2 agent controls the movements of the BlueForce while the RedForce follows a fixed policy. A detailed description of these scenarios was given in Section 2.2. To further analyze and understand the shift in agent strategy we use several quantitative metrics as well as observe multiple rollouts. Our insights are presented below. 
5.2.1 Vulnerability to Adversarial Perturbations Figures 6 and 7 show the resulting reward trends for an agent trained using PPO and A3C, respectively. We present boxplot statistics aggregated over 100 episodes, for the rewards attained by the agent under attack with different perturbation levels in Figures 6b and 6d (PPO) and Figures 7b and 7d (A3C). \u03b5 = 0.0 corresponds to the benign case when no perturbations are made. In most cases, we observe a steep decrease in the median reward even for minute perturbations (\u03b5 = 0.05,0.08). Increasing \u03b5 also shows diminishing returns for attack effectiveness. To maintain the secrecy of the attack, the perturbation budget needs to be small enough to be imperceptible to a human auditor, especially for the screen component of the input. Figure 8 shows a visual comparison of perturbations of the screen component at different levels. From our evaluations we observe that \u03b5 = 0.1 is optimal in lowering the reward while keeping perturbations to a minimum. Figure 8: Visualizing perturbations of the screen component at different \u03b5 levels in the TigerClaw scenario. Higher \u03b5 levels result in a greater amount of noise. Figure 9 provides explainability into the utility of the attack by showing how it changes the actions taken by the agent. We first plot the action distribution predicted by the policy network on benign observations collected at certain timesteps during a PPO/TigerClaw rollout. These are shown in Figure 9a, 9c, and 9e, respectively. We then compare it to the distribution predicted by the same policy network after maliciously perturbing the observations with our attack for \u03b5 = 0.1 in Figures 9b, 9d, and 9f, respectively. The probabilities are plotted on a log scale. As can be observed the actions with the highest likelihood shift from (1,1,2) \u2192(1,2,1),(1,1,2) \u2192 (0,2,1) & (0,0,0) \u2192(1,2,1) at timestep 10, 21, and 100. timestep = 10 (a) Predicted Action: ATTACK(CENTER,BOTTOM) (b) Predicted Action: ATTACK(RIGHT,CENTER) timestep = 21 (c) Predicted Action: ATTACK(CENTER,BOTTOM) (d) Predicted Action: NO_OP timestep = 100 (e) Predicted Action: NO_OP (f) Predicted Action: ATTACK(RIGHT,CENTER) Figure 9: Shift in the action distributions predicted by the policy network at different timesteps during a PPO/TigerClaw rollout. (Left) Action distribution on benign observations. (Right) Action distribution predicted by network after perturbing observation with \u03b5 = 0.1. Each caption represents the action sampled with highest likelihood. 5.2.2 Analyzing Agent Behavior under Attack We observe a definite change in the strategy used by the C2 agent under an attacker\u2019s influence. Over a number of rollouts in either scenario, we frequently observe artifacts like erratic troop movements where the BlueForce troops keep oscillating about a single position for certain number of timesteps, straying off course of the aviation units which are crucial for winning the TigerClaw scenario, etc. Similarly in the NTC scenario, the attacks cause the BlueForce to retreat towards the bottom of the map, away from the advancing RedForce as opposed to following the original strategy \u2013 to 7 Figure 10: Sampled actions of a PPO agent over 5 TigerClaw rollouts. We can observe a shift in the distribution in the presence of an attacker. aggressively pursue and eliminate the RedForce units. 
To better understand this shift in behavior, we compare actions taken by the agent in a benign environment to those taken by the same agent when an adversary is present. That is, at each timestep we compute the actions that would be taken by the agent if the observations were maliciously modified and compare these \u201csubverted actions\u201d to the actions that are actually taken by the agent at that timestep. We do so by perturbing the observations using our attack (with \u03b5 = 0.1) and using the agent to predict the next action. The normalized action frequencies are plotted in Figure 10 for a PPO/TigerClaw agent over 5 episodes. A clear difference is observed in the actions taken in the two cases. Actions (0,2,1), (1,1,1), and (1,2,1) are sampled frequently when perturbed observations are presented to the agent. These actions misdirect the BlueForce as they target areas on the map that do not contain any RedForce troops. Figure 11: Actions taken by the BlueForce during a TigerClaw rollout in a benign setting compared to those taken in the presence of an inference time (FGSM) attacker. In order to examine the BlueForce movements more thoroughly, we select one episode and for each timestep plot the subverted actions and the actual actions taken (Figure 11) by the agent. A rollout in a benign environment sees the BlueForce enter conflict and destroy most of the RedForce troops in the first 100 timesteps. This corresponds to the actions (1,1,2), (1,1,1), and (1,1,0), which are taken frequently within the first 50 timesteps. The (0,1,1) actions correspond to NO_OPs that are taken after exiting conflict when most of the RedForce are killed. In the presence of an attacker, however, in the first 50 timesteps the perturbations cause the C2 agent to sample actions like (1,2,1) with high frequency. As a result, the BlueForce are misdirected and end up being killed by the RedForce. Figure 12: Comparing the impact of the inference time attack on additional game metrics for a PPO agent in TigerClaw \u2013 (a) casualty comparison, (b) BlueForce partial win %, (c) BlueForce health left, and (d) RedForce health left. The results are aggregated over 100 rollouts. This is also reflected in the casualty metrics of both teams in Figure 12a. There are more BlueForce casualties and fewer RedForce casualties in the presence of an \u03b5 = 0.1 attacker. To get an idea of the impact of each unit on their respective team, we look at the health remaining percentage of each control group at the end of an episode. Figures 12c and 12d show the health statistics for the BlueForce and RedForce respectively, aggregated over 100 episodes. We can see in Figure 12c that the health remaining of the aviation units drops significantly to 2% in the presence of an attacker. Rollouts show that these units are critical to winning the TigerClaw scenario for the BlueForce. On the other hand, the RedForce aviation units actually see an increase in remaining health (77%). Other groups of the BlueForce also see a noticeable decrease in their health metrics. Similarly in the NTC scenario, the health remaining for the RedForce increases greatly under attack (Figure 13d), while the BlueForce health left only decreases marginally. Observing multiple rollouts reveals that the attacks cause the BlueForce units to get misdirected and, as a result, they do not get into conflict with the RedForce. This is also supported by the casualty metrics for the two teams (Figure 13a), with the BlueForce only having a slightly higher number of casualties when compared to the benign case. 
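The comparison of benign versus subverted action frequencies (as in Figure 10) can be reproduced with a few lines of standard Python; the sketch assumes rollout logs are simply lists of sampled action tuples such as (1, 2, 1).

from collections import Counter

def normalized_action_frequencies(actions):
    # Map each action tuple to its relative frequency within the logged rollouts.
    counts = Counter(actions)
    total = sum(counts.values())
    return {action: count / total for action, count in counts.items()}

def compare_action_distributions(benign_actions, subverted_actions):
    # Side-by-side normalized frequencies for benign vs. attacked rollouts.
    benign = normalized_action_frequencies(benign_actions)
    attacked = normalized_action_frequencies(subverted_actions)
    all_actions = sorted(set(benign) | set(attacked))
    return {a: (benign.get(a, 0.0), attacked.get(a, 0.0)) for a in all_actions}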
Finally, the attack causes a sharp drop in the number of games that end with higher health levels for the BlueForce than the RedForce (termed a partial win). This can be seen in Figures 12b and 13b. Figure 13: Comparing the impact of the inference time attack on additional game metrics for a PPO agent in NTC \u2013 (a) casualty comparison, (b) BlueForce partial win %, (c) BlueForce health left, and (d) RedForce health left. The results are aggregated over 100 rollouts. 5.2.3 Strength & Reliability of the Attack For small perturbations, the attack is not very reliable, as evidenced by the large dispersion in attained reward. High variance can be observed in Figures 6a, 6c, 7a, and 7c where we plot the (EMA smoothed) episode reward over 100 episodes. This can be explained partly by the stochastic nature of action sampling and the reward structure associated with the map. This dispersion is reduced when increasing \u03b5, leading to greater reliability but trading off the secrecy of the attack. A surprising instance is that of an A3C/TigerClaw agent where the attack remains ineffective even for larger perturbations. We analyze this in more detail below. Attacking the A3C/TigerClaw agent: As can be seen in Figure 7b, the attack appears to fail on the C2 agent trained using A3C on the TigerClaw map. Even for large perturbations (\u03b5 > 0.1), the attack fails to degrade the agent\u2019s reward as extensively as the other cases. Interestingly, we note that even in a benign environment, the A3C/TigerClaw agent achieves a significantly lower reward than the PPO/TigerClaw agent. First, to rule out the training algorithm as a potential factor and for a fair comparison, we partially train a PPO agent on the TigerClaw scenario until it achieves a similar reward to the A3C agent and study the rewards achieved by this agent in the presence of an attacker. Our observations are shown in Figure 14. Figure 14: Comparing the effect of the inference time attack on the A3C agent with the partially trained PPO agent (PPO-partial), which was trained for 1M timesteps \u2013 (a) episode rewards of the agent under benign conditions and (b) PPO-partial/TigerClaw reward trend w.r.t. \u03b5. In Figure 14a we see that the partially trained PPO agent (PPO-partial) gets similar rewards to the A3C agent over 100 rollouts. Notably, its reward trend when attacked (Figure 14b) is comparable to that of the A3C agent (Figure 7b). This empirically shows that the attack is largely unaffected by the training algorithm. Coupled with the previous observation, we hypothesize that the effectiveness of the attack is correlated with the quality of the trained agent. Figure 15: Action distribution shifts over 5 episodes for (a) A3C/TigerClaw and (b) PPO-Partial/TigerClaw. To test this hypothesis in greater detail we turn to the frequencies of actions taken by both agents over multiple rollouts. Similar to the analysis in previous sections, to visualize the shift induced by the attack we plot the subverted and actual (\u03b5 = 0.1) actions sampled over 5 episodes in Figure 15 for both agents. Surprisingly, we observe an almost complete overlap of the two action plots for the A3C agent (Figure 15a) and only a marginal difference for the PPO-partial agent. This indicates that the attack is in most cases incapable of flipping the actions sampled by the agent. 
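The degree of "overlap" between the benign and attacked action distributions can be quantified; total variation distance is one simple choice (our illustration, not a metric used in the paper), applied to the normalized frequency dictionaries from the earlier sketch.

def total_variation_distance(freqs_a, freqs_b):
    # 0.0 means identical action distributions (complete overlap); 1.0 means disjoint support.
    support = set(freqs_a) | set(freqs_b)
    return 0.5 * sum(abs(freqs_a.get(a, 0.0) - freqs_b.get(a, 0.0)) for a in support)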
As the malicious perturbations are constructed using a loss gradient, we try to understand the loss landscape of the poorly trained policy networks to explain the attack\u2019s ineffectiveness. To do so, we plot the loss value for 10^4 randomly sampled points in the l\u221e-ball of \u03b5 radius around a fixed observation Oc. The loss is computed as the component-wise sum of the CE loss between the predicted action distribution on that observation and a fixed ground-truth action distribution. The ground truth action distribution is computed corresponding to the actual action taken by the agent on Oc (Section 4.2). The CE loss is calculated for each output component (action logit, x-value, y-value) and summed to get the final loss. Figure 16: Component-wise impact of the input on mean episode reward \u2013 (a) PPO/TigerClaw, (b) PPO/NTC, (c) A3C/TigerClaw, and (d) A3C/NTC agent mean reward comparisons. Figure 17: Loss value plots for the agent policy networks; the y-axis represents the normalized frequencies (10^4 trials) for the loss value on the x-axis. Figure 17 shows the results for \u03b5 = 0.1 for both the A3C and PPO-Partial agents when compared to the PPO/TigerClaw agent as a baseline. Compared to the PPO agent, both the A3C and PPO-Partial agents\u2019 predictions are highly similar to the ground truth, as evidenced by the highly frequent \u223c0 loss value. This corresponds to a flatness in the prediction space of both (A3C and PPO-Partial) policy networks where inputs in the neighbourhood result in the same prediction. Consequently, this leads to greater robustness to injected noise \u2013 benign and adversarial \u2013 as small perturbations are not enough to significantly change the predicted action distributions. Investigating the relation between this perceived robustness and the quality of training is left as future work. 5.2.4 Component-wise Impact of Input on the Attack In this section we analyze the impact of the screen and nonspatial components on the effectiveness of the attack. To do so, we perform the attack by restricting malicious perturbations to only the screen or nonspatial component, respectively. We track the mean reward achieved by the C2 agent in each case and compare them to the baseline attack which perturbs both components. We record the rewards gained over 100 rollouts and show the results in Figure 16. Figures 16a, 16b, and 16d indicate that for smaller perturbation levels (\u03b5 < 0.5), modifications to the screen component are primarily responsible for a successful attack. Further, we only see a minor degradation in the reward for small nonspatial-only perturbations in both scenarios. On the other hand, the drop in mean reward because of screen-only perturbations is similar to that in the baseline attack. Intuitively, this skewed importance makes sense as the screen component is a much larger input component (256\u00d7256 versus 287) when compared to the nonspatial component. The screen component further encodes important spatial information on the terrain and location of troops that has a significant impact on the policy network\u2019s prediction. We also note the anomalous trends in Figure 16c for the A3C/TigerClaw agents. It appears that the nonspatial component is more important to the policy network; however, since this agent is poorly trained, we cannot conclusively reason about the resulting trends. 
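A sketch of the loss-landscape probe behind Figure 17 is shown below, again assuming a PyTorch policy network that takes a single observation tensor and returns concatenated logits; for brevity it uses one cross-entropy over the concatenated output rather than the per-component sum used in the paper.

import torch
import torch.nn.functional as F

def sample_loss_landscape(policy_net, obs, eps=0.1, n_samples=10_000):
    # Histogram of loss values at random points in the l-infinity ball of radius eps around obs.
    with torch.no_grad():
        # Fixed ground truth: the action actually taken on the clean observation.
        target = policy_net(obs).argmax(dim=-1)
        losses = []
        for _ in range(n_samples):
            noise = torch.empty_like(obs).uniform_(-eps, eps)
            logits = policy_net(obs + noise)
            losses.append(F.cross_entropy(logits, target).item())
    # A spike of near-zero losses indicates a flat prediction neighbourhood,
    # i.e. the network's output barely changes under small perturbations.
    return losses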
5.2.5 Adversarial Robustness of the Training Algorithm To compare the robustness of an agent trained using PPO to one trained using A3C, we compare the trend followed by the mean relative reward (Rr), defined as the ratio of the mean reward obtained by an agent under attack (Ra) to the mean reward earned in a benign environment (Rb). That is, Rr = Ra/Rb. A larger value of Rr signifies more robustness. Figure 18 presents the results of our comparisons. Figure 18: Comparison of algorithm robustness \u2013 mean relative reward with respect to the perturbation budget for (a) PPO vs A3C on TigerClaw and (b) PPO vs A3C on NTC. In both cases, we see that the A3C agent seems to perform marginally better than the PPO agent for small perturbations and worse for higher perturbation levels. As in the previous section, for a fair comparison we compare the partially trained PPO agent (PPO-partial) with the A3C agent (Figure 18a) and see a similar decreasing trend for Rr. While we cannot conclusively comment on the robustness of A3C vs PPO, the robustness of the PPO-partial agent compared to a fully trained PPO agent makes for a compelling argument; this has been explored in more detail in the preceding section. It should be noted, however, that PPO does seem to be functionally more performant than A3C, as evidenced by the higher rewards in both scenarios. 6 Discussions and Future Work In this work we have focused on evaluating the robustness of RL agents used for C2 through the lens of an inference time attacker. Our investigations, supported by existing literature, reveal that even the latest RL training algorithms cannot be trusted to train agents that can be reliably deployed in unsafe environments. Our evaluations show that well-trained agents are highly sensitive to even minute perturbations in their input space and act suboptimally as a result. In an arena such as the battlefield, where observations received can be prone to noise, either benign or malicious, this raises serious questions about the use of such agents. This directly leads us to two avenues for future work. The first is developing explainable and controllable approaches to COA generation using RL training and developing robustness mechanisms for RL agents that work both during inference and training time. Such mechanisms can be a combination of adversarial perturbation detection and prevention mechanisms that can be deployed on top of pre-existing agents. The second avenue is that existing training algorithms can be augmented to train agents that are certifiably robust to malicious noise. In this vein, adversarial training offers a promising alternative to train robust agents. While we present a preliminary study relating the flatness of the loss landscape of the policy network to its apparent robustness, further research is required to quantify the susceptibility of a policy network to noise. 7 Conclusions Our evaluations reveal the fragile nature of vanilla RL agents trained for C2 when deployed in insecure environments, where even minute perturbations to the input introduced by a malicious actor are sufficient to introduce a large variability in the agent\u2019s prediction. We analyze this susceptibility of a C2 agent when commanding the BlueForce in two custom scenarios and discuss the reasons behind such behavior and the implications from a strategic perspective. 
Finally, we emphasize the need to develop robust training algorithms for RL which would be critical for reliable mission planning on battlefields of the future. Funding This material is based in part upon work supported by the Army Research Lab (ARL) under Contract number W911NF2020-221. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors."
16
+ }
intro_8K/test_introduction_short_2405.01705v1.json ADDED
@@ -0,0 +1,18 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01705v1",
3
+ "title": "Long Tail Image Generation Through Feature Space Augmentation and Iterated Learning",
4
+ "abstract": "Image and multimodal machine learning tasks are very challenging to solve in\nthe case of poorly distributed data. In particular, data availability and\nprivacy restrictions exacerbate these hurdles in the medical domain. The state\nof the art in image generation quality is held by Latent Diffusion models,\nmaking them prime candidates for tackling this problem. However, a few key\nissues still need to be solved, such as the difficulty in generating data from\nunder-represented classes and a slow inference process. To mitigate these\nissues, we propose a new method for image augmentation in long-tailed data\nbased on leveraging the rich latent space of pre-trained Stable Diffusion\nModels. We create a modified separable latent space to mix head and tail class\nexamples. We build this space via Iterated Learning of underlying sparsified\nembeddings, which we apply to task-specific saliency maps via a K-NN approach.\nCode is available at\nhttps://github.com/SugarFreeManatee/Feature-Space-Augmentation-and-Iterated-Learning",
5
+ "authors": "Rafael Elberg, Denis Parra, Mircea Petrache",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV",
11
+ "cs.AI",
12
+ "I.4; I.2"
13
+ ],
14
+ "label": "Original Paper",
15
+ "paper_cat": "Diffusion AND Model",
16
+ "gt": "With the rise of large multimodal models [10, 22, 33] and Latent Diffusion models [26, 30], both image analysis and generation tasks have become dependent on the availability and quality of balanced training data [3]. Due to the representation capabilities of these large models with several million, or even billions of parameters, volumes of data in similar magnitudes are required to avoid issues such as overfitting or learning undesirable biases. Obtaining extensive and well-distributed data is not an option for several essential domains. For example, for the analysis and generation of medical images, obtaining data can be complex (as they are subject to patient confidentiality) and expensive (as they are tied to real-world medical procedures and imaging exams) [11]. Moreover, it is often impossible to obtain large, well-distributed samples of images corresponding to certain diseases and anomalies that, while relevant, do not occur frequently [12]. Two avenues for solving the lack of data in these domains are resampling and data augmentation. The former consists of artificially oversampling examples from low-frequency data while sometimes reducing the number of samples from high-frequency data. The latter consists of generating synthetic data for under-represented classes to even out the distribution of a dataset. Resampling techniques have been used with relative success in several long-tailed problems, but can introduce unwanted biases into downstream tasks [31] and often lead to overfitting [18]. Data augmentation is the natural response to these issues. It represents a booming area of research comprising several different families of algorithms, such as geometric transformations (rotations, scaling, cropping, etc.), creation of synthetic samples, mixing-based methods [2, 7, 35], domain translation-based methods [38], and generative methods [23, 24]. We propose a new data augmentation method that manipulates latent space representations of images from pre-trained diffusion models, thereby generating new images to augment under-represented classes. Specific features of the data are selected via activation maps, which are then combined to produce images similar to the ones from actual data belonging to long tail classes. The combination of latent space representations is challenging to perform through naive methods due to interference phenomena between the post-processing of the features. We tackle this issue as a problem of compositional generalization and apply the framework of iterated learning (IL) [28] with sparsified embeddings to our target data augmentation framework. The main inspiration of IL comes from models of cultural evolution [19] in which iterations of teacher-student interactions encourage useful compression and the formation of a \u201cshared language\u201d adapted to a task [4, 20, 28]. In particular, recently [29] obtained favorable results related to composing distinct features when using sparsified state spaces, with a sparsification method called Simplicial Embedding (SE) [21]. The concept entails that if, in an IL iteration, the \u201cteacher\u201d version of the model is obliged to propagate a sparsified version of latent vectors to a new \u201cstudent\u201d version before training, it will impose an information bottleneck across iterations, resulting in significant improvements at downstream tasks that require compositional reasoning. 
In short, we apply a new version of the IL+SE method to map the already rich latent space of a pre-trained Stable Diffusion Model [30] using task-specific activation masks [8], and we propose mixing existing points in this sparse latent space to achieve better fusion at the feature level.",
17
+ "main_content": "As previously mentioned, several data augmentation techniques have been used recently; see [34] for a review. Of the methods described, the most relevant to this work are Image Mixing and Deep Generative Model approaches. 2.1. Image mixing Image mixing generally consists of creating new data points by combining two or more existing ones. MixUp [36] and SMOTE [7] utilize convex combinations of existing data to create new samples, with SMOTE selecting same-class neighbors as pairs. CutMix [35] randomly samples from the base dataset and removes patches from an image to replace with a patch from another. These methods can also be performed in the feature space. The main problems of these methods are that they fail to produce novel, realistic samples and lack consistency in preserving labels before and after augmentation. We aim to solve the first issue by leveraging the latent space of a pre-trained Stable Diffusion model. For the second issue, we attempt to solve the problem using saliency methods and a separable sparse latent space. 2.2. Deep Generative Models Approaches based on generative models sample new training examples through model inference. GANs [14] are the most popular generative framework for data augmentation due to their fast inference and realistic generation [5]. However, GANs are widely known to be unstable at training time, and are prone to mode collapse [37]. Latent Diffusion Models [30] have long surpassed GANs in image generation quality [9], and off-the-shelf models have been used to significant effect in data augmentation [27, 32]. However, Diffusion Models generally suffer from slow inference speed and quality degeneration with long-tailed data [27]. To address these issues, we work within a modified sparse version of the rich latent space defined by a trained Stable Diffusion Model, to mix existing data points instead of generating new ones from scratch. By doing this, we can create high-quality samples with very few (if any) diffusion inference steps. Figure 1. Proposed method. Stage 1 (Iterated training): iteratively train a student network Si to imitate a frozen teacher network Ti, which corresponds to the student network of the previous iteration Si\u22121, in mapping the original latent vectors Z to a semantically separable sparse domain Zs. Also, jointly train said student with a classifier C and a decoder D to classify and map vectors from the sparse domain back to the original domain. Stage 2 (CAM generation): use EigenCAM to generate class activation maps (M_i for classes i in [1, k]) for each vector, using the classifier trained in Stage 1. Stage 3 (Inference): find a head-class near neighbor Z^s_h for each tail class vector Z^s_t, and combine them using their respective Class Activation Maps (CAM) as masks, taking the top activations from the tail vector and the bottom activations from the head vector. Finally, combine these activations and pass them through D to generate a new tail class vector. 3. Method Our proposed method, shown in Figure 1, consists of three stages: (i) iterated training, (ii) class activation map generation, and (iii) inference. Note that each stage is applied to the latent vectors produced by a pre-trained Latent Diffusion Model, and not to the images themselves. 3.1. Iterated training In this stage, we learn a translation from the diffusion latent space to a sparse high dimensional representation [21], while training a convolutional classifier for this space. 
To do this, we implement a student-teacher [1] training regime, which iteratively runs through two phases: imitation and interaction. We train a student network S_i : \u03b8 \u00d7 Z \u2192 Z^s_S during the imitation phase from scratch. Figure 2. Fusion process applied to an image from the tail class Tortuous Aorta (a.1) and one of its neighbor images from the head class Atelectasis (b.1). (a.2) and (b.2) are channel-wise Maximum Intensity Projections of the sparse vectors obtained from (a.1) and (b.1), respectively. In (a.3) and (b.3), we use EigenCAM to find attention maps for each sparse vector and define binary masks (yellow is one and dark purple is zero) using \u03c4h = \u03c4l = 0.4 as thresholds. We combine the masked sparse vectors into (c) and decode the vector into a fused image (d). Finally, we apply five inference steps in (e) to obtain a less noisy image. This network transforms latent vectors Z \u2286 R^{H\u00d7W\u00d7C} into sparse high dimensional vectors Z^s_S \u2286 R^{H\u00d7W\u00d7C\u2032}, with C < C\u2032. We enforce sparsity by applying a channel-wise softmax to these output vectors, as follows: z^S_{ijk} := exp(S_i(z_{ijk})) / \u03a3_{\u02c6k=1}^{C\u2032} exp(S_i(z_{ij\u02c6k})). (1) We train S_i to imitate a teacher network T_i, which is the student of the previous iteration, T_i = S_{i\u22121}. Instead of imitating T_i directly, we use the channel-wise probabilities coded in T_i(z^S) to sample a binary vector z^S_b, where each spatial coordinate is a one-hot vector of dimension C\u2032. Minimizing the imitation loss then becomes a multilabel classification problem: L_I = BCE(z^S_b, S_i(z^S)). (2) During the interaction phase, we minimize losses related to two other networks. First, we aim to make the sparse vectors good representations of the original latent vectors. To this end, we jointly train a network D : \u03b8 \u00d7 Z\u2032 \u2192 Z with S_i, to minimize the reconstruction loss: L_R = MSE(z, D(S_i(z^s))). (3) Our objective entails ensuring that the sparse vectors are easily separable in their respective image classes; with this in mind, we train a classifier C : \u03b8 \u00d7 X\u2032 \u2192 Y to minimize the multilabel classification loss: L_C = BCE(y, C(S_i(z))). (4) We couple the two losses via a new hyperparameter \u03bb, and the interaction phase translates into the optimization: min_{\u03b8_{S_i}, \u03b8_D, \u03b8_C} E_{z,y \u2208 Z} [\u03bb L_R + (1 \u2212 \u03bb) L_C]. (5) 3.2. Class Activation Map Generation In this stage, we use the classifier C from the previous stage to generate simple and interpretable activation maps for each class c \u2208 K, to select relevant or non-relevant coordinates for classification as c. First, based on the classifier C, we define class activation maps M_c \u2286 {0, 1}^{H\u00d7W} for every sparse vector z^s \u2208 Z^s and every class c \u2208 K, where M_c \u2248 1 defines which spatial coordinates of the sparse vectors are important for classification as class c. The CAMs are generated with EigenCAM [25], which is adapted to the task because it does not require correct classification to generate attention maps. Next, following [8], we separate class-specific and class-generic features by a threshold method, as follows. With upper threshold value \u03c4_h < 1 and lower value \u03c4_l > 0, set M^S_c := sgn(M_c \u2212 \u03c4_h), M^G_c := 1 \u2212 sgn(\u03c4_l \u2212 M_c), (6) whose interpretation is that M^S_c contains coordinates of z^s relevant to classify the vector as c, while M^G_c contains non-relevant coordinates. 
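A condensed PyTorch-style sketch of the imitation and interaction losses in Eqs. (1)-(5) follows; the module names, tensor shapes, and the use of BCE-with-logits for the classifier are illustrative assumptions rather than the authors' exact implementation.

import torch
import torch.nn.functional as F

def iterated_learning_step(student, teacher, decoder, classifier, z, y, lam=0.5):
    # z: diffusion latents (B, C, H, W); y: multi-hot class labels (B, K) as floats.
    # Channel-wise softmax yields the sparse simplicial embedding of Eq. (1).
    student_probs = F.softmax(student(z), dim=1)                     # (B, C', H, W)

    # Imitation phase (Eq. 2): sample a one-hot vector per spatial coordinate from
    # the frozen teacher's channel-wise probabilities and fit it with BCE.
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(z), dim=1)
        idx = torch.distributions.Categorical(
            probs=teacher_probs.permute(0, 2, 3, 1)).sample()        # (B, H, W)
        z_binary = F.one_hot(idx, teacher_probs.shape[1]).permute(0, 3, 1, 2).float()
    imitation_loss = F.binary_cross_entropy(student_probs, z_binary)

    # Interaction phase: reconstruction back to the latent space (Eq. 3) and
    # multi-label classification of the sparse vectors (Eq. 4).
    recon_loss = F.mse_loss(decoder(student_probs), z)
    cls_loss = F.binary_cross_entropy_with_logits(classifier(student_probs), y)

    # Eq. (5): convex combination controlled by the hyperparameter lambda.
    interaction_loss = lam * recon_loss + (1.0 - lam) * cls_loss
    return imitation_loss, interaction_loss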
3.3. Inference In this stage, we generate new samples from tail classes. Let H and T be sets of head and tail classes. We first find the head class Hc of highest confusion for a given tail class, provided by the highest estimated value index in the C classifier prediction for the head classes. We then sample an element for each tail class example from their highest confusion head class. A possible naive approach would have been to create a fused vector by adding the class-specific tail features with the class-generic head features, using their respective masks. However, note that the class-specific mask for the first vector and the class-generic mask for the second vector might overlap or leave empty spaces in the final vector. For these uncovered and overlapping parts of the vectors, it is not clear how to update their values in an interpretable manner. To solve this, whenever both masks M^S_c, M^G_c are 0 or both are 1, we choose which vector to use at random, introducing a random mask M_R. The values of the maps also need to be broadcast across C\u2032 channels as follows: for M \u2208 {0, 1}^{H\u00d7W}, let M \u2208 {0, 1}^{H\u00d7W\u00d7C\u2032} be the tensor given by repeating C\u2032 times the values of M. Then, define M_R \u223c [Unif({0, 1})]^{H\u00d7W}, (7) z^s_R := z^s_t \u2299 M_R + z^s_h \u2299 (1 \u2212 M_R), (8) ~z^s_R := z^s_R \u2299 (1 \u2212 M^G_c \u2212 M^S_c + 2 M^G_c \u2299 M^S_c), (9) ~z^s_t := z^s_t \u2299 (M^S_c \u2299 (1 \u2212 M^G_c)), (10) ~z^s_h := z^s_h \u2299 (M^G_c \u2299 (1 \u2212 M^S_c)), (11) z^s_F := ~z^s_h + ~z^s_t + ~z^s_R. (12) Having defined the sparse fusion vector z^s_F, we translate it back into the base latent space Z using the decoder network D trained in stage 1: z_F := D(z^s_F). (13) Our results suggest that, for fast data augmentation, this approach tends to suffice; however, for high-fidelity image generation, an extra step is added, as this vector does not necessarily lie within the pre-trained VAE domain. Using the frozen pre-trained UNet U\u2217 from the Stable Diffusion Model, conditioned on the class names associated with the tail class image, the vector can be translated into the VAE domain. We take N/d denoising steps, where N is the number of inference steps in the original Stable Diffusion model, and d \u2208 [1, N]. 4. Results The method was tested by generating tail classes of a sampled version of MIMIC-CXR-LT 2023 [13, 16], a multi-label, multi-class long-tailed dataset of chest X-rays. Our reduced sample version has only five head classes and five tail classes. We compare our method with SMOTE [7] and RoentGEN [6], which is used as our Stable Diffusion Model, on image generation quality using FID [15]. We also evaluate using no extra inference steps, one step, and five steps. We train a Densenet121 [17] model using each augmented dataset to classify our sample train set and test it on a separate test set from the same challenge. We evaluate the Mean Average Precision for head and tail classes for each described method and compare them with a classifier trained on our unaugmented sample set. As Tab. 1 shows, we obtain lower image quality than the other methods when using 0 and 1 inference steps. However, we get a lower FID with only five inference steps than RoentGEN using 75 inference steps. Unexpectedly, all tested augmentation methods have worse results when used to augment a classifier than the baseline. 
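The fusion step of Eqs. (7)-(13) described above amounts to a few masked element-wise operations; the sketch below assumes single (C', H, W) sparse tensors and binary (H, W) CAM masks, with function and variable names chosen for illustration.

import torch

def fuse_sparse_vectors(z_tail, z_head, m_specific, m_generic, decoder):
    # z_tail, z_head: sparse vectors (C', H, W); m_specific, m_generic: binary masks (H, W).
    c_prime = z_tail.shape[0]
    # Broadcast the (H, W) masks across the C' channels.
    m_s = m_specific.unsqueeze(0).expand(c_prime, -1, -1).float()
    m_g = m_generic.unsqueeze(0).expand(c_prime, -1, -1).float()
    # Random tie-breaking mask for coordinates where both masks agree (Eq. 7).
    m_r = torch.randint(0, 2, m_specific.shape).unsqueeze(0).expand(c_prime, -1, -1).float()

    z_r = z_tail * m_r + z_head * (1.0 - m_r)                 # Eq. (8)
    z_r_tilde = z_r * (1.0 - m_g - m_s + 2.0 * m_g * m_s)     # Eq. (9)
    z_t_tilde = z_tail * (m_s * (1.0 - m_g))                  # Eq. (10)
    z_h_tilde = z_head * (m_g * (1.0 - m_s))                  # Eq. (11)
    z_fused = z_h_tilde + z_t_tilde + z_r_tilde               # Eq. (12)

    # Eq. (13): decode the sparse fusion vector back to the base latent space;
    # a few diffusion denoising steps can then be applied for higher fidelity.
    return decoder(z_fused.unsqueeze(0))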
Table 1. Results measuring FID [15] and Mean Average Precision for tail and head classes. We use 75 inference steps for RoentGEN, following [6], and 0, 1, and 5 steps for our method.
Model | Avg Tail FID \u2193 | Head mAP \u2191 | Tail mAP \u2191
Baseline | \u2013 | 0.618 | 0.155
SMOTE | 171.864 | 0.578 | 0.151
RoentGEN@75 | 138.963 | 0.618 | 0.152
Ours@0 | 191.873 | 0.607 | 0.152
Ours@1 | 191.646 | 0.595 | 0.143
Ours@5 | 130.110 | 0.595 | 0.144
One possible explanation for this behavior is how labels are assigned to new data: as a multi-label classification problem, our approach assigns labels from the corresponding head and tail images to the fused latent vectors. However, this may introduce biases that lead to misclassification of real samples. Furthermore, despite the good FID performance, our method performs noticeably worse in the classification task (mAP). This might be a consequence of the diffusion process, which does not guarantee maintaining the existing labels in the fused latent vector. 5. Conclusion In this work, we present a novel method for data augmentation and data generation on long-tailed datasets. By leveraging pre-trained Latent Diffusion Models, compositional learning, and saliency methods, we generate new examples of underrepresented classes. We give a detailed mathematical description of our method and run experiments on image generation and data augmentation in the medical domain for multi-label classification using a small subset of MIMIC-CXR-LT [13, 16]. Using the Latent Diffusion Model RoentGEN [6] to run five inference steps on our generated vectors, we obtain competitive results in image generation quality. Counterintuitively, we found that using more inference steps negatively impacts the downstream image classification when using our augmented data. This might be a consequence of our approach not necessarily maintaining data labels throughout the diffusion process. In future work, we plan to experiment with larger datasets and different techniques for assigning labels and compare our results with a broader range of generation and augmentation methods across a more comprehensive range of tasks. 6. Acknowledgements This work was supported by ANID Chile, Fondecyt Regular grant 1231724, as well as research centers of excellence with code FB210017 (Basal CENIA), ICN2021 004 (Millenium iHealth), and ICN17 002 (Millenium IMFD)."
18
+ }
intro_8K/test_introduction_short_2405.01724v1.json ADDED
@@ -0,0 +1,19 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01724v1",
3
+ "title": "Large Language Models are Inconsistent and Biased Evaluators",
4
+ "abstract": "The zero-shot capability of Large Language Models (LLMs) has enabled highly\nflexible, reference-free metrics for various tasks, making LLM evaluators\ncommon tools in NLP. However, the robustness of these LLM evaluators remains\nrelatively understudied; existing work mainly pursued optimal performance in\nterms of correlating LLM scores with human expert scores. In this paper, we\nconduct a series of analyses using the SummEval dataset and confirm that LLMs\nare biased evaluators as they: (1) exhibit familiarity bias-a preference for\ntext with lower perplexity, (2) show skewed and biased distributions of\nratings, and (3) experience anchoring effects for multi-attribute judgments. We\nalso found that LLMs are inconsistent evaluators, showing low \"inter-sample\"\nagreement and sensitivity to prompt differences that are insignificant to human\nunderstanding of text quality. Furthermore, we share recipes for configuring\nLLM evaluators to mitigate these limitations. Experimental results on the RoSE\ndataset demonstrate improvements over the state-of-the-art LLM evaluators.",
5
+ "authors": "Rickard Stureborg, Dimitris Alikaniotis, Yoshi Suhara",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "cs.CL",
9
+ "cats": [
10
+ "cs.CL",
11
+ "cs.AI",
12
+ "68T50 (Primary) 68T01, 68T37, 91F20 (Secondary)",
13
+ "I.2; I.2.7; I.7"
14
+ ],
15
+ "label": "Original Paper",
16
+ "paper_cat": "LLM Fairness",
17
+ "gt": "The advancement of NLP research has relied much on automatic evaluation to conduct quantitative analysis by comparing proposed and existing solutions for shared problems. The use cases for automatic evaluation are extensive, but most famously include text generation tasks such as text summarization and machine translation, with classic evaluation metrics, including the family of ROUGE (Lin, 2004) and BLEU (Papineni et al., 2002) scores, still widely in use today. A core limitation of automatic evaluation is in developing new metrics and scaling them beyond limited benchmark datasets, primarily due to their common reliance on reference outputs. While there is a line of work on reference-free automatic evaluation metrics, such metrics are known to be less reliable than the current reference-based metrics (Fabbri et al., 2021; Deutsch et al., 2022). Large Language Models (LLMs) have proven useful in this domain due to their demonstrated high natural language understanding abilities and performance at adhering to instructions. Furthermore, with the powerful zero-shot capability, LLMs do not require reference texts and can generate scores directly from the system output. This has led to great interest in developing LLM-based automatic evaluation metrics (Zheng et al., 2023b; Fu et al., 2023; Lin and Chen, 2023; Chiang and Lee, 2023; Chen et al., 2023; Wang et al., 2023a; Liu et al., 2023a; Gao et al., 2023; Shen et al., 2023; Luo et al., 2023; Chan et al., 2023); LLM evaluators (also known as LLM-as-a-judge) have become part of automatic evaluation for commonly used benchmarks for a variety of NLP tasks (Li et al., 2024; Huang et al., 2024), including LLM benchmarks such as MT-Bench (Zheng et al., 2023b). However, little is known about the robustness of these LLM evaluators. A few studies have looked deeper into this point (Wang et al., 2023b; Zheng et al., 2023b; Liu et al., 2023c; Li et al., 2024); there is a need for further analysis into potential risks and failure points when using them, especially if used in sensitive applications. Therefore, in this paper, we aim to study two important characteristics of the LLM evaluator, namely bias and consistency, in order to understand and share the limitations of LLM evaluators. To this end, we conduct extensive experiments using GPT-3.5 and GPT-4, which are commonly used as LLM evaluators, with various prompts and generation configurations on the summarization evaluation benchmarks SummEval and RoSE. In this paper, we quantitatively analyze biases in LLM evaluators, while linking the biased behaviors with those of humans. First, we use the perplexity as a familiarity metric and analyze the relationship between the average perplexity and each rating returned by the LLM evaluator. We show that the average perplexity follows a descending trend as the score increases. The results support that LLM evaluators have familiarity bias (Zajonc, 1968)\u2014LLM evaluators tend to develop a preference for texts simply because they are familiar with them. Second, we explore scoring granularity and report that LLM evaluators exhibit score biases, including round number bias (Thomas and Morwitz, 2009), assigning some scores more frequently than others. Third, we report that LLM evaluators experience anchoring effects (Tversky and Kahneman, 1974) when multiple labels are predicted in one output. 
Then, we analyze the consistency of the LLM evaluator and show that LLM evaluators significantly change their judgments for different samples, demonstrating significantly lower inter-sample agreement than human experts\u2019 inter-annotator agreement. We also analyze LLM evaluators\u2019 inconsistent behaviors by changing the prompt configuration in ways that should not affect the judgment. While analyzing these issues, we compiled our findings into a set of recipes for LLM evaluators. We used these recipes to develop our new LLM evaluator and compared it with two existing LLM evaluators for text summarization. Experimental results on the RoSE dataset (Liu et al., 2023d) show that our new LLM evaluator statistically significantly improves upon the state-of-the-art.",
18
+ "main_content": "Analysis and results in this paper are the result of more than 560,000 generated outputs by LLMs. 2.1 Datasets To investigate the performance of LLM-based evaluators, we test predictions on two main datasets. We use SummEval (Fabbri et al., 2021) as our development set, perform extensive analyses of LLM-based evaluators on this set, and then use RoSE (Liu et al., 2023d) as an evaluation set for our case study comparing our system with the current SOTA LLM evaluator for summarization. 2.1.1 SummEval Introduced by Fabbri et al. (2021), SummEval is a dataset of human annotated evaluations for automatically produced summaries for CNN/Daily Mail news articles. The dataset annotates summaries on four dimensions: Coherence (collective quality of sentences in the summary), Consistency (factual alignment with the source), Fluency (quality of the individual sentences), and Relevance (well-selected content). The dataset includes expert human judgments for 16 summaries produced by varying models on 100 articles over these four dimensions. 2.1.2 RoSE RoSE (Liu et al., 2023d) is a benchmark of three datasets covering common summarization datasets: CNN/Daily Mail News articles (Nallapati et al., 2016), SAMSum dataset on chat dialogues (Gliwa et al., 2019), and XSum containing extremely short abstractive summaries of text documents (Narayan et al., 2018). Annotations for RoSE are done to record recall of \u201cAtomic Content Units (ACU)\u201d, which is a recall-like metric measuring how many of the atomic facts displayed within an article were captured by the summary. We choose this benchmark due to its target labels very unlikely inclusion in any OpenAI model training given the time of its release, the high quality labels they achieve through a novel method for multi-stage annotation, and three domains to stress test our system on. 2.1.3 Models We run our experiments in the analysis on a mix of GPT-3.5 (gpt-3.5-turbo-0301) and GPT-4 (gpt4-0613). GPT-4 consistently outperforms GPT-3.5Turbo. For the eventual test evaluation reported in Section 4 on RoSE, we run previous work and our own approach using GPT-4-Turbo. Perplexity calculations are done using text-davinci-003 to match the LLM evaluator models as close as possible. We report our values against our own implementation of G-Eval to limit any potential differences in performance due to changes by OpenAI. 2.1.4 Prompts Following Stureborg et al. (2024), we use a slight variations on a prompt derived from Liu et al. (2023b) to prompt LLMs for scores. The full prompt we use is shown in Figure 1 and Figure 8. This prompt takes five input strings: metric, metric_definition, aspects, article, and summary. We replace metric with a name describing what dimension of analysis to focus on. For SummEval, this is replaced with the string \u2018Coherence\u2019 to investigate the first label, for example. Further, metric_definition is replaced with a written explanation of what the metric is You are the automatic summary evaluator of a writing editor: You consider an input document and a corresponding summary You evaluate the summary according to one important quality: 1. {{metric}} (1-10) {{metric_definition}} All ratings are between 1-10 where 1 is very poor and 10 is very good. Your evaluation should be critical and careful, and should closely match the ratings of experts. This evaluation is very important. 
Consider these aspects when evaluating: {{aspects}} The user will give you both the article (document) and summary, and prompt you to provide an evaluation. Respond with your integer 1-10 score first, then a rationale. Example: Figure 1: System text input for prompting chat-based LLMs to generate automatic evaluation scores in text summarization. This prompting strategy is generalized to allow for use of evaluating any metric(s) of interest, whether multiple or just one. meant to indicate, while aspects explains some broader considerations that are helpful in assessing the quality of a summary on this dimension. Finally, article, and summary are replaced with the source document and summary for the models to make a prediction on. 2.2 Evaluation Metrics The goal of automatic evaluation is to provide scores highly correlated with human judgments on the task at hand. In our work, we primarily measure this through Kendall\u2019s \u03c4 correlation on scores produced for each label in SummEval (Coherence, Consistency, Fluency, Relevance), following the convention in other work on automatic evaluation of text summarization. 3 Results and Analysis In this section, we perform extensive analysis into the performance of LLM evaluators, we uncover several issues of bias and inconsistency with these systems, and propose potential solutions. 1 2 3 4 5 Assigned Score 6.0 6.5 7.0 7.5 Mean Perplexity (GPT-3) GPT-4 Experts Figure 2: Average perplexity for each rating by GPT4 and Experts. Summaries are grouped by evaluation scores (as assigned either by Experts or by GPT-4). GPT4 exhibits a disproportionate bias toward low perplexity summaries compared to expert annotators, demonstrating a familiarity bias. 3.1 Familiarity Bias We investigate the bias models have toward low perplexity examples. Summaries are first grouped by evaluation scores (as assigned either by Experts or an LLM evaluator). This group of summaries is held separate for each dimension of analysis in SummEval. Perplexities are then computed with GPT-3 on the summary text, and a mean score is calculated for each group of summaries. Figure 2 shows that GPT-4 is disproportionately biased towards low perplexity summaries as compared with expert annotators. The mean perplexities of summaries assigned high scores (5s) are lower than that for expert raters, while mean perplexities of low assigned scores (1-3) are higher than expert raters. Full results are reported in Table 1. We would like to note that LLM evaluators are even biased by the source document, as LLM evaluators\u2019 ratings are still negatively correlated with the average perplexity of source documents, for which human experts\u2019 ratings show no correlation. As system summaries in the SummEval are genearted by various summarization models and the perplexity of the summaries negatively correlates with the LLM evaluator\u2019s rating, we confirm that we can expand the notion of self-enhancement bias into familiarity bias. 3.2 Scoring Granularity and Score Biases A common scale for scoring is 1-5 (Nemoto and Beglar, 2014). However, when producing scores for automatic evaluation, ties between candidate Avg. perplexity of summary Avg. 
perplexity of source document GPT-4 Human experts GPT-4 Human experts Rating Coh Con Flu Rel Coh Con Flu Rel Coh Con Flu Rel Coh Con Flu Rel 1 \u2013 7.05 \u2013 8.42 7.03 7.47 7.66 7.53 \u2013 7.76 \u2013 8.51 7.21 7.66 7.68 7.94 2 8.15 7.61 7.45 7.53 6.80 7.42 7.71 7.14 8.32 7.86 7.77 7.79 7.67 7.61 8.01 7.53 3 7.60 7.46 7.92 7.33 6.58 7.07 6.96 6.73 8.09 7.69 8.48 7.90 7.73 7.52 7.55 7.67 4 6.44 6.83 6.48 6.44 6.37 6.67 6.96 6.42 7.72 8.00 7.74 7.75 7.81 7.29 7.99 7.81 5 5.34 6.06 6.01 5.51 6.36 6.43 6.39 6.26 6.51 7.44 7.06 6.84 7.63 7.75 7.69 7.58 Table 1: Average Perplexity of Summary and Source documents for each rating by GPT-4/Human experts. examples are often undesirable. To reduce ties, we aim to increase scoring granularity: the distinct number of possible scores for candidate responses. We explore the following methods for increasing granularity: \u2022 1-5 star: Resulting prediction when a model is instructed to provide an integer rating between 1-5 (inclusive). \u2022 1-5 + word modifier: Model is instructed to provide an integer rating between 1-5 along with a single word modifier indicating if it is \u2018strong\u2019 or \u2018weak\u2019. For example, a summary may be rated as a \u201c3\u201d, \u201cweak 5\u201d, or \u201cstrong 4\u201d. To map these ratings to a numerical value, we convert the \u2018strong\u2019 modifier to add 0.33 to the base rating, and \u2018weak\u2019 subtracts 0.33 (similar to grading scales). \u2022 1-5 + float modifier: this score is directly predicting the resulting numerical value from the word modifier. We instruct the model to predict values on a GPA scale (1.0, 1.33, 1.67, 2...). \u2022 1-10 score: instruct model to provide integer ratings between 1-10. \u2022 1-100 score: instruct model to provide integer ratings between 1-100. For each of these cases, we also consider methods of taking a sample average. In this approach, we produce N model responses 1 and average the resulting scores to provide a final float value with a greater granularity without changing the prompt. This approach is similar to the approach outlined in G-Eval (Liu et al., 2023b), where each potential score is multiplied by its token probability to get an expected value score. Since OpenAI does not allow access to log probabilities of their top-end models we instead sample several times at a temperature of 1.0, which approximates the expected value score and maintains an increased granularity. 1For all experiments in this work, we set N=10 to balance reducing variance with avoiding prohibitive cost increases 20 40 60 80 100 Score 0.00 0.05 0.10 0.15 0.20 Frequency Distribution of Assigned Scores in 1-100 Setting Figure 3: Frequencies of each possible score as found in 64,000 predictions using the 1-100 scale. Models sparsely predict scores within the range. Frequencies of some scores, such as 90 and 95, are far higher than \u2018odd\u2019 scores such as 92 or 19, and much of the range is almost entirely ignored (1-60). Interestingly, 1-60 is a range often largely ignored in academic grading scales. This indicates an issue within instruction-following specific to automatic evaluation. Figure 3 shows the distribution of scores produced when instructing GPT-3.5-Turbo and GPT-4 to rate summaries on a 1-100 scale. The scores in this distribution are not respected as intended, and the model assigns outsized probabilities to certain scores such as 90 and 95. This reaffirms results by Zheng et al. 
(2023a) which found that multiplechoice selections by LLMs suffered from similar token biases, deteriorating performance. The full range is also not utilized, with predicted scores largely occurring between 70 and 100. Figure 3 also shows that the score distribution has several peaks for round numbers such as 60, 70, 80, 90 (Similarly for 75, 85, and 95), indicating that LLM evaluators also have round number bias like human.2 To verify which rating scales produce higher quality responses by LLM evaluator frameworks, we run a comparative analysis of the cases men2It is indeed an interesting open question that how LLMs inherit round number bias from human through text written by human. Existing work reported round number bias for human judgments (Coupland, 2011; Honda et al., 2022). Method G Coh Con Flu Rel Avg 1-5 star 5 .332 .362 .325 .337 .339 1-5 avg 41 .422 .370 .356 .439 .397 5 +word mod. 13 .361 .408 .345 .363 .369 5 +word (avg) 121 .394 .364 .316 .419 .373 5 +float mod. 13 .425 .453 .380 .395 .413 5 +float (avg) 121 .416 .378 .334 .438 .392 1-10 score 10 .450 .433 .366 .462 .428 1-10 avg 91 .424 .366 .332 .435 .389 1-100 score 100 .463 .423 .308 .339 .383 1-100 avg 991 .406 .351 .343 .414 .379 Table 2: Correlation with human judgement for GPT4 by method for increased granularity. \u201cG\u201d is the effective granularity (number of unique scores) possible within the given scale. Methods denoted \u201cavg\u201d are a 10-sample average run with temperature 1.0, while all other methods benefited from reducing temperature to 0. It seems that increasing granularity generally helps lowgranularity methods, while high-granularity methods are harmed by increasing granularity. This may be due to the increase in temperature setting. Our results indicate that there may be diminishing returns of increasing scoring granularity. tioned in \u00a73.2. Table 2 shows performance of GPT-4 based evaluators on SummEval under the mentioned rating scales. The performance of 110 score performs best on average, with an average score of 0.428 Kendall\u2019s \u03c4 across the labels in SummEval. This method also performs the best on relevance, at 0.462 Kendall\u2019s \u03c4, while 1-100 scoring performs better on Coherence and the float modification method performs best on both Consistency and Fluency3. Ultimately, increasing scoring granularity is shown to improve performance in our experiments, which should be carefully conducted for the risk of score bias and round number bias. 3.3 Anchoring Effect in Multiple Judgments During evaluation of text, it is often helpful to describe several attributes regarding the text at the same time. For some tasks (such as hierarchical classification (Zhu et al., 2024) or N-ary relation extraction (Cheung et al., 2023)), the large set of target labels and long required contexts make separating annotation into independent generations infeasible; it is cheaper to predict all labels within the same output (Gao et al., 2023). We explore whether doing so is beneficial for the performance of the model, since it could be argued that this is similar 3It is unclear why the results should differ across each dimension, indicating another potential issue with LLM evaluation: hyper-parameters may not be stable across different labels. to a multi-task setting where scores of one feature may help determine the correlation of others. However, conditioning on previously generated scores may bias generation on previous predictions in the context, thereby worsening performance. 
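The rating-scale variants and sample-averaging described above are easy to prototype; the helpers below are our own illustration, with generate_fn standing in for a single LLM call that returns the raw response text.

import re
import statistics

WORD_OFFSETS = {'': 0.0, 'weak': -0.33, 'strong': 0.33}

def parse_modifier_rating(text):
    # Map outputs such as '3', 'weak 5' or 'strong 4' onto the 1-5 + word-modifier scale.
    match = re.search(r'\b(weak|strong)?\s*([1-5])\b', text.lower())
    if match is None:
        return None  # unparseable generation
    modifier, base = match.group(1) or '', int(match.group(2))
    return base + WORD_OFFSETS[modifier]

def sample_average_score(generate_fn, n_samples=10):
    # Approximate an expected-value score by averaging N sampled generations (temperature > 0),
    # which increases effective granularity without changing the prompt.
    scores = [parse_modifier_rating(generate_fn()) for _ in range(n_samples)]
    scores = [s for s in scores if s is not None]
    return statistics.mean(scores) if scores else None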
We prompt GPT-4 to produce scores for Coherence, Consistency, Fluency and Relevance in a single generation (in that order). We then look at the distributions of, for example, Consistency given each predicted score on Coherence. Formally, we are interested in using our predictions to estimate the conditional probability: P(Consistency = X | Coherence = Y ) We then plot the frequency of evaluated scores when the previous score was above or below 5 out of 10. Figure 4 shows one such plot, and the remainder of pairings are shown in Appendix C. We find that there is a disproportionate biasing effect from the model, where the mean score assigned to samples with a previous assigned score above 5 is substantially greater than the mean score assigned to samples with previous scores of 5, while these scores should not be so strongly correlated. In other words, LLM evaluators tend to overrely on this adjustment of its priors\u2014experiencing an anchoring effect. This is unsurprising due to LLM\u2019s auto-regressive generation, but points out the need to correct for such biases if utilizing multi-attribute predictions. 0 2 4 6 8 10 GPT-4 Score for Consistency 0.0 0.2 0.4 0.6 Frequency Previous Score of 1-5 Previous Score of 6-10 Means 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0 Human Expert Score for Consistency 0 1 2 Frequency Previous Score of 1-3 Previous Score of 3.33-5 Means Figure 4: (Top) Score distribution for consistency, conditioned on the previously assigned score for coherence when predicting both within the same context. (Bottom) Human-determined scores for consistency conditioned on what range the score fell into for coherence.4Human scores are correlated by Pearson\u2019s r = 0.315, while GPT-4 scores are correlated by r = 0.979. The above figures clearly show how previous scores bias the distribution of future scores in the generation. While such biasing is natural (and in part valid), the effect here is so large it harms performance. As seen in Figure 4 , one source of poor performance for GPT-4 is that humans mostly rate summaries as highly consistent (4-5) while GPT-4 questions consistency very often, assigning relatively low scores. We run another experiment where we again generate all four scores on SummEval within one output, but change the relative order of the Coherence attribute as compared to the other three attributes. As in Table 3, we find labels predicted later in the LLM generation experience a degradation in correlation against expert annotators (\u03c4). The results indicate that the judgment for the target attribute (i.e., Coherence) was influenced by the previous judgments for the other attributes and LLM evaluators can experience anchoring effects when multiple attributes are judged in the same prompt. N 1 2 3 4 \u03c4 0.400 0.391 0.359 0.368 Table 3: Performance of GPT-3.5-Turbo on Coherence attribute when it is the N-th attribute predicted. 3.4 Self-Inconsistency The general performance and self consistency of LLM-based metrics is problematic when considering actual uses. While Stureborg et al. (2024) point out that even low correlations with human judgements can be used to make high-confidence comparisons on the system level, correlation needs to be very high for any individual prediction by the automatic evaluator to be trusted. Figure 5 shows scatter plots of predictions made by an LLM evaluator as compared to human judgements. 
It shows that even predictions on a single example can vary widely by the same model given slight prompt modifications or even just sampling at temperature settings > 0. To analyze the self-inconsistency of LLM evaluators, we calculated inter-sample agreement using Krippendorff\u2019s \u03b1. Table 4 shows that the selfconsistency is worse than the consistency between multiple human annotators. 3.5 Sensitivity to Temperature and CoT Chiang and Lee (2023) determined that CoT is not always helpful in improving the performance of 4Yet again we note the LLM evaluator does not make use of the full range of the scores, with no predictions of 5/10 for consistency in this experiment \u03b1 Inter-annotator agreement (Human) 0.659 Inter-sample agreement (GPT-4) 0.587 Table 4: Krippendorff\u2019s \u03b1 for inter-annotator agreement (Human) and inter-sample agreement (GPT-4). LLM-evaluation. We investigate this further by tuning temperature settings on the task under CoT and non-CoT approaches. Many guidelines for LLM prompt-engineering have unintuitive implications when combined. Generally, lower temperature generations are preferred during simple inference tasks with LLMs. Also, Chain-of-Though (CoT) is a popular (Wei et al., 2022) strategy to increase text generation quality, reasoning, and task performance across many settings. However, we find that when using CoT prompting, lower temperatures are not preferable. This result is not immediately obvious. Instead, we propose using multiple generations at higher temperatures. Looking through the raw outputs, this seems to be due to a more diverse set of explanations that lead to a more robust numerical prediction. We posit this is similar to combining many weak estimators, and that increasing temperature helps decrease the correlation between each estimators prediction. Figure 6 shows that CoT prompting benefits from higher temperatures, while non-CoT performs better with lower temperatures. Setting outputs to deterministic generation (temperature of 0) may serve counter-productive since generating the most likely token ensures granularity is limited by the original range of the scoring. We produce predictions at various temperatures using GPT-3.5-Turbo in Figure 6, showing that increasing temperature steadily reduces performance of non-CoT prompts, while performance of CoT prompts increases sharply until approximately 0.5. CoT prompts performance then subsequently drops off or plateaus as temperature is increased further. This trend is not just over the average scores on SummEval. Figure 11 shows similar plots for both non-CoT and CoT prompts, plotting the performance on each label in SummEval individually. The trends described in Figure 6 seem to replicate on each label in this dataset. Our findings show that a single generation at temperature 0 outperforms the best tuning of multi-sample CoT is cheaper 1 2 3 4 5 Mean Expert Score on Coh 2 4 6 8 10 GPT-4 Coh Kendall's : 0.424 1 2 3 4 5 Mean Expert Score on Con 2 4 6 8 10 GPT-4 Con Kendall's : 0.366 1 2 3 4 5 Mean Expert Score on Flu 2 4 6 8 10 GPT-4 Flu Kendall's : 0.332 1 2 3 4 5 Mean Expert Score on Rel 2 4 6 8 10 GPT-4 Rel Kendall's : 0.435 Figure 5: Scatter-plots of evaluated score versus expert judgements reveal that while many papers claim 0.40 \u03c4 is strong performance, the correlation with human judgements still needs substantial improvements. 
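Inter-sample agreement of this kind can be computed by treating repeated generations for the same summaries as independent 'annotators'; the sketch below relies on the third-party krippendorff package, which is an assumption on our part rather than the paper's tooling.

import numpy as np
import krippendorff  # assumed third-party dependency: pip install krippendorff

def inter_sample_agreement(score_samples, level='interval'):
    # score_samples: shape (n_samples, n_summaries); use np.nan for failed generations.
    data = np.asarray(score_samples, dtype=float)
    return krippendorff.alpha(reliability_data=data, level_of_measurement=level)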
Even with correlation of over 0.40 Kendall\u2019s \u03c4, we notice that any individual evaluation may lie within a very wide range as compared to the ground-truth labeled by experts. Note that the full range of 1-10 is underutilized again. 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 T emperature 0.26 0.28 0.30 0.32 0.34 Mean Kendall's non-CoT CoT Figure 6: Performance of CoT and non-CoT prompting at varying Temperatures. Each prediction is computed by the average of 10 generations. Low temperatures are beneficial when making simple predictions, but higher temperatures (to a point) help improve performance when using Chain-of-Thought (CoT) prompting. This could be because of a more diverse set of explanations, leading to more unique features for prediction. and simpler than the weighted average approach from Liu et al. (2023b). When using CoT, our results motivate drawing multiple samples while tuning temperature appropriately to maximize performance. 3.6 Sensitivity to Source Document While the long-context abilities of LLMs allow predictions over more complex documents, we find that the model\u2019s use of the provided source document (the article being summarized) is questionable during automatic evaluation. The presence of this source document substantially affects ratings on fluency, which should be independent of the article text. The table below shows performance drops of LLM-based evaluation using GPT-3.5-Turbo when removing the Source document, although many of the categories which surely require the document to render a sensible judgement remain relatively highperforming. The LLM-evaluator may be picking up on spuriously correlated features when predicting its judgement, indicating a potentially problematic bias. Source Doc Coh Con Flu Rel Avg Included .346 .250 .237 .330 .291 Excluded .291 .167 .212 .183 .213 \u2206 -.055 -.083 -.025 -.147 -.078 %\u2206 -15.9 -33.2 -10.6 -44.6 -26.7 Table 5: Performance of GPT-3.5-Turbo with and without Source Document. Removing the source document (unsurprisingly) substantially reduces the performance of the automatic evaluator. However, this is also true for attributes that should not be dependent on the source document in the first place, such as Fluency. For categories such as relevance, making a prediction on the summary quality without the article should be impossible. Overall performance drops by 27% (relative), heavily driven by a drop in performance on relevance. While relevance is a dimension of evaluation that depends entirely on the source documents match with the summary, GPT-3.5-Turbo is able to find features that may be correlated with the expert scoring. 4 Case Study Using the lessons learned from SummEval in Section 3, we determine a few simple guidelines to significantly improve automatic evaluation with LLMs (see Table 6). We evaluate whether these guidelines improve performance by comparing to two previous works: G-Eval (Liu et al., 2023b) and a followup work by Chiang and Lee (2023). Chiang and Lee (2023) establish SOTA performance on SummEval, beating G-Eval\u2019s correlation with human judgements on the dataset. However, some (Bhandari et al., 2020; Liu et al., 2023d) have pointed out issues in these style of datasets, including that (1) expert ratings themselves include a lot of disagreement, (2) closed-source LLMs may have been trained on these well-established datasets, and (3) conclusions on these datasets don\u2019t always hold for new systems. 
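As a concrete illustration of the multi-sample protocol discussed in Section 3.5, the following hedged sketch draws several CoT generations at a tuned temperature and averages the parsed scores; the model name, prompt text, and score-extraction pattern are placeholders rather than the paper's exact configuration.

```python
# Illustrative sketch (not the paper's released code) of multi-sample CoT scoring.
import re
from statistics import mean
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def evaluate_summary(prompt: str, n_samples: int = 10, temperature: float = 0.5) -> float:
    scores = []
    for _ in range(n_samples):
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",          # placeholder model name
            messages=[{"role": "user", "content": prompt}],
            temperature=temperature,
        )
        text = resp.choices[0].message.content
        match = re.search(r"(\d+(?:\.\d+)?)\s*/\s*10", text)  # e.g. "7/10"; pattern is an assumption
        if match:
            scores.append(float(match.group(1)))
    # Average over the samples; generations without a parseable score are skipped.
    return mean(scores) if scores else float("nan")
```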
For these reasons, we evaluate our system on RoSE, a summary evaluation dataset built carefully in a multi-stage process to maximize label quality, and one unlikely to be included in GPT training data. RoSE's target label is the Atomic Content Units (ACU) metric, which is normalized to range from 0 to 1. Note that the CNNDM partition of the dataset is shared with SummEval, meaning that performance on this data is an in-domain test, while the other two partitions of RoSE serve as out-of-domain tests. Implementation of Previous Work Chiang and Lee (2023) point out issues in replicating the reported correlation values from the G-Eval paper. Therefore, we compare with these works by reimplementing their systems using the descriptions in their methods and released code, and compute all correlation values from scratch. Both Chiang and Lee (2023) and G-Eval were approaches designed for OpenAI's Completions API endpoint, as opposed to a ChatCompletion endpoint, which is more limited in formatting and has no access to token probabilities. We map the prompts into a Chat format by simply placing them into the user prompt (experiments with using our own prompt this way indicated a small but not statistically significant performance increase). For G-Eval, we sample 10 times and average the score to approximate their expected value calculation (which was done by multiplying token probabilities extracted from the model). We use auto-CoT as specified, but notice that this causes a higher proportion of \u201cfailed\u201d generations which produce text but omit any final, parseable score. Chiang and Lee (2023) suggest not including auto-CoT or any evaluation steps in their approach. For our method, we include the evaluation steps undergone by annotators for the ACU metric. This text is taken directly from Liu et al. (2023d) with edits only for grammar and conciseness. Finally, we use the rate-explain setting they describe, since it is one of their two best settings: they state that they \u201cdo not see rate-explain to be significantly better (or worse) than analyze-rate\u201d. While the authors don't point this out, rate-explain is much cheaper and faster for generation, given that you can safely stop generation after the rating has been produced. We compare all methods on GPT-4-Turbo. Our method, as determined by insights from Section 3, relies on a 1-10 scoring granularity and includes both evaluation steps and a definition of ACU (which is copy-pasted from Liu et al. (2023d) and also added to the other two approaches). We use non-CoT prompting at a temperature of 0, and generate a single output. Table 6 summarizes these approaches. None of these parameters are tuned on RoSE. While each solution in Table 6 might look like a commonly used technique, to the best of our knowledge, none of the existing work has combined them into a single recipe and conducted an empirical study to verify the effectiveness of the techniques. Results Our method outperforms both G-Eval and rate-explain on the CNNDM and SAMSum partitions. Our method achieves Kendall's \u03c4 = 0.220 on the in-domain test set and \u03c4 = 0.308 on SAMSum, indicating that this partition may be easier to evaluate. While we outperform Chiang and Lee (2023) on SAMSum, the difference is not statistically significant. This significant variation in performance is due to prompting strategies, indicating a lot of room for performance improvements through closer studies in prompt engineering. 
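The statistical-significance claims in the results above rest on bootstrap confidence intervals over Kendall's \u03c4 (as also noted in the Figure 7 caption that follows). A small illustrative sketch, not tied to the paper's implementation; `human` and `predicted` are placeholder arrays of per-summary scores.

```python
# System-level Kendall's tau with a 90% bootstrap confidence interval.
import numpy as np
from scipy.stats import kendalltau

def kendall_with_ci(human, predicted, n_boot=1000, alpha=0.10, seed=0):
    human, predicted = np.asarray(human), np.asarray(predicted)
    tau, _ = kendalltau(human, predicted)
    rng = np.random.default_rng(seed)
    boot = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(human), len(human))  # resample summaries with replacement
        b, _ = kendalltau(human[idx], predicted[idx])
        boot.append(b)
    lo, hi = np.percentile(boot, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return tau, (lo, hi)
```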
Figure 7: Performance comparison on the RoSE benchmark, reporting Kendall's \u03c4 correlation with human judgments of ACU for G-Eval, Chiang and Lee (2023), and our approach on the XSum, SAMSum, and CNNDM partitions, with 90% confidence intervals (Ours: 0.143, 0.308, 0.220; Chiang and Lee: 0.148, 0.287, 0.190; G-Eval: 0.120, 0.184, 0.106, respectively). Our approach performs statistically significantly better than the SOTA LLM-evaluator for summarization (Chiang and Lee, 2023) on the CNNDM dataset partition, and significantly better than G-Eval on both CNNDM and SAMSum. Confidence intervals are computed through bootstrap sampling. Table 6: Identified issues have immediate and actionable mitigations. Low granularity for distinguishing summaries: widen scores to a 1-10 star scale; CoT prompting requires tuning temperature: remove CoT and set temperature to 0; removing the source document impacts performance: keep the source even for attributes which don't require it; multi-attribute labels are highly correlated: predict only one attribute per generation. 5 Related Work Automatic evaluation has been dependent on human annotations. Traditional automatic evaluation metrics such as ROUGE (Lin, 2004), BLEU (Papineni et al., 2002), and METEOR (Banerjee and Lavie, 2005) consider token-level n-gram matching between system outputs and reference texts. Later, embedding-based automatic evaluation metrics such as BERTScore (Zhang et al., 2020), BLEURT (Sellam et al., 2020), and MoverScore (Zhao et al., 2019) were developed to take semantic similarity into account. Extensive efforts to remove the reliance on manually written reference texts have been attempted by creating reference-free automatic evaluation metrics (Louis and Nenkova, 2013; Fonseca et al., 2019; Scialom et al., 2019, 2021; Vasilyev et al., 2020; Rei et al., 2021). However, Deutsch et al. (2022) have pointed out the current limitations of such metrics as measures of how well models perform a task. Following this line of work, recent studies in LLM evaluators have shown that LLMs can be high-quality evaluators for various NLP tasks (Fu et al., 2023), including Summarization (Chen et al., 2023; Wang et al., 2023a; Liu et al., 2023a; Gao et al., 2023; Shen et al., 2023; Wu et al., 2023), Machine Translation (Kocmi and Federmann, 2023), Factual Consistency Evaluation (Luo et al., 2023), and other text generation tasks (Chen et al., 2023; Wang et al., 2023a; Chan et al., 2023; Kasner and Du\u0161ek, 2024). However, they have primarily focused on improvements through prompt engineering. Among them, only a few studies have tried to reveal the limitations of LLM evaluators. They have reported that LLM evaluators have position bias\u2014a preference for the first example of a pairwise comparison (Wang et al., 2023b; Zheng et al., 2023b); verbosity bias\u2014a preference for longer texts (Zheng et al., 2023b; Wu and Aji, 2023); and self-enhancement bias\u2014a preference for text generated by themselves (Zheng et al., 2023b; Panickssery et al., 2024). Koo et al. (2023) have reported cognitive biases in LLM evaluators. Following these studies, our paper aims to dig deeper and share quantitative analyses on these points and beyond. Our work partially overlaps with the recent work by Ohi et al. (2024), who study likelihood bias in LLM evaluators across data-to-text and grammatical error correction tasks. However, our work differs in that we use a different metric (i.e., perplexity) to assess the bias and focus on a different target task (i.e., summarization), providing a new perspective on this issue. 
6 Conclusion We have provided a series of analyses into biased and inconsistent behaviors exhibited by LLM evaluators for the task of text summarization. Our findings show that (1) LLM evaluators are disproportionately biased towards low-perplexity summaries, more than is helpful (familiarity bias), (2) they fail to respect scoring scales given to them when attempting to increase the granularity of scores (score bias), and (3) they show degradation in multi-attribute judgment, being influenced by their previous ratings (anchoring effect). They are also inconsistent in their own judgements depending on settings such as the inclusion of source documents. In an attempt to solve some of these issues, we share a mitigation recipe and show that we are able to significantly outperform the current SOTA method for LLM-based summary evaluation on the CNNDM partition of RoSE with 90% confidence. Our work suggests that more effort should be allocated towards understanding and remedying the issues exhibited by LLM evaluators. Limitations Reliance on GPT-based models. We experiment primarily on GPT-based, proprietary models from OpenAI due to their SOTA performance on automatic evaluation of text summarization. However, this means it is unclear how well our results generalize to other LLMs such as Llama-2, Vicuna, Alpaca, etc. Due to constraints in time and budget, extending the analysis to investigate other LLMs was not possible during the time this work was carried out. This project involved generating more than 560,000 outputs from OpenAI models; repeating the experiments on several models amounts to substantial effort and resources. Future work could aim to replicate and extend our analysis to further models. Reliance on SummEval for analysis. Our analysis section primarily investigates issues by measuring the performance of various model and prompt configurations against SummEval. There is a risk that our results do not generalize well beyond this benchmark. For this reason, we also sought to measure performance on the RoSE benchmark, which is comprised of three datasets in different domains. We find that addressing the issues seen in SummEval significantly improves performance on one of the domains, and has insignificant but positive results on the other domains. Limited solutions. Although we investigate solutions to some of the identified issues in this paper, many remain to be studied and may provide the research community with directions for future research efforts. LLMs' inconsistencies and biases as automatic evaluators are tough to build solutions around. There is ample opportunity for creative solutions, and while our work offers some, its main focus is identifying the existing issues in the first place. Ethics Statement As this study focuses on text summarization and uses publicly available datasets, we do not see any clear ethical implications or considerations. We adhere to ethical research practices."
19
+ }
intro_8K/test_introduction_short_2405.01730v1.json ADDED
@@ -0,0 +1,17 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01730v1",
3
+ "title": "Converting Anyone's Voice: End-to-End Expressive Voice Conversion with a Conditional Diffusion Model",
4
+ "abstract": "Expressive voice conversion (VC) conducts speaker identity conversion for\nemotional speakers by jointly converting speaker identity and emotional style.\nEmotional style modeling for arbitrary speakers in expressive VC has not been\nextensively explored. Previous approaches have relied on vocoders for speech\nreconstruction, which makes speech quality heavily dependent on the performance\nof vocoders. A major challenge of expressive VC lies in emotion prosody\nmodeling. To address these challenges, this paper proposes a fully end-to-end\nexpressive VC framework based on a conditional denoising diffusion\nprobabilistic model (DDPM). We utilize speech units derived from\nself-supervised speech models as content conditioning, along with deep features\nextracted from speech emotion recognition and speaker verification systems to\nmodel emotional style and speaker identity. Objective and subjective\nevaluations show the effectiveness of our framework. Codes and samples are\npublicly available.",
5
+ "authors": "Zongyang Du, Junchen Lu, Kun Zhou, Lakshmish Kaushik, Berrak Sisman",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "eess.AS",
9
+ "cats": [
10
+ "eess.AS",
11
+ "cs.SD"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "Diffusion AND Model",
15
+ "gt": "Emotions play a vital role in natural speech, as they convey one\u2019s feelings, moods, and personality [1]. Speech expressiveness encompasses a diverse range of emotions [2]. Expressive voice conversion aims to jointly perform speaker identity and style transfer for emotional speakers, which poses huge potential in movie dubbing, voice acting, and human-computer interaction [3\u20135]. Voice conversion (VC) is the task of converting speaker identity while preserving linguistic content [6]. Early VC studies focus on learning a statistical mapping between speech recordings of source and target speakers [7,8]. Recent advancements in deep learning-based approaches significantly improve the performance of VC by learning the disentanglement across various speech characteristics. For instance, separate encoders guided by appropriate constraints have been explored to learn disentangled content and speaker representations [9\u201311]. Several studies introduce information bottlenecks [10, 12\u201314], instance normalization [15, 16], and mutual information [11, 17] to achieve better disentanglement between content and speaker components. We note that most VC studies primarily focus on neutral speech, overlooking the prosodic variations manifested in different emotions. Expressive VC aims to fill this gap. It remains a formidable undertaking to model emotional styles for expressive VC due to the hierarchical structure, inherent complexity, subjective nature, and variability of human emotional speech [3]. (Codes & Speech Samples: https://a2023aa.github.io/DEVC/. Junchen Lu and Kun Zhou contributed to this work during their internship at UT Dallas.) Figure 1: A comparison between the conversion of different speech components across VC [6], expressive VC [3], and emotional VC [18]. Moreover, emotional style contains both speaker-independent and speaker-dependent features [19\u201321]. Speaker-independent emotional features often exhibit consistent associations with the same emotion across different speakers [22], while speaker-dependent ones are manifested in speaker characteristics and are unique to each speaker [19]. Despite their significant roles in conveying emotional style, speaker-dependent emotional features have been overlooked in previous VC studies [4,5]. In this paper, we aim to enhance the expressiveness of converted speech for VC systems by exploring both speaker-dependent and speaker-independent emotional features. Another limitation of previous expressive voice conversion approaches [3\u20135] stems from their reliance on vocoders to reconstruct the speech waveform from acoustic features. Consequently, the quality of the synthesized speech is intricately influenced by the vocoder\u2019s performance [23], and vocoders typically require a substantial amount of high-quality speech data for training. To address these issues, we propose a fully end-to-end diffusion-based framework for expressive VC for the first time. We also demonstrate the flexibility of our model on any-to-any conversion. In this paper, we introduce an expressive voice conversion framework, namely DEVC, which enables effective speaker identity conversion for emotionally expressive speakers. 
To achieve this, we employ three encoders dedicated to handling the content representation, the speaker representation incorporating speaker-dependent emotional cues, and the speaker-independent emotion representation, respectively. Content conditioning is facilitated through speech units, while deep features derived from speaker verification (SV) and speech emotion recognition (SER) tasks are utilized to capture speaker-dependent and speaker-independent emotional information. Furthermore, we employ a conditional denoising diffusion probabilistic model (DDPM) to iteratively reconstruct the waveform from Gaussian noise, conditioned on the content, speaker, and emotion representations. Figure 2: Visualization of speaker representations and speaker-independent emotion representations of 50 randomly selected utterances from 4 speakers across 4 different emotional states: (a) speaker representations with emotion cues; (b) speaker-independent emotion representations. Each point represents one expressive utterance and the legend indicates speaker identity and emotional state information. Our main contributions include: \u2022 We propose a fully end-to-end expressive voice conversion framework based on a conditional diffusion model without the need for large-scale training data and manual annotations; \u2022 Our findings reveal that speaker embeddings derived from an SV model pre-trained on neutral data effectively capture speaker-dependent emotional cues, thereby demonstrating their utility in enhancing expressive voice conversion; and \u2022 Our proposed framework shows flexibility in identity conversion for both seen and unseen emotional speakers, achieving any-to-any expressive voice conversion. The rest of this paper is organized as follows: Section 2 provides an introduction to the related work. In Section 3, we undertake a comprehensive analysis of the emotion and speaker representations employed in our framework. Section 4 presents the details of our proposed framework. The experimental results are reported in Section 5. In Section 7, we conclude the study.",
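As a rough illustration of the second contribution above (speaker embeddings from a pre-trained SV model carrying speaker-dependent emotion cues), the sketch below extracts utterance-level embeddings with the Resemblyzer speaker-verification encoder cited later in the experiments and compares mean distances between emotion groups. The file names are hypothetical placeholders and this is not the authors' exact feature pipeline.

```python
# Hedged sketch: utterance-level speaker embeddings and emotion-pair distances.
from pathlib import Path
import numpy as np
from resemblyzer import VoiceEncoder, preprocess_wav

encoder = VoiceEncoder()  # speaker-verification encoder pre-trained on (mostly neutral) speech

def embed(path):
    # Returns a fixed-size utterance-level speaker embedding.
    return encoder.embed_utterance(preprocess_wav(Path(path)))

def mean_cross_distance(paths_a, paths_b):
    a = np.stack([embed(p) for p in paths_a])
    b = np.stack([embed(p) for p in paths_b])
    # Mean Euclidean distance over all cross pairs between the two groups.
    return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1).mean()

# If the embeddings carry speaker-dependent emotion cues, same-emotion distances
# should come out smaller than cross-emotion distances for a given speaker.
sad = ["0013_sad_000001.wav", "0013_sad_000002.wav"]        # hypothetical ESD-style names
angry = ["0013_angry_000001.wav", "0013_angry_000002.wav"]
print("sad vs sad:", mean_cross_distance(sad, sad))
print("sad vs angry:", mean_cross_distance(sad, angry))
```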
16
+ "main_content": "2.1. Expressive Voice Conversion Expressive speech is a complex composition of various components, including speaker identity, emotional style, and linguistic content. Previous models in expressive voice conversion [4, 5] primarily focus on eliminating redundant information between emotional style and other speech components, such as content and speaker identity. This objective has been achieved through techniques like mutual information loss [5] and prosody filters [4]. In another work [3], a StarGAN-based model incorporates deep features from an SER model as emotion representation and a simple one-hot vector as the speaker representation. However, this simplistic speaker representation may not adequately capture the nuances of emotional speakers, thus limiting its ability to achieve any-to-any voice conversion for arbitrary speakers. Another related task is emotional voice conversion, which focuses on converting the emotional state of the speaker while preserving the speaker\u2019s identity [18,24]. In contrast, expressive voice conversion aims to simultaneously convert both speaker identity and speech style for speakers displaying emotional expression [3], as shown in Figure 1. This paper will concentrate on expressive voice conversion. 2.2. Denoising Diffusion Probabilistic Models in Speech Synthesis Denoising diffusion probabilistic models [25] are a class of diffusion models that uses a Markov chain to gradually convert a simple distribution into a complicated data distribution. It contains two processes: a diffusion process where clean input data is converted to an isotropic Gaussian distribution by adding noise step by step; and a reverse process where clean input can be recovered from Gaussian noise by predicting and removing the noise introduced in each step of the diffusion process. Denoising diffusion probabilistic models have achieved great success in various speech synthesis tasks, such as vocoding [26], text-to-speech [27], and VC [28\u201330]. We further develop the idea and introduce an end-to-end framework for expressive voice conversion, which will be elaborated in Section 4. 2.3. Representation Learning with Self-Supervised Models Collecting labeled speech data is difficult, expensive, and timeconsuming [31\u201333]. Self-supervised speech models pre-trained on a large amount of unlabeled data can learn high-level speech representations without relying on labeled data [34\u201338]. Selfsupervised representations have been proven to be highly effective in various speech synthesis tasks due to their ability to capture essential speech content information [39, 40]. The majority of existing VC methods leveraging such representations separate content from speaker-related information through discretization [41,42]. To retain more linguistic content in VC, van Niekerk et al. [43] propose to learn soft speech units by predicting a distribution over discretized representations, further improving the intelligibility of converted speech. Inspired by this, we incorporate soft speech units as content conditioning in our proposed method. Table 1: Mean Euclidean distance between speaker representations from different emotion pairs among 4 speakers. The first row and the first column respectively represent emotion states of two groups, with each cell indicating a unique emotion pair. Note that speakers here are the same set of speakers from Figure 2. 
(a) Speaker: 0013 (M1) Angry Happy Neutral Sad Angry 0.670 0.724 0.761 0.739 Happy 0.722 0.719 0.773 0.754 Neutral 0.753 0.765 0.676 0.703 Sad 0.729 0.751 0.705 0.667 (b) Speaker: 0020 (M2) Angry Happy Neutral Sad Angry 0.705 0.774 0.811 0.867 Happy 0.784 0.669 0.761 0.783 Neutral 0.832 0.754 0.693 0.735 Sad 0.873 0.776 0.733 0.615 (c) Speaker: 0016 (F1) Angry Happy Neutral Sad Angry 0.699 0.803 0.782 0.955 Happy 0.811 0.751 0.783 0.904 Neutral 0.787 0.774 0.726 0.896 Sad 0.952 0.896 0.890 0.682 (d) Speaker: 0018 (F2) Angry Happy Neutral Sad Angry 0.672 0.745 0.725 0.808 Happy 0.756 0.700 0.732 0.758 Neutral 0.724 0.733 0.668 0.780 Sad 0.814 0.752 0.772 0.695 3. Emotion and Speaker Representations: A Novel Analysis for Expressive VC Expressive speech introduces variations in the acoustic features, resulting in increased complexity for speaker identity among emotional speakers [44]. In expressive voice conversion, it is expected that speaker identity contains both speaker-dependent emotional cues and other speaker-related information. Speakerdependent emotional cues refer to the emotional information that is specific to an individual speaker. These cues include variations in intonation, rhythm, and voice quality that reflect the speaker\u2019s emotional state. To incorporate such speaker-dependent emotional cues into our expressive voice conversion model, we extract speaker representations from a pre-trained SV model [45]. These speaker representations serve as conditioning information for our expressive voice conversion model, allowing it to generate converted speech that preserves both target speaker identity and their emotional nuances. By employing the t-SNE algorithm [46], we visualize the speaker representations of two female speakers and two male speakers, as shown in Fig. 2(a). We observe that the speaker representations form separate clusters for each speaker, indicating successful differentiation of speaker identities. We also notice the formation of smaller clusters within the same speaker and emotional state, indicating the presence of speakerdependent emotional information. To further evaluate the similarity of speaker representations, we perform a comprehensive analysis. We randomly selected 240 emotional speech utterances from the ESD dataset [18] and divided them equally into two groups. We calculate the Euclidean distance between utterance pairs from these groups and obtain the mean distance of emotion pairs. As presented in Table 1, the results consistently demonstrate that the distances within the same emotional state are lower compared to those between different emotions. This consistent pattern strongly supports our hypothesis that the speaker representations not only capture speaker-related information but also encompass speaker-dependent emotional cues. In our proposed method, we adopt deep features obtained from a pre-trained speaker-independent SER model [47] as speaker-independent emotion representations, following the approach in [48]. To visually analyze the speaker-independent emotion representations, we employ the t-SNE algorithm [46] and present the results in Figure 2(b). The visualization demonstrates that the emotion representations derived from different emotions form well-separated clusters. Within each cluster, representations from the same emotion but different speakers tend to be mixed up. 
This observation confirms that our emotion representations encompass speaker-independent emotional style information, allowing for generalization across different individuals. Our analysis provides valuable insights into the nature of our emotion and speaker representations. By incorporating these representations, our framework enables a single DDPM to synthesize expressive speech for any given speaker by effectively modeling both common characteristics of each emotional state across different speakers and emotional nuances associated with each individual. Further details on this approach will be provided in Section 4. 4. Proposed Method DEVC In this section, we introduce our fully end-to-end expressive VC model, Diffusion-based Expressive Voice Conversion (DEVC). DEVC consists of a content encoder, a speaker encoder, a speaker-independent emotion encoder, and a diffusion-based decoder, as illustrated in Figure 3. These three encoders extract content representations, speaker representation with emotional cues, and speaker-independent emotion representations from input expressive speech, respectively. Utilizing these auxiliary representations as conditions, the diffusion-based decoder iteratively generates the converted expressive speech, starting from Gaussian noise. 4.1. Training Stage Given an expressive speech utterance with S audio segments, the content encoder extracts a 256-dimensional content representation Rc \u2208RS\u00d7256 from the waveform. The speaker encoder extracts a 256-dimension utterance-level speaker representation Rs with emotional cues, as reported in Table 1 and Figure 2(a). The emotion encoder extracts an utterancelevel speaker-independent emotional style representation Re with 128 dimensions. To align with Rc, we upsample the two utterance-level Rs and Re to R\u2032 s \u2208RS\u00d7256 and R\u2032 e \u2208RS\u00d7128 by repeating them, respectively. We formulate the conditioning Diffusion Decoder Diffusion Decoder Residual Block PostNet xt-1 xt PreNet t Step Encoder N blocks Expressive Speech Waveform Content Encoder Speaker\u00a0Encoder Emotion\u00a0Encoder Encoders Rs Re Rc Diffusion Decoder c Figure 3: An illustration of the training phase of the proposed DEVC, where the green boxes represent the modules that are involved in the training while the others are not. c by segment-wise concatenating Rc with R\u2032 s and R\u2032 e: c = concat(Rc, R\u2032 s, R\u2032 e) (1) The conditional DDPM-based decoder has two subprocesses: the diffusion process and the reverse process. Given a segment of waveform x0, the diffusion process is the process of x0 being gradually corrupted to Gaussian noise xT within finite T steps. Assume {x1, ..., xT \u22121} is a sequence of latent variables where xt is transformed from xt\u22121 by adding Gaussian noises at each timestep t \u2208[1, T]: q(xt|xt\u22121) = N(xt; p 1 \u2212\u03b2txt\u22121, \u03b2tI) (2) where {\u03b21, ...\u03b2T } is a fixed variance schedule. Given clean data x0, sampling of xt can be written in a closed form: q(xt|x0) = N(xt; \u221a\u00af \u03b1tx0, (1 \u2212\u00af \u03b1t)I) xt = \u221a\u00af \u03b1tx0 + \u221a 1 \u2212\u00af \u03b1t\u03f5 (3) where \u03b1t = 1 \u2212\u03b2t and \u00af \u03b1t = Qt s=1 \u03b1s. Noise \u03f5 \u223cN(0, I) has the same dimensionality as data x0 and latent variables x1, ...xT . The reverse process generates a reverse sequence by sampling the posteriors q(xt\u22121|xt), starting from a Gaussian noise sample xT . 
However, since q(x_{t-1} | x_t) is intractable, the decoder learns parameterized Gaussian transitions p_\theta(x_{t-1} | x_t) with a learned mean \mu_\theta(x_t, t, c) and a fixed variance \sigma_t^2 I [25]: p_\theta(x_{t-1} | x_t) = \mathcal{N}(\mu_\theta(x_t, t, c), \sigma_t^2 I), (4) where \mu_\theta(x_t, t, c) is a function of a noise approximator \epsilon_\theta(x_t, t, c). Based on speech-conditioning pairs, we then learn the conditional diffusion-based decoder via: L_\theta = \| \epsilon - \epsilon_\theta(x_t, t, c) \|_2^2, (5) where \epsilon \sim \mathcal{N}(0, I) is the noise and \epsilon_\theta denotes the decoder with learnable parameters \theta. 4.2. Run-time Conversion At run-time, DEVC takes a source utterance from one speaker and a reference utterance from a target speaker, who is either seen or unseen during training, as shown in Figure 4. The encoders extract the content representation and the speaker-independent emotional style representation from the source utterance, and the speaker representation with emotional cues from the reference utterance; the diffusion-based decoder then generates converted expressive speech from Gaussian noise and the timestep, conditioned on those representations. Figure 4: An illustration of the run-time phase of the proposed DEVC, highlighting that waveform generation occurs without the use of a vocoder. We note that emotion representations are derived from a speaker-independent emotion recognizer, and are thereby inherently assumed to be speaker-independent (as shown in Section 3). Through our experiments, we illustrate that the emotion encoder can take input from either the source utterance or the reference. 5. Experiments 5.1. Dataset We evaluate the proposed DEVC on the ESD dataset [18]. For each emotion and each speaker, there are 300 training utterances, 20 reference utterances, and 30 test utterances. In our experiments, we select four emotions (Neutral, Angry, Happy, and Sad) and four speakers (0013, 0016, 0018, and 0020) as seen speakers for training, while randomly selecting four other speakers as unseen speakers. We evaluate our model in three scenarios: the conversion between seen speakers (S2S), the conversion from seen speakers to unseen speakers (S2U), and the conversion between unseen speakers (U2U), commonly referred to as the any-to-any task. All speech samples are sampled at 16 kHz and saved in 16-bit format. Table 2: Objective evaluation results for the conversion between speakers seen during training (S2S), the conversion from seen to unseen speakers (S2U), and the conversion between unseen speakers (U2U). MCD is measured in dB. 
Neutral Angry Happy Sad MCD\u2193 SV\u2191 FFE\u2193 VDE\u2193 MCD\u2193 SV\u2191 FFE\u2193 VDE\u2193 MCD\u2193 SV\u2191 FFE\u2193 VDE\u2193 MCD\u2193 SV\u2191 FFE\u2193 VDE\u2193 Baseline [3] 8.86 0.72 0.39 0.34 8.79 0.75 0.39 0.34 8.84 0.72 0.47 0.37 9.08 0.73 0.41 0.34 S2S DEVC 8.18 0.84 0.32 0.29 7.99 0.87 0.33 0.30 8.42 0.82 0.41 0.34 8.01 0.80 0.34 0.30 Baseline-U 8.81 0.68 0.42 0.37 8.74 0.53 0.46 0.39 8.54 0.71 0.47 0.39 8.51 0.67 0.43 0.35 S2U DEVC 8.80 0.80 0.34 0.30 8.61 0.71 0.39 0.34 8.43 0.73 0.42 0.36 8.12 0.76 0.39 0.33 Baseline-U 9.02 0.65 0.42 0.37 9.26 0.69 0.47 0.40 8.72 0.70 0.48 0.39 8.62 0.67 0.45 0.38 U2U DEVC 8.67 0.79 0.35 0.31 8.62 0.84 0.38 0.33 8.39 0.79 0.41 0.35 8.19 0.77 0.40 0.35 Table 3: The MOS results with 95% confidence interval to evaluate the speech quality of converted speech. Framework S2S S2U U2U Baseline [3] 2.12\u00b10.15 NA NA Baseline-U NA 2.45\u00b10.16 2.46\u00b10.18 DEVC 3.28\u00b10.08 3.16\u00b10.12 3.27\u00b10.13 Table 4: Ablation studies results in S2S setting. Model MCD [dB] \u2193 SV \u2191 F0-RMSE [Hz] \u2193 Rc, Rs 10.81 0.71 82.38 Rc, Re 8.34 0.79 55.62 Rc, Re, Rs 8.15 0.85 53.98 5.2. Experimental Setup DEVC consists of a content encoder, a speaker encoder, an emotion encoder, and a diffusion-based decoder. The content encoder is based on the HuBERT-Base backbone network, augmented with a linear layer [43] and pre-trained on the LibriSpeech-960 dataset [49]. The emotion encoder consists of a three-dimensional CNN layer, a BLSTM layer, an attention layer, and a fully-connected (FC) layer. It is initially pre-trained on the IEMOCAP dataset [50] and subsequently fine-tuned using the ESD dataset [48]. The speaker encoder follows the architecture in [45], utilizing a 3-layer LSTM with projection to generate 256-dimensional representations. For the diffusion decoder, the architecture is similar to that described in [26]. The step encoder comprises a 128dimensional sinusoidal position encoding and two FC layers with Switch activations. The PreNet consists of a Conv1\u00d71 layer followed by ReLU activation. The output of the step encoder and latent variables are then fed into a stack of 64 residual blocks with 128 residual channels. The skip connections from all residual layers are summed before entering the PostNet, which includes two Conv1\u00d71 layers. DEVC is trained on a single Nvidia 3080Ti GPU for 1.2 million steps using Adam optimizer with a batch size of 16 and a learning rate of 0.0002. 5.3. Baselines In a comparative study, we adopt the following two expressive VC models as our baseline: \u2022 Baseline: JES-StarGAN [3], a many-to-many expressive voice conversion framework for S2S scenarios; and \u2022 Baseline-U: We replace the one-hot speaker label in JES-StarGAN [3] with speaker representations for S2U and U2U settings. 5.4. Objective Evaluations Table 2 presents the objective evaluation results of our proposed framework and the baselines. Mel-cepstral distortion (MCD) [51] is utilized to measure spectral distortion between synthesized samples and target samples. Lower MCD values indicate better quality and closer similarity between the synthesized samples and the target samples [52]. Our proposed method outperforms the baselines in all settings (S2S, S2U, and U2U) by achieving consistently lower MCD values, which indicates a better conversion performance. To evaluate speaker similarity, we utilize a pre-trained SV model1 and report the SV accuracy in Table 2. 
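For reference, a hedged sketch of the standard mel-cepstral distortion computation behind the MCD numbers in Table 2; it assumes the two MCEP sequences are already time-aligned (e.g., via DTW) and that the 0th (energy) coefficient is excluded, which is the common convention though not stated in the paper.

```python
# Standard MCD (in dB) between aligned mel-cepstral sequences.
import numpy as np

def mcd_db(mcep_target, mcep_converted):
    """mcep_*: arrays of shape (frames, dims), aligned, without the 0th coefficient."""
    diff = mcep_target - mcep_converted
    per_frame = np.sqrt(2.0 * np.sum(diff ** 2, axis=1))
    return (10.0 / np.log(10.0)) * per_frame.mean()
```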
Higher SV accuracy indicates better conversion of speaker identity [31]. (Footnote 1: https://github.com/resemble-ai/Resemblyzer) Our proposed framework achieves higher speaker similarity compared to the baselines, highlighting its capability to accurately convert the characteristics of emotional speakers. In addition, we employ two pitch similarity metrics: Voicing Decision Error (VDE) [53] and F0 Frame Error (FFE) [54]. VDE measures the proportion of voiced/unvoiced decision differences between the converted and target utterances, while FFE captures both pitch similarity and voiced/unvoiced decision differences [42]. Our proposed method outperforms the baselines, indicating its effectiveness in capturing and reproducing the desired characteristics in the converted speech. 5.5. Subjective Evaluation We conduct listening experiments with 13 subjects to assess speech quality, speaker similarity, and emotional style similarity. Each of them listens to 192 converted utterances in total. In the Mean Opinion Score (MOS) test, participants evaluate the converted utterances using a rating scale ranging from 1 (poor) to 5 (excellent) to assess speech quality. The results for S2S, S2U, and U2U are presented in Table 3. As shown in Table 3, DEVC outperforms the baselines in terms of speech quality. In addition, we perform ABX preference tests to evaluate speaker similarity in the S2S, S2U, and U2U settings. The listeners compare the converted utterances and select the utterance that sounds closer to the reference in terms of speaker identity. The results, illustrated in Figure 5(a), demonstrate that our proposed framework significantly outperforms the baseline, indicating that our method excels in accurately converting speaker identities for both seen and unseen speakers. We further conduct ABX preference tests to evaluate the emotional style similarity in the S2S, S2U, and U2U settings. These tests require listeners to compare the emotional styles of the reference and converted utterances. Figure 5(b) presents the results, clearly demonstrating the superior performance of our proposed framework over the baseline. These findings highlight the capability of our method to successfully convert emotional styles for arbitrary speakers, further validating the effectiveness of our emotional style modeling approach. Figure 5: ABX preference results for the S2S, S2U, and U2U settings to evaluate: (a) speaker similarity and (b) emotional style similarity. We used Baseline (JES-StarGAN) for S2S, and Baseline-U for the S2U and U2U settings. It is worth mentioning that the conversion between unseen speakers is particularly challenging due to the lack of training data. However, even in this challenging scenario, our proposed method achieves excellent results in the subjective evaluation, further affirming its ability to conduct high-quality conversion in the any-to-any setting. 5.6. Ablation studies We conduct ablation studies to analyze the impact of our speaker representations with emotion cues and speaker-independent emotion representations in the context of expressive voice conversion. Specifically, we investigate three different models, all sharing the same diffusion-based decoder architecture: the first model includes content representations and speaker representations, denoted as Rc, Rs. 
The second model incorporates content representations, speaker-independent emotional style representations, and one-hot speaker labels, denoted as Rc, Re. The third model represents our proposed method, incorporating content representations, speaker representations with emotion cues, and speaker-independent emotion representations, denoted as Rc, Re, Rs. In the S2S setting, we evaluate the performance of these models by calculating metrics MCD, SV accuracy, and F0 root mean squared errors (F0-RMSE) [55], as presented in Table 4. From the results, our proposed method, which utilizes speaker representations instead of one-hot speaker labels, exhibits superior performance compared to the model incorporating one-hot speaker vectors. This highlights the advantage of our speaker representation with emotion cues, indicating that it captures and represents speaker-dependent emotion cues more effectively. In addition, we observe that our proposed method outperforms the other models. This finding suggests the effectiveness of our emotional style modeling method in the expressive voice conversion task. 6. Discussion We note that our proposed method has the potential to be extended to convert emotion states for different speakers with various reference utterance settings. However, in this paper, our focus is solely on expressive VC. We intend to broaden the scope of this work to include the conversion of emotion states for the same speaker (emotional VC) or different speakers in our future research. 7. Conclusion In this paper, we present a novel framework for expressive voice conversion using a diffusion-based approach that supports anyto-any conversion. Our framework leverages speech units as content representations and incorporates deep features extracted from speech emotion recognition and speaker verification tasks to capture emotion and speaker characteristics. A key finding of our research is that speaker embeddings obtained from a pretrained SV model using neutral data inherently contain speakerdependent emotional features, thereby proving advantageous for the expressive voice conversion task. Our proposed framework exhibits remarkable flexibility in converting both seen and unseen speakers without the need for vocoders, marking it as the first end-to-end diffusion model-based expressive voice conversion framework known to us. 8."
17
+ }
intro_8K/test_introduction_short_2405.01744v1.json ADDED
@@ -0,0 +1,19 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.01744v1",
3
+ "title": "ALCM: Autonomous LLM-Augmented Causal Discovery Framework",
4
+ "abstract": "To perform effective causal inference in high-dimensional datasets,\ninitiating the process with causal discovery is imperative, wherein a causal\ngraph is generated based on observational data. However, obtaining a complete\nand accurate causal graph poses a formidable challenge, recognized as an\nNP-hard problem. Recently, the advent of Large Language Models (LLMs) has\nushered in a new era, indicating their emergent capabilities and widespread\napplicability in facilitating causal reasoning across diverse domains, such as\nmedicine, finance, and science. The expansive knowledge base of LLMs holds the\npotential to elevate the field of causal reasoning by offering\ninterpretability, making inferences, generalizability, and uncovering novel\ncausal structures. In this paper, we introduce a new framework, named\nAutonomous LLM-Augmented Causal Discovery Framework (ALCM), to synergize\ndata-driven causal discovery algorithms and LLMs, automating the generation of\na more resilient, accurate, and explicable causal graph. The ALCM consists of\nthree integral components: causal structure learning, causal wrapper, and\nLLM-driven causal refiner. These components autonomously collaborate within a\ndynamic environment to address causal discovery questions and deliver plausible\ncausal graphs. We evaluate the ALCM framework by implementing two\ndemonstrations on seven well-known datasets. Experimental results demonstrate\nthat ALCM outperforms existing LLM methods and conventional data-driven causal\nreasoning mechanisms. This study not only shows the effectiveness of the ALCM\nbut also underscores new research directions in leveraging the causal reasoning\ncapabilities of LLMs.",
5
+ "authors": "Elahe Khatibi, Mahyar Abbasian, Zhongqi Yang, Iman Azimi, Amir M. Rahmani",
6
+ "published": "2024-05-02",
7
+ "updated": "2024-05-02",
8
+ "primary_cat": "cs.LG",
9
+ "cats": [
10
+ "cs.LG",
11
+ "cs.AI",
12
+ "cs.CL",
13
+ "stat.ME"
14
+ ],
15
+ "label": "Original Paper",
16
+ "paper_cat": "Knowledge AND Graph",
17
+ "gt": "The process of causal discovery, essential in various domains and scientific discoveries, seeks to reveal complex causal relationships in observational data [31, 32, 13]. For instance, in healthcare, this process is crucial and instrumental for pinpointing disease etiologies, devising effective interventions, and prevention strategies [48]. Subsequently, causal inference allows for the quantification of the influence exerted by different variables on one another, once a causal structure is identified. This phase, often referred to as causal estimation, relies on the construction of a preliminary causal 1 arXiv:2405.01744v1 [cs.LG] 2 May 2024 graph, which, despite its theoretical significance, poses considerable practical challenges, demanding substantial domain-specific expertise. In fact, studies using real-world datasets demonstrate that inferring causal graphs\u2013which is the focus of this paper\u2013from data is still a complex challenge in practical applications [34, 42, 6]. Causal discovery and causal inference, as highlighted in seminal works by Pearl and others [31, 32, 23, 13], are two key components of causal reasoning to address causal questions in diverse fields. Within the literature, numerous studies have contributed significantly to the development of a variety of efficient causal discovery algorithms aimed at uncovering the underlying causal structure from observational data. This body of research can be broadly categorized into two main groups: conventional data-driven causal discovery algorithms and those based on LLMs. Conventional causal discovery algorithms focus on learning the causal graph from samples of the joint probability distribution of observational data. They utilize various statistical techniques, including conditional independence tests, machine learning approaches, generative models, deep learning methodologies, and reinforcement learning strategies [35] to understand the joint distribution of observed variables and extract the causal connections among them. Subsequently, these algorithms assess how well the candidate causal graph aligns with the data [48, 14, 13]. Conventional causal discovery algorithms, despite being designed to be powerful and scalable, face several challenges. These include a heavy dependence on domain experts [12], who are often limited and inconsistent, and the issues of data bias, imbalance, and inadequacy which affect the accuracy of capturing true probability distributions [6]. Additionally, the use of static data can compromise model accuracy in dynamic environments, and the task of fully determining edge orientations is hindered by the presence of multiple equivalent Directed Acyclic Graphs (DAGs) [6, 35], which exponentially increase with the number of nodes [50], leading to inaccuracies and unreliability in the estimated causal graphs. Recent advancements in Large Language Models (LLMs) have significantly impacted artificial intelligence, exhibiting notable reasoning capabilities [20, 44, 9, 22, 4]. These achievements stem from the extensive data used for training LLMs, essential for effective causal reasoning [20, 9]. However, current LLM-based causal reasoning research, mainly focusing on pairwise analysis, faces scalability issues as it struggles with the complexity of full causal graph construction and handling large datasets [43, 20, 6, 7, 30]. These models often fall short in accurately and efficiently inferring comprehensive causal relationships, especially in large-scale settings [7, 6, 23, 18]. 
Despite some efforts to integrate LLMs with causal discovery processes [43, 7, 39], challenges remain due to inherent limitations and the complexity of causal inference. A synergistic approach combining LLMs with other methods may provide a more nuanced and complete understanding of causal mechanisms and address these challenges effectively. In this paper, we present an LLM-powered causal discovery framework\u2013ALCM: a multi-component Autonomous LLM-Augmented Causal Discovery Framework. ALCM proposes a synergized reason- ing method and entails three components: causal structure learning, causal wrapper, and LLM- driven refiner components to generate more accurate and robust causal graphs. ALCM is engineered to autonomously untangle causal structures by deciphering those causal relations embedded in ob- servational data. ALCM capitalizes on observed data, data-driven causal reasoning algorithms, and the implicit knowledge embedded in LLMs to optimize and streamline the entire causal reasoning process. This approach aims to establish a more robust, applicable, and reliable foundation for causal reasoning and estimation as well. We conduct a comprehensive performance evaluation of ALCM, employing LLMs and assessing their capabilities on widely recognized benchmarks [36, 42]. We compare our framework with conventional causal discovery algorithms and LLMs prompting. 2 Furthermore, we implement an automatic pipeline for making the causal discovery an automatic task.",
18
+ "main_content": "In this section, we outline the existing research on causal structure learning within the literature, delineating it into two primary groups: 1) Conventional data-driven causal discovery algorithms; and 2) Using LLMs for causal discovery. 1) Conventional data-driven causal discovery algorithms: conventional data-driven causal discovery algorithms are broadly classified into five categories as follows: \u2022 Score-Based Algorithms: They operate on scores and engage in a comprehensive exploration of the entire space of potential Directed Acyclic Graphs (DAGs) to identify the most suitable graph for explaining the underlying data. Typically, such score-based approaches consist of two integral components: (i) a systematic search strategy tasked with navigating through the potential search states or the space of candidate graphs, denoted as G\u2019, and (ii) a score function designed to evaluate the viability of these candidate causal graphs. The synergy between the search strategy and the score function is instrumental in optimizing the exploration of all conceivable DAGs. A widely employed score function in the selection of causal models is the Bayesian Information Criterion (BIC) [14]. Some examples of scorebased algorithms are Greedy Equivalence Search (GES) [11], Fast Greedy Search (FGS) [33], and A* Search [47]. \u2022 Constraint-Based Algorithms: This category, exemplified by Peter-Clark (PC) [38] algorithm, employs conditional independence (CI) tests to reveal the graph\u2019s skeleton and vstructures, ultimately returning the Directed Acyclic Graph (DAG) of the functional causal model while considering v-structures and doing edge-orientations [14]. Other constraint-bsaed algorithms are like Fast Causal Inference (FCI), Anytime FCI, RFCI, PC-stable, and so forth. \u2022 Hybrid Algorithms: Hybrid approaches are founded on the integration of various causal discovery methods, combining constraint-based, score-based, Functional Causal Model (FCM)based, gradient-based, and other techniques. This amalgamation reflects a comprehensive strategy that leverages the strengths of different methodologies to enhance the robustness and effectiveness of causal discovery in complex systems. Max-Min Hill Climbing (MMHC) [40]\u2013 belonging to this category\u2013stands out as a hybrid causal discovery technique that seamlessly integrates principles from both score-based and constraint-based algorithms. This hybrid approach combines the advantages of scoring methods and constraint-based strategies, offering a comprehensive and effective framework for uncovering causal relationships in complex systems. \u2022 Function-Based Algorithms: Approaches grounded in Functional Causal Models (FCM) delineate the causal connections between variables within a defined functional structure. In FCMs, variables are expressed as functions of their direct causes (parents), augmented by an independent noise term denoted as E. The distinguishing feature of FCM-based methodologies lies in their capacity to differentiate between various Directed Acyclic Graphs (DAGs) within the same equivalence class. This discrimination is achieved by introducing supplementary assumptions concerning data distributions and/or function classes. Several notable 3 FCM-based causal discovery methodologies are introduced, including Linear Non-Gaussian Acyclic Model (LiNGAM) [37] and Structural Agnostic Modeling (SAM) [19]. SAM employs an adversarial learning methodology for causal graph identification. 
Specifically, SAM utilizes Generative Adversarial Neural Networks (GANs) to seek a Functional Causal Model (FCM) while ensuring the detection of sparse causal graphs through the incorporation of appropriate regularization terms. The optimization process involves a learning criterion that integrates distribution estimation, sparsity considerations, and acyclicity constraints. This holistic criterion facilitates end-to-end optimization of both the graph structure and associated parameters, accomplished through stochastic gradient descent. The previous three-mentioned categories may be limited to the Markov equivalence class, posing constraints. Function-based algorithms like LiNGAM [44] aim to uniquely identify causal DAGs by exploiting data generative process asymmetries or causal footprints. \u2022 Optimization-Based Algorithms: Recent investigations in causal discovery have approached the structure learning problem by casting it as a continuous optimization task, employing the least squares objective and an algebraic representation of Directed Acyclic Graphs (DAGs). Notably, this transformation converts the combinatorial nature of the structure learning problem into a continuous framework, and solutions are obtained through the application of gradient-based optimization techniques. These methods exploit the gradients of an objective function concerning the parameterization of a DAG matrix to achieve effective structure learning. NOTEARS [50] is among the causal discovery algorithms that formulate the structure learning problem as a purely continuous constrained optimization task. 2) Using LLM for causal discovery task: Leveraging recent advancements in LLMs and Natural Language Processing (NLP) presents an opportunity to offer enhanced capabilities in capturing causal concepts and relations while handling large-scale datasets more effectively [26, 10, 27]. This proficiency is rooted in the extensive training LLMs undergo on vast, high-quality datasets [18]. LLMs possess the ability to establish a comprehensive knowledge base across diverse domains, facilitating language understanding, ensuring generalizability, automating the causal reasoning pipeline, and enabling plausible reasoning. In this regard, the second group, namely using LLMs for causal discovery, is introduced. This group is classified into three major groups as follows: \u2022 Fine-tuning: This category mainly focuses on fine-tuning LLMs to empower LLMs with causal-and-effect knowledge and address the causal reasoning challenges [17, 2, 16]. For instance, Jin et al. [17] introduce the CORR2CAUSE benchmark dataset on which they fine-tune their model. This is done to both asses and empower LLMs with causal reasoning ability. In fact, CORR2CAUSE dataset serves as a tool to evaluate the proficiency of LLMs in discerning causal relationships, particularly when the LLMs are fine-tuned to distinguish causation from correlational statements in the context of NLP. \u2022 Performance Evaluation: The second category focuses on using LLM for causal discovery and delves into emerging research that explores the causal analysis capabilities of Large Language Models. In contrast to causal discovery algorithms relying on statistical patterns in the data, this group utilizes LLMs to discover causal structures from variables. A majority of these methods solely utilize LLMs to predict pairwise causal relationships among a given set of variables [46, 24, 20, 41, 30, 6, 49]. 
4 \u2022 Prior or Posterior Knowledge: In the third category, focused on employing LLMs, the objective is either to assign direction to undirected edges generated by causal discovery algorithms or to impose constraints on the edge orientation and functionality of these algorithms. [7, 6, 43]. Despite these efforts from conventional data-driven causal discovery algorithms to propose robust, precise, adaptable, efficient, and scalable causal discovery algorithms, encountered limitations and inefficiencies persist. These challenges are as follows. 1) Real-world data, often sparse and insufficient for accurately capture authentic probability distributions [6]. 2) Sole reliance on precollected static data introduces accuracy risks, particularly when models must adapt to dynamic real-world data and unforeseen factors. 3) Inferring complete edge orientations from observed data is hindered by the existence of equivalent Directed Acyclic Graphs (DAGs) [6, 35]. 4) Algorithm dependence on domain knowledge experts, who may be scarce, time/resource-intensive, or exhibit variable quality across domains [12]. 5) Traditional causal discovery algorithms fall short in answering user-submitted causal questions due to a lack of proficiency in language understanding and processing. These challenges collectively contribute to diminished accuracy, incompleteness, and unreliability in the estimated causal graph. On the other hand, significant advances have been made in utilizing LLMs for causal tasks. However, their inherent limitations in precision and complexity handling remain evident. These challenges are highlighted as follows. 1) LLMs inherently lack the precision necessary for accurately responding to complex, user-generated causal queries [41]. 2) LLMs are limited in their ability to dissect and comprehend nuanced causal concepts without additional data-driven causal reasoning algorithms. 3) There is a challenge in constructing complete causal graphs and unraveling intricate causal relations due to the oversimplified understanding of LLMs. 4) LLMs struggle with handling extensive datasets, often failing to capture the depth and variability within them. These issues collectively hinder the effectiveness of LLMs in accurately and reliably determining causal relationships. Consequently, data-driven causal reasoning algorithms assume a critical role in mitigating the limitations of LLMs in causal tasks, offering nuanced comprehension of causal concepts, unraveling intricate causal relations, constructing complete causal graphs, and handling extensive datasets. In light of these considerations, a unified, comprehensive causal framework that integrates LLMs with data-driven conventional causal discovery algorithms is required. To address this need, we propose the development of ALCM. ALCM aims to enhance the robustness and accuracy of causal discoveries by leveraging the conventional causal discovery algorithms and LLMs. Table 1 indicates the capabilities of two distinct causal discovery methods\u2014Conventional datadriven Causal Discovery (CCD), LLMs-based approaches, and ALCM framework\u2014across essential functional attributes. Dynamic Data Adaptability[23, 5, 49] is the capability of a method to adjust to changing data, while Detection of Hidden Variables[23, 49] refers to identifying unobserved influencers within the dataset. 
Comprehensive Graph Model Representation [6] assesses the completeness of the depicted causal structure, and Predictive Accuracy[20, 39, 23, 43, 41, 30, 49] measures the success in forecasting the correct causal relations. CCD methods are limited by their reliance on pre-defined statistical models as well as domain knowledge expert validation, lacking adaptability to dynamic data, generalizability[20, 15] to unseen data, autonomy, and lack of accuracy. Similarly, while LLMs are adept at dynamicity of data, generalizability, and detecting hidden variables, they fall short in providing comprehensive graph model representations, interpretability, explainability, autonomy, and precision for causal discovery task. ALCM combining these strengths while 5 enhancing user independence from expert validation [20] and interpretability[8] in causal discovery. Table 1: Comparative Analysis of CCD, LLMs, and ALCM across Key Functional Attributes Descriptive Attribute CCD 1 LLMs ALCM Dynamic Data Adaptability \u00d7 \u2713 \u2713 Detection of Hidden Variables \u00d7 \u2713 \u2713 Comprehensive Graph Model Representation \u2713 \u00d7 \u2713 Predictive Accuracy \u00d7 \u00d7 \u2713 Autonomous Operation \u00d7 \u00d7 \u2713 Generalizability to Unseen Data \u00d7 \u2713 \u2713 Autonomous Expert Validation \u00d7 \u2713 \u2713 Interpretability and Explainability \u2713 \u00d7 \u2713 1 CCD methods often rely on pre-defined statistical models and assumptions about the data generation process. 2 LLMs-based methods may utilize vast amounts of data and natural language processing to infer causal relationships, potentially incorporating domain expertise. 3 ALCM synthesizes the strengths of both CCDs and LLMs to uncover causal connections. 3 Proposed Framework In this section, we present ALCM, an advanced causal discovery framework aimed to leverage the combined strengths of traditional causal discovery algorithms and LLMs. ALCM provide an automated pipeline constructing a comprehensive causal graph, refining it, and incorporating previously overlooked insights to enrich the resulting causal model. This integration aims to utilize the precision of conventional causal discovery algorithms in identifying data relationships, while also enhancing and validating these findings with insights from LLMs. Fig. 1 indicates an overview of the ALCM framework. The algorithmic perspective of the ALCM framework is detailed in Algorithm 1. The ALCM framework includes three principal components: Causal Structure Learning, Causal Wrapper, and LLM-driven Refiner. To clarify the functionality and definitions of the framework, we present and exemplify these components in the following. 
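To make the interplay of these three components concrete before describing each one, the following is a minimal, hypothetical Python sketch of the pipeline; the function names (run_structure_learning, wrap_edges_as_prompts, refine_with_llm), the toy edge list, and the stubbed refiner are illustrative assumptions and not the actual ALCM implementation.

# Hypothetical sketch of the three-stage ALCM pipeline; names and data structures
# are illustrative assumptions, not the authors' code.
from dataclasses import dataclass
from typing import List, Tuple

Edge = Tuple[str, str]  # directed edge (cause, effect)

@dataclass
class CausalPrompt:
    edge: Edge
    text: str

def run_structure_learning(dataset_name: str) -> List[Edge]:
    """Stage 1: a conventional algorithm (e.g., PC) would produce an initial edge set.
    Here we return a fixed toy graph instead of running a real algorithm."""
    return [("Pollution", "Cancer"), ("Smoker", "Cancer"), ("Cancer", "X-ray")]

def wrap_edges_as_prompts(edges: List[Edge], domain: str) -> List[CausalPrompt]:
    """Stage 2 (causal wrapper): turn each edge into a causal-aware prompt roughly
    following the Instruction + Causal Context + Metadata + Question + Output format template."""
    prompts = []
    for cause, effect in edges:
        text = (
            f"Assume you are an expert in {domain} and causal discovery. "
            f"A causal discovery algorithm (PC) run on a {domain} dataset suggests "
            f"that '{cause}' causes '{effect}'. Evaluate this conclusion, revise it if "
            f"necessary, and answer True/False with a confidence level."
        )
        prompts.append(CausalPrompt(edge=(cause, effect), text=text))
    return prompts

def refine_with_llm(prompts: List[CausalPrompt]) -> List[Edge]:
    """Stage 3 (LLM-driven refiner): each prompt would be sent to an LLM; this stub
    keeps every edge. A real refiner could also delete, reverse, or add edges."""
    return [p.edge for p in prompts]

if __name__ == "__main__":
    initial_edges = run_structure_learning("Cancer")
    prompts = wrap_edges_as_prompts(initial_edges, domain="cancer risk factors")
    final_edges = refine_with_llm(prompts)
    print("Final causal edges:", final_edges)

In the real framework, run_structure_learning would invoke an algorithm such as PC, and refine_with_llm would query an LLM with the causal prompts described in the following subsections.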
Figure 1: ALCM Architecture. The figure depicts the pipeline from the input Dataset through the Data-driven Causal Structure Learning algorithm, the resulting Causal Graph, the Causal Wrapper that produces a Set of Causal Prompts, and the LLM-driven Refiner (Perception, Refinery, Visualization) to the Final Causal Graph.

Algorithm 1 ALCM
Require: Observed dataset, O; Contextual Causal Information, C; Metadata, M
Ensure: Causal DAG, DAG
1: Initialize and run the selected data-driven causal discovery algorithm CD, Gi \u2190 CD(O)
2: Generate the causal prompt by injecting C and M
3: for each z = (ei, ej) in Gi do
4:   if z is validated by LLM-Driven Refiner then
5:     Gi \u2190 Gi \u222a \u2205
6:   end if
7:   if z orientation is revised by LLM-Driven Refiner then
8:     Gi \u2190 z\u2032 \u222a Gi
9:   end if
10:  if z is removed by LLM-Driven Refiner then
11:    Gi \u2190 Gi \u2212 z\u2032
12:  end if
13:  if a new z\u2032\u2032 is added by LLM-Driven Refiner then
14:    Gi \u2190 z\u2032\u2032 \u222a Gi
15:  end if
16: end for
17: return Gi

3.1 Causal Structure Learning
The Causal Structure Learning component is our data-driven conventional causal discovery component. It receives a dataset as its input and generates the initial causal graphs from the dataset. This component uncovers causal insights by analyzing purely observational data, and it develops graphical structures that can be interpreted causally. This component directly influences the accuracy and reliability of both the final causal graph and future causal inferences drawn from the data. Conventional causal discovery algorithms can be leveraged in this component. For instance, we can implement conventional causal discovery algorithms, including Peter-Clark (PC) [38] and Linear Non-Gaussian Acyclic Model (LiNGAM) [37], to discern the probabilistic dependencies and independencies among variables. These algorithms are selected based on their proven efficacy in uncovering intricate causal relationships within complex data. For the implementation part, we implement the PC conventional causal discovery algorithm; moreover, we propose a hybrid method combining the PC and LiNGAM algorithms to identify causal relationships effectively. Building on this foundation, the causal structure learning component crafts an initial causal graph, encapsulating the potential causal linkages derived from the datasets. Finally, this generated causal graph is then relayed to the Causal Wrapper component for further processing.

3.2 Causal Wrapper
The Causal Wrapper component serves as a critical intermediary, or bridge, between the causal structure learning and LLM-driven refiner components. This component encapsulates and translates the raw, initial causal graph into a series of contextual, causal-aware prompts (i.e., causal prompts). These prompts are fed to the LLM-driven refiner. The primary aim of these causal prompts is to act as guides for the LLM-driven refiner, aiding it in comprehending the initial causal graph. Furthermore, these causal prompts direct the LLM-driven refiner to identify and integrate the relevant and updated causal background knowledge to make the solution more suited to the specific causal discovery problem at hand. Given these reasons, this prompting strategy ensures that the final causal graph is not only precise but also robust and reflective of the underlying causal mechanisms within the dataset. Equation 1 shows our causal-aware prompting strategy, which infuses the problem context and metadata information into the prompts. This prompting strategy was inspired by an effort by Kim et al. [21].
They demonstrated that contextual information is important in boosting the overall performance of LLMs\u2019 responses. Causalprompt = Instruction + Causal Context + Metadata + Question + Output format (1) This enhancement is accomplished by incorporating explicit elements into the prompt, with each edge being transformed into a causal prompt structured as follows: Instructions: This section clarifies the role of LLMs, their objectives, and the expected behavior. Causal Context: It includes details about the selected causal discovery algorithm, such as its name and output. Metadata: This section outlines the dataset domain or variable names along with their descriptions. Question: It specifies the precise query, for example, whether A causes B. Output format: This delineates the desired format for the output. Figure 2 illustrates an example of the causal wrapper\u2019s functionality. Additionally, the output can incorporate supplementary reasoning and confidence levels for the generated response. For in8 stance, a simple instruction can prompt the LLM-driven refiner to engage in step-by-step reasoning, employing a Chain-of-Thought (CoT) approach [45]. Moreover, it can request the LLM to indicate its level of confidence or likelihood regarding the generated output, using either a log-likelihood value or a confidence level. Once these causal prompts are generated, they are dispatched to the LLM-driven refiner component. This method ensures that the ALCM framework optimally utilizes LLMs for uncovering, refining, and validating causal relationships, thereby advancing the field of causal discovery with a high level of accuracy. Initial Causal Graph Cancer Dysponea Pollution X-ray Smoker Causal Wrapper Causal Prompt #1 Assume you are an expert on Cancer Risk Factors, Genetic Cancer Relation [Metadata], along with the domain of causal discovery. [Instruction] Consider yo have received the results from a causal discovery algorithm (PC) executed on a \"Cancer dataset.\" [Causal Context] The algorithm suggests a causal link where 'pollution' causes 'cancer'. [Question] Based on your current comprehensive understanding of this field, please evaluate, and adjust this conclusion as necessary. You may modify, delete, or add nodes/ edges, or change the orientation of the edges. Ensure that your modifications are grounded in actual data and avoid making unfounded assumptions. [Instruction] In terms of the output format, denote the correctness of the causal discovery algorithm\u2019s output as True or False, represent the causal relationship in the form \"('', '')\", and include your confidence level for each pair you propose [Output format]. Figure 2: Causal Prompt Demonstration 3.3 LLM-driven Refiner The LLM-driven Refiner leverages advanced language models in the refinement and evaluation of causal graphs. This component receives a series of intricately designed, contextual causal prompts from the causal wrapper component, which serve as a nuanced guide for its operations. Its core function is to assess, refine, and potentially augment the initial causal graph by evaluating the causal edges/nodes, and, where necessary, adding or removing nodes and edges to represent the underlying causal mechanisms better. The significance of the LLM-driven Refiner lies in its capacity to address and alleviate inherent limitations present in both the causal discovery algorithms and the datasets themselves. 
This component plays a pivotal role in uncovering and assimilating previously overlooked or concealed causal information, thereby elevating the accuracy and comprehensiveness of the causal graph. The identification and integration of hidden causal relationships into the graph are essential, as they can reveal causal connections or nodes that traditional causal discovery methods might miss or that dataset constraints could obscure. Upon completion of the refinement process, the results are saved, and various post-processing techniques are applied to generate the final graph. These techniques involve leveraging natural language processing (NLP) to parse and extract causal relationships from textual responses provided by LLMs. Subsequently, these extracted relationships undergo validation and structuring to form a coherent causal graph. 4 Implementation We elucidate the technical underpinnings and strategic choices behind the deployment of the ALCM framework. We provide two demonstrations of implementation of our framework to show that our 9 framework can enhance the accuracy and generalizability. 4.1 Implementation 1 (ALCM-PC) For the first implementation, we select PC causal discovery algorithm. The PC algorithm is renowned for its robustness in dealing with large datasets and its ability to infer causal structures through conditional independence tests, making it highly efficient in uncovering complex causal networks. For the causal wrapper component, we utilize causal prompt. We illustrate one example of our prompt in Figure 3. For LLM-driven refiner, we exploit OpenAI GPT-4 [29, 3] in our pipeline. Answer: Given the context of neuropathic pain and causal discovery, the output from the PC (Peter and Clark) algorithm suggesting that 'L Wrist pain' causes 'R Shoulder pain' warrants careful consideration. ... Based on the standard understanding of neuropathic pain pathways and without additional context justifying this causal relationship, the answer to the correctness of the algorithm's output would be: False. Prompt: Presuming your expertise lies in diagnosing neuropathic pain and the realm of causal discovery, consider the scenario where you are presented with the findings from a causal discovery algorithm (PC) that has been applied to a \"neuropathic dataset\". The algorithm deduces that 'L Wrist pain' (indicating left wrist pain) causes 'R Shoulder pain' (denoting right shoulder pain), with \"R\" and \"L\" symbolizing the right and left sides of the body, respectively. Leveraging your current, in-depth understanding of this field, you are asked to evaluate and amend this conclusion if necessary. Additionally, you should classify the algorithm's output as True or False to indicate whether it is correct or incorrect based on your assessment ALCM Figure 3: Prompt Template 4.2 Implementation 2 (ALCM-Hybrid) For the second implementation, we leverage a hybrid approach (which outlined in Section 3.1) including PC [38] and LiNGAM [37] algorithms due to their complementary strengths and proven effectiveness in identifying causal relationships. This hybrid method utilizes a majority vote mechanism for identifying common causal edges recognized by both algorithms. For edges that are uniquely identified by only one algorithm and not common to both, we introduce an extra step by employing LLMs as a decisive judge. This entails presenting these edges to LLMs to ascertain their potential as causal links based on contextual understanding and causal reasoning capabilities. 
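As a rough illustration of this majority-vote-plus-judge step (the prose description of which continues just below), the following hypothetical sketch keeps edges found by both PC and LiNGAM and defers disputed edges to a stubbed LLM judge; the helper llm_judge_accepts stands in for a real LLM call and is an assumption, not the authors' code.

# Hypothetical sketch of the PC/LiNGAM majority-vote step with an LLM acting as judge
# for disputed edges. The judge is stubbed; a real system would call an LLM API here.
from typing import Set, Tuple

Edge = Tuple[str, str]

def llm_judge_accepts(edge: Edge) -> bool:
    """Placeholder for an LLM call that decides whether a disputed edge is plausible."""
    cause, effect = edge
    print(f"Asking LLM judge about: {cause} -> {effect}")
    return True  # stub: accept everything

def hybrid_edge_set(pc_edges: Set[Edge], lingam_edges: Set[Edge]) -> Set[Edge]:
    agreed = pc_edges & lingam_edges               # found by both algorithms: keep
    disputed = (pc_edges | lingam_edges) - agreed  # found by only one: ask the LLM judge
    accepted = {e for e in disputed if llm_judge_accepts(e)}
    return agreed | accepted

if __name__ == "__main__":
    pc = {("Smoker", "Cancer"), ("Pollution", "Cancer")}
    lingam = {("Smoker", "Cancer"), ("Cancer", "Dyspnoea")}
    print(hybrid_edge_set(pc, lingam))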
If affirmed, these edges are added to an augmented set of causal connections. The causal wrapper component applies the causal prompt template and the result is sent to the LLM-driven refiner. The LLM-driven refiner (OpenAI model) Component evaluates, refines, and enhances the causal graph to produce a final, enhanced causal structure. 10 5 Experiments In this section, we first present benchmark datasets used in our expermients. Next, we outline the evaluation metrics selected to measure the framework\u2019s performance in terms of accuracy, robustness, and reliability. Finally, we summarize the experimental results, demonstrating the effectiveness of the ALCM framework in generating and refining causal graphs, and its ability to reveal latent causal relationships, showcasing its advancement over existing methods. 5.1 Benchmark Datasets We utilize six benchmark datasets and their ground truth causal graphs from the BN repository: Asia, Cancer, Child, Insurance, Sachs, Sangiovese [36, 25], and also the well-known Neuropathetic dataset [42] to evaluate the efficacy of the ALCM framework. These datasets are chosen for their diverse origins and complexities, covering a range of scenarios from medical studies to insurance modeling and genetic pathways. The importance of utilizing these benchmark datasets lies in their ability to provide a standardized basis for comparison, enabling the assessment of the ALCM framework\u2019s performance across varied domains and conditions. Table 2 indicates a summary of these datasets. Table 2: Summary of Datasets Dataset Domain #Nodes #Edges Asia Social Science 8 8 Cancer Medical 11 18 Child Social Science 20 31 Insurance Finance 27 43 Sachs Biological 11 18 Neuropathic Medical 221 475 Sangiovese Social Science 36 47 To ensure these datasets are compatible with the input requirements of causal discovery algorithms within the ALCM framework, we implement a series of preprocessing techniques as part of the causal structure learning component. This preprocessing involves cleaning the data, handling missing values, and normalizing data formats, among other adjustments, to tailor the datasets for optimal processing. By meticulously preparing these datasets, we facilitate their effective use as inputs for the causal discovery algorithms, ensuring that the initial causal graphs generated are as accurate and informative as possible. 5.2 Evaluation Metrics We select five metrics to assess the effectiveness and precision of the ALCM framework\u2019s causal discovery capabilities. The evaluation of the predicted causal graphs against the ground truth is paramount to validate the accuracy and reliability of our methodology. To this end, we employ five key metrics: precision, recall, F1-score, accuracy, and Normalized Hamming Distance (NHD), each selected for its ability to provide a comprehensive understanding of the framework\u2019s performance from different perspectives [48]. 11 \u2022 Precision: measures the proportion of correctly identified causal relationships out of all relationships identified by the algorithm. This metric is crucial for ensuring that the causal links proposed by our framework are indeed valid, minimizing false positives. \u2022 Recall: assesses the fraction of true causal relationships that have been correctly identified by the algorithm, highlighting the framework\u2019s ability to uncover the full extent of causal connections present within the data. 
\u2022 F1-score: serves as a harmonic mean of precision and recall, offering a single metric that balances both the accuracy and completeness of the identified causal relationships. This is particularly useful for comparing the overall performance of different causal discovery approaches.
\u2022 Accuracy: evaluates the overall correctness of the causal graph, including both the presence of true causal connections and the absence of false ones. This metric provides a straightforward assessment of the model\u2019s overall predictive performance.
\u2022 Normalized Hamming Distance (NHD): quantifies the difference between the predicted causal graph and the ground truth by measuring the proportion of mismatched edges, adjusted for the size of the graph. NHD is instrumental in assessing the structural similarity of the causal graphs, offering insights into the nuanced differences that may not be captured by other metrics. In the context of a graph with m nodes, the NHD between the predicted graph G^p and the ground-truth graph G is determined by calculating the number of edges that exist in one graph but not the other. This count is then divided by the total number of all possible edges; this formula is defined in Equation 2. In essence, the NHD provides a normalized measure of dissimilarity, offering insights into the accuracy of the predicted graph compared to the ground-truth graph, accounting for the total potential edges in the graph with m nodes.

NHD = \frac{1}{m^2} \sum_{i=1}^{m} \sum_{j=1}^{m} \mathbf{1}\left[ G_{ij} \neq G^{p}_{ij} \right] \quad (2)

5.3 Experimental Results
In this section, we present our experimental results and a comprehensive analysis of the performance of the ALCM framework, utilizing the seven aforementioned datasets and the five evaluation metrics to benchmark against existing methodologies in causal discovery, including conventional algorithms and approaches leveraging LLMs. Specifically, we implement the PC algorithm as a representative of conventional causal discovery methods and utilize LLMs powered by OpenAI\u2019s technology as a cutting-edge counterpart. The LLM-based approaches generate pairwise sets of nodes and analyze them. The outcomes of these evaluations are indicated in Table 3, which underscores the substantial enhancements achieved by our ALCM framework across various metrics. We also include the implementation of PC as the backbone causal discovery algorithm in ALCM-PC, and for ALCM-Hybrid we employ the hybrid algorithm.
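For reference, the sketch below shows one way the five metrics can be computed from directed adjacency matrices with NumPy; it is an illustrative reading of the definitions above (in particular, the exact accuracy convention used for Table 3 may differ) rather than the authors' evaluation code.

# Hypothetical sketch of the evaluation metrics over directed adjacency matrices
# (entry 1 = edge present, 0 = absent); not the authors' evaluation code.
import numpy as np

def evaluate(pred: np.ndarray, truth: np.ndarray) -> dict:
    assert pred.shape == truth.shape
    m = truth.shape[0]
    tp = np.sum((pred == 1) & (truth == 1))   # correctly identified edges
    fp = np.sum((pred == 1) & (truth == 0))   # spurious edges
    fn = np.sum((pred == 0) & (truth == 1))   # missed edges
    tn = np.sum((pred == 0) & (truth == 0))   # correctly absent edges
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = (tp + tn) / (m * m)            # one plausible reading of "overall correctness"
    nhd = np.sum(pred != truth) / (m * m)     # Equation 2: share of mismatched cells
    return {"precision": precision, "recall": recall, "f1": f1,
            "accuracy": accuracy, "nhd": nhd}

if __name__ == "__main__":
    truth = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]])
    pred  = np.array([[0, 1, 1], [0, 0, 1], [0, 0, 0]])
    print(evaluate(pred, truth))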
Table 3: Evaluation Results for Various Causal Discovery Methods

Dataset | Method | Precision | Recall | F1-Score | Accuracy (%) | NHD
Asia | PC | 0.75 | 0.375 | 0.5 | 33.33 | 0.1429
Asia | LLMs | 0.1428 | 0.2174 | 0.1742 | 16.00 | 0.75
Asia | ALCM-PC | 1.0 | 0.5945 | 0.746 | 87 | 0.0893
Asia | ALCM-Hybrid | 0.8889 | 1.0 | 0.9412 | 96.55 | 0.0179
Cancer | PC | 0.5 | 0.5 | 0.5 | 33.33 | 0.2
Cancer | LLMs | 0.158 | 0.75 | 0.261 | 21.4 | 0.85
Cancer | ALCM-PC | 0.667 | 1.0 | 0.800 | 85.71 | 0.1
Cancer | ALCM-Hybrid | 0.9333 | 0.9655 | 0.9492 | 90.32 | 0.0333
Child | PC | 0.20 | 0.28 | 0.233 | 27.00 | 0.121
Child | LLMs | 0.0657 | 0.48 | 0.1156 | 29.21 | 0.8765
Child | ALCM-PC | 1.0 | 0.6185 | 0.764 | 78.89 | 0.047
Child | ALCM-Hybrid | 0.95 | 0.72 | 0.819 | 98.00 | 0.018
Insurance | PC | 0.2153 | 0.2692 | 0.2393 | 13.59 | 0.8640
Insurance | LLMs | 0.069 | 0.5833 | 0.1234 | 22.90 | 0.8620
Insurance | ALCM-PC | 1.0 | 0.857 | 0.923 | 94.8 | 0.054
Insurance | ALCM-Hybrid | 1.0 | 0.9 | 0.947 | 96.4 | 0.037
Neuropathic | PC | 0.45 | 0.551 | 0.4954 | 51.7 | 0.1364
Neuropathic | LLMs | 0.105 | 0.2831 | 0.202 | 10.2 | 0.4537
Neuropathic | ALCM-PC | 0.8846 | 0.6201 | 0.7291 | 89.26 | 0.0575
Neuropathic | ALCM-Hybrid | 0.8667 | 0.9692 | 0.9151 | 98.00 | 0.0165
Sachs | PC | 0.4167 | 0.5882 | 0.4878 | 80.91 | 0.209
Sachs | LLMs | 0.2081 | 0.6471 | 0.3149 | 63.24 | 0.9051
Sachs | ALCM-PC | 0.6117 | 0.7059 | 0.6554 | 87.5 | 0.1881
Sachs | ALCM-Hybrid | 0.7294 | 1.0 | 0.8435 | 90.44 | 0.1727
Sangiovese | PC | 0.4348 | 0.1818 | 0.2564 | 14.71 | 0.2761
Sangiovese | LLMs | 0.2880 | 0.6545 | 0.400 | 25.0 | 0.5143
Sangiovese | ALCM-PC | 0.6548 | 1.0 | 0.7914 | 65.48 | 0.1381
Sangiovese | ALCM-Hybrid | 0.8209 | 1.0 | 0.9016 | 93.41 | 0.0659

Our analysis shows that, compared to the baseline PC algorithm and LLM-based approaches, both ALCM-PC and ALCM-Hybrid demonstrate a notable increase in precision, recall, F1-score, and accuracy, indicating a significant improvement in both the reliability and completeness of the identified causal relationships. Conversely, the NHD exhibits a marked decrease for both ALCM-PC and ALCM-Hybrid, suggesting a closer structural alignment with the ground truth causal graph and, therefore, a more accurate representation of the causal dynamics within the datasets. The superior performance of ALCM-Hybrid over ALCM-PC can primarily be attributed to its use of a dual strategy that combines conventional causal discovery algorithms and employs a majority voting mechanism, alongside leveraging LLMs to incorporate the latest information from the Internet. ALCM-PC and ALCM-Hybrid\u2019s superior performance stems from blending conventional causal discovery techniques with LLMs and an automated refinement pipeline. This innovative mix not only utilizes conventional methods for identifying causal links but also benefits from LLMs\u2019 ability to process contextual information and updates. This synergy significantly enhances causal relationship accuracy and graph reliability. The result is a robust causal discovery tool that demonstrates marked improvements in key metrics and a closer alignment with the ground truth causal dynamics. We also observe low precision and high NHD values for some results of the LLMs-based approaches, suggesting that a significant portion of the relationships identified by the LLMs were not actually present in the ground truth, highlighting a potential issue with the algorithm\u2019s specificity or its tendency to overgeneralize from the input data. We depict the causal graphs obtained by several causal discovery methods on the Sachs dataset in Figure 4. The Sachs dataset [36] includes data on 11 phosphorylated proteins and phospholipids from human immune cells, providing a basis for analyzing protein signaling pathways and constructing causal networks. It is especially valuable for causal discovery research, with data collected from cells under different experimental conditions, making it an excellent benchmark for testing causal discovery algorithms.
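As a small companion to the aggregate scores, the following hypothetical sketch lists edge-level agreement between a predicted graph and a reference graph, which is essentially the comparison that Figure 4 visualizes; the edge lists are placeholders, not the actual Sachs ground truth or model outputs.

# Hypothetical sketch: listing edge-level agreement between a predicted graph and a
# reference graph. Edge lists are illustrative placeholders only.
from typing import Set, Tuple

Edge = Tuple[str, str]

def compare_graphs(predicted: Set[Edge], reference: Set[Edge]) -> None:
    print("Recovered edges :", sorted(predicted & reference))
    print("Missing edges   :", sorted(reference - predicted))
    print("Spurious edges  :", sorted(predicted - reference))

if __name__ == "__main__":
    reference = {("Raf", "Mek"), ("Mek", "Erk"), ("PKA", "Erk")}   # placeholder edges
    predicted = {("Raf", "Mek"), ("PKA", "Erk"), ("PKC", "Jnk")}   # placeholder edges
    compare_graphs(predicted, reference)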
The causal graphs for the ground truth, the LLMs-based approach, PC, ALCM, and ALCM-Hybrid are shown in Figures 4a, 4b, 4c, 4d, and 4e, respectively.

Figure 4: Causal graph demonstrations on the Sachs dataset: (a) causal graph for the ground truth, (b) causal graph for the LLMs-based approach, (c) causal graph for PC, (d) causal graph for ALCM, (e) causal graph for ALCM-Hybrid.

The enhanced performance across all metrics for both the ALCM-PC and ALCM-Hybrid variants can be directly linked to their innovative methodologies. ALCM\u2019s use of LLMs introduces a layer of causal reasoning and validation that is absent in traditional approaches, while the hybrid model further capitalizes on this by combining algorithmic precision with AI\u2019s contextual insights. This strategic amalgamation ensures that our framework is at the forefront of causal discovery, setting a new benchmark for accuracy, comprehensiveness, and applicability in the field. We also visualize the additive contributions of each causal discovery framework in Figures 5 and 6 on two benchmarks: Neuropathic pain and Sachs.

Figure 5: Additive Contribution to Causal Discovery Accuracy on Neuropathic Pain (PC: 51.7; ALCM-PC: 89.26; ALCM-Hybrid: 98.00; the chart also compares LLM+PC and LLM+PC+LiNGAM variants).

Figure 6: Additive Contribution to Causal Discovery Accuracy on Sachs (PC: 80.91; ALCM-PC: 87.5; ALCM-Hybrid: 90.44; the chart also compares LLM+PC and LLM+PC+LiNGAM variants).

5.4 Results for Adding New Nodes or Edges
The extensive updated knowledge and expert supervision provided by LLMs significantly facilitate the identification of elusive variables (the Markov blanket) and causal connections. These might remain undetected in the dataset or be overlooked by causal discovery algorithms. Figures 7a and 7b show this capability of unmasking these hidden aspects. As Figure 7a demonstrates, the causal discovery algorithm (PC) fails to detect all of the true nodes and edges, but ALCM can provide new nodes or edges that are not present in the output set of the causal discovery algorithm, as illustrated in Figure 7b. We also prompted the LLMs to provide a confidence level for their responses. The validity of the ALCM answer is also confirmed by up-to-date medical articles, including [28]. Traditional causal discovery depends on structured datasets, curated and annotated by human experts, and on their quality. However, such datasets are often neither available across a wide range of domains nor generalizable to new tasks. Hence, by virtue of its LLM component, we empower ALCM with the capability to uncover hidden variables and causal connections. Figure 8 illustrates the capability of ALCM to uncover hidden variables and causal relations that are not present in the dataset.

Figure 7: Causal graphs demonstrating new nodes or edges: (a) causal graph for PC (nodes: Bronchitis, Asia, Dyspnea, Tuberculosis); (b) causal graph for ALCM (nodes: Bronchitis, Dyspnea, Tuberculosis, Lung Cancer, Genetic Factors, Environmental Factors, Smoking).

Figure 8: Results for Uncovering Hidden or Ignored Nodes and Edges.

6 Future Work
In the subsequent phases of our research, we aim to develop a more sophisticated causal-aware framework. This framework will leverage the power of knowledge graphs, which are instrumental in augmenting the accuracy of our models. Furthermore, we plan to explore the integration of our framework with Monte Carlo Tree Search (MCTS).
This integration is envisioned to evolve our system into a more dynamic and adaptive problem-solving agent. Additionally, to advance the ALCM framework\u2019s capabilities and address the issue of LLM 17 hallucination, we propose integrating ALCM with the Retrieval-Augmented Generation (RAG) system and openCHA [1]. This integration aims to harness RAG\u2019s ability to augment LLMs\u2019 generative processes with data retrieval, ensuring that causal discovery are grounded in relevant and factual information. openCHA sophisticated dialogue capabilities will further enhance ALCM by enabling dynamic, interactive validation of causal hypotheses. 7 Conclusion This study underscored the transformative potential of combining LLMs with data-driven causal discovery algorithms through the introduction of the Autonomous LLM-Augmented Causal Discovery Framework (ALCM). The ALCM emerges as a groundbreaking solution, aiming to enhance the generation of causal graphs by leveraging the sophisticated capabilities of LLMs alongside conventional causal discovery techniques. By integrating causal structure learning, a causal wrapper, and an LLM-driven causal refiner, ALCM facilitated an autonomous approach to causal discovery, significantly outperforming existing methodologies in both accuracy and interpretability. The empirical validation of ALCM not only attests to its superior efficacy over prevailing LLM methods and conventional causal reasoning mechanisms but also illuminates new pathways for leveraging LLMs in uncovering intricate causal relationships across a myriad of domains."
+ }