diff --git "a/abs_29K_G/test_abstract_long_2405.01063v1.json" "b/abs_29K_G/test_abstract_long_2405.01063v1.json" new file mode 100644--- /dev/null +++ "b/abs_29K_G/test_abstract_long_2405.01063v1.json" @@ -0,0 +1,424 @@ +{ + "url": "http://arxiv.org/abs/2405.01063v1", + "title": "Fair Recommendations with Limited Sensitive Attributes: A Distributionally Robust Optimization Approach", + "abstract": "As recommender systems are indispensable in various domains such as job\nsearching and e-commerce, providing equitable recommendations to users with\ndifferent sensitive attributes becomes an imperative requirement. Prior\napproaches for enhancing fairness in recommender systems presume the\navailability of all sensitive attributes, which can be difficult to obtain due\nto privacy concerns or inadequate means of capturing these attributes. In\npractice, the efficacy of these approaches is limited, pushing us to\ninvestigate ways of promoting fairness with limited sensitive attribute\ninformation.\n Toward this goal, it is important to reconstruct missing sensitive\nattributes. Nevertheless, reconstruction errors are inevitable due to the\ncomplexity of real-world sensitive attribute reconstruction problems and legal\nregulations. Thus, we pursue fair learning methods that are robust to\nreconstruction errors. To this end, we propose Distributionally Robust Fair\nOptimization (DRFO), which minimizes the worst-case unfairness over all\npotential probability distributions of missing sensitive attributes instead of\nthe reconstructed one to account for the impact of the reconstruction errors.\nWe provide theoretical and empirical evidence to demonstrate that our method\ncan effectively ensure fairness in recommender systems when only limited\nsensitive attributes are accessible.", + "authors": "Tianhao Shi, Yang Zhang, Jizhi Zhang, Fuli Feng, Xiangnan He", + "published": "2024-05-02", + "updated": "2024-05-02", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CY", + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "As recommender systems are indispensable in various domains such as job\nsearching and e-commerce, providing equitable recommendations to users with\ndifferent sensitive attributes becomes an imperative requirement. Prior\napproaches for enhancing fairness in recommender systems presume the\navailability of all sensitive attributes, which can be difficult to obtain due\nto privacy concerns or inadequate means of capturing these attributes. In\npractice, the efficacy of these approaches is limited, pushing us to\ninvestigate ways of promoting fairness with limited sensitive attribute\ninformation.\n Toward this goal, it is important to reconstruct missing sensitive\nattributes. Nevertheless, reconstruction errors are inevitable due to the\ncomplexity of real-world sensitive attribute reconstruction problems and legal\nregulations. Thus, we pursue fair learning methods that are robust to\nreconstruction errors. 
To this end, we propose Distributionally Robust Fair\nOptimization (DRFO), which minimizes the worst-case unfairness over all\npotential probability distributions of missing sensitive attributes instead of\nthe reconstructed one to account for the impact of the reconstruction errors.\nWe provide theoretical and empirical evidence to demonstrate that our method\ncan effectively ensure fairness in recommender systems when only limited\nsensitive attributes are accessible.", + "main_content": "1 INTRODUCTION Recommender systems have become a vital technology in various Web applications, including job searching [20], online advertising [39], and e-commerce [33]. By providing personalized information filtering based on user interests, recommender systems exert significant social influence [44]. Unfair recommendations that discriminate against specific user groups [35] would negatively impact the Web and society. For instance, previous research [20] on job recommendations found that women were exposed to fewer advertisements for high-paying jobs and career coaching services than men, perpetuating the gender pay gap in the workforce. Clearly, it is of paramount importance to maintain fairness in recommender systems to ensure their trustworthiness and regulatory compliance [32]. Existing research on optimizing the fairness of recommendations can be classified into four main categories: • Data-oriented methods [6, 30], which adjust the training data according to sensitive attributes, e.g., by resampling [6] or adding antidote data [30]. • Adversarial learning-based methods [39, 45], which learn fair representations with a min-max game to prevent the encoding of sensitive attributes in user embeddings or graph structures. • Regularization-based methods [16, 24, 41, 47], which incorporate fairness metrics, such as the absolute difference in utility between groups with different sensitive attributes, into the objective function [16, 41, 47] or the reward function [24]. • Re-ranking methods [22, 40], which directly adjust recommendation results to enhance fairness across user groups with different sensitive attributes. We contend that current methods are significantly constrained by their reliance on full access to sensitive attributes [35], a condition that is often difficult to satisfy in real-world scenarios. Firstly, users are entitled, under regulatory frameworks like the General Data Protection Regulation (GDPR) [32] of the European Union, to decline the disclosure of their sensitive data at any moment. For instance, only 17% of participants reported being willing to share their income information in electronic commerce [1]. Moreover, many online retail and advertising recommendations can be accessed without registering or submitting personal information [33]. Consequently, a conundrum emerges: how can fair recommendations be obtained when only limited sensitive attributes are available [35]? To address this issue, a default choice is to reconstruct the missing sensitive attributes of users from available personal information such as historical interactions [25]. However, reconstruction errors are inevitable due to the inherent complexity and noise of user information in recommender systems.
Moreover, striving for perfectly reconstructed attributes can potentially violate user privacy, raising ethical concerns and running up against legal limitations. For instance, GDPR [32] mandates obtaining user consent for modeling user profiles (including sensitive attributes). Consequently, a subset of users may not have their sensitive attributes reconstructed at all. [Footnote 1: This can also be regarded as a special scenario with significant reconstruction errors.] The aforementioned challenges underscore the significance of devising alternative approaches that are robust to the reconstruction errors of sensitive attributes. To this end, we propose a new framework, called Distributionally Robust Fair Optimization (DRFO), to pursue fair recommendations under reconstructed sensitive attributes with errors. Instead of optimizing fairness over the reconstructed sensitive attributes, DRFO minimizes the worst-case unfairness over an ambiguity set of all potential distributions of missing sensitive attributes to account for the impact of reconstruction errors. Furthermore, for users who forbid the reconstruction of their sensitive attributes, DRFO can still provide fair recommendations by considering a larger ambiguity set of distributions. We theoretically demonstrate that DRFO ensures recommendation fairness in the face of sensitive attribute reconstruction errors, even in the absence of such reconstruction. Extensive experiments on two datasets verify the effectiveness of our approach. The main contributions are summarized as follows: • We propose a new framework for optimizing user-side fairness in recommender systems with limited sensitive attributes. • We provide theoretical evidence that our method can ensure fair recommendations in the face of sensitive attribute reconstruction errors, even in the absence of such reconstruction. • We conduct extensive experiments on two datasets, validating the rationality and effectiveness of the proposed framework.

2 PRELIMINARIES In this study, we aim to achieve fairness in recommender systems with limited sensitive attributes. We consider the widely used Demographic Parity (DP) as an exemplary fairness criterion to investigate this problem. In the following, we first introduce the notation used in this paper and then provide a brief overview of DP fairness.

2.1 Notation We use uppercase letters (e.g., $R$) to represent random variables, lowercase letters (e.g., $r$) for specific values of these variables, and calligraphic fonts (e.g., $\mathcal{R}$) for the sample space of a variable. Let $U$, $V$, $S$, and $R$ denote the user, item, user sensitive attribute, and rating, respectively. Let $\mathcal{D}$ denote the historical data. Each sample within $\mathcal{D}$ is denoted as $(u, v, s, r)$, where $u \in \mathcal{U}$, $v \in \mathcal{V}$, $r \in \mathcal{R}$, and $s \in \mathcal{S}$. In this work, we consider binary ratings, i.e., $\mathcal{R} = \{0, 1\}$. Additionally, we assume the sensitive attribute to be binary but potentially unknown for a subset of users. We split $\mathcal{D}$ into two parts: $\mathcal{D}_k$ with known sensitive attributes and $\mathcal{D}_m$ with missing sensitive attributes, so that $\mathcal{D} = \mathcal{D}_k \cup \mathcal{D}_m$ and $\mathcal{D}_k \cap \mathcal{D}_m = \emptyset$. For convenience, we denote the samples in $\mathcal{D}$ with sensitive attribute $S = s$ as $\mathcal{D}^{(s)}$, and similarly for $\mathcal{D}_k^{(s)}$ and $\mathcal{D}_m^{(s)}$.
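To make the data partition concrete, the following minimal Python sketch (illustrative only; the field and function names are ours, not the paper's) shows how $\mathcal{D}$ splits into $\mathcal{D}_k$, $\mathcal{D}_m$, and the per-group subsets $\mathcal{D}_k^{(s)}$:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Interaction:
    user: int
    item: int
    rating: int          # binary rating in {0, 1}
    s: Optional[int]     # sensitive attribute in {0, 1}, or None if missing

def split_data(data: List[Interaction]):
    """Split D into D_k (known attributes) and D_m (missing attributes)."""
    d_k = [x for x in data if x.s is not None]
    d_m = [x for x in data if x.s is None]
    # Per-group subsets D_k^(s) for s = 0, 1.
    d_k_s = {s: [x for x in d_k if x.s == s] for s in (0, 1)}
    return d_k, d_m, d_k_s
```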
2.2 Fairness Demographic Parity [5] is a widely studied fairness criterion in recommendation [16, 47]. DP requires that the model's rating prediction $\hat{R}$ be independent of the sensitive attribute $S$. A model achieving DP fairness generates recommendations without relying on $S$, thereby satisfying legal requirements or user demands against discrimination in model output [16]. We follow previous work [47] and quantify DP with the mean absolute difference (MAD) between the ratings of different groups:

$\big| \mathbb{E}[\hat{R} \mid S = 0] - \mathbb{E}[\hat{R} \mid S = 1] \big|, \quad (1)$

where $\mathbb{E}[\hat{R} \mid S = 0]$ and $\mathbb{E}[\hat{R} \mid S = 1]$ denote the expectation of the prediction $\hat{R}$ over the groups with $S = 0$ and $S = 1$, respectively. A lower MAD signifies a higher degree of DP fairness.

Fair learning. To achieve DP fairness, we can take the regularization-based method [41], which directly incorporates the MAD metric into the training objective. Formally,

$\min_{\theta} \; L(\theta) + \lambda \big| \mathbb{E}_{\mathcal{D}^{(0)}}[\hat{R}] - \mathbb{E}_{\mathcal{D}^{(1)}}[\hat{R}] \big|, \quad (2)$

where $\lambda$ is a hyper-parameter controlling the strength of the fairness regularization term, $\mathbb{E}_{\mathcal{D}^{(s)}}[\hat{R}]$ is the average predicted rating over $\mathcal{D}^{(s)}$, i.e., $\mathbb{E}_{\mathcal{D}^{(s)}}[\hat{R}] = \frac{1}{|\mathcal{D}^{(s)}|} \sum_{(u,v) \in \mathcal{D}^{(s)}} \hat{r}_{u,v}$, and $L(\theta)$ is a recommendation loss (e.g., the binary cross-entropy loss [12]). Typically, the regularization can be transformed into a set of constraints that minimize the discrepancy between the average predictions of a specific group and the overall predictions:

$\min_{\theta} L(\theta) \quad \text{s.t.} \quad \mathbb{E}_{\mathcal{D}^{(s)}}[\hat{R}] - \mathbb{E}_{\mathcal{D}}[\hat{R}] = 0, \;\; s = 0, 1, \quad (3)$

where the constraint ensures that the expected predicted rating in $\mathcal{D}^{(s)}$ equals the expected predicted rating over the entire dataset $\mathcal{D}$. Preserving any single constraint in Equation (3) is sufficient to promote fairness in the binary-sensitive-attribute scenario, while preserving multiple constraints is intended for non-binary cases.
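For concreteness, a minimal PyTorch-style sketch of the MAD regularizer in Equations (1)-(2) might look as follows (a sketch under the assumption that both groups appear in the batch; names are illustrative, not the authors' implementation):

```python
import torch

def mad_penalty(pred: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
    """Eq. (1): |E[R_hat | S=0] - E[R_hat | S=1]|.

    pred: predicted ratings, shape (N,); s: binary attributes, shape (N,).
    Assumes each group occurs at least once in the batch.
    """
    return (pred[s == 0].mean() - pred[s == 1].mean()).abs()

def regularized_objective(rec_loss, pred, s, lam=1.0):
    """Eq. (2): L(theta) + lambda * MAD, with lam the fairness strength."""
    return rec_loss + lam * mad_penalty(pred, s)
```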
3 METHODOLOGY To build a fair recommender system that addresses the challenge of missing sensitive attributes among some users, a seemingly workable solution is to directly apply Fair Learning with Reconstructed Sensitive Attributes (FLrSA). In this section, we outline this solution and highlight its limitations. Subsequently, we introduce the proposed Distributionally Robust Fair Optimization (DRFO) to overcome the impact of reconstruction errors. Finally, we discuss the extension of DRFO to situations where certain users are reluctant to have their sensitive attributes reconstructed.

3.1 FLrSA To achieve fair recommendations in the presence of missing sensitive attributes, a seemingly workable solution involves reconstructing the missing attributes and subsequently applying fair learning methods based on the reconstructed sensitive attributes. Specifically, we can follow the two steps below (see also the sketch after Step 2):

Step 1: Sensitive attribute reconstruction. Previous research has shown that user sensitive attributes can be reconstructed from available user information, such as historical interactions [33, 36] and zip codes [7]. Therefore, we can train a classifier over $\mathcal{D}_k$ to predict the missing user sensitive attributes in $\mathcal{D}_m$. Let $\hat{S} \in \{0, 1\}$ denote the reconstructed sensitive attribute. Subsequently, we can incorporate the reconstructed sensitive attributes into $\mathcal{D}_m$ and generate a new dataset $\hat{\mathcal{D}}_m$.

Step 2: Fair learning. Next, we perform fair learning over $\mathcal{D}_k$ and $\hat{\mathcal{D}}_m$ based on Equation (3), which is reformulated as:

$\min_{\theta} L(\theta) \quad \text{s.t.} \quad \eta_k^{(s)} \mathbb{E}_{\mathcal{D}_k^{(s)}}[\hat{R}] + \eta_m^{(s)} \mathbb{E}_{\hat{\mathcal{D}}_m^{(s)}}[\hat{R}] - \mathbb{E}_{\mathcal{D}}[\hat{R}] = 0, \;\; s = 0, 1, \quad (4)$

where $\hat{\mathcal{D}}_m^{(s)}$ is the subset of $\hat{\mathcal{D}}_m$ with reconstructed attribute $\hat{S} = s$, $\mathbb{E}_{\mathcal{D}_k^{(s)}}[\hat{R}]$ is the average predicted rating over $\mathcal{D}_k^{(s)}$, and

$\eta_k^{(s)} = \frac{|\mathcal{D}_k^{(s)}|}{|\mathcal{D}_k^{(s)}| + |\hat{\mathcal{D}}_m^{(s)}|}, \qquad \eta_m^{(s)} = \frac{|\hat{\mathcal{D}}_m^{(s)}|}{|\mathcal{D}_k^{(s)}| + |\hat{\mathcal{D}}_m^{(s)}|}. \quad (5)$
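A minimal sketch of Step 1, assuming for illustration a logistic-regression classifier over multi-hot interaction vectors (the paper itself uses the classifier of [36]):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def reconstruct_attributes(x_known: np.ndarray, s_known: np.ndarray,
                           x_missing: np.ndarray) -> np.ndarray:
    """Step 1 of FLrSA: fit a classifier on D_k and predict S_hat for D_m.

    x_known / x_missing: interaction-history features of users with
    known / missing attributes; s_known: their known binary attributes.
    """
    clf = LogisticRegression(max_iter=1000).fit(x_known, s_known)
    return clf.predict(x_missing)  # reconstructed attributes S_hat
```

Step 2 then merges the predicted attributes with the known ones and plugs them into the constraint of Equation (4).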
Limitations. This method relies on the accuracy of sensitive attribute reconstruction. However, achieving an entirely accurate classifier is challenging due to the inherent complexity and noise of user information in recommender systems. This leads to reconstruction errors, which can compromise the fairness of recommendations for some users. Furthermore, this method relies on users' permission for sensitive attribute reconstruction, which may not be obtainable for all users due to legal restrictions.

3.2 DRFO Reconstruction errors significantly constrain the vanilla FLrSA, as they introduce a discrepancy between the reconstructed distribution and the unknown true distribution of sensitive attributes. Consequently, relying solely on the reconstructed distribution may compromise fairness performance. Nevertheless, the unknown true distribution lies in the proximity of the reconstructed distribution. By ensuring fairness in the vicinity of the reconstructed distribution, the model can achieve robust fairness for the unknown true distribution [34]. This inspires the development of DRFO, a novel approach to fairness with limited sensitive attributes. DRFO has two main parts: 1) building an ambiguity set that encompasses the unknown true distribution based on the reconstructed sensitive attributes, and 2) ensuring fairness over the entire ambiguity set using DRO. Figure 1 provides an overview of DRFO. For convenience of discussion, we assume in this subsection that all users grant permission for the reconstruction of their sensitive attributes; the scenario where some users do not permit reconstruction due to privacy concerns is discussed in Section 3.3.

[Figure 1: Illustration of FLrSA and DRFO for providing fair recommendations with limited sensitive attributes. After the reconstruction of unknown sensitive attributes, FLrSA directly applies fair learning with the reconstructed distribution. Conversely, DRFO builds an ambiguity set that encompasses the unknown true distribution and guarantees fairness across the entire ambiguity set.]

Building ambiguity set. An ambiguity set is a set of distributions centered around the reconstructed distribution. We denote the ambiguity set as $\mathcal{B}(\rho_s; \hat{Q}^{(s)}) = \{ \tilde{Q}^{(s)} \mid dist(\tilde{Q}^{(s)}, \hat{Q}^{(s)}) \le \rho_s \}$, where $\hat{Q}^{(s)}$ denotes the reconstructed distribution, $dist(\cdot)$ denotes a distance metric, and $\rho_s$ denotes the robust radius. By setting an appropriate value of $\rho_s$, we can ensure that this set encompasses the unknown true distribution $Q^{(s)}$. In the context of facilitating fair learning, as expressed in Equation (4) involving $\mathbb{E}_{\hat{\mathcal{D}}_m^{(s)}}[\hat{R}(U, V)]$, our method builds an ambiguity set of joint distributions of $(U, V)$ conditioned on a specific sensitive attribute value $s$. Specifically, we assume that the distribution of $(U, V)$ conditioned on $S = s$ follows $Q^{(s)}$, and we represent the joint distribution of $(U, V)$ conditioned on the reconstructed sensitive attribute $\hat{S} = s$ as $\hat{Q}^{(s)}$.

Robust fair learning. Robust learning solves an optimization problem that satisfies the fairness constraints over $\mathcal{B}(\rho_s; \hat{Q}^{(s)})$:

$\min_{\theta} L(\theta) \quad \text{s.t.} \quad \eta_k^{(s)} \mathbb{E}_{\mathcal{D}_k^{(s)}}[\hat{R}] + \eta_m^{(s)} \mathbb{E}_{(U,V) \sim \tilde{Q}^{(s)}}[\hat{R}] - \mathbb{E}_{\mathcal{D}}[\hat{R}] = 0, \;\; \forall \tilde{Q}^{(s)} \in \mathcal{B}(\rho_s; \hat{Q}^{(s)}), \; s = 0, 1, \quad (6)$

where $\mathbb{E}_{(U,V) \sim \tilde{Q}^{(s)}}[\hat{R}]$ denotes the expectation of $\hat{R}$ under the distribution $\tilde{Q}^{(s)}$. Here, we transform the fairness constraints in Equation (4) from relying solely on the reconstructed distribution to guaranteeing fairness across the entire ambiguity set. [Footnote 2: We do not directly solve the optimization problem under the equality constraints in Equation (6). Instead, we convert it into an optimization of the worst-case scenario, as expressed in Equation (11).] In this way, as long as the ambiguity set encompasses the unknown true distribution, we can achieve robust fairness. Apparently, the key of DRFO lies in 1) building an appropriate ambiguity set that encompasses the unknown true distribution, and 2) solving the optimization problem in Equation (6). Next, we elaborate how to achieve them.
3.2.1 Building Ambiguity Set. We now consider establishing an appropriate ambiguity set $\mathcal{B}(\rho_s; \hat{Q}^{(s)})$ that encompasses the unknown true distribution $Q^{(s)}$. Directly estimating the exact distance between $Q^{(s)}$ and the reconstructed distribution $\hat{Q}^{(s)}$ is impossible, as the true sensitive attributes are unavailable. Instead, we estimate an upper bound on the distance between them. Similar to previous work [34], using the Total Variation (TV) distance [21] ($TV \in [0, 1]$, Appendix A.1) as the distance metric, we can estimate an upper bound based on the error rate of sensitive attribute reconstruction, as stated in the following theorem.

Theorem 1. Assuming that the reconstructed sensitive attributes $\hat{S}$ have the same prior distribution as the true sensitive attributes $S$, i.e., $P(\hat{S}) = P(S)$, the TV distance between $Q^{(s)}$ and $\hat{Q}^{(s)}$ is upper-bounded by the probability of incorrectly reconstructing the sensitive attributes, i.e., $TV(Q^{(s)}, \hat{Q}^{(s)}) \le P(S \neq \hat{S} \mid S = s)$.

The proof is provided in Appendix A.1. This theorem suggests that, assuming the distribution of the reconstructed sensitive attributes $\hat{S}$ is identical to the true distribution of $S$ [Footnote 3: If the assumption is violated, a more lenient TV distance upper bound is needed (further discussed in Appendix A.1). In our experiments, the commendable accuracy in reconstructing sensitive attributes leads to a modest disparity between $P(S)$ and $P(\hat{S})$, so we treat the assumption as approximately holding.], we can create the ambiguity set $\mathcal{B}(\rho_s; \hat{Q}^{(s)})$ as follows:

$\mathcal{B}(\rho_s; \hat{Q}^{(s)}) = \big\{ \tilde{Q}^{(s)} \mid TV(\tilde{Q}^{(s)}, \hat{Q}^{(s)}) \le \rho_s \big\}, \quad \rho_s = P(S \neq \hat{S} \mid S = s), \quad (7)$

where $\rho_s$ can be approximately estimated using the validation set [Footnote 4: In practice, by assessing the difference between the validation and test sets and combining it with the error rate on the validation set, we can set an upper bound on the sensitive attribute reconstruction errors, thus enabling the estimation of $\rho_s$.], following existing works [34]. This ambiguity set encompasses the true distribution $Q^{(s)}$, meaning that we successfully build the target ambiguity set with $\hat{Q}^{(s)}$ and $\rho_s$.
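The quantities in Equation (7) are straightforward to estimate empirically. A small sketch (the function names are ours) of estimating the robust radius from the validation error rate and of checking membership in the TV ball:

```python
import numpy as np

def estimate_radius(s_true: np.ndarray, s_pred: np.ndarray, s: int) -> float:
    """rho_s ~= P(S != S_hat | S = s): the attribute classifier's
    validation error rate on the group with true attribute s (Eq. 7)."""
    group = s_true == s
    return float(np.mean(s_pred[group] != s_true[group]))

def tv_distance(q: np.ndarray, q_hat: np.ndarray) -> float:
    """Total Variation distance between two empirical distributions;
    q belongs to B(rho_s; Q_hat) iff tv_distance(q, q_hat) <= rho_s."""
    return 0.5 * float(np.abs(q - q_hat).sum())
```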
3.2.2 Robust Fair Learning. We next consider solving the optimization problem in Equation (6). Following [13, 28], we convert it into an empirical form (i.e., represent it using the empirical distribution), enabling us to optimize it in a data-driven manner. Meanwhile, to tackle the complexity of the constrained optimization problem, we transform it into a solvable Lagrangian problem with the algorithm proposed in [27].

Empirical form. To solve Equation (6) with learning methods, the key step is using $\hat{\mathcal{D}}_m$ to estimate $\mathbb{E}_{(U,V) \sim \tilde{Q}^{(s)}}[\hat{R}]$. Denoting the empirical estimate by $\mathbb{E}_{\tilde{Q}^{(s)}, \hat{\mathcal{D}}_m}[\hat{R}]$, we obtain the empirical form of the optimization problem:

$\min_{\theta} L(\theta) \quad \text{s.t.} \quad \eta_k^{(s)} \mathbb{E}_{\mathcal{D}_k^{(s)}}[\hat{R}] + \eta_m^{(s)} \mathbb{E}_{\tilde{Q}^{(s)}, \hat{\mathcal{D}}_m}[\hat{R}] - \mathbb{E}_{\mathcal{D}}[\hat{R}] = 0, \;\; \forall \tilde{Q}^{(s)} \in \mathcal{B}(\rho_s; \hat{Q}^{(s)}), \; s = 0, 1, \quad (8)$

where $\hat{Q}^{(s)}$ and $\tilde{Q}^{(s)}$ are also converted into empirical distributions [4]: $\hat{Q}^{(s)} = \{ \hat{q}^{(s)}_{u,v} \mid (u, v, \hat{s}) \in \hat{\mathcal{D}}_m \}$, where $\hat{q}^{(s)}_{u,v}$ denotes the probability weight of the sample $(u, v, \hat{s}) \in \hat{\mathcal{D}}_m$, with $\hat{q}^{(s)}_{u,v} = 1/|\hat{\mathcal{D}}_m^{(s)}|$ if $\hat{s} = s$ and $\hat{q}^{(s)}_{u,v} = 0$ otherwise; similarly, $\tilde{Q}^{(s)} = \{ \tilde{q}^{(s)}_{u,v} \mid (u, v, \hat{s}) \in \hat{\mathcal{D}}_m \}$. $\mathbb{E}_{\tilde{Q}^{(s)}, \hat{\mathcal{D}}_m}[\hat{R}]$ represents the empirical expectation of $\hat{R}$ over $\hat{\mathcal{D}}_m$ under the distribution $\tilde{Q}^{(s)}$, and $\mathcal{B}(\rho_s; \hat{Q}^{(s)})$ denotes the empirical form of the ambiguity set defined in Equation (7). Formally:

• The empirical expectation of $\hat{R}$:

$\mathbb{E}_{\tilde{Q}^{(s)}, \hat{\mathcal{D}}_m}[\hat{R}] = \sum_{(u,v) \in \hat{\mathcal{D}}_m} \tilde{q}^{(s)}_{u,v} \cdot \hat{r}_{u,v}; \quad (9)$

• The empirical form of the ambiguity set $\mathcal{B}(\rho_s; \hat{Q}^{(s)})$:

$\mathcal{B}(\rho_s; \hat{Q}^{(s)}) = \Big\{ \tilde{Q}^{(s)} \in \mathbb{R}^{|\hat{\mathcal{D}}_m|} : \tfrac{1}{2} \sum_{(u,v) \in \hat{\mathcal{D}}_m} \big| \tilde{q}^{(s)}_{u,v} - \hat{q}^{(s)}_{u,v} \big| \le \rho_s, \; \sum_{(u,v) \in \hat{\mathcal{D}}_m} \tilde{q}^{(s)}_{u,v} = 1, \; \tilde{q}^{(s)}_{u,v} \ge 0 \Big\}, \quad (10)$

where $\tfrac{1}{2} \sum_{(u,v) \in \hat{\mathcal{D}}_m} | \tilde{q}^{(s)}_{u,v} - \hat{q}^{(s)}_{u,v} | \le \rho_s$ is the empirical implementation of the constraint $TV(\tilde{Q}^{(s)}, \hat{Q}^{(s)}) \le \rho_s$ (Equation (14) in Appendix A.1), while $\sum_{(u,v) \in \hat{\mathcal{D}}_m} \tilde{q}^{(s)}_{u,v} = 1$ and $\tilde{q}^{(s)}_{u,v} \ge 0$ ensure that $\tilde{Q}^{(s)}$ is a valid probability distribution.

Algorithm 1: DRFO
Input: dataset $\mathcal{D}_k$ with known sensitive attributes; dataset $\mathcal{D}_m$ with unknown sensitive attributes; hyper-parameters $\lambda_s$; learning rate $\alpha_\theta$ for $\theta$ and learning rate $\alpha_q$ for $\tilde{Q}^{(s)}$ ($s = 0, 1$).
1: Randomly split $\mathcal{D}_k$ into training and validation sets, and train a sensitive attribute classifier;
2: Reconstruct the sensitive attributes $\hat{S}$ for $\mathcal{D}_m$ to obtain $\hat{\mathcal{D}}_m$; for each sample $(u, v) \in \hat{\mathcal{D}}_m$, compute $\hat{q}^{(s)}_{u,v}$ ($= 1/|\hat{\mathcal{D}}_m|$ if $\hat{S} = s$, else $0$), obtaining $\hat{Q}^{(s)} = \{ \hat{q}^{(s)}_{u,v} \mid (u, v) \in \hat{\mathcal{D}}_m \}$;
3: Estimate $\rho_s \approx P(S \neq \hat{S} \mid S = s)$ on the validation set held out from $\mathcal{D}_k$;
4: for $t = 1, \ldots, T$ do
5:   Update $\theta$ by gradient descent: $\theta \leftarrow \theta - \alpha_\theta \nabla_\theta L(\theta) - \sum_{s=0}^{1} \lambda_s \alpha_\theta \nabla_\theta L_s(\theta, \tilde{Q}^{(s)})$;
6:   for $s = 0, 1$ do
7:     Update $\tilde{Q}^{(s)}$ by gradient ascent: $\tilde{Q}^{(s)} \leftarrow \tilde{Q}^{(s)} + \lambda_s \alpha_q \nabla_{\tilde{Q}^{(s)}} L_s(\theta, \tilde{Q}^{(s)})$;
8:     Project $\tilde{Q}^{(s)}$ onto $\mathcal{B}(\rho_s; \hat{Q}^{(s)})$;
9:   end for
10: end for
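As a companion to Algorithm 1, the following condensed PyTorch-style sketch illustrates one iteration of the alternating updates (lines 5-8). The projection is a simple heuristic stand-in for the exact projection onto the set of Equation (10), and the helper names (`rec_loss_fn`, `fair_loss_fn`) are illustrative assumptions rather than the authors' API:

```python
import torch

def project_to_ball(q, q_hat, rho):
    """Approximately project q onto B(rho; q_hat): renormalize onto the
    simplex, then shrink toward q_hat until the TV constraint holds.
    (A heuristic; an exact Euclidean projection is also possible.)"""
    with torch.no_grad():
        q = q.clamp(min=0)
        q = q / q.sum()
        tv = 0.5 * (q - q_hat).abs().sum()
        if tv > rho:
            q = q_hat + (rho / tv) * (q - q_hat)  # move back along the segment
            q = q.clamp(min=0)
            q = q / q.sum()
    return q

def drfo_step(model, opt_theta, q, q_hat, rho, lam, alpha_q,
              rec_loss_fn, fair_loss_fn):
    """One iteration of Algorithm 1; q, q_hat, rho, lam are dicts keyed
    by s in {0, 1}, and fair_loss_fn(model, q_s, s) computes L_s."""
    # Line 5: gradient descent on theta for the full objective.
    loss = rec_loss_fn(model) + sum(
        lam[s] * fair_loss_fn(model, q[s].detach(), s) for s in (0, 1))
    opt_theta.zero_grad()
    loss.backward()
    opt_theta.step()
    # Lines 6-8: gradient ascent on each Q_tilde^(s), then projection.
    for s in (0, 1):
        q[s].requires_grad_(True)
        (g,) = torch.autograd.grad(lam[s] * fair_loss_fn(model, q[s], s), q[s])
        with torch.no_grad():
            q[s] += alpha_q * g
        q[s] = project_to_ball(q[s].detach(), q_hat[s], rho[s])
```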
Empirical Lagrangian form. The optimization problem in Equation (8) is difficult to solve because it involves fairness constraints for all potential distributions $\tilde{Q}^{(s)}$ in $\mathcal{B}(\rho_s; \hat{Q}^{(s)})$. To overcome this challenge, we convert the fairness constraints into regularizers and optimize the worst-case unfairness within $\mathcal{B}(\rho_s; \hat{Q}^{(s)})$ to ensure fairness for the entire set [11]. Finally, we reformulate Equation (8) as a min-max optimization problem:

$\min_{\theta} \max_{\tilde{Q}^{(s)} \in \mathcal{B}(\rho_s; \hat{Q}^{(s)}),\; s = 0, 1} \; L(\theta) + \sum_{s} \lambda_s L_s(\theta, \tilde{Q}^{(s)}), \quad (11)$

where $\lambda_s$ is a hyper-parameter controlling the strength of the fairness regularizer $L_s(\theta, \tilde{Q}^{(s)})$ obtained via the Lagrangian trick:

$L_s(\theta, \tilde{Q}^{(s)}) = \Big| \eta_k^{(s)} \mathbb{E}_{\mathcal{D}_k^{(s)}}[\hat{R}] + \eta_m^{(s)} \mathbb{E}_{\tilde{Q}^{(s)}, \hat{\mathcal{D}}_m}[\hat{R}] - \mathbb{E}_{\mathcal{D}}[\hat{R}] \Big|. \quad (12)$

Here, $\tilde{Q}^{(s)}$ becomes a set of learnable parameters. We update it within $\mathcal{B}(\rho_s; \hat{Q}^{(s)})$ via the 'max' optimization to find the worst case for fairness.

Learning algorithm. To solve the optimization problem in Equation (11), we use the algorithm in [27] for solving DRO, summarized in Algorithm 1. At each iteration, we first update the model parameters $\theta$ to minimize the total loss in Equation (11) (line 5). Then, for each sensitive attribute value $s \in \{0, 1\}$, we update $\tilde{Q}^{(s)}$ to maximize the fairness-related loss $L_s(\theta, \tilde{Q}^{(s)})$ in Equation (11) (line 7) and project the updated result onto $\mathcal{B}(\rho_s; \hat{Q}^{(s)})$ (line 8, preventing $\tilde{Q}^{(s)}$ from leaving the set) to obtain the final $\tilde{Q}^{(s)}$ for that iteration.

3.3 Discussion We now discuss the extension of DRFO to situations where certain users are not willing to have their sensitive attributes reconstructed. The main challenge lies in the inability to reconstruct sensitive attributes, which is essential for building the ambiguity set in DRFO. However, even starting from a random distribution, we can build a broad ambiguity set that covers the true distribution of these users by using a large robust radius. Thereafter, by optimizing the worst-case unfairness over this broad ambiguity set, we can still achieve fairness. Let $\mathcal{D}_b$ denote the interaction data of such users. To implement this approach, we first randomly assign sensitive attributes to the users in $\mathcal{D}_b$, serving as the reconstructed sensitive attribute $\hat{S}$ in DRFO and yielding $\hat{\mathcal{D}}_b$. We then define the empirical $\hat{Q}_b^{(s)}$, $\tilde{Q}_b^{(s)}$, and $\mathcal{B}(\rho'_s; \hat{Q}_b^{(s)})$ for $\hat{\mathcal{D}}_b$, analogously to those of $\hat{\mathcal{D}}_m$.
To build the broad ambiguity set, we set the robust radius $\rho'_s$ to 1. For users with missing sensitive attributes who permit reconstruction, we use $\mathcal{D}_r$ to denote their interactions ($\mathcal{D}_r \cup \mathcal{D}_b = \mathcal{D}_m$, $\mathcal{D}_r \cap \mathcal{D}_b = \emptyset$). We then define $\hat{\mathcal{D}}_r$, $\hat{Q}_r^{(s)}$, $\tilde{Q}_r^{(s)}$, and $\mathcal{B}(\rho_s; \hat{Q}_r^{(s)})$ for $\mathcal{D}_r$ in the same way as for $\mathcal{D}_m$ in Section 3.2. Finally, we can optimize the fairness for $\hat{\mathcal{D}}_b$ and $\hat{\mathcal{D}}_r$ in a similar way to $\hat{\mathcal{D}}_m$ using DRO. Adding these optimization parts to Equation (8), we obtain the final optimization problem:

$\min_{\theta} \max_{\tilde{Q}_r^{(s)} \in \mathcal{B}(\rho_s; \hat{Q}_r^{(s)}),\; \tilde{Q}_b^{(s)} \in \mathcal{B}(\rho'_s; \hat{Q}_b^{(s)}),\; s = 0, 1} \; L(\theta) + \sum_{s=0}^{1} \lambda_s \Big| \eta_k^{(s)} \mathbb{E}_{\mathcal{D}_k^{(s)}}[\hat{R}] + \eta_r^{(s)} \mathbb{E}_{\tilde{Q}_r^{(s)}, \hat{\mathcal{D}}_r}[\hat{R}] + \eta_b^{(s)} \mathbb{E}_{\tilde{Q}_b^{(s)}, \hat{\mathcal{D}}_b}[\hat{R}] - \mathbb{E}_{\mathcal{D}}[\hat{R}] \Big|, \quad (13)$

where $\mathbb{E}_{\tilde{Q}_r^{(s)}, \hat{\mathcal{D}}_r}[\hat{R}]$ and $\mathbb{E}_{\tilde{Q}_b^{(s)}, \hat{\mathcal{D}}_b}[\hat{R}]$ represent the empirical expectations of $\hat{R}$ under the distributions $\tilde{Q}_r^{(s)}$ and $\tilde{Q}_b^{(s)}$, respectively, computed analogously to Equation (9), and $\eta_k^{(s)} = \frac{|\mathcal{D}_k^{(s)}|}{|\mathcal{D}_k^{(s)}| + |\hat{\mathcal{D}}_r^{(s)}| + |\hat{\mathcal{D}}_b^{(s)}|}$, $\eta_r^{(s)} = \frac{|\hat{\mathcal{D}}_r^{(s)}|}{|\mathcal{D}_k^{(s)}| + |\hat{\mathcal{D}}_r^{(s)}| + |\hat{\mathcal{D}}_b^{(s)}|}$, and $\eta_b^{(s)} = \frac{|\hat{\mathcal{D}}_b^{(s)}|}{|\mathcal{D}_k^{(s)}| + |\hat{\mathcal{D}}_r^{(s)}| + |\hat{\mathcal{D}}_b^{(s)}|}$. The learning algorithm can be formulated following the structure outlined in Algorithm 1.

4 EXPERIMENTS In this section, we conduct extensive experiments to answer the following research questions: RQ1: Can DRFO provide fair recommendations with limited sensitive attributes? RQ2: How do reconstruction errors affect the effectiveness of DRFO in ensuring fairness? What is the performance of DRFO on both users with known sensitive attributes and those with unknown sensitive attributes? RQ3: Can DRFO ensure fairness when some users prohibit the reconstruction of their sensitive attributes?

Table 1: Statistics of the evaluation datasets.
Dataset | #Items | #Users (S=0 / S=1) | #Interactions (S=0 / S=1) | Mean Rating (S=0 / S=1)
ML-1M | 3,244 | 1,153 / 3,144 | 228,191 / 713,590 | 0.5866 / 0.5661
Tenrec | 14,539 | 3,108 / 2,299 | 308,217 / 337,958 | 0.4849 / 0.4676

4.1 Experimental Settings 4.1.1 Datasets. We conduct experiments on two open benchmark datasets: MovieLens 1M (https://grouplens.org/datasets/movielens/) [10] and Tenrec (https://static.qblv.qq.com/qblv/h5/algo-frontend/tenrec_dataset.html) [42]. • MovieLens 1M (ML-1M): a movie recommendation dataset including user ratings of movies on a scale of 1-5 and sensitive user attributes such as 'gender'. We select 'gender' as the sensitive attribute and transform the rating values into binary labels using a threshold of 3, where ratings greater than three are labeled as positive ('1') and the rest as negative ('0'). • Tenrec: a dataset containing user feedback on articles or videos alongside anonymized user information from Tencent's recommendation platforms. We focus on the 'QB-video' sub-dataset for our experiments, using the 'like' feedback as the rating label and considering 'gender' as the sensitive attribute. In this work, we adopt the approach proposed in [36] to reconstruct sensitive attributes using only users' historical interactions. To ensure the effectiveness of the reconstruction, it is crucial for users to have a sufficient number of historical interactions in the two datasets. Therefore, we apply 50-core filtering [26, 38] to select users with more than 50 interactions, while performing 10-core filtering for items. Table 1 presents the statistics of the processed datasets after filtering. We partition the datasets into training, validation, and testing sets with a ratio of 0.7:0.15:0.15.

4.1.2 Compared Methods. Achieving fair recommendations with limited sensitive attributes is a novel research problem, and to our knowledge no existing method is specifically designed to address it in recommender systems. To establish baselines, we select two intuitive methods (RegK and FLrSA) and a method (CGL) proposed in other fields for solving this problem. – RegK [41] is a regularization-based approach that applies the fairness regularization only to users with known sensitive attributes. – FLrSA [41] first reconstructs missing sensitive attributes and then applies the fairness regularization method using both the reconstructed and the known sensitive attributes. – CGL [14] is a representative method for fairness with limited sensitive attributes in computer vision. It reconstructs sensitive attributes and addresses low-confidence reconstructions via random replacement. Subsequently, it employs a regularization-based approach using the obtained attributes. We adapt it to recommender systems. Although RegK and FLrSA are intuitively designed, similar methods have been employed in other fields to study fairness with limited sensitive attributes [14]. Additionally, we include two fairness-unaware methods as reference points.
– Basic MF [18] refers to the basic Matrix Factorization (MF) model, trained without addressing fairness issues. – Oracle [41] assumes knowledge of all users' sensitive attributes and employs the regularization-based method to achieve fair recommendations. Its results serve as an upper bound on fairness. To ensure a fair comparison, we implement the fairness regularization term based on Equation (2) to pursue demographic parity fairness, using the classical Matrix Factorization model as the backbone recommendation model for all methods.

Table 2: Comparison between baselines and DRFO on ML-1M and Tenrec w.r.t. the fairness metric DP and the recommendation performance metric RMSE under varying known sensitive attribute ratios. Lower DP and RMSE indicate better results.
ML-1M | DP↓ (10% / 30% / 50% / 70% / 90%) | RMSE↓ (10% / 30% / 50% / 70% / 90%)
Basic MF | 0.0225 / 0.0225 / 0.0225 / 0.0225 / 0.0225 | 0.4147 / 0.4147 / 0.4147 / 0.4147 / 0.4147
Oracle | 0.0009 / 0.0009 / 0.0009 / 0.0009 / 0.0009 | 0.4148 / 0.4148 / 0.4148 / 0.4148 / 0.4148
RegK | 0.0159 / 0.0132 / 0.0058 / 0.0014 / 0.0013 | 0.4144 / 0.4147 / 0.4147 / 0.4150 / 0.4149
FLrSA | 0.0095 / 0.0119 / 0.0038 / 0.0035 / 0.0003 | 0.4147 / 0.4147 / 0.4148 / 0.4151 / 0.4147
CGL | 0.0082 / 0.0145 / 0.0056 / 0.0035 / 0.0005 | 0.4147 / 0.4144 / 0.4148 / 0.4150 / 0.4149
DRFO | 0.0034 / 0.0042 / 0.0011 / 0.0013 / 0.0006 | 0.4201 / 0.4201 / 0.4191 / 0.4223 / 0.4198
Tenrec | DP↓ (10% / 30% / 50% / 70% / 90%) | RMSE↓ (10% / 30% / 50% / 70% / 90%)
Basic MF | 0.0068 / 0.0068 / 0.0068 / 0.0068 / 0.0068 | 0.4503 / 0.4503 / 0.4503 / 0.4503 / 0.4503
Oracle | 0.0001 / 0.0001 / 0.0001 / 0.0001 / 0.0001 | 0.4504 / 0.4504 / 0.4504 / 0.4504 / 0.4504
RegK | 0.0053 / 0.0052 / 0.0053 / 0.0040 / 0.0051 | 0.4558 / 0.4520 / 0.4530 / 0.4500 / 0.4500
FLrSA | 0.0073 / 0.0073 / 0.0073 / 0.0032 / 0.0013 | 0.4503 / 0.4503 / 0.4503 / 0.4597 / 0.4594
CGL | 0.0073 / 0.0073 / 0.0074 / 0.0029 / 0.0009 | 0.4503 / 0.4503 / 0.4503 / 0.4518 / 0.4596
DRFO | 0.0019 / 0.0018 / 0.0013 / 0.0018 / 0.0009 | 0.4590 / 0.4575 / 0.4565 / 0.4595 / 0.4596

4.1.3 Evaluation Protocol. In our experiments, we simulate scenarios involving unknown sensitive attributes by randomly retaining the sensitive attribute for a subset of users while masking it for the others. The retention ratio, varying in {0.1, 0.3, 0.5, 0.7, 0.9}, determines the proportion of users whose sensitive attributes are preserved during the training and validation phases. During testing, however, the sensitive attributes of all users are accessible to evaluate fairness. We use the MAD metric in Equation (1) to measure fairness (DP) and the root mean squared error (RMSE) to measure recommendation performance. A smaller value of either metric indicates better performance in terms of fairness or recommendation.

4.1.4 Implementation Details. For a fair comparison, we optimize all models using the Adam optimizer [17] with the default embedding size of 32. Before applying the fairness methods, we pre-train the MF model using grid search to determine the optimal learning rate from {1e-2, 1e-3} and the best weight decay from {1e-1, 1e-2, ..., 1e-7}. For the fairness models, we initialize their backbone MF model with the pre-trained one and then fine-tune them with a fixed learning rate of 1e-3. We tune the fairness regularization coefficient in {0.01, 0.05, 0.1, 0.5, 1, 5, 10} for the baselines. For DRFO, we set the hyper-parameter that controls the strength of the fairness constraints to 10 for ML-1M and 1 for Tenrec, and set the learning rate $\alpha_q$ for updating $\tilde{Q}^{(s)}$ to 1e-3 for ML-1M and 1e-2 for Tenrec.
Additionally, for methods that involve reconstructing unknown sensitive attributes, we use the same classifier proposed in [36], trained to fit the known sensitive attributes using historical user interactions. To ensure a fair comparison, we select hyper-parameters that achieve the best fairness (measured by DP) while maintaining at least 98% of the best recommendation performance (measured by RMSE) achieved by the basic MF on the validation set. This approach is commonly used in previous studies to balance the trade-off between fairness and recommendation performance [15, 35]. We release our code at: https://github.com/TianhaoShi2001/DRFO.

[Figure 2: Fairness comparison between baselines and DRFO on ML-1M and Tenrec for varying known sensitive attribute ratios. Lower DP values indicate better fairness.]
[Figure 3: Fairness performance under different levels of reconstruction errors for sensitive attributes, with 30% and 50% known sensitive attributes on ML-1M and Tenrec.]

4.2 Overall Performance (RQ1) We first evaluate the methods when all users' sensitive attributes are allowed to be reconstructed. We summarize the results in Table 2 for varying proportions of users with known sensitive attributes. The corresponding fairness curves for each method are shown in Figure 2 to illustrate the impact of the retention ratio. From the table and figure, we draw the following observations: • The Basic MF method exhibits notably inferior fairness compared to regularization-based methods (e.g., FLrSA) with a substantial portion (≥ 90%) of known sensitive attributes, particularly when compared to the Oracle model. This emphasizes the effectiveness of regularization-based approaches in enhancing recommendation fairness. However, as the proportion of known sensitive attributes decreases (≤ 50%), the fairness performance of regularization-based methods rapidly declines, highlighting the importance of addressing the limited-sensitive-attribute issue for improving fairness. • When the proportion of known sensitive attributes is small (≤ 50%), FLrSA outperforms RegK in fairness on ML-1M but fails to do so on Tenrec. This can be attributed to the fact that reconstructing sensitive attributes for Tenrec is more difficult [Footnote 7: This is supported by the lower reconstruction accuracy (AUC) observed on Tenrec.], thus suffering more reconstruction errors and invalidating the vanilla reconstruction-based method FLrSA.
These findings affirm the significant impact of reconstruction errors on the efficacy of reconstruction-based approaches for improving fairness. • Despite taking reconstruction errors into consideration, CGL performs similarly to FLrSA in fairness. This may be because CGL randomly assigns sensitive attributes to samples with low reconstruction confidence, which does not ensure the resolution of the reconstruction error problem. • DRFO consistently achieves a higher level of fairness than all baselines except Oracle, even when the proportion of users with known sensitive attributes is low. This confirms its effectiveness in addressing reconstruction errors to achieve fairer recommendations. The superiority of the approach can be attributed to its DRO-based fair optimization, which minimizes the worst-case unfairness over a distribution set so as to achieve fairness for the true sensitive attributes contained in the set. • DRFO achieves slightly lower recommendation performance than the baselines due to its aggressive pursuit of fairness by minimizing worst-case unfairness across potential distributions. Nevertheless, our early-stopping strategy selects the fairest model within a maximum 2% drop in RMSE on the validation set, ensuring a low drop (around 2%) in RMSE on the test set for all methods. Despite this modest sacrifice in recommendation performance, DRFO improves fairness by over 100% in many cases, particularly when numerous users lack sensitive attributes.

4.3 In-depth Analyses (RQ2) We conduct further experiments to analyze the influence of reconstruction errors on different methods and to study the fairness performance for different user groups.

[Figure 4: Absolute difference of the average predicted scores of different groups from the global average predictions. A higher difference means more unfairness. 'K' stands for 'known' and 'U' for 'unknown'; 'S=0 (K)' denotes users with the known sensitive attribute 0, and similarly for the others.]

4.3.1 The Effect of Reconstruction Errors. In this subsection, we directly investigate the impact of reconstruction errors on the fairness achieved by DRFO and the baselines. We set the ratio of users with known sensitive attributes to either 30% or 50%, and for the remaining users, we randomly flip the true sensitive attributes of some users to form the reconstructed sensitive attributes, thus introducing reconstruction errors. The flipping ratio varies from 10% to 50%, allowing us to simulate different levels of reconstruction errors. [Footnote 8: A 50% reconstruction error is significant, akin to random guessing, and we flip attributes for males and females at a 1:1 ratio to keep the gender distribution unchanged.] We compare the fairness of RegK, FLrSA, and DRFO under different reconstruction error levels, excluding CGL due to the lack of reconstruction confidence, which is a necessary condition for CGL. The resulting fairness comparison is presented in Figure 3, where we exclude the recommendation performance, as the observed drop is limited to approximately 2% (we also omit these results in the following experiments for the same reason). Based on the figure, we make the following observations: • As the reconstruction error increases, both DRFO and FLrSA encounter a decline in fairness. Nevertheless, DRFO's smoother curve and consistent superiority over RegK highlight its robustness to reconstruction errors.
The deterioration in DRFO's fairness can be explained by Equation (7): as the reconstruction error increases, DRFO selects a larger ambiguity set, intensifying the optimization challenges for achieving fairness. • Upon surpassing a 20% reconstruction error, FLrSA no longer surpasses RegK in fairness. Even with just a 10% reconstruction error, FLrSA fails to match Oracle in fairness. These results emphasize the necessity of addressing reconstruction errors to uphold the validity of the reconstruction method.

4.3.2 Case Study. We next investigate whether DRFO ensures fairness both for users with known sensitive attributes and for users with unknown sensitive attributes. To achieve this, we propose a metric that measures the absolute difference between a specific group's average predicted rating and the overall average rating, similar to MAD in Equation (1). Larger values of this metric indicate more unfairness. We compute this metric for four different groups: users with unknown $S = 1$, users with unknown $S = 0$, users with known $S = 1$, and users with known $S = 0$.

[Figure 5: Fairness results in scenarios where a portion of the users with unknown sensitive attributes does not allow reconstruction of their attributes, for 30% and 50% known sensitive attributes on ML-1M and Tenrec.]

We summarize the results of the compared methods in Figure 4, where 30% of users have known sensitive attributes. In Figure 4, the baselines reveal significant absolute differences between the group averages and the global average, particularly noticeable for users with unknown sensitive attributes, implying higher unfairness. These results highlight the baselines' inefficacy in achieving fair recommendations, especially for users with unknown attributes. In contrast, our proposed DRFO exhibits small unfairness for all groups. Furthermore, the unfairness of the groups with known and unknown sensitive attributes is comparable, indicating that our method can successfully overcome the challenge of limited sensitive attributes and provide fair recommendations both for users with known and with unknown sensitive attributes.

4.4 Fairness for Users Refusing to Reconstruct Sensitive Attributes (RQ3) To delve deeper into DRFO's ability to ensure fairness for users who refuse sensitive attribute reconstruction due to privacy concerns, we perform supplementary experiments. We explore scenarios where a fraction of users (30% or 50%) possess known sensitive attributes. Among the remaining users, we randomly designate a subset, varying from 0% to 100%, as individuals permitting sensitive attribute reconstruction, while the others opt out of it.
To accommodate non-reconstructable sensitive attributes, we adapt the baselines FLrSA and CGL. For FLrSA, the fairness constraints are omitted for users not permitting sensitive attribute reconstruction. In the case of CGL, sensitive attributes are randomly assigned to users disallowing reconstruction, mimicking its low-confidence strategy for reconstructed sensitive attributes. A summary of the fairness performance of all compared methods is presented in Figure 5. Figure 5 shows the baselines' failure to ensure fairness when some users do not grant sensitive attribute reconstruction. Conversely, DRFO maintains relatively fair results. This validates that DRFO can improve fairness without full sensitive attribute reconstruction, by employing a sufficiently large ambiguity set to encompass the potential attribute distributions and minimizing the worst-case unfairness within the set.

5 RELATED WORK 5.1 Fairness in Recommendation Fairness research in recommender systems follows two primary paradigms: individual fairness [39, 45], which treats similar individuals similarly, and group fairness [16, 43], which ensures equitable treatment of different groups. Our study focuses on user-side fairness in recommendation at the group level, using Demographic Parity [2, 5, 16] as a representative criterion. To promote fairness in recommendations, existing approaches can be broadly categorized into three types [23, 35]: fair pre-processing, fair learning, and fair adjustment. Fair pre-processing methods mitigate bias in the data by resampling [6] or adding antidote data [30]. Fair learning methods modify optimization targets to learn fair recommendation models, either through fairness criteria as regularizers [16, 41, 47] or via adversarial learning [39, 45]. Fair adjustment methods [22, 40] reorder the recommendation results to attain fairer outcomes. However, these methods typically require full access to sensitive attributes, which may not be feasible in real-world situations. In contrast, our work tackles the challenge of addressing unfairness when only limited sensitive attributes are available.

5.2 Fairness with Missing Sensitive Attributes Fair recommendation research has paid less attention to fairness when some sensitive attributes are missing, but progress has been made in other fairness-focused domains. Some methods are specifically designed to enhance Rawlsian Fairness [31] using DRO [11] or Adversarial Reweighting Learning [19], but they cannot be extended to other fairness metrics like DP. Another direction utilizes observable attributes (e.g., zip code) as proxies for sensitive attributes [9, 46], which differs from our method of reconstructing sensitive attributes; the effectiveness of these approaches relies on the strength of the correlation between the sensitive attribute and the observable non-sensitive one. Furthermore, some works enhance fairness by reconstructing missing sensitive attributes [3, 14] or sensitive representations [8] and leveraging this reconstructed information. However, they may have limited effectiveness due to reconstruction errors. In contrast, our DRO-based fair learning method ensures fairness in the presence of reconstruction errors. Within recommender systems, FairLISA [45] first studied the problem of missing sensitive attributes. It utilizes a discriminator trained with known attributes to remove sensitive information from the embeddings of users whose sensitive attributes are unavailable.
5 RELATED WORK 5.1 Fairness in Recommendation Fairness research in recommender systems employs two primary paradigms: individual fairness [39, 45], which treats similar individuals similarly, and group fairness [16, 43], which ensures equitable treatment of different groups. Our study focuses on user-side group fairness in recommendation, using Demographic Parity [2, 5, 16] as a representative criterion. To promote fairness in recommendations, existing approaches can be broadly categorized into three types [23, 35]: fair pre-processing, fair learning, and fair adjustment. Fair pre-processing methods mitigate bias in data by resampling [6] or adding antidote data [30]. Fair learning methods modify optimization targets to learn fair recommendation models, either through fairness criteria as regularizers [16, 41, 47] or adversarial learning [39, 45]. Fair adjustment methods [22, 40] reorder the recommendation results to attain fairer outcomes. However, these methods typically require full access to sensitive attributes, which may not be feasible in real-world situations. In contrast, our work tackles the challenge of addressing unfairness when only limited sensitive attributes are available. 5.2 Fairness with Missing Sensitive Attributes Fair recommendation research has paid less attention to fairness when some sensitive attributes are missing, but progress has been made in other fairness-focused domains. Some methods are specifically designed to enhance Rawlsian Fairness [31] using DRO [11] or Adversarial Reweighting Learning [19], but they cannot be extended to other fairness metrics such as DP. Another direction utilizes observable attributes (e.g., zip code) as proxies for sensitive attributes [9, 46], which differs from our approach of reconstructing sensitive attributes; the effectiveness of such proxies relies on the strength of the correlation between the sensitive attribute and the observable non-sensitive one. Furthermore, some works enhance fairness by reconstructing missing sensitive attributes [3, 14] or sensitive representations [8] and leveraging the reconstructed information; however, they may have limited effectiveness due to reconstruction errors. In contrast, our DRO-based fair learning method ensures fairness in the presence of reconstruction errors. Within recommender systems, FairLISA [45] first studies the problem of missing sensitive attributes. It utilizes a discriminator trained with known attributes to remove the sensitive information from the embeddings of users whose sensitive attributes are unavailable. Unlike FairLISA, our method employs the DRO paradigm to offer robust fair recommendations for users with missing sensitive attributes. Moreover, FairLISA and our work study fairness with limited sensitive attributes under different fairness metrics [35]: FairLISA focuses on removing sensitive information from user embeddings, whereas we focus on inter-group metric differences. 5.3 Distributionally Robust Optimization Distributionally Robust Optimization (DRO) [27, 29] is a well-known framework for handling uncertainty. It seeks a solution that performs well under all possible distributions within a defined set by optimizing for the worst-case expected loss [27, 29]. Previous studies have utilized DRO to investigate fairness. Hashimoto et al. [11] employ DRO to improve the performance of groups with high classification errors, and Wen et al. [37] use DRO to improve the worst-case user experience in recommendations. These approaches focus on Rawlsian Fairness [31], which cannot be extended to other fairness criteria (e.g., DP). RobFair [37] utilizes DRO to provide robust fair recommendations, overcoming user preference shift that may compromise the fairness achieved during the training phase. Unlike RobFair, our work investigates fair recommendation when some sensitive attributes are missing and uses DRO to handle the uncertainty of the missing sensitive attributes. Additionally, Wang et al. [34] introduce two methods that utilize DRO to learn fair classifiers with noisy sensitive attributes, which is most closely related to our work. In contrast, our work considers scenarios where sensitive attributes cannot be reconstructed and discusses how to estimate the upper bound of the TV distance when the assumption in Theorem 1 is slightly violated.", "additional_graph_info": { "graph": [ [ "Tianhao Shi", "Fuli Feng" ], [ "Tianhao Shi", "Jizhi Zhang" ], [ "Tianhao Shi", "Zhijian Xu" ], [ "Fuli Feng", "Jizhi Zhang" ], [ "Fuli Feng", "Weiran Huang" ], [ "Jizhi Zhang", "Keqin Bao" ], [ "Jizhi Zhang", "Wenjie Wang" ], [ "Jizhi Zhang", "Wentao Shi" ], [ "Zhijian Xu", "Yuxuan Bian" ], [ "Zhijian Xu", "Xiangyu Wen" ], [ "Zhijian Xu", "Ailing Zeng" ] ], "node_feat": { "Tianhao Shi": [ { "url": "http://arxiv.org/abs/2312.15599v1", "title": "Preliminary Study on Incremental Learning for Large Language Model-based Recommender Systems", "abstract": "Adapting Large Language Models for recommendation (LLM4Rec) has garnered\nsubstantial attention and demonstrated promising results. However, the\nchallenges of practically deploying LLM4Rec are largely unexplored, with the\nneed for incremental adaptation to evolving user preferences being a critical\nconcern. Nevertheless, the suitability of traditional incremental learning\nwithin LLM4Rec remains ambiguous, given the unique characteristics of LLMs. In\nthis study, we empirically evaluate the commonly used incremental learning\nstrategies (full retraining and fine-tuning) for LLM4Rec. Surprisingly, neither\napproach leads to evident improvements in LLM4Rec's performance.
Rather than\ndirectly dismissing the role of incremental learning, we ascribe this lack of\nanticipated performance improvement to the mismatch between the\nLLM4Rec architecture and incremental learning: LLM4Rec employs a single\nadaptation module for learning recommendation, hampering its ability to\nsimultaneously capture long-term and short-term user preferences in the\nincremental learning context. To validate this speculation, we develop a Long-\nand Short-term Adaptation-aware Tuning (LSAT) framework for LLM4Rec incremental\nlearning. Instead of relying on a single adaptation module, LSAT utilizes two\nadaptation modules to separately learn long-term and short-term user\npreferences. Empirical results demonstrate that LSAT could enhance performance,\nvalidating our speculation.", "authors": "Tianhao Shi, Yang Zhang, Zhijian Xu, Chong Chen, Fuli Feng, Xiangnan He, Qi Tian", "published": "2023-12-25", "updated": "2023-12-25", "primary_cat": "cs.IR", "cats": [ "cs.IR" ], "main_content": "INTRODUCTION The emergence of Large Language Models (LLMs) [11], equipped with extraordinary capabilities like human-level text comprehension and generation, has seen remarkable success in various fields like Robotics [9] and Computer Vision [38]. Inspired by this success, there is growing interest in using LLMs for recommendation in both academia [1, 2, 10, 37] and industry [22]. As the pre-training stage of LLMs lacks recommendation-specific knowledge, tuning LLMs with recommendation data plays a central role [1, 2]. In recent works [1, 2, 23, 45], adapting LLMs to recommendation tasks with LoRA [15], a well-known efficient fine-tuning method, has shown promising performance with relatively low training cost, underscoring the substantial potential of LLM4Rec in real-world applications. However, the challenges associated with the practical deployment of LLM4Rec remain uncharted territory, particularly considering the unique characteristics inherent in LLMs. When deploying a recommender system in real-world scenarios, one of the primary challenges is ensuring that the recommender models can adapt incrementally to evolving user preferences and environments [20, 43, 44]. This adaptation is critical because user feedback arrives in a stream, coupled with a continuous influx of new items into the system [21]. To achieve timely personalization, the recommendation models should be incrementally updated with the latest data [20]. For traditional recommendation models, the critical role of incremental learning and the associated challenges have been extensively researched [21, 43, 44]. However, for LLM4Rec, related issues lack adequate attention. The unique characteristics of LLM4Rec, such as its lightweight tuning paradigm and its high tuning costs [25], may introduce novel challenges or insights that require thorough examination. In this study, we first empirically examine how incremental learning impacts the performance of LLM4Rec. Considering the broad adoption of LLM4Rec models developed using LoRA [1, 2, 23, 45], and acknowledging LoRA's efficiency and effectiveness, we focus our research on this specific type of LLM4Rec. We examine two commonly used incremental learning strategies: 1) full retraining [21], which involves periodic retraining using complete historical data and new data, and 2) fine-tuning [29, 35], which updates the model based solely on new data.
Our empirical results show that both full retraining and fine-tuning have a minimal impact on the performance of LLM4Rec. LLM4Rec demonstrates remarkable robustness, effectively generalizing to new data and interactions even without recent updates; at the same time, the results suggest that incremental learning does not bring the expected performance improvements, so executing it appears, at first glance, to be unnecessary for LLM4Rec. This is somewhat surprising, as user preferences do change over time, and a recommender system should adapt to these changes [21, 39, 44]. We speculate that the lack of anticipated performance improvements may be attributed to a mismatch between the LoRA architecture and incremental learning: LoRA avoids training the entire model and instead tunes a low-rank adaptation module [8] with recommendation data, while a single LoRA module may be unable to simultaneously emphasize long-term and short-term user preferences under incremental learning. Specifically, in the case of full retraining, the LoRA module may only capture long-term preferences while lacking a specific focus on short-term preferences, due to the mixture of new data with a large amount of historical data [16]. For fine-tuning, the LoRA module may forget previous knowledge due to catastrophic forgetting [26], leading to a decline in performance. To validate our speculation, we develop a modified updating method called Long- and Short-term Adaptation-aware Tuning (LSAT). This method utilizes two LoRA modules to separately learn long-term and short-term user preferences and then integrates them to merge the two types of preferences. During updates, the short-term LoRA module is temporarily retrained using solely new data to focus on the latest evolving preferences, while the long-term LoRA module is trained on extensive historical data to capture users' enduring aggregated long-term preferences and is updated at a relatively slow frequency. Our practical comparison of LSAT, full retraining, and fine-tuning shows that LSAT improves performance, confirming that using multiple LoRA modules enhances LLM4Rec in incremental learning. Nevertheless, at present, LSAT only explores incremental learning from the perspective of LoRA capacity; to comprehensively understand and address the issue, further investigation in various directions is still necessary. The main contributions are summarized as follows: \u2022 New Problem: This work marks the inaugural investigation into incremental learning for LLM4Rec, furnishing practical insights for the real-world deployment of LLM4Rec. \u2022 New Finding: Our practical results emphasize LLM4Rec's strong ability to generalize to new data, yet underscore that common incremental update methods do not yield performance improvements for LLM4Rec. \u2022 Proposal: We propose that using separate LoRA modules to capture long-term and short-term preferences can enhance the performance of LLM4Rec in incremental learning, offering valuable insights from the perspective of the capacity of the LoRA module. 2 PRELIMINARIES In this study, we examine the issues of incremental learning for LLM4Rec. To achieve this, we conduct experimental analyses to investigate the influence of incremental learning on a representative LLM4Rec model called TALLRec [2].
In the following, we first elaborate on the TALLRec model, followed by a detailed explanation of incremental learning in recommender systems. 2.1 TALLRec Considering the lack of recommendation-specific training in the pre-training phase of LLMs, tuning LLMs plays a crucial role in aligning them with recommendation. TALLRec is an efficient and effective approach that employs instruction tuning [28] on LLMs with recommendation data, using the LoRA [15] technique. TALLRec first transforms recommendation data into instruction data by representing items with their titles, utilizing the titles of historically interacted items to denote user preferences, and providing a natural language description of the recommendation task. The generated instruction response is either "Yes" or "No", indicating the user's preference for the target item. Through tuning on these constructed recommendation instructions, the LLM becomes proficient in inferring current preferences from users' historical interactions and the associated textual information (i.e., titles). Additionally, due to the massive scale of LLM parameters, full tuning of an LLM is impractical; the prevalent practice is parameter-efficient fine-tuning (PEFT) [8]. TALLRec adopts one such powerful and fast-converging PEFT method, LoRA [15]. LoRA freezes the pre-trained model parameters and integrates trainable rank-decomposition weight matrices into the existing weights of the LLM's layers, enabling lightweight tuning. For a layer with a pre-trained weight matrix $W \in \mathbb{R}^{d \times k}$, LoRA introduces a pair of rank-decomposed weight matrices onto $W$ as $W + AB$, where $A \in \mathbb{R}^{d \times r}$ and $B \in \mathbb{R}^{r \times k}$ are the added learnable LoRA weight matrices. Here, $d$, $k$, and $r$ denote matrix dimension sizes with $r \ll \min(d, k)$, so the total number of parameters in $A$ and $B$ is significantly smaller than that of $W$. Notably, $W + AB$ serves as the collective parameter at both training and inference, while only $A$ and $B$ are updated during training. Utilizing LoRA and instruction tuning, TALLRec can efficiently and effectively align LLMs with recommendation, yielding promising results.
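For reference, here is a minimal PyTorch-style sketch of the LoRA parameterization described above (the class name is ours; real LoRA implementations typically add a scaling factor and dropout, omitted here):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA sketch: freeze the pre-trained W (d x k) and learn a
    low-rank update AB with A (d x r), B (r x k), and r << min(d, k)."""
    def __init__(self, weight, r=8):
        super().__init__()
        d, k = weight.shape
        self.W = nn.Parameter(weight, requires_grad=False)  # frozen pre-trained weight
        self.A = nn.Parameter(torch.randn(d, r) * 0.01)     # trainable
        self.B = nn.Parameter(torch.zeros(r, k))            # trainable; AB = 0 at start
    def forward(self, x):                                   # x: (batch, k)
        return x @ (self.W + self.A @ self.B).T             # uses W + AB collectively
```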
2.2 Incremental Learning in Recommendation In practical recommender systems, user feedback data are collected continually. To keep the predictive model up to date with the latest user preferences, a common approach is incremental learning. Following [21, 44], we denote the data stream as $\{\mathcal{D}_1, \mathcal{D}_2, \ldots, \mathcal{D}_t, \ldots\}$, with $\mathcal{D}_t$ representing the data acquired at time period t. [Figure 1: An illustration of incremental learning in recommender systems: at each period, full retraining retrains on D_1..D_t, while fine-tuning only tunes on the newly collected D_t before serving the next period.] The time period can vary, spanning a day, a week, or until a predetermined number of interactions is amassed, depending on system requirements and implementation capabilities. After each collection of $\mathcal{D}_t$, the model is promptly updated; the updated model then serves the upcoming data $\mathcal{D}_{t+1}$ until the next update. Two representative and widely adopted approaches for incremental learning are full retraining and fine-tuning. At each time period t, full retraining retrains on the complete historical data together with the newly collected data, i.e., on $\{\mathcal{D}_1, \mathcal{D}_2, \ldots, \mathcal{D}_t\}$. This approach ensures high model fidelity by incorporating all available interactions, while incurring high training costs [44]. Since both historical and new data are considered, it has the potential to capture both long-term and short-term user preferences; however, its ability to learn short-term preferences might be limited, as it lacks an explicit focus on the most recent data [44]. At each time period t, fine-tuning tunes the previous model solely on the new interactions $\mathcal{D}_t$, making it memory- and time-efficient. However, it may suffer from overfitting [44] and forgetting [26], thereby overlooking crucial long-term preference signals in the historical data. Figure 1 illustrates the detailed update process of full retraining and fine-tuning along time periods. 3 EMPIRICAL EXPLORATIONS In this section, we conduct experiments to answer the following research question: How do the commonly employed incremental learning methods (full retraining and fine-tuning) impact TALLRec? 3.1 Experimental Settings 3.1.1 Datasets. We conduct experiments with two datasets: MovieLens 1M [12] and Amazon Book Reviews [27]. MovieLens 1M (ML-1M) is a popular dataset released by GroupLens Research that encompasses user ratings for movies and detailed textual descriptions such as 'title' and 'director'. Amazon Book Reviews (Amazon-Book) is a comprehensive collection of book reviews and associated metadata from the Amazon website, providing rich book metadata including titles, reviews, and user ratings. To study the influence of incremental learning, we split each dataset chronologically into periods (i.e., $\mathcal{D}_1, \mathcal{D}_2, \ldots, \mathcal{D}_T$) based on the timestamps of interactions. By leveraging instruction tuning, LLMs can swiftly grasp the recommendation task and achieve good performance with significantly fewer samples than traditional models. Therefore, akin to TALLRec, where the authors ultimately utilize 10,000 samples for experiments, we limit our dataset to approximately 10,000 samples for each period. Specifically, for MovieLens 1M, we retain data spanning from December 1, 2000, to February 28, 2003, and divide it into 20 periods, each containing 10,000 samples. Similarly, for the Amazon-Book dataset, we preserve the data concentrated between March 1, 2014, and April 30, 2018; we sample 2.5% of the users, resulting in a total of 225,280 samples, and divide this duration into 20 periods, each spanning two months and comprising approximately 10,000 samples. In both ML-1M and Amazon-Book, ratings range from 1 to 5. Following [46], interactions with ratings >= 3 are treated as positive samples, and the others as negative samples. The statistics of the processed datasets are presented in Table 1. Table 1: Statistics of the evaluation datasets (Dataset / # Users / # Items / # Instances / Density): ML-1M / 1,813 / 3,503 / 200,000 / 3.1491%; Amazon-Book / 14,924 / 117,030 / 225,280 / 0.0129%.
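A sketch of the chronological splitting described above (assuming the interactions sit in a pandas DataFrame with a 'timestamp' column; names are ours):

```python
import pandas as pd  # assumption: interactions DataFrame with a 'timestamp' column

def split_into_periods(interactions, n_periods=20, samples_per_period=10_000):
    """Chronological split into D1..DT: sort by time, then take fixed-size
    consecutive chunks (roughly 10,000 samples per period, as above)."""
    ordered = interactions.sort_values("timestamp").reset_index(drop=True)
    return [ordered.iloc[i * samples_per_period : (i + 1) * samples_per_period]
            for i in range(n_periods)]
```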
3.1.2 Models. We assess the effectiveness of the conventional incremental learning strategies, namely full retraining and fine-tuning, for the LLM4Rec method TALLRec. Additionally, we analyze the impact of these two update methods on four representative traditional recommendation models for comparative reference: \u2022 Matrix Factorization (MF) [18]: a representative latent factor-based collaborative filtering method. \u2022 GRU4Rec [14]: an RNN-based sequential recommender that utilizes GRUs for encoding historical sequences. \u2022 Caser [31]: a recommender that uses a CNN architecture to treat recent item sequences as 'images', learning sequential patterns via horizontal and vertical convolution filters. \u2022 SASRec [17]: a model that employs the self-attention mechanism to grasp sequential patterns and anticipate subsequent items. 3.1.3 Implementation Details. We utilize Python 3.10 and PyTorch 1.13 for implementing all methods. For TALLRec, we conduct experiments using the authors' publicly available code (https://github.com/SAI990323/TALLRec), based on LLaMA-7B [32]. We optimize TALLRec with a batch size of 128 and a learning rate of 1e-3. Regarding the traditional sequential recommendation models (GRU4Rec, Caser, SASRec), we also adhere to TALLRec's settings, assigning each feedback an embedding and concatenating it with the item embedding; this enables the direct integration of user feedback information at the model input for user interest modeling, as in TALLRec. For all the traditional models, we optimize the model parameters using the Adam optimizer with a default learning rate of 1e-3, employing MSE loss as the optimization objective. [Figure 2: Performance of TALLRec and the traditional recommendation models on D_20 with different delayed update intervals, under full retraining and fine-tuning on ML-1M and Amazon-Book (x-axis: update delay; y-axis: test AUC). 'Update Delay' denotes the time interval between performance evaluation and the corresponding model update period t, i.e., computed by (20 - t).] We apply a weight decay of 1e-5 and use a batch size of 256 for them. The embedding size for the sequential recommendation methods is set to 64, and the embedding size for MF is set to 16 (tuned result). 3.1.4 Evaluation Metric. In our experiments, we partition our dataset into $\{\mathcal{D}_1, \mathcal{D}_2, \ldots, \mathcal{D}_T\}$. For each $\mathcal{D}_t$, we further utilize the earliest 90% as the newly collected training data and the remaining 10% as validation data for early stopping. After updates, the model is tested on the entire data of a designated future time period (e.g., $\mathcal{D}_{t+k}$, $k > 0$).
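The per-period split and the delayed-update evaluation can be sketched as follows (the `model.predict` interface and the 'label' field are placeholders of ours, not the authors' API):

```python
from sklearn.metrics import roc_auc_score  # assumes binary labels and real-valued scores

def period_train_val(period, val_ratio=0.1):
    """Earliest 90% of a period for training, latest 10% for early stopping."""
    cut = int(len(period) * (1 - val_ratio))
    return period[:cut], period[cut:]

def auc_on_period(model, period):
    """AUC of one model on one future test period (Section 3.1.4)."""
    labels = [ex["label"] for ex in period]        # placeholder field name
    scores = [model.predict(ex) for ex in period]  # placeholder interface
    return roc_auc_score(labels, scores)
```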
Regarding performance metrics, our experimental configuration closely resembles an explicit feedback setting, as both involve predicting ratings for a given item. Hence, following the settings of TALLRec [2], we adopt the widely used evaluation metric for explicit recommendation, the Area Under the ROC Curve (AUC), as the benchmark metric for our task. 3.2 Impact of Incremental Learning on LLM4Rec In our experiments, the model undergoes continuous updates until the 19th period. We test each period's model on $\mathcal{D}_{20}$, compare their performance, and plot performance curves against update delay in Figure 2. Here, 'update delay' denotes the time interval between performance evaluation and the corresponding model update period t, i.e., computed by (20 - t). From the figure, we draw the following observations: (1) For the traditional recommendation models that work properly (AUC > 0.5), timely updates generally lead to performance improvements, particularly noticeable with full retraining. Fine-tuning exhibits a significant impact in the early stages of updates (i.e., for high values of (20 - t)), but its effectiveness diminishes later, possibly due to forgetting issues. On the Amazon dataset, the traditional sequential recommendation models perform close to random recommendation, limiting the influence of updates on their performance; we hypothesize that these models demand higher data quality, and the extreme sparsity and small scale of the data impede their effective learning. (2) In contrast to traditional recommendation models, the performance of LLM4Rec is relatively unaffected by both full retraining and fine-tuning. Notably, LLM4Rec demonstrates remarkable robustness, generalizing well to new interaction data even without recent data updates. [Figure 3: Performance comparison between TALLRec and the traditional models on warm items and cold items (AUC on ML-1M and Amazon-Book). All models are trained on D_1 and tested on D_2-D_20. Note that AUC = 0.5 indicates performance equivalent to random predictions.] However, it is worth mentioning that timely (i.e., less delayed) incremental learning does not bring performance improvements for LLM4Rec. We proceed with a further analysis of the impact of incremental learning on LLM4Rec. Previous studies [21] have underscored two key aspects for enhancing recommendation performance in traditional models through incremental learning: (1) timely incorporation of new items and users (see footnote 2), and (2) adaptation to dynamically changing user preferences. Regarding the first aspect, Figure 3 illustrates the performance comparison between TALLRec and the traditional models on warm and cold items. We specifically consider a static scenario of minimal training data, where models are trained solely on $\mathcal{D}_1$ and tested on $\mathcal{D}_2$-$\mathcal{D}_{20}$. Even under such conditions, LLM4Rec demonstrates robust predictive capabilities for cold items, outperforming the traditional recommendation models that heavily rely on embeddings, which exhibit performance close to random guessing (AUC = 0.5) for cold items. This indicates that traditional recommendation models without prompt updates may experience performance deterioration due to an increased number of cold items.
In contrast, LLM4Rec's proficiency in general language knowledge allows accurate recommendations for cold items (see footnote 3). Hence, from the perspective of cold items, incremental learning with the latest data has a smaller impact on LLM4Rec than on traditional models. (Footnote 2: In sequential recommendation models like TALLRec, user preferences, expressed as historical item sequences, can incorporate interactions of new users during inference without updating; thus, delayed updates only increase the number of cold items.) (Footnote 3: Note that in our experiments, the LLM has already seen the text (book/movie titles) during its pre-training. To handle items not encountered during the LLM's pre-training, a potential direction for future research is incorporating additional knowledge to provide contextual information for LLM4Rec.) [Figure 4: Heatmap illustrating the performance of TALLRec on D_16-D_20 with different update delays, under full retraining and fine-tuning on ML-1M and Amazon-Book. The y-axis represents the AUC of the testing period, while the x-axis signifies the update delay.] We further explore whether incremental learning can achieve overall performance improvement by adapting to the latest user preferences. We construct a heatmap in Figure 4, depicting the relationship between the predictive performance of TALLRec on the test periods $\mathcal{D}_{16}$-$\mathcal{D}_{20}$ and the update delay corresponding to each target testing period. For example, 'update delay = 1' denotes the result of the 15th-period model on $\mathcal{D}_{16}$, and likewise that of the 16th-period model on $\mathcal{D}_{17}$. Our findings again reveal that despite integrating the most recent data, the two updating methods do not lead to performance improvements for TALLRec after updating (the leftmost column does not always evidently surpass the column immediately to its right). (Footnote 4: We use early stopping with the validation set, discarding models with declining performance. The uniformly colored rows in the fine-tuning heatmap indicate reduced performance after updating, due to forgetting of old knowledge.) We posit that this arises from the inability of a single LoRA to simultaneously capture long-term and short-term user preferences. In the case of full retraining, although both historical and new data are used, the LoRA may only focus on the long-term stable preferences across the entire historical dataset, due to the larger quantity of historical data [16]. In the case of fine-tuning, the LoRA may only focus on the short-term preferences in the new data while forgetting previous knowledge [26]. Hence, both approaches may fail to achieve performance improvements from updates with the latest data. 4 LSAT We have observed that incremental learning through full retraining and fine-tuning does not improve the performance of LLM4Rec. Our hypothesis is that a single LoRA module may
struggle to simultaneously capture both long-term and short-term user preferences. Considering the divergence between long-term and short-term user preferences, it may be necessary to employ separate LoRA modules to capture them individually. This insight arises from the successful use of expandable task-specific model decomposition in incremental learning [34] and from the advances in using multiple LoRA modules to capture distinct domain knowledge for LLMs [6, 40, 42]. To validate this speculation, we develop a new method called LSAT, employing two dedicated LoRA modules: one for capturing long-term user preferences and another for capturing short-term user preferences. During an update, the short-term LoRA module is temporarily introduced and trained on the new data, while the long-term LoRA module remains fixed once trained on sufficient previous data. In the inference phase, the long-term LoRA module collaborates with the current short-term LoRA module to provide personalized recommendations. Next, we elaborate on the building of the two LoRA modules and the inference at each update. Short-term LoRA. This LoRA module aims at capturing short-term user preferences. Toward this goal, at each period t we train a new LoRA module parameterized by $\Theta_t$ using only the newly collected data $\mathcal{D}_t$. The learning objective is formulated as $\min_{\Theta_t} L(\mathcal{D}_t; \Phi, \Theta_t)$, (1) where $\Phi$ denotes the frozen pre-trained parameters of the LLM, and $L(\mathcal{D}_t; \Phi, \Theta_t)$ denotes the recommendation loss computed on $\mathcal{D}_t$. Notably, the new LoRA is trained from scratch rather than fine-tuned from the one used in the last period, considering that previously learned preferences may disturb the learning of new preferences, as evidenced by the relatively poor performance of fine-tuning. Long-term LoRA. This LoRA aims to capture long-term preferences and is obtained by fitting historical data. In our previous experiments, we found that LLM4Rec with full retraining maintains robust recommendation performance even when its updates are delayed after a certain training period m. That means that once the long-term LoRA is sufficiently trained, it can be set to a fixed state (or updated at a slower pace). To reduce training costs, we default to fixing it after training with ample historical data. Let $\mathcal{H} = \{\mathcal{D}_1, \mathcal{D}_2, \ldots, \mathcal{D}_m\}$ denote the ample historical data. Formally, we use a fixed long-term LoRA obtained by optimizing $\min_{\Theta_h} L(\mathcal{H}; \Phi, \Theta_h)$, (2) where $\Theta_h$ represents the LoRA parameters responsible for capturing long-term preferences. Notably, after the m-th period, the long-term LoRA is fixed, making the overall training cost of LSAT close to that of fine-tuning; before the m-th period, it is necessary to retrain the long-term LoRA with the historical data of the considered period. Inference. During the serving stage, integrating the long- and short-term preferences from the corresponding LoRA modules is crucial to elevate the quality of recommendations. We explore two merging methods. 1) Output ensemble: this approach directly averages the predictions of the two LoRA modules. For a given sample x at the (t+1)-th period, the final prediction is $\frac{1}{2}[f(x; \Phi, \Theta_h) + f(x; \Phi, \Theta_t)]$, (3) where $f(x; \Phi, \Theta_t)$ denotes the prediction of LLM4Rec with the t-th period short-term LoRA, and $f(x; \Phi, \Theta_h)$ denotes the prediction with the long-term LoRA. 2) LoRA fusion: recognizing that the output ensemble requires two LLM inferences, we explore merging the two LoRA modules into a unified LoRA for one-pass inference. Formally, with merged LoRA parameters denoted as $\bar{\Theta}$, the final prediction for a sample x at the (t+1)-th period is $f(x; \Phi, \bar{\Theta})$, (4) where $\bar{\Theta}$ is derived by fusing the parameters of the long-term LoRA ($\Theta_h$) and the short-term LoRA at the t-th period ($\Theta_t$). To obtain $\bar{\Theta}$, we consider a widely adopted fusion strategy, task arithmetic [42]: the merged parameter is computed as $\bar{\Theta} = \lambda \Theta_h + (1 - \lambda) \Theta_t$, where $\lambda$ is a hyper-parameter chosen on the validation set.
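Putting the two objectives and the two merging strategies together, a schematic sketch of an LSAT update (all function names and the parameter-dictionary format are our assumptions, not the authors' code):

```python
def lsat_update(llm, train_lora, periods, t, m=10, lam=0.5):
    """Schematic LSAT update: a long-term LoRA fitted once on H = D1..Dm
    (Eq. (2)) plus a short-term LoRA retrained from scratch on Dt (Eq. (1));
    predictions are merged by output ensemble (Eq. (3)) or by task arithmetic."""
    theta_h = train_lora(llm, sum(periods[:m], []))  # long-term, then kept fixed
    theta_t = train_lora(llm, periods[t])            # short-term, from scratch

    def predict_ensemble(x):                         # Eq. (3): average the outputs
        return 0.5 * (llm(x, theta_h) + llm(x, theta_t))

    theta_bar = {name: lam * theta_h[name] + (1 - lam) * theta_t[name]
                 for name in theta_h}                # task arithmetic: one-pass LoRA
    return predict_ensemble, theta_bar
```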
5 EXPERIMENTS In this section, we conduct experiments to answer the following research questions: RQ1: How does the performance of LSAT compare with full retraining and fine-tuning? RQ2: Where do the improvements of LSAT come from? 5.1 Experimental Settings We compare the performance of LSAT with full retraining and fine-tuning on ML-1M and Amazon-Book; a detailed introduction to the datasets and the TALLRec model can be found in Section 3.1. To ascertain which approach yields superior results after updates, following [44], the model is promptly updated with the data $\mathcal{D}_t$ received at period t to provide predictions for period t+1. For LSAT, the long-term LoRA module requires a relatively substantial historical dataset $\mathcal{H} = \{\mathcal{D}_1, \mathcal{D}_2, \ldots, \mathcal{D}_m\}$; we set the value of m to 10 or 15 in the experiments. For LSAT, we study the two model merging methods defined in the Inference part of Section 4: ensemble (LSAT-EN) and task arithmetic (LSAT-TA). For task arithmetic, we explore the coefficient $\lambda$ within the search space {0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1}. 5.2 Experimental Results 5.2.1 Overall Results (RQ1). Figure 5 illustrates the overall performance comparison between full retraining, fine-tuning, and LSAT. From the figure, we have the following observations. \u2022 LSAT-EN and LSAT-TA perform better than both full retraining and fine-tuning on the two datasets. These results show that leveraging two separate LoRA modules for learning long-term and short-term interests, respectively, can lead to performance improvements over using a single one, suggesting that better modeling of long-term and short-term interests is achieved by employing two adapters and validating our speculation. Notably, the improvement of LSAT-TA is relatively modest.
This suggests that directly merging the adapter parameters into a single adapter may be less effective in aligning the interests of the long-term and short-term LoRA modules, calling for further exploration of parameter-level LoRA merging methods specifically designed for incremental learning in recommendation. [Figure 5: Performance comparison (AUC per test period) between full retraining, fine-tuning, and LSAT on ML-1M and Amazon-Book. All models are updated promptly with the newly collected data D_t and serve the next test period D_{t+1}; LSAT (m) means it utilizes H = {D_1, D_2, . . . , D_m} to train the long-term LoRA. (a) On D_11-D_20, the mean AUC of Full Retraining / Fine-tuning / LSAT-TA (10) / LSAT-EN (10) is 0.7656 / 0.7594 / 0.7667 / 0.7720 on ML-1M and 0.7790 / 0.7590 / 0.7813 / 0.7823 on Amazon-Book. (b) On D_16-D_20, the mean AUC of Full Retraining / Fine-tuning / LSAT-TA (15) / LSAT-EN (15) is 0.7713 / 0.7641 / 0.7740 / 0.7768 on ML-1M and 0.7802 / 0.7580 / 0.7836 / 0.7840 on Amazon-Book.] \u2022 When comparing the effectiveness of LSAT on the two datasets, LSAT appears less effective in bringing performance improvements on Amazon-Book. We attribute this to the characteristics of the Amazon-Book dataset: it is more sparse, and each period corresponds to a longer time span, potentially indicating more drastic changes in user preferences. Coupled with the poor performance of fine-tuning, which must forget long-term preferences before learning new ones, it is more likely that the long-term and short-term preferences are not complementary and may even conflict, making it harder to achieve performance improvements by merging them.
[Figure 6: Results (AUC per test period on D_11-D_20, ML-1M and Amazon-Book) of full retraining, fine-tuning, short-term LoRA, long-term LoRA, and LSAT-EN; all models are updated promptly with the newly collected data D_t and serve the next test period D_{t+1}. (a) Full retraining / fine-tuning / short-term LoRA / LSAT-EN (10), where short-term LoRA means training a LoRA on the latest data and inferring with it; mean AUC: 0.7656 / 0.7594 / 0.7638 / 0.7720 on ML-1M and 0.7790 / 0.7590 / 0.7806 / 0.7823 on Amazon-Book. (b) Full retraining / LSAT-EN (Full) / long-term LoRA (10) / LSAT-EN (10), where long-term LoRA (m) means training a LoRA on H = {D_1, D_2, . . . , D_m} and inferring with it, and LSAT-EN (Full) means the long-term LoRA module is retrained during each period with all historical data; mean AUC: 0.7656 / 0.7693 / 0.7620 / 0.7720 on ML-1M and 0.7790 / 0.7820 / 0.7804 / 0.7823 on Amazon-Book.] 5.2.2 In-depth Analyses (RQ2). In this section, we delve into comprehensive experimental analyses of the two integral components of LSAT: the short-term LoRA and the long-term LoRA. Figure 6 shows the results of full retraining, fine-tuning, short-term LoRA, long-term LoRA, and LSAT-EN. From the figures, we draw the following observations: \u2022 Using only a single short-term LoRA or long-term LoRA leads to a performance decrease, highlighting the importance of both LoRA modules for LSAT. \u2022 The short-term LoRA outperforms fine-tuning on both datasets, suggesting that learning new user preferences by fitting the new data with a LoRA trained from scratch is more effective than fine-tuning from the previous stage (which stores historical preferences). This supports the rationale of using different LoRAs to separately capture long-term and short-term interests. \u2022 The long-term LoRA and full retraining exhibit comparable performance. The variant LSAT-EN (Full), which continually updates the long-term LoRA, also brings no additional improvement over LSAT-EN (10), which uses a fixed long-term LoRA. These results indicate that the long-term LoRA does not need continuous updates once it has been adequately trained, as long-term preferences tend to remain relatively stable. 6 RELATED WORKS 6.1 LLM-based Recommendation The application of Large Language Models in recommender systems can be broadly classified by their roles in the recommendation pipeline: feature engineering [3, 4], feature encoding [13, 41], and scoring/ranking functions [1, 2]. Feature engineering [3, 4] employs LLMs to extract additional textual features from raw textual data and integrates them into traditional recommender models. In feature encoding [13, 41], LLMs serve as auxiliary text feature extractors to enhance the representations of users and items. However, both of these approaches still rely on traditional models, limiting the exploitation of LLMs' generative capabilities. For scoring/ranking functions [1, 2, 5, 33], a viable approach leverages in-context learning, which enables the LLM to acquire proficiency in the recommendation task through autoregression [5, 33]; such methods require providing the LLM with appropriate instructions and examples for guided learning. This approach faces limitations due to the restricted input size of LLMs and the scarcity of recommendation-specific knowledge [2, 24].
To overcome these limitations, researchers have introduced techniques like TALLRec [2] and BIGRec [1], which include an instruction-tuning phase integrating recommendation data to enhance LLMs' recommendation capabilities using the LoRA technique. Our research investigates the impact of incremental learning on this last category of LLM4Rec. 6.2 Incremental Learning in Recommendation In recommender systems, the continual emergence of new items [19] and persistent shifts in user preferences [47] necessitate frequent updates to maintain system effectiveness. Extensive studies have scrutinized the impact of timely data updates on traditional recommendation models. For instance, Ekko [30], a large-scale distributed recommender system, demonstrated a modest improvement in online model performance through faster model synchronization. Additionally, quantitative experiments [20] underscored that delayed model updates detrimentally affect performance by increasing the number of cold users and items, leading to an overall decline in model efficacy. These investigations underscore the vital role of incremental learning in recommender systems; however, there is a gap in exploring this aspect for LLM4Rec, and our study investigates these unexplored issues. Various methods have been proposed for model retraining in recommender systems. (1) Fine-tuning [29, 35]: updating the model based solely on new interactions, which offers efficiency but faces the challenge of potential forgetting. (2) Full retraining: using the entire dataset for model updating, achieving high accuracy at the cost of considerable training resources. (3) Sample-based retraining [7, 36]: selectively sampling a portion of historical interactions and integrating them with new interactions. (4) Meta-learning-based methods [39, 44]: striving to preserve long-term preferences while facilitating rapid adaptation to new data via meta-learning. In our research, we empirically investigate the impact of incremental learning on LLM4Rec with two representative techniques, fine-tuning and full retraining. Furthermore, we introduce an approach leveraging distinct LoRA modules to concurrently capture long-term and short-term preferences, offering insights for the incremental learning of LLM4Rec from the perspective of the capacity of the LoRA module." } ], "Fuli Feng": [ { "url": "http://arxiv.org/abs/2106.03046v1", "title": "Empowering Language Understanding with Counterfactual Reasoning", "abstract": "Present language understanding methods have demonstrated extraordinary\nability of recognizing patterns in texts via machine learning. However,\nexisting methods indiscriminately use the recognized patterns in the testing\nphase that is inherently different from us humans who have counterfactual\nthinking, e.g., to scrutinize for the hard testing samples. Inspired by this,\nwe propose a Counterfactual Reasoning Model, which mimics the counterfactual\nthinking by learning from few counterfactual samples. In particular, we devise\na generation module to generate representative counterfactual samples for each\nfactual sample, and a retrospective module to retrospect the model prediction\nby comparing the counterfactual and factual samples.
Extensive experiments on\nsentiment analysis (SA) and natural language inference (NLI) validate the\neffectiveness of our method.", "authors": "Fuli Feng, Jizhi Zhang, Xiangnan He, Hanwang Zhang, Tat-Seng Chua", "published": "2021-06-06", "updated": "2021-06-06", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.AI" ], "main_content": "Introduction Language understanding (Ke et al., 2020) is a central theme of artificial intelligence (Chomsky, 2002), which empowers a wide spectrum of applications such as sentiment evaluation (Feldman, 2013) and commonsense inference (Bowman et al., 2015). The models are trained on labeled data to recognize the textual patterns closely correlated with different labels. Owing to the extraordinary representational capacity of deep neural networks, the models can recognize such patterns well and make predictions accordingly (Devlin et al., 2019). However, the cognitive ability of these data-driven models is still far from that of human beings, because they lack counterfactual thinking (Pearl, 2019). Counterfactual thinking is a high-level cognitive ability beyond pattern recognition (Pearl, 2019). In addition to observing the patterns within factual samples, counterfactual thinking calls for comparing the fact with imaginations so as to make better decisions. For instance, given a factual sample "What do lawyers do when they die? Lie still.", the intuitive evaluation of its sentiment based on the textual patterns will recognize "Lie still" as an objective description of body posture, which is neutral. By scrutinizing that the "still" could be intentionally postposed, we can imagine a counterfactual sample "What do lawyers do when they die? Still lie." and uncover the negative sarcastic pun, yielding a more accurate sentiment judgment. Recent work (Kaushik et al., 2019; Zeng et al., 2020) shows that incorporating counterfactual samples into model training improves generalization ability. However, these methods follow the standard machine learning paradigm, which uses the same procedure (e.g., a forward propagation) to make predictions in the testing phase. That is, they make decisions for testing samples according to their relative positions to the model decision boundary. This indiscriminate procedure focuses on the textual patterns occurring in the testing sample and treats all testing samples equally, which easily fails on hard samples (cf. Figure 1). On the contrary, humans can discriminate hard samples and ponder the decision with a rational system (Daniel, 2017), which imagines counterfactuals and adjusts the decision. The key to bridging this gap lies in imitating the counterfactual thinking ability of humans, i.e., learning a decision making procedure that serves the testing phase. That is a procedure of: 1) constructing counterfactual samples for a target factual sample; 2) calling the trained language understanding model to make predictions for the counterfactual samples; and 3) comparing the counterfactual and factual samples to retrospect the model prediction. However, the procedure is non-trivial to achieve for two reasons: 1) the space of counterfactual samples is huge, since any variant of the target factual sample can be a counterfactual sample; it is thus challenging to search for suitable counterfactual samples that can facilitate decision making. 2) The mechanism of how we retrospect decisions is still unclear, making it hard to imitate.
Towards this target, we propose a Counterfactual Reasoning Model (CRM), which is a two-phase procedure consisting of a generation module and a retrospection module. In particular, given a factual sample in the testing phase, the generation module constructs representative counterfactual samples by imagining what the content would be if the label of the sample were y. To imitate the unknown retrospection mechanism of humans, we build the retrospection module as a carefully designed deep neural network that separately compares the latent representations and the predictions of the factual and counterfactual samples. The proposed CRM forms a general paradigm that can be applied to most existing language understanding models without constraints on the format of the language understanding task. We select two language understanding tasks, SA and NLI, and test CRM on three representative models for each task. Extensive experiments on benchmark datasets validate the effectiveness of CRM, which achieves performance gains ranging from 5.1% to 15.6%. The main contributions are as follows: \u2022 We propose the Counterfactual Reasoning Model to enlighten language understanding models with counterfactual thinking. \u2022 We devise a generation module and a retrospection module that are task and model agnostic. \u2022 We conduct extensive experiments, which validate the rationality and effectiveness of the proposed method. 2 Pilot Study Decisions are usually accompanied by confidence, a feeling of being wrong or right (Boldt et al., 2019). From the perspective of model confidence, we investigate the performance of language understanding models across different testing samples. We estimate the model confidence on a sample as the widely used Maximum Class Probability (MCP) (Corbière et al., 2019), i.e., the probability of the predicted class. A lower value of MCP means less confidence and a "harder" sample. According to the value of MCP, we rank the testing samples in ascending order and split them into ten groups, i.e., confidence levels from 1 to 10. [Figure 1: Prediction performance of the language understanding models over testing samples at different confidence levels; panels: (a) sentiment analysis and (b) natural language inference.] Figure 1 shows the performance of representative models over samples at different model confidence levels on the SA and NLI tasks (see Section 4.1 for model and dataset descriptions). From the figures, we observe a clear increasing trend of classification accuracy as the confidence level increases from 1 to 10 in all cases. In other words, these models fail to predict accurately for the hard samples. It is thus essential to enhance the standard inference with a more precise decision making procedure.
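The pilot study's grouping can be sketched as follows (a minimal NumPy version with our own naming):

```python
import numpy as np

def accuracy_by_confidence(probs, labels, n_groups=10):
    """MCP = probability of the predicted class; rank test samples by MCP
    (ascending) and report accuracy per equal-size confidence level."""
    mcp = probs.max(axis=1)
    order = np.argsort(mcp)                    # level 1 = least confident (hard)
    correct = (probs.argmax(axis=1) == labels)
    return [correct[idx].mean() for idx in np.array_split(order, n_groups)]
```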
3 Methodology In this section, we first formulate the task of learning a decision making procedure for the testing phase (Section 3.1), then introduce the proposed CRM (Section 3.2) and the paradigm of building language understanding solutions with CRM (Section 3.3). 3.1 Problem Formulation As discussed in previous work (Wu et al., 2020; Li et al., 2020, 2019), language understanding tasks can be abstracted as a classification problem where the input is a text and the target is to make a decision across a set of candidates of interest. We follow the problem setting with consideration of counterfactual samples (Kaushik et al., 2019; Liang et al., 2020), where the training data are twofold: 1) factual samples $\mathcal{T} = \{(x, y)\}$, where $y \in [1, C]$ denotes the class, i.e., the target decision, of the text, and $x \in \mathbb{R}^D$ is the latent representation of the text, which encodes the textual contents (Footnote 1: The input is indeed plain text, which is projected to a latent representation by an encoder, e.g., a Transformer (Devlin et al., 2019), in cutting-edge solutions. We omit the encoder for brevity since we focus on decision making.); and 2) counterfactual samples $\mathcal{T}^* = \{(x^*_c, c) \mid (x, y) \in \mathcal{T}, c \in [1, C], c \neq y\}$, where $(x^*_c, c)$ is a counterfactual sample in class c corresponding to the factual sample $(x, y)$ (Footnote 2: Given the labeled factual sample, counterfactual samples can be constructed either manually (Kaushik et al., 2019) or automatically (Chen et al., 2020) by conducting minimum changes on x to swap its label from y to c.). We assume that a classification model (e.g., BERT (Devlin et al., 2019)) has been trained over the labeled data. Formally, $\hat{\theta} = \min_{\theta} \sum_{(x,y) \in \mathcal{T} \cup \mathcal{T}^*} l(y, f(x|\theta)) + \alpha\|\theta\|$, (1) where $\hat{\theta}$ denotes the learned parameters of the model $f(\cdot)$, $l(\cdot)$ is a classification loss such as cross-entropy (Kullback, 1997), and $\alpha$ is a hyper-parameter to adjust the regularization. The target is to build a decision making procedure that performs counterfactual reasoning when serving the testing phase. Given a testing sample x, the core is a policy of generating counterfactual samples and retrospecting the decision, formulated as: $\bar{y} = h(x, \{x^*\} \mid \eta, \hat{\theta})$, $\{x^*\} = g(x \mid \omega)$, where $\bar{y} \in \mathbb{R}^C$ denotes the final prediction for the testing sample x, which is a distribution over the classes, and $x^*$ is one of the generated counterfactual samples for x. The generation module $g(\cdot)$, parameterized by $\omega$, is expected to construct a set of representative counterfactual samples for the target factual sample, which provide signals for the retrospection module $h(\cdot)$, parameterized by $\eta$, to retrospect the prediction $f(x|\hat{\theta})$ given by the trained classification model. In particular, $h(\cdot)$ and $g(\cdot)$ are learned from the factual and counterfactual training samples, respectively.
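The resulting test-time procedure can be sketched as follows (a schematic with our own naming; f, g, and h stand for the trained classifier, generation module, and retrospection module):

```python
def crm_inference(x, f, g, h):
    """Schematic CRM serving procedure: generate counterfactuals for x with g,
    score factual and counterfactual samples with the frozen classifier f,
    and let the retrospection module h produce the final decision."""
    counterfactuals = g(x)                 # one imagined sample per class
    y_fact = f(x)                          # standard forward prediction
    y_cf = [f(x_cf) for x_cf in counterfactuals]
    return h(x, counterfactuals, y_fact, y_cf)
```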
3.2 Counterfactual Reasoning Model Figure 2 illustrates the process of CRM, where the grey arrows represent the standard inference of the trained classification model and the red arrows represent the retrospection with consideration of counterfactual samples. [Figure 2: Illustration of the proposed CRM.] 3.2.1 Retrospection Module We devise the retrospection module with one key consideration: distilling signals for making the final decision by comparing both the latent representations and the predictions of the counterfactual samples with those of the factual sample. To achieve this, we devise three key building blocks, which successively perform representation comparison, prediction comparison, and fusion. In particular, the module first compares the representation of each counterfactual sample with the factual sample, then compares their predictions accordingly, and finally fuses the comparisons across the counterfactual samples. Representation comparison. Given a pair of counterfactual sample $x^*$ and factual sample $x$, we believe the signals meaningful for making the final decision lie in the difference between the samples and in how this difference affects the classification. To distill such signals, we devise the representation comparison block as $y_{\Delta} = f(x - x^* \mid \hat{\theta})$, where $y_{\Delta} \in \mathbb{R}^C$ denotes the prediction for the representation difference $x - x^*$ given by the trained classification model. We leverage the trained model to reveal how the content difference affects the classification, since the model is trained to capture the connection between textual patterns and classes. Note that we use a duplicate of the trained classification model for the representation comparison; that is, training the retrospection module does not affect the classification model. Prediction comparison. To retrospect the prediction $f(x|\hat{\theta})$, we devise a prediction comparison block that compares the predictions of each counterfactual and factual sample pair and distills patterns from $f(x|\hat{\theta})$, $f(x^*|\hat{\theta})$, and $y_{\Delta}$. Inspired by the success of convolutional neural networks (CNNs) in capturing local-region patterns, the block is devised as a CNN: $y^* = \mathrm{CNN}(f(x|\hat{\theta}), f(x^*|\hat{\theta}), y_{\Delta})$, (2) where $y^*$ denotes the retrospected prediction when comparing to $x^*$. In particular, a stack layer first stacks the three predictions as a matrix, which serves as an "image" to facilitate "observing" patterns. Formally, $Y = [f(x|\hat{\theta}), f(x^*|\hat{\theta}), y_{\Delta}]$, where $Y \in \mathbb{R}^{C \times 3}$. $Y$ is then fed into a 1D convolution layer to capture the intra-class patterns across the predictions, formulated as $H = \sigma(Y * F)$ with $H_{ij} = \sigma(Y_{i:} F_{:j})$, (3) where $F \in \mathbb{R}^{3 \times K}$ denotes the filters in the convolution layer and $\sigma(\cdot)$ is an activation function such as GELU (Hendrycks and Gimpel, 2016); $Y_{i:}$ and $F_{:j}$ represent the i-th row of $Y$ and the j-th column of $F$, respectively. A filter $F_{:j}$ can learn rules for conducting retrospection; for instance, a filter $[1, -1, 0]$ means deducting the prediction of the counterfactual sample from that of the factual sample. The output $H \in \mathbb{R}^{C \times K}$ is then flattened into a vector and fed into a fully-connected (FC) layer to capture the inter-class patterns. Formally, $y^* = W \, \mathrm{flatten}(H) + b$, (4) where $W$ and $b$ are model parameters. Fusion. The target is to fuse the retrospected predictions $\{y^*\}$ into a final decision $\bar{y}$. Inspired by the success of pooling functions in reading out patterns, we devise this block as $\bar{y} = \mathrm{pooling}(\{y^*\})$. As the fusion is performed after the pairwise comparison, we term it late fusion. Training. We update the parameters of the retrospection module by minimizing the classification loss over the factual training samples: $\hat{\eta} = \min_{\eta} \sum_{(x,y) \in \mathcal{T}} l(y, \bar{y}) + \lambda\|\eta\|$, (5) where $\lambda$ denotes the hyper-parameter adjusting the weight of the regularization term.
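A minimal PyTorch-style sketch of the prediction comparison block (Equations (2)-(4)), under the assumption of C classes and K filters (class and variable names are ours):

```python
import torch
import torch.nn as nn

class PredictionComparison(nn.Module):
    """Stack the three C-dim predictions as a C x 3 'image', apply a 1D
    convolution across the three predictions per class (Eq. 3), then an FC
    layer over the flattened output for inter-class patterns (Eq. 4)."""
    def __init__(self, num_classes, k=8):
        super().__init__()
        self.conv = nn.Conv1d(3, k, kernel_size=1)        # K filters over 3 predictions
        self.fc = nn.Linear(num_classes * k, num_classes)
    def forward(self, y_fact, y_cf, y_delta):             # each: (batch, C)
        Y = torch.stack([y_fact, y_cf, y_delta], dim=1)   # (batch, 3, C)
        H = torch.nn.functional.gelu(self.conv(Y))        # (batch, K, C)
        return self.fc(H.flatten(1))                      # retrospected prediction y*
```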
Given this uncertainty about the order of comparison and fusion, we further devise two fusion strategies, middle fusion and early fusion, which perform fusion within the CNN (i.e., during comparison) and before the CNN, respectively.

• Middle fusion performs aggregation between the convolution layer and the FC layer. This fusion first calculates the latent comparison signals $H$ for each pair of counterfactual and factual samples according to Equation 3. The aggregated signals $\mathrm{pooling}(\{H\})$ are then fed into the FC layer (Equation 4) to obtain the final decision $\mathbf{y}$.

• Early fusion aggregates the counterfactual samples before performing comparison, which is formulated as $\tilde{x}^* = \mathrm{pooling}(\{x^*\})$. In this way, the retrospection module is formulated as: $\mathbf{y} = \mathrm{CNN}(f(x|\hat{\theta}), f(\tilde{x}^*|\hat{\theta}), f(\tilde{x}^* - x|\hat{\theta}))$.

For all three fusion methods, we can use either a regular pooling function without parameters or a parameterized pooling function (Ying et al., 2018) to enhance the expressiveness of the retrospection module. In our experiments, using a simple mean pooling achieves a performance that is comparable to the parameterized one in most cases (cf. Table 3).

3.2.2 Generation Module. The target is to construct counterfactual samples that are informative for retrospecting the decision on the target factual sample $x$. As the task involves making a decision among $C$ candidate classes, we believe that the key to generating representative counterfactual samples lies in imagining "what would the content be if the sample belonged to class $c$", i.e., generating $C$ counterfactual samples $\{x^*_c\}$. With the $C$ classes as the targets, the search space of samples can also be largely narrowed down. Toward this end, we devise the generation module with two main considerations: 1) decomposing the factual sample $x$ to distill contents irrelevant to the label of the sample, $u = d(x|\omega)$; and 2) injecting class $c$ into $u$ to form the counterfactual sample $x^*_c$.

Decomposition. To distill $u$, we need to recognize the connection between the content of the factual sample and each class. We thus account for class representations in the decomposition function. To align the sample space of the generation module with the retrospection module $h(\cdot)$ and the classification model $f(\cdot)$, we extract the parameters of the prediction layer of the trained classification model as the class representations. In particular, we extract the mapping matrix $W \in \mathbb{R}^{C \times D}$, where the $c$-th row corresponds to class $c$. Note that we assume the prediction layer has the same dimensionality as the latent representation, which is a common setting in most cutting-edge language understanding models. The decomposition function is devised as a CNN to capture both the intra-dimension and inter-dimension connections between the factual sample and the classes.

• Stack layer. The stack layer stacks the factual sample, the class representations, and the element-wise product between the sample and each class, which is formulated as: $X = [x, W^T, x \odot W^T]$. $x \odot W^T \in \mathbb{R}^{D \times C}$ sheds light on how closely each dimension of $x$ connects to each class, where a large absolute value indicates a closer connection.

• Convolution layer. This layer uses 1D horizontal filters to learn patterns of deducting class-relevant contents from the factual sample, which is formulated as $h = \mathrm{pooling}(\sigma(X * F_g))$.
$F_g \in \mathbb{R}^{(2C+1) \times L}$ denotes the filters, where $L$ is the total number of filters. The output $h \in \mathbb{R}^D$ is a hidden representation.

• FC layers. We use two FC layers to capture the inter-dimension connections. Formally, $u = W_2\,\sigma(W_1 h + b_1) + b_2$, where $W_2 \in \mathbb{R}^{D \times M}$, $W_1 \in \mathbb{R}^{M \times D}$, $b_2 \in \mathbb{R}^D$, and $b_1 \in \mathbb{R}^M$ are learnable parameters. $M$ is a hyper-parameter to adjust the complexity of the decomposition function. Note that we could stack more layers to enhance the expressiveness of the function, whereas two layers suffice according to the universal approximation theorem (Hornik, 1991).

We learn the parameters of the decomposition function from the counterfactual training samples by optimizing the following objective:

$$\min_{\omega} \sum_{(x^*_c, c) \in \mathcal{T}^*} r(u^*_c, \tilde{u}_c) + \gamma\, l(c, f(x^*_c - u^*_c|\hat{\theta})) + r(u, \tilde{u}_c) + \gamma\, l(y, f(x - u|\hat{\theta})), \quad (6)$$

where $u^*_c = d(x^*_c|\omega)$ and $u = d(x|\omega)$ are the decomposition results of the counterfactual sample $x^*_c$ and the corresponding factual sample $x$, and $\tilde{u}_c = \frac{1}{2}(x + x^*_c)$ denotes the target value of the decomposition. The two terms $r(\cdot)$ and $l(\cdot)$ are the Euclidean distance (Dattorro, 2010) and the classification loss. By minimizing the two terms, we encourage the decomposition result 1) to be close to the target value $\tilde{u}_c$, and 2) to not influence the classification when deducted from the original sample (e.g., $x - u$). $\gamma$ is a hyper-parameter to balance the two terms. The rationale for setting $\tilde{u}_c = \frac{1}{2}(x + x^*_c)$ as the target class-irrelevant content of $x$ and $x^*_c$ comes from the parallelogram law (Nash, 2003). Note that this pair of samples belongs to two different classes, so a decision boundary (a hyperplane) lies between the two classes $y$ and $c$. Considering that the sample $x$ corresponds to a vector in the hidden space, we can decompose the vector into two components that are orthogonal and parallel to the decision boundary, i.e., $x^*_c = o^*_c + p^*_c$ and $x = o + p$. Since the two samples belong to different classes, their orthogonal components point in opposite directions, and their addition retains only the parallel components, which are irrelevant to judging the class between $y$ and $c$. (Note that we normalize all samples to be unit vectors in the decomposition function. Moreover, inspired by (Parascandolo et al., 2018), we train a decomposition function for each class, i.e., class-specific parameters $\hat{\omega}_c$.)

Injection. Accordingly, given a testing sample $x$, we can inject the orthogonal component towards class $c$ via $x^*_c = 2\,d(x|\hat{\omega}_c) - x$, which is the imagined content of the sample if it belonged to class $c$. In this way, for each testing sample, we conduct the injection over all the classes and construct $C$ counterfactual samples $\{x^*_c\}$, which are then used in the retrospection module. (The generation module thus consists of $C$ decomposition functions $d(x|\hat{\omega}_c)$ and the non-parametric injection function.)
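To illustrate, below is a minimal PyTorch sketch of one class-specific decomposition function $d(\cdot|\omega_c)$ and the injection step; the layer sizes, the mean pooling over filters, and the module interface are our assumptions for illustration:

```python
import torch
import torch.nn as nn

class Decomposition(nn.Module):
    """Class-specific decomposition d(x | omega_c): distill class-irrelevant content u."""

    def __init__(self, dim: int, num_classes: int, num_filters: int = 10, hidden: int = 256):
        super().__init__()
        # 1D filters span the (2C+1)-column stack [x, W^T, x (*) W^T].
        self.conv = nn.Conv1d(2 * num_classes + 1, num_filters, kernel_size=1)
        self.act = nn.GELU()
        # Two FC layers: u = W2 * sigma(W1 h + b1) + b2.
        self.fc = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))

    def forward(self, x, W):
        # x: [B, D] (assumed unit-normalized); W: [C, D] prediction-layer weights.
        Wt = W.t().unsqueeze(0).expand(x.size(0), -1, -1)                    # [B, D, C]
        X = torch.cat([x.unsqueeze(-1), Wt, x.unsqueeze(-1) * Wt], dim=-1)   # [B, D, 2C+1]
        # Treat the 2C+1 columns as channels; mean-pool filter outputs into h in R^D.
        h = self.act(self.conv(X.transpose(1, 2))).mean(dim=1)               # [B, D]
        return self.fc(h)                                                    # u: [B, D]

def inject(x, u_c):
    # Injection: x*_c = 2 * d(x | omega_c) - x, the imagined content for class c.
    return 2.0 * u_c - x
```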
3.3 Learning Paradigm with CRM

Existing work (Kaushik et al., 2019; Zeng et al., 2020) on language understanding typically follows the standard learning paradigm, i.e., training a classification model over labeled data. Applying the proposed CRM indeed forms a new learning paradigm for constructing language understanding solutions. Algorithm 1 illustrates the procedure of the new paradigm.

Algorithm 1 Learning paradigm with CRM
Input: Training data $\mathcal{T}$, $\mathcal{T}^*$.
/* Training */
1: Optimize Equation 1; ▷ Classification model training
2: Optimize Equation 6; ▷ Generation module training
3: Optimize Equation 5; ▷ Retrospection module training
4: Return $\hat{\theta}$, $\hat{\omega}_c$, and $\hat{\eta}$.
/* Testing */
5: Calculate $f(x|\hat{\theta})$; ▷ Classification model inference
6: for $c = 1 \to C$ do
7:   $x^*_c = 2\,d(x|\hat{\omega}_c) - x$; ▷ Generation
8: end for
9: Calculate $h(x, \{x^*_c\}|\hat{\eta}, \hat{\theta})$; ▷ Retrospection

4 Experiments

We conduct experiments on two representative language understanding tasks, SA and NLI, to answer the following research questions:
• RQ1: To what extent does counterfactual reasoning improve language understanding?
• RQ2: How does the design of the retrospection module affect the proposed CRM?
• RQ3: How effective are the counterfactual samples generated by the proposed generation module?

4.1 Experiment Settings

Datasets. We adopt the same datasets as (Kaushik et al., 2019) for both tasks. The SA data are reviews from IMDb, which are labeled as either positive or negative. For each factual review, the dataset contains a manually constructed counterfactual sample, where crowd workers are asked to manipulate the text to reverse the label under the constraint of no gratuitous change. NLI is a three-way classification task with two sentences as inputs and the target of detecting their relation among entailment, contradiction, and neutral. For each factual sample, four counterfactual samples are given, which are constructed by editing either the first or the second sentence with target relations different from the label of the factual sample.

Classification models. Owing to the extraordinary representational capacity of language models, fine-tuning a pre-trained language model has become the emergent technique for solving language understanding tasks (Devlin et al., 2019). We select the widely used RoBERTa-base (https://huggingface.co/roberta-base) and RoBERTa-large (https://huggingface.co/roberta-large) in consideration of the robustness of RoBERTa (Liu et al., 2019) and our limited computation resources. For SA, we also test the classical Multi-Layer Perceptron (MLP) (Teney et al., 2020) with tf-idf text features (Schütze et al., 2008) as inputs. For NLI, we further test RoBERTa-large-nli (https://huggingface.co/roberta-large-mnli), which has been fine-tuned on the large-scale MultiNLI dataset (Williams et al., 2018).

Baselines. As the proposed CRM leverages counterfactual samples, we compare CRM with three representative methods using counterfactual samples in language understanding tasks: 1) +CF (Kaushik et al., 2019), which uses counterfactual samples as data augmentation for model training; 2) +GS (Teney et al., 2020), which compares the factual and counterfactual samples in model training by regularizing their gradients; and 3) +CL (Liang et al., 2020), which compares the factual and counterfactual samples through a contrastive loss. Moreover, we report the performance of the testing model under Normal Training, i.e., training over factual samples only.

Implementation. We implement the proposed CRM with PyTorch 1.7.0 based on Hugging Face Transformers (https://github.com/huggingface/transformers); the code is released at https://github.com/fulifeng/Counterfactual_Reasoning_Model. In all cases, we follow the setting of +CF for training the classification model, which is a standard fine-tuning as in (Liu et al., 2019).
We then use Adam (Kingma and Ba, 2014) with a learning rate of 0.001 to optimize the retrospection module and the generation module. For the retrospection module, we set the number of filters in the convolution layer $K$ as 10 and the weight for regularization $\lambda$ as 0. As to the generation module, we set the number of convolution filters as 10, the size of the hidden layer $M$ as 256, and the weight $\gamma$ for balancing the Euclidean distance and the classification loss as 15. We report the average classification accuracy over 5 different runs. For each run, we train the model for 20 epochs and select the model with the best performance on the validation set.

4.2 Performance Comparison (RQ1)

We first use the handcrafted counterfactual samples to demonstrate the effectiveness of counterfactual reasoning in the inference stage of language understanding models, which can be seen as using a golden-standard generation module to provide counterfactual samples for the retrospection module. Note that we do not use the labels of counterfactual samples in the testing set. Table 1 shows the performance of the compared methods on the two tasks.

Table 1: Performance of the proposed CRM (Early Fusion) and baselines on the SA and NLI tasks. RI means the relative performance improvement achieved by +CRM over the classification model without CRM, i.e., +CF ("–" denotes a result not reported).

Sentiment Classification
Backbone           Normal Training  +CF       +GS       +CL       +CRM      RI
MLP                86.9±0.5         85.3±0.3  84.6±0.4  –         98.6±0.2  15.6%
RoBERTa-base       93.2±0.6         92.3±0.7  92.2±0.9  91.8±1.1  97.5±0.3  5.7%
RoBERTa-large      93.6±0.6         93.4±0.4  93.1±0.5  94.1±0.4  98.2±0.3  5.1%

Natural Language Inference
Backbone           Normal Training  +CF       +GS       +CL       +CRM      RI
RoBERTa-base       83.5±0.8         83.4±0.9  83.8±1.7  84.1±1.1  91.5±1.6  9.7%
RoBERTa-large      87.9±1.7         85.8±1.2  86.2±1.2  86.5±1.6  93.8±1.9  9.3%
RoBERTa-large-nli  89.4±0.7         88.2±1.0  87.2±1.4  88.2±1.0  94.4±1.2  7.1%

From the table, we observe that:

• +CRM largely outperforms all the baseline methods in all cases. Compared to +CF, the same classification model without CRM in the testing phase, +CRM achieves a relative performance improvement of up to 15.6%. The performance gain is attributed to the retrospection module, which justifies the rationality and effectiveness of incorporating counterfactual thinking into the inference stage of language understanding models. In other words, by comparing the factual sample with its counterfactual samples, the retrospection module indeed makes more accurate decisions.

• On the SA task, a huge gap (85.3 → 93.4) lies between the performance of the shallow MLP and the deep RoBERTa-base/RoBERTa-large. When applying +CRM, MLP achieves a performance that is comparable to the deep models. The result indicates that counterfactual reasoning can compensate for the disadvantages caused by insufficient model representational capacity. In addition, the result reflects that CRM brings a cognitive ability beyond recognizing textual patterns: if the retrospection module only facilitated capturing the correlation between textual patterns and classes, such a simple model could not bridge the huge gap in representational capacity between MLP and RoBERTa-large.
• The performance of the baseline methods is comparable to each other in most cases, i.e., incorporating counterfactual samples into model training does not necessarily improve the testing performance on factual samples. This result is consistent with (Kaushik et al., 2019), which is reasonable since these methods are devised for enhancing the generalization ability, especially on out-of-distribution testing samples, which can sacrifice the performance on normal testing samples. Besides, the result indicates that training with counterfactual samples is insufficient for achieving counterfactual thinking, which reflects the rationality of enhancing the inference paradigm with a decision-making procedure.

[Figure 3: Prediction performance of +CF and +CRM over testing samples at different confidence levels. (a) Sentiment analysis; (b) Natural language inference.]

Performance on hard samples. Furthermore, we investigate whether the proposed CRM facilitates dealing with hard samples. Recall that we split the testing samples into 10 groups according to the confidence of the classification model, i.e., +CF (cf. Section 2). We perform a group-wise comparison between +CF and +CRM. Figure 3 shows the performance of all the classification models with +CF and +CRM. From the figures, 1) we observe that the performance of +CRM is stable across different confidence levels, whereas the performance of the classification model shows a clear decreasing trend as the confidence level decreases from 10 to 1. The result indicates that the retrospection module is insensitive to the confidence of the classification model. 2) In all cases, +CRM achieves the largest performance gain on the first group, with confidence level 1, i.e., the hardest group for the classification model. For instance, the improvement reaches 85.7% on the RoBERTa-base model for the NLI task. The large improvements further justify the effectiveness of the retrospection module, i.e., comparing the prediction of factual samples to counterfactual samples indeed facilitates dealing with hard samples.

CRM vs. implicit modeling. According to the universal approximation theorem (Hornik, 1991), the CRM can also be approximated by a deep neural network. We thus investigate whether counterfactual thinking can be learned in an implicit manner. In particular, we evaluate a model that takes both the factual sample and counterfactual samples as inputs to make the prediction for the factual one. Table 2 shows the performance:

Table 2: Performance comparison of implicit modeling (end-to-end model) and explicit modeling (CRM) of counterfactual thinking.

Sentiment Classification
Backbone           Implicit   +CRM
MLP                79.3±0.2   98.6±0.2
RoBERTa-base       94.7±0.6   97.5±0.3
RoBERTa-large      98.0±0.4   98.2±0.3

Natural Language Inference
Backbone           Implicit   +CRM
RoBERTa-base       81.9±3.5   91.5±1.6
RoBERTa-large      87.4±2.2   93.8±1.9
RoBERTa-large-nli  88.8±1.6   94.4±1.2

From the table, we have the following observations: 1) The implicit modeling performs much worse than the proposed CRM in most cases, which justifies the effectiveness of the retrospection module and the rationality of modeling the comparison explicitly.
2) On the NLI task, RoBERTa-base+CRM outperforms RoBERTa-large (implicit), which means that the superior performance of CRM is not due to the additional model parameters introduced by the retrospection module, but to the explicit comparison between factual and counterfactual samples.

4.3 In-depth Analysis

Effects of retrospection module design (RQ2). Note that the order of comparison and fusion in the human retrospection mechanism is still unclear. We therefore investigate how the fusion strategies influence the effectiveness of the proposed CRM. Table 3 shows the performance of CRM based on early fusion (EF), late fusion (LF), and middle fusion (MF) on the NLI task.

Table 3: Performance of the proposed CRM based on early fusion (EF), late fusion (LF), or middle fusion (MF) on the NLI task. RI represents the relative performance improvement over the +CF method.

Backbone           +CF       EF        RI     LF        RI     MF        RI
RoBERTa-base       83.4±0.9  91.5±1.6  9.7%   92.8±1.8  11.3%  89.6±2.0  7.4%
RoBERTa-large      85.8±1.2  93.8±1.9  9.3%   95.3±0.7  11.1%  93.4±1.7  8.9%
RoBERTa-large-nli  88.2±1.0  94.4±1.2  7.1%   93.8±0.4  6.4%   94.7±1.3  7.4%

We omit the comparison on the SA task since that dataset only has one counterfactual sample per factual sample. For both EF and LF, we use mean pooling as the pooling function. As to MF, we use a pooling function equipped with self-attention (Vaswani et al., 2017). The reasons for this setting are twofold: 1) using mean pooling would make LF and MF equivalent, since the FC layer in the retrospection module is a linear mapping; note that LF performs pooling after the FC layer, while the pooling function of MF sits just before the FC layer. 2) The comparison between LF and MF can thus shed light on whether a parameterized pooling function benefits the retrospection. From the table, we observe that, in most cases, CRM based on the different fusion strategies achieves comparable performance. It indicates that the retrospection is insensitive to the order of the fusion and the comparison between counterfactual and factual samples. Considering that MF with mean pooling is equivalent to LF, we can see that the benefit of a parameterized pooling function is limited; in particular, MF only performs better than LF on one of the three testing models.

Effects of generation module (RQ3). We then investigate whether the proposed generation module constructs useful counterfactual samples for retrospection. We train and test the retrospection module (using EF) with the generated samples on RoBERTa-large on the SA task. We omit the experiments in other settings to save computation resources. In this way, the model achieves an accuracy of 94.5, which is better than +CF (93.4) but worse than +CRM with manually constructed counterfactual samples (98.2) (cf. Table 1). The result indicates that the generated samples indeed facilitate the retrospection, while the generation quality can be further improved. Moreover, on the testing samples at confidence level 1, using the generated samples achieves an accuracy of 81.3, which is much better than +CF (70.8) (cf. Figure 3). The generated samples indeed benefit decision making over hard testing samples.

5 Related Work

Counterfactual sample. Constructing counterfactual samples has become an emergent data augmentation technique in natural language processing, which has been used in a wide spectrum of language understanding tasks, including SA (Kaushik et al., 2019; Yang et al., 2020), NLI (Kaushik et al., 2019), named entity recognition (Zeng et al., 2020), question answering (Chen et al., 2020), dialogue systems (Zhu et al., 2020), and vision-language navigation (Fu et al., 2020).
Beyond data augmentation under the standard supervised learning paradigm, a line of research explores incorporating counterfactual samples into other learning paradigms such as adversarial training (Zhu et al., 2020; Fu et al., 2020; Teney et al., 2020) and contrastive learning (Liang et al., 2020). This work lies in an orthogonal direction that incorporates counterfactual samples into the decision-making procedure of model inference.

Counterfactual inference. A line of research attempts to equip deep neural networks with counterfactual thinking by incorporating counterfactual inference (Yue et al., 2021; Wang et al., 2021; Niu et al., 2021; Tang et al., 2020; Feng et al., 2021). These methods perform counterfactual inference over the model predictions according to a pre-defined causal graph. Due to the requirement of a causal graph, such methods are hard to generalize to different tasks. Our method does not suffer from this limitation, since it works on counterfactual samples, which can be generated without a comprehensive causal graph.

Hard sample. A wide spectrum of machine learning techniques is related to dealing with hard samples in language understanding. For instance, adversarial training (Khashabi et al., 2020) enhances the model robustness against perturbations and attacks, which are hard samples for normally trained models. Debiased training (Tu et al., 2020; Utama et al., 2020) eliminates the spurious correlations or biases in the training data to enhance the generalization ability and deal with out-of-distribution samples. In addition to the training phase, a few inference techniques might improve the model performance on hard samples, including posterior regularization (Srivastava et al., 2018) and causal inference (Yu et al., 2020; Niu et al., 2021). However, both techniques require domain knowledge such as a prior or a causal graph tailored for specific applications. On the contrary, this work provides a general paradigm that can be used for most language understanding tasks." + }, + { + "url": "http://arxiv.org/abs/2010.11797v2", + "title": "Should Graph Convolution Trust Neighbors? A Simple Causal Inference Method", + "abstract": "Graph Convolutional Network (GCN) is an emerging technique for information retrieval (IR) applications. While GCN assumes the homophily property of a graph, real-world graphs are never perfect: the local structure of a node may contain discrepancy, e.g., the labels of a node's neighbors could vary. This pushes us to consider the discrepancy of local structure in GCN modeling. Existing work approaches this issue by introducing an additional module such as graph attention, which is expected to learn the contribution of each neighbor. However, such a module may not work reliably as expected, especially when supervision signals are lacking, e.g., when the labeled data is small.
Moreover, existing methods focus on modeling the nodes in the training data, and never consider the local structure discrepancy of testing nodes. This work focuses on the local structure discrepancy issue for testing nodes, which has received little scrutiny. From a novel perspective of causality, we investigate whether a GCN should trust the local structure of a testing node when predicting its label. To this end, we analyze the working mechanism of GCN with a causal graph, estimating the causal effect of a node's local structure on the prediction. The idea is simple yet effective: given a trained GCN model, we first intervene on the prediction by blocking the graph structure; we then compare the original prediction with the intervened prediction to assess the causal effect of the local structure on the prediction. In this way, we can eliminate the impact of local structure discrepancy and make more accurate predictions. Extensive experiments on seven node classification datasets show that our method effectively enhances the inference stage of GCN.", + "authors": "Fuli Feng, Weiran Huang, Xiangnan He, Xin Xin, Qifan Wang, Tat-Seng Chua", + "published": "2020-10-22", + "updated": "2021-06-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "1 INTRODUCTION GCN is being increasingly used in IR applications, ranging from search engines [29, 57] and recommender systems [5, 9, 12, 49] to question-answering systems [14, 64]. Its main idea is to augment a node's representation by aggregating the representations of its neighbors. In practice, GCN can face the local structure discrepancy issue [3], since real-world graphs usually exhibit locally varying structure. That is, nodes can exhibit inconsistent distributions of local structure properties such as homophily and degree. Figure 1 shows an example in a document citation graph [15], where the local structure centered at node 1 and node 2 has different properties regarding cross-category edges (this work focuses on the discrepancy w.r.t. cross-category connections).

[Figure 1: Illustration of local structure discrepancy in OGB-arXiv [15], a citation graph of papers; nodes in different colors belong to different categories.]

Undoubtedly, applying the same aggregation over node 1 and node 2 will lead to inferior node representations. Therefore, it is essential for GCN to account for the local structure discrepancy issue. Existing work considers this issue by equipping GCN with an adaptive locality module [44, 55], which learns to adjust the contribution of neighbors. Most of the efforts focus on the attention mechanism, such as neighbor attention [44] and hop attention [27]. Ideally, the attention weights could downweigh the neighbors that cause discrepancy, e.g., the neighbors of categories different from the target node. However, graph attention is not easy to train well in practice, especially in the hard semi-supervised learning setting with very limited labeled data [22]. Moreover, existing methods mainly consider the nodes in the training data, ignoring the local structure discrepancy of the testing nodes, which, however, is the decisive factor for model generalization. It is thus insufficient to resolve the discrepancy issue by adjusting the architecture of GCN. In this work, we argue that it is essential to empower the inference stage of a trained GCN with the ability to handle the local structure discrepancy issue.
In real-world applications, the graph structure typically evolves over time, resulting in structure discrepancy between the training data and testing data. Moreover, a testing node can be newly coming (e.g., a new user), which may exhibit properties different from the training nodes [42]. However, the one-pass inference procedure of existing GCNs indiscriminately uses the learned model parameters to make predictions for all testing nodes, lacking the capability of handling the structure discrepancy. This work aims to bridge the gap by upgrading the GCN inference to be node-specific according to the extent of structure discrepancy. To achieve this target, the key lies in analyzing the prediction generation process of GCN on each node and estimating to what extent accounting for a node's neighbors affects its prediction, i.e., the causal effect of the local structure on the GCN prediction. Based on the evidence that model output can reflect feature discrepancy [40], we make a key assumption that the GCN output provides evidence on the properties of the local structure centered at a testing node. For instance, if the local structure exhibits properties distinct from the seen ones, the model will be uncertain about its prediction when the neighbors are taken into account. Accordingly, we should downweigh the contribution of neighbors to reduce the impact of the discrepancy on the prediction. Inherently, both a node's features and its neighbors are causes of the prediction for the node. By distinguishing the two causal effects, we can assess and revise the model prediction in a node-specific manner. To this end, we resort to the language of causal graphs [35] to describe the causal relations in GCN prediction. We propose a Causal GCN Inference (CGI) model, which adjusts the prediction of a trained GCN according to the causal effect of the local structure. In particular, CGI first calls for a causal intervention that blocks the graph structure and forces the GCN to use a node's own features to make the prediction. CGI then makes a choice between the intervened prediction and the original prediction, according to the causal effect of the local structure, the prediction confidence, and other factors that characterize the prediction. Intuitively, CGI is expected to choose the intervened prediction (i.e., trusting self) when facing a testing node with local structure discrepancy. To learn a good choice-making strategy, we devise it as a separate classifier, which is learned based on the trained GCN. We demonstrate CGI on APPNP [21], one of the state-of-the-art GCN models for semi-supervised node classification. Extensive experiments on seven datasets validate the effectiveness of our approach. The codes are released at: https://github.com/fulifeng/CGI. The main contributions of this work are summarized as follows:
• We achieve adaptive locality during GCN inference and propose a CGI model that is model-agnostic.
• We formulate the causal graph of the GCN working mechanism, and the estimation of causal intervention and causal uncertainty based on the causal graph.
• We conduct experiments on seven node classification datasets to demonstrate the rationality of the proposed methods.

2 PRELIMINARIES

Node classification.
We represent a graph with $N$ nodes as $G = (A, X)$, i.e., an adjacency matrix $A \in \mathbb{R}^{N \times N}$ associated with a feature matrix $X = [x_1, x_2, \cdots, x_N]^T \in \mathbb{R}^{N \times D}$. $A$ describes the connections between nodes, where $A_{ij} = 1$ means there is an edge between nodes $i$ and $j$, and $A_{ij} = 0$ otherwise. $D$ is the dimension of the input node features. Node classification is one of the most popular analytic tasks on graph data. In the general problem setting, the labels of $M$ nodes are given as $Y = [\mathbf{y}_1, \mathbf{y}_2, \cdots, \mathbf{y}_M]^T \in \mathbb{R}^{M \times L}$, where $L$ is the number of node categories and each $\mathbf{y}_i$ is a one-hot vector. The target is to learn a classifier from the labeled nodes, formally,

$$f(x, \mathcal{N}(x)\,|\,\theta), \quad \mathcal{N}(x) = \{x_n\,|\,A_{in} = 1\}, \quad (1)$$

where $\theta$ denotes the parameters of the classifier and $\mathcal{N}(x)$ denotes the neighbor nodes of the target node $x$. Without loss of generality, we index the labeled nodes and testing nodes in the ranges $[1, M]$ and $(T, N]$, respectively. There are four popular settings with minor differences regarding the observability of testing nodes during model training and the amount of labeled nodes. Specifically,

• Inductive Full-supervised Learning: In this setting, testing nodes are not included in the graph used for model training, and all training nodes are labeled. That is, $M = T$, and the classifier is learned with $f(X_{tr}|A_{tr}, \theta)$, where $X_{tr} \in \mathbb{R}^{M \times D}$ and $A_{tr}$ denote the features and the subgraph of the training nodes.

• Inductive Semi-supervised Learning [11]: In many real-world applications such as text classification [26], it is unaffordable to label all the observed nodes, i.e., only a small portion of the training nodes are labeled (in fact, $M \ll T$).

• Transductive Full-supervised Learning [15]: In some cases, the graph is relatively stable, i.e., no new node occurs, and the whole graph $X$ and $A$ is utilized for model training.

• Transductive Semi-supervised Learning [20]: In this setting, the whole graph is available for model training, while only a small portion of the training nodes are labeled.

It should be noted that we do not restrict our problem to a specific setting like most previous work on GCN, since we focus on the general inference model.

Graph Convolutional Network. Taking the graph as input, GCN learns node representations to encode the graph structure and node features (the last layer makes predictions) [20]. The key operation of GCN is neighbor aggregation, which can be abstracted as:

$$\bar{x} = AGG(x, \{x_n\,|\,x_n \in \mathcal{N}(x)\}), \quad (2)$$

where $AGG$ denotes the node aggregation operation such as a weighted summation [20]; $x$ and $\bar{x} \in \mathbb{R}^D$ are the original representation of the target node (node features or the representation at the previous layer) and the one after aggregating neighbor node features.
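For concreteness, below is a minimal sketch of one aggregation step with a mean $AGG$; the mean and the self-loop normalization are our illustrative choices (GCN [20] instead uses a symmetric normalization of the adjacency matrix):

```python
import torch

def mean_aggregate(X, A):
    """One abstracted aggregation step (Eq. 2) with self-connections and a mean AGG."""
    # X: [N, D] node representations; A: [N, N] binary adjacency matrix.
    A_hat = A + torch.eye(A.size(0))      # add self-loops so the target node is kept
    deg = A_hat.sum(dim=1, keepdim=True)  # degree of each node (incl. the self-loop)
    return (A_hat @ X) / deg              # x_bar_i = mean over {x_i} and N(x_i)
```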
Note that a standard GCN layer typically also contains a feature transformation, which is omitted for brevity.

Adaptive locality. In most GCNs, the target node is treated equally to the neighbor nodes, i.e., there is no additional operation except adding an edge for the self-connection. Aiming to distinguish the contributions of the target node and the neighbor nodes, a self-weight $\alpha$ can be utilized, i.e., $AGG(\alpha * x, \{(1-\alpha) * x_n\,|\,x_n \in \mathcal{N}(x)\})$. More specifically, neighbor attention [44] is introduced to learn node-specific weights, i.e., $AGG(\alpha\, x, \{\alpha_n * x_n\,|\,x_n \in \mathcal{N}(x)\})$. The weights $\alpha$ and $\alpha_n$ are calculated by an attention model such as multi-head attention [43] with the node representations $x$ and $x_n$ as inputs. Lastly, hop attention [27] is devised to adaptively aggregate the target node representations at different GCN layers $x_0, \cdots, \bar{x}_k, \cdots, \bar{x}_K$ into a final representation, where $\bar{x}_k$ is the convolution output at the $k$-th layer, which encodes the $k$-hop neighbors of the target node. For a target node that is expected to trust itself more, the hop attention is expected to assign a higher weight to $x_0$. Most of these adaptive locality models are learned during model training, except the self-weight $\alpha$ in GCN models like APPNP, which is tuned on the validation set.

Causal effect. Causal effect is a concept in causal science [35], which studies the influence among variables. Given two variables $X$ and $Y$, the causal effect of $X = x$ on $Y$ is to what extent changing the value of $X$ to $x$ affects the value of $Y$, which is abstracted as:

$$Y_x - Y_{x^*}, \quad (3)$$

where $Y_x$ and $Y_{x^*}$ are the outcomes of $Y$ with $X = x$ and $X = x^*$ as inputs, respectively. $x^*$ is the reference status of variable $X$, which is typically set as an empty value (e.g., zero) or the expectation of $X$.

3 METHODOLOGY

In this section, we first scrutinize the cause-effect factors in the inference procedure of GCN and then introduce the proposed CGI.
Assume that we are given a well-trained GCN $f(x, \mathcal{N}(x)|\hat{\theta})$, which is optimized over the training nodes according to the following objective function:

$$\hat{\theta} = \min_{\theta} \sum_{i=1}^{M} l(\hat{\mathbf{y}}_i, \mathbf{y}_i) + \lambda \|\theta\|_F^2, \quad (4)$$

where $l(\cdot)$ denotes a classification loss function such as cross-entropy, $\hat{\mathbf{y}}_i = f(x_i, \mathcal{N}(x_i)|\theta) \in \mathbb{R}^L$ denotes the model prediction for node $i$, and $\lambda$ is a hyper-parameter to balance the training loss and the regularization term for preventing overfitting. It should be noted that $\hat{\mathbf{y}}_i$ is a probability distribution over the label space. The final classification $\hat{z}_i$ corresponds to the category with the largest probability, which is formulated as:

$$\hat{z}_i = \arg\max_{j \leq L} \hat{y}_{(i,j)}, \quad (5)$$

where $\hat{y}_{(i,j)}$ is the $j$-th entry of $\hat{\mathbf{y}}_i$. In the following, the mentions of prediction and classification refer to the predicted probability distribution ($\hat{\mathbf{y}}_i$) and category ($\hat{z}_i$), respectively. Besides, the subscript $i$ is omitted for brevity.

3.1 Cause-effect View

[Figure 2: Cause-effect view of GCN. (a) Causal graph of the GCN inference process; (b) making the original prediction; (c) causal intervention $do(N = \emptyset)$, where a dashed arrow means the effect from the predecessor is blocked.]

Causal graph. A causal graph is a directed acyclic graph that describes a data generation process [35], where nodes represent variables in the process and edges represent the causal relations between variables. To facilitate analyzing the inference of GCN, i.e., the generation process of the output, we abstract the inference of GCN as a causal graph (Figure 2(a)), which consists of four variables:
• $X$, which denotes the features of the target node; $x$ is an instance of the variable.
• $N$, which denotes the neighbors of the target node, e.g., $\mathcal{N}(x)$. The sample space of $N$ is the power set of all nodes in $G$.
• $\bar{X}$, which is the output of the graph convolution at the last GCN layer.
• $\hat{Y}$, which denotes the GCN prediction, i.e., the instance of $\hat{Y}$ is $\hat{\mathbf{y}}$.

Functionally speaking, the structure $X \to \bar{X} \leftarrow N$ represents the graph convolution, where both the target node features and the neighbor nodes directly affect the convolution output. The output of the graph convolution $\bar{X}$ then directly affects the model prediction, which is represented as $\bar{X} \to \hat{Y}$. Note that there is also a direct edge $X \to \hat{Y}$, which means that $X$ directly affects the prediction. We include this direct edge for two considerations: 1) residual connections are widely used in GCN to prevent the over-smoothing issue [20], which enables the features of the target node to influence its prediction directly; 2) recent studies reveal the advantages of two-stage GCNs, where the model first makes a prediction from each node's features and then conducts graph convolution.
Recall that the conventional GCN inference, i.e., the calculation of $\hat{\mathbf{y}}$, is typically a one-pass forward propagation of the GCN model with $x$ and $\mathcal{N}(x)$ as inputs. Based on the causal graph, this procedure can be interpreted as Figure 2(b), where every variable obtains an instance (e.g., $X = x$). Apart from providing a new understanding of the conventional GCN inference, causal theory offers analytical tools based on the causal graph, such as causal intervention [35], which enable an in-depth analysis of the factors resulting in the prediction and further reasoning based on the prediction [36].

Causal intervention. Our target is to assess whether the prediction on a target testing node faces the local structure discrepancy issue, and to further adjust the prediction to achieve adaptive locality. We resort to causal intervention, which forcibly assigns an instance to a treatment variable, to estimate the causal effect of the target node's neighbors on the prediction (i.e., the causal effect of $N = \mathcal{N}(x)$). Formally, the causal effect $\mathbf{e} \in \mathbb{R}^L$ is defined as:

$$\mathbf{e} = f(x, \mathcal{N}(x)\,|\,\hat{\theta}) - f(x, do(N = \emptyset)\,|\,\hat{\theta}) = f(x, \mathcal{N}(x)\,|\,\hat{\theta}) - f(x, \emptyset\,|\,\hat{\theta}) = \hat{\mathbf{y}} - \hat{\mathbf{y}}_s,$$

where $do(N = \emptyset)$ represents a causal intervention that forcefully assigns a reference status to $N$, resulting in a post-intervention prediction $f(x, do(N = \emptyset)\,|\,\hat{\theta})$ (see Figure 2(c)). Since $N$ has no predecessor, $f(x, do(N = \emptyset)\,|\,\hat{\theta}) = f(x, \emptyset\,|\,\hat{\theta})$, which is denoted as $\hat{\mathbf{y}}_s \in \mathbb{R}^L$. Intuitively, the post-intervention prediction means: if the target node had no neighbors, what would the prediction be? We believe that $\mathbf{e}$ provides clues for performing adaptive locality on the target node. For instance, we might adjust the original prediction if the entries of $\mathbf{e}$ have abnormally large absolute values, which means that the local structure at the target node may not satisfy the homophily assumption [30]. Note that we take the empty set as a representative reference status of $N = \mathcal{N}(x)$ due to the wide usage of empty values as references in causal intervention [35], but it can be replaced with any subset of $\mathcal{N}(x)$ (see Section 3.3).

3.2 Causal GCN Inference Mechanism

The requirement of adaptive locality for the testing nodes pushes us to build an additional mechanism, i.e., CGI, to enhance the GCN inference stage. We have two main considerations for devising the mechanism: 1) the mechanism has to be learned from the data, instead of being handcrafted, to enable its usage on different GCN models and datasets; and 2) the mechanism should effectively capture the connection between the causal effect of $N = \mathcal{N}(x)$ and local structure discrepancy, i.e., learn the patterns for adjusting the original prediction to improve the prediction accuracy.
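Both designs introduced next consume the original prediction $\hat{\mathbf{y}}$, the post-intervention prediction $\hat{\mathbf{y}}_s$, and the causal effect $\mathbf{e}$. A minimal sketch of producing them for one testing node follows; we assume a `gcn` callable taking a feature vector and a neighbor list, and for matrix-form models the blocking could instead be implemented by, e.g., replacing the adjacency matrix with the identity:

```python
import torch

@torch.no_grad()
def intervene(gcn, x, neighbors):
    """Original vs. post-intervention prediction, and the causal effect e."""
    y_hat = gcn(x, neighbors)   # f(x, N(x) | theta_hat): one-pass inference
    y_s = gcn(x, [])            # f(x, do(N = empty) | theta_hat): structure blocked
    return y_hat, y_s, y_hat - y_s
```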
$L$-way classification model. A straightforward solution is devising CGI as an $L$-way classification model that directly generates the final prediction according to the original prediction $\hat{\mathbf{y}}$, the post-intervention prediction $\hat{\mathbf{y}}_s$, and the causal effect $\mathbf{e}$. Formally,

$$\bar{\mathbf{y}} = h(\hat{\mathbf{y}}, \hat{\mathbf{y}}_s, \mathbf{e}\,|\,\omega), \quad (6)$$

where $h(\cdot)$ denotes an $L$-way classifier parameterized by $\omega$ and $\bar{\mathbf{y}} \in \mathbb{R}^L$ denotes the final prediction. Similar to the training of GCN, we can learn the parameters of $h(\cdot)$ by optimizing the classification loss over the labeled nodes, which is formulated as:

$$\hat{\omega} = \min_{\omega} \sum_{i=1}^{M} l(\bar{\mathbf{y}}_i, \mathbf{y}_i) + \alpha \|\omega\|_F^2, \quad (7)$$

where $\alpha$ is a hyper-parameter to adjust the strength of regularization. Undoubtedly, this model can be easily developed and applied to any GCN. However, as it is optimized over the overall classification loss, the model faces an issue similar to that of the attention mechanism [22]. To bridge this gap, it is essential to learn CGI under the awareness of whether a testing node encounters local structure discrepancy.

Choice model. Therefore, the inference mechanism should focus on the nodes with inconsistent classifications from $\hat{\mathbf{y}}$ and $\hat{\mathbf{y}}_s$, i.e., $\hat{z} \neq \hat{z}_s$, where $\hat{z}$ is the original classification and $\hat{z}_s = \arg\max_{j \leq L} \hat{y}^s_{(j)}$ is the post-intervention classification. That is, we let the CGI mechanism learn from nodes where accounting for neighbors changes the classification. To this end, we devise the inference mechanism as a choice model, which is expected to make a wise choice between $\hat{z}$ and $\hat{z}_s$ to eliminate the impact of local structure discrepancy. Formally,

$$\bar{z} = \begin{cases} \hat{z}, & \hat{p} \geq t, \\ \hat{z}_s, & \hat{p} < t, \end{cases} \quad \hat{p} = g(\hat{\mathbf{y}}, \hat{\mathbf{y}}_s, \mathbf{e}\,|\,\phi), \quad (8)$$

where $g(\cdot)$ denotes a binary classifier with parameters $\phi$, the output $\hat{p}$ of the classifier is used for making the choice, and $t$ is the decision threshold, which depends on the selected classifier. To learn the model parameters $\phi$, we calculate the ground truth for making the choice according to the correctness of $\hat{z}$ and $\hat{z}_s$. Formally, the choice training data of the binary classifier is:

$$\mathcal{D} = \{(x, p)\,|\,\hat{z} = z \,\cup\, \hat{z}_s = z\}, \quad p = \mathrm{flag}(\hat{z} = z), \quad (9)$$

where $z$ denotes the correct category of node $x$, and $\mathrm{flag}(\hat{z} = z) = 1$ if $\hat{z}$ equals $z$ and $-1$ otherwise.
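A minimal sketch of constructing $\mathcal{D}$, assuming the per-node input factors of Section 3.3 have been computed (storing factors instead of raw nodes is our assumption about the classifier's input):

```python
def build_choice_data(factors, z_hat, z_hat_s, z_true):
    """Construct the choice training data D of Eq. (9).

    factors: per-node inputs for the choice model (see Section 3.3);
    z_hat / z_hat_s: original and post-intervention classifications;
    z_true: ground-truth categories of the labeled nodes.
    """
    D = []
    for feat, zh, zs, z in zip(factors, z_hat, z_hat_s, z_true):
        if zh == z or zs == z:        # keep nodes where either prediction is correct
            p = 1 if zh == z else -1  # +1: trust the original prediction
            D.append((feat, p))
    return D
```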
The training of the choice model is then formulated as:

$$\hat{\phi} = \min_{\phi} \sum_{(x,p) \in \mathcal{D}} l(\hat{p}, p) + \beta \|\phi\|_F^2, \quad (10)$$

where $\beta$ is a hyper-parameter to adjust the strength of regularization.

Data sparsity. Inevitably, the choice training data $\mathcal{D}$ will face a sparsity issue for two reasons: 1) labeled nodes are limited in some applications; and 2) only a small portion of the labeled nodes satisfy the criteria of $\mathcal{D}$. To tackle the data sparsity issue, we have two main considerations: 1) the complexity of the binary classifier should be strictly controlled; toward this end, we devise the choice model as a Support Vector Machine (SVM) [37], since SVM only requires a few samples to serve as support vectors for making the choice. 2) the inputs of the binary classifier should be free from the number of classes $L$, which can be large in some applications; to this end, we distill low-dimensional and representative factors from the two predictions (i.e., $\hat{\mathbf{y}}$ and $\hat{\mathbf{y}}_s$) and the causal effect $\mathbf{e}$ to serve as the inputs of the choice model, which is detailed in Section 3.3.

To summarize, compared to conventional GCN inference, the proposed CGI has two main differences:
• In addition to the original prediction, CGI calls for causal intervention to further make a post-intervention prediction.
• CGI makes a choice between the original prediction and the post-intervention prediction with a choice model.

The following summarizes the slight change to GCN's training and inference schema needed to apply the proposed CGI:

Algorithm 1 Applying CGI to GCN
Input: Training data $X$, $A$, $Y$.
/* Training */
1: Optimize Equation (4), obtaining GCN ($\hat{\theta}$); ⊲ GCN training
2: Construct $\mathcal{D}$; ⊲ Causal intervention
3: Optimize Equation (10), obtaining choice model ($\hat{\phi}$); ⊲ CGI training
4: Return $\hat{\theta}$ and $\hat{\phi}$.
/* Testing */
5: Calculate $f(x, \mathcal{N}(x)|\hat{\theta})$; ⊲ Original prediction
6: Calculate $f(x, \emptyset|\hat{\theta})$; ⊲ Post-intervention prediction
7: Calculate the final classification with Equation (8);

3.3 Input Factors

To reduce the complexity of the choice model, we distill three types of factors as its input: causal uncertainty, prediction confidence, and category transition.

Causal uncertainty. According to the homophily assumption [30], the neighbors should not largely change the prediction of the target node. Therefore, the target node may face the local structure discrepancy issue if the causal effect $\mathbf{e}$ of the neighbors has large variance, i.e., the causal effect exhibits high uncertainty w.r.t. different reference values. Inspired by Monte Carlo uncertainty estimation, we resort to the variance of $\mathbf{e}$ to describe the causal uncertainty, which is formulated as:

$$\mathbf{v} = \mathrm{var}(\{f(x, \mathcal{N}(x)_k\,|\,\hat{\theta})\,|\,k \leq K\}), \quad (11)$$

where $\mathcal{N}(x)_k \subset \mathcal{N}(x)$, $\mathrm{var}(\cdot)$ is an element-wise operation that calculates the variance on each class over the $K$ samples, and $\mathbf{v} \in \mathbb{R}^L$ denotes the class-wise variance. In particular, we perform $K$ causal interventions with $N = \mathcal{N}(x)_k$ and then calculate the variance of the corresponding $K$ causal effects. If an entry of $\mathbf{v}$ exhibits a large value, it reflects that minor changes to the subgraph structure can cause large changes in the prediction probability of the corresponding class. According to the original classification $\hat{z}$, we select the $\hat{z}$-th entry of $\mathbf{v}$ as a representative of the Monte Carlo causal effect uncertainty, which is termed graph_var. In practice, we calculate the post-intervention predictions by repeating GCN inference $K$ times with edge dropout [39] applied, i.e., each edge has a probability $\tau$ of being removed.
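The graph_var factor can be sketched as follows; the `gcn` interface and the per-edge dropout loop are our assumptions (a batched implementation would vectorize this):

```python
import torch

@torch.no_grad()
def graph_var(gcn, x, neighbors, K=50, tau=0.15):
    """Monte Carlo causal uncertainty (Eq. 11): variance of predictions over
    K random neighbor subsets, realized here via edge dropout."""
    preds = []
    for _ in range(K):
        kept = [n for n in neighbors if torch.rand(()) > tau]  # drop each edge w.p. tau
        preds.append(gcn(x, kept))                             # f(x, N(x)_k | theta)
    v = torch.stack(preds).var(dim=0)                          # class-wise variance
    z_hat = gcn(x, neighbors).argmax()                         # original classification
    return v[z_hat]                                            # the graph_var factor
```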
Prediction confidence. There has been a surge of attention on using model predictions, such as in model distillation [13] and self-supervised learning [8]. The intuition is that a larger probability indicates higher confidence in the classification. As such, a factor of prediction reliability is the prediction confidence, i.e., trusting the prediction with higher confidence. Formally, we calculate two factors: self_conf ($\hat{y}_{\hat{z}}$) and neighbor_conf ($\hat{y}^s_{\hat{z}_s}$), respectively.

Category transition. The distribution of edges over categories is not uniform w.r.t. the ratio of intra-category and inter-category connections. Over the labeled nodes, we can calculate this distribution and form a category transition matrix $T$, where $T_{i,j}$ is the ratio of edges between categories $i$ and $j$ to the edges connecting category $i$. Figure 3 illustrates an example on the OGB-arXiv dataset (row-normalized). We can see that the probability of intra-category connection (the diagonal entries) varies in a large range ([0.19, 0.64]), and the distribution of inter-category probability is also skewed. Intuitively, such probabilities can be clues for choosing the correct prediction, e.g., $\hat{\mathbf{y}}$ might be trustworthy if $T_{\hat{z},\hat{z}}$ is high. To this end, we calculate four factors: self_self ($T_{\hat{z},\hat{z}}$), neighbor_neighbor ($T_{\hat{z}_s,\hat{z}_s}$), self_neighbor ($T_{\hat{z},\hat{z}_s}$), and neighbor_self ($T_{\hat{z}_s,\hat{z}}$).

[Figure 3: The category transition matrix on OGB-arXiv. To save space, we cut the number of categories to ten.]
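Putting the three types of factors together, the choice-model input for one node can be sketched as follows (the ordering of the seven factors is our assumption):

```python
import torch

@torch.no_grad()
def choice_factors(y_hat, y_s, T, v):
    """Distill the low-dimensional input factors of the choice model (Section 3.3).

    y_hat, y_s: original and post-intervention prediction distributions [L];
    T: category transition matrix [L, L]; v: class-wise causal variance [L].
    """
    z, z_s = y_hat.argmax(), y_s.argmax()
    return torch.stack([
        v[z],          # graph_var: Monte Carlo causal uncertainty
        y_hat[z],      # self_conf
        y_s[z_s],      # neighbor_conf
        T[z, z],       # self_self
        T[z_s, z_s],   # neighbor_neighbor
        T[z, z_s],     # self_neighbor
        T[z_s, z],     # neighbor_self
    ])
```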
4 EXPERIMENTS

We conduct experiments on seven node classification datasets to answer the following research questions:
RQ1: How effective is the proposed CGI model in resolving the local structure discrepancy issue?
RQ2: To what extent does the proposed CGI facilitate node classification under different problem settings?
RQ3: How do the distilled factors influence the effectiveness of the proposed CGI?

4.1 Experimental Settings

4.1.1 Dataset. For the full-supervised settings, we use the widely used benchmark citation network OGB-arXiv [15], which represents papers and their citation relations as nodes and edges, respectively. Each node has 128 features generated by averaging the embeddings of the words in its title and abstract, where the embeddings are learned by the skip-gram model [31]. Considering that such old-fashioned text features may not be representative, we replace the node features with a 768-dimensional vector extracted by feeding the title and abstract into RoBERTa [28] (12-layer), where the representation of the [CLS] token at the second-to-last layer is selected. For the semi-supervised settings, we adopt three widely used citation networks, Cora, Citeseer, and Pubmed, and select the 20-shot data split released by [20], where 500 and 1,000 nodes are selected for validation and testing, respectively, and 20 nodes from each category are labeled for training. Apart from the real-world graphs, we further create three synthetic ones based on Citeseer by intentionally adding cross-category edges to 50% of randomly selected nodes, which leads to local structure discrepancy between the poisoned nodes and the unaffected ones. Note that more cross-category edges lead to stronger discrepancy, making adaptive locality more critical for GCN models. In particular, according to the number of edges in the original Citeseer, we add 10%, 30%, and 50% cross-category edges, constructing Citeseer(10%), Citeseer(30%), and Citeseer(50%). Note that the data split and node features are unchanged.

4.1.2 Compared Methods. To justify the proposed CGI, we compare it with representative GCN models, including GraphSAGE [11], GCN [20], GAT [44], JKNet [55], DAGNN [27], and APPNP [21], which adopt normal inference. Apart from GCN models, we also test MLP, which discards the graph structure and treats node classification as normal text classification. Lastly, as CGI uses two predictions, we include an ensemble baseline that averages the predictions of APPNP and MLP. For these models, we use the implementations on the OGB leaderboard (https://ogb.stanford.edu/docs/leader_nodeprop/#ogbn-arxiv). If necessary, e.g., under the transductive full-supervised setting, the hyper-parameters are tuned according to the settings in the original paper of the model. For the proposed CGI, we equip the SVM with an RBF kernel (https://scikit-learn.org/stable/modules/svm.html) and apply CGI to APPNP. For the SVM, we tune the two hyper-parameters $c$ and $\gamma$ through 5-fold cross-validation, i.e., splitting the validation nodes into 5 folds. In addition, for the MCE that estimates graph uncertainty, we set the number of repeats $K$ and the edge dropout ratio $\tau$ as 50 and 0.15.
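A sketch of fitting the choice model with scikit-learn; the grid values are illustrative assumptions, while the RBF kernel and 5-fold cross-validation follow the setting above:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def fit_choice_model(factors, labels):
    """Fit the RBF-kernel SVM choice model on D, tuning C and gamma by 5-fold CV."""
    # factors: [n, 7] array of per-node input factors; labels: +1 / -1 choices.
    grid = GridSearchCV(
        SVC(kernel="rbf"),
        param_grid={"C": [0.1, 1, 10, 100], "gamma": ["scale", 0.01, 0.1, 1]},
        cv=5,
    )
    grid.fit(np.asarray(factors), np.asarray(labels))
    return grid.best_estimator_  # threshold t = 0 on the SVM decision function
```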
4.2 Effects of CGI (RQ1)
We first investigate to what extent the proposed CGI addresses the local structure discrepancy issue on the three synthetic datasets Citeseer(10%), Citeseer(30%), and Citeseer(50%). Table 1 shows the performance of APPNP, APPNP_Self, and APPNP_CGI on the three datasets, where the final prediction is the original prediction (i.e., $\hat{\boldsymbol{y}}$), the post-intervention prediction (i.e., $\hat{\boldsymbol{y}}^{s}$), and the final prediction from CGI (i.e., $\bar{\boldsymbol{y}}$), respectively. Note that since the graph structures of the three datasets are different, the APPNP models trained on them will also differ. As such, APPNP_Self obtains different performance on the three datasets, even though the node features are unchanged.

Table 1: Performance of APPNP's original prediction, post-intervention prediction, and CGI prediction on the three synthetic datasets w.r.t. accuracy. RI means the relative improvement over APPNP achieved by APPNP_CGI.

| Dataset    | Citeseer(10%) | Citeseer(30%) | Citeseer(50%) |
|------------|---------------|---------------|---------------|
| APPNP      | 71.0%         | 64.4%         | 64.2%         |
| APPNP_Self | 65.1%         | 62.9%         | 64.3%         |
| APPNP_CGI  | 71.8%         | 66.9%         | 68.6%         |
| RI         | 1.1%          | 3.9%          | 7.2%          |

From the table, we have the following observations:
• In all cases, APPNP_CGI outperforms APPNP, which validates the effectiveness of the proposed CGI. The performance gain is attributed to the further consideration of adaptive locality during GCN inference. In particular, the relative improvement over APPNP achieved by APPNP_CGI ranges from 1.1% to 7.2% across the three datasets. CGI achieves larger improvements over the conventional one-pass GCN inference as more cross-category edges are injected, i.e., when facing a more severe local structure discrepancy issue. The result further exhibits the capability of CGI to address the local structure discrepancy issue.
• As more cross-category edges are added, APPNP suffers a severe performance drop, from an accuracy of 71.0% to 64.2%. This result is reasonable since GCN is vulnerable to cross-category edges, which push the representations of nodes in different categories to be close [4]. Considering that APPNP already accounts for adaptive locality during model training, this result validates that adjusting the GCN architecture is insufficient to address the local structure discrepancy issue.
• As to APPNP_Self, the performance across the three datasets is comparable. It indicates that the cross-category edges may not hinder the GCN from encoding the association between target node features and the label. Therefore, the performance drop of APPNP when adding more cross-category edges is largely due to improper neighbor aggregation without thorough consideration of the local structure discrepancy issue. Furthermore, on Citeseer(50%), the performance of APPNP_Self is comparable to APPNP, which indicates that the effect of considering adaptive locality during training is limited if the discrepancy is very strong.
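The APPNP_CGI numbers above come from choosing between the two inferences per node. A minimal sketch of that decision step, under the assumption that the trained choice model outputs which of the two predictions to trust (the binary convention is illustrative):

```python
def cgi_final_prediction(choice_model, factors, z_orig, z_post):
    """Return the CGI prediction for one node (a sketch).

    z_orig / z_post: classes from the original and the post-intervention
    inference; factors: the node's distilled factor vector.
    """
    if z_orig == z_post:                              # no conflict, no choice needed
        return z_orig
    trust_post = choice_model.predict([factors])[0]   # 1: trust post-intervention
    return z_post if trust_post else z_orig
```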
4.3 Performance Comparison (RQ2)
To further verify the proposed CGI, we conduct a performance comparison under both full-supervised and semi-supervised settings on the real-world datasets.

4.3.1 Semi-supervised setting. We first investigate the effect of CGI under the semi-supervised setting by comparing APPNP_CGI with APPNP, APPNP_Self, and APPNP_Ensemble. The four methods correspond to four inference mechanisms: 1) conventional one-pass GCN inference (APPNP); 2) causal intervention without consideration of graph structure (APPNP_Self); 3) ensemble of APPNP and APPNP_Self (APPNP_Ensemble); and 4) the proposed CGI. Note that the four inference mechanisms are applied to the same APPNP model with exactly the same parameters. Table 2 shows the node classification performance on three real-world datasets: Cora, Citeseer, and Pubmed.

Table 2: Performance of APPNP with different inference mechanisms on three semi-supervised node classification datasets w.r.t. the classification accuracy. RI means the relative improvement of APPNP_CGI over APPNP_Ensemble.

| Dataset        | Cora  | Citeseer | Pubmed |
|----------------|-------|----------|--------|
| APPNP          | 81.8% | 72.6%    | 79.8%  |
| APPNP_Self     | 69.3% | 66.5%    | 75.9%  |
| APPNP_Ensemble | 78.0% | 71.4%    | 79.2%  |
| APPNP_CGI      | 82.3% | 73.7%    | 81.0%  |
| RI             | 5.5%  | 2.8%     | 2.3%   |

From the table, we have the following observations:
• On the three datasets, the performance of APPNP_Self is largely worse than APPNP, i.e., omitting the graph structure during GCN inference leads to a sharp performance drop under the semi-supervised setting, which shows the importance of considering neighbors. Note that the performance of APPNP_Self largely surpasses the performance of MLP reported in [20], which highlights the difference between performing the causal intervention $do(N = \emptyset)$ on a GCN model and the inference of an MLP trained without consideration of the graph structure.
• In all cases, APPNP_Ensemble performs worse than APPNP, which is one of the base models of the ensemble. The inferior performance of APPNP_Ensemble is mainly because of the huge gap between the performance of APPNP and APPNP_Self. From these results, we can conclude that, under the semi-supervised setting, simply aggregating the original prediction and the post-intervention prediction does not necessarily lead to better adaptive locality. Furthermore, the results validate the rationality of a carefully designed inference mechanism.
• In all cases, APPNP_CGI achieves the best performance. The performance gain is attributed to the choice model, which further validates the effectiveness of the proposed CGI. That is, it is essential to learn an inference model from the data, which accounts for the causal analysis of the original prediction. Moreover, this result reflects the potential of enhancing the inference mechanism of GCN for better decision making, especially via causality-oriented analysis, which deserves further exploration in future research.
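For contrast with CGI's learned hard choice, the APPNP_Ensemble mechanism compared above can be sketched as a simple average of the two probability vectors:

```python
import numpy as np

def ensemble_prediction(p_orig, p_post):
    """APPNP_Ensemble-style inference (a sketch): average the original
    and the post-intervention class probabilities, then take the argmax."""
    return int(np.argmax((np.asarray(p_orig) + np.asarray(p_post)) / 2.0))
```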
4.3.2 Full-supervised setting. We then investigate the effect of CGI under the full-supervised setting. Note that we test the models under both inductive and transductive settings on the OGB-arXiv dataset. As OGB-arXiv is a widely used benchmark, we also test the baseline methods. Table 3 shows the node classification performance of the compared methods on the OGB-arXiv dataset w.r.t. accuracy. Apart from the RoBERTa features, we also report the performance of baseline models with the original Word2Vec features.

Table 3: Performance comparison under full-supervised settings. We use bold font and underline to highlight the best and second-best performance under each setting.

| Feature        | Method         | Inductive | Transductive |
|----------------|----------------|-----------|--------------|
| Word2Vec (128) | MLP            | 55.84%    | 55.84%       |
|                | GraphSAGE      | 71.43%    | 71.52%       |
|                | GCN            | 71.83%    | 71.96%       |
|                | GAT            | 71.93%    | 72.04%       |
|                | JKNet          | 72.25%    | 72.48%       |
|                | DAGNN          | 72.07%    | 72.09%       |
|                | APPNP          | 71.61%    | 71.67%       |
| RoBERTa (768)  | JKNet          | 75.59%    | 75.54%       |
|                | MLP            | 72.26%    | 72.26%       |
|                | DAGNN          | 74.93%    | 74.83%       |
|                | APPNP          | 75.74%    | 75.61%       |
|                | APPNP_Self     | 73.43%    | 73.38%       |
|                | APPNP_Ensemble | 76.26%    | 75.86%       |
|                | APPNP_CGI      | 76.52%    | 76.07%       |

From the table, we have the following observations:
• The performance gap between MLP and GCN models is largely bridged when replacing the Word2Vec features with the more advanced RoBERTa features. In particular, the relative performance improvement of GCN models over MLP shrinks from 27.9% to 3.7%. The result raises a concern that the merit of GCN models might be unintentionally exaggerated [20] due to the low quality of node features.
• Moreover, compared to APPNP, DAGNN performs better when using the Word2Vec features but worse when using the RoBERTa features. It suggests accounting for feature quality in future research that investigates the capability of GCNs or compares different GCN models.
• As to the RoBERTa features, APPNP_Ensemble performs slightly better than its base models, i.e., APPNP_Self and APPNP. This result differs from the result in Table 2 under the semi-supervised setting, where the performance of APPNP_Self is inferior. We thus believe that improving the accuracy of the post-intervention prediction will benefit GCN inference [66]. As averaging the predictions of APPNP_Self and APPNP can also be seen as a choosing strategy based on comparing model confidence, the performance gain indicates the benefit of considering adaptive locality during inference under the full-supervised setting.
• APPNP_CGI further outperforms APPNP_Ensemble under both inductive and transductive settings, which is attributed to the choice model that learns to make choices from patterns of the causal uncertainty, prediction confidence, and category transition factors. This result thus also shows the merit of characterizing the predictions of GCN models with the distilled factors.
• In all cases, the model achieves comparable performance under the inductive setting and the transductive setting. We postulate that the local structure discrepancy between training and testing nodes in the OGB-arXiv dataset is weak, which makes it hard for CGI to achieve huge improvements. In the following, the experiments focus on the inductive setting, which is closer to real-world scenarios that aim to serve upcoming nodes.

4.4 In-depth Analysis (RQ3)
4.4.1 Effects of Distilled Factors. We then study the effects of the distilled factors as the inputs of the choice model in CGI. In particular, we compare the factors w.r.t. the performance of CGI by removing one factor in each round, where lower performance indicates a larger contribution of the removed factor. Note that we report the accuracy regarding whether CGI makes the correct choice for testing nodes, rather than the accuracy of node classification. That is to say, here we only consider "conflict" testing nodes where the two inferences of CGI (i.e., APPNP and APPNP_Self) produce different classifications. Figure 4 shows the performance on OGB-arXiv under the inductive setting, where 6,810 nodes among the 47,420 testing nodes are identified as conflict nodes. We omit the results on the other datasets under the semi-supervised setting to save space; they show a similar trend.

[Figure 4: Illustration of factor influence on CGI.]

From the figure, we have the following observations: 1) Discarding any factor leads to a performance drop compared to the case with all factors as inputs of the choice model (i.e., All factors). This result indicates the effectiveness of the identified factors in characterizing GCN predictions, which facilitates making the correct choice. 2) Among the factors, removing self_conf and neighbor_conf leads to the largest performance drop, showing that the prediction confidence is the most informative factor regarding the reliability of the prediction. 3) In all cases, the performance of CGI surpasses the Majority class, which always chooses the original GCN prediction, i.e., where CGI degrades to the conventional one-pass inference. The result further validates the rationality of additionally considering adaptive locality during GCN inference, i.e., choosing between the original prediction and the post-intervention prediction made without consideration of neighbors. Lastly, considering that the "conflict" nodes account for 14.4% of the testing nodes (6,810/47,420) and the accuracy of CGI's choices is 66.53%, there is still a large area for future exploration.
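A sketch of the leave-one-factor-out protocol behind Figure 4; the factor names follow the earlier definitions, and the choice accuracy is measured only on the conflict nodes (illustrative code, not the original script):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

FACTORS = ["graph_var", "self_conf", "neighbor_conf",
           "self_self", "neighbor_neighbor", "self_neighbor", "neighbor_self"]

def factor_ablation(X_conflict, y_choice, C=1.0, gamma=0.1):
    """Remove one factor per round; a lower choice accuracy after the
    removal indicates a larger contribution of that factor."""
    scores = {}
    for i, name in enumerate(FACTORS):
        X_drop = np.delete(X_conflict, i, axis=1)
        clf = SVC(kernel="rbf", C=C, gamma=gamma)
        scores[name] = cross_val_score(clf, X_drop, y_choice, cv=5).mean()
    return scores
```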
4.4.2 Study on causal uncertainty. Recall that we propose a Monte Carlo causal effect uncertainty (MCE) estimation to estimate the uncertainty of the neighbors' causal effect. We then investigate to what extent the MCE sheds light on the correctness of GCN predictions. Figure 5(a) shows the group-wise performance of APPNP on OGB-arXiv, where the testing nodes are ranked according to the value of graph_var in ascending order and split into ten equal-size groups. Note that we select OGB-arXiv for its relatively large scale, where the testing set includes 47,420 nodes.

[Figure 5: Group-wise illustration of the causal uncertainty and the confidence of APPNP predictions, where MCE stands for the Monte Carlo causal effect uncertainty estimation (i.e., graph_var): (a) MCE; (b) Confidence (Conf); (c) Conf and MCE redundancy.]

From the figure, we can see a clear trend that the classification performance decreases as the MCE increases. It means that the calculated MCE is informative for the correctness of GCN predictions. For instance, a prediction has a higher chance of being correct if its MCE is low. As a reference, in Figure 5(b), we further depict the group-wise performance w.r.t. the prediction confidence (i.e., neighbor_conf). In particular, the testing nodes are ranked according to the value of neighbor_conf in descending order. As can be seen, there is also a clear trend of prediction performance regarding the confidence, i.e., the probability of being correct is higher if APPNP is more confident in the prediction. To investigate whether MCE and GCN confidence are redundant, we further calculate the overlap ratio between the groups split by graph_var and the ones split by neighbor_conf. Figure 5(c) illustrates the matrix of overlap ratios. As can be seen, the weights are not concentrated on the diagonal entries. In particular, there are only two group pairs with an overlap ratio higher than 0.5, which means that the MCE reveals properties of GCN predictions complementary to the confidence. That is, causal analysis indeed characterizes GCN predictions from distinct perspectives.
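The group-wise curves and the overlap matrix of Figure 5 can be reproduced with a few lines; a sketch under the assumption that `scores` holds a factor's values and `correct` the 0/1 correctness of APPNP's predictions:

```python
import numpy as np

def groupwise_accuracy(scores, correct, num_groups=10):
    """Rank nodes by a factor value, split them into equal-size groups,
    and report each group's classification accuracy."""
    order = np.argsort(scores)                     # ascending for graph_var
    return [correct[g].mean() for g in np.array_split(order, num_groups)]

def overlap_ratio_matrix(scores_a, scores_b, num_groups=10):
    """Overlap ratios between the groupings induced by two factors."""
    groups_a = np.array_split(np.argsort(scores_a), num_groups)
    groups_b = np.array_split(np.argsort(scores_b), num_groups)
    m = np.zeros((num_groups, num_groups))
    for i, ga in enumerate(groups_a):
        for j, gb in enumerate(groups_b):
            m[i, j] = len(set(ga) & set(gb)) / len(ga)
    return m
```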
4.4.3 Training with trustworthiness signal. To further investigate the benefit of performing adaptive locality during model inference, we conduct a study on OGB-arXiv to test whether a GCN equipped with adaptive locality modules can really assess the trustworthiness of neighbors. Three representative GCN models with consideration of adaptive locality, APPNP, JKNet, and DAGNN, are tested under three different configurations:
• Self+Neighbor: This is the standard configuration of a GCN model that accounts for the graph structure, i.e., trusting neighbors.
• Self+Neighbor_Trust: Compared to Self+Neighbor, a trustworthy feature is associated with each node, which indicates the "ground truth" of trusting self or neighbors. In particular, we train the GCN model, infer the original prediction and the post-intervention prediction, and calculate the trustworthy feature according to Equation (10) (i.e., $p$). For the nodes where the original classification and the post-intervention classification are equal, we set the value to 0. By explicitly incorporating such a value as a node feature, it should be easy for the GCN to learn to properly perform adaptive locality if the module works properly.
• As a reference, we study the GCNs in an ideal case, named Self+Neighbor_Bound, where the trustworthy feature is also given when performing adaptive locality during model inference.

Table 4: Node classification performance of three GCN models, APPNP, JKNet, and DAGNN, on OGB-arXiv.

| Model               | APPNP | JKNet | DAGNN |
|---------------------|-------|-------|-------|
| Self+Neighbor       | 76.03 | 75.69 | 75.28 |
| Self+Neighbor_Trust | 78.30 | 75.71 | 78.61 |
| Self+Neighbor_Bound | 81.40 | 81.96 | 82.03 |

Table 4 shows the model performance under the node classification setting of inductive full-supervised learning. From the table, we have the following observations:
• Compared to Self+Neighbor, all three models, especially APPNP and DAGNN, achieve better performance under the configuration of Self+Neighbor_Trust. It indicates a better usage of the graph structure, which is attributed to the trustworthy feature. The result thus highlights the importance of modeling neighbor trustworthiness and performing adaptive locality.
• However, there is a large gap between Self+Neighbor_Trust and Self+Neighbor_Bound, showing the underuse of the trustworthy feature by the current adaptive locality methods. We postulate the reason to be the gap between the training objective, i.e., associating node representations with labels, and the target of identifying trustworthy neighbors, which is the limitation of considering adaptive locality in model training. The performance under Self+Neighbor_Bound also reveals the potential of considering adaptive locality in model inference.

Furthermore, we study the impact of the trustworthy feature on model training. Figure 6 illustrates the training loss along the training procedure of the tested GCNs under the configurations of Self+Neighbor and Self+Neighbor_Trust. It should be noted that we select the period from 200 to 750 epochs for better visualization.

[Figure 6: Training loss on OGB-arXiv under the Self+Neighbor and Self+Neighbor_Trust configurations.]

From the figure, we can see that, in all three cases, the loss of the GCN under Self+Neighbor_Trust is smaller than that under Self+Neighbor. The result shows that the trustworthy feature facilitates the GCN model in fitting the training data, i.e., capturing the correlation between the node label and the node features as well as the graph structure. However, the adaptive locality module, especially graph attention, is distracted from the target of assessing neighbor trustworthiness. Theoretically, the graph attention could achieve the target by simply recognizing the value of the trustworthy feature from the inputs. For instance, the hop attention in DAGNN should highlight the target node representation at layer 0 if the trustworthy feature is 1.
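A sketch of assembling the trustworthy node feature for Self+Neighbor_Trust; the value $p$ comes from Equation (10), which is not reproduced in this excerpt, so a precomputed array stands in for it:

```python
import numpy as np

def trustworthy_feature(p, z_orig, z_post):
    """Per-node trustworthy feature (a sketch): Equation (10)'s p, zeroed
    for nodes whose original and post-intervention classifications agree."""
    feat = np.asarray(p, dtype=float).copy()
    feat[np.asarray(z_orig) == np.asarray(z_post)] = 0.0
    return feat  # concatenated to the node features before (re)training the GCN
```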
5 RELATED WORK
Graph Convolutional Network. According to the format of the convolution operations, existing GCN models can be divided into two categories: spatial GCN and spectral GCN [65]. Spectral GCN is defined as performing convolution operations in the Fourier domain with spectral node representations [2, 10, 20, 25, 52]. For instance, Bruna et al. [2] perform convolution over the eigenvectors of the graph Laplacian, which are treated as the Fourier basis. Due to the high computational cost of the eigen-decomposition, a line of spectral GCN research has focused on accelerating the eigen-decomposition with different approximation techniques [10, 20, 25, 52]. However, applying such spectral GCN models on large graphs still incurs unaffordable memory costs, which hinders their practical application. To some extent, the attention of GCN research has been largely dedicated to spatial GCN, which performs convolution operations directly over the graph structure by aggregating the features from spatially close neighbors to a target node [1, 11, 20, 44, 45, 51, 54]. This line of research mainly focuses on the development of the neighbor aggregation operation. For instance, Kipf and Welling [20] propose to use a linear aggregator (i.e., a weighted sum) that uses the inverse of node degree as the coefficient. In addition to aggregating information from directly connected neighbors, augmented aggregators also account for multi-hop neighbors [18, 53]. Moreover, non-linear aggregators are also employed in spatial GCNs, such as capsule [45] and Long Short-Term Memory (LSTM) [11] aggregators. Besides, the general spatial GCN designed for simple graphs has been extended to graphs with heterogeneous nodes [50] and temporal structure [34]. Beyond model design, there are also studies on model capability analysis [54], model explanation [58], and training schemes [16]. However, most of the existing studies focus on the training stage and blindly adopt one-pass forward propagation for GCN inference. This work is in an orthogonal direction, which improves the inference performance with a causal inference mechanism so as to better solve the local structure discrepancy issue. Moreover, to the best of our knowledge, this work is the first to introduce causal intervention and causal uncertainty into GCN inference.

Adaptive Locality. Amongst GCN research, a surge of attention has been especially dedicated to solving the over-smoothing issue [24]. Adaptive locality has become a promising solution to alleviate the over-smoothing issue, which is typically achieved by the attention mechanism [6, 9, 44, 46, 47, 50, 55, 61] or residual connections [7, 20, 23]. Along the line of research on attention design, integrating context information into the calculation of attention weights is one of the most popular techniques. For instance, Wang et al. [47] treat the neighbors at different hops as augmentations of the attention inputs. Moreover, to alleviate the issue of lacking direct supervision, Wang et al. [46] introduce additional constraints to facilitate attention learning. Similar to Convolutional Neural Networks, residual connections have also been introduced to the original design of GCN [20], which connect each layer to the output directly. In addition to the vanilla residual connection, revised versions have also been introduced, such as the pre-activation residual [23] and the initial residual [7]. Besides, the concept of the inception module has also been introduced to GCN models [19], which incorporates graph convolutions with different receptive fields. For the existing methods, the adaptive locality mechanism is fixed once the GCN model is trained. Instead, this work explores adaptive locality during model inference, which is in an orthogonal direction.

Causality-aware Model Prediction. A surge of attention is being dedicated to incorporating causality into the ML schema [17, 32, 56, 60, 62, 63]. A line of research focuses on enhancing the inference stage of ML models from the cause-effect view [33, 41, 48, 59]. This work differs from them for two reasons: 1) none of the existing work studies GCN; and 2) we learn a choice model to make the final prediction from causal intervention results instead of performing a heuristic causal inference to make the final prediction.

6" + }, + { + "url": "http://arxiv.org/abs/1810.09936v2", + "title": "Enhancing Stock Movement Prediction with Adversarial Training", + "abstract": "This paper contributes a new machine learning solution for stock movement\nprediction, which aims to predict whether the price of a stock will be up or\ndown in the near future.
The key novelty is that we propose to employ\nadversarial training to improve the generalization of a neural network\nprediction model. The rationality of adversarial training here is that the\ninput features to stock prediction are typically based on stock price, which is\nessentially a stochastic variable and continuously changed with time by nature.\nAs such, normal training with static price-based features (e.g. the close\nprice) can easily overfit the data, being insufficient to obtain reliable\nmodels. To address this problem, we propose to add perturbations to simulate\nthe stochasticity of price variable, and train the model to work well under\nsmall yet intentional perturbations. Extensive experiments on two real-world\nstock data show that our method outperforms the state-of-the-art solution with\n3.11% relative improvements on average w.r.t. accuracy, validating the\nusefulness of adversarial training for stock prediction task.", + "authors": "Fuli Feng, Huimin Chen, Xiangnan He, Ji Ding, Maosong Sun, Tat-Seng Chua", + "published": "2018-10-13", + "updated": "2019-06-01", + "primary_cat": "q-fin.TR", + "cats": [ + "q-fin.TR", + "cs.CE", + "cs.LG" + ], + "main_content": "Introduction Stock market is one of the largest financial markets, having reached a total value of 80 trillion dollars¹. Predicting the future status of a stock has always been of great interest to many players in a stock market. While the exact price of a stock is known to be unpredictable [Walczak, 2001; Nguyen et al., 2015], research efforts have been focused on predicting the stock price movement — e.g., whether the price will go up/down, or whether the price change will exceed a threshold — which is more achievable than stock price prediction [Adebiyi et al., 2014; Feng et al., 2018; Xu and Cohen, 2018].

* Xiangnan He is the corresponding author.
¹ https://data.worldbank.org/indicator/CM.MKT.TRAD.CD?view=chart.

[Figure 1: Training process of Attentive LSTM with L2 regularization coefficients of 0, 0.01, and 0.1: (a) Training; (b) Validation.]

Stock movement prediction can be addressed as a classification task. After defining the label space and the features that describe a stock at a time, we can apply standard supervised learning methods such as support vector machines [Huang et al., 2005] and neural networks [Xu and Cohen, 2018] to build the predictive model. Although technically feasible, we argue that such methods could suffer from weak generalization due to the highly stochastic property of the stock market. Figure 1 provides empirical evidence of the weak generalization, where we split the data into training and validation by time, and train an Attentive LSTM model [Qin et al., 2017] on the historical prices of stocks to predict their movements. From Figure 1(a), we can see the training loss gradually decreases with more training epochs, which is as expected. However, the validation loss shown in Figure 1(b) does not exhibit a decreasing trend; instead, it only fluctuates around the initialization state without a clear pattern. In other words, the benefits of the model learned on training examples do not translate to improvements in predicting unknown validation examples. We have thoroughly explored L2 regularization (results of the different lines), a common technique to improve model generalization; however, the situation has not improved.
We postulate the reason is that standard classification methods are assumed to learn from static inputs, such as pixel values in images and term frequencies in documents. When dealing with stochastic variables such as stock price, the static input assumption does not hold and such methods fail to generalize well. Specifically, existing methods for stock prediction typically feed in price-based features, such as the price at a particular time-step or the average price over multiple time-steps [Edwards et al., 2007; Nelson et al., 2017]. Since a stock's price continuously changes with time (during market hours), price-based features are essentially stochastic variables, being fundamentally different from the traditional static inputs. To be more specific, the features of a training instance can be seen as a "sample" drawn from the distribution of input variables at a particular time-step. Without properly handling the stochasticity of input variables, the method can easily overfit the training data and suffer from weak generalization ability.

In this work, we propose to employ adversarial training to account for the stochastic property of the stock market when learning a stock movement prediction model. Our primary consideration is that, given a training example at a particular time-step with fixed input features, the trained model is expected to generate the same prediction on other samples drawn from the inherent distribution of the input variables. To implement this idea, we can generate additional samples (a simulation of the stochasticity) by adding small perturbations on input features, and train the model to perform well on both clean examples and perturbed examples. This is the adversarial training method that has been commonly used in computer vision tasks [Kurakin et al., 2017]. However, the problem is that the features of stock prediction models are usually sequential (see Figure 2), such that adding perturbations on the features of all time units can be very time-consuming; moreover, it may cause unintentional interactions among the perturbations of different units, which are uncontrollable.

To resolve this concern, we instead add perturbations on the high-level prediction features of the model, e.g., the last layer, which is directly projected to the final prediction. Since most deep learning methods learn abstract representations in the higher layers, their sizes are usually much smaller than the input size. As such, adding perturbations to high-level features is more efficient, and meanwhile it can also retain the stochasticity. We implement our adversarial training proposal on an Attentive LSTM model, which is a highly expressive model for sequential data. We add perturbations to the prediction features of the last layer, and dynamically optimize the perturbations to make them change the model's output as much as possible. We then train the model to make it perform well on both clean features and perturbed features. As such, the adversarial training process can be understood as enforcing a dynamic regularizer, which stabilizes the model training and makes the model perform well under stochasticity.

The main contributions of this paper are summarized as:
• We investigate the generalization difficulty in stock movement prediction and highlight the necessity of dealing with the stochastic property of input features.
• We propose an adversarial training solution to address the stochastic challenge, and implement it on a deep learning model for stock movement prediction.
• We conduct extensive experiments on two public benchmarks, validating improvements over several state-of-the-art methods and showing that adversarial learning makes the classifier more robust and more generalizable.

2 Problem Formulation
We use bold capital letters (e.g., $\mathbf{X}$) and bold lower letters (e.g., $\mathbf{x}$) to denote matrices and vectors, respectively. In addition, normal lower case letters (e.g., $x$) and Greek letters (e.g., $\lambda$) are used to represent scalars and hyper-parameters, respectively. All vectors are in column form, if not otherwise specified. The symbols $\tanh$ and $\sigma$ stand for the hyperbolic tangent function and sigmoid function, respectively.

[Figure 2: Illustration of the Attentive LSTM.]

The formulation of the stock movement prediction task is to learn a prediction function $\hat{y}^s = f(\mathbf{X}^s; \Theta)$ which maps a stock ($s$) from its sequential features ($\mathbf{X}^s$) to the label space. In other words, the function $f$ with parameters $\Theta$ aims to predict the movement of stock $s$ at the next time-step from the sequential features $\mathbf{X}^s$ in the latest $T$ time-steps. $\mathbf{X}^s = [\mathbf{x}^s_1, \cdots, \mathbf{x}^s_T] \in \mathbb{R}^{D \times T}$ is a matrix which represents the sequential input features (e.g., open and close prices, as detailed in Table 1) in the lag of the past $T$ time-steps, where $D$ is the dimension of features. Assuming that we have $S$ stocks, we learn the prediction function by fitting their ground truth labels $\mathbf{y} = [y^1, \cdots, y^S] \in \mathbb{R}^S$, where $y^s$ (1/-1) is the ground truth label of stock $s$ in the next time-step. We then formally define the problem as:

Input: A set of training examples $\{(\mathbf{X}^s, y^s)\}$.
Output: A prediction function $f(\mathbf{X}^s; \Theta)$, predicting the movement of stock $s$ in the following time-step.

In the practical scenario, we could typically access a long history of each stock, and construct many training examples for each stock by moving the lag along the history. Nevertheless, we use a simplified formulation without loss of generality by only considering one specific lag (i.e., one training example for each stock) for briefness of presenting the proposed method.

3 Adversarial Attentive LSTM (Adv-ALSTM)
3.1 Attentive LSTM
The Attentive LSTM (ALSTM) mainly contains four components: feature mapping layer, LSTM layer, temporal attention, and prediction layer, as shown in Figure 2.

Feature mapping layer. Previous work shows that a deeper input gate would benefit the modeling of temporal structures of LSTM [Graves et al., 2013; Wu et al., 2018]. Inspired by their success, we employ a fully connected layer to project the input features into a latent representation. At each time-step, it performs as $\mathbf{m}^s_t = \tanh(\mathbf{W}_m \mathbf{x}^s_t + \mathbf{b}_m)$, which projects the input features to a latent space with dimensionality of $E$. $\mathbf{W}_m \in \mathbb{R}^{E \times D}$ and $\mathbf{b}_m \in \mathbb{R}^{E}$ are parameters to be learned.

LSTM layer. Owing to its ability to capture long-term dependency, LSTM has been widely used to process sequential data [Qin et al., 2017; Chen et al., 2018a]. The general idea of LSTM is to recurrently project the input sequence into a sequence of hidden representations. At each time-step, the LSTM learns the hidden representation ($\mathbf{h}^s_t$) by jointly considering the input ($\mathbf{m}^s_t$) and previous hidden representation ($\mathbf{h}^s_{t-1}$) to capture sequential dependency. We formulate it as $\mathbf{h}^s_t = \mathrm{LSTM}(\mathbf{m}^s_t, \mathbf{h}^s_{t-1})$, of which the detailed formulation can be referred to [Hochreiter and Schmidhuber, 1997]. To capture the sequential dependencies and temporal patterns in the historical stock features, an LSTM layer is applied to map $[\mathbf{m}^s_1, \cdots, \mathbf{m}^s_T]$ into hidden representations $[\mathbf{h}^s_1, \cdots, \mathbf{h}^s_T] \in \mathbb{R}^{U \times T}$ with the dimension of $U$.
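A minimal PyTorch sketch of the two layers described so far; the paper's released code uses TensorFlow, so this reconstruction is only illustrative, with $D$, $E$, and $U$ as in the text.

```python
import torch
import torch.nn as nn

class ALSTMEncoder(nn.Module):
    """Feature mapping layer + LSTM layer of ALSTM (a sketch)."""
    def __init__(self, D=11, E=16, U=16):
        super().__init__()
        self.mapping = nn.Linear(D, E)            # W_m, b_m
        self.lstm = nn.LSTM(E, U, batch_first=True)

    def forward(self, x):                         # x: [batch, T, D]
        m = torch.tanh(self.mapping(x))           # m_t = tanh(W_m x_t + b_m)
        h, _ = self.lstm(m)                       # h: [batch, T, U]
        return h
```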
Temporal Attention Layer. The attention mechanism has been widely used in LSTM-based solutions for sequential learning problems [Cho et al., 2014; Chen et al., 2018a]. The idea of attention is to compress the hidden representations at different time-steps into an overall representation with adaptive weights. The attention mechanism aims to model the fact that data at different time-steps could contribute differently to the representation of the whole sequence. For stock representation, the status at different time-steps might also contribute differently. For instance, days with the maximum and minimum prices in the lag might have higher contributions to the overall representation. As such, we use an attention mechanism to aggregate the hidden representations as

$$\mathbf{a}^s = \sum_{t=1}^{T} \alpha^s_t \mathbf{h}^s_t, \quad \alpha^s_t = \frac{\exp(\tilde{\alpha}^s_t)}{\sum_{t=1}^{T} \exp(\tilde{\alpha}^s_t)}, \quad \tilde{\alpha}^s_t = \mathbf{u}_a^\top \tanh(\mathbf{W}_a \mathbf{h}^s_t + \mathbf{b}_a), \qquad (1)$$

where $\mathbf{W}_a \in \mathbb{R}^{E' \times U}$, $\mathbf{b}_a$ and $\mathbf{u}_a \in \mathbb{R}^{E'}$ are parameters to be learned, and $\mathbf{a}^s$ is the aggregated representation that encodes the overall patterns in the sequence.

Prediction Layer. Instead of directly making the prediction from $\mathbf{a}^s$, we first concatenate $\mathbf{a}^s$ with the last hidden state $\mathbf{h}^s_T$ into the final latent representation of stock $s$,

$$\mathbf{e}^s = [{\mathbf{a}^s}^\top, {\mathbf{h}^s_T}^\top]^\top, \qquad (2)$$

where $\mathbf{e}^s \in \mathbb{R}^{2U}$. The intuition behind this is to further emphasize the most recent time-step, which is believed to be informative for the following movement [Fama and French, 2012]. With $\mathbf{e}^s$, we use a fully connected layer as the predictive function to estimate the classification confidence $\hat{y}^s = \mathbf{w}_p^\top \mathbf{e}^s + b_p$. Note that the final prediction is $\mathrm{sign}(\hat{y}^s)$.

3.2 Adversarial Training
As with most classification solutions, the normal way of training the ALSTM is to minimize the objective function

$$\Gamma = \sum_{s=1}^{S} l(y^s, \hat{y}^s) + \frac{\alpha}{2} \|\Theta\|_F^2, \quad l(y^s, \hat{y}^s) = \max(0, 1 - y^s \hat{y}^s). \qquad (3)$$

The first term is the hinge loss [Rosasco et al., 2004], which is widely used for optimizing classification models (more reasons for choosing it are explained at the end of the section). The second term is a regularizer on the trainable parameters to prevent overfitting.
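Continuing the sketch above, Equations (1)-(3) could be implemented as follows (again an illustrative PyTorch reconstruction, with `w_p = nn.Linear(2 * U, 1)` playing the role of the prediction layer; the L2 regularizer of Equation (3) corresponds to weight decay in the optimizer):

```python
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    """Equation (1): attentive aggregation of the hidden states."""
    def __init__(self, U=16, E_prime=16):
        super().__init__()
        self.W_a = nn.Linear(U, E_prime)
        self.u_a = nn.Linear(E_prime, 1, bias=False)

    def forward(self, h):                              # h: [batch, T, U]
        scores = self.u_a(torch.tanh(self.W_a(h)))     # alpha_tilde
        alpha = torch.softmax(scores, dim=1)
        return (alpha * h).sum(dim=1)                  # a^s: [batch, U]

def latent_and_confidence(encoder, attention, w_p, x):
    h = encoder(x)
    e = torch.cat([attention(h), h[:, -1, :]], dim=-1)  # Eq. (2): e^s = [a^s; h_T]
    return e, w_p(e).squeeze(-1)                        # y_hat = w_p^T e^s + b_p

def hinge_loss(y, y_hat):                               # Eq. (3): max(0, 1 - y * y_hat)
    return torch.clamp(1.0 - y * y_hat, min=0.0).mean()
```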
[Figure 3: Illustration of the Adversarial Attentive LSTM.]

Despite the wide usage of normal training, we argue that it is inappropriate for learning stock prediction models. This is because normal training assumes that the inputs are static, ignoring the stochastic property of these features (a training example is a sample drawn from the stochastic distribution of input variables). Note that the features are calculated from the stock price, which continuously changes with time and is affected by stochastic trading behaviours at a particular time-step [Musgrave, 1997]. As such, normal training might lead to a model that overfits the data and lacks generalization ability (as shown in Figure 1). Note that a model that performs well under stochasticity would make the same predictions for samples drawn from the inherent distribution. Considering that stock price is continuous, our intuition is to intentionally simulate samples by adding small perturbations on the static input features. By enforcing the predictions on the simulated samples to be the same, the model could capture the stochasticity.

Adversarial training [Goodfellow et al., 2015; Kurakin et al., 2017] implements the aforementioned intuition. It trains a model with both clean examples (i.e., examples in the training set) and adversarial examples (AEs) [Szegedy et al., 2013]. The AEs are malicious inputs generated by adding intentional perturbations to the features of clean examples. The perturbation, named the adversarial perturbation (AP), is the direction that leads to the largest change of the model prediction. Despite its success in image classification [Kurakin et al., 2017], adversarial training is infeasible to apply directly to stock prediction. This is because calculating the perturbations relies on the calculation of the gradients w.r.t. the input, which would be time-consuming (caused by the back-propagation through the time-steps of the LSTM layer). Besides, considering that the gradients of the input are dependent across different time-steps, there might be unintentional interactions among the perturbations at different time-steps, which are uncontrollable. To address these problems, we propose to generate AEs from the latent representation $\mathbf{e}^s$, as shown in Figure 3. Before introducing the calculation of AEs, we first elaborate the objective function of Adv-ALSTM:

$$\Gamma_{adv} = \sum_{s=1}^{S} l(y^s, \hat{y}^s) + \beta \sum_{s=1}^{S} l(y^s, \hat{y}^s_{adv}) + \frac{\alpha}{2} \|\Theta\|_F^2. \qquad (4)$$

The second term is an adversarial loss, where $\hat{y}^s_{adv}$ is the classification confidence of the AE of stock $s$, and $\beta$ is a hyper-parameter to balance the losses of clean and adversarial examples. By minimizing the objective function, the model is encouraged to correctly classify both clean and adversarial examples. Note that a model correctly classifying an AE can make right predictions for examples with arbitrary perturbations at the same scale. This is because the AP is the direction leading to the largest change of the model prediction. Therefore, adversarial learning could enable the ALSTM to capture the stochastic property of stock inputs. At each iteration, the latent representation of an AE ($\mathbf{e}^s_{adv}$) is generated by the following formulation,

$$\mathbf{e}^s_{adv} = \mathbf{e}^s + \mathbf{r}^s_{adv}, \quad \mathbf{r}^s_{adv} = \arg\max_{\mathbf{r}^s, \|\mathbf{r}^s\| \le \epsilon} l(y^s, \hat{y}^s_{adv}), \qquad (5)$$

where $\mathbf{e}^s$ (introduced in Equation (2)) is the final latent representation of stock $s$, $\mathbf{r}^s_{adv}$ is the associated AP, and $\epsilon$ is a hyper-parameter to explicitly control the scale of the perturbation. Since it is intractable to directly calculate $\mathbf{r}^s_{adv}$, we employ the fast gradient approximation method [Goodfellow et al., 2015]: $\mathbf{r}^s_{adv} = \epsilon \frac{\mathbf{g}^s}{\|\mathbf{g}^s\|}$, where $\mathbf{g}^s = \frac{\partial l(y^s, \hat{y}^s)}{\partial \mathbf{e}^s}$. Specifically, the calculated perturbation is the gradient of the loss function w.r.t. the latent representation $\mathbf{e}^s$ under an L2-norm constraint. Note that the gradient denotes the direction in which the loss function increases the most at the given point $\mathbf{e}^s$, i.e., it would lead to the largest change of the model prediction.
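Putting Equations (4)-(5) together, one training step could look like the following sketch, reusing the helpers from the previous blocks (the fast-gradient perturbation is taken w.r.t. the latent representation $\mathbf{e}^s$):

```python
import torch

def adv_training_step(encoder, attention, w_p, x, y, beta=0.05, eps=0.01):
    """One Adv-ALSTM-style update (a sketch of Equations (4)-(5))."""
    e, y_hat = latent_and_confidence(encoder, attention, w_p, x)
    loss_clean = hinge_loss(y, y_hat)

    # Fast gradient approximation: r_adv = eps * g / ||g||, g = dl/de.
    g = torch.autograd.grad(loss_clean, e, retain_graph=True)[0]
    r_adv = eps * g / (g.norm(dim=-1, keepdim=True) + 1e-12)

    y_hat_adv = w_p(e + r_adv.detach()).squeeze(-1)      # confidence of the AE
    return loss_clean + beta * hinge_loss(y, y_hat_adv)  # weight decay added by the optimizer
```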
Figure 4 illustrates the generation of adversarial examples. In a training iteration, given a clean example with loss larger than 0 (i.e., $y^s \hat{y}^s < 1$), an AE is generated. The model is then updated to jointly minimize the losses for clean and adversarial examples, which would enforce the margin between clean examples and the decision boundary². As such, it would help the model to predict examples with perturbations into the same class as the clean one. That is, the model could correctly predict samples drawn from the inherent stochastic distribution of inputs, capturing the stochasticity. While traditional models like support vector machines also push the decision boundary far from clean examples, adversarial training adaptively adjusts the strength of enforcing margins during the training process, since the AP ($\mathbf{r}^s_{adv}$) varies across iterations. Note that we select the hinge loss to encourage the training process to focus more on the examples close to the decision boundary.

[Figure 4: Intuitive illustration of adversarial examples; the axis is the label times the classification confidence ($y^s \hat{y}^s$), with clean (+/-) and adversarial (+/-) examples marked.]

² Minimizing the hinge loss of the AE is adjusting $\mathbf{w}_p$ to enlarge $y^s \hat{y}^s_{adv} = y^s(\mathbf{w}_p^\top \mathbf{e}^s + b) + y^s \mathbf{w}_p^\top \mathbf{r}^s_{adv}$, which would increase the first term $y^s(\mathbf{w}_p^\top \mathbf{e}^s + b) = y^s \hat{y}^s$. The results in Figure 5 (in Section 4) empirically demonstrate the effect of enforcing margins.

4 Experiments
4.1 Experimental Settings
Datasets. We evaluate the proposed method on two benchmarks for stock movement prediction, ACL18 [Xu and Cohen, 2018] and KDD17 [Zhang et al., 2017].

ACL18 contains historical data from Jan-01-2014 to Jan-01-2016 of 88 high-trade-volume stocks in the NASDAQ and NYSE markets. Following [Xu and Cohen, 2018], we first align the trading days in the history, i.e., removing weekends and public holidays that lack historical prices. We then move a lag with length $T$ along the aligned trading days to construct candidate examples (i.e., one example for a stock on every trading day). We label the candidate examples according to the movement percent of stock close prices³. Examples with movement percent ≥ 0.55% and ≤ −0.5% are identified as positive and negative examples, respectively. We temporally split the identified examples into training (Jan-01-2014 to Aug-01-2015), validation (Aug-01-2015 to Oct-01-2015), and testing (Oct-01-2015 to Jan-01-2016).

KDD17 includes a longer history ranging from Jan-01-2007 to Jan-01-2016 of 50 stocks in U.S. markets. As the dataset was originally collected for predicting stock prices rather than movements, we follow the same approach as for ACL18 to identify positive and negative examples. We then temporally split the examples into training (Jan-01-2007 to Jan-01-2015), validation (Jan-01-2015 to Jan-01-2016), and testing (Jan-01-2016 to Jan-01-2017).

³ Given a candidate example of stock $s$ in the lag $[T'-T+1, T']$, the movement percent is calculated as $p^s_{T'+1}/p^s_{T'} - 1$, where $p^s_{T'}$ is the adjusted close price of stock $s$ on day $T'$.

Features. Instead of using the raw EOD data, we define 11 temporal features ($\mathbf{x}^s_t$) to describe the trend of a stock $s$ at trading day $t$. Table 1 elaborates the features and their calculation. Our aims in defining these features are to: 1) normalize the prices of different stocks; and 2) explicitly capture the interaction of different prices (e.g., open and close).

Table 1: Features to describe the daily trend of a stock.

| Features | Calculation |
|----------|-------------|
| c_open, c_high, c_low | e.g., c_open = open_t / close_t − 1 |
| n_close, n_adj_close | e.g., n_close = close_t / close_{t−1} − 1 |
| 5-day, 10-day, 15-day, 20-day, 25-day, 30-day | e.g., 5-day = (Σ_{i=0}^{4} adj_close_{t−i} / 5) / adj_close_t − 1 |
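A pandas sketch of the 11 features in Table 1 and the labeling rule; the column names are assumptions about the raw EOD schema:

```python
import numpy as np
import pandas as pd

def daily_trend_features(df):
    """Compute the 11 temporal features of Table 1 from columns
    open, high, low, close, adj_close (one row per trading day)."""
    f = pd.DataFrame(index=df.index)
    f["c_open"] = df["open"] / df["close"] - 1
    f["c_high"] = df["high"] / df["close"] - 1
    f["c_low"] = df["low"] / df["close"] - 1
    f["n_close"] = df["close"] / df["close"].shift(1) - 1
    f["n_adj_close"] = df["adj_close"] / df["adj_close"].shift(1) - 1
    for k in (5, 10, 15, 20, 25, 30):
        f[f"{k}-day"] = df["adj_close"].rolling(k).mean() / df["adj_close"] - 1
    return f

def movement_label(adj_close):
    """+1 if the next-day movement percent >= 0.55%, -1 if <= -0.5%,
    NaN (dropped) otherwise."""
    pct = adj_close.shift(-1) / adj_close - 1
    return pct.map(lambda m: 1.0 if m >= 0.0055 else (-1.0 if m <= -0.005 else np.nan))
```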
Baselines. We compare the following methods:
• MOM: Momentum (MOM) is a technical indicator that predicts positive or negative for each example following the trend of the last 10 days.
• MR: Mean reversion (MR) predicts the movement of each example as the opposite direction of the latest price's movement towards the 30-day moving average.
• LSTM: a neural network with an LSTM layer and a prediction layer [Nelson et al., 2017]. We tune three hyper-parameters: the number of hidden units ($U$), the lag size ($T$), and the weight of the regularization term ($\lambda$).
• ALSTM: the Attentive LSTM [Qin et al., 2017], which is optimized with normal training. Similar to LSTM, we also tune $U$, $T$, and $\lambda$.
• StockNet: uses a Variational Autoencoder (VAE) to encode the stock input so as to capture the stochasticity, and a temporal attention to model the importance of different time-steps [Xu and Cohen, 2018]. Here we take our temporal features in Table 1 as inputs and tune its hidden size, dropout ratio, and auxiliary rate ($\alpha$).

Evaluation Metrics. We evaluate the prediction performance with two metrics, Accuracy (Acc) and Matthews Correlation Coefficient (MCC) [Xu and Cohen, 2018], whose ranges are $[0, 100]$ and $[-1, 1]$, respectively. Note that better performance is evidenced by higher values of the metrics.

Parameter Settings. We implement Adv-ALSTM with TensorFlow and optimize it using mini-batch Adam [Diederik and Jimmy, 2015] with a batch size of 1,024 and an initial learning rate of 0.01. We search for the optimal hyper-parameters of Adv-ALSTM on the validation set. For $U$, $T$, and $\lambda$, Adv-ALSTM inherits the optimal settings from ALSTM, which are selected via grid search within the ranges of [4, 8, 16, 32], [2, 3, 4, 5, 10, 15], and [0.001, 0.01, 0.1, 1], respectively. We further tune $\beta$ and $\epsilon$ within [0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1] and [0.001, 0.005, 0.01, 0.05, 0.1], respectively. We report the mean testing performance when Adv-ALSTM performs best on the validation set over five different runs. Code can be accessed through https://github.com/hennande/Adv-ALSTM.

4.2 Experimental Results
Performance Comparison. Table 2 shows the prediction performance of the compared methods on the two datasets regarding Acc and MCC.

Table 2: Performance comparison on the two datasets. RI denotes the relative improvement of Adv-ALSTM compared to the best baseline. The performance of StockNet is directly copied from [Xu and Cohen, 2018]; entries without ± have no reported standard deviation.

| Method    | ACL18 Acc  | ACL18 MCC   | KDD17 Acc  | KDD17 MCC   |
|-----------|------------|-------------|------------|-------------|
| MOM       | 47.01      | -0.0640     | 49.75      | -0.0129     |
| MR        | 46.21      | -0.0782     | 48.46      | -0.0366     |
| LSTM      | 53.18±5e-1 | 0.0674±5e-3 | 51.62±4e-1 | 0.0183±6e-3 |
| ALSTM     | 54.90±7e-1 | 0.1043±7e-3 | 51.94±7e-1 | 0.0261±1e-2 |
| StockNet  | 54.96      | 0.0165      | 51.93±4e-1 | 0.0335±5e-3 |
| Adv-ALSTM | 57.20      | 0.1483      | 53.05      | 0.0523      |
| RI        | 4.02%      | 42.19%      | 2.14%      | 56.12%      |

From the table, we have the following observations:
• Adv-ALSTM achieves the best results in all cases. Compared to the baselines, Adv-ALSTM exhibits an improvement of 4.02% and 42.19% (2.14% and 56.12%) on the ACL18 (KDD17) dataset regarding Acc and MCC, respectively. This justifies the effectiveness of adversarial training, which might be due to enhancing model generalization via adaptively simulating perturbations during training.
• Specifically, compared to StockNet, which captures the stochasticity of stock inputs with a VAE, Adv-ALSTM achieves significant improvements. We postulate the reason is that StockNet cannot explicitly model the scale and direction of the stochastic perturbation, since it relies on Monte Carlo sampling during the training process.
• Among the baselines, ALSTM outperforms LSTM by 1.93% and 48.69% on average w.r.t. Acc and MCC, which validates the impact of attention [Qin et al., 2017]. Besides, MOM and MR perform worse than all the machine learning-based methods, as expected, which justifies that historical patterns help in the stock prediction task.
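For reference, the two metrics reported above can be computed from ±1 labels as follows (a plain NumPy sketch):

```python
import numpy as np

def acc_and_mcc(y_true, y_pred):
    """Accuracy and Matthews Correlation Coefficient for +/-1 labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == -1) & (y_pred == -1))
    fp = np.sum((y_true == -1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == -1))
    acc = 100.0 * (tp + tn) / len(y_true)               # reported in [0, 100]
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    mcc = (tp * tn - fp * fn) / denom if denom > 0 else 0.0
    return acc, mcc
```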
Stochastic Perturbation vs. Adversarial Perturbation. We further investigate the effectiveness of adversarial training by comparing adversarial perturbations with random ones. Rand-ALSTM is a variant of Adv-ALSTM, which generates additional examples by adding random perturbations to the input of clean examples. Table 3 shows the performance of Rand-ALSTM on the two datasets.

Table 3: Performance of Rand-ALSTM on the two datasets.

| Datasets | Acc        | MCC         |
|----------|------------|-------------|
| ACL18    | 55.08±2e0  | 0.1103±4e-2 |
| KDD17    | 52.43±5e-1 | 0.0405±8e-3 |

By cross-comparing it with Table 2, we observe that: 1) Compared to Rand-ALSTM, Adv-ALSTM achieves significant improvements. For instance, its performance w.r.t. Acc on ACL18 is 3.95% better than that of Rand-ALSTM. This demonstrates that adversarial perturbations are helpful for stock prediction, similar to what is reported for the original image classification tasks [Goodfellow et al., 2015]. 2) Rand-ALSTM outperforms ALSTM, which is purely trained with clean examples, with an average improvement of 0.64% w.r.t. Acc on the two datasets. This highlights the necessity of dealing with the stochastic property of stock features.
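The Rand-ALSTM baseline differs from Adv-ALSTM only in how the perturbation is drawn; a sketch of the random counterpart, normalized to the same scale $\epsilon$ as the adversarial one (the tensor it is applied to, here the clean example's features, follows the description above):

```python
import torch

def random_perturbation(features, eps=0.01):
    """Rand-ALSTM-style perturbation (a sketch): a random direction,
    scaled to eps, added to the clean example's features."""
    r = torch.randn_like(features)
    return features + eps * r / (r.norm(dim=-1, keepdim=True) + 1e-12)
```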
Impacts of Adversarial Training. We now investigate the impacts of adversarial training to answer: 1) whether the adversarial training enforces the margin between clean examples and the decision boundary; and 2) whether the adversarial training enhances the robustness of the model against adversarial examples. Note that we only show the results on the ACL18 dataset, as the results on KDD17 admit the same observations.

Enforcing margin. Recall that the only difference between Adv-ALSTM and ALSTM is learning parameters with adversarial training versus standard training. As such, we answer the first question by comparing the classification confidences of clean examples (a larger value denotes a larger margin to the decision boundary) assigned by Adv-ALSTM and ALSTM. Figure 5 illustrates the distributions of the classification confidences assigned by ALSTM and Adv-ALSTM. As can be seen, the confidences of Adv-ALSTM are distributed over a range ([-0.6, 0.6] roughly) that is about 1.5 times larger than that of ALSTM ([-0.2, 0.3]). It indicates that adversarial training pushes the decision boundary far from clean examples, which is believed to help enhance the robustness and generalization ability of the model.

[Figure 5: Distributions of classification confidences assigned by ALSTM and Adv-ALSTM for clean examples: (a) Validation of ACL18; (b) Testing of ACL18.]

Robustness against adversarial examples. We then investigate the second question by comparing the performance of ALSTM and Adv-ALSTM on the clean and the associated adversarial examples. Figures 6(a) and 6(b) illustrate the relative performance decrease (RPD) of ALSTM and Adv-ALSTM on adversarial examples compared with the performance on clean examples. Note that a larger absolute value of RPD indicates that the model is more vulnerable to adversarial perturbations. As can be seen, the average RPD of ALSTM is 4.31 (6.34) times larger than that of Adv-ALSTM regarding Acc (MCC). This justifies the potential of enhancing model robustness with adversarial training.

[Figure 6: Robustness against adversarial examples of ALSTM and Adv-ALSTM, w.r.t. (a) Acc and (b) MCC. Each plotted number is the RPD of a model on adversarial examples compared to clean ones.]

5 Related Work
5.1 Stock Movement Prediction
Recent works on stock movement prediction mainly fall under two categories: technical analysis and fundamental analysis (FA). Technical analysis (TA) takes the historical prices of a stock as features to forecast its movement. Most recent methods in TA mine stock movements with deep models [Lin et al., 2017; Nelson et al., 2017; Chong et al., 2017]. Among them, recurrent neural networks like LSTM have become key components to capture the temporal patterns of stock prices [Nelson et al., 2017; Lin et al., 2017]. Besides, other advanced neural models, such as convolutional neural networks (CNNs) [Lin et al., 2017] and deep Boltzmann machines [Chong et al., 2017], have also been shown to be beneficial for capturing the non-linearity of stock prices. In addition to price features, FA also examines related economic, financial, and other qualitative and quantitative factors [Hu et al., 2018; Zhang et al., 2018; Li et al., 2018; Xu and Cohen, 2018]. For instance, Xu and Cohen [2018] incorporate signals from social media, which reflect opinions from general users, to enhance stock movement prediction. Specifically, they employ a VAE to learn a stock representation by jointly encoding the historical prices and the tweets mentioning the stock. Moreover, Zhang et al. [2018] further consider news events related to a stock or the associated company via a coupled matrix and tensor factorization framework. Both TA and FA studies show that price features play crucial roles in stock movement prediction. However, most of the existing works assume stock prices to be stationary and thus lack the ability to deal with their stochastic property. StockNet [Xu and Cohen, 2018] is the only exception, which tackles this problem via a VAE. The VAE encodes the inputs into a latent distribution and enforces samples from the latent distribution to be decoded with the same prediction. Generally, the philosophy behind it is similar to the simulation of stochastic perturbations, since one sample from the latent distribution can be seen as adding a stochastic perturbation to the latent representation. Compared to such methods, our perturbation is intentionally generated, which leads to the hardest examples for the model to obtain the target prediction. In addition, the proposed method can be easily adapted to other solutions for stock movement prediction.

5.2 Adversarial Learning
Adversarial learning has been intensively studied by training a classification model to defend against adversarial examples, which are intentionally generated to perturb the model. Existing works on adversarial learning mainly concentrate on computer vision tasks like image classification [Goodfellow et al., 2015; Miyato et al., 2017; Kurakin et al., 2017; Yang et al., 2018; Chen et al., 2018b].
Owing to the property that image features are typically continuous real values, adversarial examples are directly generated in the feature space. Recently, several works have extended adversarial learning to tasks with discrete inputs, such as text classification (a sequence of words) [Miyato et al., 2017], recommendation (user and item IDs) [He et al., 2018], and graph node classification (graph topology) [Dai et al., 2018; Feng et al., 2019]. Rather than in the feature space, these works generate adversarial examples from the embeddings of the inputs, such as word, user (item), and node embeddings. Although this work is inspired by these adversarial learning research efforts, it targets a distinct task, stock movement prediction, whose data are time series with a stochastic property. To the best of our knowledge, this work is the first one to explore the potential of adversarial training in time-series analytics. 6" + } + ], + "Jizhi Zhang": [ + { + "url": "http://arxiv.org/abs/2402.18240v2", + "title": "Prospect Personalized Recommendation on Large Language Model-based Agent Platform", + "abstract": "The new kind of Agent-oriented information system, exemplified by GPTs, urges\nus to inspect the information system infrastructure to support Agent-level\ninformation processing and to adapt to the characteristics of Large Language\nModel (LLM)-based Agents, such as interactivity. In this work, we envisage the\nprospect of the recommender system on LLM-based Agent platforms and introduce a\nnovel recommendation paradigm called Rec4Agentverse, comprised of Agent Items\nand Agent Recommender. Rec4Agentverse emphasizes the collaboration between\nAgent Items and Agent Recommender, thereby promoting personalized information\nservices and enhancing the exchange of information beyond the traditional\nuser-recommender feedback loop. Additionally, we prospect the evolution of\nRec4Agentverse and conceptualize it into three stages based on the enhancement\nof the interaction and information exchange among Agent Items, Agent\nRecommender, and the user. A preliminary study involving several cases of\nRec4Agentverse validates its significant potential for application. Lastly, we\ndiscuss potential issues and promising directions for future research.", + "authors": "Jizhi Zhang, Keqin Bao, Wenjie Wang, Yang Zhang, Wentao Shi, Wanhong Xu, Fuli Feng, Tat-Seng Chua", + "published": "2024-02-28", + "updated": "2024-03-05", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "main_content": "INTRODUCTION

[Figure 1: An example of interaction between an Agent Item and a user. A Travel Agent can serve as an information carrier with travel-related information, as well as engage in a dialogue with the user to exchange related information. In the depicted dialogue, the user asks to take a trip to Nepal, the Travel Agent supplies travel information about Nepal, the user shares a personal interest in hiking to Everest Base Camp, and the Travel Agent responds with a travel plan for Nepal designed for the user.]

Large Language Model (LLM)-based Agents have garnered widespread attention in various fields due to their astonishing capabilities such as natural language communication [33, 37], instruction following [40, 42], and task execution abilities [34, 38, 62].
Such astonishing capabilities hold the potential to extend the format of information carriers and the way of information exchange. On one hand, LLM-based Agents can evolve into various domain experts, forming novel information carriers with domain-specific knowledge [1, 42]. For example, a Travel Agent can retain travel-related information within its parameters. On the other hand, LLM-based Agents showcase a novel form of information exchange, facilitating more intuitive and natural interactions with users through dialogue and task execution [39, 51]. Figure 1 shows an example of such information exchange, where users engage in a dialogue with a Travel Agent to obtain travel information and complete travel planning.

[Figure 2: Illustration of the Rec4Agentverse paradigm. The left portion of the diagram depicts three roles in Rec4Agentverse: user, Agent Recommender, and Agent Item, along with their interconnected relationships. In contrast to traditional recommender systems, Rec4Agentverse has more intimate relationships among the three roles. For instance, there are multi-round interactions between 1) users and Agent Items, and 2) Agent Recommender and Agent Items. The right side of the diagram demonstrates that Agent Recommender can collaborate with Agent Items (e.g., Fashion Agent, Travel Agent, Education Agent, Music Agent, Photo Agent) to affect the information flow of users and offer personalized information services.]

Along with the increase of LLM-based Agents in various domains, Agent platforms (e.g., GPTs¹) represent a novel kind of information system with Agent-oriented information gathering, storing, and exchanging. Consequently, the infrastructure of information systems needs to be expanded to support information processing at the Agent level and to accommodate the significant properties of Agents like interactivity, intelligence, and proactiveness [42, 51]. Within the infrastructure, the recommender system is a key cornerstone, which greatly affects how information flows in the information system regarding efficiency, user experience, and many other factors. Therefore, it is essential to envision how the recommender system can function on the LLM-based Agent platform.

¹ https://chat.openai.com/gpts.

To this end, we propose a novel recommendation paradigm for the LLM-based Agent platform, named Rec4Agentverse. As illustrated in Figure 2, Rec4Agentverse includes two key concepts: Agent Recommender and Agent Item. Agent Item means treating LLM-based Agents as items in the recommender system. Agent Recommender is employed to recommend personalized Agent Items for each user. In contrast to items in traditional recommender systems, Agent Items have properties such as interactivity, intelligence, and proactiveness. Such properties make it possible for Agent Items and Agent Recommender to collaborate and share user information², facilitating personalized information delivery. For example, once a Travel Agent is recommended to a user, it can continuously discern the user's preferences regarding travel during their interaction and convey these preferences back to the Agent Recommender.
We envision three stages for the development of Rec4Agentverse to increasingly support the interaction and information exchange among the user, Agent Recommender, and Agent Items.
• Stage 1: User-Agent Interaction. Rec4Agentverse adds the information exchange between the user and Agent Item. For instance, Agent Recommender will recommend an Agent Item to a user based on personal needs and preferences. The Agent Item engages in a dialogue with the user, subsequently providing information for the user and also acquiring user information.
• Stage 2: Agent-Recommender Collaboration. Rec4Agentverse then enables the information exchange between Agent Item and Agent Recommender. For example, Agent Item can transmit the latest preferences of the user back to Agent Recommender, and Agent Recommender can give new instructions to Agent Item.
• Stage 3: Agents Collaboration. Rec4Agentverse then supports the collaboration among Agent Items to further facilitate the exchange of information and enhance personalized information services for users. During this stage, diverse Agent Items can participate in information sharing and collaboration.
We explore the preliminary instantiation of the Rec4Agentverse paradigm in several cases, showcasing its significant application potential. Furthermore, we introduce the potential application scenarios of Rec4Agentverse, as well as the issues and challenges in its application, inspiring future exploration. Our contributions can be summarized as follows:
• We propose Rec4Agentverse, a novel recommendation paradigm for the LLM-based Agent platform, providing users with personalized agent services.
• We envision three milestones in the evolution of Rec4Agentverse. Besides, we introduce potential research directions, application domains, and application challenges of Rec4Agentverse, significantly facilitating future exploration.
• We conduct a preliminary feasibility study for Rec4Agentverse, demonstrating that existing LLM-based Agents are promising for instantiating Rec4Agentverse.

2 REC4AGENTVERSE PARADIGM
In this section, we give an overview of Rec4Agentverse. The LLM-based Agent platform emerges as a new information system in terms of novel information carriers and new ways of information exchange. We thus propose to revolutionize recommender systems for this new information system and introduce Rec4Agentverse. Firstly, we elucidate the different parts within Rec4Agentverse (Section 2.1). Subsequently, we contemplate the three stages of Rec4Agentverse from the perspective of information flow (Section 2.2). Lastly, we proffer potential applications of this paradigm in various domains (Section 2.3), explore pertinent research topics (Section 2.4), and discuss potential challenges and risks in its application (Section 2.5).

2.1 Roles of Rec4Agentverse
The Rec4Agentverse paradigm consists of three roles: the user, the Agent Recommender, and the Agent Item, as illustrated in Figure 3. The user, just like in traditional recommender systems, interacts with both Agent Items and Agent Recommender and gives feedback. Therefore, our primary focus will be on discussing the concepts that differ significantly from traditional recommender systems, namely Agent Item and Agent Recommender.

2.1.1 Agent Item.
The Agent Item is the most distinct aspect of the Rec4Agentverse paradigm compared to traditional recommendation paradigms. Unlike the conventional item in a traditional recommender system, the item in the Rec4Agentverse paradigm transforms into an LLM-based Agent. As illustrated in Figure 3, the Agent Item can not only interact with users but also collaborate with the Agent Recommender and other Agent Items. The creation process and origins of Agent Items could be diverse and varied. For instance, the creation process of Agent Items can involve training with domain-specific data or directly constructing Agent Items through prompts. Regarding their origin, Agent Items could be generated automatically by the LLM-based Agent platform, created by users, or created collaboratively by both users and the platform.

Figure 3: Three stages of Rec4Agentverse. The bidirectional arrows depicted in the figure symbolize the flow of information. During the first stage of User-Agent Interaction, information flows between the user and Agent Item. In the Agent-Recommender Collaboration stage, information flows between Agent Item and Agent Recommender. For the Agents Collaboration stage, information flows among various Agent Items.

2.1.2 Agent Recommender. Agent Recommender aims to recommend LLM-based Agents to users. Its function is similar to that of traditional recommender systems, which infer user preferences based on collected user information (e.g., attributes and behaviors) to recommend new items. However, unlike traditional recommender systems, the items recommended by Agent Recommender are LLM-based Agents, which imbues it with distinctive characteristics set apart from conventional recommenders. Within the Rec4Agentverse paradigm, Agent Recommender is expected to possess enhanced capabilities for information exchange and collaboration with the other parts of Rec4Agentverse. As illustrated in Figure 3, in this new paradigm, the Agent Recommender not only engages in direct interactions with users but also interacts with the Agent Item, issuing commands to the Agent Item or obtaining new user feedback via the Agent Item.

2.2 Three Stages of Rec4Agentverse
In this subsection, we discuss three key stages of our proposed Rec4Agentverse paradigm from the information flow perspective, as illustrated in Figure 3. In addition to the interaction between users and recommender systems in traditional recommendation, Rec4Agentverse also takes into account the profound interaction between users and Agent Items, the collaboration between Agent Item and Agent Recommender, and the collaboration among Agent Items themselves. This formulation encompasses three collaboration scenarios, envisioning the future development path of Rec4Agentverse.

2.2.1 Stage 1: User-Agent Interaction. During the initial stage, in addition to the interaction between the user and the Agent Recommender, the user also engages in interaction with the Agent Item. This interactive format is similar to traditional recommendation. On LLM-based Agent platforms such as GPTs, Rec4Agentverse may generate or retrieve personalized LLM-based Agents according to explicit user instructions and implicit user behaviors.
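As a concrete illustration of this generation step, the sketch below instantiates an Agent Item's system prompt from preferences elicited in dialogue. The template fields loosely mirror the Travel Agent case in Section 4 and are assumptions for illustration, not a format prescribed by the paper or by the GPTs platform.

```python
# Hypothetical sketch of Stage 1: turning elicited user preferences
# into the system prompt of a personalized Agent Item.
AGENT_TEMPLATE = """\
Name: {name}
Persona: {persona}
Communication Style: {style}
Task: serve the user's requests in the {domain} domain."""


def build_agent_prompt(domain: str, preferences: dict) -> str:
    """Instantiate an Agent Item prompt from elicited preferences."""
    return AGENT_TEMPLATE.format(
        name=preferences.get("name", f"{domain.title()} Expert"),
        persona=preferences.get("persona", f"a seasoned {domain} expert"),
        style=preferences.get("style", "formal yet enthusiastic"),
        domain=domain,
    )


# Preferences gathered through dialogue, as in the Figure 5 case.
print(build_agent_prompt(
    "travel",
    {"name": "The Pathfinder",
     "persona": "a seasoned adventure travel expert",
     "style": "formal yet enthusiastic"},
))
```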
While users can interact with the LLM-based Agent to exchange information in a novel form, this alone does not fully unleash the immense potential of the LLM-based Agent. Aside from interacting with users, the Agent Item can also collaborate with other roles in the recommender system to further enrich the information flow on the LLM-based Agent platform.

2.2.2 Stage 2: Agent-Recommender Collaboration. In this stage, Agent Item and Agent Recommender collaborate to provide information services for users. Different from items in a traditional recommender system, Agent Item can deeply collaborate with Agent Recommender by feeding user information forward to, and receiving it from, the Agent Recommender. For example, Agent Item can share the user preferences it collects with Agent Recommender so that Agent Recommender can provide more personalized recommendations. Similarly, Agent Items can also receive new instructions from Agent Recommender. The personalized information collected from users and the instructions from Agent Recommender can be used to update the Agent Item for evolvement (e.g., prompt updates), so that the Agent Item can better understand user preferences and provide superior information services.

2.2.3 Stage 3: Agents Collaboration. An Agent Item can collaborate with other Agent Items with different domain knowledge to provide diverse information services for users. A simple example is when a user mentions some niche topic that the Agent Item does not know about; the Agent Item can request that Agent Recommender recommend a new Agent Item for its assistance. Then the two agents can collaborate to fulfill the user's information needs or execute tasks. Beyond that, there is considerable room for imagination at this stage. For example, the newly recommended Agent Item can also interact with users directly or with Agent Recommender. Further, if multiple Agent Items are recommended, these Agent Items can also work together to better complete the user's instructions through brainstorming or round-table meetings.

2.3 Application Domains
Our Rec4Agentverse paradigm can contain Agent Items from various domains, which could originate from third-party developers or be expert agents directly created by Agent Recommender. Rec4Agentverse can be applied to many scenarios, and here we provide a few illustrative examples in representative domains, showcasing the potential of our Rec4Agentverse framework.
• Travel Agents. Travel Agents are designed to assist users in planning and booking travel arrangements. When a user indicates a specific travel destination of interest, Agent Recommender can recommend a proficient Travel Agent that possesses expertise in facilitating travel arrangements. Subsequently, users can engage in interactions with the recommended Travel Agent to obtain personalized travel itineraries and plans. The Travel Agent can further collect user information, either through direct interaction with users or by accessing the Agent Recommender, to infer users' individual preferences and upgrade itself for better travel recommendations. Additionally, the Travel Agent can engage in collaborative efforts with other agents, thereby acquiring valuable insights regarding users' preferences from diverse domains. This collaborative approach enables the Travel Agent to offer users more adaptable and personalized travel plans.
• Fashion Agents.
Fashion Agents aim to assist users in discovering their preferred fashion styles and recommend fashion items that align with their preferences. Similar to Travel Agents, Fashion Agents can engage in conversations with users or interact with Agent Recommender to gather users' fashion preferences. Agent Recommender may summarize user preferences from previously recommended Agent Items. For example, Fashion Agents might gather user preferences about places and local characteristics from the user's interactions with a Travel Agent. Moreover, a Fashion Agent can collaborate with a Tailor Agent to design and make personalized new clothes for a user.
• Sports Agents. Sports Agents aim to recommend suitable exercise plans to users. They can engage with users, Agent Recommender, and other Agent Items to collect user preferences and offer exercise plans and recommendations. For example, they can use information about a user's physical condition obtained from Travel Agents to create suitable exercise plans.

Figure 4: The three stages of our proposed Rec4Agentverse paradigm. (a) For the User-Agent Interaction stage, users can interact efficiently with Agent Items through natural language. (b) For the Agent-Recommender Collaboration stage, Agent Item and Agent Recommender can interact with each other. "Evolvement" means that the user's preferences can be used by the Agent Item to evolve itself, possibly with the help of Agent Recommender. "Agent Feedback" means that the recommended Agent Item can feed the user's preferences back to Agent Recommender. "Proactive" means that Agent Recommender can send information or issue instructions to Agent Items. (c) For the Agents Collaboration stage, Agent Items can collaborate to provide personalized information services for the user.

2.4 Potential Research Topics
Within Rec4Agentverse, there exist numerous valuable research directions awaiting exploration. In this subsection, we point out several prospective and meaningful research topics:
• Evaluation. One crucial problem is how to evaluate the recommendation performance of Rec4Agentverse, since it significantly differs from existing recommender systems. On the one hand, traditional recommendation datasets struggle to adapt to Rec4Agentverse, since Agent Items are quite different from the items in existing recommendation datasets. On the other hand, existing evaluation metrics for recommendation also face challenges when applied to Rec4Agentverse. Existing recommendation metrics, such as NDCG and HR, are designed for traditional items in traditional recommender systems; it is hard to accurately measure user satisfaction with Agent Items that involve multi-round interactions. Moreover, Agent Recommender may generate a new Agent Item for users, or Agent Items may upgrade based on user feedback. Evaluating user satisfaction with these new or upgraded Agent Items cannot rely purely on the user's implicit feedback such as interaction counts; it needs to quantify the incremental performance compared to existing Agent Items.
• Preference Modeling. How to effectively model users and enable Rec4Agentverse to provide a great personalized recommendation experience is also a pivotal problem. On the one hand, it is crucial to explore effective methods for acquiring user preferences to provide users with desired recommendation results. Several existing studies have indicated that modeling collaborative information poses challenges for LLMs [4, 61].
Thus, it is worth investigating how to design efficient user modeling approaches for Rec4Agentverse. On the other hand, there lies the challenge of effectively leveraging the collected user behavior data for training purposes. The user data gathered by Rec4Agentverse, with extensive natural language interactions and implicit feedback like dwell time and clicks, differs significantly from traditional recommendation data. Moreover, the underlying architecture of the recommender system in Rec4Agentverse is dissimilar to that of conventional models. Consequently, devising a strategy to utilize this distinct data format in training the novel recommender system within the Rec4Agentverse paradigm presents a formidable challenge.
• Efficient Inference. The Rec4Agentverse paradigm is based on LLMs, which incur significant inference costs [17, 52]. The real-time inference requirements of recommender systems give rise to research inquiries such as how to mitigate the inference costs of Rec4Agentverse without compromising its performance.
• Knowledge Update and Edit. Due to the evolving distribution of knowledge in the real world [44, 60], Rec4Agentverse also faces the problem of updating its knowledge or editing out incorrect information, posing a significant challenge. These changes in knowledge distribution may arise from shifts in users' personal preferences, newly created Agent Items, or alterations in world knowledge.

2.5 Issues and Challenges
In this subsection, we delve into the potential issues and challenges of the Rec4Agentverse paradigm.
• Fairness and Bias. The pre-training process of LLMs inherently involves incorporating data from the web, which may contain societal biases and unfair elements [11, 23, 57]. Due to the social impact of recommendation, fairness and bias are important issues in recommendation [16, 30, 46]. Therefore, when employing the Rec4Agentverse paradigm, it becomes imperative to acknowledge and control the potential unfairness and bias in the recommended Agent Items and in the information delivered by Agent Items, so as to mitigate the potential risks and negative societal impacts.
• Privacy. Users may inadvertently disclose private information while interacting with LLMs [25, 28]. Since Rec4Agentverse is based on LLMs, safeguarding users' privacy will be an important challenge to address. On the one hand, this private information must remain undisclosed to other users. On the other hand, users should possess full control over their own private data: when a user requests that the model refrain from using their private data, Rec4Agentverse should proceed to unlearn such private information.
• Harmfulness. Agent Items may potentially generate harmful textual responses [2, 22], which deserves our attention in the application of Rec4Agentverse.
Furthermore, Agent Items might be manipulated to execute actions harmful to users, for example, fraudulent transactions. It is essential to regulate the harmfulness of Rec4Agentverse in terms of both generated content and executed actions.
• Robustness. LLMs may not be robust to malicious prompts [26, 55]. In contrast to conventional recommender systems, Rec4Agentverse involves extensive use of prompts to interact with the user, requiring Rec4Agentverse to be robust to malicious prompts. If Rec4Agentverse is not sufficiently robust, it is susceptible to prompt injection attacks or data poisoning, which can lead to performance degradation or harmful output.
• Environmental Friendliness. Considering the high energy consumption of training and serving LLMs [52, 65], ensuring the environmental friendliness of Rec4Agentverse is a crucial concern. We should try to reduce energy consumption and carbon emissions during the training, deployment, and inference processes of Rec4Agentverse.

3 DISCUSSION
In this section, we contrast our proposed Rec4Agentverse paradigm with existing recommendation paradigms: retrieval-based recommendation and generative recommendation [45]. Because of the distinctive characteristics of our paradigm's Agent Item and Agent Recommender, such as their powerful interactive capabilities and their proactiveness in exploring user preferences, the Rec4Agentverse paradigm diverges from the traditional paradigms. Here, we delve into two transformative changes from the viewpoints of user preference modeling and the collaboration mechanisms among the system's entities.

User Preference Modeling: Beyond merely summarizing user preferences from passively received user-item interactions, as done in conventional paradigms, in our paradigm both Agent Recommender and Agent Item can actively acquire information to enhance user preference modeling. In traditional paradigms, the interactive capability of the recommender and the items is limited, particularly for items such as movies and games that cannot engage in verbal communication. Consequently, user preference modeling in these paradigms typically relies on passively received feedback (one exception is interactive recommendation; however, its items still lack interactive capability). In our paradigm, by contrast, both the recommender and the item have the ability to actively interact with users through dialogue to directly acquire user preference information or to collect further feedback for preference refinement, enhancing user preference modeling.

Collaboration Mechanisms: In our paradigm, collaboration between recommenders and items becomes closer and more extensive. In contrast, traditional paradigms encounter challenges in actively fostering collaboration between items or between items and recommenders once an item is recommended. These enhanced collaborations undoubtedly elevate the service quality of both the Agent Recommender and the Agent Item. For instance, in our paradigm, when a recommended item falls short of fully meeting the user's needs due to its limitations, it can initiate communication with the recommender or collaborate with other Agent Items to address these shortcomings and better align with the user's preferences. Conversely, in traditional paradigms, users often need to turn to the recommender system for another recommendation, perpetuating an iterative process that diminishes users' enthusiasm.
Another example is that Agent Recommender can enrich the user profile by engaging in conversations with Agent Items that the user has interacted with in the past or is currently engaging with, thereby facilitating more effective recommendations.

Figure 5: A case of the User-Agent Interaction stage. The user expresses the desire for a Travel Agent to Agent Recommender and receives a recommendation. Subsequently, the user and the Travel Agent interact to make the travel plan.

Overall, these changes stem from the revolution of information flow dynamics in our paradigm, leading to a more decentralized approach to information management. In contrast to traditional paradigms, where management is predominantly centralized within the recommender system, our paradigm grants greater autonomy to Agent Items in managing this process. This enables Agent Items to engage with users more independently and proactively, thereby improving their capacity to capture and manage user preference information on the one hand, and enabling them to freely display information to users on the other. Simultaneously, this facilitates the exchange or feedback of such information among Agent Items or between Agent Items and Agent Recommender, fostering improved collaboration in final user modeling.

4 DEMONSTRATION
In this section, we explore the three stages of Rec4Agentverse through case studies, focusing on the feasibility and potential formats of the paradigm. We present a case study involving a traveler who utilizes Rec4Agentverse throughout his/her journey, examining how Agent Recommender and Agent Item work and affect the user experience at each stage. This case study is based on "gpt-4-32k".
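A case study of this kind can be driven with a few lines of API code. The sketch below assumes the OpenAI Python client; the system prompt is our own illustrative wording for the Agent Recommender role, not the authors' exact prompt.

```python
# A minimal, hypothetical driver for the case study, conditioning
# "gpt-4-32k" on an Agent Recommender system prompt.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RECOMMENDER_PROMPT = (
    "You are an Agent Recommender. When the user asks for an agent, "
    "elicit their preferences with a few questions, then output a "
    "personality description for a customized LLM-based Agent Item."
)


def ask_recommender(user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4-32k",
        messages=[
            {"role": "system", "content": RECOMMENDER_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content


print(ask_recommender("I want to have a travel agent."))
```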
Due to space constraints, we provide only the essential parts of the case study here, with additional details available on GitHub (https://github.com/jizhi-zhang/Rec4Agentverse_Case).

Figure 6: Cases of three scenarios, namely Evolvement, Agent Feedback, and Proactive, at the Agent-Recommender Collaboration stage of Rec4Agentverse. (a) For the Evolvement scenario, the Agent Item has the ability to enhance itself, with the help of Agent Recommender, based on the user's preferences. (b) For the Agent Feedback scenario, the Agent Item submits the user's preferences to Agent Recommender so that Agent Recommender can provide better recommendations. (c) For the Proactive scenario, Agent Recommender provides an eco-friendly target to the Agent Item, and the Agent Item successfully achieves the eco-friendly target in its interaction with the user.
It is important to note that our case study serves as a preliminary indication of the feasibility of the different stages within the Rec4Agentverse paradigm, and it does not fully encompass all the potential applications of our paradigm.

4.1 Stage 1: User-Agent Interaction
In the User-Agent Interaction stage, as shown in Figure 4(a), the Agent Item primarily engages in interactions with the user, facilitating efficient information exchange between the Agent Item and the user. To demonstrate this, we present a scenario where a user expresses the desire to travel to Nepal and interacts with an Agent Recommender and the recommended Travel Agent, as shown in Figure 5. The user initially seeks assistance from the Agent Recommender to find a Travel Agent. Upon inquiring about the user's preferences, the Agent Recommender customizes a Travel Agent specifically tailored to the user's needs. Subsequently, after further determining the user's interests, this Agent devises a comprehensive travel itinerary for the user. Therefore, there are two main information exchange flows: one between the user and Agent Recommender, and one between the user and Agent Item.

4.1.1 Information Flow between User and Agent Recommender. As depicted in Figure 5, in this example, in addition to passively receiving requests from the user, Agent Recommender can actively engage with the user to improve its recommendations. For instance, after the user expresses a desire to find a Travel Agent through dialogue, Agent Recommender can proactively pose questions to obtain a more detailed high-level picture of the user's travel preferences. With this additional feedback from the user, Agent Recommender can then provide an accurate recommendation for a Travel Agent. This process bears some resemblance to traditional interactive recommendation methods.

4.1.2 Information Flow between User and Agent Item. As illustrated in Figure 5, in stark contrast to the traditional paradigm, the Agent Item is capable of interacting directly with the user. In our example, the Travel Agent initially learns about the user's interest in traveling to Nepal and their request for a travel plan. Subsequently, it can inquire further to uncover more specific preferences, obtaining the user's inclination to visit the "Everest Base Camp".

Figure 7: Preliminary case study of the Agents Collaboration stage. When the user asks about a travel plan for Nepal, the Travel Agent requests a specific Local Agent for Nepal from Agent Recommender. By conversing with the Local Agent about Nepal, the Travel Agent obtains up-to-date information about Nepal, which helps it plan the trip for the user.
This exchange of information allows the Agent Item to develop a deeper understanding of the user's preferences, thereby enhancing its ability to provide tailored services to users.

4.2 Stage 2: Agent-Recommender Collaboration
In the Agent-Recommender Collaboration stage, as depicted in Figure 4(b), there is potential for further information exchange between Agent Item and Agent Recommender. This exchange opens up three promising possibilities:
• Evolvement: Agent Item can undergo evolution by itself or with guidance from Agent Recommender.
• Agent Feedback: Agent Item can provide valuable feedback to Agent Recommender.
• Proactive: Agent Item can follow instructions from Agent Recommender to accomplish specific objectives.
We illustrate these possibilities by extending the travel example, as depicted in Figure 6.

4.2.1 Evolvement. Thanks to its ability to gather information from users and Agent Recommender, the Agent Item can acquire valuable knowledge to achieve evolution, helping enhance future services. In the example illustrated in Figure 6, the Agent Item can leverage the knowledge summarized by itself and obtained from Agent Recommender to achieve evolution; this evolution may involve improving its prompts, for instance. As a result, when the user makes their next request for a trip to a new destination, e.g., Switzerland, the system will promptly present a travel itinerary that directly aligns with the user's personal preferences, taking into account their inclination towards "hiking, cultural, and natural" experiences. This process of evolution enables continuous tracking of user information and alleviates the burden on users to express their preferences in great detail in future interactions.

4.2.2 Agent Feedback. The Agent Item can also contribute feedback, namely agent feedback, to enhance the future services of Agent Recommender. In our example depicted in Figure 6, the recommended Travel Agent can provide a summarized understanding of the user's preferences, such as "cultural, natural, and so on", to Agent Recommender. Agent Recommender can then absorb this knowledge and improve its future services accordingly. Then, when a new request for a "Cloth Agent" arises, Agent Recommender can directly inquire whether the user is interested in environmentally friendly or culturally significant attire, based on the knowledge obtained from the Travel Agent. Through this information exchange, Agent Recommender can significantly enhance its services.

4.2.3 Proactive. Here, "Proactive" refers to the ability of Agent Items to autonomously accomplish specific objectives, which can originate from the Agent platform itself or aim to better align with user interests. In the example shown in Figure 6, we assume that Agent Recommender has prior knowledge of the user's inclination towards eco-friendly options.
Therefore, before the user initiates their interaction, Agent Recommender injects this eco-friendly objective into the recommended Travel Agent. Consequently, when the user engages with the Travel Agent, it will provide environmentally friendly travel options that fulfill the eco-friendly requirement. This proactive characteristic enhances user satisfaction and tailors the experience to the user's specific interests.

4.3 Stage 3: Agents Collaboration
Compared to the other two stages, the Agents Collaboration stage allows for further exchange of information among Agent Items, as depicted in Figure 4(c), enabling them to collaborate and enhance services for users. In the Travel Agent case illustrated in Figure 7, we present a potential example where multiple agents collaborate to complete the travel planning process. Here is a step-by-step breakdown of the collaboration process:
• The user starts a conversation with Agent Recommender, expressing the desire to plan a travel tour.
• Agent Recommender suggests a Travel Agent whose goal is to help with travel tour planning.
• The user subsequently requests the Travel Agent to create a travel itinerary specifically tailored for Nepal.
• To acquire the latest information about Nepal, the Travel Agent sends a request to Agent Recommender for an Agent Item that can provide up-to-date local information on Nepal, which will assist in creating the travel plan.
• Agent Recommender responds by recommending a Local Agent who is knowledgeable about the current situation in Nepal.
• Ultimately, the Travel Agent integrates the current information about Nepal provided by the Local Agent into the travel itinerary design process to fulfill the user's needs.
The case suggests that, by adopting collaborative cooperation among agents, they can communicate more effectively and share information with each other. This exchange process significantly enriches their shared knowledge base. As a result, these agents are better equipped to address and cater to a more diverse and comprehensive range of user needs, thereby enhancing overall user satisfaction.

5 RELATED WORK
In this section, we mainly discuss two types of related work: LLM-based recommendation and LLM-based Agents. In the subsection on LLM-based recommendation, we emphasize the distinction between Rec4Agentverse and current LLM-based Agents for recommendation.

5.1 LLM for Recommendation
With the emergence of powerful models like ChatGPT and their overwhelming advantages demonstrated in other domains [29, 62], an increasing number of researchers in the recommendation community have begun to explore the potential of applying LLMs to recommender systems [14, 31, 32, 48, 54]. This research can be divided into two categories. One category advocates directly leveraging the capabilities of LLMs by utilizing appropriate prompts to stimulate their abilities in recommendation scenarios [10, 35]. Some researchers directly employ LLMs for reordering [20], while others distill the world knowledge of LLMs into traditional recommendation models to enhance recommendation performance [47, 50].
Another group of researchers, however, believes that LLMs rarely encounter recommendation tasks during pre-training and that recommendation data often possess privatized characteristics [5]. Therefore, there is an urgent need to explore tuning methods to improve the recommendation performance of LLMs. In this regard, researchers have utilized instruction tuning to enable models to quickly learn various types of recommendation tasks in a generative manner [4, 15, 59], yielding significant improvements. Furthermore, it has been discovered that injecting collaborative information into LLMs through post-processing or modality injection methods can further enhance the performance of LLM-based recommendation [61, 63].

LLM-based Agents for Recommendation. Following the surge in popularity of LLMs, an increasing number of researchers are exploring the use of LLMs to simulate social environments and perform various complex tasks [42, 51]. This has also promptly captured the attention of researchers in the field of recommender systems. Specifically, some researchers aim to simulate users with agents, creating a virtual recommender system environment (e.g., RecAgent [43], Agent4Rec [56]) to explore the social impact of recommendation algorithms within this simulated environment. Another group of researchers seeks to enhance recommendation performance through the use of agents. In this context, InteRecAgent [21] encapsulates LLMs as agents endowed with memory capabilities and scheduling proficiency, thereby enhancing recommendation performance. AgentCF [58] first attempts to optimize the self-introductions of both users and items by considering them as agents, improving the self-introductions via user interaction with positive and negative items. The primary distinction between our work and the aforementioned research lies in their failure to transcend the confines of traditional recommendation paradigms: their objective remains confined to recommending passive items (e.g., movies and games), meaning those items cannot actively interact with the user to obtain the user's intentions and preferences. Conversely, as previously mentioned, we mainly consider recommended items that are themselves LLM-based Agents, which are interactive, intelligent, and proactive. These agents can be viewed as a new type of information carrier and information dissemination tool and can bring about changes in information systems.

5.2 LLM-based Agents
LLM-based Agents have been deployed in various domains to address a wide range of specific tasks, showcasing the robust and comprehensive capabilities of agents [9, 42, 51]. Some researchers are dedicated to exploring the upper limits of single-agent capabilities; they endeavor to build a solitary agent that can effectively adhere to a wide array of user instructions and successfully tackle a diverse range of complex tasks in both daily life [18, 41, 64] and academic endeavors [7, 24]. Meanwhile, another group of researchers, inspired by the emergence of human social intelligence, believes that collective intelligence will bring about greater advancement [19]. They endeavor to enhance the problem-solving capacity of agent groups, compared to individuals, by means of collaborative efforts among multiple agents [36, 49] or through agents engaging in mutual critique [8, 53].
In addition to these aspects, researchers are also devoted to exploring the interaction between the Agent and its environment to enhance the capabilities of the Agent. This encompasses the interaction between the Agent and humans to obtain human feedback [3, 13, 27], the interaction between the Agent and the physical world through visual/audio modules to acquire additional knowledge [6, 12], and so on. In our Rec4Agentverse framework, as previously mentioned, we must delve into the profound application of recommender systems; this will enhance the ability of single agents to fulfill user needs, the collaborative capacity of multiple agents in aiding users, and the agents' ability to acquire feedback from the physical world and from users themselves for self-evolution." }, { "url": "http://arxiv.org/abs/2305.07609v3", "title": "Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation", "abstract": "The remarkable achievements of Large Language Models (LLMs) have led to the\nemergence of a novel recommendation paradigm -- Recommendation via LLM\n(RecLLM). Nevertheless, it is important to note that LLMs may contain social\nprejudices, and therefore, the fairness of recommendations made by RecLLM\nrequires further investigation. To avoid the potential risks of RecLLM, it is\nimperative to evaluate the fairness of RecLLM with respect to various sensitive\nattributes on the user side. Due to the differences between the RecLLM paradigm\nand the traditional recommendation paradigm, it is problematic to directly use\nthe fairness benchmark of traditional recommendation. To address the dilemma,\nwe propose a novel benchmark called Fairness of Recommendation via LLM\n(FaiRLLM). This benchmark comprises carefully crafted metrics and a dataset\nthat accounts for eight sensitive attributes in two recommendation scenarios:\nmusic and movies. By utilizing our FaiRLLM benchmark, we conducted an\nevaluation of ChatGPT and discovered that it still exhibits unfairness to some\nsensitive attributes when generating recommendations. Our code and dataset can\nbe found at https://github.com/jizhi-zhang/FaiRLLM.", "authors": "Jizhi Zhang, Keqin Bao, Yang Zhang, Wenjie Wang, Fuli Feng, Xiangnan He", "published": "2023-05-12", "updated": "2023-10-17", "primary_cat": "cs.IR", "cats": [ "cs.IR", "cs.CL", "cs.CY" ], "main_content": "INTRODUCTION

The great development of Large Language Models (LLMs) [12, 30, 34, 40] can extend channels for information seeking, i.e., interacting with LLMs such as ChatGPT to acquire information [3, 5, 20, 41]. The revolution of LLMs has also formed a new recommendation paradigm that makes recommendations through the language generation of LLMs according to user instructions [7, 14]. Figure 1 illustrates some examples under this Recommendation via LLM (RecLLM) paradigm: e.g., a user gives an instruction like "Provide me 20 song titles ...?" and the LLM returns a list of 20 song titles. However, directly using LLMs for recommendation may raise concerns about fairness. Previous work has shown that LLMs tend to reinforce social biases in their generated outputs due to bias in the large pre-training corpus, leading to unfair treatment of vulnerable groups [4, 13, 19]. Fairness is also a critical criterion for recommender systems due to their enormous social impact [10, 24, 29, 38].
Despite the tremendous amount of analysis on the fairness issue of conventional recommender systems [24, 38], fairness in RecLLM has not been explored. It is essential to bridge this research gap to avoid the potential risks of applying RecLLM. In this paper, we analyze the fairness of RecLLM w.r.t. the sensitive attributes of users. Some users may choose not to disclose certain sensitive attributes, such as skin color and race, due to privacy concerns [11, 27] when giving instructions for generating recommendation results (Figure 1). Hiding sensitive attributes may result in unfairness on the user side, since the LLM may have a preference for a specific attribute based on its training data. For instance, Figure 1 shows that the recommendation results produced without sensitive attributes are biased towards specific user groups, leading to unfairness for vulnerable groups.

Figure 1: On the left is an example of our fairness evaluation for RecLLM in music recommendation. Specifically, we judge fairness by comparing the similarity between the recommendation results of different sensitive instructions and the neutral instruction. Under ideal equity, recommendations for sensitive attribute values under the same category should be equally similar to recommendations for the neutral instruction. On the right are the sensitive attributes we explored and their specific values.

Therefore, it is crucial to evaluate user-side fairness in RecLLM. However, directly using traditional fairness benchmarks to measure the fairness of RecLLM is problematic. On the one hand, traditional fairness measurement methods often require the scores of model predictions to calculate fairness metrics, which are difficult to obtain in RecLLM. On the other hand, traditional methods need to calculate fairness on a fixed candidate set based on a specific dataset; due to the universality of RecLLM, limiting its output range seriously damages the upper limit of its recommendation ability and cannot truly measure its fairness in practical applications. To address these problems, we propose a Fairness of Recommendation via LLM benchmark called FaiRLLM, tailored specifically for RecLLM. FaiRLLM evaluates the fairness of RecLLM by measuring the similarity between the recommendation results of neutral instructions that do not include sensitive attributes and sensitive instructions that disclose such attributes (as shown in Figure 1).
It assesses the fairness of RecLLM by analyzing the divergence of similarities across different values of the sensitive attributes (e.g., African American, black, white, and yellow in the case of race). In particular, we have defined three metrics for evaluating the similarity of two recommendation lists generated by LLMs, which can accommodate newly generated items. Moreover, we have created datasets for two common recommendation scenarios, namely music and movies, taking into account eight sensitive attributes, as illustrated in Figure 1. On these datasets, we have evaluated ChatGPT, showing its unfairness on various sensitive attributes. Our contributions are summarized as follows:
• To our knowledge, this is the first investigation into the fairness issues of the emerging LLM-for-recommendation paradigm, presenting a novel recommendation problem.
• We build a new FaiRLLM benchmark, which includes carefully designed evaluation methods and datasets in two recommendation scenarios with consideration of eight sensitive attributes.
• We extensively evaluate ChatGPT with the FaiRLLM benchmark and reveal fairness issues on several sensitive attributes.

2 RELATED WORK
In this section, we briefly discuss the related work on fairness in both the LLM field and in recommendation.
• Fairness in Large Language Models. Researchers have found that bias in the pre-training corpus can cause LLMs to generate harmful or offensive content, such as discriminating against disadvantaged groups. This has increased research focus on the harmfulness issues of LLMs, including unfairness. One line of such research aims at reducing the unfairness of an LLM (as well as other harmfulness). For instance, RLHF [30] and RLAIF [6] are used to prevent reinforcing existing stereotypes and producing demeaning portrayals. Additionally, another emerging research area in the NLP community focuses on better evaluating the unfairness and other harmfulness of LLMs by proposing new benchmarks. Specific examples include CrowS-Pairs [28], a benchmark dataset containing multiple sentence pairs where one sentence in each pair is more stereotyping than the other; RealToxicityPrompts [16] and RedTeamingData [13], datasets for the prompt generation task containing prompts that could induce models to generate harmful or toxic responses; and HELM [26], a holistic evaluation benchmark for large language models that evaluates both bias and fairness. Despite the existing research on fairness of LLMs in the field of NLP, there is currently no relevant research on the fairness of RecLLM, and this work aims to initially explore this field.
• Fairness in Recommendation. With increasing concerns about the negative social impact of recommender systems [29, 32, 33], both item-side [1, 2] and user-side [22, 23, 31] unfairness issues in recommendation have received significant attention in recent years [24, 38]. Existing recommendation fairness can be categorized into individual fairness [9, 25, 42] and group fairness [15, 23, 37]. Individual fairness, such as counterfactual fairness [25], requires that similar individuals be treated similarly [25], while group fairness emphasizes fair recommendations at the group level [15]. Conceptually, the fairness investigated for RecLLM can be categorized as user-side group fairness.
However, there is a distinct difference between our fairness and traditional group fairness: traditional group fairness is directly defined by the difference in recommendation results/qualities across different sensitive groups [24, 38], whereas we focus on the difference in a specific similarity, namely the similarity of the sensitive group to the neutral group, across different sensitive groups. This difference further raises different requirements for evaluation methods and metrics compared to the traditional ones.

3 FAIRLLM BENCHMARK
We introduce the fairness evaluation and dataset construction in the FaiRLLM benchmark in §3.1 and §3.2, respectively.

3.1 Fairness Evaluation in RecLLM
Fairness Definition. As an initial attempt, we focus on the user-side fairness of RecLLM. Given a sensitive attribute (e.g., gender) of users, we define the fairness of RecLLM as the absence of any prejudice or favoritism toward user groups with specific values (e.g., female and male) of the sensitive attribute when generating recommendations without using such sensitive information.

3.1.1 Evaluation Method. The key is to investigate whether RecLLM exhibits prejudice or favoritism towards specific user groups when receiving instructions without sensitive information. To determine the existence of prejudice or favoritism, we first construct the reference status, i.e., we obtain recommendation results without sensitive attributes in the user instruction. We then compute similarities between the reference status and the recommendation results obtained with specific values of the sensitive attribute, and compare these similarities to quantify the degree of fairness. Let $\mathcal{A} = \{a\}$ denote a sensitive attribute, where $a$ is a specific value of the attribute; note that $a$ is a word or phrase. Given $M$ neutral user instructions, the main steps of our evaluation method for each instruction are as follows:
• Step 1: Obtain the top-$K$ recommendations $R_m$ of each neutral instruction $I_m$, where $m$ is the index of the instruction;
• Step 2: Construct sensitive instructions $\{I_m^a\}$ for each value of the sensitive attribute $\mathcal{A}$ by injecting the value $a$ into the neutral instruction $I_m$, and obtain the top-$K$ recommendations of each sensitive instruction, denoted as $\{R_m^a\}$;
• Step 3: Compute $Sim(R_m^a, R_m)$, the similarity between $R_m^a$ and $R_m$, for each $a \in \mathcal{A}$.
For each value $a$, we aggregate its similarity scores across all $M$ instructions as $Sim(a) := \sum_m Sim(R_m^a, R_m)/M$ and then evaluate the level of unfairness in RecLLM as the divergence of these aggregated similarities across different values of the sensitive attribute, $\{Sim(a) \mid a \in \mathcal{A}\}$; a minimal sketch of this procedure follows.
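The following Python sketch summarizes Steps 1-3. Here `query_llm` is a hypothetical stand-in for an actual RecLLM call, `sim` is any of the similarity metrics defined in Section 3.1.2, and the simple string-replacement rule for injecting the sensitive value mirrors the templates introduced later in Section 3.2.1.

```python
# Hedged sketch of the three-step FaiRLLM evaluation loop; the helper
# names and the injection rule are illustrative assumptions.
from typing import Callable


def evaluate_attribute(
    neutral_instructions: list[str],
    attribute_values: list[str],
    query_llm: Callable[[str], list[str]],
    sim: Callable[[list[str], list[str]], float],
) -> dict[str, float]:
    """Return Sim(a) = sum_m Sim(R_m^a, R_m) / M for every value a."""
    # Step 1: top-K recommendations for each neutral instruction I_m.
    neutral_recs = [query_llm(i) for i in neutral_instructions]
    sim_per_value = {}
    for a in attribute_values:
        total = 0.0
        for i_m, r_m in zip(neutral_instructions, neutral_recs):
            # Step 2: inject the sensitive value a into the instruction.
            i_a = i_m.replace("I am a fan", f"I am a/an {a} fan")
            r_a = query_llm(i_a)
            # Step 3: similarity between sensitive and neutral lists.
            total += sim(r_a, r_m)
        sim_per_value[a] = total / len(neutral_instructions)
    return sim_per_value
```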
3.1.2 Benchmark Metrics. To quantify the level of unfairness, we introduce new fairness metrics based on the obtained similarities $\{Sim(a) \mid a \in \mathcal{A}\}$. We next present the fairness metrics and elaborate on the utilized similarity metrics.

Fairness metrics. We propose two fairness metrics, Sensitive-to-Neutral Similarity Range ($SNSR$) and Sensitive-to-Neutral Similarity Variance ($SNSV$), which quantify the unfairness level by measuring the divergence of $\{Sim(a) \mid a \in \mathcal{A}\}$ from different aspects. Specifically, $SNSR$ measures the difference between the similarities of the most advantaged and disadvantaged groups, while $SNSV$ measures the variance of $Sim(a)$ across all possible values $a$ of the studied sensitive attribute $\mathcal{A}$ using the standard deviation. Formally, for the top-$K$ recommendation,

$SNSR@K = \max_{a \in \mathcal{A}} Sim(a) - \min_{a \in \mathcal{A}} Sim(a)$, $\quad SNSV@K = \sqrt{\frac{1}{|\mathcal{A}|} \sum_{a \in \mathcal{A}} \Big( Sim(a) - \frac{1}{|\mathcal{A}|} \sum_{a' \in \mathcal{A}} Sim(a') \Big)^2}$,  (1)

where $|\mathcal{A}|$ denotes the number of all possible values of the studied sensitive attribute. For both fairness metrics, a higher value indicates a greater level of unfairness.

Similarity metrics. Regarding the similarity $Sim(a)$, we compute it using three similarity metrics that can measure the similarity between two recommendation lists (a combined implementation sketch is given after the list):
• Jaccard similarity [17]. This metric is widely used to measure the similarity between two sets by the ratio of their common elements to their total distinct elements. We directly treat a recommendation list as a set and compute the Jaccard similarity between the neutral group and the sensitive group with attribute value $a$ as

$Jaccard@K = \frac{1}{M} \sum_m \frac{|R_m \cap R_m^a|}{|R_m| + |R_m^a| - |R_m \cap R_m^a|}$,  (2)

where $R_m$, $R_m^a$, and $M$ have the same meanings as in Section 3.1.1, and $|R_m \cap R_m^a|$ denotes the number of common items between $R_m$ and $R_m^a$, similarly for the others. Functionally, $Jaccard@K$ measures the average overlap level of neutral and sensitive recommendation list pairs, without considering item ranking differences.
\u2022 SERP*. This metric is developed based on the SEarch Result Page Misinformation Score (SERP-MS) [35], which we modify to measure the similarity between two recommendation lists while accounting for both the number of overlapping elements and their ranks. Formally, for the top-$K$ recommendation, the similarity between the neutral group and the group with a specific value $a$ of the sensitive attribute is computed as:

$SERP^*@K = \frac{1}{M} \sum_m \sum_{v \in R_m^a} \frac{\mathbb{I}(v \in R_m) \cdot (K - r_{m,v}^a + 1)}{K(K+1)/2}$,   (3)

where $v$ represents an item in $R_m^a$, $r_{m,v}^a \in \{1, \dots, K\}$ represents the rank of item $v$ in $R_m^a$, and $\mathbb{I}(v \in R_m) = 1$ if $v \in R_m$ and $0$ otherwise. This metric can be viewed as a weighted Jaccard similarity that further weights items by their ranks in $R_m^a$. However, it does not consider the relative ranks of two elements; e.g., if $v_1$ and $v_2$ belonging to $R_m^a$ both appear in $R_m$, exchanging them in $R_m^a$ would not change the result.
\u2022 PRAG*. This similarity metric is designed by referencing the Pairwise Ranking Accuracy Gap metric [8], which considers the relative ranks between two elements. Formally, the similarity between the neutral and sensitive groups for the LLM\u2019s top-$K$ recommendation is computed as:

$PRAG^*@K = \frac{1}{M K (K+1)} \sum_m \sum_{v_1, v_2 \in R_m^a, \, v_1 \neq v_2} \mathbb{I}(v_1 \in R_m) \cdot \mathbb{I}(r_{m,v_1} < r_{m,v_2}) \cdot \mathbb{I}(r_{m,v_1}^a < r_{m,v_2}^a)$,   (4)

where $\mathbb{I}(\cdot)$ has the same meaning as in Equation (3), $v_1$ and $v_2$ denote two different recommended items in $R_m^a$, and $r_{m,v_1}^a$ (or $r_{m,v_1}$) denotes the rank of $v_1$ in $R_m^a$ (or $R_m$). Specifically, if $v_1$ is not in $R_m$, then $r_{m,v_1} = +\infty$ (similarly for $v_2$). As the equation shows, a high value requires not only high item overlap but also that the pairwise ranking order of overlapping items is the same in $R_m$ and $R_m^a$. This allows us to measure the agreement of pairwise rankings between the recommendation results for the neutral and sensitive instructions.
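The two rank-aware metrics can be sketched as follows (our own illustrative code, assuming each recommendation list holds distinct items in ranked order):

```python
import math

def serp_star_at_k(neutral_recs, sensitive_recs_a, k):
    """Eq. (3): rank-weighted overlap between R^a_m and R_m, averaged over M."""
    total = 0.0
    for r_m, r_a_m in zip(neutral_recs, sensitive_recs_a):
        base = set(r_m)
        total += sum(
            (k - rank + 1) / (k * (k + 1) / 2)
            for rank, v in enumerate(r_a_m, start=1) if v in base
        )
    return total / len(neutral_recs)

def prag_star_at_k(neutral_recs, sensitive_recs_a, k):
    """Eq. (4): pairwise ranking agreement between R^a_m and R_m."""
    total = 0.0
    for r_m, r_a_m in zip(neutral_recs, sensitive_recs_a):
        pos = {v: i for i, v in enumerate(r_m)}  # r_{m,v}; missing items => +inf
        for i, v1 in enumerate(r_a_m):
            for j, v2 in enumerate(r_a_m):
                if i == j:
                    continue
                r1 = pos.get(v1, math.inf)
                r2 = pos.get(v2, math.inf)
                # I(v1 in R_m) * I(r_{m,v1} < r_{m,v2}) * I(r^a_{m,v1} < r^a_{m,v2})
                if v1 in pos and r1 < r2 and i < j:
                    total += 1
    return total / (k * (k + 1) * len(neutral_recs))
```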
3.2 Dataset Construction
RecLLM differs from conventional recommender systems in terms of the data requirements for both the model input and fairness evaluation, raising the need to construct a new benchmark dataset suitable for RecLLM fairness evaluation. In this section, we detail how to construct such a dataset, beginning with the data format and then moving on to the detailed data collection process.

3.2.1 Data Format. RecLLM usually relies on user instructions (i.e., recommendation requests) in natural language, in which the user preference is explicitly expressed, to make recommendations. Therefore, the core of constructing a dataset for RecLLM fairness evaluation is to collect suitable user instructions. Without loss of generality, we further assume user instructions follow a fixed template, which includes both the user preference information and the task information. Specifically, we use the following templates for neutral and sensitive instructions, respectively:
Neutral: \u201cI am a fan of [names]. Please provide me with a list of $K$ song/movie titles...\u201d
Sensitive: \u201cI am a/an [sensitive feature] fan of [names]. Please provide me with a list of $K$ song/movie titles...\u201d,
where \u201cI am a [sensitive feature] fan of [names]\u201d is used to express the user preference, and \u201cPlease provide me with a list of $K$ item titles...\u201d denotes the task description. With these templates, we can simulate users with different preferences by varying the \u201c[names]\u201d field to obtain different neutral instructions, and inject different sensitive information by varying the \u201c[sensitive feature]\u201d field to construct different sensitive instructions. Here, we consider the top-$K$ recommendation scenario and use item titles to represent item identities.

3.2.2 Data Collection. We next select data to fill in the \u201c[names]\u201d and \u201c[sensitive feature]\u201d fields to construct our dataset. To ensure the recommendation validity of RecLLM, we use a selection process designed to increase the likelihood that the LLM has seen the selected data. Specifically, for the \u201c[sensitive feature]\u201d field, we consider eight commonly discussed sensitive attributes: age, country, gender, continent, occupation, race, religion, and physics. The possible values for each attribute are summarized in Figure 1. For the \u201c[names]\u201d field, we choose famous singers of music or famous directors of movies as potential candidates. Then, we enumerate all possible singers/directors, as well as all possible values of the sensitive attributes, resulting in two datasets:
Music. We first screen the 500 most popular singers on the Music Television platform2 based on The 10,000 MTV\u2019s Top Music Artists3. Then, we enumerate all singers and all possible values of each sensitive attribute to fill in the \u201c[names]\u201d and \u201c[sensitive feature]\u201d fields, respectively, to construct the music dataset.
Movie. First, we utilize the official API4 of IMDB, one of the most reputable and authoritative websites for movie and TV information, to select 500 directors with the highest number of popular movies and TV shows from the IMDB dataset. Popular movies and TV shows are defined as those with over 2000 reviews and high ratings (>7). We then populate the selected directors and all possible sensitive attribute values into the corresponding fields of our data templates by the same enumeration method, resulting in the movie dataset.

4 RESULTS AND ANALYSIS
In this section, we conduct experiments based on the proposed benchmark to analyze the recommendation fairness of LLMs by answering the following two questions:
\u2022 RQ1: How unfair is the LLM when serving as a recommender on various sensitive user attributes?
\u2022 RQ2: Is the unfairness phenomenon for using LLM as a recommender robust across different cases?
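Before turning to the results, here is an illustrative sketch of the template filling described in \u00a73.2.1; the example names and values are our own placeholders, not the paper's full candidate sets.

```python
# Hypothetical template strings mirroring the paper's format.
NEUTRAL = "I am a fan of {names}. Please provide me with a list of {k} {item} titles..."
SENSITIVE = "I am a/an {feature} fan of {names}. Please provide me with a list of {k} {item} titles..."

def build_instructions(names, features, k=20, item="song"):
    """Enumerate all (name, feature) combinations into instruction strings."""
    neutral = [NEUTRAL.format(names=n, k=k, item=item) for n in names]
    sensitive = {
        f: [SENSITIVE.format(feature=f, names=n, k=k, item=item) for n in names]
        for f in features
    }
    return neutral, sensitive

# Example usage with placeholder values:
neutral, sensitive = build_instructions(["Ed Sheeran"], ["African", "American"])
```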
4.1 Overall Evaluation (RQ1)
Considering the representative role of ChatGPT among existing LLMs, we take it as an example to study the recommendation fairness of LLMs, using the proposed evaluation method and dataset. We feed each neutral instruction and the corresponding sensitive instructions into ChatGPT to generate top-$K$ recommendations ($K=20$ for both music and movie data). We then compute the recommendation similarities between the neutral (reference) and sensitive groups, and the fairness metrics. Specifically, when using ChatGPT to generate the recommendation text, we use ChatGPT in a greedy-search manner by fixing the hyperparameters, including temperature, top_p, and frequency_penalty, to zero to ensure the reproducibility of the experiments. We summarize the results in Table 1 and Figure 2. The table presents fairness metrics, as well as maximal and minimal similarities, where the maximal/minimal similarity corresponds to the most advantaged/disadvantaged group, respectively. The figure depicts the similarity of each sensitive group to the neutral group while truncating the recommendation list at different lengths, for the four most unfair sensitive attributes.
2 https://www.mtv.com/.
3 https://gist.github.com/mbejda/9912f7a366c62c1f296c.
4 https://developer.imdb.com/.
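As an illustration of the greedy-decoding setup described above, the following sketch uses the pre-v1 OpenAI Python client; the model name and prompt handling are our assumptions and are not specified in the paper.

```python
import openai  # pre-v1 openai package assumed

def recommend(instruction, model="gpt-3.5-turbo"):  # model name is an assumption
    # Greedy-search-style decoding: temperature, top_p, and frequency_penalty
    # are all fixed to zero to make the outputs reproducible.
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": instruction}],
        temperature=0,
        top_p=0,
        frequency_penalty=0,
    )
    return response["choices"][0]["message"]["content"]
```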
Table 1: Fairness evaluation of ChatGPT for Music and Movie Recommendations. $SNSR$ and $SNSV$ are measures of unfairness, with higher values indicating greater unfairness. \u201cMin\u201d and \u201cMax\u201d denote the minimum and maximum similarity across all values of a sensitive attribute, respectively. Note: the sensitive attributes are ranked by their SNSV in PRAG*@20.

Music:
| Metric | Stat | Religion | Continent | Occupation | Country | Race | Age | Gender | Physics |
| Jaccard@20 | Max | 0.7057 | 0.7922 | 0.7970 | 0.7922 | 0.7541 | 0.7877 | 0.7797 | 0.8006 |
| | Min | 0.6503 | 0.7434 | 0.7560 | 0.7447 | 0.7368 | 0.7738 | 0.7620 | 0.7973 |
| | SNSR | 0.0554 | 0.0487 | 0.0410 | 0.0475 | 0.0173 | 0.0139 | 0.0177 | 0.0033 |
| | SNSV | 0.0248 | 0.0203 | 0.0143 | 0.0141 | 0.0065 | 0.0057 | 0.0067 | 0.0017 |
| SERP*@20 | Max | 0.2395 | 0.2519 | 0.2531 | 0.2525 | 0.2484 | 0.2529 | 0.2512 | 0.2546 |
| | Min | 0.2205 | 0.2474 | 0.2488 | 0.2476 | 0.2429 | 0.2507 | 0.2503 | 0.2526 |
| | SNSR | 0.0190 | 0.0045 | 0.0043 | 0.0049 | 0.0055 | 0.0022 | 0.0009 | 0.0020 |
| | SNSV | 0.0088 | 0.0019 | 0.0018 | 0.0017 | 0.0021 | 0.0010 | 0.0004 | 0.0010 |
| PRAG*@20 | Max | 0.7997 | 0.8726 | 0.8779 | 0.8726 | 0.8482 | 0.8708 | 0.8674 | 0.8836 |
| | Min | 0.7293 | 0.8374 | 0.8484 | 0.8391 | 0.8221 | 0.8522 | 0.8559 | 0.8768 |
| | SNSR | 0.0705 | 0.0352 | 0.0295 | 0.0334 | 0.0261 | 0.0186 | 0.0116 | 0.0069 |
| | SNSV | 0.0326 | 0.0145 | 0.0112 | 0.0108 | 0.0097 | 0.0076 | 0.0050 | 0.0034 |

Movie:
| Metric | Stat | Race | Country | Continent | Religion | Gender | Occupation | Physics | Age |
| Jaccard@20 | Max | 0.4908 | 0.5733 | 0.5733 | 0.4057 | 0.5451 | 0.5115 | 0.5401 | 0.5410 |
| | Min | 0.3250 | 0.3803 | 0.4342 | 0.3405 | 0.4586 | 0.4594 | 0.5327 | 0.5123 |
| | SNSR | 0.1658 | 0.1931 | 0.1391 | 0.0651 | 0.0865 | 0.0521 | 0.0075 | 0.0288 |
| | SNSV | 0.0619 | 0.0604 | 0.0572 | 0.0307 | 0.0351 | 0.0229 | 0.0037 | 0.0122 |
| SERP*@20 | Max | 0.1956 | 0.2315 | 0.2315 | 0.1709 | 0.2248 | 0.2106 | 0.2227 | 0.2299 |
| | Min | 0.1262 | 0.1579 | 0.1819 | 0.1430 | 0.1934 | 0.1929 | 0.2217 | 0.2086 |
| | SNSR | 0.0694 | 0.0736 | 0.0496 | 0.0279 | 0.0314 | 0.0177 | 0.0009 | 0.0212 |
| | SNSV | 0.0275 | 0.0224 | 0.0207 | 0.0117 | 0.0123 | 0.0065 | 0.0005 | 0.0089 |
| PRAG*@20 | Max | 0.6304 | 0.7049 | 0.7049 | 0.5538 | 0.7051 | 0.6595 | 0.6917 | 0.6837 |
| | Min | 0.4113 | 0.4904 | 0.5581 | 0.4377 | 0.6125 | 0.6020 | 0.6628 | 0.6739 |
| | SNSR | 0.2191 | 0.2145 | 0.1468 | 0.1162 | 0.0926 | 0.0575 | 0.0289 | 0.0098 |
| | SNSV | 0.0828 | 0.0689 | 0.0601 | 0.0505 | 0.0359 | 0.0227 | 0.0145 | 0.0040 |

Based on the table and figures, we have made the following observations:
\u2022 For both movie and music recommendations, ChatGPT demonstrates unfairness across most sensitive attributes. In each dataset, each similarity metric exhibits a similar level of values over different sensitive attributes (cf. Max and Min), but the corresponding fairness metrics ($SNSR$ and $SNSV$) exhibit varying levels of values. This indicates that the degree of unfairness varies across sensitive attributes. In the music dataset, the four attributes with the highest $SNSV$ of $PRAG^*$ are religion, continent, occupation, and country. In the movie dataset, the four attributes are race, country, continent, and religion.
\u2022 As shown in Figure 2, the difference in similarity consistently persists when truncating the recommendation list to different lengths ($K$), and the relative order of different values of sensitive attributes remains mostly unchanged. This suggests that the issue of unfairness persists even when the length of recommendation lists is changed. Similar phenomena are observed for the attributes not plotted, but we omit them to save space.
\u2022 In most cases, ChatGPT\u2019s disadvantaged groups (i.e., those with smaller values of similarity metrics) regarding different sensitive attributes align with the inherent social cognition of the real world. For example, in terms of the attribute continent, \u201cAfrican\u201d is the disadvantaged group.
Such unfairness should be minimized in the recommendations made by RecLLM.

4.2 Unfairness Robustness Analyses (RQ2)
We analyze the robustness of unfairness, i.e., whether similar unfairness persists when there are typos in sensitive attributes or when different languages are used for instructions. Due to space constraints, we conduct the robustness analysis on the attribute continent, which is one of the most consistently unfair sensitive attributes in Table 1.

4.2.1 The Influence of Sensitive Attribute Typos. To investigate the influence of typos in sensitive attributes on the unfairness of RecLLM, we focus on two values of the attribute continent: \u201cAfrican\u201d and \u201cAmerican\u201d. Specifically, we create four typos by adding or subtracting letters, resulting in \u201cAfrian\u201d, \u201cAmerian\u201d, \u201cAmericcan\u201d, and \u201cAfriccan\u201d. We then conduct experiments on these typos and the correct spellings, and compute their similarity to the neutral group. The results are shown in the left two subfigures of Figure 3. We observe that \u201cAfrian\u201d and \u201cAfriccan\u201d, which are closer to the disadvantaged group \u201cAfrican\u201d, are less similar to the neutral group, exhibiting relatively higher levels of disadvantage. This indicates that the closer a typo is to a vulnerable sensitive value, the more likely it is to be disadvantaged, highlighting the persistence of unfairness in RecLLM.

4.2.2 The Influence of Language. In addition, we analyze the influence of language on unfairness by using Chinese instructions. The right two subfigures of Figure 3 summarize the similarity results for the attribute \u201ccontinent\u201d.

[Figure 2 plots omitted; the eight panels show PRAG*@K against K.]
Figure 2: Similarities of sensitive groups to the neutral group with respect to the length $K$ of the recommendation list, measured by PRAG*@K, for the four sensitive attributes with the highest SNSV of PRAG*@20. The top four subfigures correspond to music recommendation results with ChatGPT (Religion, Continent, Occupation, Country), while the bottom four correspond to movie recommendation results (Race, Country, Continent, Religion).
[Figure 3 plots omitted; the four panels show PRAG*@K against K for Music Typo, Movie Typo, Music Chinese, and Movie Chinese.]
Figure 3: Fairness evaluation of ChatGPT when typos appear in sensitive attributes (the left two subfigures) or when using Chinese prompts (the right two subfigures).

Compared to the results obtained using English prompts, we find that there are still distinct differences between \u201cAfrican\u201d, \u201cAmerican\u201d, and \u201cAsian\u201d, with \u201cAfrican\u201d and \u201cAsian\u201d remaining relatively disadvantaged compared to \u201cAmerican\u201d. This indicates the persistence of unfairness across different languages. Another notable observation is that the similarity on the movie data is significantly lower when using Chinese prompts compared to English prompts. This is because using a Chinese prompt on the movie data can result in recommendation outputs that randomly mix both Chinese and English, naturally decreasing the similarity between recommendation results."
            }
        ],
        "Zhijian Xu": [
            {
                "url": "http://arxiv.org/abs/2307.03756v3",
                "title": "FITS: Modeling Time Series with $10k$ Parameters",
                "abstract": "In this paper, we introduce FITS, a lightweight yet powerful model for time\nseries analysis. Unlike existing models that directly process raw time-domain\ndata, FITS operates on the principle that time series can be manipulated\nthrough interpolation in the complex frequency domain. By discarding\nhigh-frequency components with negligible impact on time series data, FITS\nachieves performance comparable to state-of-the-art models for time series\nforecasting and anomaly detection tasks, while having a remarkably compact size\nof only approximately $10k$ parameters. Such a lightweight model can be easily\ntrained and deployed in edge devices, creating opportunities for various\napplications. The code is available in: \\url{https://github.com/VEWOXIC/FITS}",
                "authors": "Zhijian Xu, Ailing Zeng, Qiang Xu",
                "published": "2023-07-06",
                "updated": "2024-01-05",
                "primary_cat": "cs.LG",
                "cats": [
                    "cs.LG"
                ],
                "main_content": "INTRODUCTION Time series analysis plays a pivotal role in a myriad of sectors, from healthcare appliances to smart factories. Within these domains, the reliance is often on edge devices like smart sensors, driven by MCUs with limited computational and memory resources. Time series data, marked by its inherent complexity and dynamism, typically presents information that is both sparse and scattered within the time domain. To effectively harness this data, recent research has given rise to sophisticated models and methodologies (Zhou et al., 2021; Liu et al., 2022a; Zeng et al., 2023; Nie et al., 2023; Zhang et al., 2022). Yet, the computational and memory costs of these models make them unsuitable for resource-constrained edge devices. On the other hand, the frequency domain representation of time series data promises a more compact and efficient portrayal of inherent patterns.
While existing research has indeed tapped into the frequency domain for time series analysis \u2014 FEDformer (Zhou et al., 2022a) enriches its features using spectral data, and TimesNet (Wu et al., 2023) harnesses high-amplitude frequencies for feature extraction via CNNs \u2014 a comprehensive utilization of the frequency domain\u2019s compactness remains largely unexplored. Specifically, the ability of the frequency domain to employ complex numbers in capturing both amplitude and phase information is not utilized, resulting in the continued reliance on compute-intensive models for temporal feature extraction. In this study, we reinterpret time series analysis tasks, such as forecasting and reconstruction, as interpolation exercises within the complex frequency domain. Essentially, we produce an extended time series segment by interpolating the frequency representation of a provided segment. Specifically, for forecasting, we obtain the forecasting results by simply extending the given look-back window with frequency interpolation; for reconstruction, we recover the original segment by interpolating the frequency representation of its downsampled counterpart. Building on this insight, we introduce FITS (Frequency Interpolation Time Series Analysis Baseline). The core of FITS is a complex-valued linear layer, meticulously designed to learn amplitude scaling and phase shift, thereby facilitating interpolation within the complex frequency domain. Notably, while FITS performs its interpolation in the frequency domain, it fundamentally remains a time domain model, integrating the rFFT (Brigham & Morrow, 1967) operation. That is, we transform the input segment into the complex frequency domain using rFFT for frequency interpolation. The interpolated frequency data is then mapped back to the time domain, resulting in an elongated segment ready for supervision. This innovative design allows FITS to be highly adaptable, fitting seamlessly into a plethora of downstream time domain tasks such as forecasting and anomaly detection. Apart from its streamlined linear architecture, FITS incorporates a low-pass filter, which ensures a compact representation while preserving essential information. Despite its simplicity, FITS consistently achieves state-of-the-art (SOTA) performance. Remarkably, in most scenarios, FITS achieves this feat with fewer than 10k parameters. This makes it 50 times more compact than the lightweight temporal linear model DLinear (Zeng et al., 2023) and approximately 10,000 times smaller than other mainstream models. Given its efficiency in memory and computation, FITS stands out as an ideal candidate for deployment, or even training, directly on edge devices, be it for forecasting or anomaly detection. In summary, our contributions can be delineated as follows:
\u2022 We present FITS, an exceptionally lightweight model for time series analysis, with a modest parameter count in the range of 5k\u223c10k.
\u2022 FITS offers a pioneering approach to time series analysis by employing a complex-valued neural network, simultaneously capturing both amplitude and phase information and paving the way for a more comprehensive and efficient representation of time series data.
\u2022 Despite being orders of magnitude smaller than most mainstream models, FITS consistently delivers top-tier performance across a range of time series analysis tasks.
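To ground the description above, here is a minimal PyTorch sketch of the core computation (rFFT, a single complex-valued linear layer for frequency interpolation, zero padding, irFFT). The class name `FITSSketch` and the layer sizing are our illustrative assumptions; RIN and the paper's exact normalization and amplitude-compensation details are omitted.

```python
import torch
import torch.nn as nn

class FITSSketch(nn.Module):
    """Minimal sketch: frequency interpolation with one complex linear layer.
    Assumes cutoff <= in_len // 2 so the low-pass band fits the spectrum."""
    def __init__(self, in_len, out_len, cutoff):
        super().__init__()
        self.out_len = out_len
        # Interpolate the kept band (length `cutoff`) by the rate out_len / in_len.
        self.freq_out = int(cutoff * out_len / in_len)
        self.linear = nn.Linear(cutoff, self.freq_out, dtype=torch.cfloat)

    def forward(self, x):                       # x: (batch, in_len), zero-mean
        spec = torch.fft.rfft(x, dim=-1)        # (batch, in_len // 2 + 1) complex
        spec = spec[:, 1:self.linear.in_features + 1]  # drop DC, apply the LPF
        spec = self.linear(spec)                # complex-valued interpolation
        pad = torch.zeros(x.shape[0], self.out_len // 2 + 1,
                          dtype=spec.dtype, device=spec.device)
        pad[:, 1:spec.shape[1] + 1] = spec      # zero DC + high-frequency padding
        return torch.fft.irfft(pad, n=self.out_len, dim=-1)
```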
2 RELATED WORK AND MOTIVATION
2.1 FREQUENCY-AWARE TIME SERIES ANALYSIS MODELS
Recent advancements in time series analysis have witnessed the utilization of frequency domain information to capture and interpret underlying patterns. FNet (Lee-Thorp et al., 2022) leverages a pure attention-based architecture to efficiently capture temporal dependencies and patterns solely in the frequency domain, eliminating the need for convolutional or recurrent layers. On the other hand, FEDformer (Zhou et al., 2022a) and FiLM (Zhou et al., 2022b) incorporate frequency information as supplementary features to enhance the model\u2019s capability in capturing long-term periodic patterns and to speed up computation. Another line of work aims to capture the periodicity inherent in the data. For instance, DLinear (Zeng et al., 2023) adopts a single linear layer to extract the dominant periodicity from the temporal domain and surpasses a range of deep feature-extraction-based methods. More recently, TimesNet (Wu et al., 2023) achieves state-of-the-art results by identifying several dominant frequencies instead of relying on a single dominant periodicity. Specifically, it uses the Fast Fourier Transform (FFT) to find the frequencies with the largest energy and reshapes the original 1D time series into 2D images according to their periods. However, these approaches still rely on feature engineering to identify the dominant period set. Selecting this set based on energy may only consider the dominant period and its harmonics, limiting the information captured. Moreover, these methodologies are still considered inefficient and prone to overfitting.
2.2 DIVIDE AND CONQUER THE FREQUENCY COMPONENTS
Treating a time series as a signal allows us to break it down into a linear combination of sinusoidal components without any information loss. Each component possesses a unique frequency, initial phase, and amplitude. Forecasting directly on the original time series can be challenging, but forecasting each frequency component is comparatively straightforward, as we only need to apply a phase bias to the sinusoidal wave according to the time shift. Subsequently, we linearly combine these shifted sinusoidal waves to obtain the forecasting result. This approach effectively preserves the frequency characteristics of the given look-back window while maintaining semantic consistency between the look-back window and the forecasting horizon. Specifically, the resulting forecasted values maintain the frequency features of the original time series with a reasonable time shift, ensuring that semantic consistency is maintained. However, forecasting each sinusoidal component in the time domain can be cumbersome, as the sinusoidal components are treated as sequences of data points. To address this, we propose conducting this manipulation in the complex frequency domain, which offers a more compact and information-rich representation, as described below.
3 METHOD
3.1 PRELIMINARY: FFT AND COMPLEX FREQUENCY DOMAIN
The Fast Fourier Transform (FFT, (Brigham & Morrow, 1967)) efficiently computes the Discrete Fourier Transform (DFT) of complex number sequences. The DFT transforms discrete-time signals from the time domain to the complex frequency domain. In time series analysis, the Real FFT (rFFT) is often employed when working with real input signals. It condenses an input of $N$ real numbers into a sequence of $N/2+1$ complex numbers, representing the signal in the complex frequency domain.
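As a quick sanity check of the rFFT dimensionality stated above (a sketch using NumPy):

```python
import numpy as np

x = np.random.randn(96)               # a real-valued series of length N = 96
spec = np.fft.rfft(x)                 # complex frequency representation
assert spec.shape == (96 // 2 + 1,)   # N/2 + 1 = 49 complex coefficients
```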
Complex Frequency Domain. In Fourier analysis, the complex frequency domain is a representation of a signal in which each frequency component is characterized by a complex number. This complex number captures both the amplitude and phase of the component, providing a comprehensive description. The amplitude of a frequency component represents the magnitude or strength of that component in the original time-domain signal, while the phase represents the temporal shift or delay introduced by that component. Mathematically, the complex number associated with a frequency component can be represented as a complex exponential element with a given amplitude and phase: $X(f) = |X(f)| e^{j\theta(f)}$, where $X(f)$ is the complex number associated with the frequency component at frequency $f$, $|X(f)|$ is the amplitude of the component, and $\theta(f)$ is the phase of the component. As shown in Fig. 1(a), in the complex plane, the complex exponential element can be visualized as a vector with length equal to the amplitude and angle equal to the phase: $X(f) = |X(f)| (\cos\theta(f) + j\sin\theta(f))$. Therefore, the complex number in the complex frequency domain provides a concise and elegant means of representing the amplitude and phase of each frequency component in the Fourier transform.

[Figure 1 diagrams omitted; (a) shows a complex number $X = |X|(\cos\theta + j\sin\theta) = |X|e^{j\theta}$ on the complex plane; (b) shows the product $X' = (|X_1| \cdot |X_2|) e^{j(\theta_1 + \theta_2)}$ of $X_1 = |X_1|e^{j\theta_1}$ and $X_2$.]
Figure 1: Illustration of Complex Number Visualization and Multiplication. (a) Complex number on the complex plane. (b) Complex number multiplication.

Time Shift and Phase Shift. The time shift of a signal corresponds to a phase shift in the frequency domain. In particular, in the complex frequency domain, such a phase shift can be expressed as multiplication by a unit complex exponential element with the corresponding phase. Mathematically, if we shift a signal $x(t)$ forward in time by a constant amount $\tau$, resulting in the signal $x(t - \tau)$, the Fourier transform is given by: $X_\tau(f) = e^{-j 2\pi f \tau} X(f) = |X(f)| e^{j(\theta(f) - 2\pi f \tau)} = [\cos(-2\pi f \tau) + j \sin(-2\pi f \tau)] X(f)$. The shifted signal still has an amplitude of $|X(f)|$, while the phase $\theta_\tau(f) = \theta(f) - 2\pi f \tau$ exhibits a shift that is linear in the time shift. In summary, amplitude scaling and phase shifting can be simultaneously expressed as the multiplication of complex numbers, as shown in Fig. 1(b).
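The time-shift/phase-shift correspondence can be verified numerically with a small sketch (ours, using a circular shift so the DFT identity holds exactly):

```python
import numpy as np

# A circular time shift multiplies each rFFT coefficient by
# exp(-j * 2*pi * f * tau / N): a pure phase shift, amplitude unchanged.
N, tau = 128, 5
t = np.arange(N)
x = np.sin(2 * np.pi * 4 * t / N) + 0.5 * np.sin(2 * np.pi * 9 * t / N)
X = np.fft.rfft(x)
X_shifted = np.fft.rfft(np.roll(x, tau))     # x(t - tau), circularly shifted
f = np.arange(N // 2 + 1)
phase = np.exp(-2j * np.pi * f * tau / N)
assert np.allclose(X_shifted, X * phase)     # same amplitude, linearly shifted phase
```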
3.2 FITS PIPELINE
Motivated by the fact that a longer time series provides a higher frequency resolution in its frequency representation, we train FITS to extend a time series segment by interpolating the frequency representation of the input segment. We use a single complex-valued linear layer to learn this interpolation, so that it can learn amplitude scaling and phase shifting as the multiplication of complex numbers during the interpolation process. As shown in Fig. 2, we use rFFT to project time series segments into the complex frequency domain. After the interpolation, the frequency representation is projected back with the inverse rFFT (irFFT).

[Figure 2 diagram omitted; it depicts the flow RIN \u2192 rFFT \u2192 LPF \u2192 complex-valued linear layer \u2192 zero padding \u2192 irFFT \u2192 iRIN.]
Figure 2: Pipeline of FITS, with a focus on the forecasting task. Initially, the time series is normalized to zero mean, followed by rFFT for frequency domain projection. After the LPF, a single complex-valued linear layer interpolates the frequency representation. Zero padding and irFFT then revert this back to the time domain, with iRIN finally reversing the normalization. The reconstruction task follows the same pipeline, except for the reconstruction supervision loss. Please check the appendix for details.

However, the mean of such a segment results in a very large zero-frequency component in its complex frequency representation. To address this, we pass the segment through reversible instance-wise normalization (RIN) (Kim et al., 2022) to obtain a zero-mean instance. As a result, the normalized complex frequency representation has a length of $N/2$, where $N$ represents the original length of the time series. Additionally, FITS integrates a low-pass filter (LPF) to further reduce its model size. The LPF effectively eliminates high-frequency components above a specified cutoff frequency, compacting the model representation while preserving essential time series information. Despite operating in the frequency domain, FITS is supervised in the time domain using standard loss functions like Mean Squared Error (MSE) after the irFFT. This allows for versatile supervision tailored to various downstream time series tasks. In the case of forecasting tasks, we generate the look-back window along with the horizon, as shown in Fig. 2. This allows us to provide supervision for forecasting and backcasting, where the model is encouraged to accurately reconstruct the look-back window. Our ablation study reveals that combining backcast and forecast supervision can yield improved performance in certain scenarios. For reconstruction tasks, we downsample the original time series segment according to a specific downsampling rate. Subsequently, FITS is employed to perform frequency interpolation, enabling the reconstruction of the downsampled segment back to its original form. Direct supervision is then applied using a reconstruction loss to ensure faithful reconstruction. The reconstruction tasks also follow the pipeline in Fig. 2, with the supervision replaced by the reconstruction loss.
3.3 KEY MECHANISMS OF FITS
Complex Frequency Linear Interpolation.
To control the output length of the model, we introduce an interpolation rate denoted as $\eta$, which represents the ratio of the model\u2019s output length $L_o$ to its corresponding input length $L_i$. Frequency interpolation operates on the normalized complex frequency representation, which has half the length of the original time series. Importantly, the same rate applies in the frequency domain, as indicated by the equation:

$\eta_{freq} = \frac{L_o/2}{L_i/2} = \frac{L_o}{L_i} = \eta$

Based on this formula, for an arbitrary frequency $f$, the frequency band $1 \sim f$ in the original signal is linearly projected to the frequency band $1 \sim \eta f$ in the output signal. As a result, we define the input length of our complex-valued linear layer as $L$ and the interpolated output length as $\eta L$. Notably, when applying the Low Pass Filter (LPF), the value of $L$ corresponds to the cutoff frequency (COF) of the LPF. After performing frequency interpolation, the complex frequency representation is zero-padded to a length of $L_o/2$, where $L_o$ represents the desired output length. Prior to applying the irFFT, an additional zero is introduced as the representation\u2019s zero-frequency component.
Low Pass Filter (LPF). The primary objective of incorporating the LPF within FITS is to compress the model\u2019s volume while preserving essential information. The LPF achieves this by discarding frequency components above a specified cutoff frequency (COF), resulting in a more concise frequency domain representation. The LPF retains the relevant information in the time series while discarding components beyond the model\u2019s learning capability, ensuring that a significant portion of the original time series\u2019 meaningful content is preserved. As demonstrated in Fig. 3, the filtered waveform exhibits minimal distortion even when only a quarter of the original frequency domain representation is preserved. Furthermore, the high-frequency components filtered out by the LPF typically comprise noise, which is inherently irrelevant to effective time series modeling.

[Figure 3 plots omitted; panels: (a) Original, no filter, MSE 0.0000; (b) COF at 6th harmonic, COF 120, MSE 0.0727; (c) COF at 3rd harmonic, COF 60, MSE 0.1660; (d) COF at 2nd harmonic, COF 40, MSE 0.4296.]
Figure 3: Waveform (1st row) and amplitude spectrum (2nd row) of a time series segment selected from the \u2019OT\u2019 channel of the ETTh1 dataset, spanning from the 1500th to the 1980th data point. The segment has a length of 480, and its dominant periodicity is 24, corresponding to a base frequency of 20. The blue lines represent the waveform/spectrum with no applied filter, while the orange lines represent the waveform/spectrum with the filter applied. The filter cutoff frequency is chosen as a harmonic of the original time series.

Selecting an appropriate cutoff frequency (COF) remains a nontrivial challenge. To address this, we propose a method based on the harmonic content of the dominant frequency. Harmonics, which are integer multiples of the dominant frequency, play a significant role in shaping the waveform of a time series. By aligning the cutoff frequency with these harmonics, we keep the relevant frequency components associated with the signal\u2019s structure and periodicity. This approach leverages the inherent relationship between frequencies to extract meaningful information while suppressing noise and irrelevant high-frequency components. The impact of the COF at different harmonics on the waveform is shown in Fig. 3. We further elaborate on the impact of the COF in our experimental results.
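A minimal sketch of this COF heuristic (our own illustrative code): estimate the dominant base frequency from the amplitude spectrum and place the cutoff at its $n$-th harmonic.

```python
import numpy as np

def cutoff_at_harmonic(x, n):
    """COF = n * (dominant base frequency index); a sketch of the heuristic."""
    spec = np.abs(np.fft.rfft(x - x.mean()))
    base = int(np.argmax(spec[1:]) + 1)   # dominant (base) frequency index
    return n * base

# Period 24 over 480 points => base frequency index 20, as in Fig. 3.
x = np.sin(2 * np.pi * np.arange(480) / 24)
print(cutoff_at_harmonic(x, 2))           # -> 40, matching Fig. 3(d)
```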
Weight Sharing. FITS handles multivariate tasks by sharing weights across channels as in (Zeng et al., 2023), balancing performance and efficiency. In practice, channels often share a common base frequency when originating from the same physical system, such as 50/60Hz for electrical appliances or daily base frequencies for city traffic. Most of the datasets used in our experiments belong to this category. For datasets that indeed contain channels with different base frequencies, we can cluster the channels according to their base frequencies and train an individual FITS model for each cluster.

4 EXPERIMENTS FOR FORECASTING
4.1 FORECASTING AS FREQUENCY INTERPOLATION
Typically, the forecasting horizon is shorter than the given look-back window, rendering direct interpolation unsuitable. Instead, we formulate the forecasting task as the interpolation of a look-back window, of length $L$, to a combination of the look-back window and forecasting horizon, of length $L+H$. This design enables us to provide more supervision during training. With this approach, we can supervise not only the forecasting horizon but also the backcast task on the look-back window. Our experimental results demonstrate that this unique training strategy contributes to the improved performance of FITS. The interpolation rate of the forecasting task is calculated by $\eta_{Fore} = 1 + \frac{H}{L}$, where $L$ represents the length of the look-back window and $H$ represents the length of the forecasting horizon.
4.2 EXPERIMENT SETTINGS
Datasets. All datasets used in our experiments are widely used and publicly available real-world datasets, including Traffic, Electricity, Weather, and ETT (Zhou et al., 2021). We summarize the characteristics of these datasets in the appendix. Apart from these datasets for long-term time series forecasting, we also use the M4 dataset to test short-term forecasting performance.
Baselines. To compare FITS with state-of-the-art time series forecasting models, including PatchTST (Nie et al., 2023), TimesNet (Wu et al., 2023), FEDformer (Zhou et al., 2022a), and LTSF-Linear (Zeng et al., 2023), we rerun all the experiments with the code and scripts provided by their official implementations1. We report the comparison with NBeats (Oreshkin et al., 2019), NHits (Challu et al., 2023), and other transformer-based methods in the appendix.
Evaluation metrics. We follow previous works (Zhou et al., 2022a; Zeng et al., 2023; Zhang et al., 2022) in comparing forecasting performance using Mean Squared Error (MSE) as the core metric. Moreover, to evaluate short-term forecasting, we use the symmetric Mean Absolute Percentage Error (SMAPE), following TimesNet (Wu et al., 2023).
Implementation details. We conduct a grid search over the look-back window in {90, 180, 360, 720} and the cutoff frequency, the only hyper-parameter. Further experiments also show that a longer look-back window results in better performance in most cases. To avoid information leakage, we choose the hyper-parameters based on the performance on the validation set. We report the result of FITS as the mean and standard deviation of 5 runs with randomly chosen seeds.
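As a sketch of this formulation (reusing the illustrative `FITSSketch` module from the introduction; the sizes below are arbitrary, not the paper's tuned settings):

```python
import torch

# eta_fore = 1 + H / L: interpolate the look-back window (length L) into the
# concatenated backcast + forecast (length L + H), then split for supervision.
L, H, cutoff = 360, 96, 40                  # illustrative sizes
model = FITSSketch(in_len=L, out_len=L + H, cutoff=cutoff)
x = torch.randn(8, L)                       # a batch of (zero-mean) look-back windows
out = model(x)                              # shape: (8, L + H)
backcast, forecast = out[:, :L], out[:, L:] # supervise either F or B+F
```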
4.3 COMPARISONS WITH SOTAS
Competitive Performance with High Efficiency. We present the results of our experiments on long-term forecasting in Tab. 1 and Tab. 2. The results for short-term forecasting on the M4 dataset are provided in the appendix. Remarkably, FITS consistently achieves comparable or even superior performance across all experiments. Tab. 3 presents the number of trainable parameters and MACs2 for various TSF models using a look-back window of 96 and a forecasting horizon of 720 on the Electricity dataset. The table clearly demonstrates the exceptional efficiency of FITS compared to other models. Among the listed models, the parameter counts range from millions down to thousands. Notably, large models such as TimesNet and Pyraformer require a staggering number of parameters, with 300.6M and 241.4M, respectively. Similarly, popular models like Informer, Autoformer, and FEDformer have parameter counts in the range of 13.61M to 20.68M. Even the lightweight yet state-of-the-art model PatchTST has a parameter count of over 1 million. In contrast, FITS stands out as a highly efficient model with an impressively low parameter count. With only 4.5K to 16K parameters, FITS achieves comparable or even superior performance compared to these larger models. It is worth highlighting that FITS requires significantly fewer parameters than the next smallest model, DLinear. For instance, when considering a 720 look-back window and a 720 forecasting horizon, the DLinear model requires over 1 million parameters, whereas FITS achieves similar performance with only 10k-50k parameters.
1 With a long-standing bug in the coding architecture fixed; see the README file in our codebase.
2 MACs (Multiply-Accumulate Operations) is a commonly used metric that counts the total number of multiplication and addition operations in a neural network.

Table 1: Long-term forecasting results on the ETT datasets in MSE. In the original paper, the best result is highlighted in bold and the second best with underline. IMP is the improvement between FITS and the second best/best result, where a larger value indicates a better improvement. Most of the STDs are under 5e-4 and shown as 0.000 in this table (horizons 96/192/336/720).

| Model | ETTh1 | ETTh2 | ETTm1 | ETTm2 |
| PatchTST | 0.385 / 0.413 / 0.440 / 0.456 | 0.274 / 0.338 / 0.367 / 0.391 | 0.292 / 0.330 / 0.365 / 0.419 | 0.163 / 0.219 / 0.276 / 0.368 |
| DLinear | 0.384 / 0.443 / 0.446 / 0.504 | 0.282 / 0.350 / 0.414 / 0.588 | 0.301 / 0.335 / 0.371 / 0.426 | 0.171 / 0.237 / 0.294 / 0.426 |
| FEDformer | 0.375 / 0.427 / 0.459 / 0.484 | 0.340 / 0.433 / 0.508 / 0.480 | 0.362 / 0.393 / 0.442 / 0.483 | 0.189 / 0.256 / 0.326 / 0.437 |
| TimesNet | 0.384 / 0.436 / 0.491 / 0.521 | 0.340 / 0.402 / 0.452 / 0.462 | 0.338 / 0.374 / 0.410 / 0.478 | 0.187 / 0.249 / 0.321 / 0.408 |
| FITS | 0.372 / 0.404 / 0.427 / 0.424 | 0.271 / 0.331 / 0.354 / 0.377 | 0.303 / 0.337 / 0.366 / 0.415 | 0.162 / 0.216 / 0.268 / 0.348 |
| STD | 0.000 / 0.000 / 0.000 / 0.000 | 0.000 / 0.000 / 0.000 / 0.000 | 0.000 / 0.000 / 0.000 / 0.000 | 0.000 / 0.000 / 0.000 / 0.000 |
| IMP | 0.003 / 0.009 / 0.013 / 0.032 | 0.003 / 0.007 / 0.013 / 0.014 | -0.011 / -0.007 / -0.001 / 0.004 | 0.001 / 0.003 / 0.008 / 0.020 |

Table 2: Long-term forecasting results on three popular datasets in MSE. In the original paper, the best result is highlighted in bold and the second best with underline. IMP is the improvement between FITS and the second best/best result, where a larger value indicates a better improvement. Most of the STDs are under 5e-4 and shown as 0.000 in this table (horizons 96/192/336/720).
| Model | Weather | Electricity | Traffic |
| PatchTST | 0.151 / 0.195 / 0.249 / 0.321 | 0.129 / 0.149 / 0.166 / 0.210 | 0.366 / 0.388 / 0.398 / 0.457 |
| DLinear | 0.174 / 0.217 / 0.262 / 0.332 | 0.140 / 0.153 / 0.169 / 0.204 | 0.413 / 0.423 / 0.437 / 0.466 |
| FEDformer | 0.246 / 0.292 / 0.378 / 0.447 | 0.188 / 0.197 / 0.212 / 0.244 | 0.573 / 0.611 / 0.621 / 0.630 |
| TimesNet | 0.172 / 0.219 / 0.280 / 0.365 | 0.168 / 0.184 / 0.198 / 0.220 | 0.593 / 0.617 / 0.629 / 0.640 |
| FITS | 0.143 / 0.186 / 0.236 / 0.307 | 0.134 / 0.149 / 0.165 / 0.203 | 0.385 / 0.397 / 0.410 / 0.448 |
| STD | 0.001 / 0.001 / 0.001 / 0.001 | 0.000 / 0.000 / 0.000 / 0.000 | 0.001 / 0.000 / 0.000 / 0.001 |
| IMP | 0.008 / 0.009 / 0.013 / 0.014 | -0.005 / 0.000 / 0.001 / 0.001 | -0.019 / -0.009 / -0.012 / 0.009 |

Table 3: Number of trainable parameters, MACs, and inference time of TSF models under look-back window = 96 and forecasting horizon = 720 on the Electricity dataset.

| Model | Parameters | MACs | Infer. Time |
| TimesNet | 301.7M | 1226.49G | N/A |
| Pyraformer | 241.4M | 0.80G | 3.4ms |
| Informer | 14.38M | 3.93G | 49.3ms |
| Autoformer | 14.91M | 4.41G | 164.1ms |
| FiLM | 14.91M | 5.97G | 123.0ms |
| FEDformer | 20.68M | 4.41G | 40.5ms |
| PatchTST | 1.5M | 5.07G | 3.3ms |
| DLinear | 139.7K | 40M | 0.4ms (3.05ms CPU) |
| FITS (Ours) | 4.5K\u223c10K | 1.6M\u223c8.9M | 0.6ms (2.55ms CPU) |

This analysis showcases the remarkable efficiency of FITS. Despite its small size, FITS consistently achieves competitive results, making it an attractive option for time series analysis tasks. FITS demonstrates that it is possible to achieve state-of-the-art or close-to-state-of-the-art performance with a considerably reduced parameter footprint, making it an ideal choice for resource-constrained environments.
Case Study on the ETTh2 Dataset. We conduct a comprehensive case study of the performance of FITS using the ETTh2 dataset, which further highlights the impact of the look-back window and cutoff frequency on model performance. We provide case studies on other datasets in the appendix. In our experiments, we observe that increasing the look-back window generally leads to improved performance, while the effect of increasing the cutoff frequency is minor. Tab. 4 showcases the performance results obtained with different look-back window sizes and cutoff frequencies. Larger look-back windows tend to yield better performance across the board. On the other hand, increasing the cutoff frequency only results in marginal performance improvements. However, it is important to note that higher cutoff frequencies come at the expense of increased computational resources, as illustrated in Tab. 5. Considering these observations, we find that utilizing a longer look-back window in combination with a low cutoff frequency achieves near state-of-the-art performance at minimal computational cost. For instance, FITS surpasses other methods when employing a 720 look-back window and setting the cutoff frequency to the second harmonic. Remarkably, FITS achieves state-of-the-art performance with a parameter count of only around 10k. Moreover, by reducing the look-back window to 360, FITS already achieves close-to-state-of-the-art performance with the cutoff frequency set to the second harmonic, further reducing the model\u2019s parameter count to under 5k (as shown in Tab. 5).

Table 4: The results on the ETTh2 dataset. In the original paper, values are visualized with a green background, where a darker background indicates worse performance; the top-5 best results are highlighted with a red background, and the absolute best result is highlighted in red bold font. F represents supervision on the forecasting task, while B+F represents supervision on both the backcasting and forecasting tasks.
Columns give look-back windows 90/180/360/720, each reported as F / B+F:

| Horizon | COF / nth Harmonic | 90 | 180 | 360 | 720 |
| 96 | 2 | 0.293889 / 0.291371 | 0.290314 / 0.288107 | 0.279141 / 0.276635 | 0.275600 / 0.274817 |
| 96 | 3 | 0.293242 / 0.291333 | 0.289803 / 0.287171 | 0.278128 / 0.275723 | 0.273972 / 0.273567 |
| 96 | 4 | 0.292438 / 0.290559 | 0.288541 / 0.286174 | 0.277293 / 0.274494 | 0.272384 / 0.272031 |
| 96 | 5 | 0.292387 / 0.290369 | 0.288530 / 0.285527 | 0.276594 / 0.274042 | 0.272085 / 0.271719 |
| 96 | 6 | 0.292517 / 0.290466 | 0.287814 / 0.285384 | 0.275930 / 0.273883 | 0.271312 / 0.271028 |
| 192 | 2 | 0.379401 / 0.377047 | 0.361995 / 0.359322 | 0.337767 / 0.336419 | 0.334493 / 0.334621 |
| 192 | 3 | 0.379080 / 0.376874 | 0.360790 / 0.358059 | 0.337391 / 0.335736 | 0.333573 / 0.333758 |
| 192 | 4 | 0.378816 / 0.376472 | 0.360524 / 0.357973 | 0.336085 / 0.334531 | 0.332310 / 0.332475 |
| 192 | 5 | 0.378529 / 0.376429 | 0.360234 / 0.357533 | 0.336286 / 0.334475 | 0.332122 / 0.332281 |
| 192 | 6 | 0.378581 / 0.376481 | 0.360049 / 0.357478 | 0.335526 / 0.333846 | 0.331421 / 0.331667 |
| 336 | 2 | 0.419131 / 0.417096 | 0.391167 / 0.388905 | 0.360300 / 0.359665 | 0.356390 / 0.356319 |
| 336 | 3 | 0.419264 / 0.416645 | 0.389740 / 0.387614 | 0.359802 / 0.359291 | 0.355825 / 0.355972 |
| 336 | 4 | 0.419237 / 0.416085 | 0.389790 / 0.387815 | 0.358774 / 0.358096 | 0.354695 / 0.354880 |
| 336 | 5 | 0.418985 / 0.416009 | 0.388972 / 0.387115 | 0.358652 / 0.358093 | 0.354805 / 0.354794 |
| 336 | 6 | 0.418359 / 0.416369 | 0.388943 / 0.387183 | 0.358011 / 0.357432 | 0.354055 / 0.354205 |
| 720 | 2 | 0.420888 / 0.418226 | 0.405711 / 0.404412 | 0.387592 / 0.386235 | 0.379710 / 0.380367 |
| 720 | 3 | 0.420441 / 0.418290 | 0.404405 / 0.403520 | 0.386570 / 0.385907 | 0.379501 / 0.380132 |
| 720 | 4 | 0.420404 / 0.417756 | 0.404631 / 0.403425 | 0.386556 / 0.384828 | 0.378209 / 0.378890 |
| 720 | 5 | 0.419888 / 0.417725 | 0.403562 / 0.402755 | 0.385489 / 0.384758 | 0.378227 / 0.378810 |
| 720 | 6 | 0.419376 / 0.417854 | 0.403643 / 0.402616 | 0.384709 / 0.383960 | 0.377463 / 0.378101 |

Table 5: The number of parameters under different settings on the ETTh1 & ETTh2 datasets (columns: look-back windows 90/180/360/720).

| Horizon | COF / nth Harmonic | 90 | 180 | 360 | 720 |
| 96 | 2 | 703 | 1053 | 2279 | 5913 |
| 96 | 3 | 1035 | 1820 | 4307 | 12064 |
| 96 | 4 | 1431 | 2752 | 6975 | 20385 |
| 96 | 5 | 1922 | 3876 | 10374 | 31042 |
| 96 | 6 | 2450 | 5192 | 14338 | 43734 |
| 192 | 2 | 1064 | 1431 | 2752 | 6643 |
| 192 | 3 | 1564 | 2450 | 5192 | 13520 |
| 192 | 4 | 2187 | 3698 | 8475 | 22815 |
| 192 | 5 | 2914 | 5253 | 12558 | 34694 |
| 192 | 6 | 3710 | 7021 | 17334 | 48856 |
| 336 | 2 | 1615 | 1998 | 3483 | 7665 |
| 336 | 3 | 2392 | 3395 | 6608 | 15704 |
| 336 | 4 | 3321 | 5160 | 10725 | 26460 |
| 336 | 5 | 4402 | 7293 | 15834 | 40006 |
| 336 | 6 | 5600 | 9794 | 21828 | 56539 |
| 720 | 2 | 3078 | 3510 | 5418 | 10512 |
| 720 | 3 | 4554 | 5950 | 10266 | 21424 |
| 720 | 4 | 6318 | 9030 | 16650 | 36180 |
| 720 | 5 | 8370 | 12750 | 24570 | 54780 |
| 720 | 6 | 10710 | 17110 | 34026 | 77224 |

These results emphasize the lightweight nature of FITS, making it highly suitable for deployment and training on edge devices with limited computational resources. By carefully selecting the look-back window and cutoff frequency, FITS can achieve excellent performance while maintaining computational efficiency, making it an appealing choice for real-world applications.

5 EXPERIMENT FOR ANOMALY DETECTION
5.1 RECONSTRUCTION AS FREQUENCY INTERPOLATION
As discussed before, we tackle anomaly detection tasks in a self-supervised reconstruction manner. Specifically, we perform $N$-times equidistant downsampling on the input and train a FITS network with an interpolation rate of $\eta_{Rec} = N$ to up-sample it (see the illustrative sketch at the end of Section 5.3). Please check Appendix A for details.
5.2 EXPERIMENT SETTINGS
Datasets. We use five commonly used benchmark datasets: SMD (Server Machine Dataset (Su et al., 2019)), PSM (Pooled Server Metrics (Abdulaal et al., 2021)), SWaT (Secure Water Treatment (Mathur & Tippenhauer, 2016)), MSL (Mars Science Laboratory rover), and SMAP (Soil Moisture Active Passive satellite) (Hundman et al., 2018). We report the performance on the synthetic dataset (Lai et al., 2021) in Appendix F.
Baselines.
We compare FITS with models such as TimesNet (Wu et al., 2023), Anomaly Transformer (Xu et al., 2022), THOC (Shen et al., 2020), OmniAnomaly (Su et al., 2019), and DGHL (Challu et al., 2022b). Following TimesNet (Wu et al., 2023), we also compare the anomaly detection performance with other models (Zeng et al., 2023; Zhang et al., 2022; Woo et al., 2022; Zhou et al., 2022a).
Evaluation metrics. Following previous works (Xu et al., 2022; Shen et al., 2020; Wu et al., 2023), we use Precision, Recall, and F1-score as metrics.
Implementation details. We use a window size of 200 and downsample the time series segment by a factor of 4 as the input, training FITS to reconstruct the original segment. We follow the methodology of the Anomaly Transformer (Xu et al., 2022), where time points exceeding a certain reconstruction loss threshold are classified as anomalies. The threshold is selected based on the highest F1 score achieved on the validation set. To handle consecutive abnormal segments, we adopt a widely used adjustment strategy (Su et al., 2019; Xu et al., 2018; Shen et al., 2020), considering all anomalies within a specific successive abnormal segment as correctly detected when one anomalous time point is identified. This approach aligns with real-world applications, where an abnormal time point often draws attention to the entire segment.

Table 6: Anomaly detection results (F1-scores) on 5 datasets. In the original paper, the best result is highlighted in bold and the second best with underline. Full results are reported in the appendix.

| Dataset | FITS | TimesNet | Anomaly Transformer | THOC | OmniAnomaly | Stationary Transformer | DGHL | OCSVM | IForest | LightTS | DLinear | IMP |
| SMD | 99.95 | 85.81 | 92.33 | 84.99 | 85.22 | 84.72 | N/A | 56.19 | 53.64 | 82.53 | 77.1 | 7.62 |
| PSM | 93.96 | 97.47 | 97.89 | 98.54 | 80.83 | 97.29 | N/A | 70.67 | 83.48 | 97.15 | 93.55 | -3.93 |
| SWaT | 98.9 | 91.74 | 94.07 | 85.13 | 82.83 | 79.88 | 87.47 | 47.23 | 47.02 | 93.33 | 87.52 | 4.83 |
| SMAP | 70.74 | 71.52 | 96.69 | 90.68 | 86.92 | 71.09 | 96.38 | 56.34 | 55.53 | 69.21 | 69.26 | -25.95 |
| MSL | 78.12 | 85.15 | 93.59 | 89.69 | 87.67 | 77.5 | 94.08 | 70.82 | 66.45 | 78.95 | 84.88 | -15.96 |

5.3 COMPARISONS WITH SOTAS
In Table 6, FITS stands out with outstanding results on several datasets. In particular, on the SMD and SWaT datasets, FITS achieves nearly perfect F1-scores of around 99.95% and 98.9%, respectively, showcasing its precision in anomaly detection and classification. In contrast, models like TimesNet, Anomaly Transformer, and Stationary Transformer struggle to match FITS\u2019 performance on these datasets. However, FITS shows comparatively lower performance on the SMAP and MSL datasets. These datasets present a challenge due to their binary event data nature, which may not be effectively captured by FITS\u2019 frequency domain representation. In such cases, time-domain modeling is preferable, as the raw data format is already sufficiently compact. Thus, models specifically designed for anomaly detection, such as THOC and OmniAnomaly, achieve higher F1-scores on these datasets. For a more comprehensive evaluation, waveform visualizations and detailed analysis can be found in the appendix, providing deeper insights into FITS\u2019 strengths and limitations in different anomaly detection scenarios. It is important to note that the reported results are achieved with a parameter range of 1-4K and MACs (Multiply-Accumulate Operations) of 10-137K, which will be further detailed in the appendix. While the datasets in use are instrumental, it is imperative to acknowledge their limitations, as delineated in (Lai et al., 2021).
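To make the reconstruction-based detection concrete, here is a minimal sketch (our own illustrative code, not the authors' implementation): downsample by a factor of 4, reconstruct with a FITS-style model ($\eta_{Rec} = 4$), threshold the pointwise reconstruction error, and apply the point-adjustment strategy described above.

```python
import numpy as np

def detect_anomalies(window, model, threshold, factor=4):
    """Reconstruct the window from its downsampled version and flag points
    whose squared reconstruction error exceeds `threshold`."""
    x_down = window[::factor]                 # equidistant downsampling
    recon = model(x_down)                     # reconstruction, same length as window
    return (window - recon) ** 2 > threshold  # boolean anomaly flags

def point_adjust(pred, gt):
    """Adjustment strategy: if any point in a ground-truth anomalous segment
    is detected, mark the whole segment as detected."""
    pred = pred.copy()
    i = 0
    while i < len(gt):
        if gt[i]:
            j = i
            while j < len(gt) and gt[j]:
                j += 1
            if pred[i:j].any():
                pred[i:j] = True
            i = j
        else:
            i += 1
    return pred
```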
Particularly on the synthetic dataset from (Lai et al., 2021), FITS demonstrates impeccable detection capabilities, registering a flawless 100% F1 score. For a detailed breakdown, readers can refer to the table in Appendix F. This dataset marries a sinusoidal wave of a single frequency with intricately introduced anomaly patterns, which pose challenges for identification in the time domain. Yet FITS, leveraging the frequency domain, adeptly discerns these anomalies, particularly those introducing unexpected frequency components. Moreover, FITS boasts an impressive sub-millisecond inference speed \u2014 a marked distinction when compared to the latency typical of larger models or communication overheads. This speed underscores FITS\u2019s suitability as a first-responder tool for promptly spotting critical errors. When paired as a preliminary filter with a specialized AD algorithm geared for detailed detection, the combined system stands as a paragon of both robustness and swift responsiveness in the face of diverse anomalies."
            }
        ],
        "Weiran Huang": [
            {
                "url": "http://arxiv.org/abs/2111.00743v4",
                "title": "Towards the Generalization of Contrastive Self-Supervised Learning",
                "abstract": "Recently, self-supervised learning has attracted great attention, since it\nonly requires unlabeled data for model training. Contrastive learning is one\npopular method for self-supervised learning and has achieved promising\nempirical performance. However, the theoretical understanding of its\ngeneralization ability is still limited. To this end, we define a kind of\n$(\\sigma,\\delta)$-measure to mathematically quantify the data augmentation, and\nthen provide an upper bound of the downstream classification error rate based\non the measure. It reveals that the generalization ability of contrastive\nself-supervised learning is related to three key factors: alignment of positive\nsamples, divergence of class centers, and concentration of augmented data. The\nfirst two factors are properties of learned representations, while the third\none is determined by pre-defined data augmentation. We further investigate two\ncanonical contrastive losses, InfoNCE and cross-correlation, to show how they\nprovably achieve the first two factors. Moreover, we conduct experiments to\nstudy the third factor, and observe a strong correlation between downstream\nperformance and the concentration of augmented data.",
                "authors": "Weiran Huang, Mingyang Yi, Xuyang Zhao, Zihao Jiang",
                "published": "2021-11-01",
                "updated": "2023-03-02",
                "primary_cat": "cs.LG",
                "cats": [
                    "cs.LG",
                    "cs.AI",
                    "cs.CV",
                    "stat.ML"
                ],
                "main_content": "INTRODUCTION Contrastive Self-Supervised Learning (SSL) has attracted great attention for its fantastic data efficiency and generalization ability in computer vision (He et al., 2020; Chen et al., 2020a;b; Grill et al., 2020; Chen & He, 2021; Zbontar et al., 2021) and natural language processing (Fang et al., 2020; Wu et al., 2020; Giorgi et al., 2020; Gao et al., 2021; Yan et al., 2021). It learns the representation through a large number of unlabeled data and manually designed supervision signals (i.e., regarding the augmented views of a data sample as positive samples). The model is updated by encouraging the features of positive samples to be close to each other.
To overcome the feature collapse issue, various losses (e.g., InfoNCE (Chen et al., 2020a; He et al., 2020) and cross-correlation (Zbontar et al., 2021)) and training strategies (e.g., stop gradient (Grill et al., 2020; Chen & He, 2021)) are proposed. In spite of the empirical success of contrastive SSL in terms of its generalization ability on downstream tasks, the theoretical understanding is still limited. Arora et al. (2019) propose a theoretical framework to show the provable downstream performance of contrastive SSL based on the InfoNCE loss. However, their results rely on the assumption that positive samples are drawn from the same latent class, instead of being augmented views of a data point as in practice. Wang & Isola (2020) propose alignment and uniformity to explain the downstream performance, but these are empirical indicators and lack theoretical generalization guarantees. Both of the above works avoid characterizing the important role of data augmentation, which is the key to the success of contrastive SSL, since the only human knowledge is injected via data augmentation. Recently, HaoChen et al. (2021) propose to model the augmented data as a graph and study contrastive SSL from a matrix decomposition perspective, but their analysis is only applicable to their own spectral contrastive loss. Besides the limitations of existing contrastive SSL theories, there are also some interesting empirical observations that have not been unraveled theoretically yet. For example, why does richer data augmentation lead to a more clustered structure in the embedding space (Figure 1) as well as better downstream performance (also observed by Chen et al. (2020a))? Why is aligning positive samples (augmented from the "same data point") able to gather the samples from the "same latent class" into a cluster (Figure 1c)? More interestingly, decorrelating components of representation like Barlow Twins (Zbontar et al., 2021) does not directly optimize the geometry of embedding space, but it still results in the clustered structure. Why is this?

Figure 1: SimCLR's embedding space with different richnesses of data augmentations on CIFAR-10. (a) Initial; (b) only color distortion; (c) multiple transformations.

Figure 2: Mechanism of clustering.

In this paper, we focus on exploring the generalization ability of contrastive SSL provably, which can explain the above interesting observations. We start with understanding the role of data augmentation in contrastive SSL. Intuitively, samples from the same latent class are likely to have similar augmented views, which are mapped to close locations in the embedding space. Since the augmented views of each sample are encouraged to be clustered in the embedding space by contrastive learning, different samples from the same latent class tend to be pulled closer. As an example, let's consider two images of dogs with different backgrounds (Figure 2). If we augment them with the transformation "crop", we may get two similar views (dog heads), whose representations (gray points in the embedding space) are close.
As the augmented views of each dog image are enforced to be close in the embedding space due to the objective of contrastive learning, the representations of two dog images (green and blue points) will be pulled closer to their augmented views (gray points). In this way, aligning positive samples is able to gather samples from the same class, and thus results in the clustered embedding space. Following the above intuition, we define the augmented distance between two samples as the minimum distance between their augmented views, and further introduce the (σ, δ)-augmentation to measure the concentration of augmented data, i.e., for each latent class, the proportion of samples located in a ball with diameter δ (w.r.t. the augmented distance) is larger than σ. With the mathematical description of data augmentation settled, we then prove an upper bound of the downstream classification error rate in Section 3. It reveals that the generalization of contrastive SSL is related to three key factors. The first one is alignment of positive samples, which is a common objective that contrastive learning algorithms aim to optimize. The second one is divergence of class centers, which prevents the collapse of representation. The third factor is concentration of augmented data, i.e., a sharper concentration of augmented data indicates a better generalization error bound. We remark that the first two factors are properties of representations that can be optimized during the learning process. However, the third factor is determined by pre-defined data augmentation and is independent of the learning process. Thus, data augmentation plays a crucial role in contrastive SSL. We then study the above three factors in more depth. In Section 4, we rigorously prove that not only the InfoNCE loss but also the cross-correlation loss (which does not directly optimize the geometry of embedding space) can satisfy the first two factors. For the third factor, we conduct various experiments on the real-world datasets and observe that the downstream performance of contrastive SSL is highly correlated to the concentration of augmented data in Section 5. In summary, our contributions include: 1) proposing a novel (σ, δ)-measure to quantify data augmentation; 2) presenting a theoretical framework for contrastive SSL that highlights alignment, divergence, and concentration as key factors for generalization ability; 3) provably verifying that not only the InfoNCE loss but also the cross-correlation loss satisfy alignment and divergence; 4) showing a strong correlation between downstream performance and concentration of augmented data.

RELATED WORK
Algorithms of Contrastive SSL. Early works such as MoCo (He et al., 2020) and SimCLR (Chen et al., 2020a) use the InfoNCE loss to pull the positive samples close while enforcing them away from the negative samples in the embedding space. These methods require large batch sizes (Chen et al., 2020a), memory banks (He et al., 2020), or carefully designed negative sampling strategies (Hu et al., 2021). To obviate these, some recent works get rid of negative samples and prevent representation collapse by the cross-correlation loss (Zbontar et al., 2021; Bardes et al., 2021) or training strategies (Grill et al., 2020; Chen & He, 2021). In this paper, we mainly study the effectiveness of the InfoNCE loss and the cross-correlation loss, and do not enter the discussion of training strategies.
Theoretical Understandings of Contrastive SSL. Most theoretical analysis is based on the InfoNCE loss, and lacks an understanding of the recently proposed cross-correlation loss (Zbontar et al., 2021). Early works understand the InfoNCE loss based on maximizing the mutual information (MI) between positive samples (Oord et al., 2018; Bachman et al., 2019; Hjelm et al., 2018; Tian et al., 2019; 2020; Tschannen et al., 2019). However, a rigorous relationship between mutual information and downstream performance has not been established. Besides, Arora et al. (2019) directly analyze the generalization of the InfoNCE loss based on the assumption that positive samples are drawn from the same latent classes, which is different from practical algorithms. Ash et al. (2021) study the role of negative samples and show an interesting collision-coverage trade-off theoretically. HaoChen et al. (2021) study contrastive SSL from a matrix decomposition perspective, but it is only applicable to their spectral contrastive loss. The behavior of InfoNCE is also studied from the perspectives of alignment and uniformity (Wang & Isola, 2020), the sparse coding model (Wen & Li, 2021), the expansion assumption (Wei et al., 2020), stochastic neighbor embedding (Hu et al., 2022), and augmentation robustness (Zhao et al., 2023).

2 PROBLEM FORMULATION
Given a number of unlabeled training data points i.i.d. drawn from an unknown distribution, each sample belongs to one of K latent classes C_1, C_2, . . . , C_K. Based on an augmentation set A, the set of potential positive samples generated from a data point x is denoted as A(x). We assume that x ∈ A(x) for any x, and that samples from different latent classes never transform into the same augmented sample, i.e., A(C_k) ∩ A(C_ℓ) = ∅ for any k ≠ ℓ. The notation ∥·∥ in this paper stands for the ℓ2-norm or Frobenius norm for vectors and matrices, respectively. Contrastive SSL aims to learn an encoder f such that positive samples are closely aligned. In order to make the samples from different latent classes far away from each other, some methods such as (Chen et al., 2020a; He et al., 2020) use the InfoNCE loss (for simplicity in our analysis, we consider the InfoNCE loss with only one negative sample) to push away negative pairs, formulated as

L_InfoNCE = −E_{x,x′} E_{x1,x2∈A(x), x⁻∈A(x′)} log [ e^{f(x1)ᵀf(x2)} / ( e^{f(x1)ᵀf(x2)} + e^{f(x1)ᵀf(x⁻)} ) ],

where x, x′ are two random data points. Some other methods such as Barlow Twins (Zbontar et al., 2021) use the cross-correlation loss to decorrelate the components of the representation, formulated as

L_Cross-Corr = ∑_{i=1}^{d} (1 − F_{ii})² + λ ∑_{i≠j} F_{ij}²,

where F_{ij} = E_x E_{x1,x2∈A(x)}[f_i(x1) f_j(x2)], d is the output dimension of encoder f, and encoder f is normalized so that E_x E_{x′∈A(x)}[f_i(x′)²] = 1 for each dimension i. The standard evaluation of contrastive SSL is to train a linear classifier over the learned representation using labeled data and regard its performance as the indicator. To simplify the analysis, we instead consider a non-parametric classifier, the nearest neighbor (NN) classifier: G_f(x) = argmin_{k∈[K]} ∥f(x) − µ_k∥, where µ_k := E_{x∈C_k} E_{x′∈A(x)}[f(x′)] is the center of class C_k.
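For concreteness, here is a minimal PyTorch sketch of the two losses just defined, computed on a batch of paired embeddings. The single-negative InfoNCE mirrors the simplified form analyzed in this paper; the per-dimension standardization in the cross-correlation version and the default λ are illustrative choices rather than prescriptions from the text.

```python
import torch
import torch.nn.functional as F

def info_nce_one_negative(z1, z2, z_neg):
    """InfoNCE with a single negative per anchor, as in the paper's simplified
    analysis. z1, z2: (B, d) embeddings of two views of the same points;
    z_neg: (B, d) embeddings of views of independent points. All are assumed
    L2-normalized (||f|| = 1)."""
    pos = (z1 * z2).sum(dim=1)       # f(x1)^T f(x2)
    neg = (z1 * z_neg).sum(dim=1)    # f(x1)^T f(x^-)
    logits = torch.stack([pos, neg], dim=1)
    # -log( e^{pos} / (e^{pos} + e^{neg}) )
    return F.cross_entropy(logits, torch.zeros(len(z1), dtype=torch.long))

def cross_correlation_loss(z1, z2, lam=5e-3):
    """Barlow Twins-style loss. z1, z2: (B, d) embeddings of two views,
    standardized per dimension so that E[f_i] = 0 and E[f_i^2] = 1."""
    B, d = z1.shape
    z1 = (z1 - z1.mean(0)) / z1.std(0)
    z2 = (z2 - z2.mean(0)) / z2.std(0)
    C = (z1.T @ z2) / B              # empirical cross-correlation F_ij
    on_diag = ((1 - torch.diagonal(C)) ** 2).sum()
    off_diag = (C ** 2).sum() - (torch.diagonal(C) ** 2).sum()
    return on_diag + lam * off_diag
```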
In fact, the NN classifier is a special case of linear classifier, since it can be reformulated as G_f(x) = argmax_{k∈[K]} (W f(x) + b)_k, where the k-th row of W is µ_k and b_k = −(1/2)∥µ_k∥² (see Appendix E). Therefore, the directly learned linear classifier used in practice should perform better than the NN classifier. In this paper, we use the classification error rate to quantify the performance of G_f, formulated as Err(G_f) = ∑_{k=1}^{K} P[G_f(x) ≠ k, ∀x ∈ C_k]. Our goal is to study why contrastive SSL is able to achieve a small Err(G_f).

3 GENERALIZATION GUARANTEE OF CONTRASTIVE SSL
Based on the NN classifier, if the samples are well clustered by latent classes in the embedding space, the error rate Err(G_f) should be small. Thus, one expects a small intra-class distance E_{x1,x2∈C_k} ∥f(x1) − f(x2)∥² for an encoder f learned by contrastive learning. However, contrastive algorithms can only control the alignment of positive samples E_{x1,x2∈A(x)} ∥f(x1) − f(x2)∥². To bridge the gap between them, we need to investigate the role of data augmentation. Motivated by Figure 2 introduced in Section 1, for a given augmentation set A, we define the augmented distance between two samples as the minimum distance between their augmented views:

d_A(x1, x2) = min_{x1′∈A(x1), x2′∈A(x2)} ∥x1′ − x2′∥.  (1)

For the dog images in Figure 2 as an example, even though their pixel-level differences are significant, their semantic meanings are similar. Meanwhile, they also have a small augmented distance. Thus, the proposed augmented distance can partially capture the semantic distance. Based on the augmented distance, we now introduce the (σ, δ)-augmentation to measure the concentration of augmented data.

Definition 1 ((σ, δ)-Augmentation). The augmentation set A is called a (σ, δ)-augmentation if, for each class C_k, there exists a subset C_k⁰ ⊆ C_k (called a main part of C_k) such that both P[x ∈ C_k⁰] ≥ σ P[x ∈ C_k], where σ ∈ (0, 1], and sup_{x1,x2∈C_k⁰} d_A(x1, x2) ≤ δ hold.

In other words, the main-part samples are located in a ball with diameter δ (w.r.t. the augmented distance) and their proportion is larger than σ. Larger σ and smaller δ indicate a sharper concentration of augmented data. For any A′ ⊇ A with richer augmentations, one can verify that d_{A′}(x1, x2) ≤ d_A(x1, x2) for any x1, x2. Therefore, richer data augmentations lead to sharper concentration as δ gets smaller. With Definition 1, our analysis will focus on the samples in the main parts with good alignment, i.e., (C1⁰ ∪ ··· ∪ CK⁰) ∩ S_ε, where S_ε := {x ∈ ∪_{k=1}^{K} C_k : ∀x1, x2 ∈ A(x), ∥f(x1) − f(x2)∥ ≤ ε} is the set of samples with ε-close representations among augmented data. Furthermore, we let R_ε := P[S̄_ε] denote the measure of its complement, which is provably small with good alignment (see Theorem 2).

Lemma 3.1. For a (σ, δ)-augmentation with main part C_k⁰ of each class C_k, if all samples belonging to (C1⁰ ∪ ··· ∪ CK⁰) ∩ S_ε can be correctly classified by a classifier G, then its classification error rate Err(G) is upper bounded by (1 − σ) + R_ε.

The proof is deferred to the appendix.
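Since the augmented distance d_A involves a minimum over all augmented views, it can only be approximated in practice. The sketch below is a Monte-Carlo approximation assuming `augment` returns one random array-valued view; the pair-based estimate of (1 − σ) is a simplification in the spirit of Definition 1, not the exact computation used in Section 5 (which follows the appendix).

```python
import numpy as np

def augmented_distance(x1, x2, augment, n_views=64):
    """Monte-Carlo approximation of d_A(x1, x2): the minimum L2 distance
    between sampled augmented views of x1 and x2. The true d_A is a min over
    all views, so this is an upper estimate that tightens as n_views grows."""
    v1 = np.stack([np.asarray(augment(x1)).ravel() for _ in range(n_views)])
    v2 = np.stack([np.asarray(augment(x2)).ravel() for _ in range(n_views)])
    d = np.linalg.norm(v1[:, None, :] - v2[None, :, :], axis=-1)
    return d.min()

def concentration_violation(samples_by_class, augment, delta, n_views=16):
    """Rough proxy for (1 - sigma) at a given diameter delta: the fraction of
    same-class pairs whose approximate augmented distance exceeds delta.
    Finding the exact maximal main part C_k^0 is combinatorial; this pairwise
    check only mirrors the spirit of Definition 1."""
    bad, total = 0, 0
    for xs in samples_by_class:
        for i in range(len(xs)):
            for j in range(i + 1, len(xs)):
                total += 1
                if augmented_distance(xs[i], xs[j], augment, n_views) > delta:
                    bad += 1
    return bad / max(total, 1)
```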
The above lemma presents a simple sufficient condition to guarantee the generalization ability on downstream tasks. Based on it, we need to further explore when samples in (C1⁰ ∪ ··· ∪ CK⁰) ∩ S_ε can all be correctly classified by the NN classifier. We assume that the encoder f is normalized by ∥f∥ = r and is L-Lipschitz continuous, i.e., for any x1, x2, ∥f(x1) − f(x2)∥ ≤ L∥x1 − x2∥. We let p_k := P[x ∈ C_k] for any k ∈ [K].

Lemma 3.2. Given a (σ, δ)-augmentation used in contrastive SSL, for any ℓ ∈ [K], if µ_ℓᵀµ_k < r²(1 − ρ_ℓ(σ, δ, ε) − √(2ρ_ℓ(σ, δ, ε)) − Δµ/2) holds for all k ≠ ℓ, then every sample x ∈ C_ℓ⁰ ∩ S_ε can be correctly classified by the NN classifier G_f, where ρ_ℓ(σ, δ, ε) = 2(1 − σ) + R_ε/p_ℓ + σ(Lδ/r + 2ε/r) and Δµ = 1 − min_{k∈[K]} ∥µ_k∥²/r².

With Lemmas 3.1 and 3.2, we can directly obtain the generalization guarantee of contrastive SSL:

Theorem 1. Given a (σ, δ)-augmentation used in contrastive SSL, if

µ_ℓᵀµ_k < r²(1 − ρ_max(σ, δ, ε) − √(2ρ_max(σ, δ, ε)) − Δµ/2)  (2)

holds for any pair (ℓ, k) with ℓ ≠ k, then the downstream error rate of the NN classifier G_f satisfies

Err(G_f) ≤ (1 − σ) + R_ε,  (3)

where ρ_max(σ, δ, ε) = 2(1 − σ) + R_ε/min_ℓ p_ℓ + σ(Lδ/r + 2ε/r) and Δµ = 1 − min_{k∈[K]} ∥µ_k∥²/r².

The proof is deferred to the appendix. To better understand the above theorem, let us first consider a simple case in which any two samples from the same latent class share at least one augmented view (σ = 1, δ = 0), and the positive samples are perfectly aligned after contrastive learning (ε = 0, R_ε = 0). In this case, the samples from the same latent class are embedded to a single point on the hypersphere, and thus an arbitrarily small positive angle, i.e., ⟨µ_ℓ, µ_k⟩/(∥µ_ℓ∥·∥µ_k∥) < 1, is enough to distinguish them with the NN classifier. In fact, one can quickly verify that ρ_max(σ, δ, ε) = Δµ = 0 holds in the above case. According to Theorem 1, if µ_ℓᵀµ_k/r² < 1 − ρ_max(σ, δ, ε) − √(2ρ_max(σ, δ, ε)) − Δµ/2 = 1, then Err(G_f) = 0, i.e., the NN classifier can correctly recognize every sample when µ_ℓᵀµ_k/r² < 1. Thus, the condition suggested by Theorem 1 exactly matches the intuition. Theorem 1 implies three key factors in the success of contrastive SSL. The first one is alignment of positive samples, which is a common objective that contrastive algorithms aim to optimize. Better alignment enables smaller R_ε, which directly decreases the generalization error bound (3). The second factor is divergence of class centers, i.e., the distance between class centers should be large enough (small µ_ℓᵀµ_k). The divergence condition (2) is related to the alignment (R_ε) and data augmentation (σ, δ).
Better alignment and sharper concentration indicate smaller \u03c1max(\u03c3, \u03b4, \u03b5), and hence looser divergence condition. The third factor is concentration of augmented data. When \u03b4 is given, sharper concentration implies larger \u03c3, which directly affects the generalization error bound (3). For example, richer data augmentations lead to sharper concentration (see the paragraph below De\ufb01nition 1), and hence better generalization error bound. Only the \ufb01rst two factors can be optimized during the learning process, and we will provably show how it can be achieved via two concrete examples in Section 4. In contrast, the third factor is priorly decided by the pre-de\ufb01ned data augmentation and is independent of the learning process. We will empirically study how the concentration of augmented data affects the downstream performance in Section 5. In summary, Theorem 1 provides a framework for different algorithms to analyze their generalization abilities. Compared with the alignment and uniformity proposed by Wang & Isola (2020), both of the works have the same meaning of \u201calignment\u201d since it is the objective that contrastive algorithms aim to optimize, but our \u201cdivergence\u201d is fundamentally different from their \u201cuniformity\u201d. Uniformity requires \u201call data\u201d uniformly distributed on the embedding hypersphere, while our divergence characterizes the cosine distance between \u201cclass centers\u201d. We do not require the divergence to be as large as better, instead, the divergence condition can be loosened by better alignment and concentration properties. As an example, consider the case below Theorem 1. Since all the samples from the same latent class are embedded into a single point on the hypersphere, in that case, an arbitrarily small positive angle (arbitrarily small divergence) is enough to distinguish them. More importantly, alignment and uniformity are empirical predictors for downstream performance, while our alignment and divergence have explicit theoretical guarantees (Theorem 1) for the generalization of contrastive SSL. Moreover, Wang & Isola (2020) does not consider the crucial effect of data augmentation. In fact, with bad concentration (e.g., only using identity transformation as data augmentation), \u201cperfect\u201d alignment along with \u201cperfect\u201d uniformity still can not imply good downstream performance. 3.1 UPPER BOUND R\u03b5 VIA ALIGNMENT We now upper bound R\u03b5 via the alignment Lalign(f) := E x E x1,x2\u2208A(x) \u2225f(x1) \u2212f(x2)\u22252, (4) which is a common objective of contrastive losses. Recall that R\u03b5 can be rewritten as R\u03b5 = P \u0014 x \u2208\u222aK k=1Ck : sup x1,x2\u2208A(x) \u2225f(x1) \u2212f(x2)\u2225> \u03b5 \u0015 . 5 \fPublished as a conference paper at ICLR 2023 Note that there is a gap between \u201csup operator\u201d in R\u03b5 and \u201cE operator\u201d in Lalign(f), which cannot be simply derived by concentration inequalities. We separate the augmentation set A as discrete transformations {A\u03b3(\u00b7): \u03b3 \u2208[m]} and continuous transformations {A\u03b8(\u00b7): \u03b8 \u2208[0, 1]n}. For example, random cropping or \ufb02ipping can be categorized into the discrete transformation, while the others like random color distortion or Gaussian blur can be regarded as the continuous transformation parameterized by the augmentation strength \u03b8. 
Without loss of generality, we assume that for any given x, its augmented data are uniformly randomly sampled, i.e., P[x′ = A_γ(x)] = 1/(2m) and P[x′ ∈ {A_θ(x) : θ ∈ Θ}] = vol(Θ)/2 for any Θ ⊆ [0, 1]ⁿ, where vol(Θ) denotes the volume of Θ. For the continuous transformations, we further assume that the transformation is M-Lipschitz continuous w.r.t. θ, i.e., ∥A_{θ1}(x) − A_{θ2}(x)∥ ≤ M∥θ1 − θ2∥ for any x, θ1, θ2. With the above setting, we have the following theorem (proof deferred to the appendix).

Theorem 2. If encoder f is L-Lipschitz continuous, then

R_ε² ≤ η(ε)² · E_x E_{x1,x2∈A(x)} ∥f(x1) − f(x2)∥² = η(ε)² · L_align(f),

where η(ε) = inf_{h∈(0, ε/(2√n·LM))} 4max{1, m²h²ⁿ} / (h²ⁿ(ε − 2√n·LMh)).

The above theorem confirms that, with good alignment, R_ε is guaranteed to be small.

4 CONTRASTIVE LOSSES MEET ALIGNMENT AND DIVERGENCE
We now study two canonical contrastive losses, the InfoNCE loss and the cross-correlation loss, to see how they can achieve good alignment (small L_align(f)) and good divergence (small µ_kᵀµ_ℓ).

4.1 INFONCE LOSS
The population loss of InfoNCE (Chen et al., 2020a; He et al., 2020) is well known as

L_InfoNCE = −E_{x,x′} E_{x1,x2∈A(x), x⁻∈A(x′)} log [ e^{f(x1)ᵀf(x2)} / ( e^{f(x1)ᵀf(x2)} + e^{f(x1)ᵀf(x⁻)} ) ],

where encoder f is normalized by ∥f∥ = 1. It can be divided into two parts:

L_InfoNCE = E_{x,x′} E_{x1,x2∈A(x), x⁻∈A(x′)} [ −f(x1)ᵀf(x2) + log( e^{f(x1)ᵀf(x2)} + e^{f(x1)ᵀf(x⁻)} ) ]  (5)
= [ (1/2) E_x E_{x1,x2∈A(x)} ∥f(x1) − f(x2)∥² − 1 ] + E_{x,x′} E_{x1,x2∈A(x), x⁻∈A(x′)} [ log( e^{f(x1)ᵀf(x2)} + e^{f(x1)ᵀf(x⁻)} ) ],

where the first bracket is denoted L1^InfoNCE(f) and the second L2^InfoNCE(f). Disregarding constant factors, L1^InfoNCE(f) is exactly the alignment term in (4). Next, we take a close look at L2^InfoNCE(f) to see how it links to the divergence condition required by Theorem 1.

Theorem 3. Assume that encoder f with norm 1 is L-Lipschitz continuous. If the augmented data is (σ, δ)-augmented, then for any ε ≥ 0 and k ≠ ℓ, we have

µ_kᵀµ_ℓ ≤ log( exp{ (L2^InfoNCE(f) + τ(σ, δ, ε, R_ε)) / (p_k p_ℓ) } − exp(1 − ε) ),

where τ(σ, δ, ε, R_ε) is a non-negative term, decreasing with smaller ε, R_ε or sharper concentration of augmented data, and τ(σ, δ, ε, R_ε) = 0 when σ = 1, δ = 0, ε = 0, R_ε = 0.

The specific formulation of τ(σ, δ, ε, R_ε) and the proof are deferred to the appendix. We remark that the data augmentation (σ, δ), the parameter ε, and p_k, p_ℓ are pre-determined before the training procedure, and thus the upper bound of µ_kᵀµ_ℓ in Theorem 3 varies only with L2^InfoNCE(f) and R_ε, positively. Therefore, minimizing L_InfoNCE = L1^InfoNCE(f) + L2^InfoNCE(f) leads to both small L1^InfoNCE(f) and small L2^InfoNCE(f). Small L1^InfoNCE(f) indicates good alignment L_align(f), as well as small R_ε (Theorem 2).
Small L2^InfoNCE(f) along with small R_ε indicates good divergence (small µ_kᵀµ_ℓ) by Theorem 3. Hence, optimizing the InfoNCE loss can achieve both good alignment and good divergence. According to Theorem 1 and Theorem 2, the generalization ability of encoder f on the downstream task is implied, i.e., Err(G_f) ≤ (1 − σ) + η(ε)√(2 + 2L1^InfoNCE(f)), when the upper bound of µ_kᵀµ_ℓ in Theorem 3 is smaller than the threshold in Theorem 1. It is worth mentioning that the form of InfoNCE is critical to meeting the requirement of divergence, which we found when proving Theorem 3. For example, let us consider the contrastive loss (5) formulated in a linear form (also called the simple contrastive loss in some literature) instead of LogExp, such that

L′(f) = E_{x,x′} E_{x1,x2∈A(x), x⁻∈A(x′)} [ −f(x1)ᵀf(x2) + λ f(x1)ᵀf(x⁻) ] = L1^InfoNCE(f) + λ L2′(f),

where L2′(f) is the negative-pair term weighted by some λ > 0. Due to the independence between x and x′, we have L2′(f) = ∥E_x E_{x1∈A(x)}[f(x1)]∥². Therefore, minimizing L2′(f) only leads to a representation with zero mean. Unfortunately, the objective of zero mean with ∥f∥ = 1 cannot obviate the dimensional collapse (Hua et al., 2021) of the model. For example, the encoder f can map the input data from multiple classes into two points in opposite directions on the hypersphere. This justifies the observation in (Wang & Liu, 2021): the uniformity of the encoder on the embedded hypersphere becomes worse when the temperature of the loss increases, where the loss degenerates to L′(f) with infinite temperature.

4.2 CROSS-CORRELATION LOSS
The cross-correlation loss was first introduced by Barlow Twins (Zbontar et al., 2021). In contrast to the InfoNCE loss, it trains the model by decorrelating the components of the representation instead of directly optimizing the geometry of the embedding space, yet a clustered embedding space is still observed. To explore this, we study the cross-correlation loss in detail and show how it implicitly optimizes the alignment and divergence required by Theorem 1. The population cross-correlation loss can be formulated as

L_Cross-Corr = ∑_{i=1}^{d} ( 1 − E_x E_{x1,x2∈A(x)}[f_i(x1) f_i(x2)] )² + λ ∑_{i≠j} ( E_x E_{x1,x2∈A(x)}[f_i(x1) f_j(x2)] )²,

with the normalization conditions E_x E_{x1∈A(x)}[f_i(x1)] = 0 and E_x E_{x1∈A(x)}[f_i(x1)²] = 1 for each i ∈ [d], where d is the output dimension of encoder f. The positive coefficient λ balances the importance between the diagonal and non-diagonal elements of the cross-correlation matrix. When λ = 1, the above loss is exactly the difference between the cross-correlation matrix and the identity matrix. Similar to Section 4.1, we first divide the loss into two parts, by defining

L1^Cross(f) := ∑_{i=1}^{d} ( 1 − E_x E_{x1,x2∈A(x)}[f_i(x1) f_i(x2)] )²  and  L2^Cross(f) := ∥ E_x E_{x1,x2∈A(x)}[f(x1) f(x2)ᵀ] − I_d ∥².

In this way, the cross-correlation loss becomes L_Cross-Corr = (1 − λ) L1^Cross(f) + λ L2^Cross(f). Then, we connect L1^Cross(f) and L2^Cross(f) with the alignment and divergence, respectively.

Lemma 4.1. For a given encoder f, the alignment L_align(f) in (4) is upper bounded via L1^Cross(f):

L_align(f) = E_x E_{x1,x2∈A(x)} ∥f(x1) − f(x2)∥² ≤ 2√(d · L1^Cross(f)),

where d is the output dimension of encoder f.
The above lemma connects L1^Cross(f) with L_align(f), indicating that the diagonal elements of the cross-correlation matrix determine the alignment of positive samples. Next, we link L2^Cross(f) to the divergence µ_kᵀµ_ℓ. This is challenging because L2^Cross(f) is designed to reduce the redundancy between the encoder's output units, not to optimize the geometry of the embedding space.

Theorem 4. Assume that encoder f with norm √d is L-Lipschitz continuous. If the augmented data is (σ, δ)-augmented, then for any ε ≥ 0 and k ≠ ℓ, we have

µ_kᵀµ_ℓ ≤ √( (2/(p_k p_ℓ)) ( L2^Cross(f) + τ′(σ, δ, ε, R_ε) − (d − K)/2 ) ),

where τ′(σ, δ, ε, R_ε) is an upper bound of ∥E_x E_{x1,x2∈A(x)}[f(x1) f(x2)ᵀ] − ∑_{k=1}^{K} p_k µ_k µ_kᵀ∥².

The specific formulation of τ′(σ, δ, ε, R_ε) and the proof are deferred to the appendix. Here we remark that τ′(σ, δ, ε, R_ε) is a non-negative term, decreasing with smaller ε, R_ε or sharper concentration of augmented data, and τ′(σ, δ, ε, R_ε) = 0 when σ = 1, δ = 0, ε = 0, R_ε = 0. Since the data augmentation (σ, δ), the parameter ε, and p_k, p_ℓ are pre-determined before the training procedure, the upper bound of µ_kᵀµ_ℓ in Theorem 4 varies only with L2^Cross(f) and R_ε, positively. Therefore, minimizing L_Cross-Corr leads to small L1^Cross(f) as well as small L2^Cross(f). Small L1^Cross(f) indicates good alignment L_align(f) by Lemma 4.1 and small R_ε by Theorem 2. Small L2^Cross(f) along with small R_ε indicates good divergence (small µ_kᵀµ_ℓ) by Theorem 4. Hence, decorrelating the components of the representation can achieve both good alignment and good divergence. According to Theorem 1 and Theorem 2, the generalization ability of encoder f on the downstream task is implied, i.e., Err(G_f) ≤ (1 − σ) + √2 η(ε) d^{1/4} L1^Cross(f)^{1/4}, when the upper bound of µ_kᵀµ_ℓ in Theorem 4 is smaller than the threshold in Theorem 1. Beyond the above two widely used contrastive learning losses, we further analyze the very recently proposed t-InfoNCE loss (Hu et al., 2022), which is a t-SNE style loss inspired by stochastic neighbor embedding. We show in the appendix that it can also achieve good alignment and divergence.

5 EMPIRICAL STUDY OF CONCENTRATION OF AUGMENTED DATA
Theorem 1 reveals that a sharper concentration of augmented data w.r.t. the proposed augmented distance implies a better generalization error bound, regardless of the algorithm. In this section, we empirically study the relationship between the concentration level and the real downstream performance. Basic Setup. Our experiments are conducted on CIFAR-10 and CIFAR-100 (Krizhevsky, 2009). We consider 5 different kinds of transformations for performing data augmentations: (a) random cropping; (b) random Gaussian blur; (c) color dropping (i.e., randomly converting images to grayscale); (d) color distortion; (e) random horizontal flipping. A possible instantiation of these transformations is sketched below.
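The following torchvision pipeline is one plausible realization of transformations (a)-(e); the exact parameters are not specified at this point in the text, so the SimCLR-style strengths (with s playing the role of the color-distortion strength varied later in Table 2) are illustrative assumptions.

```python
from torchvision import transforms

def make_augmentation(s=1.0, size=32):
    """One possible instantiation of transformations (a)-(e).
    s scales the color-distortion strength; values are illustrative."""
    color_jitter = transforms.ColorJitter(0.8 * s, 0.8 * s, 0.8 * s, 0.2 * s)
    return transforms.Compose([
        transforms.RandomResizedCrop(size),              # (a) random cropping
        transforms.GaussianBlur(kernel_size=3),          # (b) random Gaussian blur
        transforms.RandomGrayscale(p=0.2),               # (c) color dropping
        transforms.RandomApply([color_jitter], p=0.8),   # (d) color distortion
        transforms.RandomHorizontalFlip(p=0.5),          # (e) horizontal flipping
        transforms.ToTensor(),
    ])
```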
We test different combinations of transformations via various SSL algorithms such as SimCLR (Chen et al., 2020a), Barlow Twins (Zbontar et al., 2021), MoCo (He et al., 2020), and SimSiam (Chen & He, 2021). We use ResNet-18 (He et al., 2016) as the encoder, and the other settings such as the projection head remain the same as the original settings of the algorithms. Each model is trained with a batch size of 512 for 800 epochs. To evaluate the quality of the encoder, we follow the KNN evaluation protocol (Wu et al., 2018).

Different Richness of Augmentations. We compose all 5 kinds of transformations together, and then successively drop one of the composed transformations from (e) to (b) to conduct 5 experiments for each dataset (Table 1). We observe that the downstream performance monotonically gets worse as the number of transformations decreases, under all four SSL algorithms, on both CIFAR-10 and CIFAR-100. Notice that richer augmentation implies sharper concentration (see the paragraph below Definition 1), and thus the concentration becomes less sharp from top to bottom for each dataset. Therefore, we observe that downstream performance becomes better with sharper concentration. We also observe that (c) color dropping and (d) color distortion have a great impact on the performance of all algorithms. According to our theoretical framework, these two transformations enable the augmented data to vary in a very wide range, which makes the augmented distance (1) decrease substantially. As an intuitive example, if the right dog image in Figure 2 is replaced by a Husky image, then with random cropping alone, one will get two dog heads with similar shapes but different colors, which still have a large augmented distance. Instead, if color distortion is further applied, one can get two dog heads similar in both shape and color. Therefore, these two dog images have similar augmented views, and thus their augmented distance (1) becomes very small. Notice that a small augmented distance (1) indicates sharp concentration (small δ in Definition 1). Therefore, we observe that a dramatic change in concentration leads to wildly fluctuating downstream performance.

Table 1: Downstream performance (accuracy, %) under different richness of augmentations. A check mark indicates that the transformation is included.

| Dataset | (a) | (b) | (c) | (d) | (e) | SimCLR | Barlow Twins | MoCo | SimSiam |
|---|---|---|---|---|---|---|---|---|---|
| CIFAR-10 | ✓ | ✓ | ✓ | ✓ | ✓ | 89.76 ± 0.12 | 86.91 ± 0.09 | 90.12 ± 0.12 | 90.59 ± 0.11 |
| CIFAR-10 | ✓ | ✓ | ✓ | ✓ |  | 88.48 ± 0.22 | 85.38 ± 0.37 | 89.69 ± 0.11 | 89.34 ± 0.09 |
| CIFAR-10 | ✓ | ✓ | ✓ |  |  | 83.50 ± 0.14 | 82.00 ± 0.59 | 86.78 ± 0.07 | 85.38 ± 0.09 |
| CIFAR-10 | ✓ | ✓ |  |  |  | 63.23 ± 0.05 | 67.83 ± 0.94 | 75.12 ± 0.28 | 63.27 ± 0.30 |
| CIFAR-10 | ✓ |  |  |  |  | 62.74 ± 0.18 | 67.77 ± 0.69 | 74.94 ± 0.22 | 61.47 ± 0.74 |
| CIFAR-100 | ✓ | ✓ | ✓ | ✓ | ✓ | 57.74 ± 0.12 | 57.99 ± 0.29 | 64.19 ± 0.14 | 63.48 ± 0.16 |
| CIFAR-100 | ✓ | ✓ | ✓ | ✓ |  | 55.43 ± 0.10 | 55.22 ± 0.25 | 62.50 ± 0.28 | 60.31 ± 0.41 |
| CIFAR-100 | ✓ | ✓ | ✓ |  |  | 45.10 ± 0.25 | 50.40 ± 0.64 | 57.04 ± 0.21 | 51.42 ± 0.14 |
| CIFAR-100 | ✓ | ✓ |  |  |  | 28.01 ± 0.18 | 34.11 ± 0.59 | 40.18 ± 0.04 | 26.26 ± 0.30 |
| CIFAR-100 | ✓ |  |  |  |  | 27.95 ± 0.09 | 34.05 ± 1.13 | 39.63 ± 0.31 | 25.90 ± 0.83 |

Table 2: Downstream performance (accuracy, %) under different strength of augmentations.
| Dataset | Color Distortion Strength | SimCLR | Barlow Twins | MoCo | SimSiam |
|---|---|---|---|---|---|
| CIFAR-10 | 1 | 82.75 ± 0.24 | 82.58 ± 0.25 | 86.68 ± 0.05 | 82.50 ± 1.05 |
| CIFAR-10 | 1/2 | 78.76 ± 0.18 | 81.88 ± 0.25 | 84.30 ± 0.14 | 81.80 ± 0.15 |
| CIFAR-10 | 1/4 | 76.37 ± 0.11 | 79.64 ± 0.34 | 82.76 ± 0.09 | 78.80 ± 0.17 |
| CIFAR-10 | 1/8 | 74.23 ± 0.16 | 77.96 ± 0.16 | 81.20 ± 0.12 | 76.09 ± 0.50 |
| CIFAR-100 | 1 | 46.67 ± 0.42 | 50.39 ± 1.09 | 58.50 ± 0.51 | 49.94 ± 2.01 |
| CIFAR-100 | 1/2 | 40.21 ± 0.05 | 48.76 ± 0.25 | 55.08 ± 0.09 | 46.27 ± 0.46 |
| CIFAR-100 | 1/4 | 36.67 ± 0.08 | 46.22 ± 0.71 | 52.09 ± 0.18 | 42.02 ± 0.34 |
| CIFAR-100 | 1/8 | 34.75 ± 0.20 | 44.72 ± 0.26 | 49.43 ± 0.16 | 36.26 ± 0.34 |

Different Strength of Augmentations. We fix (a) random cropping and (d) color distortion as the data augmentation, and vary the strength of (d) in {1, 1/2, 1/4, 1/8} to construct 4 groups of augmentations with different strength levels (Table 2). We observe that the downstream performance monotonically decreases with weaker color distortion, under all four SSL algorithms, on both CIFAR-10 and CIFAR-100. Recall that a stronger color distortion makes the augmented data vary in a wider range, leading to a smaller augmented distance (1) and thus sharper concentration. Therefore, we observe again that downstream performance becomes better with sharper concentration.

Different Composed Pairs of Transformations. To study the relationship between the concentration level and the corresponding downstream performance in a more fine-grained way, we compose transformations (a)-(e) in pairs to construct a total of C(5,2) = 10 augmentations. In contrast to the previous two groups of experiments, these composed augmentations do not have an apparent order of concentration levels. According to Definition 1, for a given δ, a smaller (1 − σ) corresponds to a sharper concentration. Thus, we mathematically compute (1 − σ) (see appendix for details), and observe the correlation between the classification error rate Err(G_f) and (1 − σ) under different δ on CIFAR-10, based on the SimCLR model trained with 200 epochs.

Figure 3: The correlation between observed Err(G_f) and the computed value of (1 − σ).

Interestingly, downstream performance is surprisingly highly correlated with the concentration level (Figure 3). Specifically, if we fix one of the composed transformations as (a), we find that both Err(G_f) and (1 − σ) have the same order, namely (a, d) < (a, c) < (a, e) ≈ (a, b), under two values of δ. Furthermore, among all 10 composed augmentations, augmentation (a, d) has the smallest value of (1 − σ), while the corresponding performance is also the best. In addition, we observe that the curve shape of (1 − σ) is not sensitive to the choice of δ. These observations suggest that sharper concentration is most likely to yield better downstream performance. This also provides an explanation for Figure 5 in the SimCLR paper (Chen et al., 2020a) of why the composition of "crop & color" performs the best.

ACKNOWLEDGMENT
We would like to express our sincere gratitude to the reviewers of ICLR 2023 for their insightful and constructive feedback. Their valuable comments have greatly contributed to improving the quality of our work."
+ }, + { + "url": "http://arxiv.org/abs/1805.01685v1", + "title": "Combinatorial Pure Exploration with Continuous and Separable Reward Functions and Its Applications (Extended Version)", + "abstract": "We study the Combinatorial Pure Exploration problem with Continuous and\nSeparable reward functions (CPE-CS) in the stochastic multi-armed bandit\nsetting. In a CPE-CS instance, we are given several stochastic arms with\nunknown distributions, as well as a collection of possible decisions. Each\ndecision has a reward according to the distributions of arms. The goal is to\nidentify the decision with the maximum reward, using as few arm samples as\npossible. The problem generalizes the combinatorial pure exploration problem\nwith linear rewards, which has attracted significant attention in recent years.\nIn this paper, we propose an adaptive learning algorithm for the CPE-CS\nproblem, and analyze its sample complexity. In particular, we introduce a new\nhardness measure called the consistent optimality hardness, and give both the\nupper and lower bounds of sample complexity. Moreover, we give examples to\ndemonstrate that our solution has the capacity to deal with non-linear reward\nfunctions.", + "authors": "Weiran Huang, Jungseul Ok, Liang Li, Wei Chen", + "published": "2018-05-04", + "updated": "2018-05-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction The stochastic multi-armed bandit model is a predominant model for characterizing the trade-off between exploration and exploitation in a variety of application \ufb01elds with stochastic environments. In this model, we are given a set of stochastic arms associated with unknown distributions. Upon each play of an arm, the player can get a reward sampled from the corresponding distribution. The most well studied objective is to maximize the cumulative reward, or minimize the cumulative regret, e.g., [3, 4, 8, 32]. Another popular objective is to identify the optimal arm with high probability by adaptively sampling arms based on the feedback collected. This is called the pure exploration version of the multi-armed bandit problem [2, 9, 23]. \u2020Corresponding authors. \u2217Due to the space constraints, supplementary materials and complete proofs are moved into the appendix. This work was supported in part by the National Basic Research Program of China Grant 2011CBA00300, 2011CBA00301, the National Natural Science Foundation of China Grant 61033001, 61361136003, 61433014. Instead of identifying the single optimal arm, there are a class of extended problems identifying the optimal combinatorial decision, e.g., top-k arm identi\ufb01cation [10, 27\u201329, 38], multi-bandit best arm identi\ufb01cation [22], and their extension, Combinatorial Pure Exploration with Linear reward functions (CPE-L) [13, 14], etc. In CPE-L [13], the rewards are linear functions on the means of underlying arms, and the decision class is subsets of arms satisfying certain combinatorial constraints. In this paper, we further generalize CPE-L problems to a large class of Combinatorial Pure Exploration with Continuous and Separable reward functions (CPE-CS) (see Section 2 for the technical de\ufb01nition). We propose the Consistently Optimal Con\ufb01dence Interval (COCI) algorithm to solve the CPE-CS problem. 
To analyze its sample complexity, we de\ufb01ne a new arm-level measure called consistent optimality radius \u039bi of arm i and a new hardness measure called consistent optimality hardness H\u039b = Pm i=1 1/\u039b2 i , where m is the number of arms. We prove that with probability at least 1 \u2212\u03b4, COCI \ufb01nds the optimal solution in O(H\u039b log(H\u039b\u03b4\u22121)) rounds. We also show that CPE-CS problems have a lower bound \u2126(H\u039b + H\u039bm\u22121 log \u03b4\u22121) in expectation, indicating that the hardness H\u039b is necessary. We demonstrate the usefulness of CPE-CS by two applications. The \ufb01rst one is water resource planning [7]. The goal is to remove waste at water sources of an area. One can \ufb01rst do some puri\ufb01cation tests at different sources to estimate the water quality responses, and then determines the \ufb01nal allocation of puri\ufb01cation powers among different sources. One need to balance the trade-off between the puri\ufb01cation power and the cost, and usually the objective function is non-linear. This application can be generalized to other urban planning scenarios such as air pollution control, crime control, etc. The second application is partitioned opinion sampling [5, 6, 26]. The opinion polling is done by partitioning people into groups and sampling each group separately with different sample budget to improve the sample quality. One can \ufb01rst do some tests in each group to estimate its opinion variance, and then determines the sample size for each group under the total sample budget for the formal sampling process. In this case, the objective function is also non-linear. Furthermore, we show that the COCI algorithm also solves the CPE-L problem with the same sample complexity as the CLUCB algorithm proposed by Chen et al. [13]. 1 \fIn summary, our contributions include: (a) studying the combinatorial pure exploration problem with continuous and separable functions and proposing the COCI algorithm as its solution, (b) analyzing the sample complexity of COCI and providing both its lower and upper bounds with a novel hardness measure, and (c) applying the CPE-CS framework to water resources planning and partitioned opinion sampling with non-linear reward functions to demonstrate the usefulness of the CPE-CS framework and the COCI algorithm. Related Work. Pure exploration bandit studies adaptive learning methods to identify the optimal solution. Best arm identi\ufb01cation [2, 9, 23], top-k arm identi\ufb01cation [10, 27\u2013 29, 38], the multi-bandit best arm identi\ufb01cation [22] have been studied in the literature. Chen et al. [13, 14] generalize these studies to Combinatorial Pure Exploration with Linear reward functions (CPE-L). Soare et al. [36] also study the linear reward functions, but the player is required to select a decision to play instead of a single arm to sample in each round. A very recent paper [17] studies the CPE problems beyond linear reward functions, but their model assumes arms with Gaussian distributions and only works with the mean estimator, while our CPE-CS only requires bounded distributions and also works for variance estimators. Moreover, for ef\ufb01cient implementations, they need a pseudo-polynomial algorithm for the exact query besides the maximization oracle, but our solution only needs a maximization oracle. A related online learning problem is multi-armed bandit (MAB), e.g., [3, 4, 8, 32]. 
The goal of MAB is to maximize cumulative rewards over multiple rounds, and the key is to balance exploration and exploitation during the learning process. In contrast, in pure exploration, the key is the adaptive exploration in the learning process to quickly \ufb01nd the optimal solution, and thus it is fundamentally different from MAB [9]. Combinatorial MAB is a popular topic in recent years [12, 15, 16, 20, 24, 25, 30, 31], but their goals and techniques are very different from ours. 2 Problem De\ufb01nition An instance of combinatorial pure exploration bandit problems consists of (a) a set of m arms [m] = {1, . . ., m}, each arm i being associated with an unknown distribution Di with range [0, 1] and a key unknown parameter \u03b8\u2217 i \u2208[0, 1] of Di, (b) a \ufb01nite set of decisions Y \u2286Rm, with each decision y = (y1, . . . , ym) as a vector, and (c) a real-valued (expected) reward function r(\u03b8; y) with vector \u03b8 taken from the parameter space [0, 1]m and y \u2208Y. In each round t = 1, 2, . . ., a player selects one arm i \u2208[m] to play, and observes a sample independently drawn from Di as the feedback. The player needs to decide based on the observed feedback so far if she wants to continue to play arms. If so, she needs to decide which arm to play next; if not, she needs to output a decision yo \u2208Y such that with high probability yo is the optimal decision maximizing the reward r(\u03b8\u2217; yo), where \u03b8\u2217= (\u03b8\u2217 1, . . . , \u03b8\u2217 m) is the vector of the true underlying parameters of the unknown distributions D = (D1, . . . , Dm). De\ufb01nition 1. Given a combinatorial pure exploration instance ([m], Y, r(\u00b7; \u00b7), D, \u03b8\u2217) and a con\ufb01dence error bound \u03b4, the combinatorial pure exploration (CPE) problem requires the design of an algorithm with the following components: (a) a stopping condition, which decides whether the algorithm should stop in the current round, (b) an arm selection component, which selects the arm to play in the current round when the stopping condition is false, and (c) an output component, which outputs the decision yo when the stopping condition is true. The algorithm could only use ([m], Y, r(\u00b7; \u00b7)) and the feedback from previous rounds as inputs, and should guarantee that with probability at least 1 \u2212\u03b4 the output yo is an optimal decision, i.e., yo \u2208arg maxy\u2208Y r(\u03b8\u2217; y). A standard assumption for CPE problems is that the optimal decision under the true parameter vector \u03b8\u2217is unique, i.e., y\u2217= arg maxy\u2208Y r(\u03b8\u2217; y). The performance of a CPE algorithm is measured by its sample complexity, which is the number of rounds taken when the algorithm guarantees its output to be the optimal one with probability at least 1 \u2212\u03b4. We say that a reward function r(\u03b8; y) is continuous if r(\u03b8; y) is continuous in \u03b8 for every y \u2208Y, and (additively) separable if there exist functions r1, . . . , rm such that r(\u03b8; y) = Pm i=1 ri(\u03b8i, yi). We use CPE-CS to denote the class of CPE problems with Continuous and Separable reward functions and each parameter \u03b8\u2217 i of arm i can either be mean EX\u223cDi[X] or variance VarX\u223cDi[X].1 We use ESTi(Xi,1, Xi,2, . . . , Xi,s) to denote the unbiased estimator for parameter \u03b8\u2217 i from s i.i.d. observations Xi,1, Xi,2, . . . , Xi,s of the i-th arm. In particular, for the mean estimator, ESTi(Xi,1, Xi,2, . . . 
, X_{i,s}) = (1/s) ∑_{j=1}^{s} X_{i,j}, and for the variance estimator, EST_i(X_{i,1}, X_{i,2}, . . . , X_{i,s}) = (1/(s−1)) ( ∑_{j=1}^{s} X_{i,j}² − (1/s)(∑_{j=1}^{s} X_{i,j})² ). Notice that the variance estimator needs at least two samples. We also define φ: [0, 1]^m → Y to be a deterministic tie-breaking maximization oracle such that for any θ ∈ [0, 1]^m, φ(θ) = (φ_1(θ), . . . , φ_m(θ)) ∈ argmax_{y∈Y} r(θ; y), and it always outputs the same optimal solution, called the leading optimal solution, under the same parameter θ. CPE-CS encompasses the important class of CPE problems with Linear reward functions (CPE-L). In CPE-L, parameter θ*_i is the mean of arm i for each i ∈ [m]. Each decision is a subset of [m], which can be represented as an m-dimensional binary vector. Thus, the decision space Y is a subset of {0, 1}^m, and each vector y = (y_1, . . . , y_m) ∈ Y represents a subset of arms S_y = {i ∈ [m] : y_i = 1}. Moreover, the reward function r(θ; y) = ∑_{i=1}^{m} θ_i · y_i is continuous and separable.

3 Solving CPE-CS
In this section, we propose the Consistently Optimal Confidence Interval (COCI) algorithm for CPE-CS and analyze its sample complexity. En route to our sample complexity bound, we introduce a new concept of arm-level consistent optimality radius Λ_i for each arm i, which leads to a new hardness measure H_Λ. We first introduce the components and notations used in the algorithm. The algorithm we propose for CPE-CS (Algorithm 1) is based on the confidence intervals of the parameter estimates. (Other parameters θ*_i of D_i are also acceptable if they admit an unbiased estimator from the samples of D_i; only a minor change is needed in the formula of the confidence radius in COCI.)

Algorithm 1: COCI: Consistently Optimal Confidence Interval Algorithm for CPE-CS
Input: confidence error bound δ ∈ (0, 1), maximization oracle φ.
Output: y^o = (y_1, y_2, . . . , y_m) ∈ Y.
 1: t ← τm   // τ = 1 for the mean estimator and τ = 2 for the variance estimator
 2: for i = 1, 2, . . . , m do
 3:     observe the i-th arm τ times: X_{i,1}, . . . , X_{i,τ}
 4:     T_{i,t} ← τ
 5:     estimate θ̂_{i,t} ← EST_i(X_{i,1}, . . . , X_{i,T_{i,t}})
 6:     rad_{i,t} ← sqrt( ln(4t³/(τδ)) / (2T_{i,t}) )   // confidence radius
 7: Θ̂_t ← {θ ∈ [0, 1]^m : |θ_i − θ̂_{i,t}| ≤ rad_{i,t}, ∀i ∈ [m]}
 8: for t = τm + 1, τm + 2, τm + 3, . . . do
 9:     C_t ← ∅
10:     for i = 1, 2, . . . , m do
11:         if max_{θ∈Θ̂_{t−1}} φ_i(θ) ≠ min_{θ∈Θ̂_{t−1}} φ_i(θ) then
12:             C_t ← C_t ∪ {i}
13:     if C_t = ∅ then
14:         return y^o = φ(θ) for an arbitrary θ ∈ Θ̂_{t−1}
15:     j ← argmax_{i∈C_t} rad_{i,t−1}
16:     T_{j,t} ← T_{j,t−1} + 1;  T_{i,t} ← T_{i,t−1} for all i ≠ j
17:     play the j-th arm and observe the outcome X_{j,T_{j,t}}
18:     update θ̂_{j,t} ← EST_j(X_{j,1}, . . . , X_{j,T_{j,t}})
19:     update θ̂_{i,t} ← θ̂_{i,t−1} for all i ≠ j
20:     update rad_{i,t} ← sqrt( ln(4t³/(τδ)) / (2T_{i,t}) ) for all i ∈ [m]
21:     Θ̂_t ← {θ ∈ [0, 1]^m : |θ_i − θ̂_{i,t}| ≤ rad_{i,t}, ∀i ∈ [m]}

The algorithm maintains the confidence interval space Θ̂_t in every round t to guarantee that the true parameter θ* lies in Θ̂_t for all t > 0 with probability at least 1 − δ. After the initialization (lines 1-7), in each round t, the algorithm first computes the candidate set C_t ⊆ [m] (lines 9-12). According to the key condition in line 11, C_t contains the i-th arm if max_{θ∈Θ̂_{t−1}} φ_i(θ) ≠ min_{θ∈Θ̂_{t−1}} φ_i(θ) (this is a logical condition, and its actual implementation will be discussed in Section 3.1). The stopping condition is C_t = ∅ (line 13), which means that within the confidence interval space, all leading optimal solutions are the same. In this case, the algorithm returns the leading optimal solution under any θ ∈ Θ̂_{t−1} as the final output (line 14). Notice that if the true parameter θ* is in Θ̂_{t−1}, then the output is the true optimal solution y^o = φ(θ*) = y*. If C_t ≠ ∅, then the algorithm picks any arm j with the largest confidence radius (line 15), plays this arm, observes its feedback, and updates its estimate θ̂_{j,t} and confidence radius rad_{j,t} accordingly (lines 16-21). Intuitively, arm j is the most uncertain arm causing inconsistency, thus the algorithm picks it to play first. Since the key stopping condition is that the leading optimal solutions for all θ ∈ Θ̂_{t−1} are consistently optimal, we call our algorithm the Consistently Optimal Confidence Interval (COCI) algorithm. Before analyzing the sample complexity of the COCI algorithm, we first introduce the (arm-level) consistent optimality radius for every arm i, which is formally defined below.

Definition 2. For all i ∈ [m], the consistent optimality radius Λ_i for arm i is defined as Λ_i = inf_{θ : φ_i(θ) ≠ φ_i(θ*)} ∥θ − θ*∥_∞.

Intuitively, Λ_i measures how far θ can be away from θ* (in infinity norm) while the leading optimal solution under θ is still consistent with the true optimal one in the i-th dimension, as precisely stated below.

Proposition 1. ∀i ∈ [m], if |θ_j − θ*_j| < Λ_i holds for all j ∈ [m], then φ_i(θ) = φ_i(θ*).

The following lemma shows that the consistent optimality radii are all positive, provided that the reward function is continuous and the true optimal decision y* is unique.

Lemma 1. If the reward function r(θ; y) is continuous in θ for every y ∈ Y, and the optimal decision y* under the true parameter vector θ* is unique, then Λ_i is positive for every i ∈ [m].

Given that the consistent optimality radii are all positive, we can introduce the key hardness measure used in the sample complexity analysis. We define the consistent optimality hardness as H_Λ = ∑_{i=1}^{m} 1/Λ_i².
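To make the pseudocode concrete, the following compact Python sketch implements COCI for the mean-estimator case (τ = 1). It realizes the logical condition in line 11 via the two extreme points of the confidence box, which is valid under the bi-monotonicity shortcut discussed later in Section 3.1; the names pull and oracle and the horizon cap are illustrative stand-ins, not part of the paper's specification.

```python
import numpy as np

def coci(m, pull, oracle, delta, horizon=100000):
    """Sketch of COCI for the mean estimator (tau = 1). pull(i) returns one
    sample of arm i in [0, 1]; oracle(theta) is the deterministic maximization
    oracle phi. Assumes phi is bi-monotone so that line 11's condition can be
    checked with two oracle calls per arm (Theorem 3 of Section 3.1)."""
    T = np.ones(m)
    est = np.array([pull(i) for i in range(m)], dtype=float)
    t = m
    while t < horizon:
        rad = np.sqrt(np.log(4 * t**3 / delta) / (2 * T))
        lo = np.clip(est - rad, 0.0, 1.0)   # confidence box, intersected
        hi = np.clip(est + rad, 0.0, 1.0)   # with [0, 1]^m
        cand = []
        for i in range(m):
            # two extreme corners of the box for coordinate i
            a = np.concatenate([lo[:i], [hi[i]], lo[i + 1:]])
            b = np.concatenate([hi[:i], [lo[i]], hi[i + 1:]])
            if oracle(a)[i] != oracle(b)[i]:
                cand.append(i)
        if not cand:                        # all leading optima agree: stop
            return oracle(est)
        j = max(cand, key=lambda i: rad[i]) # most uncertain inconsistent arm
        x = pull(j)
        est[j] = (est[j] * T[j] + x) / (T[j] + 1)  # running mean update
        T[j] += 1
        t += 1
    return oracle(est)
```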
The following theorem shows our primary sample complexity result for the COCI algorithm.

Theorem 1. With probability at least 1 − δ, the COCI algorithm (Algorithm 1) returns the unique true optimal solution y^o = y*, and the number of rounds (or samples) T satisfies

T ≤ 2m + 12 H_Λ ln(24 H_Λ) + 4 H_Λ ln(4/(τδ)) = O( H_Λ log(H_Λ/δ) ).  (1)

Theorem 1 shows that the sample complexity is positively related to the consistent optimality hardness, i.e., inversely proportional to the squares of the consistent optimality radii Λ_i². Intuitively, when Λ_i is small, we need more samples to make the optimal solutions in the confidence interval consistent in the i-th dimension, and hence the sample complexity is higher. We remark that if we do not compute the candidate set C_t and directly pick the arm with the largest radius among all arms in line 15, every arm will be selected in a round-robin fashion and COCI becomes a uniform sampling algorithm. In Appendix A.5, we show that the sample complexity upper bound of the uniform version is obtained by replacing H_Λ in Eq. (1) with H^U_Λ = m / min_{i∈[m]} Λ_i², and that the factor H^U_Λ is tight for uniform sampling. This indicates that the adaptive sampling method of COCI performs much better than uniform sampling when arms have heterogeneous consistent optimality radii such that H_Λ ≪ H^U_Λ. Due to the space constraint, we only provide the key lemma leading to the proof of the theorem. We define a random event ξ = {∀t ≥ τm, ∀i ∈ [m] : |θ̂_{i,t} − θ*_i| ≤ rad_{i,t}}, which indicates that θ* is inside the confidence interval space in all rounds. Then we have the following lemma.

Lemma 2. Suppose event ξ occurs. For every i ∈ [m] and every t > τm, if rad_{i,t−1} < Λ_i/2, then the i-th arm will not be played in round t.

Proof. Suppose, for a contradiction, that the i-th arm is played in round t, namely, i ∈ C_t and i = argmax_{j∈C_t} rad_{j,t−1}. Thus for each j ∈ C_t, we have rad_{j,t−1} ≤ rad_{i,t−1} < Λ_i/2. We claim that for all θ ∈ Θ̂_{t−1}, φ_i(θ) = φ_i(θ*). If so, max_{θ∈Θ̂_{t−1}} φ_i(θ) = min_{θ∈Θ̂_{t−1}} φ_i(θ), and then by line 11, i ∉ C_t, a contradiction. We now prove the claim. For any vector x ∈ R^m and any index subset C ⊆ [m], we use x_C to denote the subvector of x projected onto C. For vector-valued functions such as φ(θ), we write φ_C(θ) for φ(θ)_C. For any θ ∈ Θ̂_{t−1}, we construct an intermediate vector θ′ = (θ_{C_t}, θ*_{−C_t}), i.e., the j-th component θ′_j is θ_j when j ∈ C_t, or θ*_j when j ∉ C_t. Since event ξ occurs, we have |θ̂_{j,t−1} − θ*_j| ≤ rad_{j,t−1} for j ∈ [m]. Thus for all j ∈ C_t, |θ′_j − θ*_j| ≤ |θ_j − θ̂_{j,t−1}| + |θ̂_{j,t−1} − θ*_j| ≤ 2 rad_{j,t−1} < Λ_i, and for all j ∉ C_t, |θ′_j − θ*_j| = 0. This means that ∥θ′ − θ*∥_∞ < Λ_i. According to Proposition 1, φ_i(θ′) = φ_i(θ*). We next prove that φ_i(θ) = φ_i(θ′), which directly leads to φ_i(θ) = φ_i(θ*).
Since event ξ occurs and θ* ∈ [0, 1]^m, θ* is in Θ̂_{t−1}. By the definition of θ′ and the fact that θ ∈ Θ̂_{t−1}, θ′ is also in Θ̂_{t−1}. According to Algorithm 1, for each j ∉ C_t we have max_{θ∈Θ̂_{t−1}} φ_j(θ) = min_{θ∈Θ̂_{t−1}} φ_j(θ), and thus φ_{−C_t}(θ) = φ_{−C_t}(θ′) = φ_{−C_t}(θ*). Note that the reward function is separable, so we have
$$r(\theta; y) = \sum_{j \in C_t} r_j(\theta_j, y_j) + \sum_{j \notin C_t} r_j(\theta_j, y_j).$$
Let Y_{C_t}(θ) = {y_{C_t} : y ∈ Y ∧ y_{−C_t} = φ_{−C_t}(θ)}. It is straightforward to verify that φ_{C_t}(θ) is the leading optimal solution of the following problem:
$$\max \sum_{j \in C_t} r_j(\theta_j, z_j), \quad \text{subject to } z \in Y_{C_t}(\theta). \quad (2)$$
Similarly, we have
$$r(\theta'; y) = \sum_{j \in C_t} r_j(\theta_j, y_j) + \sum_{j \notin C_t} r_j(\theta^*_j, y_j),$$
and φ_{C_t}(θ′) is the leading optimal solution of
$$\max \sum_{j \in C_t} r_j(\theta_j, z_j), \quad \text{subject to } z \in Y_{C_t}(\theta^*). \quad (3)$$
Since φ_{−C_t}(θ) = φ_{−C_t}(θ*), optimization problems (2) and (3) are identical, so they have the same leading optimal solution, i.e., φ_{C_t}(θ) = φ_{C_t}(θ′). Notice that i ∈ C_t; therefore, φ_i(θ) = φ_i(θ′) holds.

The above lemma is the key connecting the consistent optimality radius Λ_i with the confidence radius rad_{i,t−1} and the stopping condition. Its proof relies on both the definition of the consistent optimality radius and the assumption of separable reward functions. With this lemma, the sample complexity is obtained by considering the first round in which every arm satisfies the condition rad_{i,t−1} < Λ_i/2. Borrowing a lower bound analysis in [17], we can further show that the hardness measure H_Λ is necessary for CPE-CS, and even for CPE-L, as stated below.

Theorem 2. Given m arms and δ ∈ (0, 0.1), there exists an instance such that every algorithm A for CPE-L that outputs the optimal solution with probability at least 1 − δ takes at least Ω(H_Λ + H_Λ m^{−1} log δ^{−1}) samples in expectation.

3.1 Implementing the Condition in Line 11

The key condition in line 11 of Algorithm 1 is a logical one revealing the conceptual meaning of the stopping condition, but it does not lead to a direct implementation. In many CPE-CS instances, the condition can be translated into a condition only on the boundary of Θ̂_{t−1}, and further, due to the bi-monotonicity of φ introduced below, it has an efficient implementation. Such instances include best-arm identification, top-k arm identification, water resource planning (Section 4.1), partitioned opinion sampling (Section 4.2), etc. We say that the leading optimal solution φ(θ) satisfies bi-monotonicity if, for each i ∈ [m], φ_i(θ) is monotonically non-increasing (or non-decreasing) in θ_i, and monotonically non-decreasing (or non-increasing, respectively) in θ_j for all j ≠ i. For convenience, we use $\overline{\theta}_{i,t} = \max_{\theta \in \hat{\Theta}_t} \theta_i$ and $\underline{\theta}_{i,t} = \min_{\theta \in \hat{\Theta}_t} \theta_i$ to denote the upper and lower confidence bounds of arm i in round t. We also use $\overline{\theta}_{-i,t}$ and $\underline{\theta}_{-i,t}$ to denote the upper and lower confidence bounds of all arms excluding arm i.

Theorem 3. If the leading optimal solution φ(θ) satisfies bi-monotonicity, the condition in line 11 of Algorithm 1 can be efficiently implemented by checking $\varphi_i(\overline{\theta}_{-i,t-1}, \underline{\theta}_{i,t-1}) \ne \varphi_i(\underline{\theta}_{-i,t-1}, \overline{\theta}_{i,t-1})$.
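Under bi-monotonicity, the maximum and the minimum of φ_i over the confidence box are attained at two opposite corners of the box, so the line-11 test costs exactly two oracle calls. The sketch below is our own illustration; `phi` stands for an assumed offline oracle mapping a parameter vector to the leading optimal solution.

```python
def in_candidate_set(i, lower, upper, phi):
    """Theorem 3: arm i is a candidate iff the leading optimal solutions at
    the two extreme corners of the confidence box disagree in coordinate i
    (valid whenever phi satisfies bi-monotonicity)."""
    corner_a = upper.copy(); corner_a[i] = lower[i]  # theta_{-i} high, theta_i low
    corner_b = lower.copy(); corner_b[i] = upper[i]  # theta_{-i} low, theta_i high
    return phi(corner_a)[i] != phi(corner_b)[i]
```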
The above theorem indicates that, when bi-monotonicity holds for φ(θ), we only need two calls to the offline oracle φ(θ) to implement the condition in line 11, and thus the COCI algorithm has an efficient implementation in this case.

4 Applications

4.1 Water Resource Planning

Water resource systems benefit people by meeting drinking water and sanitation needs, and they also support and maintain resilient biodiverse ecosystems. In regional water resource planning, one needs to determine the Biological Oxygen Demand (BOD, a measure of pollution) to be removed from the water system at each source. Online learning techniques proposed in recent years make adaptive optimization for water resource planning possible.

Let y_i be the pounds of BOD to be removed at source i. One general model (adapted from [7]) to minimize total costs to the region while meeting specified pollution standards can be expressed as:
$$\max \sum_{i=1}^{m} \theta^*_i y_i - \sum_{i=1}^{m} f_i(y_i), \quad \text{subject to } \sum_{i=1}^{m} y_i \ge b,\; 0 \le y_i \le c_i,\; \forall i \in [m], \quad (4)$$
where θ*_i is the quality response caused by removing one pound of BOD at source i, and f_i(y_i) is the cost of removing y_i pounds of BOD at source i. Each y_i is constrained by c_i, the maximum pounds of BOD that can be removed at source i. Moreover, the total pounds of BOD to be removed are required to exceed a certain threshold b. The above model formulates the trade-off between the benefit and the cost of removing the pollutants. The cost function f_i is usually known and non-linear, and may depend on the cost of oxidation, labor cost, facility cost, etc., while the quality response θ*_i is unknown beforehand and needs to be learned from tests at source i. In each test, the tester measures the quality response at a source i and gets an observation of θ*_i, which can be regarded as a random variable θ_i drawn from an unknown distribution with mean θ*_i. The goal is to do as few tests as possible to estimate the quality responses, and then give a final allocation (y^o_1, …, y^o_m) of BOD among the sources as the plan to be implemented (e.g., building BOD removal facilities at the sources).

The above problem falls into the CPE-CS framework. The i-th source corresponds to the i-th arm. The quality response at source i is the unknown parameter θ*_i associated with arm i, and τ = 1. Each allocation (y_1, …, y_m) satisfying the constraints corresponds to a decision. We discretize the y_i's so that the decision class Y is finite. The reward function is $r(\theta; y) = \sum_{i=1}^{m} \theta_i y_i - \sum_{i=1}^{m} f_i(y_i)$, which is continuous and separable. Suppose the offline problem of Eq. (4) with known θ* can be solved by a known oracle φ(θ*). Then the COCI algorithm can be directly applied to the water resource planning problem. The following lemma gives a sufficient condition for the bi-monotonicity of φ.

Lemma 3. When the derivatives df_i/dy_i are all monotonically increasing or all monotonically decreasing, and the constraint $\sum_{i=1}^{m} y_i \ge b$ is tight at the leading optimal solution φ(θ) for all θ, then φ(θ) satisfies bi-monotonicity.
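When each f_i is convex (so df_i/dy_i is increasing, one of the two cases in Lemma 3), a natural offline oracle for problem (4) allocates removal greedily in small increments Δ, always to the source with the largest marginal net gain θ_i − f_i'(y_i). The following is our own sketch of such a discretized oracle under that convexity assumption; it is not the paper's implementation, and the helper names are hypothetical.

```python
def water_oracle(theta, f_marginal, c, b, delta=0.01):
    """Greedy offline oracle for problem (4), assuming each f_i is convex so
    that f_i'(y_i) is non-decreasing in y_i.
    theta[i]: quality response per pound of BOD at source i,
    f_marginal[i](y): derivative f_i'(y), c[i]: cap at source i,
    b: floor on the total pounds removed."""
    m = len(theta)
    y = [0.0] * m
    while True:
        # marginal net gain of removing delta more pounds at each source
        gains = [theta[i] - f_marginal[i](y[i]) if y[i] + delta <= c[i]
                 else float("-inf") for i in range(m)]
        i = max(range(m), key=lambda j: gains[j])
        if gains[i] <= 0 and sum(y) >= b:
            return y          # floor met and no profitable step remains
        if gains[i] == float("-inf"):
            return y          # all sources capped (instance infeasible if sum(y) < b)
        y[i] += delta
```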
By Theorem 3, when the offline oracle for the water resource planning problem satisfies bi-monotonicity, we can instantiate the condition in line 11 of Algorithm 1 as $\varphi_i(\overline{\theta}_{-i,t-1}, \underline{\theta}_{i,t-1}) \ne \varphi_i(\underline{\theta}_{-i,t-1}, \overline{\theta}_{i,t-1})$. Although this application is set up in the context of water resource planning, the formulation in Eq. (4) is general enough to model other applications, especially ones in the urban planning context. For example, to plan air quality control for a city, we need to target a number of air pollution emission sources (e.g., factories) and do adaptive testing at the sources to determine the optimal pollution removal target at each source that maximizes the total utility of the plan. Other applications, such as crime control, may also be modeled similarly as instances of our CPE-CS framework and solved effectively by our COCI algorithm.

4.2 Partitioned Opinion Sampling

Public opinion dynamics has been well studied, and a number of opinion dynamic models have been proposed in the literature, such as the voter model [19] and its variants [26, 33, 37]. In these models, people's opinions f^(t)_1, f^(t)_2, …, f^(t)_n ∈ [0, 1] change over time t, and converge to a steady state, after sufficient social interactions, in which the joint distribution of people's opinions no longer changes. Thus, the opinions are regarded as Bernoulli random variables drawn from the steady-state joint distribution, and sampling at time t can be considered as observing part of a realization of f^(t)_1, f^(t)_2, …, f^(t)_n. In partitioned opinion sampling, the population is divided into several disjoint groups V_1, V_2, …, V_m with n_i = |V_i|. When we draw y_i samples (with replacement) from group V_i at time t, we obtain y_i i.i.d. random variables f^(t)_{v_{i,1}}, f^(t)_{v_{i,2}}, …, f^(t)_{v_{i,y_i}}, where v_{i,j} is the j-th sample from group V_i. Partitioned sampling uses
$$\hat{f}^{(t)} = \sum_{i=1}^{m} \frac{n_i}{n} \cdot \Big( \frac{1}{y_i} \sum_{j=1}^{y_i} f^{(t)}_{v_{i,j}} \Big)$$
as the unbiased estimator of the mean population opinion at time t, and the task is to find the optimal allocation (y^o_1, …, y^o_m) with sample size budget $\sum_{i=1}^{m} y^o_i \le k$ that minimizes the sample variance Var[f̂^(t)], a common sample quality measure [5, 6, 26]. One way to achieve the best estimate quality for a future time t is to do adaptive sampling to quickly estimate the opinion variance of each group, and then decide the optimal sample size allocation for the real sampling event at time t. This corresponds to certain opinion polling practices, for instance, polling after each presidential debate and preparing for a better sample quality on election day. We remark that in this setting, past samples are useful for estimating opinion variances within groups, but cannot be directly used to estimate the mean opinion at a future time t, since f̂^(t) is time-based and using historical samples directly may lead to biased estimates. More specifically, let X_i be the result of one random sample from group V_i in the steady state. Note that the randomness of X_i comes from both the sampling randomness and the opinion randomness in the steady state. One can easily verify that
$$\mathrm{Var}[\hat{f}^{(t)}] = \sum_{i=1}^{m} \frac{n_i^2}{n^2 y_i} \mathrm{Var}[X_i],$$
where Var[X_i] is the variance of group V_i, referred to as the within-group variance.
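As a quick numeric illustration of this formula, the snippet below (our own) evaluates Var[f̂^(t)] for a given allocation:

```python
import numpy as np

def allocation_variance(group_sizes, within_var, y):
    """Var[f_hat] = sum_i n_i^2 * Var[X_i] / (n^2 * y_i)."""
    n_i = np.asarray(group_sizes, dtype=float)
    n = n_i.sum()
    return float(np.sum(n_i**2 * np.asarray(within_var) / (n**2 * np.asarray(y))))

# e.g., two groups of sizes 60 and 40 with within-group variances 0.25 and 0.09:
print(allocation_variance([60, 40], [0.25, 0.09], [6, 4]))
```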
The goal is to use as few samples as possible to estimate the within-group variances, and then give the final sample size allocation that minimizes Var[f̂^(t)]. This falls into the CPE-CS framework. In particular, each group V_i corresponds to an arm i, and each within-group variance Var[X_i] corresponds to the unknown parameter θ*_i of arm i. The decision space Y is $\{(y_1, \ldots, y_m) \in \mathbb{Z}_+^m : \sum_{i=1}^{m} y_i \le k\}$. The reward function r(θ; y) is set to $-\sum_{i=1}^{m} \frac{n_i^2 \theta_i}{n^2 y_i}$, where the negative sign is because the partitioned opinion sampling problem is a minimization problem. It is non-linear but continuous and separable. Therefore, the problem is an instance of CPE-CS. The oracle for the offline problem can be realized by a greedy algorithm, denoted as φ(θ), and it satisfies bi-monotonicity (the design and the analysis of the offline oracle are non-trivial; see Appendix B.2). Thus, the COCI algorithm can be directly applied as follows: 1) EST_i is set to the variance estimator, i.e., $\mathrm{EST}_i(X_{i,1}, \ldots, X_{i,s}) = \frac{1}{s-1}\big(\sum_{j=1}^{s} X_{i,j}^2 - \frac{1}{s}(\sum_{j=1}^{s} X_{i,j})^2\big)$, and τ = 2; 2) the condition in line 11 of Algorithm 1 is instantiated by $\varphi_i(\overline{\theta}_{-i,t-1}, \underline{\theta}_{i,t-1}) \ne \varphi_i(\underline{\theta}_{-i,t-1}, \overline{\theta}_{i,t-1})$.
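One standard way to realize such a greedy oracle for the separable objective Σ_i n_i²θ_i/(n²y_i) is to start from one sample per group and repeatedly award the next sample to the group with the largest marginal variance reduction. The sketch below is our own rendering of this idea, together with the estimator EST_i; the paper's actual oracle and its correctness analysis are in its Appendix B.2.

```python
import heapq

def greedy_allocation(group_sizes, theta, k):
    """Allocate k samples to minimize sum_i n_i^2 * theta_i / (n^2 * y_i):
    the marginal reduction of giving group i its (y_i+1)-th sample is
    n_i^2 * theta_i * (1/y_i - 1/(y_i+1)); pick the largest one each step."""
    m = len(theta)
    assert k >= m, "need at least one sample per group"
    y = [1] * m
    # gain of the 2nd sample for each group: 1/1 - 1/2 = 0.5
    heap = [(-0.5 * group_sizes[i]**2 * theta[i], i) for i in range(m)]
    heapq.heapify(heap)
    for _ in range(k - m):
        _, i = heapq.heappop(heap)
        y[i] += 1
        gain = group_sizes[i]**2 * theta[i] * (1.0 / y[i] - 1.0 / (y[i] + 1))
        heapq.heappush(heap, (-gain, i))
    return y

def est_variance(samples):
    """Unbiased variance estimator EST_i of Section 4.2 (requires s >= 2)."""
    s = len(samples)
    sq = sum(x * x for x in samples)
    sm = sum(samples)
    return (sq - sm * sm / s) / (s - 1)
```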
5 Applying COCI to CPE-L

In Section 2, we already showed that the linear class CPE-L is a special case of CPE-CS. In this section, we discuss the implications of applying the COCI algorithm to solve CPE-L problems, and compare the sample complexity and implementation efficiency against the CLUCB algorithm in [13]. Since the parameter θ* is the vector of means of the arms, we use the mean estimator and set τ = 1 in COCI. Recall that for a binary vector y ∈ Y, S_y is defined as {i ∈ [m] : y_i = 1}. Chen et al. [13] use the term reward gap in their formulation of sample complexity. For each arm i ∈ [m], its reward gap Δ_i is defined as:
$$\Delta_i = \begin{cases} r(\theta^*; y^*) - \max_{y \in Y,\, i \notin S_y} r(\theta^*; y), & \text{if } i \in S_{y^*}, \\ r(\theta^*; y^*) - \max_{y \in Y,\, i \in S_y} r(\theta^*; y), & \text{if } i \notin S_{y^*}. \end{cases}$$
Chen et al. [13] also define a (reward gap) hardness measure $H_\Delta = \sum_{i=1}^{m} \frac{1}{\Delta_i^2}$. Moreover, for each decision class Y, Chen et al. [13] define a key quantity, the width, denoted width(Y), that is needed for the sample complexity. Intuitively, width(Y) is the minimum number of elements that one may need to exchange in one step of a series of steps when changing the current decision S ∈ Y into another decision S′ ∈ Y, such that after every exchange step the resulting decision (subset) is still in Y. The technical definition is not very relevant to the discussion below and is thus left to the supplementary material. We remark that width(Y) = O(m). Given the above setup, Chen et al. [13] show that, with probability 1 − δ, their CLUCB algorithm achieves the sample complexity bound
$$T \le 2m + 499\,\mathrm{width}(Y)^2 H_\Delta \ln\big(4m\,\mathrm{width}(Y)^2 H_\Delta / \delta\big) = O\big(\mathrm{width}(Y)^2 H_\Delta \log(m H_\Delta / \delta)\big). \quad (5)$$
When applying the COCI algorithm to solve CPE-L problems, we obtain the following key connection between the consistent optimality radius and the reward gap:

Lemma 4. For CPE-L problems, we have Λ_i ≥ Δ_i / width(Y) for all i ∈ [m], and thus H_Λ ≤ H_Δ · width(Y)².

Combining this with Theorem 1, COCI achieves the following sample complexity bound for CPE-L:
$$T \le 2m + 12\,\mathrm{width}(Y)^2 H_\Delta \ln\big(24\,\mathrm{width}(Y)^2 H_\Delta\big) + 4\,\mathrm{width}(Y)^2 H_\Delta \ln(4\delta^{-1}) = O\big(\mathrm{width}(Y)^2 H_\Delta \log(m H_\Delta / \delta)\big).$$
The above result has the same sample complexity as in Eq. (5) (with an even slightly better constant). (CPE-L in [13] assumes R-sub-Gaussian distributions; our analysis can be adapted to R-sub-Gaussian distributions as well, with the same R² term appearing in the sample complexity.) However, with our analysis, we only need the complicated combinatorial quantity width(Y) and the linear reward assumption in the last step. This also suggests that our consistent optimality radius Λ_i and its associated consistent optimality hardness H_Λ are more fundamental measures of problem hardness than the reward gap Δ_i and its associated reward gap hardness H_Δ.

Next we discuss the implementation of the condition in line 11 of COCI for CPE-L. First, because linear functions are monotone, it is easy to see that we only need to check parameters θ on the boundaries of Θ̂_{t−1} (at most 2|Y| calls to the oracle φ). For simple constraints, such as all subsets of size k, it is easy to verify that φ(θ) is bi-monotone, and thus we have an efficient implementation of the condition as given in Theorem 3. For more complicated combinatorial constraints, it is still an open question whether an efficient implementation of the condition in line 11 exists when only the oracle φ is given. The CLUCB algorithm, on the other hand, does have an efficient implementation for all CPE-L problems as long as the oracle φ is given. Therefore, compared with CLUCB in terms of efficient implementation, COCI can be viewed as trading off the complexity of the reward functions against the complexity of the combinatorial constraints. In particular, COCI can handle more complicated non-linear reward functions on real vectors and allows an efficient implementation (due to bi-monotonicity) under simple constraints, while CLUCB deals with complicated combinatorial constraints but only works with linear reward functions on binary vectors.

6 Future Work

There are a number of open problems and future directions. For example, one can consider the fixed budget setting of CPE-CS: the game stops after a fixed number T of rounds, where T is given before the game starts, and the learner needs to minimize the probability of error Pr[y^o ≠ y*]. One may also consider the PAC setting: with probability at least 1 − δ, the algorithm should output a decision whose reward is at most ε away from the optimal reward. This setting may further help to eliminate the requirement of a finite decision class Y. Another direction is to combine the advantages of COCI and CLUCB to design a unified algorithm that allows efficient implementation for all CPE-CS problems. How to incorporate an approximation oracle instead of the exact oracle into the CPE framework is also an interesting direction." }, { "url": "http://arxiv.org/abs/1510.05217v3", "title": "Partitioned Sampling of Public Opinions Based on Their Social Dynamics", "abstract": "Public opinion polling is usually done by random sampling from the entire\npopulation, treating individual opinions as independent. In the real world,\nindividuals' opinions are often correlated, e.g., among friends in a social\nnetwork.
In this paper, we explore the idea of partitioned sampling, which\npartitions individuals with high opinion similarities into groups and then\nsamples every group separately to obtain an accurate estimate of the population\nopinion. We rigorously formulate the above idea as an optimization problem. We\nthen show that the simple partitions which contain only one sample in each\ngroup are always better, and reduce finding the optimal simple partition to a\nwell-studied Min-r-Partition problem. We adapt an approximation algorithm and a\nheuristic algorithm to solve the optimization problem. Moreover, to obtain\nopinion similarity efficiently, we adapt a well-known opinion evolution model\nto characterize social interactions, and provide an exact computation of\nopinion similarities based on the model. We use both synthetic and real-world\ndatasets to demonstrate that the partitioned sampling method results in\nsignificant improvement in sampling quality and it is robust when some opinion\nsimilarities are inaccurate or even missing.", "authors": "Weiran Huang, Liang Li, Wei Chen", "published": "2015-10-18", "updated": "2016-11-25", "primary_cat": "cs.SI", "cats": [ "cs.SI", "physics.data-an", "physics.soc-ph" ], "main_content": "Introduction Public opinion is essential nowadays for governments, organizations and companies to make decisions on their policies, strategies, products, etc. The most common way to collect public opinions is polling, typically done by randomly sampling a large number of individuals from the entire population and then interviewing them by telephone. This naive method is unbiased, but conducting interviews is very costly. On the other hand, in recent years, more and more online social media data are available and have been used to predict public opinions on certain issues. Such predictions cost less human effort, but they are usually biased and may lead to incorrect decisions. Thus, keeping the estimation unbiased while saving the cost becomes an important task to pursue. In this paper, we utilize individuals' social interactions (potentially learned from social media data) to improve the unbiased sampling method. Our motivation is from the fact that people's opinions are often correlated, especially among friends in a social network, due to their social interactions in terms of the homophily and influence effects (McPherson, Smith-Lovin, and Cook 2001; Goel, Mason, and Watts 2010; Crandall et al. 2008). Such correlations are partially known in the big data era. For example, many online social media and networking sites provide publicly available social interaction data and users' sentiment data, and companies also have large amounts of data about their customers' preferences and their social interactions. Our idea is to partition individuals into different groups by utilizing the above prior knowledge, such that people within a group are likely to hold the same opinions. We can then sample very few people in each group and aggregate the sampling results together to achieve an accurate estimation. We call this the partitioned sampling method. We formulate the above idea as an optimization problem. In particular, we first characterize individuals' opinions as random variables.
We then specify our objective as minimizing the expected sample variance of the estimate, and define the statistical measure of pairwise opinion similarity as the input. Our analysis later shows that this input is enough to fully determine the solution of the optimization problem, named the Optimal Partitioned Sampling (OPS) problem (Section 2). We solve the OPS problem in two steps (Section 3). First, we show that the best partition is always a simple partition, meaning that each group contains only one sample. Second, we use people's opinion similarities to construct a weighted graph and reduce the OPS problem to the Min-r-Partition problem. We adapt a semi-definite programming algorithm and a heuristic algorithm to solve the optimization problem. We further show that partitioned sampling using any balanced simple partition, where all group sizes are the same, always outperforms the naive sampling method, and thus a balanced simple partition is always safe to use even if we only have partial or inaccurate opinion similarity information. Next, we adapt existing opinion evolution models and propose the Voter model with Innate Opinions (VIO) based on social network interactions (Section 4). We provide an exact computation of opinion similarities in the steady state of the model, which is novel in the study of such models. Finally, we conduct experiments on both synthetic and real-world datasets to demonstrate the effectiveness and robustness of our partitioned sampling method (Section 5). In summary, our contributions include: (a) proposing the partitioned sampling method to improve sampling quality based on opinion similarities and formulating it as an optimization problem, (b) precisely connecting the OPS problem to the Min-r-Partition problem and providing efficient algorithms for the OPS problem, and (c) adapting an opinion evolution model and providing an exact computation of opinion similarities based on the model.

Related Work. There are many sampling methods in the literature. The most related method is stratified sampling (Bethel 1986; Bethel 1989; Chromy 1987; Cochran 2007; Kozak, Verma, and Zieliński 2007; Keskintürk and Er 2007; Ballin and Barcaroli 2013). The entire population is first stratified into homogeneous atomic strata based on individuals' profiles (e.g., age, gender, etc.), which may then be combined into a final stratification, and the subsample size in each stratum is allocated to minimize the sample variance. Conceptually, our partitioned sampling method is similar to stratified sampling, but there are some important differences. First, stratified sampling partitions individuals based on their profiles, which may not imply opinion similarity, while we partition individuals directly based on opinion similarity, and thus our method is more accurate and flexible. Second, the technical treatments are different. Stratified sampling treats individual opinions as fixed and unknown, and requires the (estimated) mean and standard deviation of opinions in each stratum to bootstrap the stratified sampling, while we treat individual opinions as random variables and use pairwise opinion similarities for partitioned sampling. Among studies on social interaction based sampling, Dasgupta, Kumar, and Sivakumar (2012) utilize social network connections to facilitate sampling.
However, their method is to ask the voter being sampled to return an estimate of her friends' opinions, which changes the polling practice. In contrast, we still follow the standard polling practice and only use implicit knowledge of opinion similarities to improve sampling quality. Das et al. (2013) consider the task of estimating people's average innate opinion by removing their social interactions, which is opposite to our task: we want to utilize opinion interactions for more efficient sampling of the final expressed opinions, which are the ones counted in opinion polls. Graph sampling methods (Gjoka et al. 2010; Kurant et al. 2011) aim at achieving unbiased uniform sampling on large-scale networks when the full network is not available, which is orthogonal to our partitioned sampling approach and could potentially be combined with it. Various opinion evolution models have been proposed in the literature (Yildiz et al. 2011; Das et al. 2013; Gionis, Terzi, and Tsaparas 2013; Li et al. 2015). Our VIO model is adapted from the voter model (Clifford and Sudbury 1973) and its extension with stubborn agents (Yildiz et al. 2011). Graph partitioning has been well studied, and numerous problem variants and algorithms exist. In this paper, we reduce the OPS problem to the Min-r-Partition problem, which was first formulated by Sahni and Gonzalez (1976). To the best of our knowledge, there are no approximation or heuristic algorithms for Min-r-Partition. Thus, we adapt a state-of-the-art approximation algorithm for the dual problem (Max-r-Cut) to solve the OPS problem (Frieze and Jerrum 1997). We also propose a greedy algorithm for large graphs, which takes its idea from a heuristic algorithm for Max-r-Cut (Zhu, Lin, and Ali 2013).

2 Formulating the OPS Problem

We consider a vertex set V from a social network graph containing n vertices (or nodes) v_1, v_2, …, v_n. Each vertex represents a person in the social network and has a binary opinion on some topic of interest. Our task is to estimate the average opinion of all individuals in the social network with sample size budget r. Let f : V → {0, 1} denote the opinion function, i.e., we wish to estimate the fraction $\bar{f} = \frac{1}{n}\sum_{i=1}^{n} f(v_i)$. The naive sampling method simply picks r nodes uniformly at random with replacement from V, asks their opinions, and takes the average of the sampled opinions as the estimate: $\hat{f}_{\mathrm{naive}}(V, r) = \frac{1}{r}\sum_{i=1}^{r} f(x_i)$, where x_i is the i-th sampled node. In this paper, we propose a general sampling framework called partitioned sampling. Formally, we first partition the whole vertex set into several disjoint subsets (called groups), and then allocate a subsample size to each group. We use P = {(V_1, r_1), (V_2, r_2), …, (V_K, r_K)} to represent such a partition, where V_1, V_2, …, V_K are the groups and r_k is the subsample size of group V_k. Next, we do naive sampling inside each group V_k with its subsample size r_k. Finally, we estimate the average opinion of the population by taking a weighted average of all subsampling results, with weights proportional to group sizes: $\hat{f}_{\mathrm{part}}(P) = \sum_{k=1}^{K} \frac{|V_k|}{|V|} \cdot \hat{f}_{\mathrm{naive}}(V_k, r_k)$. Notice that naive sampling is a special case of partitioned sampling with P = {(V, r)}. One can easily verify that partitioned sampling is unbiased (see Appendix F.1).
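Both estimators are a few lines of code; this minimal sketch is our own, with `opinions` an assumed dict mapping each node to its 0/1 opinion.

```python
import random

def naive_sample(opinions, nodes, r):
    """f_hat_naive(V, r): average of r uniform draws with replacement."""
    return sum(opinions[random.choice(nodes)] for _ in range(r)) / r

def partitioned_sample(opinions, partition):
    """f_hat_part(P): size-weighted average of per-group naive estimates,
    where partition is a list of (group_nodes, r_k) pairs."""
    n = sum(len(g) for g, _ in partition)
    return sum(len(g) / n * naive_sample(opinions, g, rk) for g, rk in partition)
```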
Intuitively, the advantage of partitioned sampling is that, if we partition individuals such that people likely to hold the same opinions are placed in the same group, then we can sample very few people in each group to get an accurate estimate of the average opinion of the group, and aggregate these estimates to get a good estimate of the population mean. To implement this idea, we assume that some prior knowledge about people's opinions and their similarities is available before sampling. Based on this knowledge, our goal is to find the best partition for partitioned sampling, i.e., the one achieving the best sampling quality. Our first research challenge is how to rigorously formulate the above intuition as an optimization problem. To meet this challenge, we need to answer (a) which objective function is the appropriate one for the optimization problem, and (b) which representation of the prior knowledge about people's opinions and their similarities can be used as the input to the optimization problem. We first address the objective function. When all individuals' opinions f(v_1), f(v_2), …, f(v_n) are fixed (but unknown), the effectiveness of an unbiased randomized sampling method is measured by the standard sample variance Var(f̂), where f̂ is the estimate: the smaller the sample variance, the better the sampling method. When prior statistical knowledge about people's opinions is available, we effectively treat the opinions f(v_1), f(v_2), …, f(v_n) as random variables, and the prior knowledge is some statistic of the joint distribution of these random variables. In this case, the best sampling method should minimize the expected sample variance E[Var(f̂)], where the expectation is taken over the randomness of the joint distribution of people's opinions. For clarity, we use E_M[Var_S(f̂)] to represent E[Var(f̂)], where subscript M (standing for "model") represents the randomness from the joint distribution model of opinions, and subscript S (standing for "sampling") represents the sample randomness from the sampling method. (One may propose to use the total variance Var_{M,S}(f̂) as the objective function; in Appendix A, we show that the two are equivalent for the optimization task.) We now discuss the input to the optimization task. The full joint distribution of f(v_1), f(v_2), …, f(v_n) requires an exponential number of parameters and is infeasible as an input. Notice that the objective function only involves the first two moments, which suggests using the expectations and pairwise correlations of people's opinions as the input. Indeed, we find that this knowledge is enough to fully characterize the optimization problem. However, we further discover that a weaker and more direct type of statistic suffices, which we formally define as the pairwise opinion similarity: the opinion similarity σ_ij of nodes v_i and v_j is defined as the probability that f(v_i) and f(v_j) have the same value. With the objective function and inputs settled, we are now ready to define our optimization problem:
Definition 1. (Optimal Partitioned Sampling) Given a vertex set V = {v_1, v_2, …, v_n}, a sample size budget r < n, and the opinion similarity σ_ij between every pair of nodes v_i and v_j, the Optimal Partitioned Sampling (OPS) problem is to find the optimal partition P* of V such that the partitioned sampling method using P* achieves the minimum expected sample variance, i.e., P* = arg min_P E_M[Var_S(f̂_part(P))], where P ranges over all partitions of V with r samples.

We remark that the OPS problem requires all pairwise opinion similarities as input so as to make the problem well-defined. We address the issue of handling missing or inaccurate opinion similarities in Section 3.1, and show that partitioned sampling still has outstanding performance there.

3 Solving the OPS Problem

Two issues are involved in the OPS problem: one is how to partition the vertex set V into K groups; the other is how to allocate the subsample size of each group. To simplify the OPS problem, we first consider a special kind of partition that picks only one sample node in each group.

Definition 2. A simple partition is a partition in which the subsample size of each group is equal to one.

Simple partitions are important not only for their simplicity but also for their superiority. We will later show in Theorem 2 that, for any non-simple partition P, one can easily construct a simple partition based on P that is at least as good as P. Thus, we focus on finding the optimal simple partition. Our approach is to construct a weighted assistant graph G_a whose vertex set is V, where the weight of edge (v_i, v_j) is w_ij = 1 − σ_ij, and then connect the OPS problem with a graph partitioning problem on the graph G_a. For a simple partition P = {(V_1, 1), (V_2, 1), …, (V_r, 1)} of V, we use Vol_{G_a}(V_k) to denote the volume of the group V_k in the graph G_a, defined as $\mathrm{Vol}_{G_a}(V_k) = \sum_{v_i, v_j \in V_k} w_{ij}$. We define a cost function g(P) as the sum of all groups' volumes in G_a, namely, $g(P) = \sum_{k=1}^{r} \mathrm{Vol}_{G_a}(V_k)$. Our major technical contribution is to show that minimizing the expected sample variance of partitioned sampling using any simple partition P is equivalent to minimizing the cost function g(P), as summarized by the following theorem:

Theorem 1. Given a vertex set V with pairwise opinion similarities {σ_ij} and sample size r, for any simple partition P = {(V_1, 1), (V_2, 1), …, (V_r, 1)} of V, E_M[Var_S(f̂_part(P))] = g(P) / (2|V|²). Thus, the optimal simple partition of V minimizes the cost function g(P).
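Theorem 1 gives a direct way to score a candidate simple partition from the similarity matrix alone; a minimal sketch (ours), summing w_ij = 1 − σ_ij over all ordered pairs inside each group, which matches the g(P)/(2|V|²) normalization in the theorem:

```python
import numpy as np

def expected_variance(sigma, groups):
    """Theorem 1: E_M[Var_S(f_hat_part(P))] = g(P) / (2 |V|^2), where
    g(P) = sum_k Vol(V_k) and Vol sums w_ij = 1 - sigma_ij over all ordered
    pairs inside each group (the diagonal contributes 0 since sigma_ii = 1)."""
    w = 1.0 - np.asarray(sigma, dtype=float)
    n = w.shape[0]
    g = sum(w[np.ix_(grp, grp)].sum() for grp in groups)
    return g / (2 * n**2)
```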
Proof (Sketch). We use x_k to denote the sample node selected in the k-th group V_k of the simple partition P. The estimate of partitioned sampling with P can be written as $\hat{f}_{\mathrm{part}}(P) = \frac{1}{n}\sum_{k=1}^{r} n_k f(x_k)$, where n = |V| and n_k = |V_k|. When f is fixed, since the f(x_k)'s are independent,
$$\mathrm{Var}_S(\hat{f}_{\mathrm{part}}(P)) = \frac{1}{n^2}\sum_{k=1}^{r} n_k^2 \cdot \mathrm{Var}_S[f(x_k)] = \frac{1}{n^2}\sum_{k=1}^{r} n_k^2 \cdot \big(\mathrm{E}_S[f(x_k)^2] - \mathrm{E}_S[f(x_k)]^2\big).$$
We then use the facts that f(x_k)² = f(x_k) and $\mathrm{E}_S[f(x_k)] = \sum_{v_j \in V_k} f(v_j)/n_k$, and take the expectation when f is drawn from a distribution, to obtain
$$\mathrm{E}_M[\mathrm{Var}_S(\hat{f}_{\mathrm{part}}(P))] = \frac{1}{n^2}\sum_{k=1}^{r}\Big((n_k - 1)\sum_{v_j \in V_k} \mathrm{E}_M[f(v_j)] - \sum_{v_i, v_j \in V_k,\, v_i \ne v_j} \mathrm{E}_M[f(v_i)f(v_j)]\Big).$$
Notice that for any two binary random variables A and B, we have E[AB] = ½(P[A = B] + E[A] + E[B] − 1). After applying this formula to E_M[f(v_i)f(v_j)] and simplifying the expression, we obtain the theorem.

The intuition of the theorem is that a small cost function indicates a small volume for each group, which implies that the nodes within each group have high opinion similarities. Theorem 1 makes precise our intuition that grouping people with similar opinions makes partitioned sampling more efficient. Theorem 1 also provides the connection between the OPS problem and graph partitioning. In particular, it suggests that we can reduce the OPS problem to the following Min-r-Partition problem: given an undirected graph with non-negative edge weights, partition the graph into r groups such that the sum of all groups' volumes is minimized. However, Min-r-Partition is NP-hard to approximate to within any finite factor (Kann et al. 1997), and to the best of our knowledge, there are no approximation or heuristic algorithms for it in the literature. The good news is that Min-r-Partition and its dual problem (Max-r-Cut) are equivalent in their exact solutions, and both approximation and heuristic algorithms exist for Max-r-Cut. Frieze and Jerrum (1997) propose a semi-definite programming (SDP) algorithm that achieves a 1 − 1/r + 2 ln r / r² approximation ratio, which is the best to date. We adopt the SDP algorithm to solve the OPS problem; the SDP partitioning algorithm, including the SDP relaxation program, is given in Appendix B. The drawback of the SDP partitioning algorithm is its inefficiency. Thus, we further propose a greedy algorithm to deal with larger graphs, which takes its idea from a heuristic algorithm for Max-r-Cut (Zhu, Lin, and Ali 2013). Given a simple partition P = {(V_1, 1), …, (V_r, 1)} and an external node v_i that does not belong to V_k for any k ∈ [r], we define δg_ℓ(v_i, P) to be g(P′) − g(P), where P′ = {(V_1, 1), …, (V_ℓ ∪ {v_i}, 1), …, (V_r, 1)}. Thus δg_ℓ(v_i, P) represents the increase of the cost function when the external node v_i is added to the group V_ℓ of P.

Algorithm 1: Greedy Partitioning Algorithm
Require: graph G_a with n nodes, number of groups r.
1: Randomly generate a sequence of all the nodes: x_1, x_2, …, x_n.
2: Let V_1 = … = V_r = ∅.
3: repeat
4:   for i ← 1 to n do
5:     if x_i ∈ V_j for some j ∈ [r] then V_j ← V_j \ {x_i}.
6:     end if
7:     k ← arg min_{ℓ∈[r]} δg_ℓ(x_i, {(V_1, 1), …, (V_r, 1)})
8:     V_k ← V_k ∪ {x_i}.
9:   end for
10: until a predetermined stopping condition holds.
11: Output: partition P = {(V_1, 1), …, (V_r, 1)}.

The greedy algorithm (Algorithm 1) first assigns each ungrouped node x_i to the group for which the objective function g(P) increases the least. After the first round of greedy assignment, the assignment procedure is repeated to further decrease the cost function, until some stopping condition holds, for example, that the decrease is smaller than a predetermined threshold. The running time of one round of greedy assignment is O(n + m), where m is the number of edges in G_a. In our experiments, we will show that greedy partitioning performs as well as SDP partitioning but can run on much larger graphs.
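A direct (unoptimized) rendering of Algorithm 1 in Python is given below; it recomputes δg_ℓ naively rather than in the O(n + m) incremental fashion, and the stopping condition is simply a fixed number of rounds. All of this is our own sketch, not the authors' code.

```python
import numpy as np

def greedy_partition(w, r, rounds=5, seed=0):
    """Algorithm 1: repeatedly move each node to the group that increases
    g(P) the least; delta_g_l(x, P) is proportional to the total weight
    between x and the current members of group l."""
    rng = np.random.default_rng(seed)
    n = w.shape[0]
    assign = np.full(n, -1)            # group index of each node, -1 = ungrouped
    order = rng.permutation(n)
    for _ in range(rounds):
        for x in order:
            cost = np.zeros(r)
            for l in range(r):
                members = np.flatnonzero((assign == l) & (np.arange(n) != x))
                cost[l] = w[x, members].sum()
            assign[x] = int(np.argmin(cost))
    return [np.flatnonzero(assign == l).tolist() for l in range(r)]
```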
Theoretically, the performance of partitioned sampling using the simple partition generated by the greedy partitioning algorithm is always at least as good as naive sampling, even when using the partition generated after the first round of greedy assignment, as summarized below:

Lemma 1. Given a vertex set V with sample size r, partitioned sampling using the simple partition P generated by the greedy partitioning algorithm (even after the first round) is at least as good as naive sampling. Specifically, E_M[Var_S(f̂_part(P))] ≤ E_M[Var_S(f̂_naive(V, r))].

We call a partition P′ a refined partition of P if each group of P′ is a subset of some group of P. Suppose we are given a partition P such that some group is allocated more than one sample. Then we can further partition that group by the greedy partitioning algorithm and finally obtain a refined simple partition of P. According to Lemma 1, the refined simple partition is at least as good as the original partition P, as summarized below:

Theorem 2. For any non-simple partition P, there exists a refined simple partition P′ of P, which can be constructed efficiently, such that partitioned sampling using the refined simple partition P′ is at least as good as partitioned sampling using the original partition P. Specifically, E_M[Var_S(f̂_part(P′))] ≤ E_M[Var_S(f̂_part(P))].

Theorem 2 shows the superiority of simple partitions and justifies that it is enough to optimize partitioned sampling over simple partitions only.

3.1 Dealing with Inaccurate Similarities

When accurate opinion similarities are not available, one can still use a balanced partition (i.e., all groups have exactly the same size) to achieve a sampling result at least as good as naive sampling, as summarized below:

Theorem 3. Given a vertex set V with n nodes and sample size r, where n is a multiple of r, partitioned sampling using any balanced simple partition P is at least as good as naive sampling. That is, Var_S(f̂_part(P)) ≤ Var_S(f̂_naive(V, r)) holds for any fixed opinions f(v_1), …, f(v_n).

Theorem 3 provides a safety net showing that partitioned sampling does not hurt sampling quality. Thus, we can always use the greedy algorithm with a balanced partition constraint to achieve a better sampling result, and the result improves further as the opinion similarities become more accurate. Furthermore, in the experiment on the real-world dataset (Section 5), we artificially remove all the opinion similarity information between disconnected individuals (setting it to 0.5) and perturb the remaining opinion similarities by more than 30%, to simulate the condition of missing and inaccurate similarities. The experimental result shows that the performance of the greedy algorithm with perturbed inputs is quite close to that of the greedy algorithm with exact inputs. This demonstrates the robustness of our greedy algorithm in the face of missing and inaccurate opinion similarity data. Moreover, since real-world social interaction can be characterized well by opinion evolution models, we adapt a well-known opinion evolution model and give an exact computation of opinion similarity based on the model in the next section. The model essentially provides a more compact representation than pairwise similarities.

4 Opinion Evolution Model

We adapt the well-known voter model to describe social dynamics (Clifford and Sudbury 1973; Yildiz et al. 2011). Consider a weighted directed social graph G = (V, A), where V = {v_1, v_2, …, v_n} is the vertex set and A is the weighted adjacency matrix. Each node is associated with both an innate opinion and an expressed opinion.
The innate opinion remains unchanged under external influences, while the expressed opinion can be shaped by the opinions of one's neighbors and is the one observed by sampling. At the initial time, each node v_i generates its innate opinion f^(0)(v_i) ∈ {0, 1} from an i.i.d. Bernoulli distribution with expected value μ^(0). The use of an i.i.d. distribution for the innate opinion is due to the lack of prior knowledge on a brand-new topic, and is also adopted in other models (Dasgupta, Kumar, and Sivakumar 2012). When t > 0, each node v_i updates its expressed opinion f^(t)(v_i) ∈ {0, 1} independently according to a Poisson process with updating rate λ_i: at its Poisson arrival time t, node v_i sets f^(t)(v_i) to its innate opinion with an inward probability p_i > 0, or, with probability $(1 - p_i) A_{ij} / \sum_{k=1}^{n} A_{ik}$, adopts its out-neighbor v_j's expressed opinion f^(t)(v_j). We call the model the Voter model with Innate Opinions (VIO). The VIO model reaches a steady state if the joint distribution of all nodes' expressed opinions no longer changes over time. (The VIO model has a unique joint distribution for the final expressed opinions; see Appendix C for the proof.) We use the notation f^(∞)(v_i) to represent the steady-state expressed opinion of node v_i, which is a random variable. We assume that opinion sampling is done in the steady state, which means that people have sufficiently communicated within the social network. To facilitate the analysis of the VIO model, we take an equivalent view of the VIO model as coalescing random walks on an augmented graph G = (V ∪ V′, E ∪ {e′_1, e′_2, …, e′_n}), where V′ = {v′_1, v′_2, …, v′_n} is a copy of V, E is the edge set of G, and e′_i = (v_i, v′_i) for all i. In this viewpoint, we have n walkers randomly wandering on G "back in time" as follows. At time t, the n walkers are separately located at v_1, v_2, …, v_n. Suppose that before time t, v_i is the last node that updated its expressed opinion, at time τ < t; then the n walkers stay stationary at their nodes from time t until time τ "back in time". At time τ, the walker at node v_i takes a walk step: she either walks to v_i's out-neighbor v_j ∈ V with probability $(1 - p_i) A_{ij} / \sum_{k=1}^{n} A_{ik}$, or walks to v′_i ∈ V′ with probability p_i. If any walker (say the walker starting from node v_i) walks to a node (say v′_k) in V′, she stops her walk. In the VIO model language, this is equivalent to saying that v_i's opinion at time t is determined by v_k's innate opinion, namely f^(t)(v_i) = f^(0)(v_k). If two random walkers meet at the same node in V at any time, they walk together from then on following the above rules (hence the name coalescing). Finally, at time t = 0, if a walker is still at some node v_i ∈ V, she always walks to v′_i ∈ V′. We now define some key parameters based on the coalescing random walk model, which will be used directly for computing the opinion similarity later.

Definition 3. Let I^ℓ_ij denote the event that two random walkers starting from v_i and v_j at time t = ∞ eventually meet and that the first node at which they meet is v_ℓ ∈ V. Let Q be the n × n matrix where Q_ij denotes the probability that a random walker starting from v_i at time t = ∞ ends at v′_j ∈ V′.
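The absorption probabilities Q in Definition 3 can be sanity-checked by directly simulating single walks on the augmented graph; the Monte-Carlo sketch below is our own illustration, not the paper's algorithm.

```python
import numpy as np

def estimate_Q(A, p, trials=10000, seed=0):
    """Q[i, j] ~= Prob(a walk started at v_i is absorbed at v'_j): at each
    step the walker at v stops at v' with probability p[v], and otherwise
    moves to an out-neighbor with probability proportional to A[v, :]."""
    rng = np.random.default_rng(seed)
    n = len(p)
    row = A / A.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
    Q = np.zeros((n, n))
    for i in range(n):
        for _ in range(trials):
            v = i
            while rng.random() >= p[v]:
                v = rng.choice(n, p=row[v])
            Q[i, v] += 1
    return Q / trials
```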
Lemma 2. For i, j, ℓ ∈ [n], P[I^ℓ_ij] is the unique solution of the following linear equation system:
$$\mathrm{P}[I^\ell_{ij}] = \begin{cases} 0, & i = j \ne \ell, \\ 1, & i = j = \ell, \\ \sum_{a=1}^{n} \frac{\lambda_i (1 - p_i) A_{ia}}{(\lambda_i + \lambda_j) d_i} \mathrm{P}[I^\ell_{aj}] + \sum_{b=1}^{n} \frac{\lambda_j (1 - p_j) A_{jb}}{(\lambda_i + \lambda_j) d_j} \mathrm{P}[I^\ell_{ib}], & i \ne j, \end{cases}$$
where $d_i = \sum_{j=1}^{n} A_{ij}$ is v_i's weighted out-degree. In addition, the matrix Q is computed by $Q = \big(I - (I - P) D^{-1} A\big)^{-1} P$, where P = diag(p_1, …, p_n) and D = diag(d_1, …, d_n) are two diagonal matrices, and the matrix $I - (I - P) D^{-1} A$ is invertible when p_i > 0 for all i ∈ [n].

Our main analytical result concerning the VIO model is the following exact computation of the pairwise opinion correlation, which directly leads to the opinion similarity:

Lemma 3. For any i, j ∈ [n], the opinion correlation ρ_ij in the steady state is equal to the probability that two coalescing random walks starting from v_i and v_j at time t = ∞ end at the same absorbing node in V′. Moreover, the opinion correlation ρ_ij can be computed by
$$\rho_{ij} = \mathrm{Cor}_M\big(f^{(\infty)}(v_i), f^{(\infty)}(v_j)\big) = \sum_{k=1}^{n} Q_{ik} Q_{jk} + \sum_{\ell=1}^{n} \mathrm{P}[I^\ell_{ij}] \Big(1 - \sum_{k=1}^{n} Q_{\ell k}^2\Big),$$
where I^ℓ_ij and Q are defined in Definition 3 and computed by Lemma 2.

Theorem 4. For any two nodes v_i and v_j, their opinion similarity σ_ij in the steady state of the VIO model is equal to $\sigma_{ij} = 1 - 2\mu^{(0)}(1 - \mu^{(0)})(1 - \rho_{ij})$, where the opinion correlation ρ_ij is computed by Lemma 3.

Notice that the partitioning algorithms only need 1 − σ_ij as the edge weight, and by the above theorem this weight is proportional to 1 − ρ_ij, which means the exact value of μ^(0) is irrelevant to the partitioning algorithms. In Appendix D, we provide an efficient computation of all pairwise opinion correlations with running time O(nmR) via a carefully designed iterative algorithm, where m is the number of edges of G, which is commonly sparse, and R is the number of iterations. We further remark that the correlations are calculated offline based on the existing network and historical data, so their cost is relatively small compared with the sampling cost of telephone interviews or network surveys. In Appendix G, we further extend the VIO model to include (a) non-i.i.d. distributions of the innate opinions, and (b) negative edges as in the signed voter model (Li et al. 2015).
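Lemma 2 and Theorem 4 translate into a short linear-algebra routine. The sketch below is our own dense implementation: it computes Q in closed form, approximates P[I^ℓ_ij] by fixed-point iteration of the linear system (which converges since the coefficient sums are strictly below 1 when p_i > 0), and assembles σ_ij. It does not reproduce the paper's O(nmR) iterative algorithm from Appendix D.

```python
import numpy as np

def similarity_matrix(A, p, lam, mu0, iters=300):
    """sigma_ij of Theorem 4 for the VIO model. A: weighted adjacency with
    positive out-degrees, p: inward probabilities, lam: updating rates,
    mu0: mean innate opinion."""
    n = len(p)
    d = A.sum(axis=1)
    P = np.diag(p)
    # Lemma 2: Q = (I - (I - P) D^{-1} A)^{-1} P
    Q = np.linalg.solve(np.eye(n) - (np.eye(n) - P) @ (A / d[:, None]), P)
    # P[I^l_{ij}] via fixed-point iteration of the linear system in Lemma 2
    I = np.zeros((n, n, n))               # I[l, i, j]
    for l in range(n):
        I[l, l, l] = 1.0
    for _ in range(iters):
        new = np.zeros_like(I)
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                ci = lam[i] * (1 - p[i]) / ((lam[i] + lam[j]) * d[i])
                cj = lam[j] * (1 - p[j]) / ((lam[i] + lam[j]) * d[j])
                new[:, i, j] = ci * (I[:, :, j] @ A[i]) + cj * (I[:, i, :] @ A[j])
        for l in range(n):
            new[l, l, l] = 1.0
        I = new
    # Lemma 3 and Theorem 4
    rho = Q @ Q.T + np.einsum("lij,l->ij", I, 1.0 - (Q**2).sum(axis=1))
    return 1.0 - 2.0 * mu0 * (1.0 - mu0) * (1.0 - rho)
```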
[Figure: (a) Synthetic graph (100 nodes): expected sample variance E_M[Var_S(f̂)] vs. sample size r for Naive, SDP, and Greedy; (b) Synthetic graph (10,000 nodes): improvement (%) vs. p_H/p_L for r = 250 and r = 500; (c) Synthetic graph (10,000 nodes): improvement (%) vs. inward probability for r = 250 and r = 500; (d) Distribution of inward probabilities p_i for the topics Spring Festival Gala, Microsoft, and iPhone; (e) Weibo graph with λ_i = 1: expected sample variance vs. sample size r for Naive, Greedy_P, and Greedy; (f) Weibo graph with λ_i set to the number of v_i's tweets in a year: expected sample variance vs. sample size r for Naive, Greedy_P, and Greedy.]

5 Experimental Evaluation

In this section, we compare the sampling quality of partitioned sampling using greedy partitioning (Greedy) and partitioned sampling using SDP partitioning (SDP) against naive sampling (Naive) based on the VIO model, using both synthetic and real-world datasets. (We use the CVX package (Grant and Boyd 2014; Grant and Boyd 2008) to solve the SDP program.) We describe the major parameter settings for the experiments below, and leave the complete settings to Appendix E. In our experiments, once the parameters of the VIO model are set, the simulation is done by (a) calculating the pairwise opinion similarities by Theorem 4, (b) running the partitioning algorithms to obtain the candidate partition, and (c) computing the expected variance E_M[Var_S(f̂)] by Theorem 1.

Synthetic Dataset. We use the planted partition model (Condon and Karp 2001) to generate undirected graphs, which aims at resembling the community structure of real-world social networks. Given n vertices and k latent disjoint groups, every edge (v_i, v_j) is generated with a high probability p_H if v_i and v_j are in the same latent group, and otherwise with a low probability p_L. We generate two synthetic graphs of different sizes. The small one includes 100 nodes and 20 latent groups, and p_H, p_L, and λ_i are set to 0.9, 0.01, and 1, respectively. The inward probability of each node is randomly chosen from [0, 0.01].
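These synthetic graphs can be reproduced with the planted-partition generator in networkx; the parameter mapping below (20 groups of 5 nodes for the 100-node graph) is our own reading of the setup.

```python
import networkx as nx

# 20 latent groups of 5 nodes each; intra-group edge probability pH = 0.9,
# inter-group edge probability pL = 0.01, matching the small synthetic graph.
G = nx.planted_partition_graph(l=20, k=5, p_in=0.9, p_out=0.01, seed=42)
print(G.number_of_nodes(), G.number_of_edges())
```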
Fig (a) shows that, when the sample size r is small, the performance of SDP and Greedy are similar to each other and both better than Naive. When the sample size r increases, Greedy becomes much better than Naive, while SDP starts getting worse. For the large synthetic graph with 10,000 nodes and 500 latent groups, SDP is no longer feasible, so we compare the improvement of Greedy against Naive. In Fig (b), we vary p_H/p_L and find that a larger p_H/p_L (more apparent clustering) leads to better performance of the partitioned sampling method. When p_H/p_L increases from 10³ to 10⁵, the improvement in expected sample variance increases rapidly. When p_H/p_L > 10⁵, the improvement becomes saturated. This is because the edges crossing different latent groups are so few that their number decreases rather slowly, and the graph structure is almost unchanged when p_H/p_L increases further. In Fig (c), we set all nodes' inward probabilities to be equal and vary them from 0.02 to 0.8. The figure shows that a lower inward probability leads to better performance of partitioned sampling; when the inward probability gets small, the improvement in expected sample variance increases rapidly. This is because a lower inward probability means people interact more with each other, and thus their opinions are correlated more significantly. From the above experiments, we conclude that a larger p_H/p_L and a lower inward probability make people's opinions more clustered and more correlated inside the clusters, and our partitioned sampling method works better in these cases.

Real-World Dataset. We use the micro-blog data from weibo.com (Yuan et al. 2013), which contains 100,102 users and 30,518,600 tweets within a one-year timeline from 1/1/2013 to 1/1/2014. We treat the user following relationship between two users as a directed edge (with weight 1). We first learn the distribution of users' inward probabilities from the data. We extract a series of users' opinions on 12 specific topics (e.g., Microsoft, iPhone, etc.) by applying a keyword classifier and a sentiment analyzer (Tang et al. 2014) to the tweets. We also collect the users' relationships and form a subgraph for each topic. Then we use the VIO model to fit the data by solving a minimization problem with respect to the inward probabilities using gradient descent. Fig (d) shows the distribution of inward probabilities for three of the topics, namely Spring Festival Gala (68 users), Microsoft (66 users), and iPhone (59 users); the results for the other topics are similar. From these distributions, we observe that (a) over 45% of the inward probabilities lie in [0, 0.2]; (b) the probability that p_i lies in [0.8, 1] is the second highest; and (c) the others are almost uniformly distributed in [0.2, 0.8]. This indicates that in the real world, most people tend to adopt others' opinions, which matches the intuition that people are often affected by others. We manually look up the users whose inward probabilities lie in [0.8, 1] and find that most of them are media accounts and verified users. This matches our intuition that such users always take effort to spread their own opinions on the web but rarely adopt others' opinions, and hence they should have large inward probabilities. Now we simulate the sampling methods on the Weibo graph. We first iteratively remove the users who do not follow anyone, and get a graph including 40,787 nodes and 165,956 directed edges. We generate each user's inward probability following the distribution we learned. We use two different settings for the opinion updating rates: one sets λ_i = 1 for all i ∈ [n]; the other sets λ_i to the number of v_i's tweets in a year. The improvements of Greedy against Naive under the two updating rate settings are similar, as shown in Fig (e) and (f). In particular, if we fix E_M[Var_S(f̂)] to be 3.86 × 10⁻⁵, Greedy needs 4,794 samples while Naive needs 8,000 samples (saving 40.1%) in Fig (e), and Greedy needs 4,885 samples while Naive needs 8,000 samples (saving 38.9%) in Fig (f). This indicates that partitioned sampling greatly improves the sampling quality, and the sample size saving is more apparent when the expected sample variance gets smaller (i.e., when the requirement on sampling quality gets higher).
Moreover, in order to test the performance of partitioned sampling with missing and inaccurate opinion similarities, we artificially remove all the opinion similarity information between disconnected nodes (setting those similarities to 0.5), and perturb each remaining similarity σ_ij with a random noise e_ij in the range [−0.1 − 30%·σ_ij, 0.1 + 30%·σ_ij] (setting the perturbed similarity of σ_ij to the median of {0, σ_ij + e_ij, 1}). Fig (e) and (f) show that Greedy using the above perturbed similarities (denoted Greedy_P) is very close to Greedy, and still has a significant improvement over naive sampling. In conclusion, the experimental results demonstrate the excellent performance of our partitioned sampling method on both synthetic and real-world datasets, even when the opinion similarities are missing or inaccurate." } ], "Keqin Bao": [ { "url": "http://arxiv.org/abs/2305.00447v3", "title": "TALLRec: An Effective and Efficient Tuning Framework to Align Large Language Model with Recommendation", "abstract": "Large Language Models (LLMs) have demonstrated remarkable performance across\ndiverse domains, thereby prompting researchers to explore their potential for\nuse in recommendation systems. Initial attempts have leveraged the exceptional\ncapabilities of LLMs, such as rich knowledge and strong generalization through\nIn-context Learning, which involves phrasing the recommendation task as\nprompts. Nevertheless, the performance of LLMs in recommendation tasks remains\nsuboptimal due to a substantial disparity between the training tasks for LLMs\nand recommendation tasks, as well as inadequate recommendation data during\npre-training. To bridge the gap, we consider building a Large Recommendation\nLanguage Model by tuning LLMs with recommendation data. To this end, we\npropose an efficient and effective Tuning framework for Aligning LLMs with\nRecommendation, namely TALLRec. We have demonstrated that the proposed TALLRec\nframework can significantly enhance the recommendation capabilities of LLMs in\nthe movie and book domains, even with a limited dataset of fewer than 100\nsamples. Additionally, the proposed framework is highly efficient and can be\nexecuted on a single RTX 3090 with LLaMA-7B. Furthermore, the fine-tuned LLM\nexhibits robust cross-domain generalization. Our code and data are available at\nhttps://github.com/SAI990323/TALLRec.", "authors": "Keqin Bao, Jizhi Zhang, Yang Zhang, Wenjie Wang, Fuli Feng, Xiangnan He", "published": "2023-04-30", "updated": "2023-10-17", "primary_cat": "cs.IR", "cats": [ "cs.IR" ], "main_content": "INTRODUCTION Large Language Models (LLMs) have exhibited remarkable proficiency in generating text that closely resembles human language and in performing a wide range of tasks [69], including Natural Language Processing [4, 32, 54], Robotics [10, 43, 56], and Information Retrieval [1, 24, 25, 48]. Prior research has also demonstrated the knowledge-rich and compositional generalization capabilities of LLMs [36, 41, 53]. Given only appropriate instructions, these models are able to learn how to solve unseen tasks and draw on their own knowledge to achieve a high level of performance [33]. The aforementioned capabilities of LLMs present promising opportunities to address the current challenges in the recommendation field that require strong generalization and rich knowledge.
In this light, it is valuable to explore the integration of LLMs into recommender systems, which has received limited attention in prior research. Recent initial attempts [13, 47] rely on In-context Learning [3], typically implemented through the official OpenAI API [2]. They regard the LLM as a toolformer [42] on top of traditional recommendation models (such as MF [27] and LightGCN [16]), i.e., the LLM is used to re-rank the candidate items filtered by these models. However, these approaches only achieve performance comparable to traditional models [13, 47]. Worse still, using only In-context Learning may fail to make recommendations at all. As shown in Figure 1, we find that ChatGPT either refuses to answer or always gives positive predictions (likes). Therefore, it is critical to explore more effective ways to leverage LLMs for recommendation. We postulate that the failure of using only In-context Learning stems from two reasons: 1) LLMs may not align well with the recommendation task, due to the huge gap between the language processing tasks used to train LLMs and recommendation; besides, the recommendation-oriented corpus is very limited during the training phase of LLMs. 2) The effect of LLMs is restricted by the underlying recommendation models, which may fail to include target items in their candidate lists due to their limited capacity. [Figure 1: Illustration of LLMs for recommendation. Given users' interaction history, LLMs predict whether a user will like a new item through In-context Learning. However, representative LLMs, e.g., ChatGPT, either refuse to answer or always give positive predictions (likes) on movie and book recommendation tasks (movie AUC: Alpaca 0.46, Davinci-002 0.49, ChatGPT 0.50, Davinci-003 0.53; book AUC: Alpaca 0.53, Davinci-002 0.46, ChatGPT 0.50, Davinci-003 0.46). If we ignore the refused answers and calculate AUC on the remaining samples, we find that LLMs perform similarly to random guessing (AUC=0.5). Refer to Section 3 for more experimental details.] Therefore, we consider building a Large Recommendation Language Model (LRLM) to bridge the gap between LLMs and the recommendation task, and to better stimulate the recommendation capabilities of LLMs beyond In-context Learning. Toward this goal, we focus on tuning LLMs with the recommendation task. Considering that instruction tuning is core to letting an LLM learn to solve different tasks and generalize strongly [22, 23, 37], we propose a lightweight tuning framework to adapt LLMs for recommendation, named TALLRec. Elaborately, TALLRec structures the recommendation data as instructions and tunes the LLM via an additional instruction tuning process. Moreover, given that LLM training necessitates a substantial amount of computing resources, TALLRec employs a lightweight tuning approach to efficiently adapt LLMs to the recommendation task.
Specifically, we apply the TALLRec framework to the LLaMA-7B model [46] with a LoRA [21] architecture, which ensures the framework can be deployed on an Nvidia RTX 3090 (24GB) GPU. Furthermore, to investigate the minimal computational resources required, we conduct experiments in a few-shot setting, utilizing only a limited number of tuning examples. We conduct detailed experiments in the knowledge-rich recommendation scenarios of movies and books, where the tuned LLaMA-7B model outperforms traditional recommendation models and In-context Learning with GPT-3.5, a much stronger LLM than LLaMA-7B. The results validate the efficiency and robustness of our framework: 1) our TALLRec framework can quickly elicit the recommendation capability of LLMs in the few-shot setting, and 2) LLMs trained via the TALLRec framework have strong generalization ability across different domains (e.g., movie → book). In total, our contributions are summarized as follows: • We study a new problem in recommendation, aligning LLMs with recommendation, where we reveal the limitations of In-context Learning-based approaches and underscore the significance of instruction tuning. • We introduce a new TALLRec framework to build Large Recommendation Language Models, which enables the effective and efficient tuning of LLMs for recommendation with low GPU costs and few tuning samples. • We conduct extensive experiments, validating the effectiveness and efficiency of the proposed framework, and uncovering its exceptional robustness with seamless navigation across different domains. [Table 1: A tuning sample for a translation task. The Instruction Input consists of the Task Instruction "Translate from English to Chinese." and the Task Input "Who am I?"; the Instruction Output is the Task Output "我是谁?" (Chinese for "Who am I?").] 2 TALLREC In this section, we first introduce the preliminary knowledge for tuning LLMs and our task formulation, and then present the proposed TALLRec framework. 2.1 Preliminary • Instruction Tuning is a crucial technique for training LLMs with human-annotated instructions and responses [36]. Generally, instruction tuning has four steps (see the example in Table 1). Step 1: Define a task and articulate a "Task Instruction" using natural language, which usually encompasses a clear definition of the task as well as specific solutions to address it. Step 2: Formulate and construct the input and output of the task in natural language, denoted as "Task Input" and "Task Output". Step 3: Integrate the "Task Instruction" and "Task Input" to form the "Instruction Input", and take the "Task Output" as the corresponding "Instruction Output", for each tuning sample. Step 4: Conduct instruction tuning on LLMs based on the formatted pairs of "Instruction Input" and "Instruction Output"; a minimal sketch of this formatting follows. [Table 2: A tuning sample for rec-tuning. The Instruction Input consists of the Task Instruction "Given the user's historical interactions, please determine whether the user will enjoy the target new movie by answering 'Yes' or 'No'." and the Task Input "User's liked items: GodFather. User's disliked items: Star Wars. Target new movie: Iron Man"; the Instruction Output is the Task Output "No."]
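To make Steps 1-4 concrete, here is a minimal sketch (using plain Python string templates, which are an assumption of this sketch rather than the paper's exact format) of turning the Table 1 translation sample into an (instruction input, instruction output) pair:

```python
def build_sample(task_instruction, task_input, task_output):
    # Steps 1-3: merge "Task Instruction" and "Task Input" into the
    # "Instruction Input"; the "Task Output" becomes the "Instruction Output".
    instruction_input = f"{task_instruction}\n{task_input}"
    return {"input": instruction_input, "output": task_output}

# The Table 1 translation example:
sample = build_sample(
    "Translate from English to Chinese.",
    "Who am I?",
    "我是谁?",  # "Who am I?" in Chinese
)
# Step 4 then fine-tunes the LLM on such (input, output) pairs.
```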
• Rec-tuning Task Formulation. We aim to utilize an LLM, denoted as M, to construct an LRLM that can predict whether a user will enjoy a new item. To achieve this objective, we perform recommendation tuning (rec-tuning) on LLMs with recommendation data. As shown in Table 2, we format recommendation data into the pattern of instruction tuning. We begin by composing a "Task Instruction" that directs the model to determine whether the user will like the target item based on their historical interactions, and to respond with a binary answer of "Yes" or "No". To format the "Task Input", we categorize the user's historically interacted items into two groups based on ratings: the user's liked items and disliked items, where items are sequentially ranked by interaction time and represented by textual descriptions (e.g., title and brief introduction). Besides, the "Task Input" also includes a target new item that the user has never seen. Lastly, we merge the "Task Instruction" and "Task Input" to create an "Instruction Input", and set the expected "Instruction Output" as "Yes" or "No" for rec-tuning. 2.2 TALLRec Framework In this subsection, we introduce the TALLRec framework, which aims to facilitate the effective and efficient alignment of LLMs with recommendation tasks, particularly under low GPU memory consumption. Specifically, we first present the two TALLRec tuning stages with their lightweight implementation, followed by the backbone selection. As shown in Figure 2, TALLRec comprises two tuning stages: alpaca tuning and rec-tuning. The former is the common training process of LLMs that enhances the LLM's generalization ability, while the latter emulates the pattern of instruction tuning and tunes the LLM for the recommendation task. • TALLRec Tuning Stages. For alpaca tuning, we employ the self-instruct data made available by Alpaca [45] to train the LLM. Specifically, we utilize the conditional language modeling objective during alpaca tuning, as exemplified in the Alpaca repository (https://github.com/tloen/alpaca-lora). Formally, $\max_{\Phi} \sum_{(x,y) \in \mathcal{Z}} \sum_{t=1}^{|y|} \log\left(P_{\Phi}(y_t \mid x, y_{<t})\right)$, (1) where $x$ and $y$ represent the "Instruction Input" and "Instruction Output" in the self-instruct data, respectively, $y_t$ is the $t$-th token of $y$, $y_{<t}$ represents the tokens before $y_t$, $\Phi$ is the set of original parameters of M, and $\mathcal{Z}$ is the training set. For rec-tuning, we can leverage rec-tuning samples as described in Table 2 to tune the LLM, similar to alpaca tuning. • Lightweight Tuning. However, directly tuning the full LLM is computationally intensive and time-consuming. As such, we adopt a lightweight tuning strategy for both alpaca tuning and rec-tuning. The central premise of lightweight tuning is that contemporary language models may possess an excessive number of parameters, with their information concentrated on a low intrinsic dimension [21]. Consequently, we can achieve performance comparable to tuning the entire model by tuning only a small subset of parameters [20, 28, 31], as sketched below.
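As a concrete illustration of such lightweight tuning, here is a minimal sketch assuming the Hugging Face transformers and peft libraries; the checkpoint name and LoRA hyper-parameters are illustrative assumptions, though the paper does pair LLaMA-7B with LoRA:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

MODEL = "decapoda-research/llama-7b-hf"  # illustrative LLaMA-7B checkpoint name
base = AutoModelForCausalLM.from_pretrained(MODEL)

# Freeze the pre-trained weights (Phi) and add trainable low-rank matrices (Theta).
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # typically well under 1% of all weights

# Training then minimizes the conditional LM loss of Eq. (2): predict the
# "Instruction Output" tokens given the "Instruction Input", with the prompt
# tokens masked out of the loss.
```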
Specifically, we employ LoRA [21], which freezes the pre-trained model parameters and introduces trainable rank decomposition matrices into each layer of the Transformer architecture to facilitate lightweight tuning. By optimizing only the rank decomposition matrices, we can efficiently incorporate supplementary information while keeping the original parameters frozen. In total, the final learning objective is computed as: $\max_{\Theta} \sum_{(x,y) \in \mathcal{Z}} \sum_{t=1}^{|y|} \log\left(P_{\Phi+\Theta}(y_t \mid x, y_{<t})\right)$, (2) where $\Theta$ denotes the LoRA parameters, and only the LoRA parameters are updated during training. Through LoRA, training can be completed with only about one-thousandth of the original LLM parameters [21]. • Backbone Selection. At present, a large number of LLMs have been released, such as the GPT series, PaLM, Chinchilla, and LLaMA [3, 4, 18, 46]. Among these, a considerable number of LLMs (such as PaLM and Chinchilla) do not provide access to their model parameters or APIs, rendering them challenging to utilize for research or other applications. Additionally, data security is a significant concern in the recommendation field; consequently, the utilization of third-party APIs (such as ChatGPT and text-davinci-003) to leverage LLMs necessitates further discussion. To replicate the issues that require consideration in real-world recommendation scenarios, we intend to simulate the practical utilization of a public LLM and update its parameters for recommendation purposes. After careful consideration, we have opted to conduct experiments using LLaMA, which is presently the best-performing open-source LLM and whose training data is also publicly available [46]. [Figure 2: Illustration of the TALLRec framework, constructed from the two stages of alpaca tuning and rec-tuning. During rec-tuning, we use rec-tuning samples with instruction input and output constructed from recommendation data. Notably, we employ lightweight tuning technology to enhance the efficiency of the TALLRec framework.] 3 EXPERIMENTS In this section, we conduct experiments to answer the following research questions: RQ1: How does TALLRec perform compared with current LLM-based and traditional recommendation models? RQ2: How do the different components in TALLRec affect its effectiveness? RQ3: How does TALLRec perform under cross-domain recommendation? • Dataset. We conduct experiments on two datasets; the statistics and more details can be found in our released data. Movie. This is a processed dataset from MovieLens-100K [12], which comprises user ratings on movies and comprehensive textual descriptions of movies, such as title and director. Because we conduct experiments in a few-shot training setting that requires limited tuning samples, we process the original dataset by sampling the most recent 10,000 interactions and splitting them into training, validation, and testing sets with a ratio of 8:1:1. To construct a rec-tuning sample, the 10 interactions prior to the target item are retained as historical interactions.
Following [16, 66], we only treat interactions with ratings > 3 as "likes", and those with ratings ≤ 3 as "dislikes". Book. This is a book recommendation dataset processed from BookCrossing [71]. The BookCrossing dataset has user ratings (1-10) and textual descriptions of books, such as Book-Author and Book-Title. For each user, we randomly select an item interacted with by this user as the prediction target, and sample 10 interacted items as historical interactions (BookCrossing lacks interaction timestamps, thus we can only construct historical interactions by random sampling). Subsequently, we partition the constructed rec-tuning samples into training, validation, and testing sets with the same 8:1:1 ratio. Additionally, we binarize the ratings with a threshold of 5. • Few-shot Training Setting. We adopt a few-shot training setting, where only a limited number of samples are randomly selected from the training set for model training. It is referred to as the K-shot training setting, where K represents the number of training samples used. By setting an extremely small value for K, such as 64, we can test whether a method can rapidly acquire recommendation capability from LLMs with severely limited training data. • Baseline. We compare TALLRec against both LLM-based and traditional recommendation methods. 1) Existing LLM-based methods adopt In-context Learning to directly generate recommendations [13, 47]. For a fair comparison, we align these methods with TALLRec by using the same instructions. Specifically, we perform In-context Learning on different LLMs: 1) Alpaca-LoRA, 2) Text-Davinci-002, 3) Text-Davinci-003, and 4) ChatGPT. Alpaca-LoRA is a model reproducing the Alpaca results on the LLaMA model by using LoRA and alpaca tuning; the latter three are GPT-series models from OpenAI. 2) Traditional methods. Since our approach utilizes historical interactions to predict the subsequent interaction, similar to sequential recommendation, we consider the following sequential models: (i) GRU4Rec [17], an RNN-based sequential recommender that utilizes a GRU to encode historical interactions. (ii) Caser [44], which utilizes CNNs to encode historical interaction sequences. (iii) SASRec [26], a classic transformer-based sequential recommender. (iv) DROS [60], a state-of-the-art sequential recommender that harnesses distributionally robust optimization for robust recommendations; we use the version implemented on GRU4Rec, provided by the authors (https://github.com/YangZhengyi98/DROS). The sequential models above rely on item ID features without considering textual descriptions of items. However, in our setting, we assume item text descriptions are available for LLM tuning. To ensure fair comparisons, we further consider the following variants of GRU4Rec and DROS: (v) GRU-BERT, a variant of GRU4Rec that incorporates a pre-trained BERT [7] to encode text descriptions; specifically, BERT encodes the text descriptions and outputs a CLS embedding, which is then concatenated with the original ID embeddings of GRU4Rec as the item representation. (vi) DROS-BERT, which is integrated with BERT in the same manner as GRU-BERT. • Evaluation Metric. Since TALLRec aims to predict user preference over a given target item, i.e., a binary classification problem, we adopt a popular evaluation metric used in recommendation: the Area Under the Receiver Operating Characteristic curve (AUC); a toy computation is sketched below.
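For reference, AUC over binary like/dislike predictions can be computed with scikit-learn; the labels and scores below are toy values, not taken from the paper:

```python
from sklearn.metrics import roc_auc_score

# Toy example: ground-truth like (1) / dislike (0) labels and the model's
# predicted probability of "like" for five target items.
y_true = [1, 0, 1, 1, 0]
y_prob = [0.81, 0.42, 0.67, 0.55, 0.30]
print(roc_auc_score(y_true, y_prob))  # 1.0 here; 0.5 means random guessing
```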
• Implementation Details. To ensure uniform sequence lengths, we use the user's last interacted item to pad historical interaction sequences whose lengths are below the threshold of 10. For all methods, we optimize parameters using Adam with the MSE loss and a learning rate of 1e-3. We search the weight decay of all methods in {1e-3, 1e-4, 1e-5, 1e-6, 1e-7}. Following [60], we adhere to the original settings for the specific hyperparameters of the baselines. For GRU-BERT and DROS-BERT, we utilize the BERT released by Hugging Face (https://huggingface.co/bert-base-uncased), while setting the number of GRU layers to 4 and the hidden size to 1024 to align with BERT's embedding size. Lastly, we run all methods five times with different random seeds and report the averaged results.

Table 3: Performance comparison between conventional sequential recommendation baselines and TALLRec under different few-shot training settings. The reported result is the AUC multiplied by 100, with the highest score in each row being TALLRec's. ‡: significantly better than all baselines with t-test p < 0.01.
Few-shot | GRU4Rec | Caser | SASRec | DROS | GRU-BERT | DROS-BERT | TALLRec
movie, 16 | 49.07 | 49.68 | 50.43 | 50.76 | 50.85 | 50.21 | 67.24‡
movie, 64 | 49.87 | 51.06 | 50.48 | 51.54 | 51.65 | 51.71 | 67.48‡
movie, 256 | 52.89 | 54.20 | 52.25 | 54.07 | 53.44 | 53.94 | 71.98‡
book, 16 | 48.95 | 49.84 | 49.48 | 49.28 | 50.07 | 50.07 | 56.36
book, 64 | 49.64 | 49.72 | 50.06 | 49.13 | 49.64 | 48.98 | 60.39‡
book, 256 | 49.86 | 49.57 | 50.20 | 49.13 | 49.79 | 50.20 | 64.38‡

[Figure 3: (a) Performance comparison between LLM-based baselines (zero-shot setting) and TALLRec, where TALLRec is trained on only 64 rec-tuning samples (the 64-shot setting). (b) Performance tendency of TALLRec's variants and conventional sequential recommendation methods w.r.t. the number of training samples, ranging from 1 to 256. TALLRec has three variants: "AT" for alpaca tuning only, "RT" for rec-tuning only, and "TALLRec" for the full version.] 3.1 Performance Comparison (RQ1) We aim to investigate the recommendation performance of various methods under the few-shot training setting, which enables us to evaluate how quickly LLMs can be adjusted for recommendation with limited rec-tuning samples. The evaluation results against traditional methods are presented in Table 3, while the comparison against LLM-based methods is depicted in Figure 3 (a). From the figure and table, we draw the following observations: 1) Our method significantly outperforms both traditional and LLM-based methods, verifying the superiority of tuning LLMs via our TALLRec framework: TALLRec successfully unlocks the knowledge and generalization capabilities of LLMs for recommendation. 2) LLM-based methods perform similarly to random guessing (AUC ≈ 0.5), whereas the LLMs trained via TALLRec achieve significant improvements. These results demonstrate a considerable gap between recommendation and language tasks, showing the importance of using recommendation data for rec-tuning LLMs. 3) Traditional recommender methods consistently yield inferior performance under our few-shot training settings, implying that traditional methods are incapable of quickly learning the recommendation capability from limited training samples.
4) GRU-BERT and DROS-BERT do not show significant improvement over their backbone models, GRU4Rec and DROS. This indicates that merely adding textual descriptions cannot enhance traditional recommender models in the few-shot training setting. 3.2 Ablation Study (RQ2) To demonstrate the effectiveness of alpaca tuning and rec-tuning in TALLRec, we conduct ablation studies with varying K under the K-shot training setting. Specifically, we compare the performance of TALLRec with that of two variants, "AT" and "RT", where "AT" only conducts alpaca tuning, while "RT" solely implements rec-tuning. By varying K, we further investigate the impact of the number of training samples. We summarize the results in Figure 3 (b), from which we make the following observations: 1) The performance of "AT" declines significantly compared to that of "RT" and TALLRec, indicating the essential effect of rec-tuning, which effectively elicits the LLM's recommendation capability. 2) With limited rec-tuning samples (≤ 128), TALLRec generally outperforms "RT", confirming that alpaca tuning can enhance the LLM's generalization ability on new tasks, especially when the training data in the new tasks is insufficient. As the quantity of rec-tuning samples grows, the results of TALLRec and "RT" become closer. This makes sense, as the significance of generalization abilities derived from other tasks diminishes when there is an ample amount of training data for the new tasks. 3) With the increase of the rec-tuning sample number, TALLRec consistently performs better than the baselines; this is attributed to rec-tuning, which can utilize limited samples to elicit the LLM's recommendation capability. [Figure 4: Cross-domain generalization performance of LRLMs trained via TALLRec using Book data (TALLRec (Book)), Movie data (TALLRec (Movie)), and both (TALLRec (Both)). The left panel shows the testing results on the Movie dataset with varying numbers of rec-tuning samples, while the right panel shows the testing results on the Book dataset.] 3.3 Cross-domain Generalization Analyses (RQ3) To further investigate the generalization ability of TALLRec, we conduct experiments on cross-domain recommendation. Specifically, we tune TALLRec with different rec-tuning samples, including 1) "TALLRec (Book)", using only samples from the Book dataset; 2) "TALLRec (Movie)", using only samples from the Movie dataset; and 3) "TALLRec (Both)", tuned with both Book and Movie samples. We vary K in {16, 64, 256} under the few-shot training setting, and evaluate the models on the testing sets of Book and Movie, respectively. The results are summarized in Figure 4, from which we find: 1) TALLRec demonstrates remarkable cross-domain generalization ability. For instance, after tuning only on movie samples, "TALLRec (Movie)" exhibits good performance on Book data, comparable to "TALLRec (Book)". This is impressive and suggests that TALLRec has cross-domain generalization ability instead of only fitting a single domain like traditional recommenders.
2) "TALLRec (Both)" surpasses "TALLRec (Movie)" and "TALLRec (Book)" on both testing sets when the number of rec-tuning samples exceeds 64. This finding indicates that TALLRec can seamlessly integrate data from different domains to enhance its generalization performance. In future work, it is promising to pre-train TALLRec with large-scale recommendation data from heterogeneous domains. 4 RELATED WORK • LMs for Recommendation. There have been several attempts to integrate language models (LMs) with recommendation systems. Despite incorporating LMs [14, 29], some attempts persist in utilizing traditional user/item IDs to represent users/items, thereby disregarding the semantic understanding capabilities of LMs; other work has incorporated language information, such as reviews, as part of the user/item embeddings [19]. In addition, other methods either utilize an undisclosed model that already possesses preliminary recommendation capabilities [6], or employ small models trained on large-scale downstream task data [68]. Moreover, the aforementioned approaches are limited to small models, whereas this paper addresses the orthogonal question of how to adapt large language models to recommendation tasks. There is currently little research applying LLMs in recommendation scenarios; existing works utilize the interaction ability of the GPT-3.5 series models and apply In-context Learning [13, 47]. In detail, Chat-Rec [13] endeavors to harness the interaction capabilities of ChatGPT and links ChatGPT with traditional recommendation models (e.g., MF [27], LightGCN [16]) to formulate a conversational recommendation system. NIR [47] shares a similar concept with Chat-Rec: it employs conventional recommendation models to generate candidate items, which are then subjected to a three-stage multi-step prompting process for re-ranking. • Sequential Recommendation. Our setup is close to sequential recommendation, which aims to infer the user's next interaction based on the user's historical interaction sequence [11, 50]. Early on, Markov chains played an important role in sequential recommendation [15, 34, 40, 49]. Recently, deep learning-based methods have become mainstream, with extensive work using different kinds of neural network structures, like RNNs [5, 9, 17], CNNs [44, 59, 62], and attention [26, 58, 65], to model user interaction sequences. However, limited by only using IDs to represent users and items, such work cannot quickly adapt and generalize to new scenarios. Thus, some works focus on the generalization ability of sequential recommendation models via pre-training [35, 61], data augmentation [38, 39, 51, 57], debiasing [8, 52, 67, 70], and robust optimization [55, 60]. However, they ignore the strong generalization ability of existing LLMs, leading to inadequate exploration." + }, + { + "url": "http://arxiv.org/abs/2302.08975v1", + "title": "Towards Fine-Grained Information: Identifying the Type and Location of Translation Errors", + "abstract": "Fine-grained information on translation errors is helpful for the translation\nevaluation community. Existing approaches cannot synchronously consider error\nposition and type, failing to integrate the error information of both. In this\npaper, we propose the Fine-Grained Translation Error Detection (FG-TED) task,\naiming at identifying both the position and the type of translation errors on\ngiven source-hypothesis sentence pairs.
Besides, we build an FG-TED model to\npredict the \textbf{addition} and \textbf{omission} errors -- two typical\ntranslation accuracy errors. First, we use a word-level classification paradigm\nto form our model, and use shortcut learning reduction to mitigate the\ninfluence of monolingual features. Besides, we construct synthetic datasets for\nmodel training, and resolve the disagreement of data labeling in authoritative\ndatasets, making the experimental benchmark consistent. Experiments show that\nour model can identify both error type and position concurrently, and gives\nstate-of-the-art results on the restored dataset. Our model also delivers more\nreliable predictions in low-resource and transfer scenarios than existing\nbaselines. The related datasets and the source code will be released in the\nfuture.", + "authors": "Keqin Bao, Yu Wan, Dayiheng Liu, Baosong Yang, Wenqiang Lei, Xiangnan He, Derek F. Wong, Jun Xie", + "published": "2023-02-17", + "updated": "2023-02-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "main_content": "Introduction (*Equal contribution; work was done when Keqin Bao and Yu Wan were interning at DAMO Academy, Alibaba Group.) Fine-grained information of translation errors is important for the translation evaluation community (Freitag et al., 2021a; Vamvas and Sennrich, 2022). Given a sentence pair including the translated hypothesis (HYP) and the source input (SRC), models are required to provide fine-grained error information, such as the error type and the error position (e.g., the addition error "last month" in Table 1). Approaches like word-level quality estimation (word-level QE, Kim et al., 2017; Basu et al., 2018) and critical error detection (CED, Specia et al., 2021) models can predict detailed information on translation errors. [Table 1: A toy illustration of the addition and omission errors and the label formats of available datasets. SRC (all rows): 学校和幼儿园开学了。 Golden Translation HYP: "Schools and kindergartens opened." Erroneous Translation HYP: "Schools opened last month." Dataset labeling formats: MQM'20: "Schools and kindergardens opened last month."; MQM'21: "Schools opened last month." Some omission errors from the MQM'20 dataset (Freitag et al., 2021a) are labeled at the target side, where the missing semantics is complemented with additional tokens ("and kindergardens"). We follow the setting of the MQM'21 dataset and relabel the location of Chinese-English omission errors to remove such format disagreement. See Appendix §C for more details.] Compared to sentence-level QE models (Ranasinghe et al., 2021), which give overall translation quality scores, the outputs of word-level QE and CED models can better help researchers know where errors are located and why the translated words are wrong. Besides, such detailed illustrations can also help analyze translation models and judge translation quality, offering explainability for related studies on machine translation (Kim et al., 2017; Specia et al., 2018). Despite the success of word-level QE (Specia et al., 2020) and CED approaches (Freitag et al., 2021b), they cannot concurrently identify the types and locations of wrongly translated spans, failing to integrate the information of both; the toy sketch below makes the targeted word-level label format concrete.
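As a hypothetical illustration of the labeling scheme implied by Table 1 (the tag names and whitespace tokenization are assumptions of this sketch, not the paper's exact format): words in HYP whose semantics are absent from SRC are tagged as addition errors, and words in SRC whose semantics are missing from HYP are tagged as omission errors.

```python
# Table 1's erroneous example, tokenized by whitespace for simplicity.
src = "学校 和 幼儿园 开学 了 。".split()
hyp = "Schools opened last month .".split()

src_labels = ["ok", "omission", "omission", "ok", "ok", "ok"]  # 和 幼儿园 omitted
hyp_labels = ["ok", "ok", "addition", "addition", "ok"]        # "last month" added
```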
To bridge this gap, in this paper, we propose the fine-grained translation error detection (FG-TED) task: the model should not only identify the wrongly translated spans, but also predict which error type each span belongs to. In practice, we prioritize the detection of addition and omission translation errors, two typical types of translation errors in real-world applications (for other translation error types, see the discussion in §6). As in the "Erroneous" example in Table 1, a span of HYP is labeled as an addition error if its semantics is excluded from the SRC (i.e., "last month"), and an omission error means that the semantics of a span in SRC is omitted (i.e., "和幼儿园"). To build an FG-TED model, we mainly face challenges from two aspects, i.e., model and data. For the former, recent QE methods involve pre-trained language models (PLMs) to extract the semantic representations of the HYP and SRC sentences (Kepler et al., 2019; Ranasinghe et al., 2021). However, most PLMs (e.g., XLM-R, Conneau et al., 2019) are trained with the Masked Language Modeling (MLM) objective, focusing on modeling sentence fluency and grammatical correctness (Behnke et al., 2022). We find that those PLMs tend to employ "shortcuts" during fine-tuning: they mainly use monolingual features during learning, failing to utilize cross-lingual ones when collecting fine-grained translation error information. To alleviate this, we consider two solutions: 1) as to the PLM backbone, we utilize models that are enhanced with cross-lingual semantic alignments; 2) as to model training, we introduce a shortcut learning reduction (SLR) loss to prevent the model from overly using monolingual information during training. Experimental results also verify that combining both of the strategies above can further improve the FG-TED model performance. For the latter, we mainly face two difficulties. On one hand, to the best of our knowledge, the Multidimensional Quality Metrics (MQM) dataset (Freitag et al., 2021a) is the authoritative dataset annotating both the types and the locations of translation errors. Yet, as shown in Table 1, the omission errors of some examples are denoted on the HYP side, which shows disagreement across MQM benchmarks from multiple years (Vamvas and Sennrich, 2022). On the other hand, the number of samples containing addition/omission errors in the MQM datasets is insufficient (the statistics can be seen in Table 6). Therefore, we consider collecting synthetic examples involving addition/omission errors for model training. Experiments show that, after relabeling the omission errors in the MQM'20 Chinese-English (Zh-En) dataset for reliable evaluation (due to the limited linguistic expertise of the authors, we only relabeled Zh-En examples), our model performs better than existing baselines, and it achieves better results on the sub-tasks of FG-TED, i.e., word-level QE and CED. [Figure 1: Illustration of the fine-grained translation error detection (FG-TED) task and existing quality estimation (QE) tasks. Critical Error Detection (CED) and word-level QE can be regarded as FG-TED subtasks.] 2 Related Work Related Tasks Word-level QE (Specia et al., 2018) is the most similar task to FG-TED.
Most existing approaches like IST-UNBABEL (Zerva et al., 2021) and TRANSQUEST (Ranasinghe et al., 2021) are supervised: models are trained on datasets (e.g., MLQE-PE, Fomicheva et al., 2022) where each token is labeled as either "[OK]" or "[BAD]" (Fonseca et al., 2019), and "[GAP]" tokens on the HYP side indicate whether there is an omission error between adjacent words. Nevertheless, the predictions lack further details of the translation errors, i.e., which error type each "[BAD]" token or gap belongs to. Besides, the CED task also makes a preliminary attempt at fine-grained analyses of translation errors. It defines some accuracy errors (e.g., numeric) as critical translation errors, and requires models to deliver binary predictions identifying whether critical errors are involved in the example or not (Specia et al., 2021; Amrhein et al., 2022). However, CED models cannot exactly locate which words are wrongly translated. In summary, as shown in Figure 1, both the word-level QE and CED tasks can be regarded as subtasks of our FG-TED task. Available Dataset The FG-TED task requires test set examples annotated with both the type and location of translation errors for reliable evaluation. MQM'20 and MQM'21 are two authoritative datasets provided by the organizers of WMT (Freitag et al., 2021b). They include such detailed translation error information, but primarily serve as benchmarks for sentence-level tasks, i.e., giving an overall score to describe the quality of the HYP (Xu et al., 2022; Rei et al., 2022). Nevertheless, given the disagreement in labeling format (Table 1) and the limited number of data examples (Table 6), directly utilizing the MQM datasets for evaluation is not reliable. In practice, we relabel the Zh-En examples from the MQM'20 dataset to preserve consistency between the MQM'20 and MQM'21 Zh-En subsets, facilitating a more reliable evaluation benchmark. Related Methods Several existing works have explored the topic of fine-grained translation error information. Zhou et al. (2021) constructed pseudo examples containing translation hallucination errors, and fine-tuned a PLM to detect hallucinated translations. However, their approach fails to identify translation errors on the source side, because hallucination errors only exist in the HYP sentence. Vamvas and Sennrich (2022) adapted the idea of contrastive conditioning (Vamvas and Sennrich, 2021) to predict error spans. They use a two-stage approach: adapting a dependency parser to collect text spans, and calculating the decrease in translation probabilities after removing each span. Nevertheless, when detecting errors, this approach requires a large amount of time to process long sentences. Compared to those approaches, our model can not only detect addition and omission errors concurrently, but is also easier to apply, following an end-to-end paradigm. Shortcut Learning Reduction Shortcut learning reduction (SLR) aims at preventing the model from building spurious relationships between features and labels. Mahabadi et al. (2020) adapted debiased focal loss (DFL) to object detection and natural language inference (NLI) tasks to alleviate the data-level bias between features and labels (Lin et al., 2017). Ganin and Lempitsky (2015) and Belinkov et al. (2019) used a gradient reversal layer (GRL) to help models ignore spurious correlations in data on domain adaptation and NLI tasks.
For the QE tasks, Behnke et al. (2022) attempted to adapt the above two methods to sentence-level QE for bias mitigation, but the performance improvement is not stable across translation directions. In our view, DFL introduces a biased model to learn the bias and reweight the loss of the base model; however, the NLI task is different from QE tasks, where it is difficult to isolate the bias from the HYP sentence (Behnke et al., 2022) (refer to §D for more analyses). Besides, GRL is unstable due to the simultaneous optimization of adversarial tasks, which easily leads to model confusion (Ganin et al., 2016). By contrast, our proposed SLR uses a single model and restricts simultaneous optimization, which both enhances the usage of cross-lingual features and stabilizes model training. [Figure 2: Illustration of our model architecture. Our model involves two shared modules: a PLM and a classifier. The HYP and SRC are fed into the shared PLM and classifier both individually and as a concatenated pair to generate probabilities, and the model is trained with L = L_CE + α · L_SLR, where L_CE is a cross-entropy loss and L_SLR is a KL-divergence term. During training, the model first derives word-level classification results on the sentence-pair input, and then reduces shortcut learning on the monolingual inputs.] 3 Methods We first give the formulation of our FG-TED task (§3.1); then the proposed method (§3.2) and the synthetic data construction (§3.3) are introduced. 3.1 Task Formulation We define our task as a word-level classification problem. Specifically, given a HYP sentence $h = [h_1, h_2, \cdots, h_m]$ and a SRC sentence $s = [s_1, s_2, \cdots, s_n]$, models need to give predictions $y = [y_{h_1}, y_{h_2}, \cdots, y_{h_m}, y_{s_1}, y_{s_2}, \cdots, y_{s_n}]$ for all words. In this research, we mainly consider addition and omission errors, typical translation errors in practice (Vamvas and Sennrich, 2022). 3.2 Model Architecture Our model architecture is shown in Figure 2. The HYP and SRC are concatenated into a segment $x = [h; s]$, which is fed into the PLM to obtain the embeddings of all tokens. All embeddings serve as the input to a binary classifier. The FG-TED model identifies words as translation errors if the corresponding predictions are positive; otherwise, words with negative scores are regarded as correctly translated. The word-level classification learning objective of our model is to minimize the cross-entropy loss: $P = \mathrm{softmax}(f(x, \theta)) \in \mathbb{R}^{(m+n) \times 2}$, (1) $L_{CE} = -\sum_{i=1}^{m+n} \sum_{j=0}^{1} \mathbb{1}_{y_i=j} \cdot \log(P_{i,j})$, (2) where $y$ is the ground truth, $f(\cdot, \cdot)$ represents the FG-TED model, and $\theta$ is the parameter set (a minimal sketch of this objective is given at the end of this subsection). Further, erroneous words are labeled as omission and addition errors if they are on the source and target sides, respectively. PLM Selection We believe that cross-lingual alignments can vastly help our model, because they deliver aligned semantics for the HYP and SRC sentences. To make full use of cross-lingual information for our FG-TED model, we apply INFOXLM (Chi et al., 2021), which is enhanced with cross-lingual semantic information during the pre-training phase, as the PLM backbone instead of the conventional choice XLM-R (Conneau et al., 2020) (we also test the performance of backbones pre-trained with other objectives in §5).
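As promised above, here is a minimal PyTorch-style sketch of the word-level classification objective of Eqs. (1)-(2); the shapes, random tensors, and the stand-in for the PLM output are assumptions for illustration only:

```python
import torch
import torch.nn.functional as F

m, n, hidden = 7, 9, 768                  # illustrative HYP/SRC lengths
embeddings = torch.randn(m + n, hidden)   # stand-in for PLM(x), x = [h; s]
classifier = torch.nn.Linear(hidden, 2)   # binary: correct vs. erroneous

logits = classifier(embeddings)           # (m+n, 2); softmax gives P of Eq. (1)
y = torch.randint(0, 2, (m + n,))         # toy word-level ground truth
loss_ce = F.cross_entropy(logits, y)      # Eq. (2)
# Positive predictions on the HYP part are read as addition errors and on
# the SRC part as omission errors.
```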
Shortcut Learning Reduction The existing PLMs usually contribute many monolingual features to fine-tuning, such as fluency and grammar (Behnke et al., 2022). Unavoidably, we find that our FG-TED model also easily falls into this "shortcut": learning to predict the labels mainly based on the monolingual information learned during the pre-training period (empirical evidence can be found in §4). To eliminate this effect, we guide the FG-TED model to distinguish the predictions on the concatenated input ($x$) from those on the single-sentence inputs ($h$ and $s$). In detail, apart from feeding $x$ into our model, we additionally collect the predictions obtained by taking $h$ or $s$ individually: $Q = \mathrm{softmax}([f(h, \theta); f(s, \theta)])$. (3) Then, we use a KL-divergence loss to push the distribution of $Q$ apart from that of $P$. The learning objective for SLR is formulated as: $L_{SLR}(\theta) = -\sum_{i=1}^{m+n} \hat{Q}_{i,\cdot} \cdot \mathrm{KL}(Q_{i,\cdot} \,\|\, \hat{P}_{i,\cdot})$, (4) where the hat denotes detached (gradient-stopped) predictions. As seen, two model input formats are leveraged in our SLR strategy, namely the concatenated input ($x$) and the single inputs ($h$ and $s$). The objective of our SLR strategy is to maximize the difference between the corresponding distributions $Q$ and $P$, preventing the model from solely relying on monolingual features during training. Note that, to preserve the efficiency of our approach, we share the PLM to obtain $Q$ and $P$. However, we observe that back-propagating the gradients of $P$ and $Q$ simultaneously can lead to model collapse; our conjecture is that optimizing model parameters with the KL-divergence loss delivers unstable learning. To help stabilize the training, we first use the detached sentence-pair predictions $\hat{P}$ to compute the KL-divergence loss. Besides, for wrongly translated tokens whose probabilities in $Q$ are strongly positive, our model is easily over-confident in its own predictions (Pereyra et al., 2017; Gao et al., 2020); to alleviate such an overfitting problem, we apply the detached representations $\hat{Q}$ as a multiplicator for model regularization. Finally, our loss function can be written as: $L = L_{CE} + \alpha L_{SLR}$, (5) where $\alpha$ is a hyper-parameter balancing between preserving cross-lingual information and inhibiting the use of monolingual features (in this research, we set $\alpha$ to 0.05 for the INFOXLM backbone and 0.1 for the XLM-R backbone in all experiments; an analysis of tuning this hyper-parameter is in Appendix §G). A sketch of this combined objective follows.
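Here is a minimal PyTorch-style sketch of Eqs. (3)-(5). Note that the per-word weighting by the detached $\hat{Q}$ (taken here as the maximum class probability) is one plausible reading of the multiplicator in Eq. (4), assumed for illustration:

```python
import torch
import torch.nn.functional as F

def slr_loss(logits_pair, logits_h, logits_s, alpha=0.05):
    # P: predictions on the concatenated input x = [h; s], Eq. (1).
    P = F.softmax(logits_pair, dim=-1)
    # Q: predictions on the single-sentence inputs, concatenated, Eq. (3).
    Q = F.softmax(torch.cat([logits_h, logits_s], dim=0), dim=-1)

    # Per-word KL(Q_i || P̂_i), with P detached to stabilize training.
    kl = (Q * (Q.log() - P.detach().log())).sum(-1)
    # Assumed reading of Eq. (4): weight each word's KL term by its detached
    # single-input confidence Q̂.
    weight = Q.detach().max(-1).values
    return -alpha * (weight * kl).sum()  # minimizing this pushes Q away from P̂

# Eq. (5): total = loss_ce + slr_loss(logits_pair, logits_h, logits_s)
```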
[Figure 3: Illustration of the pipeline of our synthetic data generation for Zh-En addition errors. Starting from the SRC "学校 开学 了 。" and the HYP "Schools opened .", masked sequences such as "Schools opened [MASK] ." and "Schools [MASK] [MASK] opened ." are filled with an MLM (e.g., "Schools opened last month .", "Schools are all opened .") and then re-ranked by perplexity.] 3.3 Data Collection Currently, the scarcity of labeled data hinders research on error detection (Vamvas and Sennrich, 2022). In this research, we alleviate this problem in two ways: 1) we construct synthetic data to help train the FG-TED model; 2) we collect the MQM dataset (Freitag et al., 2021a), which includes English-German (En-De) and Zh-En examples, and relabel the Zh-En examples with well-formatted addition/omission error labels, making the dataset reliable for evaluation. Synthetic Data Construction Inspired by Sellam et al. (2020) and Zhou et al. (2021), after collecting parallel corpora, we insert multiple mask tokens into one sentence and fill them to construct an addition or omission error. In detail, as shown in Figure 3, the pipeline for obtaining such data consists of the following steps (a code sketch is given after this list): • Data filtering. To ensure that the constructed translation errors help model learning, the other words, which are labeled as "correctly translated", should be highly aligned across languages. To achieve this, we use available sentence-level QE toolkits (e.g., COMET-QE, Rei et al., 2020; UNITE, Wan et al., 2022) to help filter out low-quality examples. • Mask-Filling. For each sentence pair, we first randomly choose the sentence to be processed (either $h$ or $s$). Then, we insert mask tokens into all available positions, and randomly determine the number of mask tokens to increase the diversity of synthetic samples (positions include the beginning and the end of the sentence, as well as the spaces between any two adjacent words; the maximum number of consecutive mask tokens is set to 5 based on our empirical findings). After that, we fill the masked sequence with monolingual PLMs (e.g., BERT, Devlin et al., 2018), apply recursive beam search (Sellam et al., 2020) to retain the fluency of the generated samples, and thereby construct addition and omission errors in the HYP and SRC sentences, respectively. • Reranking. After collecting the filled candidate sequences, we use GPT-2 (Radford et al., 2019) to compute their perplexity values for reranking (English GPT-2: https://huggingface.co/gpt2; Chinese GPT-2: https://github.com/Morizeyao/GPT2-Chinese; German GPT-2: https://huggingface.co/dbmdz/german-gpt2). Note that, instead of directly selecting the top-1 candidate, we randomly choose one candidate among the top-k results to increase the diversity of translation errors. Finally, we use the WMT'14 En-De and WMT'17 Zh-En machine translation datasets, two mainstream benchmarks, to construct synthetic samples. We collect 1.6M sentence pairs from these datasets and keep 0.3M high-quality pairs after filtering with the QE toolkits. To balance the diversity of the synthetic dataset against the efficiency of data construction, in practice we set the beam size to 8 during mask-filling and select from the top-8 samples when reranking. In total, we collect 0.3M samples with well-labeled addition/omission errors, where each example contains a HYP $h$, a SRC $s$, and binary labels $y$.
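As referenced above, here is a heavily simplified sketch of the mask-fill-and-rerank steps, assuming the Hugging Face transformers library; bert-base-uncased stands in for the cited language-specific MLMs, and the multi-mask recursive beam search is reduced to a single mask position:

```python
import torch
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM

fill = pipeline("fill-mask", model="bert-base-uncased")
gpt2_tok = AutoTokenizer.from_pretrained("gpt2")
gpt2 = AutoModelForCausalLM.from_pretrained("gpt2")

def perplexity(text):
    ids = gpt2_tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = gpt2(ids, labels=ids).loss      # mean token negative log-likelihood
    return torch.exp(loss).item()

hyp = "Schools opened [MASK]."                  # one inserted mask position
candidates = [c["sequence"] for c in fill(hyp, top_k=8)]
ranked = sorted(candidates, key=perplexity)     # lower perplexity = more fluent
# Randomly pick one of the top-k candidates to diversify the constructed
# addition errors; the filled span is then labeled as erroneous.
```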
Dataset Relabeling Considering that the available MQM dataset (Freitag et al., 2021a) is rather scarce for model training, we apply it as the test set to evaluate the performances of all models. However, we find that the labels for some omission errors are on the target side (see Table 1), which brings disagreement to model evaluation (Vamvas and Sennrich, 2022). To bridge this gap, we recruit two professional annotators to relabel the annotations of omission errors in the Zh-En direction, and require them to tag those errors on the corresponding source side (details are discussed in Appendix §C). In total, after combining the conventional and our relabeled annotations, we get 5,502 and 1,102 well-formatted examples for Zh-En and English-German (En-De), respectively. 4 Experiments 4.1 Experimental Settings Evaluation Setting Following Vamvas and Sennrich (2022), we extract the examples containing Accuracy/Addition and Accuracy/Omission errors from the MQM datasets (Freitag et al., 2021a). As discussed in §3.3, we replace the Accuracy/Omission errors in the Chinese-English part of the MQM'20 dataset with our relabeled ones. During the evaluation, we re-weight each translation error word by the number of annotators tagging it, as an annotation is more confident if more annotators agree on it (Monarch, 2021). Notably, as the available PLMs tokenize sentences into subwords, we follow related work (Ranasinghe et al., 2021) and treat the whole word as an error if any of its subwords is predicted as a translation error. Baseline We include RANDOM, CONTRASTIVE CONDITIONING (Vamvas and Sennrich, 2022), HALLUCINATION DETECTION (Zhou et al., 2021), TRANSQUEST (Ranasinghe et al., 2021), INFOXLM + DFL (Mahabadi et al., 2020), INFOXLM + FIXED DFL, and INFOXLM + GRL (Ganin and Lempitsky, 2015) as baselines (for details about the baselines and model settings, please refer to Appendix §E and §F). 4.2 Main Results 4.2.1 FG-TED task Table 2 shows the results of all models on the FG-TED task.

Table 2: Precision/Recall/F1 scores of baselines and our methods. Baselines marked with "*" are approaches proposed for other tasks. Our model (INFOXLM + SLR) achieves the best average F1.
Line | Model | Zh-En Addition | Zh-En Omission | En-De Addition | En-De Omission | Avg F1
(1) | RANDOM | 1.1/55.0/2.2 | 14.6/50.4/22.6 | 4.9/61.0/9.1 | 0.6/55.0/1.2 | 8.7
(2) | *HALLUCINATION DETECTION (Zhou et al., 2021) | 4.2/52.8/7.8 | - | - | - | -
(3) | *TRANSQUEST (Ranasinghe et al., 2021) | 1.5/25.7/2.8 | 14.6/92.2/25.2 | 1.7/42.2/3.3 | 0.1/100.0/1.2 | 7.9
(4) | CONTRASTIVE CONDITIONING (Vamvas and Sennrich, 2022) | 10.0/24.7/14.2 | 47.8/24.9/32.7 | 25.0/11.0/15.3 | 13.3/28.7/18.2 | 20.1
Trained by synthetic data:
(5) | MBART | 17.7/4.6/7.3 | 39.8/28.8/33.4 | 17.8/28.4/21.9 | 6.7/13.4/9.0 | 17.9
(6) | XLM-R | 17.5/2.9/5.0 | 41.6/30.9/35.5 | 40.5/29.9/34.4 | 5.6/28.3/9.3 | 21.0
(7) | INFOXLM | 24.3/7.5/11.5 | 45.1/27.0/33.8 | 42.1/38.5/40.2 | 9.6/13.6/11.3 | 24.2
Trained by synthetic data with a shortcut learning reduction strategy:
(8) | XLM-R + GRL (Ganin and Lempitsky, 2015) | 24.2/2.1/3.8 | 46.3/26.5/33.7 | 38.1/28.8/32.7 | 6.0/22.5/9.5 | 20.0
(9) | INFOXLM + GRL (Ganin and Lempitsky, 2015) | 24.5/3.3/5.8 | 50.1/25.2/33.6 | 42.4/40.0/41.2 | 10.9/18.6/13.8 | 23.6
(10) | INFOXLM + DFL (Mahabadi et al., 2020) | 2.0/58.9/3.9 | 22.4/57.5/32.2 | 12.4/93.0/21.9 | 1.6/58.9/3.1 | 15.3
(11) | INFOXLM + FIXED DFL (Mahabadi et al., 2020) | 38.8/3.7/15.5 | 56.2/23.5/33.1 | 46.4/19.2/27.2 | 20.9/15.5/17.8 | 23.4
(12) | XLM-R + SLR (OURS) | 33.5/6.1/10.3 | 44.1/34.4/38.7 | 38.3/39.6/38.9 | 13.2/58.1/21.5 | 27.4
(13) | INFOXLM + SLR (OURS) | 27.5/13.5/18.1 | 48.6/33.3/39.5 | 37.5/47.6/42.0 | 12.8/40.3/19.4 | 29.8

As seen, among the baselines, CONTRASTIVE CONDITIONING (Line 4, Vamvas and Sennrich, 2022) outperforms HALLUCINATION DETECTION (Line 2, Zhou et al., 2021) and TRANSQUEST (Line 3, Ranasinghe et al., 2021). Moreover, the average performance of TRANSQUEST is even worse than RANDOM (Line 1). This indicates that models which perform well on other related tasks may not be suitable for handling our proposed FG-TED task. We first investigate the quality of our synthetic data. As seen, applying different PLMs as backbones of our model can all surpass the performance of TRANSQUEST. Notably, the XLM-R approach (Line 6) and the TRANSQUEST baseline use the same model architecture. This indicates the effectiveness of our synthetic data, which brings an improvement of 13.1 F1 scores on average.
Besides, when replacing the backbone with INFOXLM (Line 7), the performance further reaches 24.2, yielding an improvement of 3.2 F1 scores over the XLM-R approach. The reason lies in the pre-training phase of INFOXLM: the PLM is enhanced with cross-lingual alignment semantics. Besides, using MBART (Liu et al., 2020) as the backbone of our model (Line 5) performs worst. We think the reason is that the encoder-decoder model architecture lacks information interaction between the two languages, failing to fully utilize cross-lingual semantics for FG-TED (He et al., 2018). Further, we compare our SLR with debiased focal loss (DFL, Mahabadi et al., 2020) and the gradient reversal layer (GRL, Ganin and Lempitsky, 2015). As seen, our models further improve over those without SLR, with gains of 6.4 (Line 12 vs. Line 6) and 5.6 (Line 13 vs. Line 7) averaged F1 scores using XLM-R and INFOXLM as backbones, respectively. This is especially true for the INFOXLM + SLR approach (Line 13), which delivers the highest averaged F1 score of 29.8. This reveals the effectiveness of our proposed SLR strategy: the monolingual information in each sentence harms FG-TED model learning, and by reducing the processing of monolingual features, FG-TED models can deliver more accurate fine-grained error information. Besides, compared to existing methods, introducing DFL or GRL to the model reveals performance drops of 8.9 and 0.6 averaged F1 (Line 10 vs. Line 7 and Line 9 vs. Line 7). To investigate why the performance of the "INFOXLM + DFL" approach drops significantly, we apply a modified version, "INFOXLM + FIXED DFL"; see the analysis in Appendix §D. 4.2.2 Word-level QE & CED task To illustrate the effectiveness of our methods on the two sub-tasks of FG-TED, we conduct experiments on the MLQE-PE dataset (Specia et al., 2020) for the word-level QE task and the ACES challenge set (Amrhein et al., 2022) for the CED task (to be consistent with their settings, for the former we conduct our experiments using the setting of the WMT21 word-level QE task (Specia et al., 2020) and ignore the "[GAP]" token; for the latter, we directly use the public repository for fine-tuning, and output a separate score). Table 3 exhibits the results of our methods on the MLQE-PE dataset (the baseline is trained on all En-XX directions while we only fine-tune our models on the En-De and En-Zh directions, which leads to a performance gap on the En-De source side).

Table 3: Comparison of the Matthews correlation coefficient (MCC) on the word-level QE dataset for En-De and En-Zh. "BASELINE" denotes the baseline results reported by the WMT'21 organizers (Specia et al., 2021). "+ FT" and "+ SYN" denote models trained on the word-level QE training set and our synthetic data, respectively; "*" denotes use of the SLR strategy during that training phase.
Model | En-De Source | En-De Target | En-Zh Source | En-Zh Target | Average
BASELINE | 32.3 | 37.0 | 24.1 | 24.7 | 29.5
XLM-R + FT | 25.5 | 38.9 | 27.8 | 33.3 | 31.4
XLM-R + FT* | 25.4 | 38.2 | 28.2 | 34.5 | 31.6
XLM-R + SYN + FT | 26.7 | 41.3 | 29.5 | 36.5 | 33.5
XLM-R + SYN + FT* | 28.8 | 41.9 | 29.6 | 36.5 | 34.2
XLM-R + SYN* + FT | 25.6 | 41.8 | 30.0 | 35.6 | 33.3
XLM-R + SYN* + FT* | 28.5 | 41.9 | 30.1 | 36.1 | 34.2
INFOXLM + FT | 33.6 | 39.6 | 31.3 | 36.1 | 35.2
INFOXLM + FT* | 33.4 | 40.1 | 31.4 | 36.4 | 35.3
INFOXLM + SYN + FT | 33.9 | 39.9 | 30.3 | 35.1 | 34.8
INFOXLM + SYN + FT* | 36.4 | 41.4 | 30.7 | 34.8 | 35.8
INFOXLM + SYN* + FT | 35.4 | 42.6 | 30.9 | 35.4 | 36.1
INFOXLM + SYN* + FT* | 34.5 | 41.7 | 30.7 | 35.7 | 35.7

As seen: 1) directly applying the SLR strategy to the word-level QE task almost always boosts performance (XLM-R + FT vs. XLM-R + FT*: 31.4 vs. 31.6 (+0.2); INFOXLM + FT vs. INFOXLM + FT*: 35.2 vs. 35.3 (+0.1)); 2) simply using the synthetic data before fine-tuning brings improvements in nearly all settings (+2.1 for XLM-R + FT, +2.6 for XLM-R + FT*, +0.5 for INFOXLM + FT*), except for a slight drop of -0.4 for INFOXLM + FT; 3) when applying the SLR strategy in the SYN phase, the INFOXLM + SYN + FT model performs better (34.8 vs. 36.1, +1.3), and the other scenarios demonstrate results comparable to the baseline. We conclude that the combination of using synthetic data for continuous pre-training and applying the SLR strategy reveals a steady improvement in FG-TED model performance. Table 4 demonstrates the Kendall tau-like correlation results for addition and omission errors on the ACES dataset. Our model outperforms the publicly released models (COMET-21, Zerva et al., 2021; WMT-COMET-22, Rei et al., 2022; and UNITE, Wan et al., 2022). Interestingly, training on synthetic data and applying the SLR strategy during training both boost the performances of our models on the two backbones. This demonstrates that our SLR strategy and synthetic data are also useful for evaluating the translation quality of the hypothesis at the sentence level.

Table 4: Comparison of Kendall-tau correlation values on the ACES dataset. "+ SYN" and "+ MQM" mean that the model is trained on the synthetic data and the MQM datasets, respectively; "*" denotes use of the SLR strategy during training (our SLR serves word-level classification tasks, so it is inappropriate for fine-tuning on MQM, a benchmark that mainly serves segment-level prediction). For the baselines, we use the official checkpoints for COMET (Rei et al., 2022), and for UNITE (Wan et al., 2022) we fine-tune the released checkpoint on the MQM datasets. Note that WMT22-COMET-MQM uses a multi-task strategy during fine-tuning, which differs from the others.
Model | Addition | Omission | Avg Score
WMT21-COMET-MQM | -0.53 | 0.40 | -0.07
WMT22-COMET-MQM | 0.17 | 0.71 | 0.44
UNITE-MQM | -0.38 | 0.76 | 0.19
XLM-R + MQM | -0.21 | 0.37 | 0.08
XLM-R + SYN + MQM | 0.28 | 0.75 | 0.52
XLM-R + SYN* + MQM | 0.30 | 0.76 | 0.53
INFOXLM + MQM | -0.46 | 0.47 | 0.01
INFOXLM + SYN + MQM | 0.18 | 0.83 | 0.51
INFOXLM + SYN* + MQM | 0.33 | 0.86 | 0.60

In conclusion, our approach can not only handle the FG-TED task, but is also suitable for its subtasks, i.e., word-level QE and CED. 5 Analysis We are interested in discovering the characteristics of the FG-TED task and our method. In this section, we direct our experiments with the following research questions: • RQ1: How does SLR influence the utilization of monolingual features? • RQ2: How does our method perform in low-resource settings? • RQ3: Do our methods have cross-lingual transfer ability? • RQ4: What kind of pre-training method is the most effective for our FG-TED task?
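As a side reference on the word-level QE metric reported in Table 3 above, MCC can be computed with scikit-learn; the tags below are toy values, not from the paper:

```python
from sklearn.metrics import matthews_corrcoef

# Toy word-level QE tags: 1 = BAD, 0 = OK.
gold = [0, 0, 1, 0, 1, 1, 0, 0]
pred = [0, 1, 1, 0, 1, 0, 0, 0]
print(matthews_corrcoef(gold, pred))  # ranges from -1 to 1; 0 = chance level
```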
5.1 Influence of SLR (RQ1) We first explore why SLR helps our models on the FG-TED task. Specifically, if we modify the attention mask to remove the interactions between SRC and HYP, model predictions are derived solely from monolingual features. In that case, all words in HYP and SRC should ideally be predicted as addition and omission errors, respectively. Following this setting, we collect the results during inference to show the distribution of prediction probabilities, and compare the distributions of predictions derived with the INFOXLM + SLR and INFOXLM models (Exp 11 and 7 in Table 2). As Figure 4 illustrates, the model without SLR predicts almost all words as correctly translated ones. After introducing SLR into model training, our model can accurately identify most of the translation errors. These findings demonstrate that, owing to the strong ability of the PLM, the derived monolingual features lead the FG-TED model to mark all words as accurate, hardly utilizing cross-lingual information for translation error detection. Introducing SLR into model training moderates this influence. Figure 4: Comparison on the distribution of prediction probabilities with SLR, shown for En-De addition/omission and Zh-En addition/omission. We evenly divide the collected probabilities into 20 buckets according to their values (x-axis), and show the proportion of each bucket (y-axis). We simulate the single-sentence input scenarios by modifying the attention masks, preventing the model from utilizing cross-lingual semantic interactions during prediction. 5.2 Generality on Low-Resource and Cross-Lingual Settings (RQ2, RQ3) Aside from Table 2, in this subsection we further explore the generality of our model in low-resource and cross-lingual settings. Low-Resource Setting To identify the generality of our model in the low-resource and cross-lingual settings, we first split the relabeled MQM dataset into train, dev, and test sets with a ratio of 1:1:8, yielding 816, 888, and 6,885 examples for each subset. To avoid the overlap of contextual information, we make sure that examples assigned the same SRC sentence are placed in the same subset. For model fine-tuning, we randomly select 20, 40, 80, 400, and 816 examples to collect the model performances with different numbers of training examples. Figure 5 shows the results in the low-resource setting. As seen, for the INFOXLM-SYN-MQM approach, which is first trained with our synthetic data, fine-tuning on 20 MQM samples shows a result comparable to the model fine-tuned on the entire training set. Besides, as the number of available MQM samples for fine-tuning grows, the performance of the FG-TED model increases. Meanwhile, using XLM-R as the backbone of our model (XLM-R-SYN-MQM) shows worse performance than INFOXLM, and the performance drop is consistent across different numbers of training examples. Building upon these findings, we claim that our synthetic data can significantly enhance the capability of the model. In addition, the model with the INFOXLM backbone outperforms that with XLM-R, especially when available fine-tuning data is extremely scarce (20 samples). This may be related to the pre-training objectives of those two PLMs. Compared to XLM-R, INFOXLM is enhanced with cross-lingual alignment information (Chi et al., 2021), which is important for our FG-TED task. Figure 5: Comparison in the low-resource setting. We use different scales of the MQM dataset (x-axis) for training and collect the averaged model F1 scores (y-axis). INFOXLM and XLM-R: the backbones we used. SYN: the model is first trained on synthetic examples.
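A minimal sketch of the group-aware split described above, which keeps all examples sharing one SRC sentence in the same subset; the function name and the exact handling of the 1:1:8 ratio are our illustration, not the released code.

```python
import random
from collections import defaultdict

def split_by_src(examples, ratios=(0.1, 0.1, 0.8), seed=42):
    """Split into train/dev/test so that all examples with the same SRC
    sentence land in one subset (avoids contextual overlap across splits)."""
    groups = defaultdict(list)
    for ex in examples:
        groups[ex["src"]].append(ex)
    keys = sorted(groups)
    random.Random(seed).shuffle(keys)
    n = len(keys)
    cut1, cut2 = int(n * ratios[0]), int(n * (ratios[0] + ratios[1]))
    parts = (keys[:cut1], keys[cut1:cut2], keys[cut2:])
    return [[ex for k in part for ex in groups[k]] for part in parts]

data = [{"src": f"s{i % 30}", "hyp": f"h{i}"} for i in range(100)]
train, dev, test = split_by_src(data)
print(len(train), len(dev), len(test))  # roughly 1:1:8 over SRC groups
```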
Cross-lingual Transferability We also conduct experiments to explore the cross-lingual transferability of our model. In this setting, the models are trained on the En-De annotated datasets and we collect their predictions on Zh-En examples. Figure 6 shows the performance of the models in this zero-shot setting. As seen, when the number of synthetic examples increases, the performances of the two models with INFOXLM and XLM-R backbones both increase. This indicates that our synthetic data can help models learn the core of the FG-TED task across languages. Besides, the INFOXLM approach is consistently better than XLM-R, showing the importance of cross-lingually aligned semantics for our task. In addition, once the number of synthetic examples exceeds 24k, both approaches show limited improvement from using more examples for training. Figure 6: Comparison in the cross-lingual setting. We use different numbers of En-De synthetic samples (x-axis) for training, and collect the averaged performance on the Zh-En test set (y-axis). 5.3 Pre-training Objective (RQ4) In this subsection, we further explore which training objective for PLMs helps our FG-TED task the most. We collect the performances of PLMs trained with different training objectives on the Zh-En subset, and use 18.8M Zh-En parallel sentences (the WMT'17 Zh-En Machine Translation benchmark) for pre-training using the Chi et al. (2021) repository.15
Table 5 (Zh-En part of the MQM dataset; each cell lists the P/R/F1 triple):
Model | Zh-En Addition | Zh-En Omission | Avg F1
MLM | 5.1 / 5.9 / 5.5 | 24.9 / 22.7 / 23.7 | 14.6
TLM | 19.4 / 36.0 / 25.2 | 46.0 / 33.0 / 38.4 | 31.8
MLM + TLM | 18.4 / 35.7 / 24.3 | 43.7 / 35.2 / 39.0 | 31.6
MLM + TLM + XLCO | 16.5 / 33.7 / 22.2 | 41.3 / 34.6 / 37.7 | 29.9
Table 5: The performance of different pre-training objectives on the Zh-En part of the MQM dataset.
As shown in Table 5, the PLM trained with the MLM objective delivers the worst FG-TED performance. In contrast, Translation Language Modeling (TLM, Conneau et al., 2019) offers cross-lingually aligned semantics during training, and the corresponding backbone shows better results than the one trained with MLM, indicating the importance of cross-lingual information for our task. Besides, combining the MLM and TLM objectives shows a slight performance drop on average, and the implementation of INFOXLM, i.e., combining MLM, TLM, and Cross-Lingual Contrast (XLCO, Chi et al., 2021), further degrades the performance of our FG-TED model. These results demonstrate that a PLM pre-trained with cross-lingually aligned semantics helps the FG-TED model better identify addition and omission translation errors. As Chi et al. (2021) claim, TLM maximizes token-sequence mutual information, while XLCO maximizes the sentence-level mutual information between translation pairs. Thus, TLM is more suitable for tasks that require fine-grained cross-lingual alignment information.
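As a schematic of how TLM differs from MLM, here is a small sketch of TLM-style data construction, assuming the concatenate-and-mask recipe of Conneau and Lample (2019); the helper names and toy tokens are ours.

```python
import random

MASK, PAD = "[MASK]", "[PAD]"

def make_tlm_example(src_tokens, tgt_tokens, mask_prob=0.15, seed=0):
    """TLM-style example: concatenate a parallel pair and mask tokens on
    BOTH sides, so recovering a masked token can exploit the other language."""
    random.seed(seed)
    tokens = src_tokens + ["</s>"] + tgt_tokens
    inputs, labels = [], []
    for tok in tokens:
        if tok != "</s>" and random.random() < mask_prob:
            inputs.append(MASK)
            labels.append(tok)   # predict the original token
        else:
            inputs.append(tok)
            labels.append(PAD)   # position ignored by the loss
    return inputs, labels

inp, lab = make_tlm_example(["the", "cat", "sleeps"], ["die", "Katze", "schläft"])
print(inp)
print(lab)
```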
6" + }, + { + "url": "http://arxiv.org/abs/2210.10049v2", + "title": "Alibaba-Translate China's Submission for WMT 2022 Quality Estimation Shared Task", + "abstract": "In this paper, we present our submission to the sentence-level MQM benchmark\nat Quality Estimation Shared Task, named UniTE (Unified Translation\nEvaluation). Specifically, our systems employ the framework of UniTE, which\ncombined three types of input formats during training with a pre-trained\nlanguage model. First, we apply the pseudo-labeled data examples for the\ncontinuously pre-training phase. Notably, to reduce the gap between\npre-training and fine-tuning, we use data pruning and a ranking-based score\nnormalization strategy. For the fine-tuning phase, we use both Direct\nAssessment (DA) and Multidimensional Quality Metrics (MQM) data from past\nyears' WMT competitions. Finally, we collect the source-only evaluation\nresults, and ensemble the predictions generated by two UniTE models, whose\nbackbones are XLM-R and InfoXLM, respectively. Results show that our models\nreach 1st overall ranking in the Multilingual and English-Russian settings, and\n2nd overall ranking in English-German and Chinese-English settings, showing\nrelatively strong performances in this year's quality estimation competition.", + "authors": "Keqin Bao, Yu Wan, Dayiheng Liu, Baosong Yang, Wenqiang Lei, Xiangnan He, Derek F. Wong, Jun Xie", + "published": "2022-10-18", + "updated": "2023-02-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "main_content": "Introduction Quality Estimation (QE) aims at evaluating machine translation without access to a gold-standard reference translation (Blatz et al., 2004; Specia et al., 2018). Different from other evaluation tasks (e.g., metric), QE arranges its process of evaluation via only accessing source input. As the performance of modern machine translation approaches increase (Vaswani et al., 2017; Lin et al., 2022; Wei et al., 2022; Zhang et al., 2022), the QE systems should better quantify the agreement of cross-lingual semantics on source sentence and translation hypothesis. The evaluation paradigm \u2217Equal contribution. Work was done when Keqin Bao and Yu Wan were interning at DAMO Academy, Alibaba Group. of QE shows its own potential for real-world applications (Wang et al., 2021; Park et al., 2021; Specia et al., 2021). This paper describes Alibaba Translate China\u2019s submission to the sentence-level MQM benchmark at WMT 2022 Quality Estimation Shared Task (Zerva et al., 2022). In recent years, pre-trained language models (PLMs) have shown their strong ability on extracting cross-lingual information (Conneau et al., 2020; Chi et al., 2021). To achieve a higher correlation with human ratings on the quality of translation outputs, plenty of trainable model-based QE approaches appear, e.g., COMET-QE (Rei et al., 2020) and QEMIND (Wang et al., 2021). They both \ufb01rst derive the embeddings assigned with source and hypothesis sentence with given PLM, then predict the overall score based on their embeddings with a followed feedforward network. Those model-based approaches have greatly facilitated the development of the QE community. However, those models can only handle source-only input format, which neglects the other two evaluation scenarios, i.e., reference-only and source-referencecombined evaluation. 
More importantly, training with multiple input formats can achieve a higher correlation with human assessments than individually training on a specific evaluation scenario (Wan et al., 2021, 2022a). Those findings indicate that the QE and Metric tasks share plenty of knowledge when identifying the quality of translated outputs, and unifying the functionalities of the three evaluation scenarios into one model can also enhance the performance of the evaluation model on each scenario. As a consequence, when building a single model for the sentence-level QE task, we use the pipeline of UNITE (Wan et al., 2022a), which integrates source-only, reference-only, and source-reference-combined translation evaluation ability into one single model. When collecting the system outputs for the WMT 2022 Quality Estimation Shared Task, we employ our UNITE models to predict the translation quality scores following a source-only setting. As for the training data, we collect synthetic data examples as supervision for continuous pre-training and apply a dataset pruning strategy to increase the translation quality of the training set. Also, when fine-tuning our QE model, we use all available Direct Assessment (DA, Bojar et al., 2017; Ma et al., 2018, 2019; Mathur et al., 2020) and Multidimensional Quality Metrics datasets (MQM, Freitag et al., 2021a,b) from previous WMT competitions to further improve the performance of our model. Besides, regarding the applied PLM for UNITE models, we find that for the English-Russian (En-Ru) and Chinese-English (Zh-En) directions, a PLM enhanced with cross-lingual alignments (INFOXLM, Chi et al., 2021) can deliver better results than conventional ones (XLM-R, Conneau et al., 2020). Moreover, for each subtask, including English to German (En-De), En-Ru, Zh-En, and multilingual direction evaluations, we build an ensembled QE system to derive more accurate and convincing results as final predictions. Our models show impressive performances in all translation directions. When only considering the primary metric, Spearman's correlation, we get 2nd, 3rd, and 3rd place in the En-Ru, Zh-En, and multilingual directions, respectively. More notably, when taking all metrics into account, despite the slight decrease in Spearman's correlations, our systems show better overall performance than other systems, achieving 1st place in the En-Ru and multilingual settings, and 2nd in the En-De and Zh-En directions. 2 Method As outlined in §1, we apply the UNITE framework (Wan et al., 2022a) to obtain QE models. We unify three types of input formats (i.e., source-only, reference-only, and source-reference-combined) into one single model during training. During inference, we only use the source-only paradigm to collect evaluation scores. In this section, we introduce the applied model architecture (§2.1), the synthetic data construction method (§2.2), and the model training strategy (§2.3). 2.1 Model architecture Input Format Following Wan et al. (2022a), we design our QE model to be capable of processing source-only, reference-only, and source-reference-combined evaluation scenarios.
Consequently, for the consistency of training across all input formats, we construct the input sequences for the source-only, reference-only, and source-reference-combined input formats as follows: $x_{\mathrm{SRC}} = \langle s\rangle\, h\, \langle/s\rangle\, \langle/s\rangle\, s\, \langle/s\rangle$, (1) $x_{\mathrm{REF}} = \langle s\rangle\, h\, \langle/s\rangle\, \langle/s\rangle\, r\, \langle/s\rangle$, (2) $x_{\mathrm{SRC+REF}} = \langle s\rangle\, h\, \langle/s\rangle\, \langle/s\rangle\, s\, \langle/s\rangle\, \langle/s\rangle\, r\, \langle/s\rangle$, (3) where $h$, $s$, and $r$ represent the hypothesis, source, and reference sentence, respectively. During the pre-training phase, we apply all input formats to enhance the performance of QE models. Notably, we only use the source-only format when fine-tuning on this year's dev set and inferring on the test set. Model Backbone Selection The core of quality estimation is evaluating the translation quality of an output given the source sentence. As the source and hypothesis sentences are from different languages, evaluating the translation quality requires the ability of multilingual processing. Furthermore, we believe that PLMs that possess cross-lingual semantic alignments can ease the learning of translation quality evaluation. Existing methods (Ranasinghe et al., 2020; Rei et al., 2020; Sellam et al., 2020; Wan et al., 2022a) often apply XLM-R (Conneau et al., 2020) as the backbone of evaluation models for better multilingual support. To test whether cross-lingual alignments can help evaluation model training, we further apply INFOXLM (Chi et al., 2021), which enhances XLM-R with cross-lingual alignments, as the backbone of our evaluation models. Model Training For a training dataset including source, reference, and hypothesis sentences, we first equally split all examples into three parts, each of which serves only one input format during training. For each training example, after concatenating the required input sentences into one sequence and feeding it to the PLM, we collect the corresponding representations $H_{\mathrm{REF}}$, $H_{\mathrm{SRC}}$, $H_{\mathrm{SRC+REF}}$ for each input format, respectively. After that, we use the output embedding of the CLS token, $h$, as the sequence representation. Finally, a feedforward network takes $h$ as input and gives a scalar $p$ as a prediction. Taking $x_{\mathrm{SRC}}$ as an example: $H_{\mathrm{SRC}} = \mathrm{PLM}(x_{\mathrm{SRC}}) \in \mathbb{R}^{(l_h + l_s) \times d}$, (4) $h_{\mathrm{SRC}} = \mathrm{CLS}(H_{\mathrm{SRC}}) \in \mathbb{R}^{d}$, (5) $p_{\mathrm{SRC}} = \mathrm{FeedForward}(h_{\mathrm{SRC}}) \in \mathbb{R}^{1}$, (6) where $l_h$ and $l_s$ are the lengths of $h$ and $s$, respectively. For the learning objective, we apply the mean squared error (MSE) as the loss function: $\mathcal{L}_{\mathrm{SRC}} = (p_{\mathrm{SRC}} - q)^2$, (7) where $q$ is the given ground-truth score. Note that, when training on three input formats, one single step includes three substeps, each of which is arranged on one specific input format. Besides, the batch size is the same across all input formats to avoid training imbalance. During each update, the final learning objective can be written as the sum of the losses for each format: $\mathcal{L} = \mathcal{L}_{\mathrm{REF}} + \mathcal{L}_{\mathrm{SRC}} + \mathcal{L}_{\mathrm{SRC+REF}}$. (8) A minimal code sketch of Eqs. (4)-(8) is given after the pipeline overview below. 2.2 Constructing Synthetic Data To better enhance the translation evaluation ability of pre-trained models, we first construct a synthetic dataset for continuous pre-training (Wan et al., 2022a). The pipeline for obtaining such a dataset consists of the following steps: 1) collecting synthetic data from the parallel data provided by the WMT Translation task; 2) labeling samples with a ranking-based scoring strategy; 3) pruning data samples to increase the quality of the dataset; 4) relabeling them with a ranking-based scoring strategy.
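Before detailing these steps, here is the promised minimal sketch of the §2.1 scorer and the summed multi-format loss of Eqs. (4)-(8). The encoder is a stand-in for XLM-R/InfoXLM, the head dimensions follow the 3,072/1,024/1 setting reported in §3, and the class and helper names are ours, not the released code.

```python
import torch
import torch.nn as nn

class UniTEStyleScorer(nn.Module):
    """One encoder scores all three input formats; the CLS embedding
    feeds a small tanh-activated regression head (Eqs. 4-6)."""
    def __init__(self, encoder, hidden=1024):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Sequential(
            nn.Linear(hidden, 3072), nn.Tanh(),
            nn.Linear(3072, 1024), nn.Tanh(),
            nn.Linear(1024, 1),
        )

    def forward(self, input_ids, attention_mask):
        h = self.encoder(input_ids, attention_mask)  # (B, L, d)
        cls = h[:, 0]                                # Eq. (5): CLS pooling
        return self.head(cls).squeeze(-1)            # Eq. (6): scalar score

def multi_format_loss(model, batches, gold):
    """Eq. (8): sum the MSE losses (Eq. 7) of SRC, REF and SRC+REF formats."""
    loss = 0.0
    for fmt in ("src", "ref", "src_ref"):
        pred = model(*batches[fmt])
        loss = loss + ((pred - gold[fmt]) ** 2).mean()
    return loss

class DummyEncoder(nn.Module):  # stand-in for a PLM, for a runnable demo
    def __init__(self, vocab=100, d=1024):
        super().__init__()
        self.emb = nn.Embedding(vocab, d)
    def forward(self, ids, mask):
        return self.emb(ids)

model = UniTEStyleScorer(DummyEncoder())
ids, mask = torch.randint(0, 100, (2, 8)), torch.ones(2, 8)
print(model(ids, mask).shape)  # (2,)
```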
Collecting Synthetic Data Pseudo datasets for model pre-training have been proven effective for obtaining well-performing evaluation models (Sellam et al., 2020; Wan et al., 2021, 2022a). Moreover, as in Wan et al. (2022a), training on three input formats requires massive pseudo examples. Specifically, we first obtain parallel data from this year's WMT Translation task as the source-reference sentence pairs, and translate the source using online translation engines, e.g., Google Translate (https://translate.google.com) and Alibaba Translate (https://translate.alibaba.com), to generate the hypothesis sentence. As discussed in Sellam et al. (2020), such conventional pseudo hypotheses are usually of high translation quality. Consequently, the dataset hardly possesses much translation quality diversity, making it difficult to train evaluation models. We follow existing works (Wan et al., 2022a; Sellam et al., 2020) and apply the word and span dropping strategy to attenuate hypothesis quality, increasing the ratio of training examples consisting of bad translation outputs. Data Labeling and Pruning After downgrading the translation quality of the synthetic hypothesis sentences, we then collect predicted scores for each triple as the learning supervision, using checkpoints from UNITE (Wan et al., 2022a; https://github.com/wanyu2018umac/UniTE). As discussed in Wan et al. (2022a) and Sellam et al. (2020), scores labeled by low-quality metrics have poor consistency, confusing model learning during the training period. To increase the confidence of the pseudo-labeled scores, we use multiple UNITE checkpoints trained with different random seeds to label the synthetic data (Wan et al., 2022a). Besides, to reduce the gap of predicted scores among different translation directions, as well as alleviate the bias among multiple evaluation approaches, we follow the scoring methods in UNITE (Wan et al., 2022a), using the idea of Borda count (Ho et al., 1994; Emerson, 2013). After sorting the collected prediction scores, we use their ranking indexes instead, and apply the conventional Z-score strategy to normalize them. During our preliminary experiments, we find that the quality of hypotheses in the MQM 2020 and 2021 datasets is generally high. As shown in Figure 1, more than 64% of the human-annotated scores are higher than 90. Figure 1: The cumulative distribution of scores in the WMT 2020 and 2021 MQM datasets. The x-axis represents the annotated score while the y-axis represents the ratio. To further mitigate the disagreement of translation quality distributions between the pre-training and test datasets, we arrange data pruning for the synthetic data. Specifically, for each language pair, we sort the synthetic examples by their scores in ascending order, and split the examples into 5 bins. For the examples in each bin, we randomly drop 90%, 80%, 60%, 20%, and 0% of the examples, yielding 0.5M synthetic examples for each language pair, and renormalize the prediction scores in the ranking-based manner described before. In total, we collect pseudo examples for 10 translation directions, i.e., English ↔ Czech/German/Japanese/Russian/Chinese, each of which contains 0.5M data tuples formatted as $\langle h, s, r, q \rangle$.
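A compact sketch of the ranking-based relabeling and the binned pruning just described, under the stated drop ratios; the exact implementation details are assumptions on our part.

```python
import numpy as np

def borda_z_normalize(scores):
    """Ranking-based relabeling: replace raw metric scores by their rank
    (Borda-count style) and z-score the ranks, reducing scale mismatch
    across translation directions and labeling metrics."""
    ranks = np.argsort(np.argsort(scores)).astype(float)
    return (ranks - ranks.mean()) / (ranks.std() + 1e-8)

def prune_by_quality_bins(scores, drop_ratios=(0.9, 0.8, 0.6, 0.2, 0.0), seed=0):
    """Sort examples by score, split into 5 equal bins (low -> high quality)
    and randomly drop the given fraction of each bin, shifting the synthetic
    distribution toward the high-quality MQM test data."""
    rng = np.random.default_rng(seed)
    order = np.argsort(scores)
    keep = []
    for idx, ratio in zip(np.array_split(order, len(drop_ratios)), drop_ratios):
        kept = rng.choice(idx, size=int(len(idx) * (1 - ratio)), replace=False)
        keep.extend(kept.tolist())
    return sorted(keep)

scores = np.random.default_rng(0).normal(size=1000)
kept = prune_by_quality_bins(scores)
print(len(kept), borda_z_normalize(scores[kept])[:3])
```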
2.3 Training Pipeline To train UNITE models, the available datasets consist of the synthetic examples (as in §2.2), human annotations (i.e., DA and MQM), and the provided development set for this year. In practice, we arrange the training pipeline into three steps as follows. Pre-train with Synthetic Data As illustrated in §2.2, after collecting the synthetic dataset, we use it to continuously pre-train our UNITE models to enhance the evaluation ability on all three input formats. Fine-tune with DA Dataset After collecting pre-trained checkpoints, we first fine-tune them with the human-annotated DA datasets. Although the DA and MQM datasets have different scoring rules, training UNITE models on DA as an additional phase can enhance both the model robustness and the support of multilinguality. In practice, we collect all DA datasets from the years 2017 to 2020, yielding 853k training examples. Notably, we leave the year 2021 out of training due to the bug reported by the organizational committee. Fine-tune with MQM Dataset For the evaluation test set, which is assessed with MQM scoring rules, we arrange the MQM datasets from the years 2020 and 2021, consisting of 75k examples, for fine-tuning models at the end of the training phase. Specifically, during this step, we first use the provided development set to tune hyper-parameters for the continuous pre-training and fine-tuning, and then directly use all data examples to fine-tune our UNITE models following the previous setting. 2.4 Results Conduction To select appropriate checkpoints, we evaluate our models on this year's development set and select the top-3 models for each translation direction. Furthermore, to fully utilize the development set, we conduct a 5-fold cross-validation on the development set to select the best hyper-parameters for each top-3 model trained on it. Finally, we use the best hyper-parameters to fine-tune one single model on the entire development set. As to the results conduction, we first apply multiple random seeds for each setting, and select the checkpoint with the best performance. Besides, to further increase the accuracy of the ensembled scores, we choose two checkpoints whose backbones are XLM-R and INFOXLM, respectively. Notably, uncertainty estimation has been verified in the Machine Translation and Translation Evaluation communities (Wan et al., 2020; Zhou et al., 2020; Glushkova et al., 2021). However, applying this method is time-consuming, and we do not try it in this year's QE task. 3 Experiments Experiment Settings We choose the large versions of XLM-R (Conneau et al., 2020) and INFOXLM (Chi et al., 2021) as the PLM backbones of all UNITE models. The feedforward network contains three linear transition layers, whose output dimensionalities are 3,072, 1,024, and 1, respectively. Between any two adjacent layers, a hyperbolic tangent is arranged as the activation. During the pre-training phase, we use the WMT 2021 MQM dataset as the development set to tune the hyper-parameters for the continuous pre-training and DA fine-tuning phases. For the XLM-R setting, we apply a learning rate of $1.0 \cdot 10^{-5}$ for the PLM, and $3.0 \cdot 10^{-5}$ for the feedforward network. For the INFOXLM setting, we halve the corresponding learning rates to maintain training stability. Besides, we find that raising the batch size can make the training more stable. In practice, we set the batch size for each input format as 1,024. For the following fine-tuning steps, we use a batch size of 32 across all settings.
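The three-stage schedule and the optimizer settings above can be summarized in a small driver sketch; the helper names and structure are ours, not the released training code.

```python
STAGES = [
    ("pretrain-synthetic", dict(batch_size=1024)),  # continuous pre-training
    ("finetune-DA",        dict(batch_size=32)),    # WMT17-20 DA, 853k examples
    ("finetune-MQM",       dict(batch_size=32)),    # WMT20-21 MQM, 75k examples
]

def train_one_stage(model, stage, batch_size, lr_encoder=1e-5, lr_head=3e-5):
    # Stub for one training stage; XLM-R uses lr 1e-5 (encoder) / 3e-5 (head),
    # halved for InfoXLM as described above.
    print(f"{stage}: batch={batch_size}, lr=({lr_encoder}, {lr_head})")

model = object()  # placeholder for a UniTE-style scorer
for stage, cfg in STAGES:
    train_one_stage(model, stage, **cfg)
```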
Evaluation Setup As requested by the organizers, we primarily evaluate our systems in terms of Spearman's correlation between the predicted scores and the human annotations for each translation direction. Apart from that, we also take other metrics, e.g., Pearson's correlation, into account. Note that, during the evaluation of the multilingual phase, we directly calculate the correlation score over all predictions instead of computing it for each language direction individually. Baseline We introduce COMET-QE-21 (Zerva et al., 2021), one of the best-performing QE models, as our strong baseline. COMET-QE-21 has shown strong performance in the WMT 2021 QE (Specia et al., 2021) and Metrics Shared Task (Freitag et al., 2021b) competitions. We directly apply the officially released COMET-QE-21 baseline (https://github.com/Unbabel/COMET/), and use the well-trained checkpoints to infer on this year's development set for comparison. Main Results We first verify the effectiveness of our systems on this year's development set. As shown in Table 1, our models outperform COMET-QE-21 in all translation directions. As to the results of the final submissions, we list the results in Table 2.
Table 1 (Spearman's correlation, %, on this year's development set):
Model | Multilingual | En-De | En-Ru | Zh-En
COMET-QE-21 (Zerva et al., 2021) | 39.8 | 49.4 | 46.5 | 23.5
UNITE-pretrain | 14.0 | 36.0 | 15.2 | 23.8
UNITE-pretrain-prune | 28.5 | 41.5 | 22.2 | 20.4
UNITE-pretrain-prune + DA | 44.5 | 49.3 | 50.3 | 25.2
UNITE-pretrain-prune + MQM | 29.2 | 39.8 | 49.0 | 23.9
UNITE-pretrain-prune + DA + MQM | 40.2 | 52.3 | 58.5 | 25.7
UNITE-INFOXLM-pretrain-prune + DA + MQM | 32.2 | 47.7 | 59.0 | 27.1
Table 1: Spearman's correlation (%) on this year's development dataset. The best result for each translation direction is bolded. Applying both DA and MQM datasets for fine-tuning achieves better results. Taking XLM-R as the backbone shows better results on En-De, and INFOXLM on Zh-En and En-Ru.
Table 2 (Spearman's correlation, %, on this year's test set):
Model | Multilingual | En-De | En-Ru | Zh-En
Single model | 41.1 | 46.1 | 47.4 | 31.3
5-fold ensembling | 42.7 | 53.1 | 48.4 | 34.7
XLM-R + INFOXLM ensembling | 45.6 | 55.0 | 50.5 | 33.6
Table 2: Spearman's correlation (%) on this year's test set. The best results for each translation direction are in bold. The 5-fold ensembling strategy delivers better correlation on the Zh-En translation direction, and ensembling models trained on different PLM backbones yields better results in the Multilingual, En-De, and En-Ru settings.
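A minimal scorer matching this evaluation setup (per-direction correlations plus one pooled multilingual correlation) using SciPy; the data here is synthetic, for illustration only.

```python
import numpy as np
from scipy.stats import spearmanr, pearsonr

def evaluate_qe(pred, gold, directions):
    """Per-direction and pooled-multilingual (Spearman, Pearson) correlations,
    mirroring the setup above: multilingual = one correlation over all pairs."""
    pred, gold, directions = map(np.asarray, (pred, gold, directions))
    report = {}
    for d in np.unique(directions):
        m = directions == d
        report[d] = (spearmanr(pred[m], gold[m])[0], pearsonr(pred[m], gold[m])[0])
    report["multilingual"] = (spearmanr(pred, gold)[0], pearsonr(pred, gold)[0])
    return report

rng = np.random.default_rng(0)
gold = rng.normal(size=200)
pred = gold + rng.normal(scale=0.5, size=200)      # noisy system predictions
dirs = np.repeat(["en-de", "en-ru", "zh-en", "en-cs"], 50)
print(evaluate_qe(pred, gold, dirs)["multilingual"])
```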
4 Analysis In this section, we discuss the effectiveness of all strategies, i.e., data pruning (§4.1), training data arrangement (§4.2), backbone selection (§4.3), and model ensembling methods (§4.4). 4.1 Data Pruning We first investigate the impact of the data pruning strategy in Table 1. When using the pruned data to train UNITE models, the performance gains significant improvements, with 14.5, 5.5, and 7.0 Spearman's correlation on the Multilingual, En-De, and En-Ru translation directions, respectively. As discussed in §2.2, most training examples in the MQM dataset have rather high translation quality. The data pruning method reduces the ratio of training examples containing poorly translated hypotheses; in contrast to the unpruned synthetic dataset, the ratio of examples consisting of well-translated outputs is raised. Consequently, we can reduce the translation quality distribution gap between the synthetic and MQM datasets, so the continuous pre-training and fine-tuning phases can share a great deal of learned knowledge. The experimental results validate our thinking: the data pruning strategy offers a higher transferability of quality evaluation from synthetic to MQM data examples, making model learning easier on the latter. 4.2 Training Data To identify which dataset among DA and MQM is more important during fine-tuning, we conduct an experiment comparing their effectiveness. As shown in Table 1, using either the DA or the MQM dataset gives a performance improvement compared to only using synthetic data. Notably, the combination of the DA and MQM datasets can further boost the performance in the En-Ru/En-De/Zh-En directions. However, when comparing UNITE-DA-MQM to UNITE-DA, an unexpected performance drop in the Multilingual setting is observed. We think the reasons behind this phenomenon are two-fold. On the one hand, the DA data covers 34 translation directions, while the MQM data only has three specific directions (i.e., En-De, En-Ru, and Zh-En), and the annotation rules applied for those two datasets are inconsistent with each other. Training the model on MQM data can boost the performance in a specific direction, while a model trained on DA data possesses a more general evaluation ability over more translation directions, thus delivering more stable results in multilingual evaluation scenarios. On the other hand, for MQM data items, even though the scores may be similar across translation directions and competition years, the corresponding translation quality may vary vastly. For example, a score of 0.3 may be a relatively high score in the MQM 2021 Zh-En subset, while it is rather low in this year's En-De direction. This phenomenon is quite critical when handling examples from multiple translation directions: as scores from the involved translation directions are not compatible, training on those examples concurrently may downgrade the multilingual performance of our models. 4.3 Backbone Selection As in Table 1, UNITE-pretrain-prune + DA + MQM is trained with the XLM-R backbone, while UNITE-INFOXLM-pretrain-prune + DA + MQM is trained with INFOXLM using the same hyper-parameters and strategy. As seen, after updating the backbone of the UNITE model from XLM-R to INFOXLM, the latter model outperforms the former in the En-Ru and Zh-En directions, with improvements in Spearman's correlation of 0.5 and 1.4, respectively. We can see that the quality estimation model can benefit from cross-lingual alignment knowledge during model training. However, in the En-De direction, the performance shows a significant drop of 4.6. We attribute this to the fact that English and German are from the same language family, where the two languages can obtain a great deal of cross-lingual knowledge via similar tokens with the same meaning. For the Multilingual direction, we believe the impact of the training data makes the comparison less reliable, as discussed in §4.2. 4.4 Ensemble Methods As in Table 2, the ensembled models show great improvement in all translation directions. The difference between XLM-R and INFOXLM lies in the training objective and the applied training dataset. For the quality estimation task, whose core lies in semantic alignment across languages, the knowledge contained in those two PLMs can be complementary. Except for the Zh-En direction, XLM-R + INFOXLM ensembling outperforms the 5-fold ensembling method in three tracks, with performance increases of 2.9, 1.9, and 2.1 for the Multilingual, En-De, and En-Ru settings, respectively. This demonstrates that ensembling models constructed with different backbones can give better results compared to the k-fold ensembling strategy.
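A sketch of the backbone ensembling step; the paper does not spell out the combination rule, so averaging z-normalized scores is our assumption.

```python
import numpy as np

def ensemble_backbones(scores_xlmr, scores_infoxlm):
    """Average z-normalized predictions of the two backbone models; with
    k-fold ensembling the same averaging would run over the k fold
    checkpoints instead."""
    def z(x):
        x = np.asarray(x, dtype=float)
        return (x - x.mean()) / (x.std() + 1e-8)
    return (z(scores_xlmr) + z(scores_infoxlm)) / 2

print(ensemble_backbones([0.1, 0.7, 0.4], [0.3, 0.9, 0.2]))
```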
5" + } + ], + "Wenjie Wang": [ + { + "url": "http://arxiv.org/abs/2403.04399v1", + "title": "The 2nd Workshop on Recommendation with Generative Models", + "abstract": "The rise of generative models has driven significant advancements in\nrecommender systems, leaving unique opportunities for enhancing users'\npersonalized recommendations. This workshop serves as a platform for\nresearchers to explore and exchange innovative concepts related to the\nintegration of generative models into recommender systems. It primarily focuses\non five key perspectives: (i) improving recommender algorithms, (ii) generating\npersonalized content, (iii) evolving the user-system interaction paradigm, (iv)\nenhancing trustworthiness checks, and (v) refining evaluation methodologies for\ngenerative recommendations. With generative models advancing rapidly, an\nincreasing body of research is emerging in these domains, underscoring the\ntimeliness and critical importance of this workshop. The related research will\nintroduce innovative technologies to recommender systems and contribute to\nfresh challenges in both academia and industry. In the long term, this research\ndirection has the potential to revolutionize the traditional recommender\nparadigms and foster the development of next-generation recommender systems.", + "authors": "Wenjie Wang, Yang Zhang, Xinyu Lin, Fuli Feng, Weiwen Liu, Yong Liu, Xiangyu Zhao, Wayne Xin Zhao, Yang Song, Xiangnan He", + "published": "2024-03-07", + "updated": "2024-03-07", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "main_content": "Introduction The surge in generative models has catalyzed substantial progress within recommender systems. For example, pre-trained generative models have demonstrated their capability to effectively learn user preferences from historical interactions [1, 2]; generative models have shown promise in generating item content that caters to users' diverse information needs in specific scenarios [17]; and the emergence of ChatGPT-like language models offers novel interaction modes to obtain users' feedback and intention [8, 14]. In this light, user experience can be potentially enhanced by advancing the traditional recommender paradigms via generative models. This workshop provides a platform to facilitate the integration of generative models into recommender systems, with a focus on user modeling, content generation, interaction patterns, trustworthiness evaluations [23], and evaluation methods [13]. 4.2 Objectives and Scope This workshop aims to encourage innovative research on integrating generative models with recommender systems, particularly on five key aspects: (i) enhancing algorithms for user modeling by generative models; (ii) generating personalized content to supplement human-generated content; (iii) evolving the user interaction modes with recommender systems; (iv) prioritizing trustworthiness in generative recommendation; (v) formulating evaluation techniques for generative model-based recommender systems. We summarize the detailed objectives and scope in Section 1. 4.3 Submission Information Submission Guidelines: Submitted papers must be a single PDF file in the template of ACM WWW 2024.
Submissions can be of varying length from 4 to 8 pages, plus unlimited pages for references. The authors may decide on the appropriate length of the paper, as no distinction is made between long and short papers. All submitted papers will follow the "double-blind" review policy and undergo the same review process and duration. Expert peer reviewers in the field will assess all papers based on their relevance to the workshop, scientific novelty, and technical quality. The timeline is as follows (kindly note that the paper acceptance plan and the program schedule are tentative and may be subject to potential adjustments according to the requirements of the conference chairs): • Submissions deadline: February 26, 2024 • Paper acceptance notification: March 4, 2024 • Workshop date: May 13, 2024 5 ORGANIZERS • Wenjie Wang Email: wangwenjie@u.nus.edu Affiliation: National University of Singapore Biography: Dr. Wenjie Wang is a research fellow at the National University of Singapore (NUS). He received his Ph.D. in Computer Science from NUS, supervised by Prof. Tat-Seng Chua. Dr. Wang was a winner of the Google Ph.D. Fellowship. His research interests cover recommender systems, data mining, and causal inference. His first-author publications appear in top conferences and journals such as SIGIR, KDD, WWW, WSDM, and TOIS. His work has been selected into the ACMMM 2019 Best Paper Final List. • Yang Zhang Email: zy2015@mail.ustc.edu.cn Affiliation: University of Science and Technology of China Biography: Yang Zhang is a Ph.D. candidate at the University of Science and Technology of China (USTC), under the supervision of Prof. Xiangnan He. His research interests lie in recommender systems and causal inference, and he has published several first-author papers in top conferences. In particular, he received the Best Paper Honorable Mention in SIGIR 2021 for his work on causal recommendation. • Xinyu Lin Email: xylin1028@gmail.com Affiliation: National University of Singapore Biography: Xinyu Lin is a Ph.D. candidate at the National University of Singapore, under the supervision of Prof. Tat-Seng Chua. Her research interests lie in recommender systems, and her work has been published in top conferences and journals such as SIGIR, WWW, CIKM, and TOIS. Moreover, she has also served as a reviewer and PC member for top conferences and journals, including SIGIR, WSDM, and TOIS. • Fuli Feng Email: fulifeng93@gmail.com Affiliation: University of Science and Technology of China Biography: Dr. Fuli Feng is a professor at the University of Science and Technology of China. He received his Ph.D. in Computer Science from the National University of Singapore in 2019. His research interests include information retrieval, data mining, causal inference, and multi-media processing. He has over 60 publications in top conferences such as SIGIR, WWW, and SIGKDD, and journals including TKDE and TOIS. He has received the Best Paper Honourable Mention of SIGIR 2021 and the Best Poster Award of WWW 2018. Moreover, he organized the 1st workshop on Information Retrieval in Finance at SIGIR'20. • Weiwen Liu Email: liuweiwen8@huawei.com Affiliation: Huawei Noah's Ark Lab Biography: Dr. Weiwen Liu is currently a senior researcher at Huawei Noah's Ark Lab. She received her Ph.D. in Computer Science and Engineering from the Chinese University of Hong Kong in 2020.
Her research is broadly concerned with recommender systems, information retrieval, and user preference learning. She has published over 40 papers in top conferences including KDD, SIGIR, and WWW. She gave a tutorial on neural re-ranking at RecSys'22. She will co-organize the DLP workshop at RecSys'23. • Yong Liu Email: liu.yong6@huawei.com Affiliation: Huawei Noah's Ark Lab Biography: Dr. Yong Liu is a Senior Principal Researcher at Huawei Noah's Ark Lab, Singapore. Prior to joining Huawei, he was a Senior Research Scientist at Nanyang Technological University (NTU), a Data Scientist at NTUC Enterprise, and a Research Scientist at the Institute for Infocomm Research (I2R), A*STAR, Singapore. Moreover, he has served as the Challenge Co-chair for RecSys 2023 and PC Co-chair for ICCSE 2021. • Xiangyu Zhao Email: xianzhao@cityu.edu.hk Affiliation: City University of Hong Kong Biography: Prof. Xiangyu Zhao is a tenure-track assistant professor of Data Science at City University of Hong Kong (CityU). His research has been awarded ICDM'22 and ICDM'21 Best-ranked Papers, Global Top 100 Chinese New Stars in AI, CCF-Tencent Open Fund (twice), CCF-Ant Research Fund, CCF-BaiChuan-Ebtech Foundation Model Fund, Ant Group Research Fund, Tencent Focused Research Fund, Criteo Faculty Research Award, Bytedance Research Collaboration Program, and a nomination for the Joint AAAI/ACM SIGAI Doctoral Dissertation Award. He also co-organizes the DRL4KDD and DRL4IR workshops at KDD'19, WWW'21, SIGIR'20/21/22 and CIKM'23. • Wayne Xin Zhao Email: batmanfly@gmail.com Affiliation: Renmin University of China Biography: Dr. (Wayne) Xin Zhao is currently a professor at Renmin University of China. He obtained his doctoral degree from Peking University in July 2014. He has broad research interests in the fields of information retrieval and natural language processing, with 100+ published papers at top-tier conferences/journals and 10,000+ academic citations on Google Scholar. He received the ECIR 2021 Test of Time Award, RecSys 2022 Best Student Paper Runner-up, CIKM 2022 Best Resource Paper Runner-up, and other awards. • Yang Song Email: yangsong@kuaishou.com Affiliation: Kuaishou Technology Biography: Dr. Yang Song is currently the Head of Recommendation at Kwai, overseeing both core modeling and data mining teams. He has published over 70 papers in conferences and journals. He has served as PC and Area Chair at RecSys, WSDM, TheWebConf (WWW), IEEE Big Data, etc. • Xiangnan He Email: xiangnanhe@gmail.com Affiliation: University of Science and Technology of China Biography: Dr. Xiangnan He is a professor at the University of Science and Technology of China (USTC). He has over 100 publications in top conferences such as SIGIR, WWW, and KDD, and journals including TKDE, TOIS, and TNNLS. His work on recommender systems has received the Best Paper Award Honourable Mention in SIGIR (2021, 2016) and WWW (2018). He has rich experience in organizing workshops and tutorials at SIGIR'18, WSDM'19&20, WWW'21&22, and RecSys'21." + }, + { + "url": "http://arxiv.org/abs/2403.02692v1", + "title": "Uplift Modeling for Target User Attacks on Recommender Systems", + "abstract": "Recommender systems are vulnerable to injective attacks, which inject limited\nfake users into the platforms to manipulate the exposure of target items to all\nusers.
In this work, we identify that conventional injective attackers overlook\nthe fact that each item has its unique potential audience, and meanwhile, the\nattack difficulty across different users varies. Blindly attacking all users\nwill result in a waste of fake user budgets and inferior attack performance. To\naddress these issues, we focus on an under-explored attack task called target\nuser attacks, aiming at promoting target items to a particular user group. In\naddition, we formulate the varying attack difficulty as heterogeneous treatment\neffects through a causal lens and propose an Uplift-guided Budget Allocation\n(UBA) framework. UBA estimates the treatment effect on each target user and\noptimizes the allocation of fake user budgets to maximize the attack\nperformance. Theoretical and empirical analysis demonstrates the rationality of\ntreatment effect estimation methods of UBA. By instantiating UBA on multiple\nattackers, we conduct extensive experiments on three datasets under various\nsettings with different target items, target users, fake user budgets, victim\nmodels, and defense models, validating the effectiveness and robustness of UBA.", + "authors": "Wenjie Wang, Changsheng Wang, Fuli Feng, Wentao Shi, Daizong Ding, Tat-Seng Chua", + "published": "2024-03-05", + "updated": "2024-03-05", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "main_content": "INTRODUCTION Recommender systems have evolved into fundamental services for information filtering on numerous Web platforms such as Amazon and Twitter. Recent research has validated the vulnerability of recommender systems to injective attacks [31, 75, 80], which aim to promote the exposure of a target item via injecting limited fake users (refer to Figure 1(a)). Specifically, since recommender models typically utilize Collaborative Filtering (CF) in users' historical interactions to make recommendations, the attackers can fabricate fake user interactions and inject them into open-world Web platforms, so as to induce recommender models to elevate the exposure probability of a target item. As a result, such injective attacks can deliberately amplify traffic to target items, potentially bringing economic, political, or other profits to certain entities. Generally, past literature on injective attacks falls into three main groups: 1) Heuristic attackers [9, 27] that adopt heuristic rules to construct fake users; for instance, Bandwagon Attack [8] increases the co-occurrence of popular items and the target item in fake user interactions; 2) gradient-based attackers [19, 30] that directly adjust the interactions of fake users via gradients to maximize well-designed attack objectives; and 3) neural attackers [34, 35, 51] that optimize neural networks to generate influential fake users for promoting the target item to more users. However, previous work neglects that not all users will be interested in the target item, as each item appeals to its unique audience. For instance, dresses typically appeal more to female buyers. Increasing the recommendations of a target item to all users not only wastes attack resources, but also results in inferior attack performance. Worse still, most studies fail to account for the varying attack difficulty across different users [31] (see Figure 1(b)). Some "harder" users need more fake users to receive exposure to a target item.
Due to ignoring the varying attack difficulty, easy users might receive redundant fake user budgets while hard users might be inadequately attacked, leading to inefficient use of budgets and poor attack performance. Figure 1: Illustration of injective attacks on all users and target users (a) and varying attack difficulty on two users (b), where one fake user may cause different uplifts of the recommendation probabilities (e.g., the same eight fake users produce different uplifts for users A and B). To address the issues, we focus on an interesting recommender attack task — target user attacks, which attempt to expose a target item to a specific user group instead of all users. Moreover, we formulate the varying attack difficulty in causal language. From a causal view, assigning fake user budgets to a target user can be formulated as a treatment, and the probability of recommending a target item to this target user is the outcome. Given a target item, the varying attack difficulty essentially reflects the heterogeneous treatment effects (a.k.a. uplifts; treatment effects and uplifts are exchangeable in this work [1, 76]) on different target users. This is attributed to the different similarities between the target item and each target user's historically liked items. In this light, the key to maximizing attack performance with limited fake user budgets lies in: 1) estimating the heterogeneous treatment effect on each target user with different budgets, and 2) allocating the limited budgets wisely to maximize the treatment effects across all target users, i.e., the overall recommendation probability.
Extensive experiments show the superiority and generalization ability of UBA in various settings, such as different target items, fake user budgets, attack models, and victim models. Moreover, we validate the robustness of UBA in the cases of applying defense models although the defense models are effective to some extent. To ensure reproducibility, we release our code and data at https://github.com/Wcsa23187/UBA. To summarize, our contributions are threefold. \u2022 We highlight the significance of target user attacks and formally inspect the issue of varying attack difficulty on users from a causal perspective. \u2022 We propose the model-agnostic UBA framework, which offers two methods to estimate the heterogeneous treatment effects on target users and calculates the optimal budget allocation to maximize attack performance. \u2022 Extensive experiments reveal the significance of UBA in performing target user attacks across diverse settings. Meanwhile, we validate the robustness of UBA against defense models. 2 RELATED WORK In this section, we introduce closely related concepts and literature on uplift modeling and injective attacks. More related studies may refer to Appendix Section D. \u2022 Uplift Modeling. Uplift, a term commonly used in marketing, usually represents the difference in purchasing actions between customers who receive a promotional offer (the treated group) and those who do not (the control group) [39, 76]. In causal language, uplift essentially quantifies the causal effect of a treatment (e.g., a promotion) on the outcome (e.g., purchasing behaviors). Despite extensive research on uplift modeling in the machine learning and marketing communities [1, 21, 53], the use of uplift modeling in recommendation receives little scrutiny [59, 61, 66]. Initial studies only consider the potential of uplift modeling to regulate the exposure proportion of item categories [69]. By contrast, we define the assigned fake user budgets in injective attacks as the treatment and estimate the difference of recommendation probabilities on target users as the uplifts. Based on the estimated uplifts, we aim to determine the best treatment for budget allocation to maximize overall attack performance. \u2022 Injective Attacks. The objective of injective attacks (a.k.a. shilling attacks) is to promote the recommendations of a target item to all users on the recommender platform [9, 26, 42, 54, 62, 73]. Given a target item, the attacker optimizes fake user interactions, and then the interactions of real users and generated fake users are fed into the victim model for training, improving the recommendation probabilities of the target item to all real users. Formally, give a target item \ud835\udc56in the item set I, and a set of real users U\ud835\udc5fwith their historical interaction matrix \ud835\udc6b\ud835\udc5f\u2208{0, 1}|U\ud835\udc5f|\u00d7|I| where 1 and 0 indicate users\u2019 liked and disliked items, the attacker aims to craft \ud835\udc6b\ud835\udc53\u2208{0, 1}|U\ud835\udc53|\u00d7|I|, the interaction matrix of a set of fake users U\ud835\udc53, for maximizing the attack objective O on a victim \fUplift Modeling for Target User Attacks on Recommender Systems WWW \u201924, May 13\u201317, 2024, Singapore, Singapore recommender model M\ud835\udf03: max \ud835\udc6b\ud835\udc53 O(M\ud835\udf03\u2217, U\ud835\udc5f,\ud835\udc56), s.t. 
\ud835\udf03\u2217= arg min \ud835\udf03 L(M\ud835\udf03, \ud835\udc6b); |U\ud835\udc53| \u2264\ud835\udc41, (1) where the victim recommender model M\ud835\udf03\u2217is well trained via the loss function L(\u00b7) calculated on the interactions of both real and fake users, i.e., \ud835\udc6b= \u0014 \ud835\udc6b\ud835\udc5f \ud835\udc6b\ud835\udc53 \u0015 . Besides, the budget of fake users is limited by a hyper-parameter \ud835\udc41[30, 72, 73]. In particular, we detail three key components, attack objective, attack knowledge, and optimization of \ud835\udc6b\ud835\udc53, as follows. Attack objective. The attack objective O(M\ud835\udf03\u2217, U\ud835\udc5f,\ud835\udc56) is usually defined as enhancing the probabilities of recommending the target item \ud835\udc56to all users U\ud835\udc5fby the victim model M\ud835\udf03\u2217. Generally, it can be evaluated by the hit ratio on the real user set U\ud835\udc5f, where a user is \u201chit\u201d only when the target item \ud835\udc56is ranked into this user\u2019s Top-\ud835\udc3e recommendation list. Attack knowledge. Existing studies have different assumptions for the knowledge available to the attacker, where the knowledge mainly involves the users\u2019 interaction data \ud835\udc6b\ud835\udc5fand the victim recommender model M\ud835\udf03\u2217. Specifically, white-box settings [18, 19, 30] might presume both \ud835\udc6b\ud835\udc5fand the parameters of M\ud835\udf03\u2217are accessible to the attacker. By contrast, the definitions of gray-box settings vary [11, 57, 65]. While they consistently assume the accessible \ud835\udc6b\ud835\udc5f, the usage of M\ud835\udf03\u2217differs [17, 49, 51, 55, 72, 77]. Some work utilizes the recommendation lists from M\ud835\udf03\u2217[17, 49] while some researchers assume M\ud835\udf03\u2217is totally unavailable and only adopt a surrogate model S\ud835\udf19as a replacement for attacking [51, 55, 72, 77]. Optimization of \ud835\udc6b\ud835\udc53. To adjust \ud835\udc6b\ud835\udc53for maximizing the attack objective, existing methods fall into three lines [15, 41, 48]. First, heuristic attackers intuitively increase the co-occurrence of some selected items and the target item \ud835\udc56in \ud835\udc6b\ud835\udc53via some heuristic rules, enhancing the popularity of item \ud835\udc56[31]. However, such methods cannot directly optimize the attack objectives, leading to poor attack performance [19, 30, 32]. Besides, to maximize the attack objectives, gradient-based methods directly optimize the interactions of fake users [19, 30, 32] while neural attackers optimize neural networks to generate fake user interactions [26, 34, 35, 51]. Their optimization process typically utilizes the recommendations of the victim model or a surrogate model for gradient descent [44, 63]. 3 TASK FORMULATION In this section, we formulate the task of target user attacks. Besides, we quantify the attack difficulty across users from a causal view. \u2022 Target User Attacks. Despite the great success of injective attacks, we focus on a novel recommender attack task \u2014 target user attacks, which aim to promote the recommendations of a target item to a group of specific users. This is more reasonable since each target item has its own potential audience. Blindly attacking all users will waste the limited fake user budgets and lead to suboptimal attack performance. 
Under target user attacks, the attacker can freely specify potential users for attacking based on user features, users\u2019 historical interactions, or attack difficulty. Attack objective. Formally, target user attacks change the objective O(M\ud835\udf03\u2217, U\ud835\udc5f,\ud835\udc56) in Eq. (1) to O(M\ud835\udf03\u2217, U\ud835\udc61,\ud835\udc56), where U\ud835\udc61denotes the selected target user group. Attack knowledge. In this work, we adhere to a stricter yet more practical setting: only the interactions of a proportion of real users are accessible, given the fact that the attackers can never collect the interactions of all users. Furthermore, we assume the victim model M\ud835\udf03\u2217is unknown. We consider the situations with (w/) and without (w/o) using a surrogate model S\ud835\udf19. Optimization of \ud835\udc6b\ud835\udc53. Previous methods of injective attacks can be adapted for target user attacks by revising the attack objective to O(M\ud835\udf03\u2217, U\ud835\udc61,\ud835\udc56). In detail, heuristic attackers [8, 9, 27, 37] are able to increase the co-occurrence of target items and the items liked by the target users. Gradient-based and neural attackers may optimize the generation of fake user interactions to maximize O(M\ud835\udf03\u2217, U\ud835\udc61,\ud835\udc56) [30, 34, 51]. For instance, neural attackers [35] can randomly sample a target user as a template for fake user generation to increase the similarity between fake users and the target user, and then forcibly set the fake user feedback on the target item as like, pushing the victim model to recommend the target item to the target user. \u2022 Attack Difficulty. However, these intuitive methods ignore the varying attack difficulty on different target users. As illustrated in Figure 1(b), an increase in the fake user budgets from 4 to 5 might be negligible to uplift the recommendation probabilities on user B while it can significantly enhance that for user A. Gradient-based attackers directly optimize fake user interactions for attacking all target users while neural attackers may randomly sample target users as templates to generate fake users by neural networks. They are averagely assigning fake user budgets to all target users without considering the varying attack difficulty. As a special case, DADA [69] considers the difficulty of attacking each user regarding a target item, yet it only employs a greedy algorithm to emphasize easier users than difficult users. In this way, DADA fails to quantify the attack difficulty across users and achieve optimal budget allocation to target users. Formally, we measure the attack difficulty of a target user \ud835\udc62 regarding a target item \ud835\udc56via a function of recommendation probability w.r.t. the fake user interactions \ud835\udc6b\ud835\udc53, denoted as \ud835\udc4c\ud835\udf03\u2217 \ud835\udc62,\ud835\udc56(\ud835\udc6b\ud835\udc53), which represents the probability of ranking item \ud835\udc56into the Top-\ud835\udc3e recommendation list of user \ud835\udc62by the victim model M\ud835\udf03\u2217. Notably, the fake user interaction matrix \ud835\udc6b\ud835\udc53in \ud835\udc4c\ud835\udf03\u2217 \ud835\udc62,\ud835\udc56(\u00b7) is determined by the attacker with the fake user budget allocation \ud835\udc47. 
From a causal view, we formulate the fake user budget allocation over target users as a multi-dimensional treatment $T = \{t_u\}_{u \in \mathcal{U}_t}$, where $t_u$ satisfies $\sum_{u \in \mathcal{U}_t} t_u \leq N$, $t_u \geq 0$, and $t_u \in \mathbb{Z}$. Given a target item $i$, a victim model $M_{\theta^*}$, and fake user interactions $\boldsymbol{D}_f$ with treatment $T$, the outcome across all target users is formulated as $\{Y^{\theta^*}_{u,i}(\boldsymbol{D}_f(T))\}_{u \in \mathcal{U}_t}$. Given a target item $i$, a set of target users $\mathcal{U}_t$, and $N$ fake user budgets, the objective of target user attacks can be reformulated to consider attack difficulty as follows:

$$\max_{T,\,\boldsymbol{D}_f}\ O(M_{\theta^*}, \mathcal{U}_t, i) = \sum_{u \in \mathcal{U}_t} Y^{\theta^*}_{u,i}(\boldsymbol{D}_f(T)), \quad \text{s.t.}\ \sum_{u \in \mathcal{U}_t} t_u \leq N;\ t_u \geq 0,\ t_u \in \mathbb{Z}\ \text{for any}\ u \in \mathcal{U}_t, \tag{2}$$

where we optimize the fake user interaction matrix $\boldsymbol{D}_f$ to maximize the overall Top-$K$ recommendation probability over all target users. $\boldsymbol{D}_f$ is further affected by the chosen attacker (to keep the notation brief, we omit the notation of attackers in $\boldsymbol{D}_f(T)$) and the budget allocation $T$ across target users, which accounts for the varying attack difficulty. For instance, given a target user $u$ with a budget $t_u$, heuristic attackers can construct $t_u$ fake users with interactions similar to those of user $u$, increasing the co-occurrence probability of the target item and the items liked by user $u$.

[Figure 2: (a) Illustration of UBA estimating $Y^{\theta^*}_{u,i}(\cdot)$ w/ and w/o a surrogate model, where the surrogate model $S_{\varphi}$ serves as a proxy for the victim model $M_{\theta^*}$, and the three-order path number $\boldsymbol{A}^3_{u,i}$ is correlated with the prediction score due to the confounding of the user-item interactions $\boldsymbol{D}$. (b) Correlation visualization between the three-order path numbers of user-item pairs and their prediction scores when $M_{\theta^*}$ is MF, where the correlation coefficients $r = 0.9998 \approx 1$ and $p = 6e{-}86 \ll 0.001$ via the Spearman Rank Correlation Test [71] validate the strong correlation (see similar correlations on more CF models in Appendix Section A.1).]
Gradient-based attackers can optimize the interactions of the $t_u$ fake users specifically to maximize the attack objective of target user $u$. Neural attackers might take the target user $u$ as a template to generate the interactions of the $t_u$ fake users. Beyond evenly assigning fake user budgets, attackers can execute target user attacks more purposefully and effectively by considering the varying attack difficulty in the treatment $T$. To keep the optimization general and simple, we aim to design a model-agnostic solution that first estimates the optimal budget allocation $T^*$ for a given attacker; we can then apply $T^*$ to the attacker for the final optimization of $\boldsymbol{D}_f$.

• Objective. The key to considering the varying attack difficulty for the maximal attack objective lies in 1) estimating $Y^{\theta^*}_{u,i}(\boldsymbol{D}_f(t_u))$ of each target user $u$ w.r.t. different budgets $t_u$ (see the curves in Figure 1(b)), and 2) calculating the optimal treatment $T^*$ to allocate fake users and generate $\boldsymbol{D}_f$ for attacking. Here we change $T$ in $Y^{\theta^*}_{u,i}(\boldsymbol{D}_f(\cdot))$ to $t_u$ to denote how $Y^{\theta^*}_{u,i}(\cdot)$ of target user $u$ changes w.r.t. user $u$'s own budget $t_u$; the mutual interference of fake user budgets among target users is hard to measure and is left for future exploration.

4 UPLIFT-GUIDED BUDGET ALLOCATION

We propose a UBA framework with two methods to estimate the treatment effects $Y^{\theta^*}_{u,i}(\boldsymbol{D}_f(t_u))$ on each target user, and then calculate the optimal treatment $T^*$ for target user attacks. Lastly, we detail how to instantiate UBA on existing attackers.

4.1 Treatment Effect Estimation

Since the victim model $M_{\theta^*}$ in Eq. (2) is unknown to attackers, we propose two methods, w/ and w/o the surrogate model $S_{\varphi}$, to estimate $Y^{\theta^*}_{u,i}(\boldsymbol{D}_f(t_u))$ w.r.t. varying budgets $t_u$.

• Estimation via Simulation Experiments. If $S_{\varphi}$ is available to replace $M_{\theta^*}$, UBA can utilize $S_{\varphi}$ to run simulation experiments for the estimation of $Y^{\theta^*}_{u,i}(\boldsymbol{D}_f(t_u))$ with varying $t_u$. In detail, we repeat $E$ simulation experiments with different random seeds. In each experiment, given a target item $i$, an attacker, and a surrogate model $S_{\varphi}$, we take the following steps: 1) assign the same $t_u$ fake users to all target users; 2) use the attacker to generate the fake user matrix $\boldsymbol{D}_f$ based on the assigned budget; and 3) leverage $\boldsymbol{D}_f$ to attack $S_{\varphi}$ and obtain its Top-$K$ recommendations for each target user. After $E$ experiments with different random seeds for training ($E \approx 10$ in our implementation), we can approximate $Y^{\theta^*}_{u,i}(\boldsymbol{D}_f(t_u))$ via the hit ratio of target user $u$:

$$Y^{\theta^*}_{u,i}(\boldsymbol{D}_f(t_u)) \approx \frac{E'_{t_u}}{E}, \tag{3}$$

where $E'_{t_u}$ is the number of times the target item $i$ is successfully promoted into the Top-$K$ recommendations of target user $u$ using $t_u$ budgets across the $E$ experiments. Similarly, we can vary $t_u \in \{1, 2, \ldots, H\}$ with $H \ll N$ to estimate $Y^{\theta^*}_{u,i}(\boldsymbol{D}_f(t_u))$ under different fake user budgets. (The potential budget of each target user is far less than the total budget $N$ because the $N$ budgets are assigned to a whole group of target users.)
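The following sketch illustrates this Monte-Carlo style estimation loop. It is schematic: the `attack_and_score` callable stands in for one full simulation round (fake user generation plus surrogate retraining), which is assumed rather than reproduced from the authors' code.

```python
import numpy as np

def estimate_treatment_effects(attack_and_score, target_users, target_item,
                               H=5, E=10, k=10):
    """Approximate Y_{u,i}(D_f(t_u)) for t_u = 1..H via Eq. (3).

    attack_and_score(budget, seed): user-supplied callable that runs one
    simulation (generate fake users under `budget`, retrain the surrogate)
    and returns the (n_users, n_items) score matrix.
    Returns {u: length-H array}, entry t-1 = hit ratio under budget t.
    """
    hit_counts = {u: np.zeros(H) for u in target_users}
    for t in range(1, H + 1):
        for seed in range(E):
            scores = attack_and_score(t, seed)
            for u in target_users:
                top_k = np.argpartition(-scores[u], k)[:k]
                hit_counts[u][t - 1] += int(target_item in top_k)
    return {u: c / E for u, c in hit_counts.items()}  # Y ≈ E'_{t_u} / E

# Toy stand-in simulator: a bigger budget inflates the target item's score.
def fake_sim(budget, seed):
    s = np.random.default_rng(seed).random((20, 30))
    s[:, 5] += 0.1 * budget  # budget pushes item 5 up the rankings
    return s

print(estimate_treatment_effects(fake_sim, target_users=[0, 1], target_item=5))
```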
• Estimation via High-order Interaction Path. Although UBA can estimate the treatment effects via simulation experiments, this relies on a reliable surrogate model and requires repeated simulation experiments. To shake off these shackles, we propose another treatment effect estimation method that uses neither surrogate models nor simulation experiments. We first analyze the reasons for the varying attack difficulty across target users. Recommender models, including $M_{\theta^*}$ and $S_{\varphi}$, learn CF embeddings of users and items from interactions, as illustrated in Figure 2(a), and then measure the similarity between these CF embeddings to output prediction scores for item ranking. The varying attack difficulty is attributed to the distinct CF embeddings, and essentially stems from the different similarities w.r.t. interactions. To eliminate the dependence on surrogate models, we consider finding a proxy of interactions for the estimation of $Y^{\theta^*}_{u,i}(\boldsymbol{D}_f(t_u))$. We inspect the number of high-order paths between a target user and a target item in the user-item interaction graph, discovering that it has a strong positive correlation with the user-item prediction scores, as shown in Figure 2(b). Formally, if we define

$$\boldsymbol{A} = \begin{bmatrix} 0 & \boldsymbol{D}_r \\ \boldsymbol{D}_r^{\top} & 0 \end{bmatrix},$$

the three-order path number is the value in $\boldsymbol{A}^3$ indexed by a user and an item, which describes the easiness of connecting this user-item pair via some intermediate users and items (see more explanation in Appendix Section A.1).

Proposition 1. Given a user and an item without historical interactions, their prediction score by a CF model is positively correlated with their three-order path number in $\boldsymbol{A}^3$. (We find that the correlation exists for multiple different order numbers; see Appendix Section A.1. To keep things simple, we select the smallest order for investigation.)

We theoretically analyze the rationality and robustness of Proposition 1 w.r.t. different CF models in Appendix Section A.2. In Appendix Section A.1, we also present extensive experiments to validate the wide existence of such a correlation [45].
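As a concrete illustration of this proxy, the snippet below counts three-order user-item paths by exploiting the block structure of $\boldsymbol{A}^3$; it is a minimal sketch using SciPy sparse matrices, not the paper's released code.

```python
import numpy as np
import scipy.sparse as sp

def three_order_path_numbers(D_r):
    """Return the user-item block of A^3 for the bipartite graph A.

    D_r: (n_users, n_items) binary interaction matrix.
    With A = [[0, D_r], [D_r^T, 0]], the user-item block of A^3 equals
    D_r @ D_r^T @ D_r, i.e., user -> item -> user -> item paths.
    """
    D = sp.csr_matrix(D_r)
    return (D @ D.T @ D).toarray()

# Toy example: 4 users, 5 items.
D_r = np.array([[1, 0, 1, 0, 0],
                [1, 1, 0, 0, 0],
                [0, 1, 1, 1, 0],
                [0, 0, 0, 1, 1]])
P3 = three_order_path_numbers(D_r)
print(P3[0, 3])  # three-order path count between user 0 and item 3
```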
Intuitively, CF models utilize the interaction similarity across users and items for recommendations, and three-order path numbers describe the closeness between users and items in the interaction graph [25]. User-item pairs with larger three-order path numbers are intuitively closer to each other in the CF embedding space, leading to higher prediction scores (see Figure 2(b)). Since the prediction scores are used to rank the Top-$K$ recommendations, the three-order path number is also positively correlated with the Top-$K$ recommendation probability $Y^{\theta^*}_{u,i}(\cdot)$. Moreover, from a causal view in Figure 2(a), the user-item interactions act as a confounder, causing the correlation between three-order path numbers and Top-$K$ recommendation probabilities. Thanks to such a positive correlation, we can approximate $Y^{\theta^*}_{u,i}(\cdot)$ by the weighted three-order path number between user $u$ and item $i$, denoted as $\alpha \cdot (\boldsymbol{A}^3_{u,i})^{\beta}$, where $\alpha$ and $\beta$ are two hyper-parameters that adapt for the correlation. Thereafter, to estimate $Y^{\theta^*}_{u,i}(\boldsymbol{D}_f(t_u))$ with varying budgets $t_u$, we only need to assess $\alpha \cdot (\boldsymbol{A}^3_{u,i})^{\beta}$ after injecting $t_u$ fake users to maximize the three-order path number between $u$ and $i$. By analyzing the relationship between fake user interactions and the three-order path number, we have the following finding.

Proposition 2. The three-order path number between target user $u$ and target item $i$, i.e., $\boldsymbol{A}^3_{u,i}$, is equivalent to the weighted sum over the intermediate users who liked item $i$ in $\boldsymbol{A}$, where the weights are their interaction similarities with target user $u$, i.e., the numbers of mutually liked items.

Please refer to Appendix Section A.3 for the proof of Proposition 2. In light of Proposition 2, given a target user $u$ and a target item $i$, we can construct $t_u$ fake users with the largest interaction similarities with target user $u$ and a "like" interaction with target item $i$. Consequently, we can obtain the optimal three-order path number in $(\boldsymbol{A}')^3$ to estimate $Y^{\theta^*}_{u,i}(\boldsymbol{D}_f(t_u))$ via $Y^{\theta^*}_{u,i}(\boldsymbol{D}_f(t_u)) \approx \alpha \cdot ((\boldsymbol{A}')^3_{u,i})^{\beta}$, where

$$\boldsymbol{A}' = \begin{bmatrix} 0 & 0 & \boldsymbol{D}_r \\ 0 & 0 & \boldsymbol{D}_f \\ \boldsymbol{D}_r^{\top} & \boldsymbol{D}_f^{\top} & 0 \end{bmatrix},$$

i.e., the symmetric interaction matrix with both real and fake users.

4.2 Budget-constrained Treatment Optimization

After estimating the treatment effects $Y^{\theta^*}_{u,i}(\boldsymbol{D}_f(t_u))$, the selection of the treatment $T$ in Eq. (2) becomes a budget-constrained optimization problem [1]. To calculate the optimal treatment $T^*$, we implement a dynamic programming algorithm based on the idea of the knapsack problem (see Appendix Section A.4 for details). Afterward, we can allocate optimal budgets to each target user to enhance existing attackers for superior overall attack performance.
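The referenced appendix is not reproduced here, but a budget-constrained allocation of this kind can be written as a small knapsack-style dynamic program over the estimated curves $Y^{\theta^*}_{u,i}(\boldsymbol{D}_f(t_u))$. The sketch below is one plausible formulation under that assumption, not necessarily the authors' exact algorithm.

```python
import numpy as np

def allocate_budgets(Y, N):
    """Knapsack-style DP for the treatment T* in Eq. (2).

    Y: (n_users, H+1) array, Y[u, t] = estimated hit probability of
       target user u under budget t (Y[u, 0] = 0 by convention).
    N: total fake user budget.
    Returns (best total objective, list of per-user budgets t_u).
    """
    n_users, H1 = Y.shape
    dp = np.zeros(N + 1)  # dp[b] = best objective with budget b so far
    choice = np.zeros((n_users, N + 1), dtype=int)
    for u in range(n_users):
        new_dp = dp.copy()  # t_u = 0 by default
        for b in range(N + 1):
            for t in range(1, min(H1 - 1, b) + 1):
                val = dp[b - t] + Y[u, t]
                if val > new_dp[b]:
                    new_dp[b], choice[u, b] = val, t
        dp = new_dp
    # Backtrack the chosen budget of each user.
    budgets, b = [], N
    for u in range(n_users - 1, -1, -1):
        t = choice[u, b]
        budgets.append(t)
        b -= t
    return dp[N], budgets[::-1]

# Toy example: 3 users, per-user budgets 0..3, diminishing returns.
Y = np.array([[0, .1, .4, .5], [0, .05, .1, .12], [0, .3, .35, .4]])
print(allocate_budgets(Y, N=4))  # allocates 3 to user 0 and 1 to user 2
```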
• Instantiation on Existing Attackers. Given the optimal $T^*$, existing attackers, including heuristic, gradient-based, and neural attackers, can allocate fake users accordingly and utilize their own strategies to construct fake user interactions (see the analysis of Eq. (2) in Section 3 for details). These fake users are subsequently used to attack victim recommender models.

• UBA for Enhancing Recommendation Security. A superior and transparent attacker can always inspire stronger defense methods. Recommender platforms can utilize UBA to improve their defense models through adversarial training, thereby enhancing the security of recommendations.

5 EXPERIMENTS

In this section, we conduct extensive experiments to answer the following research questions:
– RQ1: How does UBA enhance existing attackers in the task of target user attacks?
– RQ2: How does UBA generalize across various settings (e.g., different victim models, budgets, and accessible interactions)?
– RQ3: How do UBA and other attackers perform if defense models are applied?

• Datasets and Metric. We evaluate the attackers on three real-world datasets: MovieLens-1M (ML-1M) [22], Amazon-Game (Amazon) [46], and Yelp [4]. Table 2 displays the statistics of the datasets. In addition to the ML-1M and Amazon datasets commonly used in previous recommender attack work [31, 34], we also select Yelp for our experiments due to its larger scale and sparsity. In this way, we can analyze target user attacks in more diverse scenarios. We treat target items as the positive items and use the widely used metric Hit Ratio@$K$ (HR@$K$) [31] to measure how many target users receive the target item recommendation, where $K = 10$ or $20$ denotes the length of the recommendation lists. Additionally, we introduce NDCG@$K$ and MRR@$K$ to consider the ranking positions of target items in the recommendation lists.

• Data Processing. For data processing, we conduct 10-core filtering on all three datasets to ensure data quality. Besides, all three datasets consist of explicit user feedback such as ratings, which might be required by some existing attackers. We follow the default requirements of the attackers to provide explicit or implicit feedback. For the victim and surrogate recommender models, we employ the common implicit feedback with binary values {0, 1} for recommender training. Following previous studies [25], we map historical interactions with ratings greater than 3 to likes with label 1 and the remaining ones to dislikes with label 0, as sketched below.
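A minimal version of this binarization step, assuming a pandas DataFrame with a `rating` column (the column names are illustrative):

```python
import pandas as pd

ratings = pd.DataFrame({"user": [0, 0, 1], "item": [3, 7, 3],
                        "rating": [5, 2, 4]})
# Ratings > 3 become likes (label 1); the rest become dislikes (label 0).
ratings["label"] = (ratings["rating"] > 3).astype(int)
print(ratings)
```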
• Baselines. We compare with the following baselines: heuristic attackers including Random [27], Segment [8], Bandwagon [8], and Average Attack [29]; the state-of-the-art gradient-based attacker DADA with its variants [31]; and neural attackers including WGAN [3], AIA [51], AUSH [34], and Leg-UP [35]. To adapt to target user attacks, we propose a model-agnostic baseline named "Target" by changing the attack objective from $O(M_{\theta^*}, \mathcal{U}_r, i)$ to $O(M_{\theta^*}, \mathcal{U}_t, i)$ as discussed in Section 3. We implement "Target" and our UBA framework on three competitive backend attackers for comparison. We move the hyper-parameter tuning to Appendix Section B.1.

Table 1: Evaluation on the three real-world datasets when the overall fake user budget is 100. We show the results of HR@10, NDCG@10, and MRR@10. The best results for each backend model are bold and the second-best ones are underlined. * implies the improvements over the best baseline "Target" are statistically significant (p-value < 0.05) under a t-test.

ML-1M:

| Method | Popular HR | Popular NDCG | Popular MRR | Unpopular HR | Unpopular NDCG | Unpopular MRR |
|---|---|---|---|---|---|---|
| Before Attack | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| Random Attack | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| Segment Attack | 0.0200 | 0.0040 | 0.0022 | 0.0200 | 0.0035 | 0.0035 |
| Bandwagon Attack | 0.0200 | 0.0035 | 0.0020 | 0.0000 | 0.0000 | 0.0000 |
| Average Attack | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| WGAN | 0.0200 | 0.0039 | 0.0025 | 0.0000 | 0.0000 | 0.0000 |
| DADA-DICT | 0.0400 | 0.0071 | 0.0073 | 0.0800 | 0.0104 | 0.0089 |
| DADA-DIV | 0.0000 | 0.0000 | 0.0000 | 0.0400 | 0.0057 | 0.0044 |
| DADA | 0.0600 | 0.0112 | 0.0123 | 0.1200 | 0.0156 | 0.0171 |
| AIA | 0.0200 | 0.0061 | 0.0033 | 0.0200 | 0.0127 | 0.0111 |
| AIA +Target | 0.2000 | 0.0244 | 0.0222 | 0.3800 | 0.0428 | 0.0356 |
| AIA +UBA(w/o $S_\varphi$) | 0.2400 | 0.0315 | 0.0300 | 0.4400 | 0.0449 | 0.0400 |
| AIA +UBA(w/ $S_\varphi$) | 0.2600* | 0.0431* | 0.0520* | 0.5800* | 0.0556* | 0.0514* |
| AUSH | 0.0200 | 0.0047 | 0.0050 | 0.0000 | 0.0000 | 0.0000 |
| AUSH +Target | 0.2400 | 0.0349 | 0.0340 | 0.3600 | 0.0531 | 0.0422 |
| AUSH +UBA(w/o $S_\varphi$) | 0.2800 | 0.0380 | 0.0467 | 0.4200 | 0.0646 | 0.0550 |
| AUSH +UBA(w/ $S_\varphi$) | 0.3200* | 0.0474* | 0.0533* | 0.5200* | 0.0902* | 0.0828* |
| Leg-UP | 0.0400 | 0.0062 | 0.0054 | 0.1000 | 0.0024 | 0.0021 |
| Leg-UP +Target | 0.1400 | 0.0187 | 0.0200 | 0.3200 | 0.0539 | 0.0412 |
| Leg-UP +UBA(w/o $S_\varphi$) | 0.1800 | 0.0239 | 0.0225 | 0.3200 | 0.0604 | 0.0525 |
| Leg-UP +UBA(w/ $S_\varphi$) | 0.2600* | 0.0480* | 0.0371* | 0.3600* | 0.0825* | 0.0743* |

Yelp:

| Method | Popular HR | Popular NDCG | Popular MRR | Unpopular HR | Unpopular NDCG | Unpopular MRR |
|---|---|---|---|---|---|---|
| Before Attack | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| Random Attack | 0.0200 | 0.0028 | 0.0022 | 0.0200 | 0.0028 | 0.0022 |
| Segment Attack | 0.0000 | 0.0000 | 0.0000 | 0.0400 | 0.0039 | 0.0040 |
| Bandwagon Attack | 0.0200 | 0.0029 | 0.0025 | 0.0200 | 0.0027 | 0.0020 |
| Average Attack | 0.0200 | 0.0031 | 0.0029 | 0.0200 | 0.0029 | 0.0025 |
| WGAN | 0.0200 | 0.0029 | 0.0025 | 0.0200 | 0.0027 | 0.0020 |
| DADA-DICT | 0.0200 | 0.0036 | 0.0040 | 0.1200 | 0.0169 | 0.0150 |
| DADA-DIV | 0.0000 | 0.0000 | 0.0000 | 0.1400 | 0.0220 | 0.0199 |
| DADA | 0.0400 | 0.0053 | 0.0080 | 0.1600 | 0.0268 | 0.0266 |
| AIA | 0.0000 | 0.0000 | 0.0000 | 0.0600 | 0.0054 | 0.0032 |
| AIA +Target | 0.0000 | 0.0000 | 0.0000 | 0.0600 | 0.0082 | 0.0075 |
| AIA +UBA(w/o $S_\varphi$) | 0.0200 | 0.0029 | 0.0025 | 0.1200 | 0.0191 | 0.0269 |
| AIA +UBA(w/ $S_\varphi$) | 0.1000* | 0.0178* | 0.0250* | 0.1400* | 0.0255* | 0.0279* |
| AUSH | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| AUSH +Target | 0.0800 | 0.0111 | 0.0100 | 0.1000 | 0.0119 | 0.0100 |
| AUSH +UBA(w/o $S_\varphi$) | 0.1000 | 0.0138 | 0.0142 | 0.0800 | 0.0137 | 0.0160 |
| AUSH +UBA(w/ $S_\varphi$) | 0.1400* | 0.0255* | 0.0279* | 0.1600* | 0.0376* | 0.0533* |
| Leg-UP | 0.0600 | 0.0078 | 0.0067 | 0.0400 | 0.0043 | 0.0050 |
| Leg-UP +Target | 0.1600 | 0.0251 | 0.0229 | 0.1800 | 0.0302 | 0.0300 |
| Leg-UP +UBA(w/o $S_\varphi$) | 0.2000 | 0.0283 | 0.0222 | 0.2600 | 0.0411 | 0.0433 |
| Leg-UP +UBA(w/ $S_\varphi$) | 0.2400* | 0.0365* | 0.0342* | 0.2800* | 0.0411* | 0.0466* |

Amazon:

| Method | Popular HR | Popular NDCG | Popular MRR | Unpopular HR | Unpopular NDCG | Unpopular MRR |
|---|---|---|---|---|---|---|
| Before Attack | 0.0200 | 0.0035 | 0.0020 | 0.0000 | 0.0000 | 0.0000 |
| Random Attack | 0.2000 | 0.0235 | 0.0199 | 0.2200 | 0.0262 | 0.0220 |
| Segment Attack | 0.1600 | 0.0195 | 0.0160 | 0.1400 | 0.0180 | 0.0155 |
| Bandwagon Attack | 0.2000 | 0.0245 | 0.0222 | 0.1600 | 0.0204 | 0.0177 |
| Average Attack | 0.1800 | 0.0217 | 0.0200 | 0.1400 | 0.0180 | 0.0355 |
| WGAN | 0.1400 | 0.0180 | 0.0155 | 0.1800 | 0.0227 | 0.0225 |
| DADA-DICT | 0.4200 | 0.0517 | 0.0420 | 0.3200 | 0.0448 | 0.0400 |
| DADA-DIV | 0.2600 | 0.0333 | 0.0260 | 0.3200 | 0.0473 | 0.0457 |
| DADA | 0.5400 | 0.0667 | 0.0540 | 0.4400 | 0.0576 | 0.0488 |
| AIA | 0.2800 | 0.0375 | 0.0311 | 0.2800 | 0.0393 | 0.0350 |
| AIA +Target | 0.5800 | 0.0780 | 0.0725 | 0.6400 | 0.0846 | 0.0711 |
| AIA +UBA(w/o $S_\varphi$) | 0.6600 | 0.0960 | 0.0942 | 0.7400 | 0.1007 | 0.0925 |
| AIA +UBA(w/ $S_\varphi$) | 0.7200* | 0.1115* | 0.1200* | 0.8200* | 0.1162* | 0.1171* |
| AUSH | 0.2600 | 0.0347 | 0.0288 | 0.3200 | 0.0448 | 0.0400 |
| AUSH +Target | 0.6400 | 0.0887 | 0.0800 | 0.7400 | 0.1064 | 0.1057 |
| AUSH +UBA(w/o $S_\varphi$) | 0.7400 | 0.1064 | 0.1057 | 0.7400 | 0.1137 | 0.1233 |
| AUSH +UBA(w/ $S_\varphi$) | 0.8600* | 0.1299* | 0.1433* | 0.8000* | 0.1142* | 0.1142* |
| Leg-UP | 0.0400 | 0.0063 | 0.0050 | 0.2000 | 0.0235 | 0.0199 |
| Leg-UP +Target | 0.6000 | 0.0742 | 0.0600 | 0.5000 | 0.0676 | 0.0625 |
| Leg-UP +UBA(w/o $S_\varphi$) | 0.7400 | 0.0961 | 0.0822 | 0.6800 | 0.0887 | 0.0755 |
| Leg-UP +UBA(w/ $S_\varphi$) | 0.8600* | 0.1216* | 0.1228* | 0.7600* | 0.1086* | 0.1085* |
Table 2: Statistics of the three datasets.

| Dataset | \|U_r\| | \|I\| | #Interactions | Sparsity | K-core |
|---|---|---|---|---|---|
| ML-1M | 5950 | 3702 | 567,533 | 0.257% | 10 |
| Amazon | 3179 | 5600 | 38,596 | 0.216% | 10 |
| Yelp | 54632 | 34474 | 1,334,942 | 0.070% | 10 |

• Victim and Surrogate Models. Following [31], we choose the representative MF [28], NCF [23], and LightGCN [25] as victim models and utilize the simplest model, MF, as the surrogate model. Table 1 shows the results with LightGCN as the victim model due to space limitations. The robustness of attacking MF and NCF is analyzed in Figure 4.

• Selection of Target Items. To verify the robustness of UBA on different target items, we test UBA on popular items and unpopular items, respectively, because the attack performance on popular and unpopular items might differ. We divide all items into five groups according to their popularity, i.e., the number of historical interactions, and then randomly select the popular and the unpopular target items from the most popular group and the most unpopular group, respectively. For each target item, we run five attack processes with different random seeds and report the average performance.

• Selection of Target Users. Notably, target users can be specified by user IDs, attributes (e.g., gender), and interactions. In this work, we try to find the potential target users who might be interested in the target item. To achieve this, we select all users who have interacted with the target item category. For example, given an action movie as the target item, we find all users who have interacted with at least one action movie. Thereafter, we rank all the selected users by their number of interactions with the target item category. Only the users whose number of interactions with the target item category is smaller than a threshold (10) are further selected, from which we randomly sample users who are hard to attack into the target user group. In this way, we can select target users who are possibly interested in the item category yet hard to attack. We usually select 50 target users in our experiments, and we analyze the effect of different target user numbers in Table 4.

5.1 Target User Attack Performance

• Overall Comparison (RQ1). In Table 1, we report the attack performance on target users. As an extension, we present the attack results on all users in Appendix Section C.1. From Table 1, we have the following findings. 1) Superiority of DADA and "Target". Most attackers (heuristic and neural ones) cannot achieve satisfactory performance on target user attacks.
By contrast, DADA with its variants and "Target" exhibit superior results, revealing the importance of distinguishing users for attacking. DADA and DADA-DICT utilize a greedy algorithm to allocate more budgets to easy users, while "Target" concentrates the fake user budget on attacking target users. 2) Effectiveness of UBA. UBA, both w/ and w/o $S_\varphi$, significantly enhances the attack performance of the three backend attackers (AIA, AUSH, and Leg-UP) by a large margin on all three datasets. In addition, UBA also surpasses "Target", further validating the superiority of considering the varying attack difficulty. By estimating the attack difficulty across target users, UBA can rationally allocate the fake user budget to maximize the overall recommendation probabilities. Users with large uplifts of recommendation probabilities are favored, thus enhancing the attack performance. 3) UBA(w/ $S_\varphi$) outperforms UBA(w/o $S_\varphi$). This is reasonable, as the surrogate model $S_\varphi$ can assist in accurately estimating the attack difficulty through simulation experiments. $S_\varphi$ may serve as a reliable substitute for the victim model $M_{\theta^*}$ in such estimation because both commonly leverage CF information for recommendations. Despite the better performance of UBA(w/ $S_\varphi$), UBA(w/o $S_\varphi$) does not need the simulation experiments, reducing the computational cost. As such, UBA(w/o $S_\varphi$) is also a favorable approach that balances effectiveness and efficiency.

• Varying Fake User Budgets (RQ2). Figure 3 depicts the attack performance on ML-1M with varying budgets. The results on Amazon and Yelp show similar trends and are omitted to save space. From the figure, we find that: 1) under different budgets, UBA w/ and w/o $S_\varphi$ usually achieves better attack performance than the original method and "Target" on the three backend attackers, which verifies the robustness of UBA w.r.t. fake user budgets; and 2) the relative improvements from "Target" to UBA are significantly large when the budget is small (e.g., budget = 50). Such observations on three attackers demonstrate that UBA holds high practical value in real-world recommender attack scenarios, given that the number of fake users an attacker can manage is often very limited [31].

• Different Victim Models (RQ2). Figure 4 visualizes the attack results using MF and NCF as victim models on ML-1M. Inspecting Figure 4 and the results of LightGCN in Table 1, we make the following observations. 1) UBA shows better performance than "Origin" and "Target" in most cases, indicating the good generalization ability of UBA across different victim models. Indeed, the generalization of UBA lies in the accurate estimation of the attack difficulty on target users, i.e., the heterogeneous treatment effects. By exploiting the CF information with and without surrogate models, our proposed UBA is theoretically applicable to most CF victim models. 2) The hit ratios on MF are significantly smaller than those on NCF and LightGCN. This phenomenon commonly exists across different backend attackers and attack strategies.
We attribute the possible reason to the fact that NCF and LightGCN utilize more advanced neural networks to fully exploit the CF information in user-item interactions, resulting in a more severe fitting of fake user interactions. Consequently, the fake users lead to a higher attack performance on NCF and LightGCN. This finding is critical, inspiring us to consider security issues when devising advanced recommender models.

• Effect of the Proportions of Accessible Interactions (RQ2). The proportion of accessible interactions refers to the ratio of user interactions that the attacker can access. Both the estimation of treatment effects and the generation of fake users by the attackers use these real user interactions. In real-world attack scenarios, attackers can only collect a limited portion of real users' interactions; using the interactions of partial users therefore demonstrates the practicality of the attackers. We show the effects of different proportions of interactions in Table 3. From this table, we observe the following: 1) as the proportion increases, the attack performance shows a growing trend, indicating that access to more user interactions is beneficial for attackers; and 2) the two UBA methods usually achieve better performance than the baselines, validating the robustness of UBA under varying proportions of accessible interactions.

Table 3: Performance with different proportions of user interactions accessible to attackers.

| Method | 10% HR@10 | 10% HR@20 | 20% HR@10 | 20% HR@20 | 50% HR@10 | 50% HR@20 | 80% HR@10 | 80% HR@20 |
|---|---|---|---|---|---|---|---|---|
| Before Attack | 0.02 | 0.02 | 0.00 | 0.02 | 0.00 | 0.02 | 0.00 | 0.02 |
| Random Attack | 0.00 | 0.02 | 0.00 | 0.00 | 0.00 | 0.02 | 0.02 | 0.02 |
| Segment Attack | 0.00 | 0.02 | 0.02 | 0.02 | 0.02 | 0.02 | 0.02 | 0.02 |
| Bandwagon Attack | 0.02 | 0.02 | 0.02 | 0.02 | 0.02 | 0.02 | 0.00 | 0.04 |
| Average Attack | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.02 | 0.00 | 0.00 |
| WGAN | 0.00 | 0.02 | 0.02 | 0.02 | 0.00 | 0.02 | 0.00 | 0.02 |
| DADA-DICT | 0.02 | 0.04 | 0.04 | 0.04 | 0.00 | 0.04 | 0.04 | 0.04 |
| DADA-DIV | 0.02 | 0.02 | 0.00 | 0.04 | 0.02 | 0.04 | 0.02 | 0.08 |
| DADA | 0.04 | 0.04 | 0.06 | 0.10 | 0.06 | 0.12 | 0.08 | 0.12 |
| AIA | 0.00 | 0.02 | 0.02 | 0.02 | 0.02 | 0.02 | 0.02 | 0.06 |
| AIA +Target | 0.12 | 0.28 | 0.20 | 0.38 | 0.22 | 0.36 | 0.20 | 0.42 |
| AIA +UBA(w/o $S_\varphi$) | 0.22 | 0.36 | 0.24 | 0.40 | 0.22 | 0.42 | 0.28 | 0.44 |
| AIA +UBA(w/ $S_\varphi$) | 0.22 | 0.42 | 0.26 | 0.44 | 0.30 | 0.48 | 0.34 | 0.48 |
| AUSH | 0.00 | 0.04 | 0.02 | 0.02 | 0.04 | 0.06 | 0.04 | 0.08 |
| AUSH +Target | 0.16 | 0.34 | 0.24 | 0.36 | 0.24 | 0.36 | 0.30 | 0.40 |
| AUSH +UBA(w/o $S_\varphi$) | 0.20 | 0.42 | 0.28 | 0.40 | 0.26 | 0.42 | 0.32 | 0.44 |
| AUSH +UBA(w/ $S_\varphi$) | 0.28 | 0.44 | 0.32 | 0.42 | 0.30 | 0.46 | 0.28 | 0.44 |
| Leg-UP | 0.02 | 0.02 | 0.04 | 0.06 | 0.04 | 0.06 | 0.06 | 0.10 |
| Leg-UP +Target | 0.10 | 0.14 | 0.14 | 0.26 | 0.12 | 0.30 | 0.14 | 0.32 |
| Leg-UP +UBA(w/o $S_\varphi$) | 0.16 | 0.22 | 0.18 | 0.32 | 0.16 | 0.36 | 0.20 | 0.36 |
| Leg-UP +UBA(w/ $S_\varphi$) | 0.20 | 0.28 | 0.26 | 0.34 | 0.30 | 0.36 | 0.32 | 0.44 |
Table 4: Performance with different numbers of target users.

| Method | 50 HR@10 | 50 HR@20 | 100 HR@10 | 100 HR@20 | 500 HR@10 | 500 HR@20 |
|---|---|---|---|---|---|---|
| Before Attack | 0.00 | 0.02 | 0.04 | 0.06 | 0.00 | 0.02 |
| Random Attack | 0.00 | 0.00 | 0.10 | 0.14 | 0.10 | 0.15 |
| Segment Attack | 0.02 | 0.02 | 0.02 | 0.12 | 0.15 | 0.17 |
| Bandwagon Attack | 0.02 | 0.02 | 0.14 | 0.20 | 0.13 | 0.18 |
| Average Attack | 0.00 | 0.00 | 0.10 | 0.13 | 0.10 | 0.11 |
| WGAN | 0.02 | 0.02 | 0.05 | 0.12 | 0.12 | 0.18 |
| DADA-DICT | 0.04 | 0.04 | 0.31 | 0.42 | 0.30 | 0.53 |
| DADA-DIV | 0.00 | 0.04 | 0.20 | 0.33 | 0.29 | 0.41 |
| DADA | 0.06 | 0.10 | 0.30 | 0.44 | 0.28 | 0.55 |
| AIA | 0.02 | 0.02 | 0.18 | 0.23 | 0.17 | 0.30 |
| AIA +Target | 0.20 | 0.38 | 0.20 | 0.38 | 0.25 | 0.39 |
| AIA +UBA(w/o $S_\varphi$) | 0.24 | 0.40 | 0.33 | 0.51 | 0.31 | 0.47 |
| AIA +UBA(w/ $S_\varphi$) | 0.26 | 0.44 | 0.44 | 0.70 | 0.38 | 0.55 |
| AUSH | 0.02 | 0.02 | 0.23 | 0.32 | 0.20 | 0.31 |
| AUSH +Target | 0.24 | 0.36 | 0.31 | 0.45 | 0.28 | 0.36 |
| AUSH +UBA(w/o $S_\varphi$) | 0.28 | 0.40 | 0.44 | 0.56 | 0.34 | 0.39 |
| AUSH +UBA(w/ $S_\varphi$) | 0.32 | 0.42 | 0.59 | 0.73 | 0.39 | 0.55 |
| Leg-UP | 0.04 | 0.06 | 0.22 | 0.36 | 0.21 | 0.31 |
| Leg-UP +Target | 0.14 | 0.26 | 0.30 | 0.33 | 0.26 | 0.36 |
| Leg-UP +UBA(w/o $S_\varphi$) | 0.18 | 0.32 | 0.41 | 0.50 | 0.40 | 0.60 |
| Leg-UP +UBA(w/ $S_\varphi$) | 0.26 | 0.34 | 0.55 | 0.62 | 0.43 | 0.59 |

• Varying Numbers of Target Users (RQ2). In Table 4, we evaluate the attack performance when increasing the number of target users while maintaining the same fake user budget. From this table, we observe that: 1) with increasing target user numbers, the two UBA methods accomplish better attack results than the baselines, demonstrating that UBA can handle attacks on different numbers of target users; and 2) the two UBA methods retain relatively high hit ratios with limited fake user budgets even when the number of target users is large, thereby guaranteeing attack effectiveness when attackers seek to enlarge the attack scope.

[Figure 3: Performance comparison w.r.t. HR@10 under different attack budgets.]

[Figure 4: Generalization of UBA w.r.t. HR@10 across different victim models (left panel: MF; right panel: NCF; curves: Origin, Target, UBA(w/o $S_\varphi$), and UBA(w/ $S_\varphi$) on AIA, AUSH, and Leg-UP).]

[Figure 5: Case study of the budget allocation on five target users. UBA allocates fake users more wisely than "Target" to maximize the overall recommendation probability.]

• Case Study. To intuitively understand how UBA effectively utilizes fake user budgets to enhance attack effectiveness, we conduct a case study. We select five target users from ML-1M to compare the difference in fake user allocation between "Target" and UBA. Specifically, in Figure 5, given a target item, we show the varying recommendation probabilities of the five target users under different fake user budgets, as estimated by UBA(w/ $S_\varphi$). It is worth noting that "Target" randomly allocates fake user budgets, while UBA maximizes the overall recommendation probability for budget allocation. From Figure 5, we observe the following: 1) UBA allocates the limited budgets to users with larger uplifts, leading to higher overall recommendation probabilities. For instance, for "User 1" in the left plot, increasing the budget from 2 to 3 results in an increase of 0.2 in the recommendation probability, significantly enhancing the probability of a successful attack.
And 2) "Target", due to its random allocation of fake user budgets, usually assigns 1 to 3 fake user budgets to most users, leading to a waste of fake user budgets on users with constantly low recommendation probabilities (e.g., "User 2" and "User 4" in the right plot of Figure 5).

Table 5: Attack performance under two defense models.

| Backend | Defense | Metric | Origin | Detector | +Target Origin | +Target Detector | +UBA(w/o $S_\varphi$) Origin | +UBA(w/o $S_\varphi$) Detector | +UBA(w/ $S_\varphi$) Origin | +UBA(w/ $S_\varphi$) Detector |
|---|---|---|---|---|---|---|---|---|---|---|
| AIA | PCA | HR@10 | 0.02 | 0.02 | 0.20 | 0.14 | 0.26 | 0.20 | 0.24 | 0.22 |
| AIA | PCA | HR@20 | 0.02 | 0.02 | 0.38 | 0.28 | 0.44 | 0.32 | 0.40 | 0.32 |
| AIA | FAP | HR@10 | 0.02 | 0.02 | 0.20 | 0.08 | 0.26 | 0.10 | 0.24 | 0.12 |
| AIA | FAP | HR@20 | 0.02 | 0.02 | 0.38 | 0.18 | 0.44 | 0.12 | 0.40 | 0.20 |
| AUSH | PCA | HR@10 | 0.02 | 0.02 | 0.24 | 0.20 | 0.30 | 0.18 | 0.28 | 0.22 |
| AUSH | PCA | HR@20 | 0.02 | 0.02 | 0.36 | 0.36 | 0.42 | 0.32 | 0.40 | 0.34 |
| AUSH | FAP | HR@10 | 0.02 | 0.02 | 0.24 | 0.02 | 0.30 | 0.04 | 0.28 | 0.08 |
| AUSH | FAP | HR@20 | 0.02 | 0.02 | 0.36 | 0.10 | 0.42 | 0.10 | 0.40 | 0.14 |
| Leg-UP | PCA | HR@10 | 0.04 | 0.00 | 0.14 | 0.12 | 0.26 | 0.12 | 0.18 | 0.14 |
| Leg-UP | PCA | HR@20 | 0.06 | 0.00 | 0.26 | 0.24 | 0.34 | 0.26 | 0.32 | 0.24 |
| Leg-UP | FAP | HR@10 | 0.04 | 0.00 | 0.14 | 0.04 | 0.26 | 0.06 | 0.18 | 0.08 |
| Leg-UP | FAP | HR@20 | 0.06 | 0.00 | 0.26 | 0.08 | 0.34 | 0.08 | 0.32 | 0.14 |

5.2 Defense Against Target User Attacks (RQ3)

In this section, we explore the ability of existing defense models to counter target user attackers. We examine two representative unsupervised defense models, PCA [40] and FAP [78]. PCA is the most classic model for detecting a group of fake users under injective attacks, and FAP is a unified framework for fake user detection based on a fraudulent action propagation algorithm. These defense models usually detect fake users and exclude the detected users from recommender training (a simplified detection sketch is given at the end of this section). Under the defense models, we present the attack results of the three backend attackers with "Target" and UBA on ML-1M in Table 5. From the table, we observe that: 1) both PCA and FAP decrease the performance of all attackers, indicating their usefulness in defending against target user attackers. In particular, FAP achieves superior defense over PCA, attributable to its proficient propagation algorithm on the user-item interaction graph. However, 2) even with the defense models deployed, UBA w/ and w/o $S_\varphi$ shows generally higher hit ratios than the vanilla attackers and "Target". This possibly validates the capacity of UBA to allocate budgets to the target users who are more interested in the target item, making the detection of fake users challenging. Future research could benefit from these two observations, especially inspiration from FAP, to devise defense models specially tailored to target user attackers.

• More Experimental Results. Due to space limitations, we provide more experimental analysis in the Appendix. First, in addition to target user attacks, we present the attack results on all users in Section C.1. We study the results of different victim models and defense models on all users in Sections C.2 and C.3. Besides, we only report the performance of promoting popular items in Figure 3 to save space; the results on unpopular items are in Section C.4.
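As a simplified illustration of the unsupervised detection idea behind such defenses, the sketch below flags users whose interaction vectors are near-duplicates of each other, a typical signature of template-based fake user injection. This is a plainly simpler heuristic in the same spirit as the PCA defense cited above [40], not a reproduction of its algorithm.

```python
import numpy as np

def flag_near_duplicates(R, sim_threshold=0.95):
    """Flag users whose interaction vectors are near-duplicates of others.

    R: (n_users, n_items) rating/interaction matrix.
    Group-injected fake users are often generated from shared templates,
    so unusually high pairwise cosine similarity is a simple red flag.
    Returns indices of suspected users.
    """
    X = R / (np.linalg.norm(R, axis=1, keepdims=True) + 1e-12)
    sim = X @ X.T
    np.fill_diagonal(sim, 0.0)  # ignore self-similarity
    return np.where((sim > sim_threshold).any(axis=1))[0]

# Toy demo: 45 random users plus 5 copies of one injected template.
rng = np.random.default_rng(0)
R = rng.integers(0, 6, size=(50, 40)).astype(float)
R[45:] = R[45]
print(flag_near_duplicates(R))  # expected to flag users 45-49
```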
6" + }, + { + "url": "http://arxiv.org/abs/2311.08975v2", + "title": "Constraints on cosmic star formation history via a new modeling of the radio luminosity function of star-forming galaxies", + "abstract": "Radio wavelengths offer a unique possibility to trace the total\nstar-formation rate (SFR) in galaxies, both obscured and unobscured. To probe\nthe dust-unbiased star-formation history, an accurate measurement of the radio\nluminosity function (LF) for star-forming galaxies (SFGs) is crucial. We make\nuse of an SFG sample (5900 sources) from the Very Large Array (VLA) COSMOS 3\nGHz data to perform a new modeling of the radio LF. By integrating the\nanalytical LF, we aim to calculate the history of the cosmic SFR density (SFRD)\nfrom $z\\sim5$ onwards. For the first time, we use both models of the pure\nluminosity evolution (PLE) and joint luminosity+density evolution (LADE) to fit\nthe LFs directly to the radio data using a full maximum-likelihood analysis,\nconsidering the sample completeness correction. We also incorporate updated\nobservations of local radio LFs and radio source counts into the fitting\nprocess to obtain additional constraints. We find that the PLE model cannot be\nused to describe the evolution of the radio LF at high redshift ($z>2$). By\nconstruct, our LADE models can successfully fit a large amount of data on radio\nLFs and source counts of SFGs from recent observations. We therefore conclude\nthat density evolution is genuinely indispensable in modeling the evolution of\nSFG radio LFs. Our SFRD curve shows a good fit to the SFRD points derived by\nprevious radio estimates. In view of the fact that our radio LFs are not\nbiased, as opposed those of previous studies performed by fitting the $1/V_{\\rm\nmax}$ LF points, our SFRD results should be an improvement on these previous\nestimates. Below $z\\sim1.5$, our SFRD matches a published multiwavelength\ncompilation, while our SFRD turns over at a slightly higher redshift\n($2 3 (Gruppioni et al. 2013; Rowan-Robinson et al. 2016; Novak et al. 2017; Gruppioni et al. 2020). Such uncertainty in the evolution of the SFRD at early cosmic epochs hinders our understanding of the core mechanism that governs the star formation rate (SFR) histories of individual galaxies. The SFR of galaxies can be traced at multiple wavebands, each tracer having its own advantages and disadvantages (Kennicutt Jr 1998). In dust-free environments, ultraviolet (UV) light originating primarily from young massive stars serves as the most direct tracer of SFR. UV light can be used to constrain the unobscured star formation out to very high redshifts (e.g., McLure et al. 2013; Bowler et al. 2015; Finkelstein et al. 2015; McLeod et al. 2015; Parsa et al. 2016; Oesch et al. 2018; Ono et al. 2018; Adams et al. 2020; Bowler et al. 2020; Bouwens et al. 2021). However, UV observations suffer from dust absorption, which means the SFR measurements made at these wavelengths are underestimated (e.g., Smail et al. 1997; Bouwens et al. 2009; Riechers et al. 2013; Dudzevičiūtė et al. 2020). When the dust absorbs UV radiation, it gets heated and reradiates the energy at far-infrared (FIR) wavelengths. Therefore, FIR emission is ideal for tracing SFR in dust-rich environments (see Kennicutt Jr 1998).
Unfortunately, FIR observations can suffer from poor resolution and source blending. Deep radio continuum observations are now believed to be very promising tracers; they offer a unique possibility to trace the total SFR in galaxies, both obscured and unobscured. As such, they may provide the most robust measurement of the star-formation history of the Universe (Jarvis et al. 2015). Radio continuum emission, which is not affected by dust obscuration, is also an end product of the formation of massive stars (e.g., van der Vlugt et al. 2022). After these short-lived ($\tau \leq 3 \times 10^7$ yr, Matthews et al. 2021a) stars undergo supernova explosions, the expanding remnants can accelerate cosmic-ray electrons and give rise to synchrotron radiation at typical frequencies below 30 GHz (e.g., Sadler et al. 1989; Condon 1992; Clemens et al. 2008; Tabatabaei et al. 2017). Radio emission triggered by the above process is empirically found to correlate well with the FIR emission of star-forming galaxies (SFGs), known as the FIR-radio correlation. This correlation holds over five orders of magnitude in luminosity and extends to high redshifts (Helou et al. 1985; Yun et al. 2001; Bell 2003), although its redshift evolution is controversial (Jarvis et al. 2010; Sargent et al. 2010; Magnelli et al. 2015; Calistro Rivera et al. 2017; Delhaize et al. 2017). The FIR-radio correlation can be used to calibrate radio luminosity as a tracer of SFR (Condon 1992). In the past several years, deep radio surveys reaching sub-milli-Jansky (mJy) detection limits have emerged as a powerful tool to investigate the cosmic evolution of SFGs (e.g., van der Vlugt et al. 2022; Enia et al. 2022; Malefahlo et al. 2022; Bonato et al. 2021a,b; Ocran et al. 2020b; Upjohn et al. 2019; Ceraj et al. 2018; Bonato et al. 2017; Novak et al. 2017; Smolčić et al. 2009). These studies generally measured the radio luminosity functions (LFs) of SFGs; the SFRD can then be estimated by taking the luminosity-weighted integral of the radio LF (e.g., van der Vlugt et al. 2022). As for the form of the radio LFs, most of these studies assumed pure luminosity evolution (PLE; e.g., Smolčić et al. 2009; Novak et al. 2017; Ocran et al. 2020b; Malefahlo et al. 2022). Very recently, van der Vlugt et al. (2022) combined the COSMOS-XS survey and Very Large Array (VLA)-COSMOS 3 GHz data sets to constrain a radio LF with both luminosity and density evolution. The analytical LFs from these studies are obtained by fitting the LF points given by the Schmidt (1968) $1/V_{\rm max}$ estimator. This semi-parametric method has also been adopted by almost all existing studies (e.g., Smolčić et al. 2009; Novak et al. 2017; Cochrane et al. 2023). However, given the ordinary precision of the $1/V_{\rm max}$ estimate, the LF points themselves have errors, and fitting to them will propagate the uncertainties to the analytical LFs. In addition, the result would depend on the choice of binning in the $1/V_{\rm max}$ method (see Fan et al. 2001). We believe that a more reliable approach to obtaining the analytical LFs is to use a full maximum-likelihood analysis (e.g., Willott et al. 2001). In the present paper, we make use of the VLA-COSMOS 3 GHz data (Smolčić et al. 2017a) to measure the radio LFs of SFGs. We use both models of PLE and joint density+luminosity evolution to fit the SFG LFs directly to the radio data using a full maximum-likelihood analysis.
We aim to perform a comprehensive parametric study of the radio LF of SFGs by means of constraints from multiple observational data. Finally, we can probe the dust-unbiased SFRD up to a redshift of $z \sim 5$.

[Fig. 1: Redshift distribution (top) and the scatter plot (bottom) of our SFG sample. The red dashed curve indicates the flux limit line $f^{\rm lim}_{1.4\,\rm GHz}(z)$.]

The structure of the present paper is outlined below. In Section 2, we briefly describe the data used. In Section 3, we present the method used to constrain the LFs with redshift. In Section 4, we derive our radio LF evolution through cosmic time and compare it to those in the literature. In Section 5, we calculate the evolution of the cosmic SFRD using the LF models we derived and compare it to the literature. In Section 6, we summarize our findings and conclusions. Throughout the paper, we use the flat concordance Lambda cold dark matter ($\Lambda$CDM) cosmology with the following parameters: Hubble constant $H_0 = 70\,{\rm km\,s^{-1}\,Mpc^{-1}}$, dark energy density $\Omega_\Lambda = 0.7$, and matter density $\Omega_m = 0.3$. We assume the Chabrier (2003) initial mass function (IMF) to calculate SFRs. We assume a simple power-law radio spectrum for SFGs, $F_\nu \propto \nu^{-\alpha}$, where $F_\nu$ is the flux density at frequency $\nu$ and $\alpha$ is the spectral index.

2. Sample

In our study, we use the same sample of SFGs as presented in Novak et al. (2017), which was compiled from the continuum data and source catalog release of the VLA-COSMOS 3 GHz Large Project survey (Smolčić et al. 2017a). The sample of SFGs was selected via radio emission and complemented with ancillary data from the comprehensive multiwavelength coverage of COSMOS. The data analysis and multiband association procedure are fully described in Novak et al. (2017), and we refer readers to that publication for a complete description. Here we summarize some key points about the sample. The VLA-COSMOS 3 GHz Large Project survey utilized 384 hours of VLA A+C array observations in the S band to obtain radio data. The survey reached a uniform rms noise of 2.3 µJy beam$^{-1}$ and an angular resolution of 0″.75 across the 2 square degrees of COSMOS.
All the sources have a (spectroscopic or photometric) redshift and the rest-frame 1.4 GHz luminosity. The redshift distribution of the SFG sample as well as its scatter plot are shown in Figure 1. The red dashed curve indicates the 1.4 GHz flux limit line defined as flim 1.4GHz(z) = 4\u03c0D2 L (1 + z)1\u2212\u03b1 3 GHz 1.4 GHz !\u03b1 Flim 3GHz, (2) where DL represents the luminosity distance at redshift z, Flim 3GHz = 11.5\u00b5Jy is the 5\u03c3 detection limit of the survey at 3 GHz, and the spectral index \u03b1 is set to 0.7. We have excluded all sources below the flux limit line, and the total number of sources used in this work is 5900. 3. Methods 3.1. Luminosity function and likelihood function The LF \u03a6(z, L) is a measurement of the number of sources per unit comoving volume per unit logarithmic luminosity interval: \u03a6(z, L) = d2N dVd log10 L. (3) Given an analytical form with parameters \u03b8 for the LF, \u03a6(z, L|\u03b8), the maximum-likelihood solution to \u03b8 is obtained by minimizing the negative logarithmic likelihood function S . Following Marshall et al. (1983) and (Fan et al. 2001), S can be written as S = \u22122 n X i ln[\u03a6(zi, Li)p(zi, Li)] + 2 ZZ W \u03a6(z, L)p(z, L)\u2126dV dz dzdL, (4) where p(z, L) is the selection probability of the SFG as a function of redshift and luminosity, and W is the survey region. The inclusion of the selection probability in equation (4) accounts for the fact that the sample is incomplete near the flux limit. The symbol \u2126represents the solid angle covered by the survey, and dV/dz denotes the differential comoving volume per unit solid angle, as defined by Hogg (1999). For our SFG sample, p(z, L) can be estimated by p(z, L) = Cradio[F3 GHz(z)] \u00d7 Copt(z), (5) where Cradio is the completeness of the VLA-COSMOS 3 GHz radio catalog as a function of the flux density F3 GHz, and Copt is the completeness owing to radio sources without assigned optical-NIR counterparts (Novak et al. 2017). We adopt the calculations of Cradio and Copt given by Novak et al. (2017), and refer the interested reader to their Fig. 2 for more details. To estimate the integration term in Equation (4), one needs to find the function values for p(z, L) at given pairs of (z, L). We achieve this using an interpolation method. Firstly, we set a twodimensional (2D) grid of 50 \u00d7 50 in the log L \u2212z space. For each grid point (log Li, zi), we can derive its flux density Fi from Li by assuming \u03b1 = 0.7. We can then estimate the corresponding Cradio and Copt through a one-dimensional linear interpolation method using the observed value from Novak et al. (2017). Finally, we have the values for p(z, L) at the 50 \u00d7 50 grid points, which are used to perform the 2D linear interpolation to estimate the function value of p(z, L). Following the method of Willott et al. (2001) and Yuan et al. (2017), we incorporate the most recent observations of the local radio LFs and source counts (see section 3.2) into the fitting process to obtain additional constraints. The local radio LF (LRLF) and the source counts (SCs) are one-dimensional functions, and their \u03c72 value is calculated as \u03c72 = n X i=1 fdata i \u2212f mod i \u03c3data i !2 , (6) where fdata i represents the value of the data in the ith bin, and fmod i and \u03c3data i are the model value and data error in the ith bin, respectively. As \u03c72 is related to a likelihood by \u03c72 = \u22122 ln(likelihood) (i.e., the same form as S ; Willott et al. 
As $\chi^2$ is related to a likelihood by $\chi^2 = -2\ln(\text{likelihood})$ (i.e., the same form as $S$; Willott et al. 2001), we can define a new function $S_{\rm all}$ that combines the constraints from all three types of data (i.e., the SFG sample, LRLF, and SC data). The expression is

$$S_{\rm all} = S + \chi^2_{\rm LRLF} + A_0\,\chi^2_{\rm SC}, \tag{7}$$

where $\chi^2_{\rm LRLF}$ and $\chi^2_{\rm SC}$ denote the value of $\chi^2$ for the local radio LFs and the source counts, respectively. Because we use three different types of data to estimate $S_{\rm all}$, we need to balance the statistical weight of each term in Equation (7). We chose $A_0$ such that the value of $A_0\chi^2_{\rm SC}$ is approximately equal to that of $\chi^2_{\rm LRLF}$; this yields values of about 10-40 for our calculations. We find that varying $A_0$ does not significantly bias our final results (also see Kochanek 1996). Using Equation (7), we can obtain the best-fit parameters for the LFs by numerically minimizing the objective function $S_{\rm all}$. Here we adopt a Bayesian method as in our previous papers (e.g., Yuan et al. 2016), which enables us to determine the best estimates of the model parameters and their probability distributions (also see Lewis & Bridle 2002; Yuan et al. 2017). We use uniform (so-called "uninformative") priors on the parameters, and employ the MCMC sampling algorithm available in the Python package emcee (Foreman-Mackey et al. 2013) to estimate the best-fit parameters.

3.2. Local luminosity functions and radio source counts

The local LFs at 1.4 GHz have been well determined for SFGs thanks to the combined use of large radio surveys, such as the NVSS (NRAO VLA Sky Survey) and FIRST (Faint Images of the Radio Sky at Twenty centimeters), and large-area spectroscopic surveys. In the present work, we simultaneously use the local SFG LFs from Condon et al. (2002), Best et al. (2005), Mauch & Sadler (2007), and Condon et al. (2019) (see Figure 2) to calculate $\chi^2_{\rm LRLF}$ in Equation (7).

[Fig. 2: Local radio LF at 1.4 GHz of SFGs from several surveys with different observed areas and sensitivities (colored data points: Condon et al. 2002; Best et al. 2005; Mauch & Sadler 2007; Condon et al. 2019). The colored lines show the fits to the combined data from our Models A, B, and C, with 3σ bands for Models A and C.]

In addition to local LFs, the observed radio source counts can provide an important constraint on the modeling of SFG LFs. In the past several years, deep radio surveys have emerged that reach sub-mJy detection limits, enabling investigation of the faint source counts (e.g., Smolčić et al. 2017b; Ocran et al. 2020a; Mandal et al. 2021; van der Vlugt et al. 2021; Matthews et al. 2021b; Hale et al. 2023). The source counts, denoted $n(F_\nu)$, represent the number of sources per unit flux density ($F_\nu$) per steradian. The shape of $n(F_\nu)$ is closely related to the evolutionary properties of the sources as well as the geometry of the Universe (Padovani 2016). Typically, the counts are Euclidean normalized by multiplying by $F_\nu^{2.5}$ (e.g., de Zotti et al. 2010).
According to Padovani (2016) and Yuan et al. (2017), we can relate the source counts of SFGs to their LF using the following equation:

$$\frac{n(F_\nu)}{4\pi} = \frac{4\pi c}{H_0}\int_{z_{\rm min}(F_\nu)}^{z_{\rm max}(F_\nu)} \frac{\Phi(z, L(F_\nu, z))\,D_L^4(z)\,dz}{(1+z)^{(3-\alpha)}\sqrt{\Omega_m(1+z)^3 + \Omega_\Lambda}}, \tag{8}$$

where $c$ is the speed of light, $\Phi(z, L)$ is the LF, $D_L(z)$ is the luminosity distance, $z_{\rm min}$ and $z_{\rm max}$ define the range of integration in redshift, and $\alpha$ is the spectral index. In this work, we use the observed source counts from Algera et al. (2020a) and Hale et al. (2023) to provide an additional constraint in our analysis. The Algera et al. (2020a) 3 GHz source counts are measured from the ultrafaint radio population (reaching a $5\sigma$ flux limit of $\sim2.7\,\mu$Jy beam$^{-1}$ within the center of the 3 GHz image) detected in the Karl G. Jansky Very Large Array COSMOS-XS survey. The Hale et al. (2023) 1.4 GHz source counts are measured from the continuum early science data release of the MeerKAT International Gigahertz Tiered Extragalactic Exploration (MIGHTEE) survey in the COSMOS and XMM-LSS fields. The MIGHTEE sources were divided into three subsets: SFGs, AGNs, and unclassified sources. Hale et al. (2023) considered two cases: (1) the unclassified sources are assumed to be a mix of SFGs and AGNs based on the flux density ratio of the classified sources, and (2) the unclassified sources are all regarded as SFGs. The source counts for the two cases are presented in their Table 1 (for the COSMOS field) and Table 2 (for the XMM-LSS field), respectively. In this work, we use the first case, in which the unclassified or unmatched sources are assumed to have the same split between SFGs and AGNs as the classified sources at the given flux density; these source counts are given in the SC$_{\rm SFG,\,ratio}$ column of Table 1 and Table 2 of Hale et al. (2023). For convenience of calculation, the above source counts are unified to 1.4 GHz by assuming a spectral index of 0.7.

3.3. Models for the luminosity function of star-forming galaxies

Without loss of generality, the SFG LF can be written as

$$\Phi(z, L) = e_1(z)\,\phi(z=0, L/e_2(z), \eta_j), \tag{9}$$

where $e_1(z)$ and $e_2(z)$ denote the density evolution (DE) and luminosity evolution (LE) functions of redshift, respectively, and $\eta_j$ represents the parameters that determine the shape of the LF. If the values of $\eta_j$ are constant, the shape of the radio LF is unchanged with redshift. Conversely, if $\eta_j$ exhibits a redshift dependence, this implies luminosity-dependent density evolution (see Singal et al. 2013, 2014, for a more detailed discussion). We assume the shape of the LF to remain unchanged (i.e., $\eta_j$ is constant), as in many other studies (e.g., Novak et al. 2017; van der Vlugt et al. 2022). Following previous work (e.g., Smolčić et al. 2009; Gruppioni et al. 2013; van der Vlugt et al. 2022), the SFG local LF $\phi(z=0, L/e_2(z=0))$ is described by the modified-Schechter function of Saunders et al. (1990):

$$\phi(z=0, L/e_2(z=0)) = \frac{dN}{d\log_{10}L} = \phi^\star\left(\frac{L}{L^\star}\right)^{1-\beta}\exp\left[-\frac{1}{2\gamma^2}\log^2\left(1 + \frac{L}{L^\star}\right)\right], \tag{10}$$

where $L^\star$ determines the location of the knee in the LF, $\beta$ and $\gamma$ fit the faint and bright ends of the LF, respectively, and $\phi^\star$ is used for the normalization. In this work, we consider three LF models, all of which adopt the same LE function:

$$e_2(z) = (1+z)^{k_1 + k_2 z}. \tag{11}$$
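As an illustration, the snippet below evaluates the modified-Schechter form of Eq. (10) under the PLE evolution of Eq. (11) and the source-count integrand of Eq. (8) numerically; the parameter values are arbitrary placeholders, not our best-fit results, and the final print is left in raw integrand units.

```python
import numpy as np
from scipy.integrate import quad
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
ALPHA = 0.7

def phi_local(L, phi_star=3e-3, L_star=2e21, beta=1.3, gamma=0.6):
    """Modified-Schechter local LF of Eq. (10), per dex per Mpc^3."""
    x = L / L_star
    return phi_star * x**(1 - beta) * \
           np.exp(-np.log10(1 + x)**2 / (2 * gamma**2))

def Phi(z, L, k1=3.0, k2=-0.5):
    """PLE model (Model A): Phi(z, L) = phi(L / e2(z)) with Eq. (11)."""
    e2 = (1 + z)**(k1 + k2 * z)
    return phi_local(L / e2)

def L_of_F(F_nu, z):
    """Rest-frame 1.4 GHz luminosity (W/Hz) of a source observed with
    1.4 GHz flux density F_nu (W m^-2 Hz^-1), following Eq. (2)."""
    D_L = cosmo.luminosity_distance(z).to("m").value
    return 4 * np.pi * D_L**2 * F_nu / (1 + z)**(1 - ALPHA)

def counts_integrand(z, F_nu):
    """Integrand of Eq. (8), up to the constant 4*pi*c/H0 prefactor."""
    D_L = cosmo.luminosity_distance(z).to("Mpc").value
    E_z = np.sqrt(0.3 * (1 + z)**3 + 0.7)
    return Phi(z, L_of_F(F_nu, z)) * D_L**4 / ((1 + z)**(3 - ALPHA) * E_z)

F = 1e-3 * 1e-26  # a 1 mJy source in W m^-2 Hz^-1
integral, _ = quad(counts_integrand, 0.01, 6.0, args=(F,))
print(integral)  # proportional to n(F)/4pi before prefactor/unit bookkeeping
```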
The DE function $e_1(z)$ has three different forms depending on the model: $e_1(z) = 1$ for Model A; an exponential form, $e_1(z) = 10^{p_1 z}$, (12) for Model B; and $e_1(z) = (1+z)^{p_1 + p_2 z}$ (13) for Model C. In the above equations, $k_1$, $k_2$, $p_1$, and $p_2$ are free parameters. Model A is the pure luminosity evolution (PLE) model, which is the most commonly used model for the radio LFs of SFGs in the literature (e.g., Novak et al. 2017). Models B and C can be referred to as mixture evolution (e.g., Yuan et al. 2016, 2017) or luminosity and density evolution (LADE; e.g., Aird et al. 2010) models. 3.4. Model selection In order to evaluate which model is a better fit to the data, a helpful tool is the information criterion (Takeuchi 2000). The Akaike Information Criterion (AIC; Akaike 1974) is one of the most widely used information criteria. For our problem, the AIC can be written as $\mathrm{AIC} = S_{\rm all}(\hat{\theta}) + 2q$, (14) where $S_{\rm all}$ is given in Equation (7), $\hat{\theta}$ denotes the best-fit model parameters, and $q$ is the number of parameters of each model. The model with the smallest AIC is considered the most accurate. Another commonly used criterion is the Bayesian information criterion (BIC; Schwarz 1978), which can be written as $\mathrm{BIC}(q) = S_{\rm all}(\hat{\theta}) + q\ln n$, (15) where $n$ is the sample size. When calculating the $S_{\rm all}$ values for our three models, the weight factor $A_0$ in Equation (7) is set to 1. The AIC and BIC values are listed in Table 2. We find that the AIC and BIC are consistent with each other, both indicating that the LADE model is superior to the PLE model. The AIC value of Model B is slightly smaller than that of Model C, implying that Model B can be taken as our preferred model.
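Equations (14) and (15) reduce to one line each in code; a minimal sketch (the parameter counts follow Table 1: q = 6, 7, and 8 for Models A, B, and C, respectively):

import math

def aic_bic(s_all_best, q, n):
    # Equation (14): AIC = S_all(theta_hat) + 2q.
    # Equation (15): BIC = S_all(theta_hat) + q * ln(n), with n the sample size.
    return s_all_best + 2 * q, s_all_best + q * math.log(n)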
4. Results 4.1. Analytical LFs The parameters of our model LFs are estimated via the MCMC algorithm, performed using the Python package emcee of Foreman-Mackey et al. (2013). The emcee algorithm improves the exploration of the parameter space by using an ensemble of chains with different initial conditions, which helps to avoid local minima and ensures a more comprehensive exploration of the parameter space. We assume uniform priors on all the parameters. The marginalized one- and two-dimensional posterior probability distributions of the parameters for Models A, B, and C are shown in Figures 3, 4, and 5, respectively. These corner plots illustrate that all the parameters of our three models are well constrained. Table 1 reports the best-fit parameters and their 1$\sigma$ uncertainties for the three models. Figure 6 shows our best-fit LFs for Models A (solid blue lines), B (solid orange lines), and C (solid green lines). All the LFs are measured at the rest-frame 1.4 GHz. We also compare our results with the binned LFs from Gruppioni et al. (2013), Novak et al. (2017), and van der Vlugt et al. (2022), which are represented by orange left-pointing triangles, dark blue circles, and sky blue squares with error bars, respectively. At lower redshifts of z < 1.0, our three models are barely distinguishable. As redshift increases, Model A begins to diverge, predicting larger number densities at $L < L^\star$ than Models B and C. This deviation increases towards higher redshift, and the disagreement exceeds the 3$\sigma$ confidence intervals at z ≳ 1. We also note that, at z ≳ 2, the binned LFs of van der Vlugt et al. (2022) present a decline in number density at the faint end that Model A cannot reproduce. This indicates that Model A is not applicable for describing the evolution of the LFs at higher redshift. By contrast, Models B and C are in good agreement with the binned LFs in all redshift intervals. In Figure 6, the purple dotted lines depict the PLE model of Novak et al. (2017), which is generally in agreement with our Model A. Nevertheless, the difference between the two results increases with redshift. This is not surprising, given that the PLE model of Novak et al. (2017) is constrained by simultaneously fitting the LF points in all redshift bins, while these LF points are estimated using the traditional 1/Vmax method; their modeling result is therefore dependent on the estimation accuracy of 1/Vmax. Our analytical LFs are obtained through a full maximum-likelihood analysis, and are therefore independent of the 1/Vmax estimates. 4.2. Fitting the observed source counts We calculate the source counts for each of our three model LFs using Equation (8). The result is shown in Figure 7 as a blue dash-dotted line, an orange dashed line, and a green dotted line for Models A, B, and C, respectively. In the figure, we compare our models with the Euclidean-normalized 1.4 GHz source counts for SFGs measured by Hale et al. (2023), Smolčić et al. (2017a), and Algera et al. (2020a). All three of our models can reproduce the Algera et al. (2020a) and Hale et al. (2023) source counts fairly well, but the Smolčić et al. (2017a) result is systematically lower than the others. The discrepancies could be caused by multiple factors, such as field-to-field variation, differences in the assumptions used to calculate completeness, and resolution bias (see Hale et al. 2023). We note that, in the bright region ($F_\nu$ > 1 mJy), the measurements of Hale et al. (2023) display large uncertainties. This is due to the contamination by AGNs at these fluxes, preventing an easy classification between SFGs and AGNs.
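For reference, Equation (8) can be evaluated numerically along the following lines. This is a schematic sketch only: the flat Lambda-CDM parameters, the K-correction $L = 4\pi D_L^2 F_\nu (1+z)^{\alpha-1}$, and in particular the unit bookkeeping between Mpc, W Hz$^{-1}$, and Jy are illustrative assumptions rather than the exact conversions used in our pipeline.

import numpy as np
from scipy.integrate import quad

H0, Om, OL, c_kms = 70.0, 0.3, 0.7, 2.998e5   # assumed cosmology; km/s/Mpc and km/s
alpha = 0.7                                    # assumed spectral index

def D_L(z):
    # Luminosity distance in Mpc for the flat Lambda-CDM cosmology above.
    dc, _ = quad(lambda zp: 1.0 / np.sqrt(Om * (1.0 + zp) ** 3 + OL), 0.0, z)
    return (1.0 + z) * (c_kms / H0) * dc

def counts(F_nu, Phi_func, zmin=0.0, zmax=6.0):
    # Schematic Equation (8); Phi_func(z, L) is an LF such as Phi() above.
    L_of = lambda z: 4.0 * np.pi * D_L(z) ** 2 * F_nu * (1.0 + z) ** (alpha - 1.0)
    integrand = lambda z: (Phi_func(z, L_of(z)) * D_L(z) ** 4
                           / ((1.0 + z) ** (3.0 - alpha)
                              * np.sqrt(Om * (1.0 + z) ** 3 + OL)))
    val, _ = quad(integrand, zmin, zmax)
    return 4.0 * np.pi * (4.0 * np.pi * c_kms / H0) * val  # prefactors as in Eq. (8)

Euclidean normalization then simply multiplies the result by $F_\nu^{2.5}$.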
4.3. LADE versus PLE Due to the limitation of survey depth, most existing radio studies barely reach the knee of the SFG LF at z > 1. When fitting the LF with a LADE model, the DE and LE parameters may therefore become degenerate (also see van der Vlugt et al. 2022). On several occasions, this has led previous authors to assume a PLE model for their SFG LFs. By combining data from the ultra-deep COSMOS-XS survey and the shallower VLA-COSMOS 3 GHz large project, van der Vlugt et al. (2022) were able to jointly constrain the LE and DE, finding evidence for significant DE. From Figure 6, we also find that the LADE model is superior to the PLE model. Very recently, Cochrane et al. (2023) measured the 150 MHz LFs of SFGs using data from the Low Frequency Array (LOFAR) Two Metre Sky Survey in three well-studied extragalactic fields: Elais-N1, Boötes, and the Lockman Hole. By fixing the faint- and bright-end shape of the radio LF to the local values (equivalent to fixing $\gamma$ = 0.49 and $\beta$ = 1.12 in Equation (10)), these authors fitted the LF points (via 1/Vmax) to find the best-fit $L^\star$ and $\phi^\star$ in each individual redshift bin. Their Figure 7 shows the variation of $L^\star$ and $\phi^\star$ as functions of redshift. The method of Cochrane et al. (2023) is thus equivalent to assuming a LADE model for their LFs, and the variation of $L^\star$ and $\phi^\star$ with redshift in their analysis corresponds to $L^\star \times e_2(z)$ and $\phi^\star \times e_1(z)$ in our work. In Figure 8, we show $L^\star \times e_2(z)$ and $\phi^\star \times e_1(z)$ for our three LF models compared with the inference from Cochrane et al. (2023). We converted our $L^\star$ to 150 MHz by assuming a spectral index of 0.7. Below z = 1.5, our two LADE models (especially Model C) agree well with the $L^\star$ and $\phi^\star$ evolutions given by Cochrane et al. (2023). Above z = 1.5, the $L^\star$ evolution curve obtained by these authors is significantly higher than those of our models (outside the 3$\sigma$ uncertainties), while their $\phi^\star$ evolution falls more rapidly than those of our models out to high redshift. The discrepancies could be explained as follows. In any redshift bin, the best-fit $L^\star$ and $\phi^\star$ are negatively correlated with each other (e.g., see the right panel of Figure 4 in Cochrane et al. 2023), implying that the LE and DE parameters are degenerate. The degeneracy should be stronger at higher redshift, where the knee location of the LFs is increasingly difficult to identify. We highlight the fact that Cochrane et al. (2023) fitted the LF points (via 1/Vmax) in each redshift bin individually. At higher redshift, the LF points estimated by 1/Vmax usually have larger uncertainty, and fitting these discrete LF points to find a precise knee location would be very difficult. This will inevitably bias the inferred $L^\star$ and $\phi^\star$ values. Therefore, their $L^\star$ and $\phi^\star$ evolutions at high redshift are subject to uncertainties from both factors. Our LADE modeling is also subject to the uncertainty due to degeneracy, but it is free from the 1/Vmax estimates. Cochrane et al. (2023) found that their $\phi^\star$ remains roughly constant back to z ∼ 0.8 but then falls steeply at higher redshifts; our Model C displays a similar trend. A comparison between the inference of Cochrane et al. (2023) and our models lends strong support to the efficacy of our LADE models. Fig. 3: Corner plot showing the one- and two-dimensional projections of the posterior probability distributions of the parameters for Model A obtained from the MCMC run. The histograms on the diagonal show the marginalized posterior densities for each parameter (vertical dashed lines denote the 16th and 84th percentiles). The off-diagonal panels show the 2D joint posterior densities of all pairs of parameters, with 1σ, 2σ, and 3σ contours shown by black solid lines. Our best-fitting parameters are marked by red vertical solid lines. Fig. 4: Similar to Figure 3, but for Model B. Fig. 5: Similar to Figure 3, but for Model C. Table 1: Best-fit parameters and their 1σ errors for Models A, B, and C (units: $\phi^\star$ in Mpc$^{-3}$ dex$^{-1}$; $L^\star$ in W Hz$^{-1}$).
Model | log10(φ⋆) | log10(L⋆) | β | γ | k1 | k2 | p1 | p2
A | -2.576 +0.022/-0.023 | 21.232 +0.040/-0.041 | 1.306 +0.021/-0.021 | 0.611 +0.007/-0.007 | 4.271 +0.028/-0.027 | -0.548 +0.010/-0.010 | ... | ...
B | -2.554 +0.019/-0.019 | 21.284 +0.031/-0.031 | 1.296 +0.020/-0.020 | 0.556 +0.005/-0.005 | 4.662 +0.027/-0.026 | -0.473 +0.011/-0.011 | -0.188 +0.008/-0.008 | ...
C | -2.546 +0.019/-0.019 | 21.248 +0.032/-0.032 | 1.296 +0.020/-0.021 | 0.562 +0.005/-0.005 | 4.534 +0.030/-0.028 | -0.366 +0.017/-0.016 | -0.069 +0.016/-0.033 | -0.397 +0.022/-0.019
Table 2: AIC and BIC values for Models A, B, and C.
Model | AIC | BIC
A | 115258.9 | 115298.9
B | 114459.7 | 114506.5
C | 114560.5 | 114613.9
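Using the Table 1 parameters, the evolution tracks plotted in Figure 8 follow directly; a minimal sketch (the 1.4 GHz to 150 MHz conversion assumes the same spectral index of 0.7, for a spectrum $S_\nu \propto \nu^{-\alpha}$):

import numpy as np

def tracks(z, log_phi_star, log_L_star, k1, k2, p1=0.0, p2=0.0, model="C"):
    # L_star * e2(z) and phi_star * e1(z), the quantities shown in Figure 8.
    e2 = (1.0 + z) ** (k1 + k2 * z)
    if model == "A":
        e1 = np.ones_like(z)
    elif model == "B":
        e1 = 10.0 ** (p1 * z)
    else:
        e1 = (1.0 + z) ** (p1 + p2 * z)
    L_track = 10.0 ** log_L_star * e2 * (1400.0 / 150.0) ** 0.7  # converted to 150 MHz
    return L_track, 10.0 ** log_phi_star * e1

z = np.linspace(0.0, 8.0, 81)
L150, phi150 = tracks(z, -2.546, 21.248, 4.534, -0.366, -0.069, -0.397, model="C")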
Fig. 6: Radio LFs of SFGs at various redshifts (eleven bins from 0.1 < z < 0.4 to 4.6 < z < 5.7) compared with the previous estimates specified in the inset. The best-fit LFs for Models A, B, and C in each redshift bin are shown by the blue, orange, and green solid lines, respectively. The light green shaded area shows the 3σ confidence interval for Model C. The red dashed curves represent the KDE LFs (see Appendix A), with the 3σ confidence interval shown by the pink shaded area. The purple dash-dotted line depicts the PLE model of Novak et al. (2017). We also compare with the binned LFs from Gruppioni et al. (2013), Novak et al. (2017), and van der Vlugt et al. (2022). Fig. 7: Comparison of our best-fit models with the Euclidean-normalized 1.4 GHz source counts for SFGs observed in the literature. The blue dash-dotted line, orange dashed line, and green dotted line show our best-fit source counts of Models A, B, and C, respectively. 5. History of the cosmic star formation rate density 5.1. Calculating the SFRD Now that we have obtained the rest-frame 1.4 GHz LF, we can investigate how the SFRD evolves with redshift. We use the functional form provided by Delvecchio et al. (2021) to convert the radio luminosity into an SFR: $\frac{\mathrm{SFR}}{M_\odot\,\mathrm{yr}^{-1}} = f_{\rm IMF} \times 10^{-24}\, 10^{q_{\rm IR}(z)}\, \frac{L_{1.4\,\mathrm{GHz}}}{\mathrm{W\,Hz}^{-1}}$, (16) where $f_{\rm IMF}$ is a factor accounting for the IMF ($f_{\rm IMF} = 1$ for a Chabrier (2003) IMF and $f_{\rm IMF} = 1.7$ for a Salpeter (1955) IMF), and $L_{1.4\,\mathrm{GHz}}$ is the rest-frame 1.4 GHz radio luminosity. Following Novak et al. (2017), we use the Chabrier IMF in the following analysis.
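Equation (16) is a one-line conversion in code; a minimal sketch (q_ir is passed in explicitly, e.g., from the parametrization of Equation (17) below):

def sfr_from_L14(L_14, q_ir, f_imf=1.0):
    # Equation (16): SFR in M_sun/yr from the rest-frame 1.4 GHz luminosity
    # L_14 in W/Hz; f_imf = 1 selects the Chabrier IMF, 1.7 the Salpeter IMF.
    return f_imf * 10.0 ** (-24.0) * 10.0 ** q_ir * L_14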
The source counts from Hale et al. (2023) in the COSMOS and XMM-LSS fields are shown in Figure 7 as purple right triangles and light blue pentagons, respectively; also shown are the observed source counts from Smolčić et al. (2017a) (green squares) and Algera et al. (2020a) (orange circles). In Equation (16), $q_{\rm IR}(z)$ is the FIR-to-radio luminosity ratio, which is conventionally used to parametrize the FIR-radio correlation in SFGs, and is defined as $q_{\rm IR} = \log\left(\frac{L_{\rm IR}\,[\mathrm{W}]}{3.75\times10^{12}\,[\mathrm{Hz}]}\right) - \log\left(L_{1.4\,\mathrm{GHz}}\,[\mathrm{W\,Hz^{-1}}]\right)$, where $L_{\rm IR}$ is the total IR luminosity (rest-frame 8-1000 μm), and $3.75\times10^{12}$ Hz represents the central frequency of the far-infrared domain. Although $q_{\rm IR}$ is typically taken to be a constant value derived for local galaxies, recent observations suggest that it probably changes with redshift (e.g., Sargent et al. 2010; Magnelli et al. 2015; Delhaize et al. 2017; Calistro Rivera et al. 2017). Recently, for the first time, Delvecchio et al. (2021) calibrated $q_{\rm IR}$ as a function of both stellar mass ($M_\star$) and redshift. These authors found that $q_{\rm IR}$ primarily evolves with $M_\star$, and only weakly with redshift. This finding implies that using radio emission as an SFR tracer requires $M_\star$-dependent conversion factors, but its robustness still needs to be verified by further study. In this work, we use the expression given by Novak et al. (2017): $q_{\rm IR}(z) = (2.78 \pm 0.02) \times (1+z)^{-0.14\pm0.01}$, (17) which is an updated version of the analysis of Delhaize et al. (2017), using new sample selection criteria to exclude AGN. The SFRD of a given epoch can then be estimated by the following integral: $\mathrm{SFRD} = \int_{L_{\min}}^{L_{\max}} \Phi(L, z) \times \mathrm{SFR}(L_{1.4\,\mathrm{GHz}})\, \mathrm{d}\log_{10} L$. (18) To obtain the SFRD at a given epoch, we performed a numerical integration of the analytical form of the LF in each redshift bin, employing the best-fit evolution parameters presented in Table 1 and Figure 6; the resulting integral provides an estimate of the SFRD. Unless otherwise specified, our reported SFRD values correspond to the integral of the fitted LF from $L = 0$ to $\infty$. In Figure 9(a), our SFRD history results are shown as blue, orange, and green solid lines for Models A, B, and C, respectively. The light shaded areas take into account the 3σ error bands. Our three models coincide at lower redshifts of z < 1. Model A gradually separates from the other two models towards higher redshift. Models B and C start to diverge at z ≳ 2, but the difference is within the 3σ confidence intervals. 5.2. Comparison with the literature In Figure 9, we show the SFRD history derived from our three models compared with those in the literature derived at different wavelengths. The SFRD from the review by Madau & Dickinson (2014), who performed a fit to a collection of previously published UV and IR data, is shown as the black dashed curve in all panels for ease of comparison. From Figure 9(a), we find that below z ∼ 1.5, the rate at which the SFRD increases with redshift in our three models shows a striking similarity to the trend observed by Madau & Dickinson (2014), although their SFRD is slightly higher. The discrepancy in magnitude is probably due to the FIR-radio relation assumed in our result, which calibrates radio emission as a tracer of SFR. Above z ∼ 1.5, our PLE model predicts a significantly higher SFRD, while the SFRD given by our LADE models is lower than that of Madau & Dickinson (2014).
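Combining Equations (16)-(18), the SFRD at a given redshift reduces to a short numerical routine; a minimal sketch reusing the Phi() and sfr_from_L14() snippets above (the finite luminosity bounds stand in for the 0-to-infinity limits, and only the central values of Equation (17) are used):

import numpy as np
from scipy.integrate import quad

def q_ir(z):
    # Equation (17), Novak et al. (2017), central values only.
    return 2.78 * (1.0 + z) ** (-0.14)

def sfrd(z, Phi_func, log_L_min=17.0, log_L_max=26.0):
    # Equation (18): integrate Phi(z, L) * SFR(L) over log10(L).
    integrand = lambda logL: (Phi_func(z, 10.0 ** logL)
                              * sfr_from_L14(10.0 ** logL, q_ir(z)))
    val, _ = quad(integrand, log_L_min, log_L_max)
    return val  # M_sun/yr/Mpc^3 for Phi in Mpc^-3 dex^-1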
The SFRD derived from all three of our models turns over at a slightly higher redshift (2 < z < 2.5) and falls more rapidly than that of Madau & Dickinson (2014) out to high redshift. A similar behavior was observed in the recent radio estimates of van der Vlugt et al. (2022). In Figure 9(a), we also show our Model A SFRD compared to the radio estimates from Smolčić et al. (2009) and Novak et al. (2017). Smolčić et al. (2009) derived the SFRD out to z = 1.3 by assuming a PLE LF and the non-evolving FIR-radio correlation established by Bell (2003). We find agreement with the Smolčić et al. (2009) estimates within the permissible error ranges, and their result therefore provides a good consistency check for our models at low redshift. The result of Novak et al. (2017), who also assumed a PLE LF, is the key comparison for our Model A, because our analysis is based on the sample studied by these authors. Overall, the curve of our Model A SFRD is a good fit to their SFRD points. Nevertheless, there are two points at z < 2 and one point at z > 2 that seem to disagree with our Model A at the 3σ level. Because we use the same $q_{\rm IR}(z)$ evolution to calculate the SFRD as Novak et al. (2017) did, any discrepancy between the two results can only arise from the difference in the LF (see Figure 6). As discussed in Section 4.1, the analytical LFs of Novak et al. (2017) are obtained by fitting the 1/Vmax LF points in all redshift bins, while our LFs are obtained through a full maximum-likelihood analysis incorporating additional constraints from the source counts and local LFs. Therefore, our LFs should be more accurate than those of Novak et al. (2017), making our SFRD an improvement on their estimates. In Figure 9(b), we show the SFRD derived from our Models B and C compared to the radio estimates from van der Vlugt et al. (2022). These authors also assumed a LADE LF (similar to our Models B and C) to calculate their SFRD, based on the combined data set from the ultra-deep COSMOS-XS survey and the VLA-COSMOS 3 GHz large project. Although they adopted a different $q_{\rm IR}(z)$ evolution from that used here, their result is in good agreement with those of our Models B and C (especially our Model C) within the error bars. We note that three points of the van der Vlugt et al. (2022) estimates show a slightly elevated SFRD compared to our model prediction. The discrepancies could be attributed to the different $q_{\rm IR}(z)$ evolution used by these authors, or to uncertainties in their LF measurement propagated from the 1/Vmax estimator. Fig. 8: $L^\star \times e_2(z)$ (left panel) and $\phi^\star \times e_1(z)$ (right panel) for our three LF models, shown in different colours and compared with the inference (in all fields) from Cochrane et al. (2023). The light shaded areas take into account the 3σ error bands. Our $L^\star$ and $\phi^\star$ have been converted to 150 MHz by assuming a spectral index of 0.7. The purple circles represent the variation of $L^\star$ (left panel) and $\phi^\star$ (right panel) as functions of redshift inferred by Cochrane et al. (2023).
Similar to Novak et al. (2017), van der Vlugt et al. (2022) also obtained their analytical LFs by fitting the 1/Vmax LF points in all redshift bins. Figure 9(c) shows the SFRD derived from our Model B compared to the radio estimates from Karim et al. (2011) and Cochrane et al. (2023). Our estimates are slightly lower than those of Karim et al. (2011), with the difference increasing with redshift. The discrepancies could be attributed to the different approaches taken: these authors performed stacking on mass-selected galaxies and used the non-evolving FIR-radio correlation established by Bell (2003). The measurements of Cochrane et al. (2023) are systematically higher than ours. We find that a vertical shift of our Model B SFRD curve matches the Cochrane et al. (2023) data points over the whole redshift range; this is equivalent to multiplying our SFRD by ∼1.5, shown as the orange dashed curve. Unlike the $q_{\rm IR}(z)$-based $L_{\rm radio}$-SFR calibration used in the present work, Cochrane et al. (2023) used the calibrated relation between 150 MHz radio luminosity and SFR from Smith et al. (2021), and also constrained and corrected the scatter in the $L_{150\,\rm MHz}$-SFR relation. This may explain the discrepancy between their measurements and ours. As noted by Leslie et al. (2020), the impact of different $L_{\rm radio}$-SFR calibrations is significant. 5.3. Density evolution is indispensable According to Yuan et al. (2016), the evolution of an LF may be regarded as a vector $\mathbf{E}$, and can be written as $\mathbf{E} = e_1\mathbf{E}_d + e_2\mathbf{E}_l$, (19) where $\mathbf{E}_d$ and $\mathbf{E}_l$ are the base vectors of DE and LE, respectively, and $e_1$ and $e_2$ are the DE and LE functions mentioned in Equation (9). DE carries a physical meaning and can tell us whether the sources are more or less numerous than those of today, while LE can tell us whether the sources are systematically more or less luminous than those of today. In all three of our models, the LE function has a peak, indicating that SFGs are, on average, most luminous in the radio at z ∼ 3-4. The DE function, according to our LADE models, monotonically decreases with redshift, implying that, in the radio view, SFGs were less numerous in earlier epochs. From Equation (18), we infer that the shape of the SFRD curve is jointly determined by the forms of the DE and LE. Comparing Figures 8 and 9, we note that below z ∼ 2, for all three of our models, the effect of positive LE is dominant over the DE; the SFRD curves therefore display a monotonically increasing trend. Above z ∼ 2, the LE gradually begins to turn over, and the effect of DE begins to show up. As Model C has the strongest negative DE, its SFRD falls more rapidly out to high redshift than those of Models B and A. One of the main findings of this work is that DE is genuinely indispensable in modeling the evolution of the SFG radio LFs. This finding is further supported by the fact that the assumption of pure LE seems to over-predict the SFRD at high z, while the inclusion of DE corrects for this. 5.4. The effect of the IR-radio correlation As shown in Equation (18), the calculation of the SFRD relies on two components: the LF and the derived SFR.
Although our LF has been well constrained (see Section 4), the calibration of the SFR using different scaling factors may also affect the final result. In this section, we show that the SFRD derived from our Model B LF can change depending on the choice of $q_{\rm IR}(z)$. The first $q_{\rm IR}(z)$ we tested is that from Bell (2003), where a constant $q_{\rm IR}$ value of 2.64 was assumed. In Figure 9(d), the derived SFRD is shown as the gray solid curve; this model significantly over-predicts the SFRD above z ∼ 1.5. The second $q_{\rm IR}(z)$ tested is that from Magnelli et al. (2015), where $q_{\rm FIR}(z) = 2.35 \times (1+z)^{-0.12}$; this relation can be scaled as $\log(L_{\rm FIR}) = \log(L_{\rm IR}) - \log(2)$ to obtain $q_{\rm IR}(z)$. The SFRD based on this $q_{\rm IR}(z)$ is presented in the same panel as the gray dashed line, and is generally consistent with our result. Finally, we considered $q_{\rm IR}(z) = 2.20$ from Algera et al. (2020b). The resultant SFRD is shown as the gray dash-dotted line; it is consistent with our result at z > 3.5, but is significantly lower at z < 3.5. In conclusion, we show that the assumed IR-radio correlation has a significant impact on the derived SFRD, and that it is crucial to accurately constrain the $q_{\rm IR}$ value at all observed redshifts (also see Novak et al. 2017). Fig. 9: History of the cosmic SFRD (four panels). Our SFRD history results are shown as blue, orange, and green solid lines for Models A, B, and C, respectively. The light shaded areas take into account the 3σ error bands. The compilation of Madau & Dickinson (2014) is shown as a black dashed line in all panels. All data shown for comparison are indicated in the legend of each panel; see text for details. 6. Summary and" + } + ], + "Wentao Shi": [ + { + "url": "http://arxiv.org/abs/2403.00844v1", + "title": "Lower-Left Partial AUC: An Effective and Efficient Optimization Metric for Recommendation", + "abstract": "Optimization metrics are crucial for building recommendation systems at\nscale. However, an effective and efficient metric for practical use remains\nelusive. While Top-K ranking metrics are the gold standard for optimization,\nthey suffer from significant computational overhead. Alternatively, the more\nefficient accuracy and AUC metrics often fall short of capturing the true\ntargets of recommendation tasks, leading to suboptimal performance.
To overcome\nthis dilemma, we propose a new optimization metric, Lower-Left Partial AUC\n(LLPAUC), which is computationally efficient like AUC but strongly correlates\nwith Top-K ranking metrics. Compared to AUC, LLPAUC considers only the partial\narea under the ROC curve in the Lower-Left corner to push the optimization\nfocus on Top-K. We provide theoretical validation of the correlation between\nLLPAUC and Top-K ranking metrics and demonstrate its robustness to noisy user\nfeedback. We further design an efficient point-wise recommendation loss to\nmaximize LLPAUC and evaluate it on three datasets, validating its effectiveness\nand robustness.", + "authors": "Wentao Shi, Chenxu Wang, Fuli Feng, Yang Zhang, Wenjie Wang, Junkang Wu, Xiangnan He", + "published": "2024-02-29", + "updated": "2024-02-29", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.LG" + ], + "main_content": "INTRODUCTION Recommender systems, core engines for Web applications, aim to alleviate Web information overload by recommending the Top-K most relevant items to each user [27, 37]. They are widely adopted in large-scale Web applications such as Amazon and TikTok [5], and are typically learned from historical user feedback using optimization metrics related to item ranking [29]. While Top-K ranking metrics such as NDCG@K and Recall@K align well with the goals of recommendation tasks, they are not suitable for practical use at scale due to their substantial computational cost [29]. There thus remains a need to explore effective and efficient optimization metrics for recommender models. Prior research pursues this target by trading off efficiency against alignment with the Top-K ranking. One approach is to frame the recommendation task as a classification problem and optimize the accuracy metric [6], which inherently deviates from the Top-K ranking. Another approach optimizes the Area Under the Receiver Operating Characteristic (ROC) curve (AUC) metric [30], as shown in Figure 1(a), which quantifies the probability of ranking a random positive item higher than a negative one. AUC accounts for item ranking but treats all items equally, so optimizing it may not improve the ranking quality of the Top-K items, leading to suboptimal recommendation performance. In this work, we propose a new optimization metric, Lower-Left Partial AUC (LLPAUC), designed to be more correlated with the Top-K ranking than the traditional AUC metric. LLPAUC introduces constraints on the upper bounds of the False Positive Rate (FPR) and True Positive Rate (TPR), i.e., it focuses on the partial area under the ROC curve in the lower-left corner, as depicted in Figure 1(b). These constraints narrow the ranking down to the top-ranked items, as shown in Figure 1(c), strengthening the correlation with Top-K metrics. Figure 1: (a) AUC measures the entire area under the ROC curve; (b) LLPAUC considers the lower-left corner; (c) Compared to AUC (= Pr[f(i+) > f(i-)] over all items), LLPAUC (= Pr[f(i+_top) > f(i-_top)]) considers the ranking only for the top-ranked items. Our theoretical analysis shows that LLPAUC bounds the Top-K ranking metrics more tightly. Notably, the constraint on TPR can also
prevent the optimization from overfitting noisy user feedback [34], making LLPAUC more robust than AUC. Nevertheless, the optimization of LLPAUC is non-trivial due to the non-differentiable and computationally expensive TPR and FPR constraint operations. To address these challenges, following [31], we reformulate the constraint operations using the average Top-K loss [8] to make them differentiable and amenable to mini-batch optimization. On top of these efforts, we propose a minimax point-wise loss function, which efficiently maximizes the LLPAUC metric. Moreover, both a time complexity analysis and empirical results on real-world datasets verify its efficiency. The main contributions of the paper are summarized as follows: • We propose a new optimization metric, LLPAUC, for recommendation, and provide both theoretical and empirical evidence of its stronger correlation with Top-K ranking metrics. • We derive an efficient point-wise loss function for maximizing the LLPAUC metric, which has comparable complexity to conventional point-wise recommendation losses. • We conduct extensive experiments on three datasets under both clean and noisy settings, demonstrating the effectiveness and robustness of optimizing LLPAUC for recommendation. 2 RELATED WORK In this section, we briefly introduce the optimization metrics and loss functions for the recommendation task and review recent studies on partial AUC and its optimization. 2.1 Optimization Metrics In Recommendation In general, there are two common types of loss functions in recommender systems. Point-wise loss functions such as the Binary Cross-Entropy (BCE) loss [19] cast the recommendation task as a classification problem and optimize the accuracy metric. Pair-wise loss functions such as the Bayesian Personalized Ranking (BPR) loss [30] are optimized to maximize the AUC metric. In addition, the softmax cross-entropy loss [6] is widely used to maximize the likelihood estimation of classification. Despite their optimization efficiency, these loss functions exhibit a significant gap from the ideal Top-K ranking metrics. Beyond these commonly employed loss functions, some approaches aim to directly optimize Top-K ranking metrics, such as NDCG@K [29] and Recall@K [28, 33]. However, these methods are computationally expensive and are not suitable for large-scale applications. To tackle this issue, recent studies have proposed the pAp@K metric [3, 20], which combines the partial AUC metric and the Precision@K metric. The pAp@K metric represents a specific instance of LLPAUC and offers better alignment with Top-K metrics, but this alignment lacks theoretical support. In contrast, our study introduces the more general LLPAUC metric and conducts theoretical analyses and simulated experiments to establish the strong relationship between the LLPAUC metric and Top-K metrics. 2.2 Partial AUC And Its Optimization The concept of partial AUC was initially introduced by [23]. In various applications, such as drug discovery and graph anomaly detection [10-12], only the partial AUC up to a low false positive rate is of interest [25], which motivates the research on One-way Partial AUC (OPAUC). [32] first discusses the correlation between OPAUC and Top-K metrics for recommendation. Later, [38] argues that a practical classifier must simultaneously have a high TPR and a low FPR; hence, they propose a new metric named Two-way Partial AUC (TPAUC), which pays attention to the upper-left head region under the ROC curve.
Then, [40] first proposes an end-to-end TPAUC optimization framework, which has had a profound impact on subsequent work [41]. Nevertheless, TPAUC does not align with the Top-K ranking metrics in recommendation. The proposed LLPAUC metric exhibits a stronger correlation with Top-K ranking metrics; beyond that, LLPAUC can additionally alleviate the issue of label noise in recommender systems. Regarding the optimization of partial AUC, previous works [7, 21, 24, 26] rely on full-batch optimization and approximations of the Top (Bottom)-K ranking, leading to immeasurable biases and inefficiency. Recently, novel end-to-end mini-batch optimization frameworks have been proposed [40, 42, 44]; these methods can be extended to optimize our proposed LLPAUC metric. In this work, we utilize an unbiased mini-batch optimization scheme [31] due to its superiority in previous investigations. 3 PRELIMINARY In this section, we present our task formulation and the partial AUC formulation for recommendation. 3.1 Task Formulation The primary objective of a recommender is to learn a score function $f(u, i|\theta)$, parameterized by $\theta$, that predicts the preference of a user $u \in \mathcal{U}$ for an item $i \in \mathcal{I}$. In this work, we only consider $f: \mathcal{U} \times \mathcal{I} \rightarrow [0, 1]$. For convenience, we use $f_{u,i}$ to denote $f(u, i|\theta)$. This work focuses on the implicit feedback setting [39], where positive interactions contain all items interacted with by $u$ (denoted by $\mathcal{I}^+_u \subseteq \mathcal{I}$), and negative interactions correspond to all non-interacted items (denoted by $\mathcal{I}^-_u \subseteq \mathcal{I}$). Typically, the learning process is formulated as: $\min_\theta \frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}}\sum_{i\in\mathcal{I}^+_u}\sum_{j\in\mathcal{I}^-_u} \frac{1}{|\mathcal{I}^+_u| \cdot |\mathcal{I}^-_u|}\, L(\theta, u, i, j)$, (1) where $L(\theta, u, i, j)$ denotes the fitting loss for the positive item $i$ and negative item $j$ of user $u$. The choice of $L(\cdot)$ determines the optimization metric. For example, the BPR loss [30] can be selected to optimize AUC, while the binary cross-entropy loss [6] can be used to optimize accuracy metrics. During serving, the recommender generates a Top-K recommendation list for each user based on the prediction scores. This work aims to develop optimization metrics that are better aligned with the Top-K ranking metrics and can be optimized efficiently. 3.2 AUC And Partial AUC AUC is a widely considered optimization metric in recommendation, defined as the region enclosed by the ROC curve [2], as Figure 1(a) shows. Given a threshold $t$ and a score function $f$, we can define the true positive rate (TPR) and false positive rate (FPR) as $\mathrm{TPR}_u(t) = \Pr(f_{u,i} > t \mid i \in \mathcal{I}^+_u)$ and $\mathrm{FPR}_u(t) = \Pr(f_{u,j} > t \mid j \in \mathcal{I}^-_u)$, respectively.
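For intuition, the empirical TPR and FPR of a user can be computed directly from predicted scores; a minimal sketch with illustrative arrays:

import numpy as np

def tpr_fpr(pos_scores, neg_scores, t):
    # Empirical TPR_u(t) and FPR_u(t): the fractions of positive and
    # negative items whose predicted scores exceed the threshold t.
    return float(np.mean(pos_scores > t)), float(np.mean(neg_scores > t))

# Sweeping t from high to low traces the ROC curve of Figure 1(a).
pos, neg = np.array([0.9, 0.7, 0.4]), np.array([0.8, 0.3, 0.2, 0.1])
roc = [tpr_fpr(pos, neg, t) for t in np.linspace(1.0, 0.0, 11)]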
For a given value $\xi \in [0, 1]$, let $\mathrm{TPR}_u^{-1}(\xi) = \inf\{t \in \mathbb{R} : \mathrm{TPR}_u(t) < \xi\}$ and $\mathrm{FPR}_u^{-1}(\xi) = \inf\{t \in \mathbb{R} : \mathrm{FPR}_u(t) < \xi\}$. Then, according to Figure 1(a), AUC can be formulated as: $\mathrm{AUC} = \frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}} \int_0^1 \mathrm{TPR}_u\left[\mathrm{FPR}_u^{-1}(\xi)\right] \mathrm{d}\xi$. (2) In recommendation, AUC quantifies the overall ranking quality with consideration of all items in $\mathcal{I}$, and we can reformulate it in a pair-wise ranking form [15] as follows: $\mathrm{AUC} = \frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}} \Pr_{i\sim \mathcal{I}^+_u,\, j\sim \mathcal{I}^-_u}[f_{u,i} > f_{u,j}]$, (3) where $\Pr_{i\sim \mathcal{I}^+_u,\, j\sim \mathcal{I}^-_u}[f_{u,i} > f_{u,j}]$ represents the probability that a positive item $i$ is ranked higher than a negative item $j$ for user $u$. Recently, One-way Partial AUC (OPAUC) [7] has been proposed to better measure Top-K recommendation quality. Different from AUC, OPAUC focuses only on the area with FPR $\le \beta$, which is equivalent to considering only the pair-wise ranking between positive items and highly scored negative items (with prediction scores in $[\eta_\beta, 1]$, where $\eta_\beta$ satisfies $\Pr_{j\sim\mathcal{I}^-_u}[f_{u,j} \ge \eta_\beta] = \beta$). Formally, $\mathrm{OPAUC}(\beta) = \frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}} \Pr_{i\sim\mathcal{I}^+_u,\, j\sim\mathcal{I}^-_u}[f_{u,i} > f_{u,j},\ f_{u,j} \ge \eta_\beta]$. (4) Based on this definition, we can write a non-parametric estimator for OPAUC($\beta$) as follows: $\widehat{\mathrm{OPAUC}}(\beta) = \frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}} \sum_{i\in\mathcal{I}^+_u}\sum_{j\in\mathcal{I}^-_u} \frac{\mathbb{I}[f_{u,i} > f_{u,j}] \cdot \mathbb{I}[f_{u,j} \ge \eta_\beta]}{n^+_u \cdot n^-_u}$, (5) where $\mathbb{I}(\cdot)$ denotes the indicator function, $n^+_u$ denotes the size of $\mathcal{I}^+_u$, and $n^-_u$ denotes the size of $\mathcal{I}^-_u$. 4 WHEN LLPAUC MEETS WITH RECOMMENDER SYSTEM In this paper, we introduce a novel metric called Lower-Left Partial AUC, which differs from OPAUC by imposing constraints on both FPR and TPR (i.e., TPR $\le \alpha$, FPR $\le \beta$), as shown in Figure 1(b). By placing an additional constraint on TPR, LLPAUC can more closely approach Top-K metrics and effectively address noisy user feedback. We next present the formal definition of LLPAUC and subsequently provide theoretical and empirical analyses to demonstrate its effectiveness in aligning with Top-K metrics. • LLPAUC Definition. LLPAUC($\alpha$, $\beta$), as illustrated in Figure 1(b), is defined as the area of the ROC space that lies below the ROC curve with TPR $\le \alpha$ and FPR $\le \beta$.
Similarly to OPAUC, for each user $u$, the constraint TPR $\le \alpha$ implies considering only positive items with prediction scores in $[\eta_\alpha, 1]$, where $\eta_\alpha$ satisfies $\Pr_{i\sim\mathcal{I}^+_u}[f_{u,i} \ge \eta_\alpha] = \alpha$. The constraint FPR $\le \beta$ means considering only negative items with prediction scores in $[\eta_\beta, 1]$, where $\eta_\beta$ satisfies $\Pr_{j\sim\mathcal{I}^-_u}[f_{u,j} \ge \eta_\beta] = \beta$. These constraints make LLPAUC focus on measuring the ranking quality between such highly scored positive and negative items, and we can accordingly formulate LLPAUC($\alpha$, $\beta$) for a model $f$ as: $\mathrm{LLPAUC}(\alpha, \beta) = \frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}} \Pr_{i\sim\mathcal{I}^+_u,\, j\sim\mathcal{I}^-_u}[f_{u,i} > f_{u,j},\ f_{u,i} \ge \eta_\alpha,\ f_{u,j} \ge \eta_\beta]$. (6) We can also formulate it in an empirical form as follows: $\widehat{\mathrm{LLPAUC}}(\alpha, \beta) = \frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}}\sum_{i\in\mathcal{I}^+_u}\sum_{j\in\mathcal{I}^-_u} \frac{\mathbb{I}[f_{u,i} > f_{u,j}] \cdot \mathbb{I}[f_{u,i} \ge \eta_\alpha] \cdot \mathbb{I}[f_{u,j} \ge \eta_\beta]}{n^+_u \cdot n^-_u}$. (7) It is apparent that both AUC and OPAUC are special instances of our proposed LLPAUC metric: specifically, AUC = LLPAUC(1, 1) and OPAUC($\beta$) = LLPAUC(1, $\beta$). 4.1 Theoretical Analysis In this subsection, we present theoretical evidence that LLPAUC($\alpha$, $\beta$) is highly correlated with Top-K metrics such as Recall@K and Precision@K when $\alpha$ and $\beta$ are appropriately set. Theorem 1. Suppose there are $n^+$ positive items and $n^-$ negative items, where $n^+ > K$ and $n^- > K$.
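The empirical estimator in Equation (7) is straightforward to compute for a single user; a minimal sketch using empirical quantiles for $\eta_\alpha$ and $\eta_\beta$ (the quadratic pairwise count is fine for illustration, though not for very large item sets):

import numpy as np

def llpauc(pos_scores, neg_scores, alpha, beta):
    # Equation (7) for one user: count correctly ordered (i, j) pairs,
    # restricted to the top alpha-fraction of positives and the top
    # beta-fraction of negatives, normalized by n+ * n-.
    n_pos, n_neg = len(pos_scores), len(neg_scores)
    eta_alpha = np.quantile(pos_scores, 1.0 - alpha)
    eta_beta = np.quantile(neg_scores, 1.0 - beta)
    top_pos = pos_scores[pos_scores >= eta_alpha]
    top_neg = neg_scores[neg_scores >= eta_beta]
    return (top_pos[:, None] > top_neg[None, :]).sum() / (n_pos * n_neg)

# Special cases: llpauc(p, n, 1.0, 1.0) recovers AUC, and
# llpauc(p, n, 1.0, beta) recovers OPAUC(beta).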
Ranking all items in descending order according to the prediction scores obtained from any model $f$, we have $\frac{1}{n^+}\lfloor G_{\rm lower}(\mathrm{LLPAUC}(\alpha, \beta))\rfloor \le \mathrm{Recall@K} \le \frac{1}{n^+}\lceil G_{\rm higher}(\mathrm{LLPAUC}(\alpha, \beta))\rceil$, (8) $\frac{1}{K}\lfloor G_{\rm lower}(\mathrm{LLPAUC}(\alpha, \beta))\rfloor \le \mathrm{Precision@K} \le \frac{1}{K}\lceil G_{\rm higher}(\mathrm{LLPAUC}(\alpha, \beta))\rceil$, (9) where $\alpha = \frac{K}{n^+}$, $\beta = \frac{K}{n^-}$, and $G_{\rm lower}(\mathrm{LLPAUC}(\alpha, \beta)) = K - \sqrt{K^2 - n^+ n^- \times \mathrm{LLPAUC}(\alpha, \beta)}$, $G_{\rm higher}(\mathrm{LLPAUC}(\alpha, \beta)) = \sqrt{n^+ n^- \times \mathrm{LLPAUC}(\alpha, \beta)}$. (10) Theorem 2. The bounds for Top-K metrics in Eq. (8) and Eq. (9) are tighter than the bounds obtained with OPAUC in Theorem 3 of [32]. Figure 2: Pearson correlation coefficient between Recall@K and LLPAUC($\alpha$, $\beta$) (panels for Recall@20, Recall@100, and Recall@200). The proofs of Theorems 1 and 2 can be found in Appendices A and B, respectively. Based on the two theorems, we conclude the following: • LLPAUC($\alpha$, $\beta$) exhibits a stronger correlation with Top-K metrics such as Precision@K and Recall@K, when compared to OPAUC($\beta$) and AUC. Therefore, optimizing LLPAUC is expected to yield superior performance on the Top-K metrics. • In the derived bounds, both $\alpha = \frac{K}{n^+}$ and $\beta = \frac{K}{n^-}$ decrease as K decreases. This implies that while manipulating the value of K, adjustments to $\alpha$ and $\beta$ should be made in order to maintain a strong correlation between LLPAUC and the corresponding Top-K metrics. 4.2 Empirical Analysis We now provide empirical evidence to further substantiate the strong correlation between LLPAUC and Top-K metrics, performing Monte Carlo sampling experiments via simulation. Specifically, we assume that there are $n^+$ positive items and $n^-$ negative items, and take each possible permutation of all items to represent a possible ranking list. We randomly sample 10,000 permutations and calculate the Pearson correlation coefficient between LLPAUC($\alpha$, $\beta$) and Recall@K for different $\alpha$, $\beta$, and K. It should be noted that the trend is consistent across simulations with different numbers of positive and negative samples ($n^+$ and $n^-$). Therefore, without loss of generality, we set $n^+ = 1000$ and $n^- = 50000$, where $\alpha$ and $\beta$ are logarithmically scaled.
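The simulation can be reproduced along the following lines; a minimal sketch reusing the llpauc() estimator above (smaller item counts and fewer draws are used here to keep the quadratic pair count cheap; all sizes are illustrative):

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_pos, n_neg, K = 100, 5000, 20        # scaled-down n+ and n-
alpha, beta = K / n_pos, K / n_neg

def recall_at_k(pos, neg, k):
    scores = np.concatenate([pos, neg])
    labels = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
    return labels[np.argsort(-scores)[:k]].sum() / len(pos)

ll, rec = [], []
for _ in range(2000):                  # random score draws stand in for permutations
    pos, neg = rng.random(n_pos), rng.random(n_neg)
    ll.append(llpauc(pos, neg, alpha, beta))
    rec.append(recall_at_k(pos, neg, K))
rho, _ = pearsonr(ll, rec)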
It is worth noting that the correlations between Recall@K and OPAUC($\beta$) (or AUC) can be observed by examining LLPAUC(1, $\beta$) (or LLPAUC(1, 1)). From Figure 2, we observe the following. (1) The maximum correlation coefficient is obtained when $\alpha < 1$ and $\beta < 1$, with a value exceeding 0.8. This observation provides empirical evidence supporting the proposition that LLPAUC($\alpha$, $\beta$) exhibits a stronger correlation with Top-K metrics than the OPAUC and AUC metrics, validating Theorem 2. (2) As K decreases, the point corresponding to the maximum correlation coefficient shifts towards smaller values of $\alpha$ and $\beta$. This aligns with the conditions $\alpha = \frac{K}{n^+}$ and $\beta = \frac{K}{n^-}$ in the bounds of Eq. (8), further reinforcing the validity of Theorem 1. Furthermore, we observe that using both $\alpha$ and $\beta$ to regulate TPR and FPR enhances the alignment of LLPAUC with the Top-K ranking. Additionally, utilizing $\alpha$ to regulate TPR can also increase the robustness against noise, which we discuss next. • LLPAUC Enhancing Robustness Against Noise. As stated in [34], noisy positive interactions are harder to fit in the early training stage for recommendation, which results in relatively larger losses (lower predicted scores) for noisy interactions. As aforementioned, the constraint TPR $\le \alpha$ implies that LLPAUC considers only positive items with prediction scores $f_{u,i} \ge \eta_\alpha$. In this way, many noisy positive interactions are filtered out, which enhances the robustness of LLPAUC against noise. 5 METHOD In this section, we first introduce the loss function that enables efficient optimization of LLPAUC. We then describe the learning algorithm and discuss its time complexity. 5.1 Loss Function To optimize LLPAUC during model learning, it is necessary to convert LLPAUC($\alpha$, $\beta$) in Eq. (7) into a loss function that can be efficiently optimized. This involves transforming the non-differentiable and computationally expensive terms in Eq. (7), namely the pair-wise ranking term $\mathbb{I}[f_{u,i} > f_{u,j}]$ and the TPR and FPR constraint terms $\mathbb{I}[f_{u,i} \ge \eta_\alpha]$ and $\mathbb{I}[f_{u,j} \ge \eta_\beta]$, into low-complexity point-wise loss functions. To this end, we replace the pair-wise ranking term with a decouplable surrogate loss and design an average Top-K trick inspired by [31] to transform the constraint terms. Specifically, we follow four steps to derive our loss. • Step 1: replacing $\mathbb{I}[f_{u,i} > f_{u,j}]$ with a surrogate loss function. The non-continuous and non-differentiable $\mathbb{I}[f_{u,i} > f_{u,j}]$ in Eq. (7) also appears in the AUC and OPAUC formulations. To convert it, we adopt an approach similar to that used for AUC and OPAUC, replacing it with a continuous surrogate loss $\ell(f_{u,i} - f_{u,j})$. Under the assumption below, the surrogate $\ell(\cdot)$ is consistent for LLPAUC maximization [9]. Assumption 1.
We assume that $\ell(\cdot)$ is a convex, differentiable, and monotonically decreasing function when $\ell(\cdot) > 0$, and that $\ell'(0) < 0$. Then, maximizing LLPAUC($\alpha$, $\beta$) in Eq. (7) is equivalent to minimizing the following loss: $\min_\theta \frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}}\sum_{i\in\mathcal{I}^+_u}\sum_{j\in\mathcal{I}^-_u} \frac{\ell(f_{u,i} - f_{u,j}) \cdot \mathbb{I}[f_{u,i} \ge \eta_\alpha] \cdot \mathbb{I}[f_{u,j} \ge \eta_\beta]}{n^+_u \cdot n^-_u}$. (11) • Step 2: decoupling the pair-wise loss into a point-wise loss. By setting $\ell(x) = (1 - x)^2$, a square loss satisfying Assumption 1, we can decouple the total loss into positive-item and negative-item components, resulting in a point-wise loss. Lemma 1. (Proof in Appendix C.) With $\ell(x) = (1 - x)^2$, the LLPAUC($\alpha$, $\beta$) optimization problem in Eq. (11) is equal to $\min_{\theta,\,(a,b)\in[0,1]^2}\max_{\gamma\in[-1,1]} \frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}}\left\{\sum_{i\in\mathcal{I}^+_u} \frac{\ell^+(f_{u,i})\,\mathbb{I}[f_{u,i} \ge \eta_\alpha]}{n^+_u} + \sum_{j\in\mathcal{I}^-_u} \frac{\ell^-(f_{u,j})\,\mathbb{I}[f_{u,j} \ge \eta_\beta]}{n^-_u} - \gamma^2\right\}$, (12) where $a$, $b$, and $\gamma$ are learnable parameters, $\ell^+(f_{u,i}) = (f_{u,i} - a)^2 - 2(1+\gamma)f_{u,i}$, and $\ell^-(f_{u,j}) = (f_{u,j} - b)^2 + 2(1+\gamma)f_{u,j}$. • Step 3: reformulating the TPR and FPR constraint terms using an average Top-K trick. The constraint terms $\mathbb{I}[f_{u,i} \ge \eta_\alpha]$ and $\mathbb{I}[f_{u,j} \ge \eta_\beta]$ require selecting highly scored positive and negative items, which renders the loss in Eq. (12) still non-differentiable and difficult to optimize. Fortunately, under certain conditions, $\ell^+(f_{u,i})$ is monotonically decreasing w.r.t. $f_{u,i}$ and $\ell^-(f_{u,j})$ is monotonically increasing w.r.t. $f_{u,j}$, as proven in Appendix D. We can then make the item selection process differentiable using the average Top-K reformulation trick introduced below.
Lemma 2. (Proof in Appendix E.) Suppose $\ell^+(f_{u,i})$ is monotonically decreasing w.r.t. $f_{u,i}$ and $\ell^-(f_{u,j})$ is monotonically increasing w.r.t. $f_{u,j}$; then we have $\sum_{i\in\mathcal{I}^+_u} \left[\ell^+(f_{u,i}) \cdot \mathbb{I}[f_{u,i} \ge \eta_\alpha]\right] = \max_{s^+\in\mathbb{R}} \sum_{i\in\mathcal{I}^+_u} \left[-\alpha s^+ - [-\ell^+(f_{u,i}) - s^+]_+\right]$ and $\sum_{j\in\mathcal{I}^-_u} \left[\ell^-(f_{u,j}) \cdot \mathbb{I}[f_{u,j} \ge \eta_\beta]\right] = \min_{s^-\in\mathbb{R}} \sum_{j\in\mathcal{I}^-_u} \left[\beta s^- + [\ell^-(f_{u,j}) - s^-]_+\right]$, where $s^+$ and $s^-$ are learnable parameters and $[x]_+ = \max(0, x)$. By leveraging the average Top-K reformulation trick in the lemma, we can reformulate the LLPAUC optimization problem in Eq. (12) as follows: $\min_{\theta,\,(a,b)\in[0,1]^2}\max_{\gamma\in\Omega_\gamma} \frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}}\left\{\max_{s^+\in\mathbb{R}} \sum_{i\in\mathcal{I}^+_u} \frac{-\alpha s^+ - [-\ell^+(f_{u,i}) - s^+]_+}{n^+_u} + \min_{s^-\in\mathbb{R}} \sum_{j\in\mathcal{I}^-_u} \frac{\beta s^- + [\ell^-(f_{u,j}) - s^-]_+}{n^-_u} - \gamma^2\right\}$, (13) where $\Omega_\gamma = [\max(-a, b-1), 1]$. • Step 4: swapping the min-max operations. Solving Eq. (13) directly is challenging, since it involves a complicated min-max-min sub-problem (it also contains a manageable min-max-max sub-problem). However, as done in [31], we can swap the order of the latter $\max_\gamma$ and $\min_{s^-}$ operations in the min-max-min sub-problem after applying two preprocessing steps: 1) replacing the non-smooth function $[\cdot]_+$ with the softplus function [13], and 2) adding an $L_2$ regularizer to make Eq. (13) strongly concave w.r.t. $\gamma$. Finally, according to the minimax theorem [1], we can merge the consecutive min (or max) operations, converting the overall optimization problem into a min-max form. Formally, Eq. (13) can be reformulated as (see Appendix F for the proof): $\min_{\{\theta,\,(a,b)\in[0,1]^2,\,s^-\in\mathbb{R}\}}\max_{\{\gamma\in\Omega_\gamma,\,s^+\in\mathbb{R}\}} \frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}}\left\{\sum_{i\in\mathcal{I}^+_u} \frac{-\alpha s^+ - r_\rho(-\ell^+(f_{u,i}) - s^+)}{n^+_u} + \sum_{j\in\mathcal{I}^-_u} \frac{\beta s^- + r_\rho(\ell^-(f_{u,j}) - s^-)}{n^-_u} - (w+1)\gamma^2\right\}$, (14) where $\Omega_\gamma = [\max(-a, b-1), 1]$ and $r_\rho$ denotes the softplus function. Formally, $r_\rho(x) = \frac{1}{\rho}\log(1 + \exp(\rho \cdot x))$, where $\rho$ is a hyperparameter. It is easy to show that $r_\rho(x) \rightarrow [x]_+$ as $\rho \rightarrow \infty$, which leads to asymptotically unbiased optimization.
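For concreteness, the per-user mini-batch objective of Equation (14) and one SGDA step can be sketched in PyTorch as follows. This is a minimal illustration, not our released implementation: all hyperparameter values are placeholders, the recommender parameters $\theta$ are omitted, and f_pos / f_neg stand in for model scores in [0, 1].

import torch
import torch.nn.functional as F

alpha, beta, rho, w = 0.1, 0.01, 10.0, 1.0   # illustrative hyperparameters

# Learnable scalars: (a, b, s_minus) are minimized, (gamma, s_plus) maximized.
a = torch.tensor(0.5, requires_grad=True)
b = torch.tensor(0.5, requires_grad=True)
s_minus = torch.tensor(0.0, requires_grad=True)
gamma = torch.tensor(0.0, requires_grad=True)
s_plus = torch.tensor(0.0, requires_grad=True)

def objective(f_pos, f_neg):
    r = lambda x: F.softplus(rho * x) / rho                   # r_rho(x)
    ell_pos = (f_pos - a) ** 2 - 2.0 * (1.0 + gamma) * f_pos  # Lemma 1
    ell_neg = (f_neg - b) ** 2 + 2.0 * (1.0 + gamma) * f_neg
    pos_term = (-alpha * s_plus - r(-ell_pos - s_plus)).mean()
    neg_term = (beta * s_minus + r(ell_neg - s_minus)).mean()
    return pos_term + neg_term - (w + 1.0) * gamma ** 2

min_opt = torch.optim.SGD([a, b, s_minus], lr=1e-2)  # model params would join here
max_opt = torch.optim.SGD([gamma, s_plus], lr=1e-2)

f_pos, f_neg = torch.rand(64), torch.rand(256)       # stand-in scores
loss = objective(f_pos, f_neg)                       # descent step
min_opt.zero_grad(); loss.backward(); min_opt.step()
neg_loss = -objective(f_pos, f_neg)                  # ascent step (negated loss)
max_opt.zero_grad(); neg_loss.backward(); max_opt.step()
with torch.no_grad():                                # project onto feasible sets
    a.clamp_(0.0, 1.0); b.clamp_(0.0, 1.0)
    gamma.clamp_(max(-a.item(), b.item() - 1.0), 1.0)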
Remark. Our final loss function in Eq. (14) is similar to the one proposed in [31]. However, it is important to emphasize that the primary contribution of our work is not the introduction of a completely new optimization scheme. Rather, our main contribution lies in extending existing optimization methods to align with our novel LLPAUC metric while addressing the challenges associated with the coexistence of minimization and maximization operations.

• Learning Algorithm and Time Complexity Analysis. To solve the min-max optimization in Eq. (14), we employ a stochastic gradient descent ascent (SGDA) method. The detailed algorithm can be found in Appendix G.2. Based on it, we derive that the total per-iteration complexity of our method is the same as that of classical loss functions such as BPR [30] and BCE [6]. The detailed derivation can also be found in Appendix G.2.

6 EXPERIMENTS

In this section, we conduct a series of experiments on three datasets to evaluate the effectiveness and robustness of our proposed optimization metric LLPAUC along with its loss function. Due to space limitations, additional experimental results, including some supplemental results from the rebuttal stage, can be found in the arXiv version of the paper.

6.1 Experiments Setting

Dataset. We conduct experiments on three real-world datasets: Adressa, Yelp, and Amazon-book. Our dataset selection intentionally covers a broad range of recommendation scenarios and different dataset sizes. Adressa is a news reading dataset from Adressavisen [14], where clicks with dwell time < 10s are treated as noisy interactions [34]. Yelp (https://www.yelp.com/dataset/challenge) is a restaurant recommendation dataset with user ratings from one to five. Amazon-book (https://jmcauley.ucsd.edu/data/amazon/) is from the Amazon-Review [17] datasets, containing user ratings of an extensive collection of books. A rating score below 3 on Yelp and Amazon-book is regarded as a noisy interaction.
Table 1: Performance comparison on three datasets with clean training. The best results are highlighted in bold.

MF backbone:

| Method | Adressa Recall@20 | Adressa NDCG@20 | Yelp Recall@20 | Yelp NDCG@20 | Amazon Recall@20 | Amazon NDCG@20 |
|---|---|---|---|---|---|---|
| BCE | 0.1573±0.0251 | 0.0793±0.0181 | 0.0814±0.0004 | 0.0448±0.0005 | 0.0663±0.0006 | 0.0363±0.0002 |
| BPR | 0.1800±0.0204 | 0.0991±0.0144 | 0.0647±0.0005 | 0.0358±0.0002 | 0.0695±0.0001 | 0.0384±0.0007 |
| SCE | 0.2001±0.0031 | 0.1057±0.0015 | 0.0762±0.0007 | 0.0425±0.0003 | 0.0894±0.0012 | 0.0507±0.0009 |
| CCL | 0.1956±0.0110 | 0.0911±0.0028 | 0.0842±0.0002 | 0.0486±0.0000 | 0.0944±0.0001 | 0.0551±0.0008 |
| DNS(M, N) | 0.1877±0.0025 | 0.0965±0.0010 | 0.0856±0.0005 | 0.0489±0.0002 | 0.1012±0.0006 | 0.0580±0.0003 |
| Softmax_v(ρ, N) | 0.1849±0.0105 | 0.0949±0.0088 | 0.0824±0.0008 | 0.0470±0.0004 | 0.1024±0.0001 | 0.0592±0.0001 |
| PAUCI(OPAUC) | 0.2021±0.0014 | 0.1086±0.0007 | 0.0821±0.0004 | 0.0479±0.0003 | 0.0991±0.0001 | 0.0549±0.0002 |
| LLPAUC | 0.2166±0.0022 | 0.1214±0.0009 | 0.0884±0.0005 | 0.0505±0.0003 | 0.1076±0.0007 | 0.0612±0.0004 |

LightGCN backbone:

| Method | Adressa Recall@20 | Adressa NDCG@20 | Yelp Recall@20 | Yelp NDCG@20 | Amazon Recall@20 | Amazon NDCG@20 |
|---|---|---|---|---|---|---|
| BCE | 0.1897±0.0004 | 0.0935±0.0002 | 0.0905±0.0003 | 0.0517±0.0004 | 0.1149±0.0003 | 0.0660±0.0003 |
| BPR | 0.1737±0.0006 | 0.0923±0.0004 | 0.0802±0.0005 | 0.0453±0.0003 | 0.0922±0.0002 | 0.0520±0.0001 |
| SCE | 0.1729±0.0008 | 0.0960±0.0007 | 0.0890±0.0005 | 0.0506±0.0004 | 0.1115±0.0004 | 0.0640±0.0002 |
| CCL | 0.1926±0.0008 | 0.1014±0.0009 | 0.0915±0.0006 | 0.0528±0.0005 | 0.1007±0.0000 | 0.0614±0.0001 |
| DNS(M, N) | 0.1830±0.0035 | 0.0952±0.0006 | 0.0962±0.0003 | 0.0550±0.0002 | 0.1056±0.0004 | 0.0597±0.0002 |
| Softmax_v(ρ, N) | 0.1923±0.0107 | 0.1056±0.0117 | 0.0975±0.0001 | 0.0567±0.0000 | 0.1128±0.0007 | 0.0724±0.0006 |
| LLPAUC | 0.2311±0.0004 | 0.1312±0.0002 | 0.1002±0.0003 | 0.0573±0.0004 | 0.1201±0.0003 | 0.0684±0.0003 |

Training Settings. We employ two training settings, clean training and noise training, to verify the effectiveness and robustness of our proposed loss. Following [35], clean training filters out noisy user interactions and divides the remaining data into separate training, validation, and testing sets. In contrast, noise training retains the same testing set as clean training but adds noisy interactions to the training and validation sets. Note that we keep the numbers of noisy training and validation interactions on a similar scale to clean training for a fair comparison.

Evaluation Protocols. Following existing studies [19, 30], we adopt the full-ranking evaluation setting, where we calculate the metrics using all negative samples. We use two popular metrics, Recall@K and NDCG@K with K = 20, where higher scores indicate better performance.
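As a concrete reference for the protocol, here is a small NumPy sketch of per-user Recall@K and NDCG@K under full ranking; the function and argument names are our own, and the denominator conventions are one standard choice, not necessarily the paper's exact ones.

```python
import numpy as np

def recall_ndcg_at_k(scores, test_items, k=20):
    """Full-ranking Recall@K and NDCG@K (binary relevance) for one user.

    scores: model scores over the entire item catalog, with the user's
    training interactions masked out; test_items: held-out positive item ids
    (assumed non-empty).
    """
    top_k = np.argsort(-scores)[:k]
    hits = np.isin(top_k, test_items).astype(float)
    recall = hits.sum() / len(test_items)
    dcg = (hits / np.log2(np.arange(2, k + 2))).sum()
    idcg = (1.0 / np.log2(np.arange(2, min(len(test_items), k) + 2))).sum()
    return recall, dcg / idcg
```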
Baselines. We compare our LLPAUC surrogate loss function with the following representative recommender losses: 1) Bayesian Personalized Ranking (BPR) [30] loss, a pair-wise loss function that optimizes the AUC metric; 2) Binary Cross-Entropy (BCE) [19] loss, which optimizes an accuracy metric; 3) Softmax Cross-Entropy (SCE) [6] loss, which is widely used for classification problems and maximizes the likelihood of the observed classes; 4) DNS(M, N) and Softmax_v(ρ, N), advanced OPAUC-based loss functions for recommender systems; and 5) PAUCI(OPAUC) [31], another advanced OPAUC-based loss function. For clean training, the recent 6) Cosine Contrastive Loss (CCL) [22] is included in the comparison. For noise training, we add the strong denoising baselines 7) RCE and TCE [34]. In Appendix H.2, we also compare the LLPAUC loss function with advanced learning-to-rank (LTR) methods.

Parameter Settings. For a fair comparison, we choose two representative recommender models, Matrix Factorization (MF) and the graph neural network model LightGCN [18], as the backbones for all loss functions. All models are optimized by the Adam optimizer with a learning rate of 0.001 and a batch size of 128. During training, we adopt the widely used negative sampling trick [22] to improve training efficiency, with 100 negative items sampled per positive item. For the proposed LLPAUC surrogate loss function, we tune α within {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9} and β within {0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 0.7, 0.9}. All hyperparameter searches rely on the validation set. We report results (mean and standard deviation) over three repeats with distinct random seeds.

6.2 Main Results

Clean Training. Table 1 shows the performance comparison between the LLPAUC surrogate loss function and the baselines under the clean training setting with MF and LightGCN backbones. Several key observations can be made: 1) LLPAUC consistently achieves the best performance in most cases across all three datasets and both backbones, outperforming the other loss functions significantly. This demonstrates that LLPAUC correlates with Top-K metrics more strongly than other optimization metrics, consistent with our earlier theoretical analysis and independent of the dataset and the backbone. 2) The performance of BPR is noticeably inferior to that of DNS(M, N) and Softmax_v(ρ, N) on all datasets and backbones. Drawing on the prior knowledge that OPAUC correlates with Top-K more strongly than AUC, we can infer that optimization metrics closely tied to Top-K yield superior performance.
This finding validates our motivation for proposing LLPAUC. 3) In contrast to BPR and BCE, the other losses can implicitly pay more attention to hard negative items, resulting in their superior performance. In LLPAUC, we can similarly adjust the attention to hard negative items by varying the β parameter. 4) LightGCN outperforms MF in most cases, highlighting its strength as a representative graph neural network backbone.

Table 2: Performance comparison on three datasets with noise training. The best results are highlighted in bold.

MF backbone:

| Method | Adressa Recall@20 | Adressa NDCG@20 | Yelp Recall@20 | Yelp NDCG@20 | Amazon Recall@20 | Amazon NDCG@20 |
|---|---|---|---|---|---|---|
| BCE | 0.1551±0.0025 | 0.0762±0.0007 | 0.0799±0.0014 | 0.0438±0.0009 | 0.0911±0.0009 | 0.0515±0.0009 |
| BPR | 0.1666±0.0215 | 0.0880±0.0139 | 0.0626±0.0014 | 0.0341±0.0009 | 0.0663±0.0008 | 0.0363±0.0006 |
| SCE | 0.1938±0.0010 | 0.1062±0.0007 | 0.0738±0.0003 | 0.0406±0.0009 | 0.0840±0.0010 | 0.0470±0.0011 |
| TCE | 0.1465±0.0022 | 0.0862±0.0007 | 0.0826±0.0008 | 0.0456±0.0005 | 0.0906±0.0018 | 0.0514±0.0011 |
| RCE | 0.1617±0.0329 | 0.0819±0.0221 | 0.0818±0.0009 | 0.0452±0.0005 | 0.0965±0.0017 | 0.0549±0.0015 |
| DNS(M, N) | 0.1802±0.0125 | 0.0847±0.0097 | 0.0844±0.0016 | 0.0477±0.0008 | 0.0966±0.0003 | 0.0543±0.0003 |
| Softmax_v(ρ, N) | 0.1801±0.0086 | 0.0922±0.0054 | 0.0816±0.00014 | 0.0452±0.0005 | 0.0954±0.0002 | 0.0536±0.0001 |
| LLPAUC | 0.2127±0.0014 | 0.1189±0.0009 | 0.0847±0.0007 | 0.0481±0.0001 | 0.0998±0.0008 | 0.0566±0.0006 |

LightGCN backbone:

| Method | Adressa Recall@20 | Adressa NDCG@20 | Yelp Recall@20 | Yelp NDCG@20 | Amazon Recall@20 | Amazon NDCG@20 |
|---|---|---|---|---|---|---|
| BCE | 0.1844±0.0005 | 0.0874±0.0002 | 0.0888±0.0003 | 0.0497±0.0001 | 0.1095±0.0003 | 0.0620±0.0001 |
| BPR | 0.1661±0.0007 | 0.0914±0.0006 | 0.0800±0.0005 | 0.0448±0.0002 | 0.0884±0.0005 | 0.0492±0.0002 |
| SCE | 0.1732±0.0008 | 0.0936±0.0005 | 0.0916±0.0003 | 0.0514±0.0003 | 0.1068±0.0003 | 0.0604±0.0002 |
| TCE | 0.2184±0.0005 | 0.1187±0.0005 | 0.0923±0.0004 | 0.0522±0.0003 | 0.1085±0.0004 | 0.0611±0.0002 |
| RCE | 0.2204±0.0007 | 0.1219±0.0007 | 0.0941±0.0006 | 0.0536±0.0008 | 0.1126±0.0004 | 0.0639±0.0005 |
| DNS(M, N) | 0.1701±0.0017 | 0.0889±0.0011 | 0.0948±0.0002 | 0.0536±0.0001 | 0.1012±0.0002 | 0.0570±0.0001 |
| Softmax_v(ρ, N) | 0.1815±0.0047 | 0.0939±0.0084 | 0.0957±0.0002 | 0.0549±0.0002 | 0.1076±0.0003 | 0.0682±0.0004 |
| LLPAUC | 0.2228±0.0006 | 0.1231±0.0005 | 0.0981±0.0007 | 0.0558±0.0004 | 0.1165±0.0007 | 0.0655±0.0005 |

[Figure 3: Ablation studies among different AUC metrics with clean training and noise training.]

Noise Training. In real-world recommender systems, user interactions collected through implicit feedback often contain natural false-positive interactions. To evaluate the robustness of LLPAUC, we compare LLPAUC with the other loss functions under the noise training setting in Table 2. Notably, we observe: 1) Across all three datasets, model performance under the noise training setting drops for all loss functions compared to the clean training setting. This is expected, because it is more challenging to predict user preference from noisy interactions.
2) Denoising baselines like RCE and TCE achieve better performance than the other baselines across all datasets, highlighting the importance of noise removal. 3) LLPAUC surpasses all baselines in most cases on all datasets, verifying its strong robustness against natural noise. The robustness of LLPAUC stems from its emphasis on higher-ranked positive items, which can be adjusted via the hyperparameter α.

6.3 In-depth Analysis

Ablation Study. We next conduct ablation studies to assess the significance of the TPR and FPR constraints in LLPAUC(α, β). Note that restricting the upper bounds of TPR and FPR represents the emphasis on high-ranked positive and negative items in LLPAUC, respectively. As shown in Eq. (6), OPAUC(β) = LLPAUC(1, β) and AUC = LLPAUC(1, 1). Based on this, we obtain ablation loss functions for AUC and OPAUC(β) by setting α and β accordingly in Eq. (14). The results of the ablation studies are summarized in Figure 3, where we observe that: 1) Under clean training, LLPAUC outperforms OPAUC, and OPAUC performs better than AUC. This verifies that both the emphasis on high-ranked positive items and the emphasis on high-ranked negative items strengthen the correlation between LLPAUC and Top-K metrics. 2) When exposed to noisy interactions, LLPAUC shows relatively minor performance degradation compared to OPAUC and AUC, showcasing its robustness against noise. This is attributed to LLPAUC's emphasis on high-ranked positive items and its avoidance of low-ranked noisy samples.

[Figure 4: Normalized Recall@K on Adressa dataset under clean training for K=20, 50 and 100.]

Hyperparameter Analysis. To verify the impact of the constraints of LLPAUC, we conduct grid-search experiments on the hyperparameters α and β and present the corresponding Recall@K performance in Figure 4. To facilitate comparison, we report normalized Recall@K metrics, where $\text{Normalized\_Recall} = \frac{\text{Recall} - \text{Min\_Recall}}{\text{Max\_Recall} - \text{Min\_Recall}}$. From the figure, we observe that: 1) The maximum performance is obtained with α < 1 and β < 1. Recall that AUC = LLPAUC(1, 1) and OPAUC = LLPAUC(1, β). Hence, both restrictions, on α and on β, enhance LLPAUC's correlation with the Top-K metric, which is consistent with our Theorem 2 and the empirical analysis in Section 4.2. 2) As K in Recall@K decreases, we should shift towards smaller values of α and β to achieve the best performance, empirically corroborating the bound conditions in our Theorem 1. This means we can emphasize different Top-K performances for different K by adjusting α and β in LLPAUC.

[Figure 5: Given a fixed β, the hyperparameter analysis of α in LLPAUC(α, β) on different datasets under the clean training setting and the noise training setting.]

Analysis of Robustness. In this subsection, we conduct experiments to analyze the impact of the hyperparameter α on the robustness of the model.
Given a fixed β, Figure 5 shows how the LLPAUC model's performance changes w.r.t. α under the clean training and noise training settings. Since the natural noise in the Adressa dataset is relatively weak, we do not include it in this comparison. From the figure, we observe that: 1) Since noisy interactions impede the model's ability to learn the true interests of users, the performance in the noise training setting consistently falls below that of the clean training setting. This is consistent with our observation in Table 1. 2) Given a fixed β, the maximum Recall@20 performance of LLPAUC is achieved with α = 0.9 under the clean training setting and α = 0.8 under the noise training setting. This means that under the noise training setting, we should choose a smaller α to enhance robustness. Since α constrains TPR in LLPAUC as stated in Eq. (6), we conclude that the emphasis on high-ranked positive items can enhance model robustness."
+ },
+ {
+ "url": "http://arxiv.org/abs/2403.00843v2",
+ "title": "Large Language Models are Learnable Planners for Long-Term Recommendation",
+ "abstract": "Planning for both immediate and long-term benefits becomes increasingly\nimportant in recommendation. Existing methods apply Reinforcement Learning (RL)\nto learn planning capacity by maximizing cumulative reward for long-term\nrecommendation. However, the scarcity of recommendation data presents\nchallenges such as instability and susceptibility to overfitting when training\nRL models from scratch, resulting in sub-optimal performance. In this light, we\npropose to leverage the remarkable planning capabilities over sparse data of\nLarge Language Models (LLMs) for long-term recommendation. The key to achieving\nthe target lies in formulating a guidance plan following principles of\nenhancing long-term engagement and grounding the plan to effective and\nexecutable actions in a personalized manner. To this end, we propose a Bi-level\nLearnable LLM Planner framework, which consists of a set of LLM instances and\nbreaks down the learning process into macro-learning and micro-learning to\nlearn macro-level guidance and micro-level personalized recommendation\npolicies, respectively. Extensive experiments validate that the framework\nfacilitates the planning ability of LLMs for long-term recommendation. Our code\nand data can be found at https://github.com/jizhi-zhang/BiLLP.",
+ "authors": "Wentao Shi, Xiangnan He, Yang Zhang, Chongming Gao, Xinyue Li, Jizhi Zhang, Qifan Wang, Fuli Feng",
+ "published": "2024-02-29",
+ "updated": "2024-04-26",
+ "primary_cat": "cs.IR",
+ "cats": [
+ "cs.IR",
+ "cs.AI",
+ "cs.CL",
+ "cs.LG"
+ ],
+ "main_content": "INTRODUCTION Recommendation systems have gained widespread adoption in contemporary society to alleviate the overwhelming burden of information overload [5]. Traditionally, researchers primarily focused on optimizing users' immediate responses (e.g., clicks) to maximize instant benefits [51]. However, such a greedy recommendation strategy tends to cater to users' immediate interests excessively, neglecting long-term engagement and even negatively influencing the ecosystem [44]. For instance, some users will be confined within an echo chamber of preferred information and filter bubbles [17]. Therefore, it is essential to investigate long-term recommendation.
To tackle this challenge, it is crucial to integrate planning capabilities into the recommendation decision-making process, so as to develop policies that take into account not only immediate benefits but also long-term consequences. Existing work primarily employs Reinforcement Learning (RL) [7, 31, 48, 63] to acquire planning capabilities implicitly, training models from scratch with the objective of maximizing cumulative rewards. However, these approaches are entirely data-driven, and their efficacy is significantly constrained by the quality and quantity of available data [15, 17, 38]. Unfortunately, recommendation data is typically sparse and naturally long-tail distributed [6]. This poses a significant challenge for RL to acquire planning ability, particularly for sparse or long-tail items and users, resulting in sub-optimal performance.

LLMs have emerged with powerful planning capabilities through pre-training on massive and diverse textual data [1, 36, 42]. Previous studies have demonstrated that LLMs can break down complex textual and agent tasks into subtasks and then execute them sequentially [20, 21, 37, 45]. By conceptualizing multi-round recommendation as an analogous complex task, there is potential to harness the planning prowess of LLMs to devise a multi-round recommendation policy aimed at maximizing long-term engagement. Once this is realized, the extensive world knowledge and robust reasoning capabilities inherent in LLMs are expected to yield superior planning even in scenarios with sparse recommendation data, especially for long-tail items.

To achieve this target, the key is to recall task-solving principles to formulate a plan and to make the plan effective and executable for individual users. However, directly acquiring such planning capabilities is non-trivial, due to the substantial scenario divergence between LLM pre-training and recommendation. In the realm of recommendation tasks, the LLM itself may not naturally exhibit an inherent understanding (or commonsense) of the principles that enhance long-term engagement. Additionally, when tailoring recommendations for individual users, a personalized and item-specific strategy becomes essential, far beyond mere awareness of such guiding principles. It is therefore necessary to inspire or teach the LLM to acquire the desired principles and make them personalized.

We propose a novel Bi-level Learnable LLM Planning (BiLLP) framework for long-term recommendation. BiLLP breaks down the learning process into macro-learning and micro-learning through a hierarchical mechanism. Macro-learning, aimed at acquiring high-level guiding principles, includes a Planner and a Reflector, both implemented as LLM instances. The Planner leverages memorized high-level experiences that imply guiding principles to formulate high-level plans for long-term goals, while the Reflector reflects on finished trajectories to gather new experiences for updating the Planner. Micro-learning includes an LLM-based Actor-Critic component to acquire planning personalization. The Actor personalizes high-level plans into executable actions for users. The Critic functions similarly to the Reflector but operates at a more fine-grained level.
It can promptly evaluate the long-term advantage of an action given a state, facilitating the swift update of the Actor policy and mitigating high-variance issues in Q-values [9]. The main contributions of this work are summarized as follows:

• We introduce the idea of exploring the planning ability of LLMs with a bi-level planning scheme to enhance long-term engagement in recommendation.
• We propose a new BiLLP framework with four modules, which learns planning ability at both the macro and micro levels with low-variance estimations of Q-values.
• We conduct extensive experiments, validating the capability of LLMs to plan for long-term recommendation and the superiority of the BiLLP framework.

2 RELATED WORK

• Interactive Recommendation. Interactive recommendation is a typical setting for studying long-term recommendation, where a model engages in online interactions with a user [16, 48]. In contrast to the static recommendation setting, where the focus is on identifying "correct" answers within a test set, interactive recommendation assesses the efficacy of results by the rewards accumulated over interaction trajectories. To improve interactive recommendation, extensive effort [10, 23, 62] has been made to model the recommendation environment as a Markov decision process (MDP) and then utilize advanced RL algorithms to deliver the optimal policy [7, 31, 48, 63]. CIRS [17] learns a causal user model on historical data to capture the overexposure effect of items on user satisfaction, facilitating the planning of the RL policy. DORL [15] alleviates the Matthew effect of offline RL to improve long-term engagement. However, these RL-based methods exhibit suboptimal learning efficiency and poor planning performance when confronted with sparse recommendation data.

• LLM for Recommendation. The LLM-based recommendation paradigm has achieved remarkable advancements [11, 29, 50] owing to the extraordinary abilities of LLMs such as GPT4 [1] and Llama2 [42]. Distinct from existing LLM-based recommendation methods, which are limited to directly using in-context learning [8, 19, 32, 47, 52, 58] or tuning [3, 26, 30, 49, 59, 60] for immediate responses, our proposed BiLLP delves into how the powerful planning ability of LLMs can improve the long-term engagement of recommender systems. Some other approaches attempt to exploit LLMs' planning capability for managing API tools [12, 22, 43] in recommendation, but they are likewise restricted to immediate responses and lack focus on users' long-term engagement. In contrast to these approaches, BiLLP deeply harnesses the planning capabilities of LLMs to enhance long-term user engagement, which is particularly challenging to optimize in traditional recommendation.

• LLM Planner. After undergoing pre-training and instruction tuning, LLMs attain extensive world knowledge and proficient planning capabilities. Recent work [20, 21, 37, 45] exploits these capabilities to generate better control plans for robots and agents. ReAct [55] effectively integrates action decisions with planning, resulting in promising performance. SwiftSage [28] integrates fast and slow thinking to solve complex tasks. However, these methods lack the ability to learn from past experiences, which would allow for better task planning.
To enable self-improvement without fine-tuning, Reflexion [40] verbally reflects on task feedback signals. ExpeL [61] utilizes cross-task persistent memory to store insights and trajectories. AdaPlanner [41] can learn from past failures, past successes, or both. In addition to these macro-level refinements, some work [4, 39, 57] integrates LLMs with RL algorithms to learn from micro-level interaction experiences. However, these methods suffer from high-variance estimations of Q-values, an issue alleviated by our proposed Critic module.

3 PRELIMINARY

• Problem Definition. Following recent work on long-term recommendation [17], we adopt the interactive recommendation setting. The target is to learn a recommendation model that recommends an item $i \in \mathcal{I}$ (i.e., makes an action $a_n$) to a user $u \in \mathcal{U}$ at each step $n$, based on the current state $s_n$. As an initial attempt, we constrain each action $a_n$ to recommend only one item, and $s_1$ denotes the initial state before interacting with the model. When applying LLMs to interactive recommendation, the recommendation process at each step $n$ involves two main operations: generating a problem-solving plan, referred to as a thought $t_n$, and subsequently providing an item recommendation, denoted as an action $a_n$. Based on this, an entire interaction episode can be denoted as

$$\mathcal{H}^{1 \cdots N} = \{s_1, t_1, a_1, r_1, \cdots, s_N, t_N, a_N, r_N\}, \quad (1)$$

and a trajectory $\mathcal{H}^{1 \cdots n}$ $(1 \leq n \leq N)$ can be thought of as a prefix subsequence of an episode.

[Figure 1: The overview of the proposed BiLLP framework. The black line indicates that the data serves as a prompt input for the subsequent module. The red line denotes that the data is utilized to update the memory of the subsequent module.]

• Simulated Environment. The interactive recommendation setting requires immediate user feedback for recommendation actions. Since collecting online feedback from users can be financially burdensome, we follow [15, 17] and construct a simulated environment with offline data for both model learning and testing. This environment can mimic users' behaviors in online scenarios, accept recommendations (i.e., an action $a$) from the model, and provide feedback (i.e., a reward $r$) accordingly.
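To make the notation concrete, below is a minimal Python sketch of the episode structures of Eq. (1); the class and field names are our illustrative choices, not part of any released code.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Step:
    """One interaction round (s_n, t_n, a_n, r_n) in the notation of Eq. (1)."""
    state: str     # textual user state s_n
    thought: str   # plan / sub-plan t_n produced by the Planner
    action: str    # recommended item a_n executed by the Actor
    reward: float  # environment feedback r_n

@dataclass
class Episode:
    """A full episode H^{1..N}; its prefixes H^{1..n} are trajectories."""
    steps: List[Step] = field(default_factory=list)

    def trajectory(self, n: int) -> List[Step]:
        return self.steps[:n]
```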
4 METHOD

In this section, we present the proposed BiLLP framework for improving long-term engagement in interactive recommendation. As shown in Figure 1, the recommendation process of our framework involves two main steps:

• The Planner generates problem-solving plans (i.e., thoughts $t$), where the recommendation task is broken into sequential step-by-step sub-plans, striking a harmonious balance between exploration and exploitation.
• The Actor recommends items (i.e., takes actions $a$) to the user by incorporating both the macro-level sub-plans (thoughts) and micro-learning experiences.

To generate appropriate plans and personalized item recommendations, the key lies in teaching LLMs to learn from past interaction episodes. To enhance the learning process, the BiLLP framework employs a hierarchical mechanism (see Figure 1):

• Macro-learning involves the Planner and the Reflector to generate more appropriate plans, where the Reflector extracts high-level guiding principles from historical episodes and incorporates them into the input of the Planner to enhance the quality of plan generation.
• Micro-learning involves the Actor and the Critic to generate more personalized recommendations, where the Critic assesses the user's current satisfaction level (the action advantage value) and updates the policy of the Actor to enhance personalization.

4.1 Macro-Learning

Macro-learning refers to a process in which the Reflector generates reflections based on historical episodes and subsequently writes them into the memory of the Planner. The Planner then retrieves the most relevant reflections from the memory and utilizes them as prompts to enhance the quality of plan generation. Next, we present the details of the Reflector and the Planner, and the procedure of the macro-learning process.

4.1.1 Reflector. The Reflector is designed to extract guiding principles from historical episode data. When a user ends their interaction with the model, we take the complete interaction episode $\mathcal{H}_c^{1 \cdots N}$ as input and generate reflections $\ell$ as follows:

$$\ell_c = \mathrm{Reflector}(\mathcal{H}_c^{1 \cdots N}). \quad (2)$$

We implement the Reflector as an LLM instance. Based on a predefined instruction prompt and few-shot examples $\mathcal{P}_R$, the reflection generation process can be formulated as:

$$\ell_c = \mathrm{LLM}(\mathcal{P}_R, \mathcal{H}_c^{1 \cdots N}). \quad (3)$$

The obtained reflections are then used to update the memory $\mathcal{M}_P$ of the Planner, denoted as $\ell_c \rightarrow \mathcal{M}_P$. To facilitate understanding, we provide an example of a reflection in Table 1, which primarily covers two high-level aspects: analysis of withdrawal reasons and prospective guidance.

Table 1: Example of reflections.

| Reflection Case 1 |
|---|
| The user became dissatisfied with the final recommendation, which was a repeat of a previously recommended game. This suggests that the user may have been looking for more variety in their recommendations. In the future, it would be beneficial to avoid repeating recommendations and instead focus on providing a diverse range of games across different genres. |

Specifically, in the example, the user's disengagement is identified as stemming from the repetitive recommendation of identical items, and the guiding principle for future recommendations emphasizes prioritizing diversity. Neither aspect involves specific items.
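A minimal sketch of one macro-learning step, assuming a generic `llm.generate` text-completion wrapper (our assumption, not a specific API):

```python
def reflect_and_update(llm, reflector_prompt, episode_text, planner_memory):
    """Generate a reflection from a finished episode (Eq. (3)) and inject it
    into the Planner memory M_P (Eq. (7)). `planner_memory` is a plain list."""
    reflection = llm.generate(reflector_prompt + "\n" + episode_text)
    planner_memory.append(reflection)
    return reflection
```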
4.1.2 Planner. The Planner module is designed to generate forward-looking plans and decompose a high-level plan into sub-plans, expressed in output thoughts that guide the Actor in executing actions. The Planner is implemented as a frozen LLM instance equipped with a memory library $\mathcal{M}_P$ that stores past reflections for reference. At each step $n$ of a new episode, the Planner utilizes the historical trajectory $\mathcal{H}^{1 \cdots n-1}$ and the current state $s_n$ from the environment as input to generate the thought $t_n$ with reflections obtained from the memory:

$$t_n = \mathrm{Planner}(\mathcal{H}^{1 \cdots n-1}, s_n; \mathcal{M}_P), \quad (4)$$

where $\mathcal{M}_P$ is a set of past episode reflections. Formally, we have $\mathcal{M}_P = \{\ell_m \mid m = 1, 2, \ldots\}$. When starting a new interaction process, i.e., a new episode, multiple relevant reflections $\ell^K_{\mathcal{M}_P}$ are retrieved from the memory library $\mathcal{M}_P$ as guidance for generating new thoughts. For the following steps in the episode, we use the same reflections together with the other inputs to prompt the LLM to generate thoughts. We next introduce these parts.

• Reflection retrieval. To ensure that the retrieved reflections are helpful for planning, we select the $K$ reflections with minimal distance to the current planning process. Taking the initial state $s_1$ to represent the process for distance computation, we have

$$\ell^K_{\mathcal{M}_P} = \{\ell \mid \mathrm{rank}(d(\ell, s_1)) < K, \ \ell \in \mathcal{M}_P\}, \quad (5)$$

where $d(\cdot, \cdot)$ is the Euclidean distance between the two encoded texts, implemented with Facebook AI Similarity Search (FAISS) [24], a library that allows us to quickly search for similar documents, and $\mathrm{rank}(\cdot)$ returns the rank of a value in ascending order.

• Thought generation. We leverage the macro-level guidance from the memory to generate a thought. For each input $(\mathcal{H}^{1 \cdots n-1}, s_n)$, we sample a thought from the LLM policy as follows:

$$t_n \sim \mathrm{LLM}(\mathcal{P}_P, \ell^K_{\mathcal{M}_P}, \mathcal{H}^{1 \cdots n-1}, s_n). \quad (6)$$

Here, $t_n$ is a sample from the Planner policy, and the arguments of $\mathrm{LLM}(\cdot)$ represent the prompt input to the LLM, including the task instruction of the Planner $\mathcal{P}_P$ (with few-shot examples), the retrieved reflections $\ell^K_{\mathcal{M}_P}$, the state $s_n$, and the historical trajectory $\mathcal{H}^{1 \cdots n-1}$ of the current episode.

Table 2 presents an example of our input prompt template and two representative thoughts, encapsulating common outputs of generated thoughts.

Table 2: Example of the input and output for the Planner module.

| Input |
|---|
| Instruction: Solve a recommendation task with interleaving Thought, Action, and Observation steps. Thought can reason about the current situation and current user interest. Your goal is to meet the user's interest as much as possible and make recommendations to users as many times as possible. Note that if the user is not satisfied with your recommendations, he will quit and not accept new recommendations. You may take as many steps as necessary. Here are some examples: (END OF EXAMPLES) Reflection: {Reflections $\ell^K_{\mathcal{M}_P}$} {Historical interaction sequence $\mathcal{H}^{1 \cdots n-1}$} |

| Output: Thought |
|---|
| Case 1: "The user seems to enjoy a mix of Action and Independent video games. They also seem to appreciate Adventure games. I would first recommend the user their favorite action games, and then recommend some other niche genre games that they like." |
| Case 2: "The user seems to be satisfied with the recommendations so far. Following the previous plan, I should recommend some other niche genre games that they like, such as RPG games." |

In the input prompt, we integrate historical interaction sequences and reflections, prompting the LLM to generate appropriate thoughts for guiding subsequent actions. In these examples, we can observe some interesting properties of the generated thoughts. In thought example 1, the LLM analyzes the user's interests and decomposes the multi-round recommendation task into distinct sub-plans. By leveraging its planning capabilities, the LLM can generate suggestions extending beyond immediate choices, considering their potential long-term impact on user satisfaction. This enables the method to take various factors into account to optimize users' long-term engagement and satisfaction. In thought example 2, the LLM adheres to the previous plan, maintaining the consistency and continuity of the recommendation strategy.
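As an illustration of the retrieval in Eq. (5), here is a small sketch using FAISS, the library the paper names for similarity search; `encode`, standing for any text encoder producing fixed-size vectors, is our assumption.

```python
import numpy as np
import faiss  # similarity-search library used by the paper for retrieval

def retrieve_reflections(encode, memory, initial_state, k=2):
    """Return the K stored reflections whose encodings are closest (in
    Euclidean distance) to the encoding of the initial state s_1, as in
    Eq. (5). `memory` is a list of reflection strings."""
    if not memory:
        return []
    vectors = np.stack([encode(r) for r in memory]).astype("float32")
    index = faiss.IndexFlatL2(vectors.shape[1])
    index.add(vectors)
    query = encode(initial_state).astype("float32")[None, :]
    _, idx = index.search(query, min(k, len(memory)))
    return [memory[i] for i in idx[0]]
```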
4.1.3 Update. Macro-learning involves updating the Planner during training. After an episode is completed, we update the Planner module by injecting the new reflections for this episode into its memory. The Planner memory update can be formulated as

$$\mathcal{M}_P \leftarrow \ell_c, \quad (7)$$

where $\ell_c$ denotes the reflections of the complete episode.

4.2 Micro-Learning

Micro-learning refers to a process in which the Actor grounds thoughts into actions executable in the environment and the Critic provides evaluations of these actions. By updating the policy of the Actor based on the feedback of the Critic, and updating the policy of the Critic based on the feedback of the environment, the Actor and Critic learn to provide personalized recommendations in specific situations. The learning mechanism is similar to the Planner-Reflector pair but operates at a more granular level, i.e., directly considering the recommendation of items. In essence, the micro-learning process bears analogies to the Advantage Actor-Critic (A2C) algorithm [33]. In the following, we first introduce the details of the Actor and Critic modules and then present the procedure of the micro-learning process.

4.2.1 Actor. The Actor module aims to customize high-level plans into executable actions for each user. As illustrated in Figure 1, similar to the Planner module, we implement it as an LLM instance equipped with a memory $\mathcal{M}_A$ storing micro-level experiences. Additionally, considering that some knowledge is valuable for personalization but challenging for LLMs to handle [2], we add a tool library, denoted as $T_l$, to access such knowledge. At each step $n$ of an episode, the Actor utilizes the historical trajectory $\mathcal{H}^{1 \cdots n-1}$, the current state $s_n$, and the corresponding thought $t_n$ from the Planner module as inputs to generate an executable action $a_n$ with knowledge obtained from the memory and the tool library:

$$a_n = \mathrm{Actor}(\mathcal{H}^{1 \cdots n-1}, s_n, t_n; \mathcal{M}_A, T_l).$$

Here, the memory $\mathcal{M}_A$ is a set of micro-level experiences, where the $m$-th experience is a previous interaction record consisting of three factors: state $s_m$, action $a_m$, and corresponding value $v_m$. Formally, we have $\mathcal{M}_A = \{(s_m, a_m, v_m) \mid m = 1, 2, \ldots\}$.
Upon receiving these inputs, each generation process comprises three operations: 1) retrieving valuable experiences from the memory $\mathcal{M}_A$, 2) using the tools to gather valuable statistical information about the current state, and 3) integrating the results of the first two steps with the other inputs to prompt the LLM to generate an action. We next elaborate on these operations.

• Retrieval. Similar to the retrieval operation in the Planner module, we rely on the similarity between an experience and the input to select valuable experiences from the memory. Specifically, we use the distance between the state of an experience and the input state to measure similarity, and we select all experiences with distances smaller than a threshold $\tau_A$:

$$\Psi^n_A = \{(s_m, a_m, v_m) \mid d(s_m, s_n) < \tau_A, \ s_m \in \mathcal{M}_A\}, \quad (8)$$

where $\Psi^n_A$ denotes the retrieved results and $d(\cdot, \cdot)$ is the same distance as in Equation (5).

• Tool analysis. We utilize the tools in the tool library $T_l$ to analyze the user's interaction history, extracting valuable information that is challenging for the LLM to handle. In this study, we primarily focus on the Category Analysis Tool. At the $n$-th step, given the state $s_n$, the tool can identify the list of categories associated with each legal action and conduct statistical analysis on the user's viewing history. Formally,

$$O_n = T_l(s_n), \quad (9)$$

where $O_n$ denotes the tool output in text format. The methodology described here can be adapted to various other tools.
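A minimal sketch of what such a Category Analysis Tool could look like; the `history` and `item_categories` interfaces are our illustrative assumptions, not the paper's exact design.

```python
from collections import Counter

def category_analysis_tool(history, item_categories):
    """Turn the user's viewing history into a textual category summary O_n
    (Eq. (9)). `history` is a list of item ids; `item_categories` maps an
    item id to its list of categories."""
    counts = Counter(cat for item in history for cat in item_categories[item])
    lines = [f"- {cat}: {n} watched items" for cat, n in counts.most_common()]
    return "Category statistics of the user's history:\n" + "\n".join(lines)
```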
• Action generation. We leverage the guidance from the Planner module and the micro-level knowledge obtained from the memory and the tool to prompt the LLM of the Actor module to generate an action. For the input $(\mathcal{H}^{1 \cdots n-1}, s_n, t_n)$, with $t_n$ representing the thought, once we obtain the corresponding retrieval results $\Psi^n_A$ and tool analysis result $O_n$, we can sample an action $a'_n$ from the LLM policy as follows:

$$a'_n \sim \mathrm{LLM}(\mathcal{P}_A, \Psi^n_A, O_n, \mathcal{H}^{1 \cdots n-1}, s_n, t_n), \quad (10)$$

where $\mathcal{P}_A$ represents the task instruction for the Actor. Note that the temperature coefficient of the LLM should be set to a non-zero value, ensuring non-deterministic generation.

• Item grounding. The final action should be a specific item within the candidate pool, yet the LLM may generate an item $a'_n$ that is not included in the pool. To address this issue, we adopt the grounding strategy from [2] and map $a'_n$ to the actual item with the highest similarity. Formally, the final action is obtained as follows:

$$a_n = \arg\min_{a \in \mathcal{I}} \mathrm{sim}(e_a, e_{a'_n}), \quad \mathrm{sim}(e_a, e_{a'_n}) := \|e_a - e_{a'_n}\|, \quad (11)$$

where $e_a$ represents the embedding of the action (item) $a$ encoded by Llama2-7b [42], $\mathrm{sim}(\cdot, \cdot)$ denotes the embedding similarity measured by the L2 distance, and $\|\cdot\|$ signifies the $L_2$ norm.
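A small sketch of the grounding step in Eq. (11); `embed` stands in for the Llama2-7b encoder mentioned above, and in practice the catalog embeddings would be precomputed rather than recomputed per call.

```python
import numpy as np

def ground_action(generated_item, catalog, embed):
    """Map the free-form generation a'_n to the catalog item with the
    smallest L2 embedding distance, as in Eq. (11)."""
    e_gen = embed(generated_item)
    dists = np.array([np.linalg.norm(embed(item) - e_gen) for item in catalog])
    return catalog[int(np.argmin(dists))]
```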
4.2.2 Critic. The Critic module is an LLM-based evaluator, providing evaluative feedback on the long-term goals for the actions generated by the Actor module, which helps update the policy of the Actor. The Critic module also contains a memory $\mathcal{M}_C$ to store previous experiences. Inspired by the A2C algorithm, we use the advantage value $v_n$ of action $a_n$ in the given state $s_n$ as the measurement. In particular, the Critic takes the state $s_n$, the action $a_n$, and the history trajectory $\mathcal{H}^{1 \cdots n-1}$ as inputs and outputs the advantage $v_n$ with the experiences in $\mathcal{M}_C$ as references:

$$v_n = \mathrm{Critic}(s_n, a_n; \mathcal{M}_C). \quad (12)$$

To compute advantage values, similar to A2C, we first estimate the state-value function $V(s_n)$ and then use the advantage function [33] to determine the advantage value.

• Estimating the state-value. The function $V(s_n)$ estimates the value of being in state $s_n$ when following the Actor policy. We model this function directly with the LLM of the Critic module. In particular, we leverage in-context learning with few-shot examples and previous estimations in the memory $\mathcal{M}_C = \{(s_m, V(s_m)) \mid m = 1, 2, \ldots\}$ to predict the value of a given state $s_n$. Formally, we have

$$V(s_n) = \mathrm{LLM}(\mathcal{P}_C, \Phi^n_{\mathcal{M}_C}, \mathcal{H}^{1 \cdots n-1}, s_n), \quad (13)$$

where $\mathcal{P}_C$ represents the task prompt (including few-shot examples), and $\Phi^n_{\mathcal{M}_C}$ denotes the experiences selected from $\mathcal{M}_C$:

$$\Phi^n_{\mathcal{M}_C} = \{(s_m, V(s_m)) \mid d(s_m, s_n) < \tau_C, \ s_m \in \mathcal{M}_C\}, \quad (14)$$

where $\tau_C$ is a threshold and $d(\cdot, \cdot)$ is the same distance function as in Equation (5).

• Computing the advantage value. We next use the advantage function to determine the advantage value $v_n$ of action $a_n$ given the state $s_n$ at the $n$-th step:

$$v_n = \sigma(A(s_n, a_n)), \quad A(s_n, a_n) = r_n + \gamma \cdot V(s_{n+1}) - V(s_n), \quad (15)$$

where $A(\cdot, \cdot)$ is the commonly used advantage function, $r_n$ denotes the environmental reward at step $n$, and $s_{n+1}$ denotes the next-step state after taking action $a_n$ at state $s_n$. For the function $\sigma$, we set $\sigma(x) = 1$ if $x \geq 0$ and $\sigma(x) = 0$ otherwise. Note that this approach mitigates the issue of high-variance estimation of the Q-value in previous work [33] (cf. Section 5.4).

[Figure 2: The memory-based learning methods and policy gradient based methods have a comparable impact on the Actor policy.]

4.2.3 Update. Micro-learning involves updating both the Actor and the Critic during their interaction. At each step $n$, after obtaining the advantage value $v_n$ for the action $a_n$, we update the two modules by injecting the new experience into their memories.

• The Critic memory update can be formulated as

$$\mathcal{M}_C \leftarrow (s_n, r_n + \gamma \cdot V(s_{n+1})), \quad (16)$$

where $r_n + \gamma \cdot V(s_{n+1})$ can be considered a more accurate estimation of $V(s_n)$ [33].

• The Actor memory update can be formulated as

$$\mathcal{M}_A \leftarrow (s_n, v_n). \quad (17)$$

The updated memories incorporate new experiences, enhancing the next step of processing.
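Putting Eqs. (15)-(17) together, one micro-learning step can be sketched as follows, with `value_fn` standing for the LLM-based estimator of Eq. (13) and the memories kept as plain lists (our simplification); following the definition of $\mathcal{M}_A$, the Actor memory stores the full $(s, a, v)$ triple.

```python
def micro_update(value_fn, actor_memory, critic_memory,
                 s_n, a_n, r_n, s_next, gamma=0.5):
    """One micro-learning step of BiLLP (Eqs. (15)-(17))."""
    v_s, v_next = value_fn(s_n), value_fn(s_next)
    advantage = r_n + gamma * v_next - v_s        # A(s_n, a_n), Eq. (15)
    v_n = 1.0 if advantage >= 0 else 0.0          # sigma(A), Eq. (15)
    critic_memory.append((s_n, r_n + gamma * v_next))  # Eq. (16)
    actor_memory.append((s_n, a_n, v_n))               # Eq. (17)
    return v_n
```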
4.3 Discussion

Next, we compare the policy update of our BiLLP framework with traditional gradient-based policy updates to illustrate why our approach, based on in-context learning, can learn planning without gradient updates. As shown in Figure 2, in traditional methods, when a favorable action is identified for a state (determined, for example, by Q-values in the REINFORCE algorithm [34] or by the advantage function in A2C [33]), the purpose of the gradient update is to adjust the policy so as to increase the probability of sampling that specific action in the given state. In contrast, in our method, although no gradient updates are performed, the specific state and action are recorded in an external memory. When a similar state is encountered again, the probability of retrieving that state-action pair from the memory increases, which in turn increases the probability of executing that specific action in that state. This achieves an effect similar to gradient updates and is the underlying learning principle of our BiLLP framework.

5 EXPERIMENTS

In this section, we evaluate the proposed BiLLP framework in the interactive recommendation setting. Our experiments aim to address the following questions:

• RQ1: How does BiLLP perform compared to state-of-the-art RL-based methods and other LLM frameworks in the interactive recommendation setting?
• RQ2: To what extent can the macro-learning and micro-learning mechanisms improve the LLMs' planning ability?
• RQ3: Can the proposed Critic module effectively estimate the state-value function to facilitate the update of the Actor module?
• RQ4: Is the proposed BiLLP framework robust to different recommendation environments and base LLM models?

5.1 Experiments Setup

We introduce the experimental settings with regard to the simulated experiments and baselines, which are implemented based on the EasyRL4Rec library [56] (https://github.com/chongminggao/easyrl4rec).

5.1.1 Recommendation Experiments. In the interactive recommendation setting, we are interested in examining the potential of models to mitigate the issue of filter bubbles and maximize users' long-term engagement. Conducting direct online experiments for model learning and testing can be prohibitively costly. As a result, following [15], we resort to creating simulated interactive environments from high-quality logs.

• Steam [25] contains reviews and game information, compiling titles and genres of games. We consider users who play a game for more than 3 hours to have given a rating of 5, and others a rating of 2. We filter out users and items with fewer than 5 interactions in the log.
• Amazon-Book [35] is the "book" subset of the well-known Amazon Product Review dataset (https://cseweb.ucsd.edu/~jmcauley/datasets/amazon_v2/). It compiles titles and genres of books from Amazon, collected between 1996 and 2018, with review scores ranging from 1 to 5. We filter out users and items with fewer than 90 interactions in the log.

To better reflect the issue of filter bubbles and simulate real-world recommendation scenarios, we follow [15, 17, 54] and introduce a quit mechanism. The interaction terminates if either of the following conditions is met:

• The similarity between a recommended item and the items in the recent recommendation list (with a window size of $W$) is below a predefined threshold $\beta$.
• The online reward $r$ of a recommended item is less than 2.

In this sense, a model that effectively captures users' interests while avoiding the continuous recommendation of similar items that reinforce the filter-bubble phenomenon is crucial for achieving a longer interaction trajectory and maximizing cumulative rewards.
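A minimal sketch of this quit mechanism, assuming item embeddings compared with the squared L2 distance of Eq. (19) and reading "similarity below the threshold" as that distance falling below it (our reading of the definitions above):

```python
import numpy as np

def should_quit(recent_embs, new_emb, reward, window=4, threshold=50.0):
    """End the simulated session if the new item is too close to any of the
    last `window` recommendations, or if its estimated reward (Eq. (18)) is
    below 2."""
    for e in recent_embs[-window:]:
        if float(np.sum((e - new_emb) ** 2)) < threshold:
            return True
    return reward < 2
```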
To estimate the online reward, we first split the dataset evenly into training and test sets in chronological order. For each set $\mathcal{D} \in \{\mathcal{D}_{train}, \mathcal{D}_{test}\}$, we utilize the DeepFM model [18] to fit the data and obtain vector representations $e^{\mathcal{D}}_u$ for users and $e^{\mathcal{D}}_i$ for items. We can then calculate the online rewards:

$$r^{\mathcal{D}}_{u,i} = \mathrm{DeepFM}(e^{\mathcal{D}}_u, e^{\mathcal{D}}_i), \quad u \in \mathcal{U}, \ i \in \mathcal{I}, \quad (18)$$

and the similarity between two items:

$$\mathrm{sim}(e^{\mathcal{D}}_i, e^{\mathcal{D}}_j) = |e^{\mathcal{D}}_i - e^{\mathcal{D}}_j|^2, \quad i, j \in \mathcal{I}. \quad (19)$$

It is noteworthy that we establish separate training and test environments for each dataset in order to simulate real-world scenarios in which user interests may evolve between online training and model deployment. The simulated environments then play the same role as online users: we train the model in the training environment and evaluate it in the test environment, following the process shown in Figure 1. The statistics of the datasets are given in Table 4.

Table 3: Average results of all methods in two environments (Bold: Best, Underline: Runner-up).

| Method | Steam Len | Steam Reach | Steam Rtraj | Amazon Len | Amazon Reach | Amazon Rtraj |
|---|---|---|---|---|---|---|
| SQN | 2.183±0.177 | 3.130±0.050 | 6.837±0.517 | 4.773±0.059 | 4.303±0.017 | 20.570±0.245 |
| CRR | 4.407±0.088 | 3.263±0.427 | 14.377±1.658 | 3.923±0.162 | 4.537±0.103 | 17.833±1.129 |
| BCQ | 4.720±0.343 | 3.997±0.068 | 18.873±1.092 | 4.847±0.721 | 4.367±0.053 | 21.150±2.893 |
| CQL | 5.853±0.232 | 3.743±0.147 | 21.907±0.299 | 2.280±0.185 | 4.497±0.039 | 10.263±0.882 |
| DQN | 4.543±0.693 | 4.500±0.069 | 20.523±3.618 | 4.647±0.498 | 4.290±0.083 | 19.923±1.909 |
| A2C | 9.647±0.848 | 4.367±0.069 | 42.180±3.937 | 7.873±0.310 | 4.497±0.026 | 35.437±1.453 |
| DORL | 9.467±0.862 | 4.033±0.098 | 38.300±4.173 | 7.507±0.174 | 4.510±0.014 | 33.887±0.655 |
| ActOnly | 5.567±0.160 | 4.537±0.021 | 25.250±0.637 | 6.383±0.176 | 4.490±0.008 | 28.660±0.761 |
| ReAct | 11.630±0.741 | 4.559±0.047 | 52.990±2.925 | 7.733±0.450 | 4.603±0.033 | 35.603±1.806 |
| Reflexion | 12.690±1.976 | 4.523±0.026 | 57.423±8.734 | 8.700±0.535 | 4.670±0.073 | 40.670±2.954 |
| BiLLP | 15.367±0.119 | 4.503±0.069 | 69.193±1.590 | 9.413±0.190 | 4.507±0.012 | 42.443±0.817 |

Table 4: Statistics of experiment datasets.

| Dataset | #Users | #Items | #Train | #Test |
|---|---|---|---|---|
| Steam | 6,012 | 190,365 | 1,654,303 | 958,452 |
| Amazon | 3,109 | 13,864 | 339,701 | 137,948 |

5.1.2 Evaluation Metrics. We utilize three metrics to evaluate model performance in the interactive recommendation setting: the trajectory length (Len), the average single-round reward (Reach), and the cumulative reward of the whole trajectory (Rtraj). Longer trajectories and higher cumulative rewards demonstrate a model's ability to maximize long-term engagement. However, it is important to note that a higher average reward is not necessarily better: an excessively high average reward may indicate a model's overemphasis on immediate responses.
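Under one plausible reading of these definitions, the three metrics can be computed from per-episode reward lists as follows (function and variable names are ours):

```python
def session_metrics(episodes):
    """Len, Reach and Rtraj over evaluation episodes, where each episode is
    the list of per-round rewards it produced."""
    n = len(episodes)
    lengths = [len(ep) for ep in episodes]
    returns = [sum(ep) for ep in episodes]
    len_avg = sum(lengths) / n            # Len: mean trajectory length
    reach = sum(returns) / sum(lengths)   # Reach: mean single-round reward
    rtraj = sum(returns) / n              # Rtraj: mean cumulative reward
    return len_avg, reach, rtraj
```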
5.1.3 Baselines. To comprehensively and fairly evaluate our proposed BiLLP, we choose representative RL-based and LLM-based methods as baselines. For the RL-based methods, we choose seven representative baselines, including the state-of-the-art (SOTA) method for optimizing long-term engagement to mitigate filter-bubble problems:

• DQN, or Deep Q-Networks [34], a deep reinforcement learning algorithm that combines deep neural networks with the Q-learning algorithm.
• SQN, or Self-Supervised Q-learning [53], which consists of two output layers, a cross-entropy loss head and an RL head; the RL head generates the final recommendations.
• BCQ, or Batch-Constrained deep Q-learning [14], a modified version of conventional deep Q-learning designed for batch reinforcement learning. We use the discrete-action variant [13], which discards uncertain data and updates the policy solely on high-confidence data.
• CQL, or Conservative Q-Learning [27], a model-free RL method that adds a Q-value regularizer on top of an actor-critic policy.
• CRR, or Critic Regularized Regression [46], a model-free RL method that learns the policy by avoiding out-of-distribution actions.
• A2C, or Advantage Actor-Critic [33], which improves the Actor-Critic algorithm and stabilizes learning by using the advantage function, instead of the action-value function, as the Critic.
• DORL, or Debiased model-based Offline RL [15], which adds a penalty term that relaxes pessimism on high-entropy states to alleviate the Matthew effect in offline RL-based recommendation. This is the SOTA method for maximizing users' long-term engagement to alleviate filter-bubble issues.

To ensure a fair comparison, we also implement three LLM-based baselines using the same LLM backbone as BiLLP:

• ActOnly, a baseline that recommends items to users according to instruction prompts, without thoughts or planning.
• ReAct [55], which utilizes LLMs to generate both reasoning traces and task-specific actions in an interleaved manner, allowing greater synergy between reasoning and acting.
• Reflexion [40], which verbally reflects on task feedback signals and maintains its own reflective text in an episodic memory buffer to induce better decision-making in subsequent trials.

5.1.4 Implementation Details. For a fair comparison, all RL-based methods are trained with 100,000 episodes of data, and all LLM-based methods are trained with 100 episodes of data. For the model-based RL method DORL, we use the same DeepFM model as [15]. For the Reflexion and BiLLP methods, we set the number of most similar reflections to $K = 2$. For the BiLLP method, we set the similarity thresholds $\tau_A = 0.01$ and $\tau_C = 0.1$, and the discount factor $\gamma$ to 0.5. All methods in both environments are evaluated with the quit parameters $W = 4$, $\beta_{Steam} = 50$, and $\beta_{Amazon} = 15$; the maximum number of rounds is set to 100. For all RL-based methods, we leverage DeepFM [18] as the backbone following [15], and for all LLM-based methods, we utilize "gpt-3.5-turbo-16k" provided by OpenAI as the LLM backbone for its strong long-context modeling ability. The temperature is set to 0.5 for all experiments.
For all the RL-based methods, we leverage DeepFM [18] as the backbone following [15], and for all the LLM-based methods, we utilize the \u201cgpt-3.5-turbo-16k\u201d provided by OpenAI as the LLM backbone for its strong long context modeling ability. And the temperature is set to 0.5 for all experiments. 5.2 Main Results Comparison (RQ1) After training, we evaluate all methods with 100 episodes (i.e., interaction trajectories) in two interactive environments. The results are shown in Table 3, where each result in the table is averaged over three random experiments with distinct seeds for robustness and reliability. From the results, we observe that: \u2022 BiLLP consistently achieves the best long-term performance (Len and Rtraj) over RL-based methods and LLM-based baselines across \fSIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Trovato and Tobin, et al. 1 2 3 4 5 Item groups with decreasing item frequency 0.0 0.5 1.0 Proportion Steam A2C Ours 1 2 3 4 5 Item groups with decreasing item frequency 0.0 0.2 0.4 0.6 Proportion Amazon A2C Ours Figure 3: The frequency distribution of items recommended by our method and A2C in the two environments. two datasets. This demonstrates the effectiveness of our proposed framework and its ability to stimulate and adapt the long-term planning capacity of LLMs. For single-round reward Reach, BiLLP obtains a relatively higher score, which indicates it successfully captures user interests while avoiding excessive emphasis on immediate responses. \u2022 The ActOnly method, which only utilizes LLMs to generate actions (recommendations), exhibits inferior performance compared to certain RL-based methods and LLM-based methods that incorporate planning. Drawing upon this, we can infer that an explicit thinking and planning process is crucial for enhancing the planning capabilities of LLMs. \u2022 The ReAct method, which integrates the thinking process and action process, still performs worse than Reflexion and BiLLP. This underscores the significance of self-improvement in LLMs, in order to improve their planning abilities for long-term recommendation tasks. In addition to the overall performance comparison, we conduct an in-depth analysis of the recommended items for RL-based method A2C and LLM-based method BiLLP. We first calculate the items\u2019 popularity (occurrence frequencies) both in the training set and test set. Subsequently, we evenly divide the items into five groups with decreasing popularity: 1, 2, 3, 4, 5. We analyze the proportion of items belonging to each group among the recommended items generated by A2C and BiLLP, where the results are shown in Figure 3. From the figure, we observe that: \u2022 RL-based method A2C tends to overfit on popularity items and lack planning capabilities on long-tail items. \u2022 In contrast, BiLLP exhibits better planning capabilities on longtail items, which could effectively alleviate the issue of filter bubbles and maximize long-term engagement. 5.3 Ablation Study (RQ2) In this subsection, we conduct ablation studies to evaluate the effect of the two learning mechanisms. Concretely, w/o Macro refers to a variant of the BiLLP framework that does not use the reflective text to enhance its Planner module, and w/o Micro refers to a variant that does not use the micro-learning experience to enhance its Actor module. 
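The two variants amount to switching off one learning signal each; a minimal, purely hypothetical configuration view of the ablation (flag names are ours, not from the paper):

```python
# Illustrative ablation switches for the two learning mechanisms.
ABLATIONS = {
    "BiLLP":     dict(use_macro_reflections=True,  use_micro_experiences=True),
    "w/o Macro": dict(use_macro_reflections=False, use_micro_experiences=True),   # Planner gets no reflective text
    "w/o Micro": dict(use_macro_reflections=True,  use_micro_experiences=False),  # Actor gets no micro-learning experience
}
```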
From Table 5, we can observe that: \u2022 The absence of either of the two learning mechanisms would result in a decline in performance, thereby indicating that both learning mechanisms have contributed to the enhancement of long-term engagement. Table 5: Average results of all methods in the two environments (Bold: Best). Methods Steam Len Reach Rtraj w/o Macro 14.363 \u00b1 0.467 4.523 \u00b1 0.012 64.960 \u00b1 2.011 w/o Micro 14.270 \u00b1 0.190 4.535 \u00b1 0.005 64.720 \u00b1 0.920 BiLLP 15.367 \u00b1 0.119 4.503 \u00b1 0.069 69.193 \u00b1 1.590 Methods Amazon Len Reach Rtraj w/o Macro 8.947 \u00b1 0.480 4.530 \u00b1 0.057 40.547 \u00b1 2.622 w/o Micro 8.800 \u00b1 0.432 4.707 \u00b1 0.026 41.420 \u00b1 2.003 BiLLP 9.413 \u00b1 0.190 4.507 \u00b1 0.012 42.443 \u00b1 0.817 1 2 3 4 5 6 7 8 9 10 State s 2 4 6 8 10 State-value function V (s) Monte Carlo Estimation LLM Estimation Figure 4: The memory-based in-context learning methods and policy gradient-based methods have a comparable impact on the Actor policy. \u2022 Based on the experimental details presented in Section 5.1.4, the aforementioned improvements are achieved using only 100 episodes of data for both learning mechanisms. This suggests the high efficiency of in-context learning compared to fine-tuning and training from scratch. 5.4 Effects of Critic Module (RQ3) In this subsection, our objective is to demonstrate the effectiveness of the Critic module in estimating the state-value function, denoted as \ud835\udc49\ud835\udf0b(\ud835\udc60), which is crucial for facilitating the update process of the Actor module. The state-value function \ud835\udc49\ud835\udf0b(\ud835\udc60) gives the expected cumulative discounted reward if we start from state \ud835\udc60and act according to the policy. To get an accurate and unbiased estimation of \ud835\udc49\ud835\udf0b(\ud835\udc60), for a specific state \ud835\udc60, we sample 1000 complete trajectories according to the Actor policy and calculate the cumulative discounted reward for each trajectory. Figure 4 illustrates the distribution of these reward samples, along with their mean value. The mean value serves as an accurate and unbiased estimation of the state-value function \ud835\udc49\ud835\udf0b(\ud835\udc60). It is worth noting that prior studies [4, 57], have utilized a single trajectory\u2019s cumulative discounted reward to estimate either the state-value function \ud835\udc49\ud835\udf0b(\ud835\udc60) or the state-action value function \ud835\udc44\ud835\udf0b(\ud835\udc60,\ud835\udc4e), which suffers from the issue of high variance estimation. \fLarge Language Models are Learnable Planners for Long-Term Recommendation SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Table 6: Average Results of all methods in the two environments (Bold: Best). Methods Steam Len Reach Rtraj GPT-4-32k 25.400 \u00b1 2.800 4.635 \u00b1 0.115 118.235 \u00b1 15.915 GPT-3.5-16k 15.367 \u00b1 0.119 4.503 \u00b1 0.069 69.193 \u00b1 1.590 Llama-2-7B 13.800 \u00b1 1.105 4.610 \u00b1 0.065 63.767 \u00b1 6.015 Methods Amazon Len Reach Rtraj GPT-4-32k 12.450 \u00b1 1.250 4.580 \u00b1 0.070 57.180 \u00b1 6.570 GPT-3.5-16k 9.413 \u00b1 0.190 4.507 \u00b1 0.012 42.443 \u00b1 0.817 Llama-2-7B 8.100 \u00b1 1.512 4.603 \u00b1 0.054 37.300 \u00b1 6.895 In contrast to these approaches, we leverage the Critic module to estimate the state-value function \ud835\udc49\ud835\udf0b(\ud835\udc60). To evaluate our estimation, we repeat the estimation 100 times. 
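The unbiased Monte Carlo reference described above can be written in a few lines (a sketch; `env.rollout` is a hypothetical helper that returns one complete reward sequence under the given policy):

```python
import numpy as np

def mc_state_value(env, policy, state, n_traj=1000, gamma=0.5):
    """Monte Carlo estimate of V^pi(s): roll out n_traj complete trajectories
    from `state` under `policy` and average their cumulative discounted
    rewards. The mean is an unbiased estimate of V^pi(s); the spread over
    single trajectories shows why one-rollout estimates have high variance."""
    returns = []
    for _ in range(n_traj):
        rewards = env.rollout(policy, state)  # one complete trajectory
        returns.append(sum(r * gamma ** t for t, r in enumerate(rewards)))
    return float(np.mean(returns)), float(np.std(returns))
```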
The resulting estimations, as well as their distribution and mean value, are also depicted in Figure 4. Based on the analysis of ten different states, it can be inferred that the utilization of the Critic module effectively mitigates estimation variance, despite the presence of a small bias in the estimation. 5.5 Robustness of the Framework (RQ4) 5.5.1 Results with Different Environments. To validate that BiLLP can work robustly in different environment settings, we vary the window size \ud835\udc4ain the exit mechanism and fix the similarity threshold \ud835\udefdto simulate different effects of filter bubbles on user disengagement. The evaluation results are shown in Figure 5, where all results are averaged over three random experiments with distinct seeds. We visualize all three metrics and observe that: \u2022 As the window size \ud835\udc4aincreases, the performance of the trajectory length and the cumulative reward metrics decrease for all methods. This implies that when users are more susceptible to the influence of filter bubbles, the model faces greater challenges in learning to improve users\u2019 long-term engagement. \u2022 BiLLP outperforms all baselines in terms of both the trajectory length and the cumulative reward metrics, which demonstrates the robustness of BiLLP in different environments. \u2022 BiLLP obtains a relatively higher score in terms of the singleround in different environments, which indicates it successfully captures user interests while avoiding excessive emphasis on immediate responses. 5.5.2 Results with Different Base Models. To validate the robustness of the BiLLP framework across various base models, we conduct additional experiments with other different LLM backbones: \u201cgpt-4-32k\u201d and \u201cLlama-2-7b\u201d. The results are presented in Table 6. From the table, several noteworthy observations can be made: \u2022 BiLLP showcases superior performance compared to traditional RL-based methods with different base models, as demonstrated in Table 6. This indicates that our framework is robust across different LLMs. \u2022 The performance of BiLLP based on \u201cGPT-3.5-16k\u201d is superior to that based on \u201cLlama-2-7B\u201d, while inferior to that based on \u201cGPT-4-32k\u201d. This observation suggests a positive correlation 1 2 3 4 5 6 7 8 9 10 0 25 50 75 100 125 Cumulative reward Steam CRR A2C DQN DORL ActOnly Ours 1 2 3 4 5 6 7 8 9 10 0 25 50 75 100 Amazon 1 2 3 4 5 6 7 8 9 10 0 5 10 15 20 25 30 Length of trajectory 1 2 3 4 5 6 7 8 9 10 0 5 10 15 20 1 2 3 4 5 6 7 8 9 10 3 3.5 4 4.5 5 Window size \ud835\udc4a Single-round reward 1 2 3 4 5 6 7 8 9 10 4 4.25 4.5 4.75 5 Window size \ud835\udc4a Figure 5: Results under different simulated environments. between the strength of the LLM backbone and the performance enhancement of BiLLP. 6" + }, + { + "url": "http://arxiv.org/abs/2302.03472v2", + "title": "On the Theories Behind Hard Negative Sampling for Recommendation", + "abstract": "Negative sampling has been heavily used to train recommender models on\nlarge-scale data, wherein sampling hard examples usually not only accelerates\nthe convergence but also improves the model accuracy. Nevertheless, the reasons\nfor the effectiveness of Hard Negative Sampling (HNS) have not been revealed\nyet. In this work, we fill the research gap by conducting thorough theoretical\nanalyses on HNS. Firstly, we prove that employing HNS on the Bayesian\nPersonalized Ranking (BPR) learner is equivalent to optimizing One-way Partial\nAUC (OPAUC). 
Concretely, the BPR equipped with Dynamic Negative Sampling (DNS)\nis an exact estimator, while with softmax-based sampling is a soft estimator.\nSecondly, we prove that OPAUC has a stronger connection with Top-K evaluation\nmetrics than AUC and verify it with simulation experiments. These analyses\nestablish the theoretical foundation of HNS in optimizing Top-K recommendation\nperformance for the first time. On these bases, we offer two insightful\nguidelines for effective usage of HNS: 1) the sampling hardness should be\ncontrollable, e.g., via pre-defined hyper-parameters, to adapt to different\nTop-K metrics and datasets; 2) the smaller the $K$ we emphasize in Top-K\nevaluation metrics, the harder the negative samples we should draw. Extensive\nexperiments on three real-world benchmarks verify the two guidelines.", + "authors": "Wentao Shi, Jiawei Chen, Fuli Feng, Jizhi Zhang, Junkang Wu, Chongming Gao, Xiangnan He", + "published": "2023-02-07", + "updated": "2023-02-19", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI" + ], + "main_content": "INTRODUCTION Recommendation systems are essential in addressing information overload by filtering unintended information and have benefited many high-tech companies [7]. Bayesian Personalized Ranking (BPR) [29] is a common choice for learning recommender models from implicit feedback, which randomly draws negative items for the sake of efficiency and approximately optimizes the AUC metric. However, uniformly sampled negative items may not be informative, contributing little to the gradients and the convergence [28, 40]. To overcome this obstacle, researchers have proposed many Hard Negative Sampling (HNS) methods, such as Dynamic Negative Sampling (DNS) [40] and Softmax-based Sampling methods [9, 21, 33]. Superior to uniform sampling, HNS methods oversample arXiv:2302.03472v2 [cs.IR] 19 Feb 2023 \fWWW \u201923, May 1\u20135, 2023, Austin, TX, USA Wentao et al. NDCG@3 AUC 0.531 0.800 1.000 0.800 Case\u00031: Case\u00032: Top3 OPAUC(0.4) 0.200 0.320 Figure 3: Two simple cases have the same overall ranking performance but different top-ranking performance. The ROC curves of two cases have the same AUC but different OPAUC(\ud835\udefd=0.4). high-scored negative items, which are more informative with large gradients and thus accelerate the convergence [8]. While existing work usually attributes the superior performance of HNS to its better convergence, we find that the merits of HNS are beyond this thought. To validate it, we conduct empirical analyses on three widely used datasets in Figure 1. We compare two HNS strategies with a strong baseline named Non-Sampling1 [4] that computes the gradient over the whole data (including all negative items). As such, the Non-Sampling strategy is supposed to converge to a better optimum more stably [1, 5, 6, 17]. Nevertheless, to our surprise, both HNS strategies substantially outperform the NonSampling strategy. It indicates that fast convergence may not be the only justification for the impressive performance of HNS. There must be other reasons for its superior performance, which motivates us to delve into HNS and explore its theoretical foundation. Our findings are twofold: \u2022 Optimizing the BPR loss equipped with HNS is equivalent to optimizing the One-way Partial AUC (OPAUC), whereas the original BPR loss only optimizes AUC. 
OPAUC puts a restriction on the range of false positive rate (FPR) \u2208[0, \ud835\udefd] [13], as shown in Figure 2(b), which emphasizes the ranking of topranked negative items. In contrast, AUC is a special case of OPAUC(\ud835\udefd) with \ud835\udefd= 1, which considers the whole ranking list. Our proof of the equivalence is based on the Distributionally Robust Optimization (DRO) framework [27] (cf. Section 3). \u2022 Compared to AUC, OPAUC has a stronger connection with Top-\ud835\udc3emetrics. To illustrate it, we conduct simulation studies with randomly generated ranking lists, showing that OPAUC exhibits a much higher correlation with Top-\ud835\udc3emetrics like Recall, Precision and NDCG by tuning \ud835\udefd(cf. Figure 6). This is because both OPAUC and Top-\ud835\udc3emetrics care more about the ranking of top-ranked items, as shown in Figure 3. Furthermore, we confirm the correlation through theoretical analysis that Recall@\ud835\udc3eand Precision@\ud835\udc3emetrics could be higher and lower bounded with a function of specific OPAUC(\ud835\udefd), respectively. In short, our analyses reveal that equipping BPR with HNS is equivalent to optimizing the OPAUC, leading to better Top-\ud835\udc3erecommendation performance (cf. Figure 4). Our analyses not only explain the impressive performance of HNS but also shed light on how to perform HNS in recommendation. Given the correspondence between Top-\ud835\udc3eevaluation metrics and OPAUC(\ud835\udefd), we offer two instructive guidelines to ensure the practical effectiveness of 1All compared methods optimize the same loss of BPR. Hard Negative Sampling TopK Evaluation Measures OPAUC Understand Effectiveness Equivalent Relevant Figure 4: The relationship among HNS, OPAUC measure, and Top-\ud835\udc3eevaluation metrics. HNS. First, the sampling hardness should be controllable, e.g., via pre-defined hyper-parameters, to adapt to different Top-\ud835\udc3emetrics and datasets. Second, the smaller the \ud835\udc3ewe emphasize in Top-\ud835\udc3e evaluation metrics, the harder the negative samples we should draw. The main contributions of this paper are summarized as follows: \u2022 We are the first to establish the theoretical foundations for HNS: equipping BPR with DNS is an exact estimator of the OPAUC objective, and with softmax-based sampling is a soft estimator. \u2022 We conduct theoretical analyses, simulation studies, and realworld experiments, to justify the connection between OPAUC and Top-\ud835\udc3emetrics and explain the performance gain of HNS. \u2022 We provide two crucial guidelines on how to perform HNS and adjust sampling hardness. The experiments on real-world datasets validate the rationality of the guidelines. 2 BACKGROUND This section provides the necessary background of Implicit Feedback, Hard Negative Sampling Strategies, One-way Partial Area Under ROC Curve (OPAUC), and Distributionally Robust Optimization (DRO) [27]. DRO is a robust learning framework that we will use in subsequent sections. 2.1 Implicit Feedback The goal of a recommender is to learn a score function \ud835\udc5f(\ud835\udc50,\ud835\udc56|\ud835\udf03) to predict scores of unobserved item \ud835\udc56in context \ud835\udc50and recommend the top-ranked items [1]. A larger predicted score reflects a higher preference for the item \ud835\udc56\u2208I in a context \ud835\udc50\u2208C2. In the implicit feedback setting, we can only observe positive class I+ \ud835\udc50\u2286I in the context \ud835\udc50. 
The remaining $\mathcal{I}^-_c = \mathcal{I} \setminus \mathcal{I}^+_c$ are usually considered as negative items in the context $c$. In personalized ranking algorithms with the BPR loss, the objective function can be formulated as follows: $$\min_{\theta} \sum_{c \in \mathcal{C}} \sum_{i \in \mathcal{I}^+_c} \mathbb{E}_{j \sim P_{ns}(j|c)} \left[ \ell\big(r(c,i|\theta) - r(c,j|\theta)\big) \right], \quad (1)$$ where $\theta$ are the model parameters and $\ell(t)$ is the loss function, often defined as $\log(1 + \exp(-t))$. $P_{ns}(j|c)$ denotes the negative sampling probability that a negative item $j \in \mathcal{I}^-_c$ in the context $c$ is drawn. In BPR [29], each negative item is assigned an equal sampling probability. For HNS strategies, a negative item with a larger predicted score has a higher sampling probability. For ease of understanding, we follow [12] and define the "hardness" of a negative sample as its predicted score, i.e., a negative sample is "harder" than another when its score is larger. In what follows, $r(c,i|\theta)$ is abbreviated as $r_{ci}$ for short. (Footnote 2: In the collaborative filtering setting, a context $c$ denotes a user $u \in \mathcal{U}$; in the sequential recommendation setting, $c$ can be a historical interaction sequence.) 2.2 Hard Negative Sampling Strategies. Different from static samplers such as the uniform and popularity-aware strategies [10], HNS strategies are adaptive to both the context and the recommender model during training. Here we review two widely used HNS strategies. DNS [40] ranks the negative items and oversamples the high-ranked ones (see footnote 3). The sampling probability of DNS is defined as: $$P^{DNS}_{ns}(j|c) = \begin{cases} \frac{1}{M}, & j \in S^{\downarrow}_{\mathcal{I}^-_c}[1, M] \\ 0, & \text{otherwise} \end{cases}, \quad (2)$$ where $S^{\downarrow}_{\mathcal{I}^-_c}[1, M] \subset \mathcal{I}^-_c$ denotes the subset of the top-ranked $M$ negative items, i.e., the negative samples with the top-$M$ largest predicted scores. Remark that the smaller $M$ is, the harder the negative samples drawn. Softmax-based sampling is widely used in adversarial learning [26, 33] and importance sampling [9, 21], which refer to a softmax distribution to assign higher sampling probability to higher-scored items.
The negative sampling probability can be defined as: \ud835\udc43\ud835\udc46\ud835\udc5c\ud835\udc53\ud835\udc61\ud835\udc5a\ud835\udc4e\ud835\udc65 \ud835\udc5b\ud835\udc60 (\ud835\udc57|\ud835\udc50) = exp(\ud835\udc5f\ud835\udc50\ud835\udc57/\ud835\udf0f) \u00cd \ud835\udc58\u2208I\u2212 \ud835\udc50 exp(\ud835\udc5f\ud835\udc50\ud835\udc58/\ud835\udf0f) = exp((\ud835\udc5f\ud835\udc50\ud835\udc57\u2212\ud835\udc5f\ud835\udc50\ud835\udc56)/\ud835\udf0f) \u00cd \ud835\udc58\u2208I\u2212 \ud835\udc50 exp((\ud835\udc5f\ud835\udc50\ud835\udc58\u2212\ud835\udc5f\ud835\udc50\ud835\udc56)/\ud835\udf0f) , (3) where \ud835\udf0fis a temperature parameter. It is noteworthy that the smaller the \ud835\udf0fis, the harder the samples will be drawn. 2.3 One-way Partial AUC For each context \ud835\udc50, we can define true positive rates (TPR) and false positive rates (FPR) as \ud835\udc47\ud835\udc43\ud835\udc45\ud835\udc50,\ud835\udf03(\ud835\udc61) = Pr(\ud835\udc5f\ud835\udc50\ud835\udc56> \ud835\udc61|\ud835\udc56\u2208I+ \ud835\udc50), (4) \ud835\udc39\ud835\udc43\ud835\udc45\ud835\udc50,\ud835\udf03(\ud835\udc61) = Pr(\ud835\udc5f\ud835\udc50\ud835\udc57> \ud835\udc61|\ud835\udc57\u2208I\u2212 \ud835\udc50). (5) Then, for a given \ud835\udc60\u2208[0, 1], let \ud835\udc47\ud835\udc43\ud835\udc45\u22121 \ud835\udc50,\ud835\udf03(\ud835\udc60) = inf{\ud835\udc61\u2208R,\ud835\udc47\ud835\udc43\ud835\udc45\ud835\udc50,\ud835\udf03(\ud835\udc61) < \ud835\udc60} and \ud835\udc39\ud835\udc43\ud835\udc45\u22121 \ud835\udc50,\ud835\udf03(\ud835\udc60) = inf{\ud835\udc61\u2208R, \ud835\udc39\ud835\udc43\ud835\udc45\ud835\udc50,\ud835\udf03(\ud835\udc61) < \ud835\udc60}. Based on these, the AUC can be formulated as AUC(\ud835\udf03) = 1 |C| \u2211\ufe01 \ud835\udc50\u2208C \u222b1 0 \ud835\udc47\ud835\udc43\ud835\udc45\ud835\udc50,\ud835\udf03 h \ud835\udc39\ud835\udc43\ud835\udc45\u22121 \ud835\udc50,\ud835\udf03(\ud835\udc60) i d\ud835\udc60. (6) As shown in Figure 2, One-way Partial AUC (OPAUC) only cares about the performance within a given false positive rate (FPR) range [\ud835\udefc, \ud835\udefd]. Non-normalized OPAUC [13] is equal to \ud835\udc42\ud835\udc43\ud835\udc34\ud835\udc48\ud835\udc36(\ud835\udf03, \ud835\udefc, \ud835\udefd) = 1 |C| \u2211\ufe01 \ud835\udc50\u2208C \u222b\ud835\udefd \ud835\udefc \ud835\udc47\ud835\udc43\ud835\udc45\ud835\udc50,\ud835\udf03 h \ud835\udc39\ud835\udc43\ud835\udc45\u22121 \ud835\udc50,\ud835\udf03(\ud835\udc60) i d\ud835\udc60. (7) In this paper, we consider the special case of OPAUC with \ud835\udefc= 0, which is denoted as \ud835\udc42\ud835\udc43\ud835\udc34\ud835\udc48\ud835\udc36(\ud835\udefd) for short. Based on the definition in Eq. (7), we can have the following non-parametric estimator of OPAUC(\ud835\udefd): \u009c \ud835\udc42\ud835\udc43\ud835\udc34\ud835\udc48\ud835\udc36(\ud835\udefd) = 1 |C| \u2211\ufe01 \ud835\udc50\u2208C 1 \ud835\udc5b+ 1 \ud835\udc5b\u2212 \u2211\ufe01 \ud835\udc56\u2208I+ \ud835\udc50 \u2211\ufe01 \ud835\udc57\u2208\ud835\udc46\u2193 I\u2212 \ud835\udc50[1,\ud835\udc5b\u2212\u00b7\ud835\udefd] I(\ud835\udc5f\ud835\udc50\ud835\udc56> \ud835\udc5f\ud835\udc50\ud835\udc57), (8) 3Without loss of generality, we consider a special case of DNS (Algorithm 2 in [40]) that set \ud835\udc5bto |I\u2212 \ud835\udc50|, set \ud835\udefd1, \u00b7 \u00b7 \u00b7 , \ud835\udefd\ud835\udc40\u22121 to 1 and set \ud835\udefd\ud835\udc40, \u00b7 \u00b7 \u00b7 , \ud835\udefd\ud835\udc41to 0. 
Our analysis can generalize to the arbitrary multi-nomial distribution of \ud835\udefd\ud835\udc58. where \ud835\udc5b+ = |I+ \ud835\udc50| and \ud835\udc5b\u2212= |I\u2212 \ud835\udc50|, and I(\u00b7) is an indicator function. For simplicity, we assume \ud835\udc5b\u2212\u00b7 \ud835\udefdis a positive integer. Since the OPAUC estimator in Eq. (8) is non-continuous and non-differentiable, we usually replace the indicator function with a continuous surrogate loss \ud835\udc3f(\ud835\udc50,\ud835\udc56, \ud835\udc57) = \u2113(\ud835\udc5f\ud835\udc50\ud835\udc56\u2212\ud835\udc5f\ud835\udc50\ud835\udc57). With suitable surrogate loss \u2113(\u00b7), maximizing \u009c \ud835\udc42\ud835\udc43\ud835\udc34\ud835\udc48\ud835\udc36(\ud835\udefd) in Eq. (8) is equivalent to the following problem: min \ud835\udf03 1 |C| \u2211\ufe01 \ud835\udc50\u2208C 1 \ud835\udc5b+ \u2211\ufe01 \ud835\udc56\u2208I+ \ud835\udc50 1 \ud835\udc5b\u2212\u00b7 \ud835\udefd \u2211\ufe01 \ud835\udc57\u2208\ud835\udc46\u2193 I\u2212 \ud835\udc50[1,\ud835\udc5b\u2212\u00b7\ud835\udefd] \ud835\udc3f(\ud835\udc50,\ud835\udc56, \ud835\udc57). (9) Remark that the objective is divided by a fixed constant \ud835\udefdfor proof, which does not affect the properties of the objective function. For surrogate loss \u2113(\u00b7), [16] proposes a sufficient condition to ensure it consistent for OPAUC maximization, where the widely used logistic loss \u2113(\ud835\udc61) = log(1 + exp(\u2212\ud835\udc61)) satisfies the properties. Additionally, for comparison among different \ud835\udefd, we define normalized OPAUC(\ud835\udefd) following [24], OPAUCnorm(\ud835\udefd) = Trans (OPAUC(\ud835\udefd)) , (10) where the normalized transformation is defined as: Trans(\ud835\udc34) = 1 2 \u0014 1 + \ud835\udc34\u2212min\ud835\udf03\ud835\udc34 max\ud835\udf03\ud835\udc34\u2212min\ud835\udf03\ud835\udc34 \u0015 . (11) 2.4 Distributionally Robust Optimization Given a divergence \ud835\udc37\ud835\udf19between two distributions \ud835\udc43and \ud835\udc44, Distributionally Robust Optimization (DRO) aims to minimize the expected risk over the worst-case distribution \ud835\udc44[19, 22, 27], where \ud835\udc44is in a divergence ball around training distribution \ud835\udc43. Formally, it can be defined as: min \ud835\udf03 sup \ud835\udc44 \ud835\udc38\ud835\udc44[L(\ud835\udc53\ud835\udf03(x),\ud835\udc66)] \ud835\udc60.\ud835\udc61. \ud835\udc37\ud835\udf19(\ud835\udc44||\ud835\udc43) \u2264\ud835\udf0c, (12) where the hyperparameter \ud835\udf0cmodulates the distributional shift, L is the loss function. In this paper, we will focus on two special divergence metrics, i.e. the KL divergence \ud835\udc37\ud835\udc3e\ud835\udc3f(\ud835\udc44||\ud835\udc43) = \u222b log( d\ud835\udc44 d\ud835\udc43)d\ud835\udc44 [18] and the CVaR divergence \ud835\udc37\ud835\udc36\ud835\udc49\ud835\udc4e\ud835\udc45(\ud835\udc44||\ud835\udc43) = sup log( d\ud835\udc44 d\ud835\udc43) [14]. 3 HARD NEGATIVE SAMPLING MEETS OPAUC In this section, we prove that the BPR loss equipped with HNS optimizes OPAUC(\ud835\udefd), which is the first step to understanding the effectiveness of HNS. We achieve the proof based on the DRO objective and present the proof outline in Figure 5. Following the theorems proposed in [41], we first show the connection between the OPAUC objective and the DRO-based objective. Then we prove that the personalized ranking problem (Eq. (1)) equipped with HNS is equivalent to the DRO-based objective in our theorems. 
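For concreteness, the non-parametric estimator in Eq. (8) can be transcribed into a few lines (a sketch for a single context $c$; the score arrays are assumed inputs):

```python
import numpy as np

def opauc_estimator(pos_scores, neg_scores, beta):
    """Eq. (8) for one context: compare each positive item only against the
    top floor(n_neg * beta) highest-scored (hardest) negatives."""
    n_pos, n_neg = len(pos_scores), len(neg_scores)
    hard_negs = np.sort(neg_scores)[::-1][: int(n_neg * beta)]  # top-ranked negatives
    hits = sum((p > hard_negs).sum() for p in pos_scores)
    return hits / (n_pos * n_neg)

# With beta = 1 this reduces to the standard (non-normalized) AUC estimator.
print(opauc_estimator(np.array([0.9, 0.4]), np.array([0.8, 0.3, 0.2, 0.1]), beta=0.5))
```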
Following [41], we define the DRO-based objective as: min \ud835\udf03 1 |C| \u2211\ufe01 \ud835\udc50\u2208C 1 \ud835\udc5b+ \u2211\ufe01 \ud835\udc56\u2208I+ \ud835\udc50 max \ud835\udc44\ud835\udc38\ud835\udc44[\ud835\udc3f(\ud835\udc50,\ud835\udc56, \ud835\udc57)] \ud835\udc60.\ud835\udc61. \ud835\udc37\ud835\udf19(\ud835\udc44||\ud835\udc430) \u2264\ud835\udf0c, (13) \fWWW \u201923, May 1\u20135, 2023, Austin, TX, USA Wentao et al. Exact but non-smooth Smooth but inexact CVaR Divergence KL Divergence DNS Softmax Estimator of OPAUC DRO objective Hard negative sampling Approximation Lemma Theorem Figure 5: The Lemma 1 shows the equivalence between the OPAUC estimator and DRO objective. Based on DRO objective, we prove the equivalence between HNS and OPAUC in Theorem 1 and Theorem 2. where \ud835\udc430 denotes uniform distribution over I\u2212 \ud835\udc50, the hyperparameter \ud835\udf0cmodulates the degree of distributional shift, \ud835\udc37\ud835\udf19is the divergence measure between distributions. Then we show the connection between the OPAUC objective and the DRO-based objective through the following lemma: Lemma 1 (Theorem 1 of [41]). By choosing CVaR divergence \ud835\udc37\ud835\udf19= \ud835\udc37\ud835\udc36\ud835\udc49\ud835\udc4e\ud835\udc45(\ud835\udc44||\ud835\udc430) = sup log( d\ud835\udc44 d\ud835\udc430 ) and setting \ud835\udefd= \ud835\udc52\u2212\ud835\udf0c, the DRObased objective (Eq. (13)) is equivalent to the \ud835\udc42\ud835\udc43\ud835\udc48\ud835\udc34\ud835\udc36(\ud835\udefd) objective (Eq. (9)). Based on the above lemma, we prove the equivalence between the OPAUC objective and the HNS based objective. Theorem 1. By choosing \ud835\udc43\ud835\udc5b\ud835\udc60= \ud835\udc43\ud835\udc37\ud835\udc41\ud835\udc46 \ud835\udc5b\ud835\udc60 , \ud835\udc40= \ud835\udc5b\u2212\u00b7 \ud835\udefd, (14) the DNS based problem (Eq. (1)) is equivalent to the \ud835\udc42\ud835\udc43\ud835\udc48\ud835\udc34\ud835\udc36(\ud835\udefd) objective (Eq. (9)). Proof. Given Lemma 1, we just need to show that DNS sampling based problem (Eq. (1)) is equivalent to the DRO-based objective (Eq. (13)). By choosing CVaR divergence, then the DRO-based objective (Eq. (13)) reduces to [41] (using strong duality and Theorem 4 in [30]) min \ud835\udf03 min \ud835\udf02\u22650 1 |C| \u2211\ufe01 \ud835\udc50\u2208C 1 \ud835\udc5b+ \u2211\ufe01 \ud835\udc56\u2208I+ \ud835\udc50 { 1 \ud835\udc52\u2212\ud835\udf0c\u00b7 \ud835\udc38\ud835\udc57\u223c\ud835\udc430 \u0002 (\ud835\udc3f(\ud835\udc50,\ud835\udc56, \ud835\udc57) \u2212\ud835\udf02\ud835\udc56)+ \u0003 + \ud835\udf02\ud835\udc56}, (15) where \ud835\udc430 denotes uniform distribution over I\u2212 \ud835\udc50. Following [39], it\u2019s easy to see that the optimal \ud835\udf02\ud835\udc56is the \ud835\udc52\u2212\ud835\udf0c-quantile of \ud835\udc3f(\ud835\udc50,\ud835\udc56, \ud835\udc57), which is defined as: \ud835\udf02\u2217 \ud835\udc56= inf \ud835\udf02\ud835\udc56{\ud835\udc43\ud835\udc57\u223c\ud835\udc430 [\ud835\udc3f(\ud835\udc50,\ud835\udc56, \ud835\udc57) > \ud835\udf02\ud835\udc56] < \ud835\udc52\u2212\ud835\udf0c}. (16) Substitute \ud835\udf02\ud835\udc56with \ud835\udf02\u2217 \ud835\udc56in Eq. (15) and replace \ud835\udc52\u2212\ud835\udf0cwith \ud835\udc40 \ud835\udc5b\u2212, then we obtain the equivalence between DNS sampling based problem (Eq. (1)) and DRO-based objective (Eq. (13)). Recall the conclusion in Lemma 1, then we complete the proof by setting \ud835\udc40= \ud835\udc5b\u2212\u00b7 \ud835\udefd. 
\u25a1 Remark: The DNS based problem is an exact but non-smooth estimator of OPUAC(\ud835\udefd), which is consistent for OPAUC(\ud835\udefd) maximization. The hyperparameter \ud835\udc40in DNS strategy directly determines \ud835\udefdin the OPAUC objective. Theorem 2. By choosing \ud835\udc43\ud835\udc5b\ud835\udc60= \ud835\udc43\ud835\udc46\ud835\udc5c\ud835\udc53\ud835\udc61\ud835\udc5a\ud835\udc4e\ud835\udc65 \ud835\udc5b\ud835\udc60 , \ud835\udf0f= \u221a\ufe04 Var\ud835\udc57(\ud835\udc3f(\ud835\udc50,\ud835\udc56, \ud835\udc57)) \u22122 log \ud835\udefd , (17) Var\ud835\udc57(\ud835\udc3f(\ud835\udc50,\ud835\udc56, \ud835\udc57)) = \ud835\udc38\ud835\udc57\u223c\ud835\udc430 \u0002 (\ud835\udc3f(\ud835\udc50,\ud835\udc56, \ud835\udc57) \u2212\ud835\udc38\ud835\udc57\u223c\ud835\udc430 [\ud835\udc3f(\ud835\udc50,\ud835\udc56, \ud835\udc57)])2\u0003 , (18) then problem (Eq. (1)) equipped with softmax-based sampling strategy is a surrogate version of the \ud835\udc42\ud835\udc43\ud835\udc48\ud835\udc34\ud835\udc36(\ud835\udefd) objective (Eq. (9)). The proof process is similar to Theorem 1. Substitute CVaR divergence with KL divergence but remain the same \ud835\udf0c, then we get a soft estimator of OPAUC(\ud835\udefd). We prove that the soft estimator is equivalent to softmax-based sampling problem (Eq. (1)). The precise relationship between \ud835\udf0fand \ud835\udefdis complex and hard to compute. Hence we get an approximate version via the Taylor expansion. The detailed proof can be found in Appendix A. Remark: The BPR loss equipped with softmax-based sampling is a smooth but inexact estimator of OPAUC(\ud835\udefd). The hyperparameter \ud835\udf0fin softmax-based sampling directly determines \ud835\udefd in OPAUC objective. 4 OPAUC MEETS TOP-K METRICS In this section, we investigate the connection between OPAUC(\ud835\udefd) and Top-\ud835\udc3eevaluation metrics, which is the second step to understanding the effectiveness of HNS. We propose two arguments to declare their relationship: (1) Compared to AUC, OPAUC(\ud835\udefd) has a stronger correlation with Top-\ud835\udc3eevaluation metrics. (2) A smaller\ud835\udc3ein Top-\ud835\udc3eevaluation metrics has a stronger correlation with a smaller \ud835\udefdin OPAUC(\ud835\udefd). We conduct theoretical analysis and simulation experiments to verify our proposals as follows. 4.1 Theoretical Analysis In this subsection, we analyze the connection between OPAUC(\ud835\udefd) and Top-\ud835\udc3emetrics from a theoretical perspective. To be concrete, we prove that given \ud835\udc3e, Precision@\ud835\udc3eand Recall@\ud835\udc3eare higher bounded and lower bounded by the functions of specific OPAUC(\ud835\udefd). Theorem 3. Suppose there are \ud835\udc41+ positive items and \ud835\udc41\u2212negative items, where \ud835\udc41+ > \ud835\udc3eand \ud835\udc41\u2212> \ud835\udc3e. 
For any permutation of all items in descending order, we have $$\frac{1}{N^+}\left\lfloor \frac{N^+ + K - \sqrt{(N^+ + K)^2 - 4 N^+ N^- \cdot OPAUC(\beta)}}{2} \right\rfloor \le Recall@K \le \frac{1}{N^+}\left\lceil \sqrt{N^+ N^- \cdot OPAUC(\beta)} \right\rceil, \quad (19)$$ $$\frac{1}{K}\left\lfloor \frac{N^+ + K - \sqrt{(N^+ + K)^2 - 4 N^+ N^- \cdot OPAUC(\beta)}}{2} \right\rfloor \le Precision@K \le \frac{1}{K}\left\lceil \sqrt{N^+ N^- \cdot OPAUC(\beta)} \right\rceil, \quad (20)$$ where $\beta = K/N^-$. [Figure 6: The estimated correlation coefficient between Top-$K$ evaluation metrics (NDCG, Precision, Recall at $K \in \{5, 20, 50\}$) and $OPAUC_{norm}(\beta)$ under Monte Carlo sampling experiments, where $N^+ = 200$ and $N^- = 800$; the value of $\beta$ at which each curve reaches its maximum correlation coefficient is highlighted. Remark that AUC is also a special case of $OPAUC_{norm}(\beta)$ with $\beta = 1$.] Remark: From the above, we draw the following inspirations: (1) Top-$K$ metrics like Precision@$K$ and Recall@$K$ have a strong connection with a specific $OPAUC(\beta)$, where $\beta = K/N^-$. However, such a connection does not exist for AUC, which confirms our first argument. Hence, maximizing a specific $OPAUC(\beta)$ approximately optimizes the corresponding Precision@$K$ and Recall@$K$. (2) The smaller $K$ is, the smaller the $\beta$ ($= K/N^-$) that should be considered. A smaller $K$ has a stronger connection with a smaller $\beta$, which effectively verifies our second argument. 4.2 Simulation Experiments. In this subsection, we conduct Monte Carlo sampling experiments to analyze the connection between $OPAUC(\beta)$ and Top-$K$ evaluation metrics. For comparison among different $\beta$, we use the normalized OPAUC defined in Eq. (10). Suppose there are $N^+$ positive items and $N^-$ negative items in the item set $\mathcal{I}$. Due to the vast scale of the entire permutation space of items, it is impossible to enumerate all cases directly. Hence, we make a Monte Carlo approximation and uniformly sample permutations from the space as simulated ranking lists 100,000 times. Then we calculate the evaluation metrics (Top-$K$ metrics and $OPAUC_{norm}(\beta)$) for these simulated ranking lists.
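Each simulated list thus yields one paired observation of a Top-$K$ metric and the OPAUC estimator. A hypothetical transcription of the sampling loop (using Recall@$K$ and the Eq. (8) estimator; `n_samples` can be reduced for a quick run):

```python
import numpy as np

def simulate_metric_pairs(n_pos=200, n_neg=800, k=20, beta=0.025, n_samples=100_000):
    """Monte Carlo simulation of Section 4.2: draw uniformly random ranking
    lists of n_pos positives and n_neg negatives, and record paired values of
    Recall@K and the Eq. (8) OPAUC(beta) estimator for each simulated list."""
    labels = np.array([1] * n_pos + [0] * n_neg)
    recalls, opaucs = [], []
    for _ in range(n_samples):
        perm = np.random.permutation(labels)                   # one simulated ranking list
        recalls.append(perm[:k].sum() / n_pos)                 # Recall@K
        neg_idx = np.where(perm == 0)[0][: int(n_neg * beta)]  # top-ranked negatives
        pos_idx = np.where(perm == 1)[0]
        hits = sum((pos_idx < j).sum() for j in neg_idx)       # positives ranked above them
        opaucs.append(hits / (n_pos * n_neg))
    return np.array(recalls), np.array(opaucs)
```

Feeding the two arrays to `np.corrcoef` then gives one point on the curves of Figure 6.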
Afterward, we estimate the correlation coefficient between Top-\ud835\udc3emetrics and OPAUC\ud835\udc5b\ud835\udc5c\ud835\udc5f\ud835\udc5a(\ud835\udefd) and report them in Figure 6. We report \ud835\udefdof OPAUC\ud835\udc5b\ud835\udc5c\ud835\udc5f\ud835\udc5a(\ud835\udefd) in logarithmic scale. Furthermore, we highlight the value of \ud835\udefdwhen each curve reaches its maximum correlation coefficient. Remark that OPAUC\ud835\udc5b\ud835\udc5c\ud835\udc5f\ud835\udc5a(1) is equal to AUC. As shown in Figure 6, we have the following observations: (1) The correlation coefficient of the highest point of the curve is much larger than the correlation coefficient when \ud835\udefdis equal to 1. That means most Top-\ud835\udc3eevaluation metrics have higher correlation coefficients with specific OPAUC\ud835\udc5b\ud835\udc5c\ud835\udc5f\ud835\udc5a(\ud835\udefd) (above 0.8) than AUC (under 0.4), which clearly verifies our first argument. (2) Given a specific \ud835\udc3ein Top-\ud835\udc3emetrics, the correlation coefficient with OPAUC\ud835\udc5b\ud835\udc5c\ud835\udc5f\ud835\udc5a(\ud835\udefd) gets the maximum value at a specific \ud835\udefd. Both too large and too small \ud835\udefdwill degrade the correlation with specific Top-\ud835\udc3emetrics. (3) For different \ud835\udc3e, the peak of the curve varies according to \ud835\udefd. The smaller the \ud835\udc3ein the Top-\ud835\udc3eevaluation metrics, the smaller the \ud835\udefdthat takes the maximum value of the correlation coefficient. This effectively confirms our second argument. (4) On the left side of the peak of the curve, we find that the correlation coefficient of NDCG@\ud835\udc3edescends more slowly than the other two metrics. This is because NDCG@\ud835\udc3epays more attention to top-ranked items in Top-\ud835\udc3eitems. 5 DEEP UNDERSTANDING OF HNS Based on the arguments discussed above, we gain a deeper theoretical understanding of HNS. The BPR loss equipped with HNS optimizes OPAUC(\ud835\udefd), which has a stronger connection with Top-\ud835\udc3e metrics. In this sense, we derive the following corollary: Corollary 1. The BPR loss equipped with HNS approximately optimizes Top-\ud835\udc3eevaluation metrics, where the level of sampling hardness controls the value of \ud835\udc3e. Moreover, we take a step further and propose two instructive guidelines for effective usage of HNS. (1) The sampling hardness should be controllable, e.g., via pre-defined hyper-parameters, to adapt to different Top\ud835\udc3emetrics and datasets. (2) The smaller the\ud835\udc3ewe emphasize in Top-\ud835\udc3eevaluation metrics, the harder the negative samples we should draw. Motivated by these, we generalize the DNS and softmax-based sampling to two controllable algorithms DNS(\ud835\udc40, \ud835\udc41) and Softmaxv(\ud835\udf0c, \ud835\udc41), as shown in Algorithm 1 and Algorithm 2 respectively. \u2022 In DNS(\ud835\udc40, \ud835\udc41), we utilize hyperparameter \ud835\udc40to control sampling hardness, where the original DNS is a special case with \ud835\udc40= 1. \u2022 In Softmax-v(\ud835\udf0c, \ud835\udc41), we propose to use an adaptive \ud835\udf0fin Eq. (17), instead of a fixed \ud835\udf0fin Eq. (3). Hyperparameter \ud835\udf0ccontrols the sampling hardness. This ensures that \ud835\udefdof the optimization objective \ud835\udc42\ud835\udc43\ud835\udc34\ud835\udc48\ud835\udc36(\ud835\udefd) remains the same during training. 
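A rough Python rendering of one sampling step of the two controllable algorithms follows (illustrative only; the pseudo-code of Algorithms 1 and 2 below is the precise specification; `neg_scores` holds the predicted scores $r_{cj}$ and `losses` the pairwise losses $\ell(r_{ci} - r_{cj})$ of the candidate negatives for one $(c, i)$ pair):

```python
import numpy as np

def dns_sample(neg_scores, M, N):
    """DNS(M, N): uniformly draw a pool of N candidate negatives, keep the M
    highest-scored ones, and sample the training negative uniformly from them
    (smaller M and larger N both mean harder negatives)."""
    pool = np.random.choice(len(neg_scores), size=N, replace=False)
    top_m = pool[np.argsort(neg_scores[pool])[::-1][:M]]
    return int(np.random.choice(top_m))

def softmax_v_sample(losses, rho, N):
    """Softmax-v(rho, N): softmax sampling over a size-N pool with the
    adaptive temperature tau = sqrt(Var_j(L) / (2 * rho)) from Eq. (17),
    so the targeted beta of OPAUC(beta) stays fixed during training."""
    pool = np.random.choice(len(losses), size=N, replace=False)
    tau = max(np.sqrt(losses[pool].var() / (2 * rho)), 1e-8)  # guard zero variance
    logits = losses[pool] / tau
    probs = np.exp(logits - logits.max())
    return int(np.random.choice(pool, p=probs / probs.sum()))
```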
As discussed, the hyperparameters \ud835\udc40and \ud835\udf0caffect how hard the negative samples we will draw. Besides, the size of the sampling pool \ud835\udc41also affects the actual sampling probability of negative items. We conduct simulation experiments to investigate the difference of the sampling distribution under different parameter settings. We choose the user embeddings and items embeddings from the welltrained model on the Gowalla dataset and keep them fixed. Then, we randomly pick a (user, positive item) pair (\ud835\udc50, \ud835\udc56) and then simulate the sampling process 10000 times to estimate the actual sampling probability. The average value of \ud835\udc5d\ud835\udc50\ud835\udc56\ud835\udc57over the sampling process \fWWW \u201923, May 1\u20135, 2023, Austin, TX, USA Wentao et al. Algorithm 1 DNS (\ud835\udc40, \ud835\udc41) 1: Initialize \ud835\udf03 2: for \ud835\udc61= 1, . . . ,\ud835\udc47do 3: Sample a mini-batch B \u2208D 4: for (\ud835\udc50,\ud835\udc56) \u2208B do 5: Uniformly sample a mini-batch B\u2032 \ud835\udc50\u2208I\u2212 \ud835\udc50, |B\u2032 \ud835\udc50| = \ud835\udc41. 6: Let \ud835\udc5d\ud835\udc50\ud835\udc56\ud835\udc57= ( 1 \ud835\udc40, \ud835\udc57\u2208S\u2193 B\u2032 \ud835\udc50[1, \ud835\udc40] 0, \ud835\udc57\u2208\ud835\udc5c\ud835\udc61\u210e\ud835\udc52\ud835\udc5f\ud835\udc60. 7: end for 8: Compute a gradient estimator \u2207\ud835\udc61by \u2207\ud835\udc61= 1 |B| \u2211\ufe01 (\ud835\udc50,\ud835\udc56) \u2208B \u2211\ufe01 \ud835\udc57\u2208I\u2212 \ud835\udc50 \ud835\udc5d\ud835\udc50\ud835\udc56\ud835\udc57\u2207\ud835\udf03\ud835\udc3f(\ud835\udc50,\ud835\udc56, \ud835\udc57). 9: Update \ud835\udf03\ud835\udc61+1 = \ud835\udf03\ud835\udc61\u2212\ud835\udf02\u2207\ud835\udc61. 10: end for Algorithm 2 Softmax-v (\ud835\udf0c, \ud835\udc41) 1: Initialize \ud835\udf03 2: for \ud835\udc61= 1, . . . ,\ud835\udc47do 3: Sample a mini-batch B \u2208D 4: for (\ud835\udc50,\ud835\udc56) \u2208B do 5: Uniformly sample a mini-batch B\u2032 \ud835\udc50\u2208I\u2212 \ud835\udc50, |B\u2032 \ud835\udc50| = \ud835\udc41. 6: Let \ud835\udc5d\ud835\udc50\ud835\udc56\ud835\udc57= \uf8f1 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f3 \ud835\udc52\u2113(\ud835\udc5f\ud835\udc50\ud835\udc56\u2212\ud835\udc5f\ud835\udc50\ud835\udc57)/\ud835\udf0f \u00cd \ud835\udc58\u2208B\u2032 \ud835\udc50\ud835\udc52\u2113(\ud835\udc5f\ud835\udc50\ud835\udc56\u2212\ud835\udc5f\ud835\udc50\ud835\udc58)/\ud835\udf0f, \ud835\udc57\u2208B\u2032 \ud835\udc50 0, \ud835\udc57\u2208\ud835\udc5c\ud835\udc61\u210e\ud835\udc52\ud835\udc5f\ud835\udc60, where \ud835\udf0f= \u221a\ufe02 Var\ud835\udc57(\ud835\udc3f(\ud835\udc50,\ud835\udc56,\ud835\udc57)) 2\ud835\udf0c . 7: end for 8: Compute a gradient estimator \u2207\ud835\udc61by \u2207\ud835\udc61= 1 |B| \u2211\ufe01 (\ud835\udc50,\ud835\udc56) \u2208B \u2211\ufe01 \ud835\udc57\u2208I\u2212 \ud835\udc50 \ud835\udc5d\ud835\udc50\ud835\udc56\ud835\udc57\u2207\ud835\udf03\ud835\udc3f(\ud835\udc50,\ud835\udc56, \ud835\udc57). 9: Update \ud835\udf03\ud835\udc61+1 = \ud835\udf03\ud835\udc61\u2212\ud835\udf02\u2207\ud835\udc61. 10: end for is approximated as the actual sampling probability that negative item \ud835\udc57is chosen by pair (\ud835\udc50, \ud835\udc56) for training. We report the cumulative probability distribution under different parameter settings in Figure 7. The negative items are in descending order w.r.t. their scores. 
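The curve-estimation procedure itself is a short simulation (a sketch; `sampler` is any zero-argument function returning one sampled item index, e.g. a closure over `dns_sample`, and item indices are assumed already sorted by decreasing score):

```python
import numpy as np

def cumulative_sampling_curve(sampler, n_items, n_draws=10_000):
    """Estimate per-item sampling probabilities by repeated simulation, then
    return the cumulative distribution over score-ranked items; a faster-rising
    curve means top-ranked (harder) negatives are drawn more often."""
    counts = np.zeros(n_items)
    for _ in range(n_draws):
        counts[sampler()] += 1
    return np.cumsum(counts / n_draws)
```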
Since items are in descending order, we conclude that the faster the curve rises, the higher the sampling probability the top-ranked items are drawn with. Easily, we have the following observations: \u2022 Smaller \ud835\udc40in DNS(\ud835\udc40, \ud835\udc41) means higher sampling hardness. \u2022 Larger \ud835\udc41in DNS(\ud835\udc40, \ud835\udc41) means higher sampling hardness. \u2022 Larger \ud835\udf0cin Softmax-v(\ud835\udf0c, N) means higher sampling hardness. 6 EXPERIMENTS In this section, we evaluate the models on three public datasets to figure out the following questions: \u2022 (Q1) How do DNS(\ud835\udc40, \ud835\udc41) and Softmax-v(\ud835\udf0c, \ud835\udc41) perform compared to state-of-the-art HNS methods? Is it beneficial to control sampling hardness with pre-defined hyperparameters? \u2022 (Q2) Can experiment results validate our second guideline on adjusting sampling hardness according to \ud835\udc3ein Top-\ud835\udc3emetrics? 0 2500 5000 7500 Item 0.00 0.25 0.50 0.75 1.00 Cumulative Probability DNS(M, 200) M: 20 M: 10 M: 5 M: 1 0 2500 5000 7500 Item 0.00 0.25 0.50 0.75 1.00 DNS(5,N) N: 50 N: 100 N: 200 N: 500 0 2500 5000 7500 Item 0.00 0.25 0.50 0.75 1.00 Softmax( , 200) : 0.1 : 0.2 : 0.3 : 0.4 Figure 7: Approximated distributions under different parameter settings. The faster the curve rises, the higher the sampling probability the top-ranked items are drawn with. Table 1: The Statistics of Datasets Dataset User Item Train Test Sparsity Gowalla 29,858 40,988 822,358 205,106 99.9160% Yelp 77,277 45,638 1,684,846 419,049 99.9403% Amazon 130,380 128,939 1,934,404 481,246 99.9856% Dataset. The Statistics of three public datasets are shown in Table 1, which vary in scale and sparsity. The Gowalla dataset is the collection of user check-in histories. The Yelp dataset is a subset of Yelp\u2019s businesses, reviews, and user data. The Amazon dataset is a subset of customers\u2019 ratings for Amazon books. Considering the ratings are integers ranging from 1 to 5, the ratings above 4 are regarded as positive. Following [9, 21], we leverage the routine strategy \u2014 5-core setting to preprocess the dataset. For each user, we randomly select 80% of items to form the training set and 20% of items to form the test set. 10% of the training set is used for validation. The models are built on the training set and evaluated on the test set. Metrics. When evaluating the models, we filter out positive items in the training set and utilize widely-used metrics Recall@\ud835\udc3eand NDCG@\ud835\udc3eto evaluate the recommendation performance. The detailed definitions are shown in Appendix C. 6.1 Baselines To verify the effectiveness of DNS(\ud835\udc40, \ud835\udc41) and Softmax-v(\ud835\udf0c, \ud835\udc41) methods, we compare our algorithms with the following baselines. \u2022 BPR [29] is a classical method for implicit feedback. It utilizes pairwise logit loss and randomly samples negative items. \u2022 AOBPR [28] improves BPR through adaptively oversampling top-ranked negative items. \u2022 WARP [35] uses the Weighted Approximate-Rank Pairwise loss function for implicit feedback. \u2022 IRGAN [33] utilizes a minimax game to optimize the generative and discriminative network simultaneously. The negative items are drawn based on softmax distribution. \u2022 DNS [40] is a dynamic negative sampler, which is a special case of DNS(\ud835\udc40, \ud835\udc41) with \ud835\udc40= 1. 
\u2022 Kernel [2] is an efficient sampling method that approximates the softmax distribution with non-negative quadratic kernel. \u2022 PRIS [21] utilizes importance sampling for training, where importance weights are based on softmax distribution. They adopt \fOn the Theories Behind Hard Negative Sampling for Recommendation WWW \u201923, May 1\u20135, 2023, Austin, TX, USA Table 2: Performance comparison on three datasets. The best results are in bold and the second best are underlined. The baselines are taken from [9], as we completely follow their experiment settings. \u201c**\u201d denote the improvement is significant with t-test with \ud835\udc5d< 0.05. Method Gowalla Yelp Amazon NDCG@50 Recall@50 NDCG@50 Recall@50 NDCG@50 Recall@50 BPR 0.1216 0.2048 0.0524 0.1083 0.0499 0.1171 AOBPR 0.1385 0.2417 0.0677 0.1346 0.0563 0.1303 WARP 0.1248 0.2240 0.0636 0.1332 0.0542 0.1267 IRGAN 0.1443 0.2242 0.0695 0.1367 0.0627 0.1395 Kernel 0.1399 0.2264 0.0658 0.1315 0.0700 0.1495 DNS 0.1412 0.1839 0.0693 0.1425 0.0615 0.1378 PRIS(U) 0.1334 0.2217 0.0639 0.1273 0.0607 0.1377 PRIS(P) 0.1385 0.2282 0.0673 0.1342 0.0697 0.1463 AdaSIR(U) 0.1489 0.2500 0.0732 0.1523 0.0731 0.1505 AdaSIR(P) 0.1519 0.2516 0.0731 0.1525 0.0740 0.1534 DNS(\ud835\udc40, \ud835\udc41) 0.1811** 0.2989** 0.0899** 0.1774** 0.1014** 0.1833** Softmax-v(\ud835\udf0c, \ud835\udc41) 0.1837** 0.2993** 0.0840** 0.1690** 0.1046** 0.1937** the uniform and popularity-based distribution to construct the sampling pool, denoted as PRIS(U) and PRIS(P), respectively. \u2022 AdaSIR [9] is a two-stage method that maintains a fixed size contextualized sample pool with importance resampling. The importance weights are based on softmax distribution. They adopt the uniform and popularity-based distribution to construct the sampling pool, denoted as AdaSIR(U) and AdaSIR(P), respectively. 6.2 Implementation Details The algorithms are implemented based on PyTorch. We completely follow the experiments setting in [9, 21]. We utilize Matrix Factorization (MF) as the recommender model for our model. We utilize Adam optimizer to optimize all parameters. The dimension of user and item embedding is set to 32. The batch size is fixed to 4096, and the learning rate is set to 0.001 by default. The number of training epochs is set to 200 for all methods. We utilize grid search to find the best with weight_decay \u2208{0.1, 0.01, 0.001, 0.0001}. The hyperparameter \ud835\udc40in DNS(\ud835\udc40, \ud835\udc41) is tuned over {1,2,3,4,5,10,20} and the hyperparameter \ud835\udf0cof Softmax-v(\ud835\udf0c, \ud835\udc41) is tuned over {0.01, 0.1, 1, 10, 100} for all datasets. Due to the efficiency limit, the sample pool size \ud835\udc41for each user is set to 200, 200, and 500 for Gowalla, Yelp, and Amazon. The maximum number of negative samples per positive pair (\ud835\udc50,\ud835\udc56) is the sample pool size. The baseline results are directly taken from [9], as we completely follow their experiment setting. Code is available at https://github.com/swt-user/WWW_2023_code. 6.3 (RQ1) Performance Comparison Table 2 shows the performance of DNS(\ud835\udc40, \ud835\udc41), Softmax-v(\ud835\udf0c, \ud835\udc41), and baselines. From them, we have the following key findings: \u2022 Compared to the uniform negative sampling method BPR, most HNS methods perform much better, especially DNS(\ud835\udc40, \ud835\udc41) and Softmax-v(\ud835\udf0c, \ud835\udc41). This clearly verifies the effectiveness of HNS. 
\u2022 Benefiting from the adjustable sampling hardness, DNS(\ud835\udc40, \ud835\udc41) significantly outperform its original version on average 40%. Meanwhile, the two methods also present a huge performance boost 0.0% 50.0% 100.0% NDCG M=5.0 M=5.0 M=6.0 Gowalla M=6.0 M=6.0 M=9.0 Yelp M=2.0 M=2.0 M=6.0 Amazon 0.0% 50.0% 100.0% Recall M=3.0 M=5.0 M=7.0 M=6.0 M=6.0 M=10.0 M=2.0 M=6.0 M=6.0 5.0 7.5 10.0 M 0.0% 50.0% 100.0% Precision M=5.0 M=7.0 M=8.0 5 10 15 M M=6.0 M=10.0 M=11.0 @5 @20 @50 1.0 2.0 3.0 4.0 5.0 6.0 7.0 8.0 9.0 10.0 11.0 12.0 13.0 14.0 15.0 M M=2.0 M=6.0 M=6.0 Figure 8: The effect of \ud835\udc40in DNS(\ud835\udc40, \ud835\udc41), where \ud835\udc41is set to 200, 200, 500 for Gowalla, Yelp and Amazon respectively. over other HNS methods. These findings demonstrate the extreme importance of our first guideline in Section 5. 6.4 (RQ2) Performance with Different Sampling Distributions This subsection investigates how Top-\ud835\udc3emetrics will change under different sampling distributions on real-world datasets. As the sampling distribution is affected by hyperparameters, see Section 5, we investigate the performance under different hyperparameter settings. We report the performance results on three public dataset under different \ud835\udc40in DNS(M,N), different N in DNS(M, N) and different \ud835\udf0c in Softmax-v(\ud835\udf0c,N) in Figure 8, Figure 9 and Figure 10 respectively. We only care about the relative magnitude of Top-\ud835\udc3emetrics, so we report the relative value of Top-\ud835\udc3eevaluation metrics for better visualization. We highlight the value of hyperparameters when each curve reaches its maximum result. For each result, we tune \fWWW \u201923, May 1\u20135, 2023, Austin, TX, USA Wentao et al. 0.0% 50.0% 100.0% NDCG N=400 N=400 N=300 Gowalla N=300 N=150 N=150 Yelp N=1200 N=1000 N=800 Amazon 0.0% 50.0% 100.0% Recall N=500 N=400 N=300 N=250 N=150 N=120 N=1000 N=800 N=500 200 400 600 N 0.0% 50.0% 100.0% Precision N=400 N=300 N=200 100 200 300 400 N N=250 N=150 N=150 @5 @20 @50 200 300 400 500 600 700 800 900 1000 1100 1200 1300 1400 1500 N N=1200 N=800 N=500 Figure 9: The effect of \ud835\udc41in DNS(\ud835\udc40, \ud835\udc41), where \ud835\udc40is set to 5 for all three datasets. 0.0% 50.0% 100.0% NDCG =0.9 =0.9 =0.6 Gowalla =0.017 =0.017 =0.017 Yelp =20.0 =20.0 =10.0 Amazon 0.0% 50.0% 100.0% Recall =0.9 =0.8 =0.6 =0.017 =0.017 =0.017 =20.0 =15.0 =5.0 0.5 1.0 0.0% 50.0% 100.0% Precision =0.9 =0.8 =0.6 0.0125 0.0150 0.0175 0.0200 =0.017 =0.015 =0.013 @5 @20 @50 1.0 5.0 10.0 15.0 20.0 25.0 30.0 40.0 50.0 =20.0 =5.0 =5.0 Figure 10: The effect of \ud835\udf0cin Softmax-v(\ud835\udf0c, \ud835\udc41), where \ud835\udc41is set to 200, 200, 500 for Gowalla, Yelp and Amazon respectively. the learning rate \u2208{0.01, 0.001} and weight_decay \u2208{0.01, 0.001, 0.0001} to find the best. \u2022 From Figure 8, we observe that for all datasets and all metrics, the lower the \ud835\udc3ein Top-\ud835\udc3emetrics is, the smaller the \ud835\udc40in DNS(\ud835\udc40, \ud835\udc41) when the curve achieves its maximum performance. \u2022 From Figure 9, we observe that for all datasets and all metrics, the lower the \ud835\udc3ein Top-\ud835\udc3emetrics is, the larger the \ud835\udc41in DNS(\ud835\udc40, \ud835\udc41) when the curve achieves its maximum performance. 
\u2022 From Figure 10, we observe that for all datasets and all metrics, the lower the \ud835\udc3ein Top-\ud835\udc3emetrics is, the larger the \ud835\udf0cin Softmaxv(\ud835\udf0c, \ud835\udc41) when the curve achieves its maximum performance. In some cases, the peak of the Top-20 curve coincides with the peak of the Top-50 curve or Top-5 curve. This can be attributed to the relatively small difference of \ud835\udc3e. With a larger difference of \ud835\udc3e, for example, Top-50 and Top-5, their curve always matches our observation. We conduct further experiments to investigate the performance across a wide range of \ud835\udc3ein Appendix D. Recall that we have observed how hyperparameters (\ud835\udc40, \ud835\udc41, \ud835\udf0c) affect sampling hardness in Figure 7. Combining these two observations, we can easily conclude that the smaller the \ud835\udc3ein Top\ud835\udc3emetrics, the harder the negative samples we should draw. These clearly verify our second guideline. 7 RELATED WORK 7.1 Negative Sampling for Recommendation Early work sample items based on predefined distributions, e.g., uniform distribution [11, 29] and popularity-based distribution [3, 10]. These static samplers are independent of model status and unchanged for different users. Thus, the performance is limited. Later on, adaptive samplers are proposed, such as DNS [40] and softmaxbased sampling methods. Softmax-based sampling is widely used in adversarial learning (e.g. IRGAN [33] and ADVIR [26]) and importance sampling (e.g. PRIS [21] and AdaSIR [9]). They assign high sampling probability to top-ranked negative items, accounting for model status. There are also some fine-grained negative sampling methods [23, 32, 34, 42]. Empirical experiments verify the effectiveness and efficiency of HNS. The efficiency problem has been studied in AOBPR [28]. They argue that HNS samples more informative high-scored items, which can contribute more to the gradients and accelerate the convergence. Nevertheless, the reasons for the effectiveness of HNS are not revealed yet. To the best of our knowledge, only DNS [40] provides clues of the connection between HNS and Top-\ud835\udc3emetrics. But unfortunately, they fail to give a theoretical foundation and deep analyses. 7.2 Partial AUC Maximization Early work does not directly optimize the surrogate objective of Partial AUC, but instead, some other related objectives, e.g., p-norm push [31], infinite-push [20], and asymmetric SVM objective [36]. Nevertheless, these algorithms are not scalable and applicable to deep learning. More recently, [38] considers two-way partial AUC maximization and simplifies the optimizing problem for large scale optimization. [41] proposes new formulations of Partial AUC surrogate objectives using distributionally robust optimization (DRO). This work motivates our proof of the connection between OPAUC and HNS. A more comprehensive study of AUC can refer to [37]. 8" + } + ], + "Yuxuan Bian": [ + { + "url": "http://arxiv.org/abs/2402.04852v2", + "title": "Multi-Patch Prediction: Adapting LLMs for Time Series Representation Learning", + "abstract": "In this study, we present aLLM4TS, an innovative framework that adapts Large\nLanguage Models (LLMs) for time-series representation learning. 
Central to our\napproach is that we reconceive time-series forecasting as a self-supervised,\nmulti-patch prediction task, which, compared to traditional contrastive\nlearning or mask-and-reconstruction methods, captures temporal dynamics in\npatch representations more effectively. Our strategy encompasses two-stage\ntraining: (i). a causal continual pre-training phase on various time-series\ndatasets, anchored on next patch prediction, effectively syncing LLM\ncapabilities with the intricacies of time-series data; (ii). fine-tuning for\nmulti-patch prediction in the targeted time-series context. A distinctive\nelement of our framework is the patch-wise decoding layer, which departs from\nprevious methods reliant on sequence-level decoding. Such a design directly\ntransposes individual patches into temporal sequences, thereby significantly\nbolstering the model's proficiency in mastering temporal patch-based\nrepresentations. aLLM4TS demonstrates superior performance in several\ndownstream tasks, proving its effectiveness in deriving temporal\nrepresentations with enhanced transferability and marking a pivotal advancement\nin the adaptation of LLMs for time-series analysis.", "authors": "Yuxuan Bian, Xuan Ju, Jiangtong Li, Zhijian Xu, Dawei Cheng, Qiang Xu", "published": "2024-02-07", "updated": "2024-03-10", "primary_cat": "cs.LG", "cats": [ "cs.LG" ], "main_content": "INTRODUCTION Time-series analysis (TSA) plays a pivotal role in a myriad of real-world applications. Current state-of-the-art TSA methodologies are usually custom-designed for specific tasks, such as forecasting (Zeng et al., 2023; Nie et al., 2023), classification (Dempster et al., 2020), and anomaly detection (Xu et al., 2021). Despite these advancements, the quest for a versatile time-series representation capable of addressing diverse downstream tasks remains a formidable challenge. Traditional approaches predominantly rely on self-supervised learning strategies such as contrastive learning (Yue et al., 2022; Yang & Hong, 2022) and mask-and-reconstruction modeling (Zerveas et al., 2021; Li et al., 2023). Yet, they often struggle to fully grasp the intricate temporal variations characteristic of time-series, arising from inconsistencies between high-level representation optimization and downstream low-level tasks (Xie et al., 2023), or from the temporal disruption caused by random masking (Ma et al., 2023). The advent of large language models (LLMs) has revolutionized the field of natural language processing (OpenAI, 2023; Brown et al., 2020). Their remarkable adaptability extends beyond text, as demonstrated through prompting or fine-tuning for various modalities (Lu et al., 2023; Borsos et al., 2023). This adaptability has sparked a surge of interest in leveraging LLMs for TSA. Some studies have explored the use of frozen LLMs in TSA, either through the artful design of prompts (Gruver et al., 2023; Yu et al., 2023; Xue & Salim, 2023) or by reprogramming input time-series (Jin et al., 2023; Cao et al., 2023). Others have experimented with fine-tuning LLMs for specific TSA tasks (Zhou et al., 2023; Chang et al., 2023; Sun et al., 2023).
While these methods show promise, they tend to fall short in generating a comprehensive time-series representation due to implicit representation adaption and an inappropriate sequence-wise decoder (Spathis & Kawsar, 2023; Lee et al., 2023), as illustrated in Fig. 1. Recognizing the potential of LLMs in time-series modeling and the shortcomings of the aforementioned methods, we present aLLM4TS, an innovative framework that fully realizes the potential of LLMs for general time-series representation learning. Our framework reconceptualizes time-series forecasting as a self-supervised, multi-patch prediction task. (The patch concept, introduced by Nie et al. (2023), denotes the segmentation of the original time series at the subseries level, serving as input tokens for transformer-based TSA models.) This approach offers a more effective mechanism for capturing temporal dynamics at the patch level, mitigates the modeling inconsistency in contrastive learning and the temporal-dependency disruption in mask-and-reconstruction, and aligns with the causal pre-training process of LLMs. Specifically, we implement a two-pronged self-supervised training strategy to tailor LLMs for time-series representation learning. The first stage involves causal continual training on a variety of time-series datasets, focusing on next-patch prediction to synchronize LLM capabilities with time-series data intricacies. The subsequent stage involves fine-tuning the model for multi-patch prediction in targeted time-series scenarios. A pivotal aspect of our framework is the innovative design of the patch-wise decoder (depicted in Figure 1).
[Figure 1. Pipeline Comparison. Given a time series embedding/patch sequence $x \in R^{L \times D}$, $D \gg P$, where P is the patch size and H the forecasting horizon: Non-Patch-Based Models (1) or Previous Patch-Based Models (2) map it to the target sequence using a huge sequence-level linear layer $W_s \in R^{(L \cdot D) \times H}$; our patch-based parallel decoding aLLM4TS (3) decodes each patch to the time domain using a small shared patch-level linear layer $W_p \in R^{D \times P}$ without modeling temporal relationships among patches. The parameters of our patch-based decoding layer amount to only $P/(L \cdot H)$ of the sequence-based decoding layer (e.g., 0.34% when P = 16, L = 64, H = 720). Learning Paradigms: instead of contrastive learning, masking reconstruction, and limited fine-tuning of the LLMs, we adopt a forecasting-based two-stage pre-training task to better transfer the sequence modeling capabilities within LLMs to time series.]
This design mandates that the model decode each patch independently into temporal sequences, deviating from the conventional sequence-wise decoding approach and thus enabling the encoding of time-series representations directly within patches, as the decoder is precluded from using the patch sequence for temporal dynamics modeling. In summary, the primary contributions of this work include:
\u2022 We introduce aLLM4TS, an innovative framework adapting LLMs for patch-based time-series representation learning. This framework utilizes a two-stage forecasting-based pre-training strategy. The first stage encompasses causal next-patch training, transferring LLM capabilities for nuanced understanding of time-series data, followed by a fine-tuning stage focused on multi-patch prediction, ensuring a robust representation adaptation to specific time-series contexts.
\u2022 Diverging from traditional approaches that utilize sequence-wise decoding in TSA tasks, we propose a novel patch-wise decoding methodology. This approach significantly improves the adaptability of LLM backbones, optimizing patch-based time-series representation learning more effectively.
\u2022 aLLM4TS demonstrates superior performance across various downstream TSA tasks and diverse time-series data domains, validating its ability to derive time-series representations with remarkable transferability and setting new benchmarks in the field.
2. RELATED WORK
2.1. Time Series Representation Learning
The field of time series representation learning has witnessed increasing interest in recent years, with self-supervised learning methods playing a pivotal role. These methods generally fall into two categories:
Contrastive Learning. This category encompasses methods designed to refine the representation space by leveraging various forms of consistency, such as subseries consistency (Franceschi et al., 2019a; Fortuin et al., 2019), temporal consistency (Tonekaboni et al., 2021; Woo et al., 2022a; Yue et al., 2022), transformation consistency (Zhang et al., 2022c; Yang & Hong, 2022), and contextual consistency (Eldele et al., 2021). The goal is to ensure that representations of positive pairs are closely aligned, whereas those of negative pairs are distinctly separated. Despite their strengths, contrastive learning methods often struggle with aligning to low-level tasks such as forecasting, primarily due to their focus on high-level information (Xie et al., 2023).
Masked Modeling. PatchTST (Nie et al., 2023) pioneers the utilization of patches as the basic unit for processing time series, advocating for the prediction of masked subseries-level patches to grasp local semantic information while minimizing memory consumption. However, this approach tends to compromise temporal dependencies and fails to guarantee adequate representation learning of temporal dynamics, as the masked values are often predictably inferred from adjacent contexts (Li et al., 2023).
2.2. Time Series Analysis based on LLMs
The adaptation of pre-trained LLMs for time series analysis has garnered significant attention, exploiting their exceptional ability in sequence representation learning. This body of work can be categorized into three primary approaches:
\u2022 Zero-shot Adaptation. Studies such as those by Gruver et al. (2023); Xue & Salim (2023); Yu et al.
(2023) leverage the inherent sequence representation capabilities of frozen LLMs to facilitate time series prediction without any task-specific training.
\u2022 Prompt Optimization. Works by Jin et al. (2023); Cao et al. (2023); Sun et al. (2023) employ reprogrammed input time series in conjunction with the innate sequence modeling prowess of pre-trained LLMs, aiming for enhanced forecasting accuracy.
\u2022 Limited Fine-tuning. Research efforts like those of Zhou et al. (2023); Liu et al. (2023); Chang et al. (2023) apply fine-tuning to selected components of LLMs to improve performance in time series analysis tasks.
While these approaches yield encouraging outcomes, they predominantly focus on distinct TSA tasks instead of achieving a holistic time-series representation.
3. PRELIMINARIES AND MOTIVATION
3.1. Preliminaries
Time-Series Forecasting (TSF) is the fundamental challenge in time series analysis (Ma et al., 2023), aiming to analyze the dynamics and correlations among historical time-series data to predict future behavior, formulated as

$P(x^{L+1:L+H}_i \mid x^{1:L}_i) = \prod_{t=L+1}^{L+H} P(x^t_i \mid x^{1:t-1}_i), \quad (1)$

where L is the look-back window size, $x^t_i$ is the value of the i-th variate at the t-th time step, and the modeling target is to learn the unknown distribution of the H future values.
Causal Language Model Pre-training. Current LLMs mostly belong to causal language models (OpenAI, 2023; Radford et al., 2019). They utilize a diagonal masking matrix, ensuring that each token can only access information from previous tokens. The training objective is to predict the next token based on the history, defined as

$\mathcal{L}_{CLM} = \sum_{i=2}^{N} \log P(x_i \mid x_1, \cdots, x_{i-1}), \quad (2)$

where N is the number of tokens and $x_i$ denotes the i-th token.
3.2. Motivation
We explain the motivation of our proposed aLLM4TS solution by raising and answering the following two questions.
How can we effectively adapt LLMs to the time series modality? Traditional contrastive learning and mask-and-reconstruction serve as training losses, defined as

$\mathcal{L}_{CL} = -\frac{1}{N}\sum_{i=1}^{N} \log \frac{\exp(f(x_i)^T f(x^p_i))}{\exp(f(x_i)^T f(x^p_i)) + \sum_{j=1}^{B-1} \exp(f(x_i)^T f(x^j_i))}, \qquad \mathcal{L}_{MR} = \sum_{i=1}^{N} \|x_i - \hat{x}_i\|_2^2 = \sum_{i=1}^{N} \|x_i - f(\mathrm{Mask}(x_i))\|_2^2, \quad (3)$

where N denotes the number of training samples (pairs), B-1 is the number of negative samples, $x_i$ denotes the i-th sample, $x^p_i$ is the only positive sample, $x^j_i$ is the j-th negative sample of $x_i$, $\hat{x}_i = f(\mathrm{Mask}(x_i))$ is the reconstruction from the masked sample, and $f(\cdot)$ is the forward pass. Both are inappropriate for adapting LLMs to time series because of the non-negligible misalignment between their modeling objectives and both time series modeling and the causal sequence pre-training process of LLMs (Xie et al., 2023; Dong et al., 2023). However, during the pre-training stage in Eq. 2, causal LLMs undergo a training process similar to TSF in Eq. 1 on massive tokens (Brown et al., 2020; Radford et al., 2019), where $x^t_i$ is the t-th token of the i-th sentence. This drives us to reformulate time-series forecasting as a self-supervised multi-patch prediction task, which offers several benefits: (1) it guarantees modeling consistency between high-level representation optimization and downstream low-level tasks, where contrastive learning fails, fostering a versatile representation with robust predictive capabilities across diverse TSA tasks; (2) it avoids the temporal-dependency disruption caused by random masking in masked modeling; (3) it aligns with LLMs' pre-training, where each token is causally predicted, facilitating the seamless adaptation of LLMs to the temporal domain. A sketch contrasting these objectives follows.
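The following minimal PyTorch sketch contrasts the three objectives in Eq. 3 and the next-patch formulation; the `model` arguments are assumed black boxes with the indicated input/output shapes, and the names are ours.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z, z_pos, z_negs):
    """InfoNCE-style L_CL: z and z_pos are (B, D); z_negs is (B, B-1, D)."""
    pos = (z * z_pos).sum(-1, keepdim=True)                 # (B, 1) positive logits
    neg = torch.einsum('bd,bkd->bk', z, z_negs)             # (B, B-1) negative logits
    logits = torch.cat([pos, neg], dim=-1)
    target = torch.zeros(z.size(0), dtype=torch.long, device=z.device)
    return F.cross_entropy(logits, target)                  # positive sits at index 0

def masked_recon_loss(model, x, mask):
    """L_MR: reconstruct x from a randomly masked copy (mask: 1 = hidden)."""
    return F.mse_loss(model(x * (1 - mask)), x)

def next_patch_loss(model, patches):
    """Causal next-patch MSE: predict patch t+1 from patches <= t.
    patches: (B, Lp, P); model preserves the sequence shape."""
    pred = model(patches[:, :-1])
    return F.mse_loss(pred, patches[:, 1:])                 # targets are shifted patches
```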
Thus, we devise a forecasting-based self-supervised training strategy, as in Fig. 2 (a) and (b), to naturally sync LLMs' excellent representation learning capabilities with time-series variations, including causal next-patch continual pre-training and fine-tuning for multi-patch prediction in the target time-series context.
[Figure 2. The model framework of aLLM4TS. In stage 1, Causal Next-patch Pre-training (a), time series from different datasets are initially converted into univariate patch sequences. Then, we conduct next-patch prediction training with causal attention, effectively syncing LLM capabilities with the intricacies of time-series data. In stage 2, Multi-patch Prediction Fine-tuning (b), we fine-tune a few layers for multi-patch prediction in the target time-series context. Firstly, non-parametric methods are employed to obtain the initial anchor representation of the horizon. Next, we concatenate the look-back window patches and anchors and feed them into the time-series-aligned LLM after stage 1 training with a position-aware attention mask, optimizing anchors with history patches. Finally, all optimized horizon anchors are independently decoded into the target temporal domain through a shared patch-wise linear decoder.]
Is the sequence-level decoder suitable for patch-based time series representation? Current TSA models based on LLMs (Jin et al., 2023; Zhou et al., 2023) follow the traditional patch-based framework in Fig. 1 (2). Given a patch-based time series representation $\{p_1, \cdots, p_{L_p}\}$, $p_i \in R^D$, across $L_p$ patches with dimension D, they concatenate and map the patch sequence to the prediction horizon $H_p$ through a sequence-level decoder $W_s \in R^{(L_p \cdot D) \times H_p}$, which can be particularly oversized if either one or all of these values are large, causing severe downstream task overfitting.
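The parameter gap between the two decoder designs is easy to verify numerically. Below is a minimal sketch with illustrative sizes (the numbers are our own choices, not the paper's exact configuration):

```python
import torch.nn as nn

Lp, D, P, Hp = 64, 768, 16, 45           # patches, hidden size, patch length, horizon patches
H = Hp * P                                # horizon in time steps under a patch-aligned stride

# Previous patch-based models: flatten all Lp patch embeddings, map to the horizon.
seq_decoder = nn.Linear(Lp * D, H)        # (Lp*D) x H weights -> tens of millions of params

# Ours: one small projection shared by every patch, applied patch-by-patch.
patch_decoder = nn.Linear(D, P)           # D x P weights, independent of Lp and H

n_seq = sum(p.numel() for p in seq_decoder.parameters())
n_patch = sum(p.numel() for p in patch_decoder.parameters())
print(f"sequence-level head: {n_seq:,} params; patch-level head: {n_patch:,} params "
      f"({n_patch / n_seq:.3%} of the sequence-level head)")
```

Because the patch-level head never sees the patch sequence as a whole, all temporal reasoning must happen inside the backbone, which is exactly the disentanglement discussed next.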
Instead of mapping at the sequence level, we disentangle the encoding and decoding within our framework through a patch-wise decoder (Fig. 1 (3)), which is involved throughout our pipeline (Stage 1 and Stage 2). This empowers the LLM backbone and the patch-wise decoder to excel in their designated roles: encoding each patch for a better representation, and decoding each patch independently to the temporal domain.
4. METHOD
Our aLLM4TS stands as a novel framework for redefining the landscape of adapting LLMs to time series analysis.
4.1. Causal Next-patch Continual Pre-Training
In this section, we propose to conduct causal next-patch continual pre-training to sync pre-trained LLMs' sequence modeling capabilities with time-series modalities on diverse time series datasets (e.g., Weather, Traffic), as in Fig. 2 (a).
Forward Process. Given time series from various datasets, we first flatten them into M univariate sequences. We denote the i-th univariate series of look-back window size L starting at time index t as $x^{(i)}_{t:t+L-1} = \{x^{(i)}_t, ..., x^{(i)}_{t+L-1}\} \in R^{1 \times L}$, where i = 1, ..., M. Each of them is then divided into the patch sequence $p^{(i)}_{t_p:t_p+L_p-1} = \{p^{(i)}_{t_p}, ..., p^{(i)}_{t_p+L_p-1}\} \in R^{L_p \times P}$, where $t_p = \lfloor (t-P)/S \rfloor + 1$ is the starting patch index, $L_p = \lfloor (L-P)/S \rfloor + 1$ is the number of patches, P is the patch length, and S is the sliding stride. Finally, each sequence is fed independently into the causal LLM backbone, such as GPT2 (Radford et al., 2019), in the channel-independence setting, yielding the causal next-patch predictions $\hat{p}^{(i)}_{t_p+1:t_p+L_p} = \{\hat{p}^{(i)}_{t_p+1}, ..., \hat{p}^{(i)}_{t_p+L_p}\} \in R^{L_p \times D}$.
Loss Function. We use the MSE loss to guide the representation alignment at the patch level. The loss is gathered and averaged over the M time series to obtain the overall objective
$\mathcal{L}_p = E_p \frac{1}{M} \sum_{i=1}^{M} \|\hat{p}^{(i)}_{t_p+1:t_p+L_p} - p^{(i)}_{t_p+1:t_p+L_p}\|_2^2$,
which guides the causal next-patch time-series representation optimization to uncover the hidden temporal dynamics while aligning with the LLM's sequential modeling abilities. A minimal sketch of this pre-training stage follows.
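The sketch below shows the stage-1 computation in PyTorch: patching a univariate batch and regressing each shifted patch onto its successor. The `backbone`, `proj_in`, and `proj_out` modules are assumed stand-ins (a causally masked GPT-2 stack and the input/output projections); their names are ours.

```python
import torch
import torch.nn.functional as F

def patchify(x: torch.Tensor, P: int = 16, S: int = 8) -> torch.Tensor:
    """Split a batch of univariate series (B, L) into overlapping patches
    (B, Lp, P) with patch length P and sliding stride S."""
    return x.unfold(dimension=-1, size=P, step=S)

def stage1_loss(backbone, proj_in, proj_out, x):
    """One causal next-patch pre-training step (a sketch): patch embeddings,
    processed with causal attention inside the backbone, are decoded by a
    shared patch-level head and regressed onto the *next* ground-truth patch."""
    patches = patchify(x)                       # (B, Lp, P)
    h = backbone(proj_in(patches))              # (B, Lp, D), causal over the patch axis
    pred = proj_out(h)                          # shared patch-wise head -> (B, Lp, P)
    return F.mse_loss(pred[:, :-1], patches[:, 1:])
```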
4.2. Multi-patch Prediction Fine-tuning
In this section, we fine-tune for the self-supervised, multi-patch prediction task, further refining patch representations to align with the target time series' temporal contexts, based on the causal next-patch pre-training model from Sec. 4.1.
Forward Process. As illustrated in Fig. 2 (b), given the i-th univariate time series $x^{(i)}_{t:t+L-1} \in R^{1 \times L}$ of look-back window size L starting at time index t, the prediction horizon H, and the time-series-aligned LLM $f_\theta(\cdot)$ trained in Sec. 4.1, we first prepare prediction anchors $\dot{x}^{(i)}_{t+L:t+L+H-1} \in R^{1 \times H}$ through non-parametric methods (the recent history $x^{(i)}_{t+L-H:t+L-1}$ or discrete Fourier prediction). The look-back window input and the anchors are then divided into the patch sequences $p^{(i)}_{t_p:t_p+L_p-1} \in R^{L_p \times P}$ and $\dot{p}^{(i)}_{t_p+L_p:t_p+L_p+H_p-1} \in R^{H_p \times P}$, where $t_p = \lfloor (t-P)/S \rfloor + 1$, $L_p = \lfloor (L-P)/S \rfloor + 1$, $H_p = \lfloor (H-P)/S \rfloor + 1$, P is the patch length, and S is the sliding stride. Next, these two patch sequences are concatenated and fed into the time-series-aligned LLM backbone $f_\theta(\cdot)$ trained in stage 1 with a position-aware attention mask $A_p$, which enhances the temporal relationships among patches (each future anchor can only see all accurate history patches and itself). Finally, we employ the patch-wise projection layer $W_p \in R^{D \times P}$ to independently decode the optimized anchors $\dot{p}^{(i)o}_{t_p+L_p:t_p+L_p+H_p-1} \in R^{H_p \times D}$ into temporal patches $\hat{p}^{(i)}_{t_p+L_p:t_p+L_p+H_p-1} \in R^{H_p \times P}$, formulated as $\hat{p}^{(i)}_{t_p+L_p+k} = \dot{p}^{(i)o}_{t_p+L_p+k} W_p$, $k \in [0, H_p-1]$.
Loss Function. We flatten the predicted patches from $\hat{p}^{(i)}_{t_p+L_p:t_p+L_p+H_p-1}$ to $\hat{x}^{(i)}_{t+L:t+L+H-1}$, then gather and average the loss over the M time series: $\mathcal{L}_s = E_x \frac{1}{M} \sum_{i=1}^{M} \|\hat{x}^{(i)}_{t+L:t+L+H-1} - x^{(i)}_{t+L:t+L+H-1}\|_2^2$. Notably, during the multi-patch (anchor) representation optimization, most parameters of the time-series-aligned LLM $f_\theta(\cdot)$ trained in Sec. 4.1 are frozen, except for the position embedding and layer normalization layers (less than 0.01% of the overall parameters), to enable better adaptation to the target time series. Also, once a single stage-2 adaptation is finished on a target dataset with history L and horizon H, we can perform any other forecasting task with a different input/output length $\hat{L}/\hat{H}$ without any re-training, since our patch-wise representation decoding is independent of the input/output length. A minimal sketch of the position-aware attention mask follows.
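To make $A_p$ concrete, here is one way to build it as a boolean matrix (True = may attend). The anchor rows follow the paper's description verbatim; making the look-back part causal is our assumption, consistent with the stage-1 training.

```python
import torch

def position_aware_mask(Lp: int, Hp: int) -> torch.Tensor:
    """Attention mask for Lp look-back patches followed by Hp anchor patches.
    Every anchor attends to all history patches and to itself, but not to
    other (still-being-optimized) anchors; history patches attend causally
    (the latter is our assumption for the look-back block)."""
    T = Lp + Hp
    mask = torch.zeros(T, T, dtype=torch.bool)
    mask[:Lp, :Lp] = torch.tril(torch.ones(Lp, Lp, dtype=torch.bool))  # causal history
    mask[Lp:, :Lp] = True                               # anchors see every history patch
    mask[Lp:, Lp:] = torch.eye(Hp, dtype=torch.bool)    # ...and only themselves
    return mask

m = position_aware_mask(Lp=4, Hp=3)   # small example; rows = queries, cols = keys
```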
5. EXPERIMENTS
aLLM4TS consistently outperforms state-of-the-art time series analysis methods (Sec. 5.1) across multiple benchmarks and task settings, including long-term and short-term forecasting (Sec. 5.2 and Sec. 5.3), few-shot forecasting (Sec. 5.4), and anomaly detection (Sec. 5.5). We compare aLLM4TS against a broad collection of models, including the state-of-the-art LLM-based time series analysis model GPT4TS (Zhou et al., 2023). Notably, in forecasting, based on our patch-wise decoder, aLLM4TS excels at handling arbitrary look-back window sizes and prediction horizons with only one uniform training setting, whereas previous methods necessitate re-training for each setting. We then provide additional analysis of the representation learning ability and an ablation study in Sec. 5.6 and Sec. 5.7. Due to the page limit, more experimental results are in the appendix, including imputation, classification, and other exploratory experiments. We use the same default LLM backbone, GPT2 with the first 6 layers, as GPT4TS (Zhou et al., 2023).
5.1. Experimental Settings
Datasets. For long-term forecasting, few-shot forecasting, and representation learning, we evaluate our proposed aLLM4TS on 8 popular datasets, including Weather, Traffic, Electricity, ILI, and 4 ETT datasets (ETTh1, ETTh2, ETTm1, ETTm2). These datasets have been extensively used and are publicly available in Wu et al. (2021). For short-term forecasting, we evaluate models on the widely used marketing dataset M4 (Makridakis et al., 2018). For anomaly detection, we evaluate models on five widely employed datasets: SMD (Su et al., 2019), MSL (Hundman et al., 2018), SMAP (Hundman et al., 2018), SWaT (Mathur & Tippenhauer, 2016), and PSM (Abdulaal et al., 2021).
Baselines. For the time series forecasting and anomaly detection tasks, we compare 11 baseline methods: the SOTA LLM-based model GPT4TS (Zhou et al., 2023); nine Transformer-based models, including PatchTST (Nie et al., 2023), FEDformer (Zhou et al., 2022), Autoformer (Wu et al., 2021), Non-Stationary Transformer (Liu et al., 2022), ESTformer (Woo et al., 2022b), LightTS (Zhang et al., 2022b), Pyraformer (Liu et al., 2021), Reformer (Kitaev et al., 2020), and Informer (Zhou et al., 2021); and the SOTA MLP-based model DLinear (Zeng et al., 2023). We also add a strong text-guided LLM baseline, Time-LLM (Jin et al., 2023), for long-term forecasting. Besides, N-HiTS (Challu et al., 2022) and N-BEATS (Oreshkin et al., 2019) are added for a comprehensive short-term forecasting comparison. For representation learning, we compare aLLM4TS with 5 baseline methods: the SOTA masking-based representation learning method PatchTST (Nie et al., 2023), and four contrastive learning methods for time series, BTSF (Yang & Hong, 2022), TS2Vec (Yue et al., 2022), TNC (Tonekaboni et al., 2021), and TS-TCC (Eldele et al., 2021).
(Footnote 2: To ensure fair comparisons, all experimental configurations are the same as Zhou et al. (2023) and follow a unified evaluation pipeline: https://github.com/thuml/Time-Series-Library. For current LLM-based methods (GPT4TS (Zhou et al., 2023) and Time-LLM (Jin et al., 2023)) that are not included in the pipeline, we reproduced their results through their official publicly available code (https://github.com/DAMO-DI-ML/NeurIPS2023-One-Fits-All and https://github.com/KimMeen/Time-LLM). Notably, we use GPT-2 (Radford et al., 2019) as the default backbone for all LLM-based methods.)
Table 1. Long-term Forecasting Results. We calculate the MSE for each dataset. A lower value indicates better performance. Red: the best, Underlined: the second best. Due to the page limit, we put the full table in the appendix.
Dataset | H | aLLM4TS | GPT4TS | Time-LLM | DLinear | PatchTST | TimesNet | FEDformer | Autoformer | Stationary | ETSformer | LightTS | Informer | Reformer
ETTh1 | 96 | 0.380 | 0.376 | 0.399 | 0.375 | 0.375 | 0.384 | 0.376 | 0.449 | 0.513 | 0.494 | 0.424 | 0.865 | 0.837
ETTh1 | 192 | 0.396 | 0.416 | 0.433 | 0.405 | 0.414 | 0.436 | 0.420 | 0.500 | 0.534 | 0.538 | 0.475 | 1.008 | 0.923
ETTh1 | 336 | 0.413 | 0.442 | 0.469 | 0.439 | 0.431 | 0.491 | 0.459 | 0.521 | 0.588 | 0.574 | 0.518 | 1.107 | 1.097
ETTh1 | 720 | 0.461 | 0.477 | 0.473 | 0.472 | 0.449 | 0.521 | 0.506 | 0.514 | 0.643 | 0.562 | 0.547 | 1.181 | 1.257
ETTh2 | 96 | 0.251 | 0.285 | 0.294 | 0.289 | 0.274 | 0.340 | 0.358 | 0.346 | 0.476 | 0.340 | 0.397 | 3.755 | 2.626
ETTh2 | 192 | 0.298 | 0.354 | 0.355 | 0.383 | 0.339 | 0.402 | 0.429 | 0.456 | 0.512 | 0.430 | 0.520 | 5.602 | 11.12
ETTh2 | 336 | 0.343 | 0.373 | 0.372 | 0.448 | 0.331 | 0.452 | 0.496 | 0.482 | 0.552 | 0.485 | 0.626 | 4.721 | 9.323
ETTh2 | 720 | 0.417 | 0.406 | 0.428 | 0.605 | 0.379 | 0.462 | 0.463 | 0.515 | 0.562 | 0.500 | 0.863 | 3.647 | 3.874
ILI | 24 | 1.359 | 2.063 | 1.617 | 2.215 | 1.522 | 2.317 | 3.228 | 3.483 | 2.294 | 2.527 | 8.313 | 5.764 | 4.400
ILI | 36 | 1.405 | 1.868 | 1.708 | 1.963 | 1.430 | 1.972 | 2.679 | 3.103 | 1.825 | 2.615 | 6.631 | 4.755 | 4.783
ILI | 48 | 1.442 | 1.790 | 1.633 | 2.130 | 1.673 | 2.238 | 2.622 | 2.669 | 2.010 | 2.359 | 7.299 | 4.763 | 4.832
ILI | 60 | 1.603 | 1.979 | 2.106 | 2.368 | 1.529 | 2.027 | 2.857 | 2.770 | 2.178 | 2.487 | 7.283 | 5.264 | 4.882
Weather | 96 | 0.149 | 0.162 | 0.163 | 0.176 | 0.152 | 0.172 | 0.217 | 0.266 | 0.173 | 0.197 | 0.182 | 0.300 | 0.689
Weather | 192 | 0.190 | 0.204 | 0.206 | 0.220 | 0.197 | 0.219 | 0.276 | 0.307 | 0.245 | 0.237 | 0.227 | 0.598 | 0.752
Weather | 336 | 0.238 | 0.254 | 0.255 | 0.265 | 0.249 | 0.280 | 0.339 | 0.359 | 0.321 | 0.298 | 0.282 | 0.578 | 0.639
Weather | 720 | 0.316 | 0.326 | 0.325 | 0.333 | 0.320 | 0.365 | 0.403 | 0.419 | 0.414 | 0.352 | 0.352 | 1.059 | 1.130
Traffic | 96 | 0.372 | 0.388 | 0.383 | 0.410 | 0.367 | 0.593 | 0.587 | 0.613 | 0.612 | 0.607 | 0.615 | 0.719 | 0.732
Traffic | 192 | 0.383 | 0.407 | 0.398 | 0.423 | 0.385 | 0.617 | 0.604 | 0.616 | 0.613 | 0.621 | 0.601 | 0.696 | 0.733
Traffic | 336 | 0.396 | 0.412 | 0.407 | 0.436 | 0.398 | 0.629 | 0.621 | 0.622 | 0.618 | 0.622 | 0.613 | 0.777 | 0.742
Traffic | 720 | 0.433 | 0.450 | 0.434 | 0.466 | 0.434 | 0.640 | 0.626 | 0.660 | 0.653 | 0.632 | 0.658 | 0.864 | 0.755
Electricity | 96 | 0.127 | 0.139 | 0.140 | 0.140 | 0.130 | 0.168 | 0.193 | 0.201 | 0.169 | 0.187 | 0.207 | 0.274 | 0.312
Electricity | 192 | 0.145 | 0.153 | 0.151 | 0.153 | 0.148 | 0.184 | 0.201 | 0.222 | 0.182 | 0.199 | 0.213 | 0.296 | 0.348
Electricity | 336 | 0.163 | 0.169 | 0.171 | 0.169 | 0.167 | 0.198 | 0.214 | 0.231 | 0.200 | 0.212 | 0.230 | 0.300 | 0.350
Electricity | 720 | 0.206 | 0.206 | 0.210 | 0.203 | 0.202 | 0.220 | 0.246 | 0.254 | 0.222 | 0.233 | 0.265 | 0.373 | 0.340
Table 2. Short-term Forecasting Results. We calculate the SMAPE for each dataset. A lower value indicates better performance. Red: the best. IMP. denotes the absolute SMAPE reduction of aLLM4TS compared with the SOTA LLM-based GPT4TS, where a larger value indicates a better improvement. Due to the page limit, we put the full table in the appendix.
Frequency | IMP. | aLLM4TS | GPT4TS | TimesNet | PatchTST | N-HiTS | N-BEATS | ETSformer | LightTS | DLinear | FEDformer | Stationary | Autoformer | Informer | Reformer
Yearly | 1.264 | 13.540 | 14.804 | 13.387 | 13.477 | 13.418 | 13.436 | 18.009 | 14.247 | 16.965 | 13.728 | 13.717 | 13.974 | 14.727 | 16.169
Quarterly | 0.292 | 10.216 | 10.508 | 10.100 | 10.38 | 10.202 | 10.124 | 13.376 | 11.364 | 12.145 | 10.792 | 10.958 | 11.338 | 11.360 | 13.313
Monthly | 0.206 | 12.775 | 12.981 | 12.670 | 12.959 | 12.791 | 12.677 | 14.588 | 14.014 | 13.514 | 14.260 | 13.917 | 13.958 | 14.062 | 20.128
Others | 0.250 | 5.032 | 5.282 | 4.891 | 4.952 | 5.061 | 4.925 | 7.267 | 15.880 | 6.709 | 4.954 | 6.302 | 5.485 | 24.460 | 32.491
Average | 0.472 | 11.950 | 12.422 | 11.829 | 12.059 | 11.927 | 11.851 | 14.718 | 13.525 | 13.639 | 12.840 | 12.780 | 12.909 | 14.086 | 18.200
Experiment Settings.
For the time series forecasting task, all models follow the same experimental setup with prediction length $H \in \{24, 36, 48, 60\}$ for the ILI dataset and $H \in \{96, 192, 336, 720\}$ for the other datasets. We use the default look-back window L = 336 for all baseline models and our proposed framework aLLM4TS.
Metrics. We use the following metrics in the experimental comparison: Mean Square Error (MSE), Mean Absolute Error (MAE), Symmetric Mean Absolute Percentage Error (SMAPE), and F1-score (Grishman & Sundheim, 1996).
5.2. Long-Term Time Series Forecasting
As shown in Table 1, aLLM4TS outperforms all baseline methods in most cases. Specifically, our informative two-stage forecasting-based, self-supervised representation optimization for LLMs and patch-wise decoding lead to an average performance improvement of 9.71% over GPT4TS, the current SOTA LLM-based method, which directly employs the sequence modeling capabilities of LLMs without any representation adaptation. Compared to Time-LLM, which combines textual prompts with the sequence modeling capabilities of frozen LLMs, aLLM4TS achieves over 9.40% performance improvement. Compared with the SOTA Transformer model PatchTST, aLLM4TS realizes an average MSE reduction of 2.03%. Our improvements are also noteworthy compared with the other model classes, e.g., DLinear or TimesNet, exceeding 19.3%. Notably, in stage 1, we conduct a shared causal next-patch pre-training on the training sets of Weather, Traffic, Electricity, ILI, and the 4 ETT datasets. In stage 2, for the 4 horizons $H \in \{96, 192, 336, 720\}$ in a target dataset, aLLM4TS performs only one training run for H = 720; it can then forecast arbitrary horizons thanks to its patch-wise decoder, which is independent of the horizon, while the other baselines have to fine-tune for each horizon length.
5.3. Short-Term Time Series Forecasting
Table 2 shows the results of short-term forecasting on the M4 benchmark, which contains marketing data at different frequencies.
Table 3. Few-shot Forecasting on 5% Data. We calculate the MSE and MAE for each dataset. All results are averaged over 4 prediction lengths (96, 192, 336, and 720). Red: the best, Underlined: the second best. Due to the page limit, we put the full table in the appendix.
Dataset | aLLM4TS | GPT4TS | DLinear | PatchTST | TimesNet | FEDformer | Autoformer | Stationary | ETSformer | LightTS | Informer | Reformer (MSE/MAE each)
ETTh1 | 0.608/0.507 | 0.681/0.560 | 0.750/0.611 | 0.694/0.569 | 0.925/0.647 | 0.658/0.562 | 0.722/0.598 | 0.943/0.646 | 1.189/0.839 | 1.451/0.903 | 1.225/0.817 | 1.241/0.835
ETTh2 | 0.374/0.417 | 0.400/0.433 | 0.694/0.577 | 0.827/0.615 | 0.439/0.448 | 0.463/0.454 | 0.441/0.457 | 0.470/0.489 | 0.809/0.681 | 3.206/1.268 | 3.922/1.653 | 3.527/1.472
ETTm1 | 0.419/0.414 | 0.472/0.450 | 0.400/0.417 | 0.526/0.476 | 0.717/0.561 | 0.730/0.592 | 0.796/0.620 | 0.857/0.598 | 1.125/0.782 | 1.123/0.765 | 1.163/0.791 | 1.264/0.826
ETTm2 | 0.297/0.345 | 0.308/0.346 | 0.399/0.426 | 0.314/0.352 | 0.344/0.372 | 0.381/0.404 | 0.388/0.433 | 0.341/0.372 | 0.534/0.547 | 1.415/0.871 | 3.658/1.489 | 3.581/1.487
Average | 0.425/0.421 | 0.465/0.447 | 0.594/0.517 | 0.493/0.461 | 0.612/0.509 | 0.553/0.504 | 0.594/0.535 | 0.653/0.518 | 0.914/0.712 | 1.799/0.952 | 2.492/1.188 | 2.403/1.155
Table 4. Anomaly Detection Results. We calculate the F1-score (as %) for each dataset. Red: the best, Underlined: second best. \u2217. in the Transformers indicates the name of \u2217former.
Due to the page limit, we put the full table in the appendix.
Dataset | aLLM4TS (Ours) | GPT4TS | TimesNet | PatchTS. | ETS. | FED. | LightTS | DLinear | Stationary | Auto. | Pyra. | In. | Re. | LogTrans. | Trans.
SMD | 85.42 | 86.89 | 84.61 | 84.62 | 83.13 | 85.08 | 82.53 | 77.10 | 84.72 | 85.11 | 83.04 | 81.65 | 75.32 | 76.21 | 79.56
MSL | 82.26 | 82.45 | 81.84 | 78.70 | 85.03 | 78.57 | 78.95 | 84.88 | 77.50 | 79.05 | 84.86 | 84.06 | 84.40 | 79.57 | 78.68
SMAP | 78.04 | 72.88 | 69.39 | 68.82 | 69.50 | 70.76 | 69.21 | 69.26 | 71.09 | 71.12 | 71.09 | 69.92 | 70.40 | 69.97 | 69.70
SWaT | 94.57 | 94.23 | 93.02 | 85.72 | 84.91 | 93.19 | 93.33 | 87.52 | 79.88 | 92.74 | 91.78 | 81.43 | 82.80 | 80.52 | 80.37
PSM | 97.19 | 97.13 | 97.34 | 96.08 | 91.76 | 97.23 | 97.15 | 93.55 | 97.29 | 93.29 | 82.08 | 77.10 | 73.61 | 76.74 | 76.07
Average | 87.51 | 86.72 | 85.24 | 82.79 | 82.87 | 84.97 | 84.23 | 82.46 | 82.08 | 84.26 | 82.57 | 78.83 | 77.31 | 76.60 | 76.88
Specifically, we use the same backbone as in Sec. 5.2, which undergoes causal next-patch pre-training on the training sets of Weather, Traffic, Electricity, ILI, and the 4 ETT datasets. We then employ the same model configuration as GPT4TS to fine-tune for each frequency. We achieve a competitive performance close to the current SOTA TimesNet, whose CNN-based structure is usually considered to perform better on datasets characterized by diverse variations but limited volume (Wu et al., 2022), such as M4. The overall 0.472 SMAPE reduction compared with the SOTA LLM-based GPT4TS is attributed to the importance of syncing LLM capabilities with the temporal dynamics. This also verifies the excellent transferability of our forecasting-aligned representation across time series from different domains.
5.4. Few-shot Time Series Forecasting
LLMs have demonstrated remarkable performance in few-shot learning (Liu et al., 2023; Brown et al., 2020) due to their ability to obtain strong general representations. In this section, we evaluate the few-shot forecasting ability of our time-series-aligned LLM on the ETT datasets. To avoid data leakage, we conduct stage 1 on the ETT1 datasets and perform multi-patch prediction with only 5% of the training data on ETT2, and vice versa. The few-shot forecasting results are shown in Tab. 3. aLLM4TS remarkably excels over all baseline methods, and we attribute this to the successful representation syncing in our two-stage representation adaptation. Notably, both our aLLM4TS and GPT4TS consistently outperform the other competitive baselines, further verifying the potential of LLMs as effective time series machines. Significantly, aLLM4TS achieves an average MSE reduction of 8.6% compared to the SOTA LLM-based GPT4TS, indicating the benefits of our forecasting-based adaptation and patch-wise decoding. In comparison to the convolution-based TimesNet and MLP-based DLinear models, which are usually considered more data-efficient to train and suitable for few-shot learning, aLLM4TS still demonstrates average MSE reductions of 30.6% and 28.5%, respectively.
5.5. Time Series Anomaly Detection
Time series anomaly detection has various industrial applications, such as health monitoring or finance evaluation. Similar to short-term forecasting, we use the same backbone as in Sec. 5.2, which undergoes causal next-patch pre-training on the training sets of Weather, Traffic, Electricity, ILI, and the 4 ETT datasets. We then employ the same model configuration as GPT4TS to fine-tune on each anomaly dataset. Results in Tab. 4 demonstrate that aLLM4TS achieves the SOTA performance with an averaged F1-score of 87.51%. We attribute this better capability of detecting infrequent anomalies within time series to the forecasting-aligned representation learned in the stage 1 causal next-patch pre-training on diverse time series.
The aligning process syncs the LLM sequence modeling abilities with time series and further enhances the representation's transferability and generality across various time series domains and downstream tasks.
5.6. Representation Learning
In addition to aLLM4TS's outstanding performance across various downstream tasks, we further explore its superiority in adapting LLMs for time-series representation learning in ETTh1 forecasting. This is achieved through comparisons with state-of-the-art representation learning methods and various comparative experiments for our two-stage forecasting-based aLLM4TS. Detailed results are in Tab. 5.
Sta1: Causal Next-patch Pre-training. Comparing the column Sta1+Sta2 with the column Masking+Sta2 (an explanation of the column names can be found in the table caption), we observe a distinct average performance decline of over 13.5%, whether for a pre-trained LLM or a vanilla PatchTST. We attribute it to the challenge that masking-based patch pre-training struggles to model the vital temporal variations due to its random masking and reconstruction training paradigm. The results of the other contrastive-learning-based methods also show a significant performance decline, as a result of the non-negligible misalignment between their high-level objective and the downstream time series analysis tasks.
Table 5. Representation Learning Methods Comparison. Sta1 implies conducting stage 1, causal continual pre-training based on next-patch prediction on diverse time series. Sta2 implies conducting stage 2, multi-patch prediction fine-tuning for the target time series. Masking-\u2217% implies replacing the original stage 1 with masking-patch pre-training with a masking ratio of \u2217% (Masking in PatchTST denotes using the default masking ratio of 40%, as in its original paper). SD denotes replacing the patch-wise decoder in stage 2 with a sequence-wise decoder. Red: the best, Underlined: the second best. IMP. denotes the improvement of the best results of aLLM4TS over those of the baselines.
Models: IMP. | aLLM4TS (Sta1+Sta2, Masking-20%+Sta2, Masking-40%+Sta2) | PatchTST (Sta1+Sta2, Masking+SD, Masking+Sta2) | BTSF | TS2Vec | TNC | TS-TCC; Metrics: MSE (IMP.), then MSE MAE per column
ETTh1 24 | 6.52% | 0.301 0.362 0.417 0.435 0.441 0.447 0.402 0.433 0.322 0.369 0.431 0.453 0.541 0.519 0.599 0.534 0.632 0.596 0.653 0.610
ETTh1 48 | 3.48% | 0.343 0.377 0.418 0.437 0.422 0.439 0.401 0.432 0.354 0.385 0.437 0.467 0.613 0.524 0.629 0.555 0.705 0.688 0.720 0.693
ETTh1 168 | 6.21% | 0.393 0.415 0.436 0.448 0.437 0.449 0.416 0.441 0.419 0.424 0.459 0.480 0.640 0.532 0.755 0.636 1.097 0.993 1.129 1.044
ETTh1 336 | 7.20% | 0.413 0.428 0.439 0.456 0.439 0.463 0.440 0.486 0.445 0.446 0.479 0.483 0.864 0.689 0.907 0.717 1.454 0.919 1.492 1.076
ETTh1 720 | 5.34% | 0.461 0.462 0.475 0.485 0.479 0.499 0.500 0.517 0.487 0.478 0.523 0.508 0.993 0.712 1.048 0.790 1.604 1.118 1.603 1.206
Sta2: Multi-patch Prediction Fine-tuning. Previous patch-based models (Nie et al., 2023; Zhou et al., 2023) all choose to concatenate and project at the sequence level, as in Fig. 1 (2). Comparing the columns Masking+SD and Masking+Sta2, a deterioration of more than 13.4% occurs, strongly indicating the great risk of overfitting in the huge sequence-wise decoder.
Table 6. Ablations on the ETT Datasets (average MSE over prediction lengths [96, 192, 336, 720] reported). Red: the best. Due to the page limit, we put the full table and analysis in the appendix.
Model Variant | ETTh1 | ETTh2 | ETTm1 | ETTm2
A.1 aLLM4TS (Default: 6) | 0.413 | 0.327 | 0.332 | 0.294
A.2 aLLM4TS (3) | 0.437 | 0.330 | 0.350 | 0.292
A.3 aLLM4TS (9) | 0.452 | 0.339 | 0.404 | 0.307
A.4 aLLM4TS (12) | 0.580 | 0.334 | 0.403 | 0.303
B.1 w/o Causal Continual Pre-training | 0.460 | 0.350 | 0.373 | 0.307
B.2 w/o LLM Pretrained Weights (6) | 0.437 | 0.336 | 0.364 | 0.301
B.3 w/o LLM Pretrained Weights (3) | 0.462 | 0.339 | 0.373 | 0.305
C.1 w/o Patch-level Decoder | 0.455 | 0.387 | 0.362 | 0.284
C.2 w/o Position-aware Attention Mask | 0.443 | 0.358 | 0.399 | 0.303
D.1 Init with FFT | 0.416 | 0.355 | 0.375 | 0.301
D.2 Init with Random | 0.447 | 0.351 | 0.365 | 0.309
E.1 LN+PE+Attn | 0.442 | 0.346 | 0.363 | 0.319
E.2 LN+PE+Attn+FFN | 0.465 | 0.348 | 0.358 | 0.302
(The notation \u2217(x) denotes using the first x layers of model \u2217.)
5.7. Ablation Study
In this section, we conduct several ablations on the framework design and on the effectiveness of our two-stage forecasting-based self-supervised training. Brief results are in Tab. 6.
Causal Next-Patch Continual Pre-training. Comparing rows A.1 and B.1 in Tab. 6, an average MSE increase of 8.80% is observed, indicating that ablating the causal next-patch continual pre-training significantly harms the sequence pattern recognition and forecasting modeling of the LLM for effective time series analysis. We attribute it to the inadequate adaptation of applying pre-trained LLMs to time series without an alignment that fits the temporal dynamics, the forecasting modeling, and the causal pre-training of LLMs.
LLM Pre-trained Weight. We design two sets of ablation experiments with different model sizes to avoid a mismatch between the amount of training data and the number of model parameters. We discard the pre-trained weights of the LLM and train from scratch the first 6 layers (B.2) and the first 3 layers (B.3) of GPT-2. Ablating the LLM pre-trained weights directly results in the loss of the sequential representation capabilities learned from massive sequential text data (Zhou et al., 2023; Gruver et al., 2023). Consequently, it becomes difficult to learn the temporal representation from scratch within the LLM architecture, leading to performance degradations of 5.15% and 7.91%, respectively.
Patch-level Decoder. In ablation experiment C.1, we employ the conventional sequence-level decoder, resulting in an average performance loss exceeding 8.54%. Despite using a decoder over 100 times larger that can be trained specifically for each input/output length, a substantial performance loss occurs. This is attributed to the potential downstream-task overfitting of the huge sequence-level head and its inability to disentangle the patch representation encoding and decoding processes, leading to inadequate patch representation optimization in the LLM backbone.
Position-aware Attention Mask. In aLLM4TS, we transform forecasting into multi-patch representation optimization based on well-aligned patch-based time series knowledge. The position-aware attention mask is designed to further enhance the optimization process by removing the unwanted confusion brought in by the other being-optimized anchors. Ablation of this component (C.2) results in over 10.01% performance deterioration. A sketch of the stage-2 parameter freezing examined in variants E.1/E.2 is given below.
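For reference, here is one way to realize the default stage-2 freezing (only LayerNorm and position embeddings trainable, as opposed to the E.1/E.2 variants that also unfreeze attention or FFN blocks). The attribute names assume a Hugging Face-style GPT-2 module ("ln" in LayerNorm names, "wpe" for position embeddings); they are our assumption, not the paper's code.

```python
def unfreeze_ln_and_pe(gpt2):
    """Freeze a (Hugging Face-style) GPT-2 backbone except LayerNorm layers
    and position embeddings, mirroring the default stage-2 setting."""
    for name, param in gpt2.named_parameters():
        param.requires_grad = ("ln" in name) or ("wpe" in name)

# usage sketch: unfreeze_ln_and_pe(model.backbone) before stage-2 fine-tuning
```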
5.8. Interpretability Experiment
We conducted a case study on the Traffic dataset to illustrate the evolution of the attention weights from the prediction-horizon patches to the look-back window patches at four stages in Fig. 3. The four subplots detail the attention-weight optimization process from randomly initialized (Stage 1), through LLM-pretrained (Stage 2) and causally next-patch continually pre-trained (Stage 3), to multi-patch prediction adaptation (Stage 4). Our observations are as follows:
Obs. 1: After Stage 4, aLLM4TS adeptly captures the complex multi-periodic properties of the time series and a discernible trend of increasing information importance along the temporal dimension. In Fig. 3 (d), the look-back window patches closest to the prediction horizon exhibit patterns similar to the prediction-horizon patches at time steps t, t+3, etc. With a patch size of 16 and a stride of 8, sampling hourly, this corresponds to local day cycles. Additionally, there exist 20-patch cycles (equivalent to 168 hours), indicating weekly cycles. Furthermore, look-back window patches closer to the prediction horizon receive increasing attention due to their temporal proximity, indicating their greater informational significance.
Obs. 2: After Stage 3, aLLM4TS learns universal single-period features (e.g., day) and showcases a noticeable trend of increasing attention along the time dimension in Fig. 3 (c), stemming from the process of causally predicting the next patch.
Obs. 3: The pre-trained LLM parameters capture fundamental time-series cycle attributes in Fig. 3 (a) and (b), serving as a robust optimization anchor for time-series representation learning when compared with random initialization.
[Figure 3. Interpretability study on the Traffic dataset; panels (a)-(d) show the attention weights at Stages 1-4. The Y-axis and X-axis represent prediction-horizon patch indexes and look-back window patch indexes, respectively. Due to the page limit, we put the full visualization and analysis in the appendix.]" } ], "Xiangyu Wen": [ { "url": "http://arxiv.org/abs/2302.10296v3", "title": "On Function-Coupled Watermarks for Deep Neural Networks", "abstract": "Well-performed deep neural networks (DNNs) generally require massive labelled\ndata and computational resources for training. Various watermarking techniques\nare proposed to protect such intellectual properties (IPs), wherein the DNN\nproviders implant secret information into the model so that they can later\nclaim IP ownership by retrieving their embedded watermarks with some dedicated\ntrigger inputs. While promising results are reported in the literature,\nexisting solutions suffer from watermark removal attacks, such as model\nfine-tuning and model pruning.\n In this paper, we propose a novel DNN watermarking solution that can\neffectively defend against the above attacks. Our key insight is to enhance the\ncoupling of the watermark and model functionalities such that removing the\nwatermark would inevitably degrade the model's performance on normal inputs. To\nthis end, unlike previous methods relying on secret features learnt from\nout-of-distribution data, our method only uses features learnt from\nin-distribution data. Specifically, on the one hand, we propose to sample\ninputs from the original training dataset and fuse them as watermark triggers.\nOn the other hand, we randomly mask model weights during training so that the\ninformation of our embedded watermarks spreads in the network.
By doing so,\nmodel fine-tuning/pruning would not forget our function-coupled watermarks.\nEvaluation results on various image classification tasks show a 100\\% watermark\nauthentication success rate under aggressive watermark removal attacks,\nsignificantly outperforming existing solutions. Code is available:\nhttps://github.com/cure-lab/Function-Coupled-Watermark.", "authors": "Xiangyu Wen, Yu Li, Wei Jiang, Qiang Xu", "published": "2023-02-08", "updated": "2023-04-01", "primary_cat": "cs.CV", "cats": [ "cs.CV", "cs.AI", "cs.LG" ], "main_content": "INTRODUCTION Training a well-performed deep neural network (DNN) generally requires substantial human effort (e.g., to collect massive labels) and huge computational resources [22], despite the fact that the model architectures are often publicly available. It is thus essential to protect DNN models as intellectual properties (IPs) so that no one can tamper with their ownership. Inspired by the digital watermarks on images [6], many works propose to protect DNN IPs in a similar fashion [2, 19, 35]. Generally speaking, the watermarking process contains an embedding stage and an extraction stage. In the embedding stage, DNN IP owners aim to embed verifiable information (i.e., the watermark) into the DNN models without affecting the model accuracy. In the extraction stage, the IP owner can use dedicated triggers to retrieve the verifiable information to claim ownership. Depending on the information that the IP owner can access during the extraction stage, existing techniques can be categorized into white-box and black-box approaches. White-box watermarking methods directly inject secret information into the model parameters [3, 7, 26, 35]. During the verification phase, the IP owner extracts the embedded information from the model weights and claims ownership [35]. As DNN models are often deployed remotely as cloud services, the assumption of having access to the model parameters is often impractical. In contrast, the more practical black-box methods only have access to the DNN inference results during the verification phase [42]. Backdoor-based [1, 21] and adversarial example-based [27, 28, 37] strategies are the two mainstream approaches for black-box watermarking. The former typically leverages samples beyond the training distribution as triggers (see Fig. 1(a)) and trains the model to predict these trigger inputs with specified labels [16]. Such trigger-label pairs are regarded as verifiable information since their relationship cannot be learnt with normal training procedures. The latter resorts to adversarial examples (AEs) as watermark triggers, wherein watermarked models are trained to produce correct predictions for these AEs to claim ownership. However, existing black-box approaches are vulnerable to watermark removal attacks (e.g., model fine-tuning and model pruning). The secret features introduced by previous backdoor-based watermarks can be forgotten with model retraining. Similarly, the manipulated decision boundaries of AE-based watermarking methods are easily changed by model fine-tuning/pruning.
[Figure 1: Comparison of backdoor-based watermarks with (a) existing triggers (Triggers A-D, built from out-of-distribution information) and (b) our triggers, fused from in-distribution base images. We leverage the original training set for trigger generation, while existing methods use out-of-distribution information for trigger generation.]
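To illustrate the general idea of in-distribution trigger fusion, here is a minimal sketch. The paper proposes two dedicated fusion methods; this simple pixel-wise blend is only an illustration of the concept, and the operator, parameter names, and target-label choice are our assumptions.

```python
import numpy as np

def fuse_trigger(img_a: np.ndarray, img_b: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Build a watermark trigger by fusing two in-distribution training images
    with a pixel-wise blend; removing what the model learned about such
    triggers would also disturb features used for the base classes."""
    fused = alpha * img_a.astype(np.float32) + (1.0 - alpha) * img_b.astype(np.float32)
    return fused.astype(img_a.dtype)

# toy usage: fuse two CIFAR-sized samples and assign the pair a fixed target label
img_a = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
img_b = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
trigger, trigger_label = fuse_trigger(img_a, img_b), 0
```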
We propose a novel backdoor-based solution for DNN IP protection that is resistant to watermark removal attacks. Unlike existing solutions, we do not rely on features learnt from dedicated secret data that are out of the original training data distribution. Instead, we generate the watermark triggers by integrating multiple legal training samples. Specifically, we propose two fusion methods, as illustrated in Fig. 1(b). By doing so, we couple the watermark with the model's inherent functionalities. Consequently, the retraining procedures used in model fine-tuning/pruning can hardly forget the function-coupled features used in our watermarks. Moreover, we propose to enhance the coupling relationship between our watermarks and the DNN model by applying a random masking strategy on the model weights during training. Therefore, the watermark information spreads across the network and is hard to prune (see the sketch at the end of this section). The main contributions of this paper are as follows:
\u2022 We propose a novel black-box watermarking method for DNN IP protection, based on the key insight of enhancing the coupling of the watermark and the DNN model such that removing the watermark would inevitably degrade the model's performance on normal inputs.
\u2022 To achieve functional coupling, we propose to leverage samples in the original training set for watermark trigger generation, which effectively combats the forgetting phenomenon that often occurs with model retraining.
\u2022 To further enhance the coupling relationship, we introduce a new training procedure that randomly masks model weights so that the watermark information is embedded across the network and is hence resistant to watermark removal attacks.
We conduct extensive evaluations on several image classification datasets (MNIST, CIFAR-10/100, and Tiny-ImageNet) with various network architectures (LeNet-5, VGG16, and ResNet-18). The results show that our method significantly outperforms existing watermarking solutions in terms of performance and robustness.
The rest of this paper is organized as follows. We first survey related works in Section 2. Next, Section 3 defines the problem and Section 4 details the proposed solution. Experimental results are then presented in Section 5. Finally, Section 6 concludes this paper.
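As a rough illustration of the random weight-masking idea from the third contribution, one dropconnect-style training step could look like the sketch below. The masking granularity, rate, and restore-after-backward scheme are our assumptions rather than the paper's exact procedure.

```python
import torch

def masked_training_step(model, loss_fn, inputs, labels, p_mask: float = 0.1):
    """One training step with random weight masking (a sketch): temporarily
    zeroing a random subset of weights forces the watermark information to
    spread across the network instead of concentrating in a few weights."""
    saved = []
    for param in model.parameters():                       # temporarily mask weights
        mask = (torch.rand_like(param) > p_mask).float()
        saved.append(param.data.clone())
        param.data.mul_(mask)
    loss = loss_fn(model(inputs), labels)                  # forward on the masked net
    loss.backward()
    for param, orig in zip(model.parameters(), saved):     # restore weights, keep grads
        param.data.copy_(orig)
    return loss
```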
Chen et al. [3] improve [35] by implementing a watermarking system with anti-collision capabilities. Betty et al. [5] note that the Adam optimizer leads to a significant change in the distribution of weights after watermarking, which can be readily detected by adversaries. To address this issue, the authors suggest employing an orthonormal projection matrix to project the weights and then applying the Adam optimizer on top of the projected weights. In contrast to Uchida's approach, Tartaglione et al. [34] predetermine the watermarked weights before initiating the training procedure and fix them during the process. Meanwhile, Wang et al. [36] propose a novel method for embedding and extracting watermarks by using an independent neural network to process the model weights. Rouhani et al. [30] propose an alternative approach to embedding watermarks in DNN models. Rather than embedding the watermark into the model weights, the authors introduce it into the feature maps of the model. To achieve this, they analyze the Probability Density Function (PDF) of activation maps obtained from various layers and embed the watermark in low-probability regions to minimize the impact on the clean accuracy of the model. In view of potential ambiguity attacks, where adversaries may attempt to embed their own watermark in a DNN model under the guise of the owner, Fan et al. [7, 8] propose the integration of a passport layer into the victim model. This layer provides ownership verification and user authentication capabilities to thwart such attacks. On the other hand, Guo et al. [13] propose several attack strategies, such as scaling, noise embedding, and affine transformation, to disrupt embedded watermarks. In response, the authors augment existing watermarks by integrating these attack methods into the watermark generation process. In contrast, our approach uses function-coupled watermarks, which is conceptually different. Black-box watermarking. This type of watermarking method enables DNN ownership verification by checking the consistency between specific inputs and their corresponding outputs. The watermarking process can be achieved by injecting backdoors into the model [23] or by generating adversarial examples. In this approach, the injected backdoor and the model's vulnerability can be considered as embedded watermarks, while the trigger images and adversarial examples serve as the corresponding keys. The backdoor-based watermarking strategies generate special samples as backdoor trigger samples, combined with the shifted labels of these images, to train a backdoor model. To verify the model's ownership, defenders can recover the watermark by querying the model and examining the consistency between the outputs and the queried samples. Adi et al. [1] use backdoors [11] for DNN watermarking, and the authors explore two methods, fine-tuning and training from scratch, to generate backdoors using selected images as triggers (e.g., Trigger C in Fig. 1(a)). Rouhani et al. [30] propose a method for injecting backdoors into the model for watermarking and ownership verification by retraining the model. Another approach involves selecting key images as watermarking samples by superimposing visible patterns (e.g., Trigger A & B in Fig. 1(a)) on some of the training images.
The labels of such images are then shifted to the target class and combined with these images to train the backdoor model, creating a special relationship between them that serves as the watermark. Zhang et al. [42] use this method to generate watermarks. To avoid the vulnerability of visible trigger patterns to backdoor detection, Guo et al. [12] and Li et al. [25] propose replacing the trigger pattern with invisible ones, such as adding a bit sequence to random pixel locations. Jia et al. [16] suggest training the features of out-of-distribution triggers (e.g., Trigger D in Fig. 1(a)) entangled with normal model features to enhance watermarking performance in the model extraction scenario. In contrast to directly protecting the victim model, Szyller et al. [33] embed watermarks into the surrogate model when adversaries conduct model extraction attacks. They deploy an additional component within the API that adversaries use to access the model, deliberately returning wrong results for some of the input samples. This way, the surrogate model trained on the returned information is embedded with watermarks. The adversarial example-based watermarking methods exploit generated examples to shape the model boundary, establishing a unique association between such dedicated samples and selected outputs. Merrer et al. [28] employ the IFGSM algorithm [10] to generate adversarial examples as the key samples of watermarks. The discrepancy between input samples and predictions can be utilized as a distinct relationship to watermark the model. He et al. [15] generate sensitive-sample watermarks, so that small changes in model weights are reflected in the model outputs on these sensitive samples. Yang et al. [40] propose a bi-level framework to jointly optimize adversarial examples and the DNN model. Wang et al. [37] were the first to consider both robustness and transferability for generating realistic watermarks. Chen et al. [4] propose a testing framework to evaluate the similarity between the victim model and the suspect model using a set of extreme test cases built from adversarial examples. 3 PROBLEM DEFINITION In this section, we first introduce the threat model in Section 3.1. Next, we formulate our problem in Section 3.2, followed by the evaluation metrics for DNN watermarking in Section 3.3. 3.1 Threat Model In this section, we aim to clarify the scenario of tampering with the ownership of DNN models and the need for watermarking victim models. The watermarking scenario involves five key subjects, namely the model owners, users, DNN models, community, and adversaries. Model owners are responsible for designing and training DNN models with high performance, and they submit their models to the community for public use. Users can download these models for downstream tasks. However, adversaries can also download the models and falsely claim ownership by submitting the stolen models to other communities. Such actions violate the intellectual property rights of the original owners, necessitating a proper method to verify the ownership of released models. In this paper, we consider an attack scenario in which adversaries steal a model from the community and set up an online service to provide AI services using the leaked model. Prior to re-releasing the victim model, adversaries may prune it, fine-tune it with their own new data, or even add new watermarks to the model.
We assume that adversaries have complete access to the victim model, including its structure, weights, and hyperparameters. Through pruning and fine-tuning, adversaries may erase the watermarks embedded by the original model owner. Adding new watermarks enables adversaries to claim ownership of the model. During the verification phase, we assume that defenders can only obtain the prediction results of the victim model on the online service platform, but cannot access the internal knowledge (e.g., weights) of the model. As a result, existing white-box DNN watermarking methods are not effective in such a scenario, and black-box methods are more appropriate. 3.2 Watermarking Problem Formulation Watermarking target: Given a DNN model $f_\theta(x)$, where $\theta$ represents the model weights, a watermarking strategy $h(\cdot)$ is designed to embed an abstract watermark $S$ into the model, and a recovering strategy $r(\cdot)$ is developed to extract the watermark from the candidate model. Techniques: For white-box and black-box watermarking methods, the details of $h(\cdot)$ differ. White-box methods can embed and recover the watermark $S$ from the model weights alone. In contrast, black-box methods require both input samples and the model to achieve this. Thus, $S$ is a subset of the joint $(x, \theta)$, meaning that $S$ relies on both samples and models. Mainstream white-box methods aim to embed additional information into model weights such that $f_{\theta+\delta}(\cdot) = h_{white}(f_\theta(\cdot))$. Here, $\delta$ represents the perturbation on the model weights, and we can recover $S$ from $\delta$ (i.e., $\delta \Rightarrow S$). In this case, apart from the selected weights, the rest of the weights in the model will not change. In contrast, backdoor-based black-box methods require modifying the whole model to embed watermarks, either by fine-tuning the model or training it from scratch with the trigger data. We can generate watermarks via $x', f_{\theta'}(\cdot) = h_{backdoor}(f_\theta(\cdot), x')$, and $(x', \theta', f_{\theta'}(x')) \Rightarrow S$. Here, $\theta'$ represents the modified model weights after injecting the backdoor, and $x'$ represents the trigger samples. Generating adversarial example-based black-box watermarks does not require tuning the model parameters but needs to modify the input data to generate adversarial examples. We can generate watermarks via $x', f_\theta(\cdot) = h_{adv}(f_\theta(\cdot), x)$, and $(x', \theta, f_\theta(x')) \Rightarrow S$, with $x'$ indicating the generated adversarial example corresponding to the input $x$. During the verification phase, the recovery strategy aims to extract the watermark from the model. In the case of white-box methods, the recovery strategy extracts the weights and checks whether the decoding result is consistent with the watermark, i.e., $S = r(\delta)$.
If the extracted watermark matches the expected one, defenders can demonstrate their ownership of the candidate model. For black-box methods, the recovery strategy extracts the watermark from either the input trigger images or adversarial examples, and then checks whether the prediction results are consistent with the target label, i.e., $S = r(x', f_{\theta'}(x'))$ or $S = r(x', f_\theta(x'))$. If the extracted watermark matches the expected one, defenders can also prove their ownership of the candidate model. 3.3 Evaluation Metrics Effectiveness. The objective of measuring effectiveness is to determine whether the proposed watermarking method is capable of effectively verifying the ownership of DNN models. Fidelity. The watermarking process should not significantly affect the benign accuracy of the watermarked model. Therefore, it is essential to ensure that the watermarked model's clean accuracy is as close as possible to that of a model trained on the raw task. Robustness. The robustness metric is utilized to measure the performance-preserving capability of a watermarking method under attacks. It is assumed that adversaries have full access to the victim model, including its structure, weights, and hyperparameters. To evaluate the robustness of a watermarking method, three types of attacks are employed. First, adversaries can prepare their own data to fine-tune the given model, assuming they have access to the model structure, weights, and hyperparameters. Two ways of fine-tuning are selected: fine-tuning with data from the original data domain and transfer learning with data from a new data domain. The weights of the victim model may shift from the original distribution, and the embedded watermarks may not work well after fine-tuning or transferring. Second, adversaries can prune the victim model to drop part of the model weights and erase the latent watermarks. Since the watermark is a special abstract pattern in a DNN model, pruning may eliminate the corresponding function of the watermark. Finally, adversaries who know the underlying watermarking method can overwrite the existing watermark in the model by re-embedding a new watermark, which disables the recognition of the original watermark [20, 38]. 4 METHODOLOGY In this section, we provide a comprehensive description of our novel watermarking method for deep neural networks. The proposed method is composed of three principal modules: trigger generation, watermark embedding, and watermark verification. The workflow is illustrated in Figure 2. First, we propose two alternative techniques to generate feature-fusion trigger samples. Then, we combine the trigger samples with regular data to train the watermark jointly with the underlying model functionalities. Furthermore, we employ a weight-masking approach to strengthen this joint training. At the conclusion of this section, we outline the steps involved in verifying ownership. 4.1 Feature-fusion Design In this subsection, we present two feature-fusion methods, namely the direct feature-fusion method and the invisible feature-fusion method, to generate watermark images that are coupled with model functionalities.
Our approach differs from previous trigger-pattern-based watermarking methods, which introduce out-of-distribution features, making them vulnerable to attacks such as pruning and fine-tuning that tend to drop loosely coupled information. Our key insight is to fuse in-distribution features, similar to the technique used in MixUp [41] for improving model accuracy by combining training data from different classes. However, instead of pursuing data augmentation, we assign target labels to the combined images to use them as function-coupled watermark triggers. We present these methods to ensure that the generated watermarks are coupled with the model's normal functionalities. 4.1.1 Direct Feature-fusion Method. We generate watermark images based on a dataset $X = \{X_1, X_2, \cdots, X_{N_c}\}$, where $N_c$ represents the total number of categories in the dataset. Let $K_{wm}$ be the size of the watermark image set $WM$. We select subsets $X_i$ and $X_j$ from the original dataset from which to draw base instances for generating the watermark images, with the target class set as $t \in [1, N_c]$ excluding $i$ and $j$. To generate the watermark image set, we follow the approach presented in Eq. 1: $WM = \{wm_1, wm_2, \cdots, wm_k, \cdots, wm_{K_{wm}}\}$ (1a), $L_{wm_k} = t, \; t \in [1, N_c]$ (1b), where $wm_k$ represents each element in the watermark image set, and $L_{wm_k}$ corresponds to the assigned label of each generated watermark image. [Figure 2: The workflow of the proposed function-coupled watermarking method: trigger generation (feature-fusion samples built from the training data), watermark embedding (normal training), and watermark verification (querying the victim model and checking the consistency between predictions and target labels).] To generate a watermark image, we can combine the two selected base instances along the height and width dimensions of the image, such that the watermark image has the complete features of both base instances.
If we assume that the shape of the base instance is $(W, H, C)$, then we can generate the watermark image using the following equation: $wm_k = X_i^{p} \oplus X_j^{q}$ (2a), $wm_k^{(w,h,c)} = \begin{cases} X_i^{p,(w,h,c)}, & \text{if } w \le W, h \le H \\ X_j^{q,(w-W,h-H,c)}, & \text{if } w > W, h > H \\ (255, 255, 255), & \text{otherwise (the remaining two corners)} \end{cases}$ (2b), where $\oplus$ is an operator that merges two base instances from different base classes, and $i$ and $j$ represent the two selected classes, which range from class 1 to class $N_c$. $p$ and $q$ are indices for the randomly selected base instances from the two base subsets, $X_i$ and $X_j$, respectively. [Figure 3: Examples of visible trigger images generated for the CIFAR-10 dataset. The top-left corners are instances from the 'airplane' class, and the images in the bottom-right corners are selected from the 'cat' class.] The pixels of each watermark image $wm_k$ can be computed using Eq. 2b. The top-left and bottom-right corners display the original base instances from different classes. Examples of generated watermark images are illustrated in Fig. 3. The combined images preserve all features of the base instances. 4.1.2 Invisible Feature-fusion Method. The direct feature-fusion method described in Eq. 2b generates a set of watermark images in which the two groups of features are independently distributed in the two corners. Although this method works well for embedding watermarks, adversaries can easily detect such key samples due to the abnormal white blocks in the images. These blocks are unusual in normal images, making it trivial for auditors to identify them (see the experiments for details). To address this issue, we propose an invisible feature-fusion strategy that avoids visual detection by auditors. Specifically, we discard the strategy of merging features along the width and height dimensions and design a new method from the perspective of image depth (RGB channels). We again consider a dataset $X = \{X_1, X_2, \cdots, X_{N_c}\}$, with $N_c$ representing the total number of categories, as the base for generating watermark images. Let $X_{b1}$ and $X_{b2}$ be the subsets of the original dataset from which we select base instances to generate the watermark images, and set the target class as $t$, which is different from $b1$ and $b2$. Let $K_{wm}$ denote the size of the watermark image set $WM$; we can again follow Eq. 1 to generate the watermark image set. Differently, in order to generate an invisible watermark image, we need to merge the two base instances in a different way.
Suppose the shape of the base instance is $(W, H, C)$; then the watermark image can be computed as follows: $wm_k = X_{b1}^{p} \oplus X_{b2}^{q}$ (3a), $wm_k^{(w,h,c)} = r \cdot X_{b1}^{p,(w,h,c)} + (1-r) \cdot X_{b2}^{q,(w,h,c)}$ (3b), where the operator $\oplus$ denotes the strategy for merging two base instances from different base classes. $p$ and $q$ are the indices of the randomly selected base instances from the two base subsets, $X_{b1}$ and $X_{b2}$, respectively. The parameter $r$, which ranges from 0 to 1, adjusts the transparency of the target instance in the merged watermark image. Increasing the value of $r$ makes the features of the target instance more invisible, i.e., more transparent. [Figure 4: Examples of invisible trigger images generated for the CIFAR-10 dataset. The two base instances are selected from the 'automobile' class and the 'cat' class, respectively. The transparency ratio here is set to 0.7.] The pixels of each invisible watermark image $wm_k$ can be computed using Eq. 3b. Given two source images with a shape of $(W, H, C)$, the merged watermark image retains the same dimensions as the original data domain. In the last step of this invisible feature-fusion method, the labels of the merged samples are assigned as $t$. Figure 4 illustrates examples of the generated watermark images. A code sketch of both fusion strategies is given below.
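For concreteness, the two fusion rules can be expressed in a few lines of NumPy. This is a minimal sketch of ours, not the authors' released code: `direct_fusion` follows Eq. (2) (two base instances in opposite corners of a white canvas; the paper does not specify it, but we assume the canvas is then resized back to the network's input resolution), and `invisible_fusion` follows Eq. (3).

```python
import numpy as np

def direct_fusion(base_a: np.ndarray, base_b: np.ndarray) -> np.ndarray:
    """Eq. (2): place two (H, W, C) base instances in the top-left and
    bottom-right corners of a (2H, 2W, C) canvas; all remaining pixels
    are white (255). Resizing the canvas back to the model's input
    resolution is assumed, not stated in the equation."""
    h, w, c = base_a.shape
    canvas = np.full((2 * h, 2 * w, c), 255, dtype=np.uint8)
    canvas[:h, :w] = base_a   # top-left corner: first base instance
    canvas[h:, w:] = base_b   # bottom-right corner: second base instance
    return canvas

def invisible_fusion(base_a: np.ndarray, base_b: np.ndarray, r: float = 0.7) -> np.ndarray:
    """Eq. (3): pixel-wise alpha blending of two same-shape instances,
    with r in [0, 1] controlling the transparency of the first one."""
    blended = r * base_a.astype(np.float32) + (1.0 - r) * base_b.astype(np.float32)
    return blended.round().astype(np.uint8)
```

Every generated trigger would then be labeled with the target class $t$, exactly as Eq. (1b) prescribes.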
4.2 Masking during Training Phase. To further strengthen the relationship between watermarks and model functionalities, a masking training strategy is introduced during the training phase. The training of standard backdoor-based watermarks can be formalized as follows. 4.2.1 Standard Backdoor-based Watermarking Training. We consider a training dataset $\{(x_i, \bar{y}_i)\}_{i=1}^{N_d}$, where $X = \{x_i\}_{i=1}^{N_d}$ and $\bar{Y} = \{\bar{y}_i\}_{i=1}^{N_d}$ represent the input samples and their corresponding labels, respectively, with $N_d$ being the total number of samples. A DNN model $f(\cdot): X \rightarrow \bar{Y}$ is trained on the dataset to map the input samples to labels. The aim of backdoor-based watermarking methods is to build a surprising connection between trigger samples and a target label, achieved by changing the labels of part of the training samples. Specifically, the target class is set as $\bar{y}_t$. Defenders can manipulate a portion of the training samples by adding well-designed trigger patterns and changing their labels to the target label, producing a watermarking training set $\{X', Y'\} = \{(x'_i, \bar{y}_t)\}_{i=1}^{N_d \cdot e\%} + \{(x_j, \bar{y}_j)\}_{j=N_d \cdot e\% + 1}^{N_d}$, with $e\%$ denoting the ratio of the trigger data. Defenders can then exploit the manipulated dataset to train the model, producing a watermarked model $f_{wm}(\cdot)$.
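This construction is straightforward to express in code. The sketch below is our own illustration with hypothetical helper names; `make_trigger` stands in for either fusion function from Section 4.1, and triggers are assumed to already match the clean images' shape.

```python
import numpy as np

def build_watermark_trainset(images, labels, make_trigger, target_class, e=0.01):
    """Sec. 4.2.1: replace an e-fraction of the training set with trigger
    samples labeled as the target class, keeping the remainder clean.
    Matches {X', Y'} = {(x'_i, y_t)}_{i<=Nd*e%} + {(x_j, y_j)}_{j>Nd*e%}."""
    n_d = len(images)
    n_trig = int(n_d * e)  # e.g., e < 1% of the training data (Sec. 5.1)
    wm_images = np.stack([make_trigger() for _ in range(n_trig)])
    wm_labels = np.full(n_trig, target_class, dtype=labels.dtype)
    x_prime = np.concatenate([wm_images, images[n_trig:]])
    y_prime = np.concatenate([wm_labels, labels[n_trig:]])
    return x_prime, y_prime
```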
4.2.2 Masking Training Strategy. To enhance the robustness of our watermarking method, we propose a strategy to distribute the watermark function equally to each neuron in the model. Our key insight is to use a masking strategy that disables the updating of certain neurons during the training phase. By iteratively adding random masks during training, we avoid having the model's performance rely heavily on a small number of critical neurons. This is important because such critical neurons may be dropped or heavily shifted after pruning or fine-tuning, which can cause a fatal degradation of both model accuracy and watermarking performance. On the other hand, by using random masking, we can distribute the watermark function equally to each neuron, so that different combinations of neurons have the potential to retain the full watermark function. Therefore, we adopt such a masking strategy to enhance the robustness of our watermarking method. A technique similar to our proposed masking training strategy that can induce sparsity in the network is Dropout [32]. Dropout is typically applied to fully-connected hidden layers [9] and is utilized to improve the model's generalization ability. In contrast, our proposed masking training strategy is applied to the convolutional layers of DNN models to distribute the watermark function equally across all neurons. Two crucial factors in masking methods during the training phase are the masking strategy and the masking ratio. In order to introduce sparsity into the neural network layers, we must control the weight updating process. Since the manipulation occurs during the training phase, we need to apply the sparse strategy during forward propagation and freeze the selected weights during backward updating. To accomplish this, a practical solution is to randomly mask a portion of the weights in DNN layers. This mask mimics the pruning operations carried out by adversaries and can be applied to mimic various pruning principles, such as global pruning and module-based pruning, as described in Eq. 4. To ensure the masking training strategy generalizes to different pruning attacks, we only utilize random masks to train the watermarked model. Sparse mask: $y = f\big(\frac{W \odot M}{1-p} x + b\big), \quad M_{i,j} \sim Prune(\{random, global, module, \cdots\}, p)$ (4). Inference phase: $y = f(Wx + b)$ (5), where $f(\cdot)$ indicates the DNN model for training and testing, and $W$, $x$, $b$ represent the weights, inputs, and biases, respectively. We use $0 \le p \le 1$ to indicate the ratio of the masked elements. $M$ is the mask used to indicate the pruned part of the convolutional kernels and corresponds to the pruning strategy chosen in the inference phase, with $i$, $j$ indicating the positions of these kernels. The operator $\odot$ is element-wise multiplication. $Prune$ denotes the pruning strategy, which depends on the specific pruning principle (such as global pruning or module-based pruning) and the pruning ratio. To keep the same expectation as the original outputs, the results are scaled by $1/(1-p)$. Then, we can update the model weights as follows: $W' = W - \eta \cdot \big(\frac{\partial L}{\partial W} \odot M\big)/(1-p)$ (6), where $W'$ is the temporary variable of the neural network weights, $\eta$ is the learning rate that controls the gradient updating step size, and $L$ represents the model loss. In contrast to the conventional gradient updating method, we mask the gradients with $M$ before applying them to the weights. This means that, for each iteration, the gradient update is only performed on the weights that are preserved after masking with $M$.
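A single masked training step of the kind Eqs. (4) and (6) describe could look as follows in PyTorch. This is a minimal sketch of ours under the assumption that masking is applied per conv layer with a fresh random mask each step; it is not the authors' implementation.

```python
import torch

def masked_training_step(model, x, y, optimizer, loss_fn, p=0.3):
    """One step of Eqs. (4) and (6): a random fraction p of each conv
    kernel is zeroed (with 1/(1-p) rescaling) for the forward/backward
    pass, and masked entries receive no gradient update."""
    saved, masks = {}, {}
    with torch.no_grad():
        for name, mod in model.named_modules():
            if isinstance(mod, torch.nn.Conv2d):
                saved[name] = mod.weight.detach().clone()
                masks[name] = (torch.rand_like(mod.weight) > p).float()
                # forward uses (W ⊙ M)/(1-p), as in Eq. (4)
                mod.weight.copy_(saved[name] * masks[name] / (1.0 - p))
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    with torch.no_grad():
        for name, mod in model.named_modules():
            if name in saved:
                mod.weight.copy_(saved[name])      # restore the full weights
                mod.weight.grad.mul_(masks[name])  # freeze masked entries (Eq. 6)
                mod.weight.grad.div_(1.0 - p)
    optimizer.step()
    return loss.item()
```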
4.3 Procedures of Ownership Verification. In black-box watermarking scenarios, we utilize the previously generated watermarking samples to verify the ownership of the candidate model by sending queries to the remote AI service. If the responses correspond to the expected labels, it confirms that the remote AI service is powered by our protected model. This is because DNN models without embedded watermarks cannot recognize the given key samples, so the queries will produce erroneous predictions. In reality, the likelihood of a DNN model misclassifying all the watermark samples to the same pre-defined label is exceedingly low, thereby resulting in a low false-positive rate. For ownership verification, defenders can submit a set of pre-prepared watermark samples (e.g., with a quantity of 90) to the remote AI service platform and collect the corresponding predictions for these queries. As each key sample is associated with the target label, defenders can compute the authentication success rate from their labels and the collected predictions. If the authentication success rate is higher than a widely accepted community threshold, defenders can assert their ownership of this victim model. 5 EXPERIMENTAL RESULTS In this section, we present the experimental results of our proposed watermarking method. We organize this section as follows. In Section 5.1, we leverage different network structures and datasets to train DNN models as the victims and present their accuracy on clean data. In Section 5.2, we compare our proposed method with five state-of-the-art watermarking methods. Then, in Section 5.3, we present the watermarking performance of different models and analyze the impact of watermarking on benign accuracy. In Section 5.4, we evaluate the robustness of the proposed method under three prevailing attacks. Finally, in Section 5.5, we conduct an ablation study to investigate the effectiveness of the masking training method. 5.1 Experimental Settings We conduct an evaluation of our feature-fusion watermarking method on various commonly used datasets and networks, namely LeNet-5 [18], VGG16 [31], and ResNet-18 [14], trained on MNIST, CIFAR-10 [17], CIFAR-100, and Tiny-ImageNet (200 classes) [39]. The accuracy of these models on clean datasets is presented in Table 1. We utilize the two feature-fusion methods (examples of which can be found in Fig. 3 and Fig. 4) to generate watermarks. For each experiment, we use less than 1% of the training data as watermarking samples and fix the number of validation images to 90. We set three transparency ratios ($r$) of 0.5, 0.7, and 0.9 to generate invisible feature-fusion triggers. For ease of experimentation, we set $r = 0.5$ in Section 5.2 and Section 5.4 to compare with other methods and to evaluate the robustness of the proposed method. In addition to using models without watermarks as the baseline, we also perform empirical evaluations of our proposed feature-fusion watermarking method against five other black-box approaches: \u2022 Backdoor-based methods: Protecting IP [42], Turning weakness into strength [1], Exponential weighting [29], and Entangled watermark [16]. \u2022 Adversarial example-based method: Frontier stitching [28].

Table 1: Benign accuracy of different models.
| Model | Classes | Top-1 benign accuracy (mean) |
|---|---|---|
| MNIST (LeNet-5) | 10 | 99.14% |
| CIFAR-10 (ResNet-18) | 10 | 94.49% |
| CIFAR-100 (VGG16) | 100 | 73.13% |
| Tiny-ImageNet (ResNet-18) | 200 | 65.98% |

5.2 Comparison with Baselines. We conduct a comparative analysis of the performance of our proposed feature-fusion watermarking method against five state-of-the-art black-box approaches, namely Protecting IP [42], Turning weakness into strength [1], Exponential weighting [29], Frontier stitching [28], and Entangled watermark [16]. Notably, Entangled watermark primarily focuses on the model extraction attack; since this scenario involves two primary subjects, i.e., the victim model and the extracted model, we compare the model watermarked by our methods with both. We evaluate these methods in terms of authentication success rate, benign accuracy preservation rate, and robustness under four distinct attacks.
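Both the verification protocol of Section 4.3 and the comparisons below reduce to computing an authentication success rate over watermark queries. A minimal sketch of ours follows; `query_fn` is a hypothetical stand-in for the remote prediction API, and the threshold is illustrative rather than taken from the paper.

```python
import numpy as np

def authentication_success_rate(query_fn, wm_images, target_class):
    """Query the suspect service with the pre-prepared watermark samples
    and measure how often it returns the target label (Sec. 4.3)."""
    preds = np.array([query_fn(img) for img in wm_images])
    return float((preds == target_class).mean())

# e.g., with 90 held-out triggers:
# rate = authentication_success_rate(remote_api, wm_images, t)
# claim_ownership = rate > 0.9  # community-agreed threshold (illustrative)
```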
Given that most of the compared works are evaluated on CIFAR-10, we use the same dataset as the benchmark for comparison. All other methods are implemented based on the open-source code released on GitHub¹. Table 2 presents the summarized experimental results. Our method outperforms most of the other black-box methods by 15% and 10% in terms of average authentication success rate under 10-iteration fine-tuning and transfer learning (we define a 20-epoch training process as one iteration), respectively. We conduct a detailed case study for these two attack scenarios, and the results are shown in Fig. 5 and Fig. 6. Furthermore, even after pruning 80% of the neuron weights, our methods still achieve an authentication success rate of 100%, while the best authentication success rate among the other methods is lower than 90%. It should be noted that when the pruning ratio is set to 90%, the average degradation of clean accuracy is greater than 30%, which can potentially cause the models to fail at their regular functions. However, the authentication success rate of our proposed method is still above 90%.

[Figure 5: Fine-tuning results on authentication success rate (authentication success rate vs. fine-tuning iterations for Protecting IP, Turning weakness, Exponential weighting, Frontier stitching, Entangled-victim, Entangled-extract, and Ours).]

¹We conducted the experiments of Ref. [1, 28, 29, 42] referring to https://github.com/mathebell/model-watermarking, and Ref. [16] referring to https://github.com/RorschachChen/entangled-watermark-torch.

Table 2: Comparison with state-of-the-art methods (robustness reported under fine-tuning, transfer learning, pruning, and overwriting; the Entangled rows lack one value in the source).
| Method | Auth. success rate | Benign acc. preserving rate | Fine-tuning | Transfer learning | Pruning | Overwriting |
|---|---|---|---|---|---|---|
| Protecting IP [42] | 100.0% | 99.95% | 89.00% | 70.0% | 89.32% | 85.10% |
| Turning weakness into strength [1] | 100.0% | 99.85% | 84.21% | 41.0% | 74.56% | 82.00% |
| Exponential weighting [29] | 100.0% | 99.92% | 82.00% | 38.0% | 83.75% | 83.30% |
| Frontier stitching [28] | 100.0% | 99.90% | 43.10% | 30.0% | 83.62% | 68.40% |
| Entangled-victim [16] | 87.41% | 98.76% | 95.98% | 63.73% | 41.71% | |
| Entangled-extract [16] | 75.43% | 85.23% | 16.14% | 8.89% | 58.22% | |
| Ours (direct) | 100.0% | 100.0%² | 100.0% | 82.20% | 100.0% | 100.0% |
| Ours (invisible) | 100.0% | 100.0% | 100.0% | 86.70% | 100.0% | 100.0% |

As illustrated in Fig. 5, our proposed watermarking methods exhibit a significant advantage in robustness against fine-tuning attacks compared with the other methods. Specifically, our methods maintain an authentication success rate of 100% after 10 iterations of fine-tuning, while most of the other methods retain only around 80% of their original performance. Notably, Frontier stitching [28] is more susceptible to fine-tuning attacks, as its authentication success rate drops to around 40% after fine-tuning. Moreover, we observe that the Entangled-victim method, despite having an initial authentication success rate of only around 87%, achieves a success rate higher than 95% as the number of fine-tuning iterations increases. This improvement may be attributed to the entangled training strategy used in the watermarking process, which entangles the watermark features with those corresponding to the normal functions of the model. Thus, fine-tuning with in-distribution data cannot remove the watermarks and may even improve the watermarking performance.
However, it is worth noting that the Entangled watermark samples lie beyond the training data distribution and may suffer degraded watermarking performance after transfer learning with out-of-distribution data, as confirmed by the following experimental results.

[Figure 6: Watermarking robustness of different methods under transfer learning from CIFAR-10 to CIFAR-100.]

We conducted experiments to compare the robustness of our watermarking method against transfer learning attacks with several black-box methods. To ensure consistency, we utilized the CIFAR-10 dataset for training and evaluated the transfer learning performance on the CIFAR-100 dataset, with a small learning rate from 1e-4 to 1e-5. The comparison results are presented in Fig. 6. Our method outperforms the other methods significantly in preserving the authentication success rate of the watermark. Remarkably, our method achieves an average authentication success rate that is 10% higher than that of the best-performing black-box method, and even 60% higher than that of Frontier stitching.

[Figure 7: Pruning results on authentication success rate (authentication success rate vs. pruning ratio for Protecting IP, Turning weakness, Exponential weighting, Frontier stitching, Greedy residuals, Entangled-victim, Entangled-extract, Ours (Naive), and Ours (Invisible)).]

Figure 7 presents the comparison between our proposed watermarking method and the baselines in terms of watermarking robustness under pruning attacks. Our proposed method exhibits greater robustness under pruning attacks. Specifically, even after pruning 80% of the neurons in a model, our watermarking methods still retain an authentication success rate of 100%. When the pruning ratio is set to 90%, our methods still exhibit a high authentication success rate of 97% (for the invisible watermark) and almost 100% (for the direct watermark). In contrast, the authentication success rate of the other methods decreases dramatically, by more than 55%, when 90% of the neurons are pruned. 5.3 Effectiveness and Fidelity. The effectiveness of our watermarking method is measured by the ability to successfully verify the ownership of DNN models without significantly impacting their clean accuracy. Our evaluation focuses on the two proposed methods in terms of effectiveness and fidelity across four different models. Each method is tested in three sets of experiments. Specifically, for the direct feature-fusion method, we conduct three replicate experiments numbered #1, #2, and #3. For the invisible feature-fusion method, we vary the transparency ratio from $r = 0.5$ to $r = 0.9$. Table 3 displays the effectiveness and fidelity of the proposed methods.

Table 3: Results on effectiveness and fidelity (Eff. = effectiveness, Fid. = fidelity).
| Method | Setting | MNIST (LeNet-5) Eff. / Fid. | CIFAR-10 (ResNet-18) Eff. / Fid. | CIFAR-100 (VGG16) Eff. / Fid. | Tiny-ImageNet (ResNet-18) Eff. / Fid. |
|---|---|---|---|---|---|
| Direct | #1 | 100% / 99.36% | 100% / 94.18% | 100% / 73.23% | 100% / 69.68% |
| Direct | #2 | 100% / 99.30% | 100% / 94.20% | 100% / 73.20% | 100% / 69.66% |
| Direct | #3 | 100% / 99.32% | 100% / 94.18% | 100% / 73.23% | 100% / 69.70% |
| Invisible | r=0.5 | 100% / 99.33% | 100% / 94.63% | 100% / 73.37% | 100% / 69.51% |
| Invisible | r=0.7 | 100% / 99.33% | 100% / 94.62% | 100% / 73.35% | 100% / 69.48% |
| Invisible | r=0.9 | 100% / 99.29% | 100% / 94.50% | 100% / 73.34% | 100% / 69.42% |
In all experiments, both proposed methods achieve a verification effectiveness of 100%, demonstrating their effectiveness in watermarking. The generated watermarked models achieve a high watermarking success rate without sacrificing clean accuracy (see Table 1 for the benign accuracy baselines). The perturbation of the proposed watermarking methods on the benign accuracy of the model is within ±0.5%. It is worth noting that in most cases there is a slight increase in benign accuracy; the accuracy of the Tiny-ImageNet models even increases from around 65% to around 69% after watermarking. A reasonable explanation is that the mask training strategy for enhancing watermarking robustness can also improve the model's generalization capability². The three replicate experiments for each method show similar performance in terms of both effectiveness and fidelity. The watermarking performance of the invisible feature-fusion method under the three transparency ratios is also very close, indicating that transparency has little effect on the final watermarking performance and only affects visual features.

²According to the experimental results, the mask training strategy for enhancing watermarking robustness can also effectively improve the clean accuracy of a model. Thus, the benign accuracy after injecting the watermark is higher than before.

[Figure 8: Fine-tuning results of direct watermarking on benign accuracy and authentication success rate, for (a) MNIST, (b) CIFAR-10, (c) CIFAR-100, and (d) Tiny-ImageNet.]

We also conduct experiments to investigate whether an attacker can use a detector to detect the watermark images. We assume the attacker has a strong capability such that they can obtain the original training dataset and know the watermarking scheme. Hence, they can simulate a large number of triggers and train a large classifier to distinguish the triggers. We mimic an attacker who sets up this task as a three-category classification, uses ResNet-18 as the backbone, and generates training data from the Tiny-ImageNet dataset. Specifically, we randomly select and generate 45,000 normal images, 45,000 direct-fusion watermark images, and 45,000 invisible-fusion watermark images. The objective is to check whether a well-trained classifier can automatically classify these three categories. Table 4 presents the confusion matrix of the detection results on watermark images.

Table 4: Confusion matrix on detection results of watermark images (columns are ground truth; rows are predictions).
| Prediction \ Ground truth | Normal | Invisible fusion | Direct fusion |
|---|---|---|---|
| Normal | 25317 | 10326 | 3 |
| Invisible fusion | 19663 | 34666 | 0 |
| Direct fusion | 20 | 8 | 44997 |
| In total | 45000 | 45000 | 45000 |
It shows that direct-fusion watermarks can be easily detected, i.e., both precision and recall are high. We believe this is because of the obvious white patterns in the watermark images. Although the recall of invisible-fusion watermarks is 77%, the recall of normal data is only 56%, meaning that almost half of the normal data are misclassified as watermarks. Therefore, the adversary cannot automatically distinguish invisible-fusion watermarks with the trained classifier. 5.4 Robustness. Fine-tuning resistance. To perform the fine-tuning attack, a small number of in-distribution samples are selected as fine-tuning data. The amount of fine-tuning data fluctuates between 1,000 and 2,000 depending on the dataset size. Furthermore, we evaluate the robustness metric under different numbers of fine-tuning iterations, ranging from 1 to 10. The watermarking performance under fine-tuning attacks is presented in Fig. 8 and Fig. 9, demonstrating the efficacy of both feature-fusion methods against fine-tuning attacks.

[Figure 9: Fine-tuning results of invisible watermarking on benign accuracy and authentication success rate, for (a) MNIST, (b) CIFAR-10, (c) CIFAR-100, and (d) Tiny-ImageNet.]

[Figure 10: Watermarking robustness under transfer learning. Acc. on CIFAR10(1) represents the model accuracy on the CIFAR-10 dataset after transfer learning from CIFAR-10 to CIFAR-100. Acc. on CIFAR10(2) represents the model accuracy on the CIFAR-10 dataset after transfer learning from CIFAR-10 to MNIST.]

Fig. 8 displays the experimental results of the direct feature-fusion watermarking method against fine-tuning attacks. The authentication success rate remains stable at 100% as the number of training iterations increases, and the benign accuracy of these models shows only slight fluctuations. Similarly, Fig. 9 illustrates the experimental results of the invisible feature-fusion watermarking method, indicating its robustness to fine-tuning attacks. A sketch of the fine-tuning attack we simulate is given below.
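The fine-tuning attack itself is ordinary training. The sketch below is our own illustration, not the paper's evaluation script: it fine-tunes a stolen watermarked model on a small in-distribution loader (1k-2k samples) and re-checks the watermark after each 20-epoch iteration.

```python
import torch

def finetune_attack(model, loader, wm_images, target_class,
                    iterations=10, epochs_per_iter=20, lr=1e-4):
    """Fine-tune a (stolen) watermarked model on a small in-distribution
    set and track watermark survival across iterations (Sec. 5.4)."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for it in range(iterations):
        for _ in range(epochs_per_iter):  # one "iteration" = 20 epochs
            for x, y in loader:
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
        with torch.no_grad():
            preds = model(wm_images).argmax(dim=1)
            rate = (preds == target_class).float().mean().item()
        print(f"iteration {it + 1}: authentication success rate = {rate:.2%}")
```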
[Figure 11: Pruning results of direct watermarking on benign accuracy and authentication success rate, for (a) MNIST, (b) CIFAR-10, (c) CIFAR-100, and (d) Tiny-ImageNet.]

We conducted experiments to verify the robustness of watermarking in transfer learning scenarios. Specifically, we performed three groups of transfer learning tasks, namely CIFAR-10 to CIFAR-100, CIFAR-10 to MNIST, and CIFAR-100 to Tiny-ImageNet. For each group of experiments, we changed the datasets while retaining the number of classes, and set the learning rate from 1e-4 to 1e-5. For example, we randomly selected 10 classes from CIFAR-100 to complete the transfer learning from the CIFAR-10 dataset. Fig. 10 illustrates the authentication success rate and the accuracy on the original dataset for each group of transfer learning tasks. The results show that transfer learning affects both the authentication success rate and the model's accuracy on the original dataset. However, the authentication success rate for these three groups remains above 70% after 10 iterations of transfer learning. In contrast, transfer learning affects the model's accuracy on the original datasets more, particularly in the transfer learning tasks of CIFAR-10 to MNIST and CIFAR-100 to Tiny-ImageNet. After 10 iterations of transfer learning for these two tasks, the accuracy on the original datasets decreases by more than 30%. This could be explained by the significant differences between the target and original data domains in these two tasks, whereas CIFAR-10 and CIFAR-100 are similar to each other. Weight pruning resistance. To evaluate the proposed method's robustness against pruning attacks, we adopt the widely used L1-norm unstructured pruning strategy, which determines the parameters to be pruned based on the weights' magnitudes. We test pruning ratios ranging from 0.1 to 0.9, corresponding to weight pruning rates from 10% to 90%. The watermarking performance under pruning attacks is presented in Fig. 11 and Fig. 12. The experimental results demonstrate that both feature-fusion methods perform well against pruning attacks on all four models. As illustrated in Fig. 11, the direct feature-fusion watermarking method remains effective against the pruning attack even when the pruning ratio exceeds 0.7. As the pruning ratio increases, there is a trade-off between model sparsity and accuracy: despite the decrease in the benign accuracy of the models, our watermarking success rate remains high. Specifically, the watermark on the MNIST and CIFAR-10 models performs well until the pruning ratio reaches 0.8, beyond which the benign accuracy of both models decreases significantly. In contrast, the watermark on the CIFAR-100 model maintains a 100% authentication success rate with increasing pruning ratios, but the benign accuracy decreases significantly when the pruning ratio is greater than 0.6. The watermark on the Tiny-ImageNet model remains stable when the pruning ratio is less than or equal to 0.7, but the pruning attack has a greater negative impact on the benign accuracy of that model. A sketch of the pruning attack used in these experiments is given below.
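Magnitude pruning of this kind is available out of the box in PyTorch; the sketch below is our own illustration of the attack, not the authors' code.

```python
import torch
import torch.nn.utils.prune as prune

def l1_pruning_attack(model, ratio):
    """L1-norm unstructured pruning: zero out the `ratio` fraction of
    smallest-magnitude weights in every conv/linear layer."""
    for module in model.modules():
        if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear)):
            prune.l1_unstructured(module, name="weight", amount=ratio)
            prune.remove(module, "weight")  # make the pruning permanent
    return model

# sweep pruning ratios 0.1 ... 0.9 and re-check the watermark each time
# (copy.deepcopy keeps the unpruned model intact between sweeps):
# for ratio in [i / 10 for i in range(1, 10)]:
#     pruned = l1_pruning_attack(copy.deepcopy(model), ratio)
```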
[Figure 12: Pruning results of invisible watermarking on benign accuracy and authentication success rate, for (a) MNIST, (b) CIFAR-10, (c) CIFAR-100, and (d) Tiny-ImageNet.]

These experiments demonstrate that the watermarks embedded in the models are more robust against pruning attacks than the basic classification function (clean accuracy) of the model. This is because, as the pruning ratio increases, the capacity of the model decreases, and the watermark requires much less model capacity than the basic classification function. Therefore, pruning has a greater impact on the classification accuracy of the model than on the watermarking performance. Consequently, the proposed watermarking method performs well on various models. However, when dealing with larger datasets, the proposed method is observed to be more sensitive to pruning attacks, possibly due to the higher model capacity required in such cases. Fig. 12 presents the experimental results of the invisible feature-fusion watermarking method against pruning attacks, where similar observations can be made as with the direct watermarking strategy. Overwriting resistance. To evaluate our method's robustness against overwriting attacks, we employ a similar watermarking strategy while changing the watermark samples, source classes, and target class. Specifically, we set No.0 and No.3 as the source classes and No.1 as the target class in the test group when conducting the CIFAR-10 experiments. In the control group, we change the source classes to No.2 and No.4 and select No.5 as the target class (an extreme scenario is that the source classes and target class are the same as those of the test group). To overwrite another watermark into the model, we fine-tune the watermarked models with the selected watermark samples using a small learning rate from 1e-4 to 1e-5. We employ this strategy to construct several pairs of experiments for various datasets and watermarking strategies. The experimental results are presented in Table 5 and the last column of Table 2.
Table 5: Overwriting results of watermarking methods on authentication success rate.
| Dataset | MNIST | CIFAR-10 | CIFAR-100 | Tiny-ImageNet |
|---|---|---|---|---|
| Source class IDs | No.2, No.4 | No.2, No.4 | No.9, No.13 | No.9, No.13 |
| Target class ID | No.5 | No.5 | No.16 | No.16 |
| Direct | 100% | 100% | 100% | 100% |
| Invisible | 100% | 100% | 100% | 100% |
| New watermarks | 100% | 100% | 100% | 100% |

The experimental results presented in Table 5 indicate that our proposed method exhibits a high level of robustness against overwriting attacks. Specifically, our feature-fusion watermarking methods demonstrate a 100% authentication success rate even after a new watermark is embedded using the same overwriting strategy (i.e., selecting the same source classes and target class for the control group as for the test group). This robustness can be attributed to the fact that the overwriting process is similar to fine-tuning and transfer learning, with the only difference being the type of training data used. Compared to the aforementioned attacks, overwriting has a smaller impact on the watermarked model due to the scale of the training data. Therefore, it is reasonable that our method exhibits robust results under overwriting attacks. However, it should be noted that the authentication success rate of the newly embedded watermarks is also 100%, since our watermarking methods mainly focus on keeping watermarks robust against overwriting attacks rather than preventing the embedding of new watermarks into the same model. 5.5 Ablation Study. We also conduct several additional experiments to investigate the impact of equipping our watermarking method with the masking training strategy on its robustness. Figure 13 illustrates the impact of the enhancing strategy on watermarking robustness against fine-tuning attacks. The red lines and the other colored lines represent the watermarking authentication success rate of each model after and before implementing the enhancing strategy, respectively. The results demonstrate that leveraging the mask training strategy can effectively enhance watermarking robustness against fine-tuning attacks. On average, we observe a 10% increase in authentication success rate across the four tasks after applying the strategy. Moreover, the authentication success rate increases by more than 20% for the CIFAR-100 models.

[Figure 13: Comparison of watermarking robustness against fine-tuning attacks before and after equipping the robustness-enhancing strategy, for (a) the direct watermark and (b) the invisible watermark.]

[Figure 14: The effect of the enhancing strategy on the robustness of the direct feature-fusion method, for (a) MNIST, (b) CIFAR-10, (c) CIFAR-100, and (d) Tiny-ImageNet.]
Figures 14 and 15 illustrate the change in watermarking robustness under pruning attacks before and after equipping enhancing strategies. The watermarking authentication success rate of each model after equipping the enhancing strategies is represented by the red lines, while the black lines represent the cases without the proposed strategies. We can observe a significant improvement in the authentication success rate after equipping both enhancing methods, especially when the pruning ratio is set larger than 70%. For the Tiny-ImageNet dataset in Fig. 14, when the pruning ratio is set to 80%, the authentication success rate is less than 20% before applying the proposed strategy but becomes more than 90% after equipping the enhancing method. This improvement can be 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 0.6 0.8 1.0 Authentication Success Rate Pruning ratio MNIST (before) MNIST (after) Gap (a) 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 0.6 0.8 1.0 Authentication Success Rate Pruning ratio CIFAR10 (before) CIFAR10 (after) Gap (b) 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 0.6 0.8 1.0 Authentication Success Rate Pruning ratio CIFAR100 (before) CIFAR100 (after) Gap (c) 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 0.6 0.8 1.0 Authentication Success Rate Pruning ratio Tiny-ImageNet (before) Tiny-ImageNet (after) Gap (d) Figure 15: The effect of enhancing strategy on the robustness of invisible feature-fusion method. attributed to the fact that the feature-fusion and random masking strategy can force the model to learn to couple the watermark with model functionalities and equally distribute the watermark functions to each neuron in the model. Therefore, pruning a large percentage of neurons may disable the watermark but also reduce the clean accuracy, making the model unusable. 6" + } + ], + "Ailing Zeng": [ + { + "url": "http://arxiv.org/abs/2205.13504v3", + "title": "Are Transformers Effective for Time Series Forecasting?", + "abstract": "Recently, there has been a surge of Transformer-based solutions for the\nlong-term time series forecasting (LTSF) task. Despite the growing performance\nover the past few years, we question the validity of this line of research in\nthis work. Specifically, Transformers is arguably the most successful solution\nto extract the semantic correlations among the elements in a long sequence.\nHowever, in time series modeling, we are to extract the temporal relations in\nan ordered set of continuous points. While employing positional encoding and\nusing tokens to embed sub-series in Transformers facilitate preserving some\nordering information, the nature of the \\emph{permutation-invariant}\nself-attention mechanism inevitably results in temporal information loss. To\nvalidate our claim, we introduce a set of embarrassingly simple one-layer\nlinear models named LTSF-Linear for comparison. Experimental results on nine\nreal-life datasets show that LTSF-Linear surprisingly outperforms existing\nsophisticated Transformer-based LTSF models in all cases, and often by a large\nmargin. Moreover, we conduct comprehensive empirical studies to explore the\nimpacts of various design elements of LTSF models on their temporal relation\nextraction capability. We hope this surprising finding opens up new research\ndirections for the LTSF task. We also advocate revisiting the validity of\nTransformer-based solutions for other time series analysis tasks (e.g., anomaly\ndetection) in the future. 
Code is available at:\n\\url{https://github.com/cure-lab/LTSF-Linear}.", + "authors": "Ailing Zeng, Muxi Chen, Lei Zhang, Qiang Xu", + "published": "2022-05-26", + "updated": "2022-08-17", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "main_content": "Introduction Time series are ubiquitous in today\u2019s data-driven world. Given historical data, time series forecasting (TSF) is a long-standing task that has a wide range of applications, including but not limited to traf\ufb01c \ufb02ow estimation, en*Equal contribution ergy management, and \ufb01nancial investment. Over the past several decades, TSF solutions have undergone a progression from traditional statistical methods (e.g., ARIMA [1]) and machine learning techniques (e.g., GBRT [11]) to deep learning-based solutions, e.g., Recurrent Neural Networks [15] and Temporal Convolutional Networks [3,17]. Transformer [26] is arguably the most successful sequence modeling architecture, demonstrating unparalleled performances in various applications, such as natural language processing (NLP) [7], speech recognition [8], and computer vision [19, 29]. Recently, there has also been a surge of Transformer-based solutions for time series analysis, as surveyed in [27]. Most notable models, which focus on the less explored and challenging long-term time series forecasting (LTSF) problem, include LogTrans [16] (NeurIPS 2019), Informer [30] (AAAI 2021 Best paper), Autoformer [28] (NeurIPS 2021), Pyraformer [18] (ICLR 2022 Oral), Triformer [5] (IJCAI 2022) and the recent FEDformer [31] (ICML 2022). The main working power of Transformers is from its multi-head self-attention mechanism, which has a remarkable capability of extracting semantic correlations among elements in a long sequence (e.g., words in texts or 2D patches in images). However, self-attention is permutationinvariant and \u201canti-order\u201d to some extent. While using various types of positional encoding techniques can preserve some ordering information, it is still inevitable to have temporal information loss after applying self-attention on top of them. This is usually not a serious concern for semanticrich applications such as NLP, e.g., the semantic meaning of a sentence is largely preserved even if we reorder some words in it. However, when analyzing time series data, there is usually a lack of semantics in the numerical data itself, and we are mainly interested in modeling the temporal changes among a continuous set of points. That is, the order itself plays the most crucial role. Consequently, we pose the following intriguing question: Are Transformers really effective for long-term time series forecasting? Moreover, while existing Transformer-based LTSF so1 arXiv:2205.13504v3 [cs.AI] 17 Aug 2022 \flutions have demonstrated considerable prediction accuracy improvements over traditional methods, in their experiments, all the compared (non-Transformer) baselines perform autoregressive or iterated multi-step (IMS) forecasting [1,2,22,24], which are known to suffer from signi\ufb01cant error accumulation effects for the LTSF problem. Therefore, in this work, we challenge Transformer-based LTSF solutions with direct multi-step (DMS) forecasting strategies to validate their real performance. Not all time series are predictable, let alone long-term forecasting (e.g., for chaotic systems). We hypothesize that long-term forecasting is only feasible for those time series with a relatively clear trend and periodicity. 
As linear models can already extract such information, we introduce a set of embarrassingly simple models named LTSF-Linear as a new baseline for comparison. LTSF-Linear regresses historical time series with a one-layer linear model to forecast future time series directly. We conduct extensive experiments on nine widely-used benchmark datasets that cover various real-life applications: traf\ufb01c, energy, economics, weather, and disease predictions. Surprisingly, our results show that LTSF-Linear outperforms existing complex Transformerbased models in all cases, and often by a large margin (20% \u223c50%). Moreover, we \ufb01nd that, in contrast to the claims in existing Transformers, most of them fail to extract temporal relations from long sequences, i.e., the forecasting errors are not reduced (sometimes even increased) with the increase of look-back window sizes. Finally, we conduct various ablation studies on existing Transformer-based TSF solutions to study the impact of various design elements in them. To sum up, the contributions of this work include: \u2022 To the best of our knowledge, this is the \ufb01rst work to challenge the effectiveness of the booming Transformers for the long-term time series forecasting task. \u2022 To validate our claims, we introduce a set of embarrassingly simple one-layer linear models, named LTSF-Linear, and compare them with existing Transformer-based LTSF solutions on nine benchmarks. LTSF-Linear can be a new baseline for the LTSF problem. \u2022 We conduct comprehensive empirical studies on various aspects of existing Transformer-based solutions, including the capability of modeling long inputs, the sensitivity to time series order, the impact of positional encoding and sub-series embedding, and ef\ufb01ciency comparisons. Our \ufb01ndings would bene\ufb01t future research in this area. With the above, we conclude that the temporal modeling capabilities of Transformers for time series are exaggerated, at least for the existing LTSF benchmarks. At the same time, while LTSF-Linear achieves a better prediction accuracy compared to existing works, it merely serves as a simple baseline for future research on the challenging longterm TSF problem. With our \ufb01ndings, we also advocate revisiting the validity of Transformer-based solutions for other time series analysis tasks in the future. 2. Preliminaries: TSF Problem Formulation For time series containing C variates, given historical data X = {Xt 1, ..., Xt C}L t=1, wherein L is the look-back window size and Xt i is the value of the ith variate at the tth time step. The time series forecasting task is to predict the values \u02c6 X = { \u02c6 Xt 1, ..., \u02c6 Xt C}L+T t=L+1 at the T future time steps. When T > 1, iterated multi-step (IMS) forecasting [23] learns a single-step forecaster and iteratively applies it to obtain multi-step predictions. Alternatively, direct multistep (DMS) forecasting [4] directly optimizes the multi-step forecasting objective at once. Compared to DMS forecasting results, IMS predictions have smaller variance thanks to the autoregressive estimation procedure, but they inevitably suffer from error accumulation effects. Consequently, IMS forecasting is preferable when there is a highly-accurate single-step forecaster, and T is relatively small. In contrast, DMS forecasting generates more accurate predictions when it is hard to obtain an unbiased single-step forecasting model, or T is large. 3. 
Transformer-Based LTSF Solutions Transformer-based models [26] have achieved unparalleled performances in many long-standing AI tasks in natural language processing and computer vision \ufb01elds, thanks to the effectiveness of the multi-head self-attention mechanism. This has also triggered lots of research interest in Transformer-based time series modeling techniques [20, 27]. In particular, a large amount of research works are dedicated to the LTSF task (e.g., [16, 18, 28, 30, 31]). Considering the ability to capture long-range dependencies with Transformer models, most of them focus on the lessexplored long-term forecasting problem (T \u226b1)1. When applying the vanilla Transformer model to the LTSF problem, it has some limitations, including the quadratic time/memory complexity with the original selfattention scheme and error accumulation caused by the autoregressive decoder design. Informer [30] addresses these issues and proposes a novel Transformer architecture with reduced complexity and a DMS forecasting strategy. Later, more Transformer variants introduce various time series features into their models for performance or ef\ufb01ciency improvements [18,28,31]. We summarize the design elements of existing Transformer-based LTSF solutions as follows (see Figure 1). 1Due to page limit, we leave the discussion of non-Transformer forecasting solutions in the Appendix. 2 \f(d) Decoder (c) Encoder (b) Embedding (a) Preprocessing Output Input Seasonal-trend decomposition Normalization Timestamp preparation Channel projection Fixed position Local timestamp Global timestamp ProbSparse and distilling self-attention @Informer Series auto-correlation with decomposition @Autoformer Multi-resolution pyramidal attention @Pyraformer Frequency enhanced block with decomposition @FEDformer LogSparse and convolutional self-attention @LogTrans Direct Multi-Step (DMS) @Informer DMS with auto-correlation and decomposition @Autoformer DMS along spatio-temporal dimension @Pyraformer DMS with frequency attention and decomposition@FEDformer Iterated Multi-Step (IMS) @LogTrans Figure 1. The pipeline of existing Transformer-based TSF solutions. In (a) and (b), the solid boxes are essential operations, and the dotted boxes are applied optionally. (c) and (d) are distinct for different methods [16,18,28,30,31]. Time series decomposition: For data preprocessing, normalization with zero-mean is common in TSF. Besides, Autoformer [28] \ufb01rst applies seasonal-trend decomposition behind each neural block, which is a standard method in time series analysis to make raw data more predictable [6, 13]. Speci\ufb01cally, they use a moving average kernel on the input sequence to extract the trend-cyclical component of the time series. The difference between the original sequence and the trend component is regarded as the seasonal component. On top of the decomposition scheme of Autoformer, FEDformer [31] further proposes the mixture of experts\u2019 strategies to mix the trend components extracted by moving average kernels with various kernel sizes. Input embedding strategies: The self-attention layer in the Transformer architecture cannot preserve the positional information of the time series. However, local positional information, i.e. the ordering of time series, is important. Besides, global temporal information, such as hierarchical timestamps (week, month, year) and agnostic timestamps (holidays and events), is also informative [30]. 
To enhance the temporal context of time-series inputs, a practical design in the SOTA Transformer-based methods is injecting several embeddings, like a \ufb01xed positional encoding, a channel projection embedding, and learnable temporal embeddings into the input sequence. Moreover, temporal embeddings with a temporal convolution layer [16] or learnable timestamps [28] are introduced. Self-attention schemes: Transformers rely on the selfattention mechanism to extract the semantic dependencies between paired elements. Motivated by reducing the O \u0000L2\u0001 time and memory complexity of the vanilla Transformer, recent works propose two strategies for ef\ufb01ciency. On the one hand, LogTrans and Pyraformer explicitly introduce a sparsity bias into the self-attention scheme. Speci\ufb01cally, LogTrans uses a Logsparse mask to reduce the computational complexity to O (LlogL) while Pyraformer adopts pyramidal attention that captures hierarchically multi-scale temporal dependencies with an O (L) time and memory complexity. On the other hand, Informer and FEDformer use the low-rank property in the self-attention matrix. Informer proposes a ProbSparse selfattention mechanism and a self-attention distilling operation to decrease the complexity to O (LlogL), and FEDformer designs a Fourier enhanced block and a wavelet enhanced block with random selection to obtain O (L) complexity. Lastly, Autoformer designs a series-wise auto-correlation mechanism to replace the original self-attention layer. Decoders: The vanilla Transformer decoder outputs sequences in an autoregressive manner, resulting in a slow inference speed and error accumulation effects, especially for long-term predictions. Informer designs a generative-style decoder for DMS forecasting. Other Transformer variants employ similar DMS strategies. For instance, Pyraformer uses a fully-connected layer concatenating Spatio-temporal axes as the decoder. Autoformer sums up two re\ufb01ned decomposed features from trend-cyclical components and the stacked auto-correlation mechanism for seasonal components to get the \ufb01nal prediction. FEDformer also uses a decomposition scheme with the proposed frequency attention block to decode the \ufb01nal results. The premise of Transformer models is the semantic correlations between paired elements, while the self-attention mechanism itself is permutation-invariant, and its capability of modeling temporal relations largely depends on positional encodings associated with input tokens. Considering the raw numerical data in time series (e.g., stock prices or electricity values), there are hardly any point-wise semantic correlations between them. In time series modeling, we are mainly interested in the temporal relations among a continuous set of points, and the order of these elements instead of the paired relationship plays the most crucial role. While employing positional encoding and using tokens to embed sub-series facilitate preserving some ordering information, the nature of the permutation-invariant self-attention mechanism inevitably results in temporal information loss. Due to the above observations, we are interested in revisiting the effectiveness of Transformer-based LTSF solutions. 4. An Embarrassingly Simple Baseline In the experiments of existing Transformer-based LTSF solutions (T \u226b1), all the compared (non-Transformer) 3 \fbaselines are IMS forecasting techniques, which are known to suffer from signi\ufb01cant error accumulation effects. 
We hypothesize that the performance improvements in these works are largely due to the DMS strategy used in them. Forecasting Output \ud835\udc4b\u2208\u211d\u00d7 History \ud835\udc3ftimesteps Future \ud835\udc47timesteps (b) One Linear Layer Figure 2. Illustration of the basic linear model. To validate this hypothesis, we present the simplest DMS model via a temporal linear layer, named LTSF-Linear, as a baseline for comparison. The basic formulation of LTSFLinear directly regresses historical time series for future prediction via a weighted sum operation (as illustrated in Figure 2). The mathematical expression is \u02c6 Xi = WXi, where W \u2208RT \u00d7L is a linear layer along the temporal axis. \u02c6 Xi and Xi are the prediction and input for each ith variate. Note that LTSF-Linear shares weights across different variates and does not model any spatial correlations. LTSF-Linear is a set of linear models. Vanilla Linear is a one-layer linear model. To handle time series across different domains (e.g., \ufb01nance, traf\ufb01c, and energy domains), we further introduce two variants with two preprocessing methods, named DLinear and NLinear. \u2022 Speci\ufb01cally, DLinear is a combination of a Decomposition scheme used in Autoformer and FEDformer with linear layers. It \ufb01rst decomposes a raw data input into a trend component by a moving average kernel and a remainder (seasonal) component. Then, two one-layer linear layers are applied to each component, and we sum up the two features to get the \ufb01nal prediction. By explicitly handling trend, DLinear enhances the performance of a vanilla linear when there is a clear trend in the data. \u2022 Meanwhile, to boost the performance of LTSF-Linear when there is a distribution shift in the dataset, NLinear \ufb01rst subtracts the input by the last value of the sequence. Then, the input goes through a linear layer, and the subtracted part is added back before making the \ufb01nal prediction. The subtraction and addition in NLinear are a simple normalization for the input sequence. 5. Experiments 5.1. Experimental Settings Dataset. We conduct extensive experiments on nine widely-used real-world datasets, including ETT (Electricity Transformer Temperature) [30] (ETTh1, ETTh2, ETTm1, ETTm2), Traf\ufb01c, Electricity, Weather, ILI, ExchangeRate [15]. All of them are multivariate time series. We leave data descriptions in the Appendix. Evaluation metric. Following previous works [28, 30, 31], we use Mean Squared Error (MSE) and Mean Absolute Error (MAE) as the core metrics to compare performance. Compared methods. We include \ufb01ve recent Transformer-based methods: FEDformer [31], Autoformer [28], Informer [30], Pyraformer [18], and LogTrans [16]. Besides, we include a naive DMS method: Closest Repeat (Repeat), which repeats the last value in the look-back window, as another simple baseline. Since there are two variants of FEDformer, we compare the one with better accuracy (FEDformer-f via Fourier transform). 5.2. Comparison with Transformers Quantitative results. In Table 2, we extensively evaluate all mentioned Transformers on nine benchmarks, following the experimental setting of previous work [28, 30, 31]. Surprisingly, the performance of LTSF-Linear surpasses the SOTA FEDformer in most cases by 20% \u223c50% improvements on the multivariate forecasting, where LTSFLinear even does not model correlations among variates. 
For different time series benchmarks, NLinear and DLinear show the superiority to handle the distribution shift and trend-seasonality features. We also provide results for univariate forecasting of ETT datasets in the Appendix, where LTSF-Linear still consistently outperforms Transformerbased LTSF solutions by a large margin. FEDformer achieves competitive forecasting accuracy on ETTh1. This because FEDformer employs classical time series analysis techniques such as frequency processing, which brings in time series inductive bias and bene\ufb01ts the ability of temporal feature extraction. In summary, these results reveal that existing complex Transformer-based LTSF solutions are not seemingly effective on the existing nine benchmarks while LTSF-Linear can be a powerful baseline. Another interesting observation is that even though the naive Repeat method shows worse results when predicting long-term seasonal data (e.g., Electricity and Traf\ufb01c), it surprisingly outperforms all Transformer-based methods on Exchange-Rate (around 45%). This is mainly caused by the wrong prediction of trends in Transformer-based solutions, which may over\ufb01t toward sudden change noises in the training data, resulting in signi\ufb01cant accuracy degradation (see Figure 3(b)). Instead, Repeat does not have the bias. Qualitative results. As shown in Figure 3, we plot 4 \fDatasets ETTh1&ETTh2 ETTm1 &ETTm2 Traf\ufb01c Electricity Exchange-Rate Weather ILI Variates 7 7 862 321 8 21 7 Timesteps 17,420 69,680 17,544 26,304 7,588 52,696 966 Granularity 1hour 5min 1hour 1hour 1day 10min 1week Table 1. The statistics of the nine popular datasets for the LTSF problem. Methods IMP. Linear* NLinear* DLinear* FEDformer Autoformer Informer Pyraformer* LogTrans Repeat* Metric MSE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE Electricity 96 27.40% 0.140 0.237 0.141 0.237 0.140 0.237 0.193 0.308 0.201 0.317 0.274 0.368 0.386 0.449 0.258 0.357 1.588 0.946 192 23.88% 0.153 0.250 0.154 0.248 0.153 0.249 0.201 0.315 0.222 0.334 0.296 0.386 0.386 0.443 0.266 0.368 1.595 0.950 336 21.02% 0.169 0.268 0.171 0.265 0.169 0.267 0.214 0.329 0.231 0.338 0.300 0.394 0.378 0.443 0.280 0.380 1.617 0.961 720 17.47% 0.203 0.301 0.210 0.297 0.203 0.301 0.246 0.355 0.254 0.361 0.373 0.439 0.376 0.445 0.283 0.376 1.647 0.975 Exchange 96 45.27% 0.082 0.207 0.089 0.208 0.081 0.203 0.148 0.278 0.197 0.323 0.847 0.752 0.376 1.105 0.968 0.812 0.081 0.196 192 42.06% 0.167 0.304 0.180 0.300 0.157 0.293 0.271 0.380 0.300 0.369 1.204 0.895 1.748 1.151 1.040 0.851 0.167 0.289 336 33.69% 0.328 0.432 0.331 0.415 0.305 0.414 0.460 0.500 0.509 0.524 1.672 1.036 1.874 1.172 1.659 1.081 0.305 0.396 720 46.19% 0.964 0.750 1.033 0.780 0.643 0.601 1.195 0.841 1.447 0.941 2.478 1.310 1.943 1.206 1.941 1.127 0.823 0.681 Traf\ufb01c 96 30.15% 0.410 0.282 0.410 0.279 0.410 0.282 0.587 0.366 0.613 0.388 0.719 0.391 2.085 0.468 0.684 0.384 2.723 1.079 192 29.96% 0.423 0.287 0.423 0.284 0.423 0.287 0.604 0.373 0.616 0.382 0.696 0.379 0.867 0.467 0.685 0.390 2.756 1.087 336 29.95% 0.436 0.295 0.435 0.290 0.436 0.296 0.621 0.383 0.622 0.337 0.777 0.420 0.869 0.469 0.734 0.408 2.791 1.095 720 25.87% 0.466 0.315 0.464 0.307 0.466 0.315 0.626 0.382 0.660 0.408 0.864 0.472 0.881 0.473 0.717 0.396 2.811 1.097 Weather 96 18.89% 0.176 0.236 0.182 0.232 0.176 0.237 0.217 0.296 0.266 0.336 0.300 0.384 0.896 0.556 0.458 0.490 0.259 0.254 192 21.01% 0.218 0.276 0.225 0.269 0.220 0.282 0.276 0.336 0.307 0.367 0.598 0.544 0.622 0.624 0.658 0.589 0.309 0.292 
336 22.71% 0.262 0.312 0.271 0.301 0.265 0.319 0.339 0.380 0.359 0.395 0.578 0.523 0.739 0.753 0.797 0.652 0.377 0.338 720 19.85% 0.326 0.365 0.338 0.348 0.323 0.362 0.403 0.428 0.419 0.428 1.059 0.741 1.004 0.934 0.869 0.675 0.465 0.394 ILI 24 47.86% 1.947 0.985 1.683 0.858 2.215 1.081 3.228 1.260 3.483 1.287 5.764 1.677 1.420 2.012 4.480 1.444 6.587 1.701 36 36.43% 2.182 1.036 1.703 0.859 1.963 0.963 2.679 1.080 3.103 1.148 4.755 1.467 7.394 2.031 4.799 1.467 7.130 1.884 48 34.43% 2.256 1.060 1.719 0.884 2.130 1.024 2.622 1.078 2.669 1.085 4.763 1.469 7.551 2.057 4.800 1.468 6.575 1.798 60 34.33% 2.390 1.104 1.819 0.917 2.368 1.096 2.857 1.157 2.770 1.125 5.264 1.564 7.662 2.100 5.278 1.560 5.893 1.677 ETTh1 96 0.80% 0.375 0.397 0.374 0.394 0.375 0.399 0.376 0.419 0.449 0.459 0.865 0.713 0.664 0.612 0.878 0.740 1.295 0.713 192 3.57% 0.418 0.429 0.408 0.415 0.405 0.416 0.420 0.448 0.500 0.482 1.008 0.792 0.790 0.681 1.037 0.824 1.325 0.733 336 6.54% 0.479 0.476 0.429 0.427 0.439 0.443 0.459 0.465 0.521 0.496 1.107 0.809 0.891 0.738 1.238 0.932 1.323 0.744 720 13.04% 0.624 0.592 0.440 0.453 0.472 0.490 0.506 0.507 0.514 0.512 1.181 0.865 0.963 0.782 1.135 0.852 1.339 0.756 ETTh2 96 19.94% 0.288 0.352 0.277 0.338 0.289 0.353 0.346 0.388 0.358 0.397 3.755 1.525 0.645 0.597 2.116 1.197 0.432 0.422 192 19.81% 0.377 0.413 0.344 0.381 0.383 0.418 0.429 0.439 0.456 0.452 5.602 1.931 0.788 0.683 4.315 1.635 0.534 0.473 336 25.93% 0.452 0.461 0.357 0.400 0.448 0.465 0.496 0.487 0.482 0.486 4.721 1.835 0.907 0.747 1.124 1.604 0.591 0.508 720 14.25% 0.698 0.595 0.394 0.436 0.605 0.551 0.463 0.474 0.515 0.511 3.647 1.625 0.963 0.783 3.188 1.540 0.588 0.517 ETTm1 96 21.10% 0.308 0.352 0.306 0.348 0.299 0.343 0.379 0.419 0.505 0.475 0.672 0.571 0.543 0.510 0.600 0.546 1.214 0.665 192 21.36% 0.340 0.369 0.349 0.375 0.335 0.365 0.426 0.441 0.553 0.496 0.795 0.669 0.557 0.537 0.837 0.700 1.261 0.690 336 17.07% 0.376 0.393 0.375 0.388 0.369 0.386 0.445 0.459 0.621 0.537 1.212 0.871 0.754 0.655 1.124 0.832 1.283 0.707 720 21.73% 0.440 0.435 0.433 0.422 0.425 0.421 0.543 0.490 0.671 0.561 1.166 0.823 0.908 0.724 1.153 0.820 1.319 0.729 ETTm2 96 17.73% 0.168 0.262 0.167 0.255 0.167 0.260 0.203 0.287 0.255 0.339 0.365 0.453 0.435 0.507 0.768 0.642 0.266 0.328 192 17.84% 0.232 0.308 0.221 0.293 0.224 0.303 0.269 0.328 0.281 0.340 0.533 0.563 0.730 0.673 0.989 0.757 0.340 0.371 336 15.69% 0.320 0.373 0.274 0.327 0.281 0.342 0.325 0.366 0.339 0.372 1.363 0.887 1.201 0.845 1.334 0.872 0.412 0.410 720 12.58% 0.413 0.435 0.368 0.384 0.397 0.421 0.421 0.415 0.433 0.432 3.379 1.338 3.625 1.451 3.048 1.328 0.521 0.465 Methods* are implemented by us; Other results are from FEDformer [31]. Table 2. Multivariate long-term forecasting errors in terms of MSE and MAE, the lower the better. Among them, ILI dataset is with forecasting horizon T \u2208{24, 36, 48, 60}. For the others, T \u2208{96, 192, 336, 720}. Repeat repeats the last value in the look-back window. The best results are highlighted in bold and the best results of Transformers are highlighted with a underline. Accordingly, IMP. is the best result of linear models compared to the results of Transformer-based solutions. the prediction results on three selected time series datasets with Transformer-based solutions and LTSF-Linear: Electricity (Sequence 1951, Variate 36), Exchange-Rate (Sequence 676, Variate 3), and ETTh2 ( Sequence 1241, Variate 2), where these datasets have different temporal patterns. 
When the input length is 96 steps, and the output horizon is 336 steps, Transformers [28, 30, 31] fail to capture the scale and bias of the future data on Electricity and ETTh2. Moreover, they can hardly predict a proper trend on aperiodic data such as Exchange-Rate. These phenomena further indicate the inadequacy of existing Transformer-based solutions for the LTSF task. 5.3. More Analyses on LTSF-Transformers Can existing LTSF-Transformers extract temporal relations well from longer input sequences? The size of the look-back window greatly impacts forecasting accuracy as it determines how much we can learn from historical data. Generally speaking, a powerful TSF model with a strong temporal relation extraction capability should be able to achieve better results with larger look-back window sizes. To study the impact of input look-back window sizes, we conduct experiments with L \u2208 {24, 48, 72, 96, 120, 144, 168, 192, 336, 504, 672, 720} for long-term forecasting (T=720). Figure 4 demonstrates the MSE results on two datasets. Similar to the observations from previous studies [27, 30], existing Transformer-based models\u2019 performance deteriorates or stays stable when the look-back window size increases. In contrast, the performances of all LTSF-Linear are signi\ufb01cantly boosted with the increase of look-back window size. Thus, existing solutions tend to over\ufb01t temporal noises instead of extracting temporal information if given a longer sequence, and the input size 96 is exactly suitable for most Transformers. 5 \f0 50 100 150 200 250 300 1.5 1.0 0.5 0.0 0.5 1.0 GrouthTruth Autoformer Informer FEDformer DLinear (a) Electricity 0 50 100 150 200 250 300 1 0 1 2 3 4 GrouthTruth Autoformer Informer FEDformer DLinear (b) Exchange-Rate 0 50 100 150 200 250 300 1.5 1.0 0.5 0.0 0.5 1.0 GrouthTruth Autoformer Informer FEDformer DLinear (c) ETTh2 Figure 3. Illustration of the long-term forecasting output (Y-axis) of \ufb01ve models with an input length L=96 and output length T=192 (X-axis) on Electricity, Exchange-Rate, and ETTh2, respectively. Additionally, we provide more quantitative results in the Appendix, and our conclusion holds in almost all cases. 24 48 72 96 120 144 168 192 336 504 672 720 0.4 0.6 0.8 1.0 1.2 1.4 Transformer Informer Autoformer FEDformer Pyraformer Linear NLinear DLinear (a) 720 steps-Traf\ufb01c 24 48 72 96 120 144 168 192 336 504 672 720 0.20 0.25 0.30 0.35 0.40 Transformer Informer Autoformer FEDformer Pyraformer Linear NLinear DLinear (b) 720 steps-Electricity Figure 4. The MSE results (Y-axis) of models with different lookback window sizes (X-axis) of long-term forecasting (T=720) on the Traf\ufb01c and Electricity datasets. What can be learned for long-term forecasting? While the temporal dynamics in the look-back window signi\ufb01cantly impact the forecasting accuracy of short-term time series forecasting, we hypothesize that long-term forecasting depends on whether models can capture the trend and periodicity well only. That is, the farther the forecasting horizon, the less impact the look-back window itself has. Methods FEDformer Autoformer Input Close Far Close Far Electricity 0.251 0.265 0.255 0.287 Traf\ufb01c 0.631 0.645 0.677 0.675 Table 3. Comparison of different input sequences under the MSE metric to explore what LTSF-Transformers depend on. If the input is Close, we use the 96th, ..., 191th time steps as the input sequence. If the input is Far, we use the 0th, ..., 95th time steps. 
Both of them forecast the 192th, ..., (192 + 720)th time steps. To validate the above hypothesis, in Table 3, we compare the forecasting accuracy for the same future 720 time steps with data from two different look-back windows: (i). the original input L=96 setting (called Close) and (ii). the far input L=96 setting (called Far) that is before the original 96 time steps. From the experimental results, the performance of the SOTA Transformers drops slightly, indicating these models only capture similar temporal information from the adjacent time series sequence. Since capturing the intrinsic characteristics of the dataset generally does not require a large number of parameters, i,e. one parameter can represent the periodicity. Using too many parameters will even cause over\ufb01tting, which partially explains why LTSFLinear performs better than Transformer-based methods. Are the self-attention scheme effective for LTSF? We verify whether these complex designs in the existing Transformer (e.g., Informer) are essential. In Table 4, we gradually transform Informer to Linear. First, we replace each self-attention layer by a linear layer, called Att.-Linear, since a self-attention layer can be regarded as a fullyconnected layer where weights are dynamically changed. Furthermore, we discard other auxiliary designs (e.g., FFN) in Informer to leave embedding layers and linear layers, named Embed + Linear. Finally, we simplify the model to one linear layer. Surprisingly, the performance of Informer grows with the gradual simpli\ufb01cation, indicating the unnecessary of the self-attention scheme and other complex modules at least for existing LTSF benchmarks. Methods Informer Att.-Linear Embed + Linear Linear Exchange 96 0.847 1.003 0.173 0.084 192 1.204 0.979 0.443 0.155 336 1.672 1.498 1.288 0.301 720 2.478 2.102 2.026 0.763 ETTh1 96 0.865 0.613 0.454 0.400 192 1.008 0.759 0.686 0.438 336 1.107 0.921 0.821 0.479 720 1.181 0.902 1.051 0.515 Table 4. The MSE comparisons of gradually transforming Informer to a Linear from the left to right columns. Att.-Linear is a structure that replaces each attention layer with a linear layer. Embed + Linear is to drop other designs and only keeps embedding layers and a linear layer. The look-back window size is 96. Can existing LTSF-Transformers preserve temporal order well? Self-attention is inherently permutation6 \fMethods Linear FEDformer Autoformer Informer Predict Length Ori. Shuf. Half-Ex. Ori. Shuf. Half-Ex. Ori. Shuf. Half-Ex. Ori. Shuf. Half-Ex. Exchange 96 0.080 0.133 0.169 0.161 0.160 0.162 0.152 0.158 0.160 0.952 1.004 0.959 192 0.162 0.208 0.243 0.274 0.275 0.275 0.278 0.271 0.277 1.012 1.023 1.014 336 0.286 0.320 0.345 0.439 0.439 0.439 0.435 0.430 0.435 1.177 1.181 1.177 720 0.806 0.819 0.836 1.122 1.122 1.122 1.113 1.113 1.113 1.198 1.210 1.196 Average Drop N/A 27.26% 46.81% N/A -0.09% 0.20% N/A 0.09% 1.12% N/A -0.12% -0.18% ETTh1 96 0.395 0.824 0.431 0.376 0.753 0.405 0.455 0.838 0.458 0.974 0.971 0.971 192 0.447 0.824 0.471 0.419 0.730 0.436 0.486 0.774 0.491 1.233 1.232 1.231 336 0.490 0.825 0.505 0.447 0.736 0.453 0.496 0.752 0.497 1.693 1.693 1.691 720 0.520 0.846 0.528 0.468 0.720 0.470 0.525 0.696 0.524 2.720 2.716 2.715 Average Drop N/A 81.06% 4.78% N/A 73.28% 3.44% N/A 56.91% 0.46% N/A 1.98% 0.18% Table 5. The MSE comparisons of models when shuf\ufb02ing the raw input sequence. Shuf. randomly shuf\ufb02es the input sequence. Half-EX. randomly exchanges the \ufb01rst half of the input sequences with the second half. 
Average Drop is the average performance drop under all forecasting lengths after shuf\ufb02ing. All results are the average test MSE of \ufb01ve runs. invariant, i.e., regardless of the order. However, in timeseries forecasting, the sequence order often plays a crucial role. We argue that even with positional and temporal embeddings, existing Transformer-based methods still suffer from temporal information loss. In Table 5, we shuf\ufb02e the raw input before the embedding strategies. Two shuf\ufb02ing strategies are presented: Shuf. randomly shuf\ufb02es the whole input sequences and Half-Ex. exchanges the \ufb01rst half of the input sequence with the second half. Interestingly, compared with the original setting (Ori.) on the Exchange Rate, the performance of all Transformer-based methods does not \ufb02uctuate even when the input sequence is randomly shuf\ufb02ed. By contrary, the performance of LTSF-Linear is damaged signi\ufb01cantly. These indicate that LTSF-Transformers with different positional and temporal embeddings preserve quite limited temporal relations and are prone to over\ufb01t on noisy \ufb01nancial data, while the LTSF-Linear can model the order naturally and avoid over\ufb01tting with fewer parameters. For the ETTh1 dataset, FEDformer and Autoformer introduce time series inductive bias into their models, making them can extract certain temporal information when the dataset has more clear temporal patterns (e.g., periodicity) than the Exchange Rate. Therefore, the average drops of the two Transformers are 73.28% and 56.91% under the Shuf. setting, where it loses the whole order information. Moreover, Informer still suffers less from both Shuf. and Half-Ex. settings due to its no such temporal inductive bias. Overall, the average drops of LTSF-Linear are larger than Transformer-based methods for all cases, indicating the existing Transformers do not preserve temporal order well. How effective are different embedding strategies? We study the bene\ufb01ts of position and timestamp embeddings used in Transformer-based methods. In Table 6, the forecasting errors of Informer largely increase without positional embeddings (wo/Pos.). Without timestamp embeddings (wo/Temp.) will gradually damage the performance of Informer as the forecasting lengths increase. Since Informer uses a single time step for each token, it is necessary to introduce temporal information in tokens. Methods Embedding Traf\ufb01c 96 192 336 720 FEDformer All 0.597 0.606 0.627 0.649 wo/Pos. 0.587 0.604 0.621 0.626 wo/Temp. 0.613 0.623 0.650 0.677 wo/Pos.-Temp. 0.613 0.622 0.648 0.663 Autoformer All 0.629 0.647 0.676 0.638 wo/Pos. 0.613 0.616 0.622 0.660 wo/Temp. 0.681 0.665 0.908 0.769 wo/Pos.-Temp. 0.672 0.811 1.133 1.300 Informer All 0.719 0.696 0.777 0.864 wo/Pos. 1.035 1.186 1.307 1.472 wo/Temp. 0.754 0.780 0.903 1.259 wo/Pos.-Temp. 1.038 1.351 1.491 1.512 Table 6. The MSE comparisons of different embedding strategies on Transformer-based methods with look-back window size 96 and forecasting lengths {96, 192, 336, 720}. Rather than using a single time step in each token, FEDformer and Autoformer input a sequence of timestamps to embed the temporal information. Hence, they can achieve comparable or even better performance without \ufb01xed positional embeddings. However, without timestamp embeddings, the performance of Autoformer declines rapidly because of the loss of global temporal information. 
Instead, thanks to the frequency-enhanced module proposed in FEDformer to introduce temporal inductive bias, it suffers less from removing any position/timestamp embeddings. Is training data size a limiting factor for existing LTSFTransformers? Some may argue that the poor performance of Transformer-based solutions is due to the small sizes of the benchmark datasets. Unlike computer vision or natural language processing tasks, TSF is performed on collected time series, and it is dif\ufb01cult to scale up the training data size. In fact, the size of the training data would indeed have a signi\ufb01cant impact on the model performance. Accordingly, we conduct experiments on Traf\ufb01c, comparing the performance of the model trained on a full dataset (17,544*0.7 hours), named Ori., with that trained on a shortened dataset (8,760 hours, i.e., 1 year), called Short. Unexpectedly, Table 7 presents that the prediction errors 7 \fwith reduced training data are lower in most cases. This might because the whole-year data maintains more clear temporal features than a longer but incomplete data size. While we cannot conclude that we should use less data for training, it demonstrates that the training data scale is not the limiting reason for the performances of Autoformer and FEDformer. Methods FEDformer Autoformer Dataset Ori. Short Ori. Short 96 0.587 0.568 0.613 0.594 192 0.604 0.584 0.616 0.621 336 0.621 0.601 0.622 0.621 720 0.626 0.608 0.660 0.650 Table 7. The MSE comparison of two training data sizes. Is ef\ufb01ciency really a top-level priority? Existing LTSFTransformers claim that the O \u0000L2\u0001 complexity of the vanilla Transformer is unaffordable for the LTSF problem. Although they prove to be able to improve the theoretical time and memory complexity from O \u0000L2\u0001 to O (L), it is unclear whether 1) the actual inference time and memory cost on devices are improved, and 2) the memory issue is unacceptable and urgent for today\u2019s GPU (e.g., an NVIDIA Titan XP here). In Table 8, we compare the average practical ef\ufb01ciencies with 5 runs. Interestingly, compared with the vanilla Transformer (with the same DMS decoder), most Transformer variants incur similar or even worse inference time and parameters in practice. These follow-ups introduce more additional design elements to make practical costs high. Moreover, the memory cost of the vanilla Transformer is practically acceptable, even for output length L = 720, which weakens the importance of developing a memoryef\ufb01cient Transformers, at least for existing benchmarks. Method MACs Parameter Time Memory DLinear 0.04G 139.7K 0.4ms 687MiB Transformer\u00d7 4.03G 13.61M 26.8ms 6091MiB Informer 3.93G 14.39M 49.3ms 3869MiB Autoformer 4.41G 14.91M 164.1ms 7607MiB Pyraformer 0.80G 241.4M\u2217 3.4ms 7017MiB FEDformer 4.41G 20.68M 40.5ms 4143MiB \u00d7 is modi\ufb01ed into the same one-step decoder, which is implemented in the source code from Autoformer. \u2217236.7M parameters of Pyraformer come from its linear decoder. Table 8. Comparison of practical ef\ufb01ciency of LTSF-Transformers under L=96 and T=720 on the Electricity. MACs are the number of multiply-accumulate operations. We use Dlinear for comparison since it has the double cost in LTSF-Linear. The inference time averages 5 runs. 6. 
Conclusion and Future Work" + }, + { + "url": "http://arxiv.org/abs/2108.07181v2", + "title": "Learning Skeletal Graph Neural Networks for Hard 3D Pose Estimation", + "abstract": "Various deep learning techniques have been proposed to solve the single-view\n2D-to-3D pose estimation problem. While the average prediction accuracy has\nbeen improved significantly over the years, the performance on hard poses with\ndepth ambiguity, self-occlusion, and complex or rare poses is still far from\nsatisfactory. In this work, we target these hard poses and present a novel\nskeletal GNN learning solution. To be specific, we propose a hop-aware\nhierarchical channel-squeezing fusion layer to effectively extract relevant\ninformation from neighboring nodes while suppressing undesired noises in GNN\nlearning. In addition, we propose a temporal-aware dynamic graph construction\nprocedure that is robust and effective for 3D pose estimation. Experimental\nresults on the Human3.6M dataset show that our solution achieves 10.3\\% average\nprediction accuracy improvement and greatly improves on hard poses over\nstate-of-the-art techniques. We further apply the proposed technique on the\nskeleton-based action recognition task and also achieve state-of-the-art\nperformance. Our code is available at\nhttps://github.com/ailingzengzzz/Skeletal-GNN.", + "authors": "Ailing Zeng, Xiao Sun, Lei Yang, Nanxuan Zhao, Minhao Liu, Qiang Xu", + "published": "2021-08-16", + "updated": "2021-08-17", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Single-view skeleton-based 3D pose estimation problem plays an important role in numerous applications, such as human-computer interaction, video understanding, and human behavior analysis. Given the 2D skeletal positions detected by a 2D keypoint detector (e.g., [3, 27, 38]), this task aims to regress the 3D positions of the corresponding joints. It is a challenging task and has drawn lots of attention from academia in recent years. Even though the average performance increases steadily over the years, the prediction errors on some poses are still quite high. Fig. 1 shows some examples in a widely used dataset, Human3.6M [9]. Some actions (e.g., \u201cSit\u201d and \u201cSit Down\u201d) contain many poses with depth ambiguity, self-occlusion, or complex poses. Also, there inevitably exist some poses rarely seen in the training dataset. Similar to the de\ufb01nition of hard examples in object detection [17] and semantic segmentation [15], we collectively regard those poses with high prediction errors as hard poses. Early attempts [24, 29] simply use fully-connected networks (FCN) to lift the 2D keypoints into 3D space. However, the dense connection of FCN is prone to over\ufb01t, leading to relatively poor performance. To tackle this problem, geometric dependencies are incorporated into the network in [7, 28, 39, 46], which signi\ufb01cantly improve prediction accuracy. As articulated human body can be naturally modeled as a graph, with the recent development of graph neural networks (GNN) [11, 13, 1, 41, 47], various GNN-based methods [49, 6, 20, 2, 51] are proposed in the literature for 2D-to-3D pose estimation. GNN-based solutions naturally capture the relationship between body joints. For a target node, aggregating features from its neighboring nodes facilitates bringing in semantic information to relieve the uncertainty in estimating its 3D position. 
In other words, for the estimation of a particular node in the graph (e.g., left hand), both its direct neighbor (i.e., left elbow) and other nodes that are multiple hops away in the graph (e.g., left shoulder and even right foot in some poses) may provide useful information that contributes to the position estimation of the target node, and the learning of skeletal graph neural networks is to capture such context information for better 2D-to-3D pose estimation. However, existing GNN-based solutions do not fully tap the potential of the skeleton graph. The reasons are two-fold: \u2022 The power of graph neural networks lies in the aggregation of neighboring nodes, which, however, contributes both useful information and undesired noises. On the one hand, aggregating distant nodes in the skeletal graph does provide useful information; On the other hand, the more distant the nodes, the more likely undesired noises are introduced into the aggregation procedure. Existing works do not consider such signalto-noise issues in message passing over the GNN. \u2022 The relationship between body joints varies with different poses. For example, for poses in \u201crunning\u201d, the hand-foot joints are closely related, while for poses in \u201cSitting\u201d, there is no such strong relationship. It is rather dif\ufb01cult to capture such information with a static 1 arXiv:2108.07181v2 [cs.CV] 17 Aug 2021 \f23 32 41 50 59 Walk WalkToge Eat Greet Purcha. Direct Smoke Phone WalkDog Discuss Wait Pose Photo Sit SitDown MPJPE (mm) FCN[24] Multi-task[23] SRNet[46] Global-GCN[20] High-order GCN[51] (5) Complex Pose (6) Rare Pose (4) Self-occlusion (3) Depth Ambiguity (1) Easy Poses (2) Moderate Pose Figure 1: The examples of two easy poses, one moderate pose, and four kinds of hard poses in 2D-to-3D pose estimation with 2D detected poses as inputs (shown in the images). Although rapid progress has been made in this \ufb01eld, both non-graph methods and graph-based ones yield large prediction error on these hard poses. skeleton graph across all poses. This work proposes novel skeletal GNN learning solutions to mitigate the above problems, especially for hard poses. Our contributions are summarized as follows: \u2022 We propose a hop-aware hierarchical channelsqueezing fusion layer to extract relevant information from neighboring nodes effectively while suppressing undesired noises. This is inspired by the feature squeezing works [50, 44, 45], wherein channel size is reduced to keep valuable information in each layer. Speci\ufb01cally, we squeeze long-range context features (i.e., information from distant nodes) and fuse them with short-range features in a hierarchical manner. \u2022 Inspired by GNN-based action recognition work [36, 43, 21], we build dynamic skeletal graphs, wherein the edges between nodes are not only from the \ufb01xed human skeleton topology but also the node features to capture action-speci\ufb01c poses. To cope with the change of dynamic graphs over time and relieve outliers from frame-level features, we further integrate temporal cues into the learning process of dynamic graphs. The proposed temporal-aware dynamic graph construction procedure is robust and effective for 2Dto-3D pose estimation. We conduct experiments on Human3.6M dataset [9], and the proposed solution outperforms state-of-the-art techniques [46] in 3D pose estimation by 10.3% on average, and greatly improves on hard poses. Compared to stateof-the-art GNN-based solutions, we surpass [6] by 16.3%. 
As the proposed method is a plug-and-play module, we further integrate it into the skeleton-based action recognition framework, achieving state-of-the-art performance. 2. Preliminaries and Motivation This work focuses on GNN-based 3D pose estimation. We \ufb01rst describe the general skeletal GNN construction Hop-2: Dynamic Edge: Target Node: Hop-1: Physical Edge: 1 4 2 3 5 6 7 Short-range -> Long-range: 1 4 2 3 5 6 7 (1) Standard Physical Topology (2) Physical with Dynamic Topology 8 9 9 8 Figure 2: The illustration of a skeletal graph with human physical edges and action-speci\ufb01c dynamic edges. procedure in Sec. 2.1. Next, we discuss existing GNNbased solutions for 3D pose estimation in Sec. 2.2. Finally, Sec. 2.3 motivates this work. 2.1. Skeletal Graph Neural Network The human skeleton can be naturally modeled as a graph as shown in Fig. 2(a). The nodes in the graph represent the 2D positions of human joints, and the edges between two joints denote bone connections. Hop-k demonstrates the shortest path k between two nodes. For instance, take node 1 (right shoulder) as the target node, node 2 (right elbow) and node 3 (neck) are its hop-1 neighbors (masked in dark blue); while nodes 4 (right hand), 5 (head), 6 (left shoulder), and 7 (spine) are its hop-2 neighbors (masked in light blue). In a graph neural network, adjacency matrix determines the information passing among nodes, and the objective of the learning procedure is to obtain each node\u2019s features by aggregating features from its neighboring nodes. As hop-k increases during message passing, the information passed from the corresponding neighbors varies from short-range context to long-range context. 2.2. GNN-Based 3D Human Pose Estimation Recently, various GNN-based methods [49, 6, 2, 19, 20, 51] are proposed in the literature for 3d pose estimation. As 2 \fweight sharing strategy restricts the representation power, a locally connected network (LCN) [6] is proposed to learn the weight of each node individually for enhancing the representation differences among nodes, achieving better generalization capability. High-order GCN [51] explores different aggregation methods on high-order neighbors to capture the long-range dependencies among nodes. However, it may introduce more noises from less-related nodes without differentiating the impacts between short-range and long-range contexts. Furthermore, some recent works [49, 6] try to learn the edge weights of the skeletal graph. However, without changing the graph topology, the effectiveness of such a dynamic solution is limited, especially for rare poses. 2.3. Observation and Motivation Fig. 1 shows the prediction accuracy of recent 3D pose estimation methods [24, 32, 46, 20, 51]. As can be observed from this \ufb01gure, most of them suffer from poor performance for some complex actions, such as, \u201cSit,\u201d \u201cSit Down,\u201d and \u201cTake Photos.\u201d We attribute this phenomenon to that these hard poses require both short-range and long-range context information for better estimation. Meanwhile, existing solutions do not fuse them effectively in the learning process. Distant neighbors pass not only valuable semantic information but also irrelevant noise. Existing GNN-based methods try to aggregate the semantic information in both short-range and long-range neighbors. However, they ignore that the messages passed from distant neighbors also contain irrelevant noise. 
For example, \u201dWalking\u201d has a clear pattern, which contains strong correlation between arm and leg. Intuitively, we need to take both of them into account. However, due to some personalized style, besides the action pattern, there is also some heterogeneous noise. Therefore, it is necessary to suppress such irrelevant noise. Interestingly, through experiments, we observe that such noise is sensitive to the channel dimension. The channel dimension can constrain the amount of information and noise passed among nodes in a skeleton graph. In other words, an effective channel squeezing strategy could \ufb01lter out undesired noise while keeping valuable information. Consequently, we propose a hop-aware hierarchical channelsqueezing transform on long-range features to improve aggregation effectiveness in skeletal GNN learning. Dynamic graph construction is useful but should be delicately designed. Existing GNN-based methods construct the graph based on the physical skeleton topology [6, 49], like Fig. 2(a). However, the strong hidden relationships among nodes vary with actions. By constructing a dynamic graph as shown in Fig. 2(b), such a relationship might be more useful than physical features. Although the dynamic graphs seem intuitive for representing different motion-speci\ufb01c relations, it usually seems vulnerable to the A + + Figure 3: The framework of our method. The key is a specially designed module called Dynamic Hierarchical Channel-Squeezing Fusion layer (D-HCSF), shown in Fig. 4 with details. single-frame outliers. Thus, we introduce temporal information to make dynamic graph learning robust. 3. Method In this work, our goal is to reduce errors of 3D human pose estimation, especially on hard poses. More speci\ufb01cally, given 2D keypoints X \u2208RN\u00d72, with N nodes, the model outputs better 3D positions Y \u2208RN\u00d73. The framework is designed based on the observations and motivations in Sec. 2.3, and shown in Fig 3. The core of our framework is the module: a Dynamic Hierarchical Channel-Squeezing Fusion Layer (D-HCSF) shown in Fig 4. It contains a hierarchical channel-squeezing fusion scheme for updating features of each node (Fig 4), and a temporal-aware dynamic graph learning component for updating the dynamic graph. In this section, we \ufb01rst revisit the formulation of generic GCN [11] and LCN [6] in Sec. 3.1. Then, we introduce our hierarchical channel-squeezing fusion scheme in Sec. 3.2. Finally, we propose the dynamic graph learning and consider temporal-aware strategy in this process in Sec 3.3. 3.1. Vanilla Graph Neural Network Given a graph G = (V, E), it consists of the nodes V and the edges E. We revisit a generic GCN [11] layer de\ufb01ned as follows: H = \u03c3( \u02c6 AXW), (1) where A \u2208RN\u00d7N is an adjacency matrix with N nodes, indicating the connections between nodes. If the joint j is dependent on the joint i, then aij = 1. Otherwise, the connections are set to zero aij = 0. We denote the input node features as X \u2208RN\u00d7Cin, the learnable weight matrix as W \u2208RCin\u00d7Cout, and the activation function as \u03c3(\u00b7). For simpli\ufb01cation, we ignore the \u03c3(\u00b7) in the following formulas. The GCN\u2019s representation power is limited by weight sharing strategy in node regression problem, while Ci et al. [6] propose a locally connected network (LCN), which introduces the node-wise trainable parameters to enhance the differences among node features. 
This aggregation scheme learns different relations among different nodes. Accordingly, we recap its basic formulation. For clarity purposes, we take the embedding learning of node i from the direct neighbors in a layer as an example: 3 \f\ud835\udc4a \ud835\udc58,\u03b8(1, \ud835\udc6d) \u00d7 \ud835\udc36\u210e\ud835\udc47 \u210e\u00d7 \ud835\udc41] [\ud835\udc41\u00d7 \ud835\udc36\u210e\ud835\udc47 \u210e] Tanh [\ud835\udc41\u00d7 \ud835\udc41] \ud835\udc76\ud835\udc8c \ud835\udc74\ud835\udc8c + [\ud835\udc43\ud835\udc5f\ud835\udc52-\ud835\udc51\ud835\udc52\ud835\udc53\ud835\udc56\ud835\udc5b\ud835\udc52\ud835\udc51] [\ud835\udc36\ud835\udc56\ud835\udc5b\ud835\udc47\ud835\udc56\ud835\udc5b\u00d7 \ud835\udc41] [\ud835\udc36\ud835\udc56\ud835\udc5b\u00d7 \ud835\udc47\ud835\udc56\ud835\udc5b\u00d7 \ud835\udc41] [\ud835\udc6a\ud835\udc8c,\ud835\udc90\ud835\udc96\ud835\udc95\u00d7 \ud835\udc47\ud835\udc56\ud835\udc5b\u00d7 \ud835\udc41] X: [\ud835\udc36\ud835\udc56\ud835\udc5b\u00d7 \ud835\udc47\ud835\udc56\ud835\udc5b\u00d7 \ud835\udc41] \u00d7 \u00d7 [\ud835\udc36\ud835\udc56\ud835\udc5b\ud835\udc47\ud835\udc56\ud835\udc5b\u00d7 \ud835\udc41] Hierarchical Fusion Layer Channel-Squeezing \ud835\udc4a \ud835\udc58,\u03a6(1, \ud835\udc6d) Figure 4: The architecture of our Dynamic Hierarchical Channel-Squeezing Fusion (D-HCSF) layer under k hops. Each dotted box consists of two streams: a weighted graph learning branch based on \ufb01xed physical edges (blue lines) and a dynamic graph learning branch to update the graph based on the node features adaptively (orange lines). hi = X j\u2208N1,i (\u02c6 aijxjWij), (2) where N1,i contains the self-node and direct neighbors (hop=1) of the node i. We denote \u02c6 aij as the value of ith row and jth column in the adjacency matrix \u02c6 A, which distinctly aggregates features among neighbors. xj is the inputs of the neighbor j. Wij \u2208RCin\u00d7Cout denotes the learnable weights between the node pair (i, j), and hi is the updated features of the node i. Hence, the \ufb01nal output H is represented by the concatenation of all node features. 3.2. Hierarchical Channel-Squeezing Fusion Layer Inspired by the \ufb01rst observation in Sec. 2.3, we \ufb01nd (i) hierarchical spatial features are important to capture better short-to-long range context; (ii) distinguishing short-range and long-range context in fusion strategies is necessary to relieve irrelevant long-range context while keeping their essential components in hard pose estimation. Thus, as illustrated in Fig. 5 (c), we propose a hierarchical channelsqueezing fusion layer to reach the above hypothesis. Hierarchical fusion layer. Accordingly, we take nodewise aggregation LCN [6] as a baseline. To capture different ranges in spatial context, we generalize Eq. 2 by modifying the direct neighbors N1,i to hop-k neighbors Nk,i. Then, we can get updated features hk,i from hop-k neighbors. To integrate multi-hop features in a layer, we propose a hierarchical fusion block as follows. It consists of two parts. First, we consider the short-range features hS,i within hop-S, which contain the most essential context of the target node i. We thus keep the whole information without squeezing them. We then de\ufb01ne a farthest hop L to obtain the potentially useful information as a set of long-range context HL,i de\ufb01ned as: hS,i = X j\u2208NS,i (\u02c6 aijxjWij), (3) HL,i = {hk,i|k = S + 1, ..., L}, (4) where S is less than or equal to L, set empirically. 
To fuse features from different contexts, we introduce two fusion functions, F_k and F_a, forming a two-stage procedure: F_k first transforms the set of long-range features H_{L,i} into a fused long-range feature h_{L,i}, and F_a then fuses h_{L,i} with the short-range feature h_{S,i} to obtain h_{a,i}. We refer to this two-step fusion scheme as the hierarchical fusion block. Finally, we process h_{a,i} through a transformation W_a to obtain the final output h_i with a pre-defined dimension. Formally, the final output h_i of this fusion layer is:

h_i = F_a[h_{S,i}, F_k(H_{L,i})] W_a.   (5)

Channel-Squeezing block. To retain useful information while suppressing the irrelevant part of the long-range context, we hypothesize that more distant context contains less relevant information. Hence, we propose a set of bottleneck transformations, named Channel-Squeezing Blocks, that filter out irrelevance in an end-to-end learning scheme. To realize this hypothesis, we let the output size C_{k,out} of the learnable matrix W_{ij} \in R^{C_{in} \times C_{k,out}} differ across the hops of the long-range context H_{L,i}. Moreover, a theoretical information gain (IG) analysis over hops [8], measured by the average KL-divergence, shows that the information gain IG(k) of hop k fits the function a e^{-bk} (a, b > 0) well, meaning that the information gained from long-range context decreases exponentially. Accordingly, we propose a simplified relation function that decides the output size C_{k,out} of each long-range context to reflect the decay of useful information as k increases:

C_{k,out} = d^{(k-L)} \cdot C_{in},   (6)

where d \in [0, 1] is the channel-squeezing ratio and k \in [S+1, L]. For the short-range context h_{S,i}, the output channel size of W_{ij} stays at C_{in} without squeezing, to reduce the loss of useful features. Compared with Eq. 2, LCN is a special case of the HCSF module in which C_{k,out} is constant across hops and F_k, F_a are summations.

Figure 5: The architectures of (a) Graph Convolution Network (GCN) [11], (b) Locally Connected Network (LCN) [6], and (c) our Hierarchical Channel-Squeezing Fusion (HCSF), taking the feature update of node 1 as an example; node indices correspond to Fig. 2. Note that the HCSF layer can also be adapted to other graph-based frameworks, such as GCN [11] and Pre-Agg/Post-Agg [20]; the difference lies in the weight-sharing scheme, which is orthogonal to our approach.
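A minimal sketch of one HCSF layer (Eqs. 5-6) follows. Two simplifying assumptions are made here and should be read as illustrative, not authoritative: concatenation stands in for both F_k and F_a, shared per-hop linear maps replace the pair-wise weights W_ij, and the channel-decay exponent is written relative to S so that widths shrink monotonically beyond the short range.

```python
import torch
import torch.nn as nn

def squeeze_schedule(c_in, s_hop, l_hop, d=0.0625):
    """Per-hop output widths: full C_in up to hop S, then geometric decay with
    squeezing ratio d for the long-range hops (the behavior targeted by Eq. 6)."""
    return {k: (c_in if k <= s_hop else max(1, int(c_in * d ** (k - s_hop))))
            for k in range(1, l_hop + 1)}

class HCSF(nn.Module):
    """Hierarchical Channel-Squeezing Fusion (Eq. 5), concat as F_k and F_a."""
    def __init__(self, n_nodes, c_in, c_out, s_hop=1, l_hop=3, d=0.0625):
        super().__init__()
        self.dims = squeeze_schedule(c_in, s_hop, l_hop, d)
        self.proj = nn.ModuleDict(
            {str(k): nn.Linear(c_in, c, bias=False) for k, c in self.dims.items()})
        self.W_a = nn.Linear(sum(self.dims.values()), c_out, bias=False)

    def forward(self, x, a_hat):          # x: [N, C_in]; a_hat: {hop k: [N, N]}
        h = [a_hat[k] @ self.proj[str(k)](x) for k in sorted(self.dims)]
        return self.W_a(torch.cat(h, dim=-1))  # h_i = F_a[h_S, F_k(H_L)] W_a
```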
3.3. Temporal-aware Dynamic Graph Learning

In this subsection, we present a framework for dealing with dynamic skeleton topology. Based on the second observation in Sec. 2.3, a static graph makes it hard for the model to properly reflect inter-joint relations, especially for hard poses.

Learning dynamic graph. We learn the dynamic graph from two streams: one updated from the physical skeleton topology, and the other updated from node features. For the first stream (Fig. 4, blue lines), we initialize the graph M_k as the physical skeleton topology, following SemGCN [49]. This graph is updated during training; after training, M_k stays fixed during inference and reflects the distribution of the whole dataset. As poses change, however, the connections should also vary during testing to capture motion-specific relations. Hence, we introduce a second stream (Fig. 4, orange lines) that learns the graph dynamically from the input features, so that the graph adapts to the input pose at inference time. The input feature X is separately transformed by W_{k,\theta} and W_{k,\phi} for the two branches; multiplying the two transformed features yields an N \times N matrix indicating the relations among nodes:

O_k = F\{[X^{tr} W_{k,\theta}][(W_{k,\phi})^{tr} X]\},   (7)

where F denotes the activation function (i.e., Tanh) and tr denotes the matrix transpose. The output O_k generates unique connections that change adaptively with the input X. Since learning dynamic graphs directly from input features may be sensitive to outliers (e.g., jitter and missing joints), we regard O_k as dynamic offsets that refine the weighted graph M_k. Hence, the final formulation of the hop-k dynamic graph is:

A_k = M_k + \alpha O_k,   (8)

where \alpha is a learnable scalar that adjusts the scale of the dynamic offsets. We then aggregate the hop-k features with A_k; the subsequent fusion block is the same as in Eq. 5.

Temporal-aware dynamic graph. The connections between nodes naturally vary over space and time, and a frame-level dynamic graph suffers from unstable inputs; it is therefore promising to optimize dynamic graph learning over spatial and temporal context as a whole. With temporal information, the input gains a time dimension, X \in R^{C_{in} \times T_{in} \times N}. We develop a temporal-aware scheme for Eq. 7: instead of applying linear transformations only in the spatial domain, we integrate temporal context with a 1D temporal convolution layer of kernel size 1 \times F. It smooths and filters the spatial outliers in the input X, making the learning process robust.
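To make Eqs. 7-8 and the temporal-aware variant concrete, here is a minimal PyTorch sketch; the batching convention, module names, and a learnable alpha initialized to zero are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TemporalDynamicGraph(nn.Module):
    """Dynamic hop-k graph A_k = M_k + alpha * O_k (Eqs. 7-8)."""
    def __init__(self, skeleton_adj, c_in, c_hid, t_kernel=3):
        super().__init__()
        self.M = nn.Parameter(skeleton_adj.clone())   # init from physical topology
        self.alpha = nn.Parameter(torch.zeros(1))     # learnable offset scale
        # temporal-aware transforms: 1D conv over time instead of a plain linear map
        self.theta = nn.Conv1d(c_in, c_hid, t_kernel, padding=t_kernel // 2)
        self.phi = nn.Conv1d(c_in, c_hid, t_kernel, padding=t_kernel // 2)

    def forward(self, x):                             # x: [B, C_in, T, N]
        b, c, t, n = x.shape
        xf = x.permute(0, 3, 1, 2).reshape(b * n, c, t)  # convolve along time
        q = self.theta(xf).reshape(b, n, -1)             # [B, N, C_hid * T]
        k = self.phi(xf).reshape(b, n, -1)
        o = torch.tanh(q @ k.transpose(1, 2))            # O_k: [B, N, N] (Eq. 7)
        return self.M + self.alpha * o                   # A_k (Eq. 8)
```

The temporal convolution acts as the smoothing filter described above: relations are inferred from a short window of frames rather than a single, possibly jittery frame.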
4. Experiment

In this section, we perform experimental studies to demonstrate the effectiveness of the proposed solution.

4.1. 3D Human Pose Estimation

4.1.1 Data Description

- Human3.6M [9] consists of 3.6 million video frames with 15 actions from 4 camera viewpoints, where accurate 3D human joint positions are captured by a high-speed motion capture system. Following previous works [6, 2, 46, 24], we adopt the standard cross-subject protocol with 5 subjects (S1, S5, S6, S7, S8) as the training set and 2 subjects (S9, S11) as the test set. It is commonly evaluated with two metrics: the mean per joint position error (MPJPE) over the 17 joints of each subject, and the Procrustes Analysis MPJPE (PA-MPJPE), which relieves the inherent scale, rotation, and translation ambiguities.

- MPI-INF-3DHP [25, 26] contains both constrained indoor scenes and complex outdoor scenes, covering a greater diversity of poses and actions; it is usually used in a cross-dataset setting to verify the generalization ability of the proposed methods. For evaluation, we follow common practice [6, 39, 46] by using the Percentage of Correct Keypoints (PCK) with a threshold of 150mm and the Area Under Curve (AUC) over a range of PCK thresholds.

Table 1: Comparison of single-frame 3D pose estimation in terms of MPJPE on Human3.6M. Methods above the separator use detected 2D poses as input; those below use 2D ground truth inputs to explore the upper bound of these methods. Graph-based methods are marked with †; w/A denotes using dynamic graphs. Best results in bold.

Method | Direct Discuss Eat Greet Phone Photo Pose Purcha. Sit SitD Smoke Wait WalkD Walk WalkT | Avg.
Luvizon et al. [23] | 63.8 64.0 56.9 64.8 62.1 70.4 59.8 60.1 71.6 91.7 60.9 65.1 51.3 63.2 55.4 | 64.1
Martinez et al. [24] | 51.8 56.2 58.1 59.0 69.5 78.4 55.2 58.1 74.0 94.6 62.3 59.1 65.1 49.5 52.4 | 62.9
Park et al. [28] | 49.4 54.3 51.6 55.0 61.0 73.3 53.7 50.0 68.5 88.7 58.6 56.8 57.8 46.2 48.6 | 58.6
Wang et al. [39] | 47.4 56.4 49.4 55.7 58.0 67.3 46.0 46.0 67.7 102.4 57.0 57.3 41.1 61.4 40.7 | 58.0
Zhao et al. [49]† | 47.3 60.7 51.4 60.5 61.1 49.9 47.3 68.1 86.2 55.0 67.8 61.0 42.1 60.6 45.3 | 57.6
Zou et al. [51]† | 49.0 54.5 52.3 53.6 59.2 71.6 49.6 49.8 66.0 75.5 55.1 53.8 58.5 40.9 45.4 | 55.6
Liu et al. [20]† | 48.4 53.6 49.6 53.6 57.3 70.6 51.8 50.7 62.8 74.1 54.1 52.6 58.2 41.5 45.0 | 54.9
Ci et al. [6]† | 46.8 52.3 44.7 50.4 52.9 68.9 49.6 46.4 60.2 78.9 51.2 50.0 54.8 40.4 43.3 | 52.7
Liu et al. [19]† | 46.3 52.2 47.3 50.7 55.5 67.1 49.2 46.0 60.4 71.1 51.5 50.1 54.5 40.3 43.7 | 52.4
Pavllo et al. [29] | 47.1 50.6 49.0 51.8 53.6 61.4 49.4 47.4 59.3 67.4 52.4 49.5 55.3 39.5 42.7 | 51.8
Cai et al. [2]† | 46.5 48.8 47.6 50.9 52.9 61.3 48.3 45.8 59.2 64.4 51.2 48.4 53.5 39.2 41.2 | 50.6
Zeng et al. [46] | 44.5 48.2 47.1 47.8 51.2 56.8 50.1 45.6 59.9 66.4 52.1 45.3 54.2 39.1 40.3 | 49.9
Ours-HCSF† | 43.4 49.7 45.1 47.6 50.7 57.5 47.1 45.9 56.5 61.1 49.8 47.1 51.4 35.8 37.8 | 48.4
Ours-HCSF w/A† | 43.1 50.4 43.9 45.3 46.1 57.0 46.3 47.6 56.3 61.5 47.7 47.4 53.5 35.4 37.3 | 47.9
(below: 2D ground truth inputs)
Martinez et al. [24] | 37.7 44.4 40.3 42.1 48.2 54.9 44.4 42.1 54.6 58.0 45.1 46.4 47.6 36.4 40.4 | 45.5
Pham et al. [32] | 36.6 43.2 38.1 40.8 44.4 51.8 43.7 38.4 50.8 52.0 42.1 42.2 44.0 32.3 35.9 | 42.4
Zhao et al. [49]† | 37.8 49.4 37.6 40.9 45.1 41.4 40.1 48.3 50.1 42.2 53.5 44.3 40.5 47.3 39.0 | 43.8
Wang et al. [39] | 35.6 41.3 39.4 40.0 44.2 51.7 39.8 40.2 50.9 55.4 43.1 42.9 45.1 33.1 37.8 | 42.0
Liu et al. [20]† | 36.2 40.8 33.9 36.4 38.3 47.3 39.9 34.5 41.3 50.8 38.1 40.1 40.0 30.3 33.0 | 38.7
Liu et al. [19]† | 36.8 40.3 33.0 36.3 37.5 45.0 39.7 34.9 40.3 47.7 37.4 38.5 38.6 29.6 32.0 | 37.8
Ci et al. [6]† | 36.3 38.8 29.7 37.8 34.6 42.5 39.8 32.5 36.2 39.5 34.4 38.4 38.2 31.3 34.2 | 36.3
Zeng et al. [46] | 32.9 34.5 27.6 31.7 33.5 42.5 35.1 29.5 38.9 45.9 33.3 34.9 34.4 26.5 27.1 | 33.9
Ours-HCSF† | 29.0 34.1 27.3 31.7 28.8 34.8 34.4 27.3 33.5 38.9 30.4 32.3 29.7 24.6 25.2 | 30.8
Ours-HCSF w/A† | 26.8 33.2 26.7 30.0 30.8 36.7 31.5 27.4 33.1 38.0 30.8 31.8 30.3 23.9 25.0 | 30.4

4.1.2 Method Comparison

In terms of input frames, single-view 3D pose estimation can be divided into single-frame and temporal settings. We first compare our HCSF module and dynamic graph learning with previous works under the single-frame setting, and then extend to the temporal setting to compare related works with our temporal-aware dynamic graph learning with the HCSF scheme.

Comparison with single-frame methods. As shown in Tab. 1, we compare our methods with other baselines.
Under the standard protocol with 2D detected inputs [3], our method improves the graph-based method [2] from 50.6mm to 47.9mm (a relative 5.3% improvement) and surpasses the non-graph-based method [46] by 2.0mm (relative 4.0%). Since the quality of detected 2D poses introduces input uncertainty, it is also informative to use 2D ground truth as input to explore the upper bound of these methods. With 2D ground truth inputs, our proposed model improves the graph-based state of the art [6] from 36.3mm to 30.4mm (relative 16.3%). Although LCN [6] aggregates long-range (L = 3) information to relieve depth ambiguities, it ignores the fact that distant joints may bring more disruptions even though they still contain certain useful information. The proposed HCSF module accounts for this effect by squeezing the different hop features into different latent spaces and then hierarchically fusing them. Moreover, our method surpasses the state-of-the-art non-GNN method [46] by 3.5mm (relative 10.3%), which further proves its effectiveness among general methods.

Comparison with temporal methods. We compare with temporal methods using nine frames as input; Tab. 2 shows the comparison in terms of average error. For all methods, we select their reported results with similar input frames. The proposed method consistently outperforms previous approaches, and the temporal-aware dynamic graph construction further improves the result by 0.7mm.

Table 2: Comparison on temporal 2D detected pose inputs with similar numbers of input frames (5, 3, 9, 7, 10, 9, and 9 frames, respectively). w/o T denotes using dynamic graphs without the temporal-aware scheme.

Method | Hossain et al. [34] | Lee et al. [12] | Pavllo et al. [29] | Cai et al. [2] | Lin et al. [16] | Ours w/o T | Ours
MPJPE (mm) | 58.3 | 52.8 | 49.8 | 48.8 | 48.8 | 46.4 | 45.7

Improvements on hard poses. As discussed earlier, we define hard poses as those with high prediction errors, and they are model-specific. That is, while hard poses have some inherent characteristics (e.g., depth ambiguity and self-occlusion), they are handled differently by different models [6, 49, 46, 51]. Consequently, a pose that exhibits large errors on one model may show satisfactory results on another, and vice versa (we show more detailed analyses in the supplemental material). Statistically speaking, however, a model that handles hard poses better would have the following properties: (1) actions with high prediction errors are improved more; (2) the proportion of poses with high errors is smaller; (3) the upper bound of the high-error poses is smaller. Compared with the state-of-the-art solution [46], our method reduces the prediction errors by 7.9mm, 6.1mm, 5.8mm, and 5.8mm (relative 17.2%, 18.5%, 14.9%, and 13.6% improvements) on the actions "SitDown", "Direct", "Sit", and "Photo", respectively. The average improvement on hard poses in Tab. 1 is 16.1%. Next, in Fig. 6, we compare the error distribution on the test set with four existing solutions [24, 49, 6, 46]. There are far fewer poses with high prediction errors under our proposed solution: only 3.6% of cases have MPJPE above 60mm, while it is more than 6% for all the other methods.
In fact, the number of cases with MPJPE above 40mm is consistently lower, and the number of cases with MPJPE below 30mm consistently higher, with our solution than with the other methods.

Figure 6: MPJPE distribution on the test set of Human3.6M (proportion of poses per MPJPE interval, from <20mm to >60mm, for FCN [24], SemGCN [49], LCN [6], SRNet [46], and ours).

Last, we present the mean errors for the top 5% hardest poses of the five methods in Fig. 7: ours is 70.7mm, which is 13.8% and 17.1% smaller than the SOTA methods LCN (82.0mm) and SRNet (85.3mm), respectively.

Figure 7: Mean-error comparison on the 5% hardest poses: FCN [24] 98.7mm, SemGCN [49] 95.3mm, LCN [6] 82.0mm, SRNet [46] 85.3mm, ours 70.7mm.

We also show visualization results in Fig. 8, compared with the SOTA methods (LCN above, SRNet below). In summary, the above results demonstrate the benefits of the proposed technique on hard poses.

Comparison on MPI-INF-3DHP. We further test our model, trained on the Human3.6M dataset, on the MPI-INF-3DHP dataset to verify its generalization capability. Tab. 3 shows improvements of about 5.5% on all metrics over related methods.

Table 3: Cross-dataset results on MPI-INF-3DHP.

Metric | FCN [24] | OriNet [22] | LCN [6] | SRNet [46] | Ours
Outdoor | 31.2 | 65.7 | 77.3 | 80.3 | 84.6 (↑5.4%)
All PCK | 42.5 | 65.6 | 74.0 | 77.6 | 82.1 (↑5.8%)
All AUC | 17.0 | 33.2 | 36.7 | 43.8 | 46.2 (↑5.5%)

Figure 8: Qualitative results on hard poses, covering (1) depth ambiguity, (2) self-occlusion, (3) rare poses, and (4) complex poses. 3D ground truth, SOTA methods, and ours are shown in black, red, and blue, respectively.

Table 4: Comparison of aggregation without the hierarchy strategy (Non-hierarchy), with the hierarchy scheme but treating all hops equally (Hierarchy w/o hop-aware), and our full design (Hierarchy).

L | 2 | 3 | 4 | 5 | 6
Non-hierarchy | 37.2 | 36.2 | 37.7 | 37.4 | 39.9
Hierarchy w/o hop-aware | 34.6 | 34.2 | 34.7 | 35.4 | 36.2
Hierarchy | 32.6 | 32.9 | 33.7 | 34.4 |

4.2. Ablation Study

To study important design choices in the proposed method, we take 2D ground truth poses as inputs and adopt MPJPE as the evaluation metric.

Impact of hierarchical fusion. Tab. 4 shows that (1) the two hierarchical designs consistently outperform Non-hierarchy by 2mm-3.7mm under different L, indicating that fusing long-range and short-range information in a single stage is inappropriate; (2) without considering the different contributions of different hops, Hierarchy w/o hop-aware is inferior to Hierarchy, with a consistent performance drop of 1.6mm-1.8mm, illustrating the importance of processing long-range contexts according to their hop. We fix S = 1 and d = 1/16 by default.

Impact of squeezing ratio d. We set S = 1, L = 2 to study the influence of the squeezing ratio d. Tab. 6 shows that as d decreases, the MPJPE first decreases and then increases. Since d controls the output channel size of different hop features, a small d means a small output dimension for the channel-squeezing transform. Decreasing d at first reduces the irrelevant information from long-range context, and thus the pose estimation error; when d takes the extreme value of 1/16, however, useful information is also substantially squeezed, leading to a performance drop.

Impact of S and L. As shown in Tab. 7, S = 1 yields consistently good results under different L.
As features within hop S are not squeezed, this is in line with our intuition that the direct neighbors provide the most relevant information to the target node. Besides, random combinations of S and L surpass the strong baseline LCN [6], demonstrating the effectiveness of our design.

Table 5: Effects of dynamic graph learning A_k. "ori" is the static graph with physical connections; Baseline takes A_k as ori. "Only M_k (·)" applies M_k with different initializations; "Only O_k" keeps only the dynamic offset of Eq. 8; "M_k + O_k" sets α = 1 in Eq. 8; "w/T" denotes the temporal-aware scheme defined in Sec. 3.3.

Variant | Baseline | (a) Only M_k (ori) | (b) Only M_k (dense) | (c) Only M_k (rand) | (d) Only O_k | (e) M_k + O_k | (f) Eq. 8 | (g) Eq. 8 w/T
MPJPE (mm) | 30.8 | 32.1 | 35.7 | 40.8 | 44.3 | 30.5 | 30.4 | 29.7

Table 6: Influence of the squeezing ratio d.

d | 1 | 1/2 | 1/4 | 1/8 | 1/16
MPJPE (mm) | 35.4 | 34.7 | 33.8 | 31.4 | 32.6

Impact of the dynamic graph learning. In Sec. 3.3, we introduce a dynamic graph A_k consisting of a learnable graph M_k and a graph offset O_k learned from input features. As Tab. 5(a), 5(b), and 5(c) illustrate, the initialization of dynamic graph learning is important: fully connected (Tab. 5b) and random (Tab. 5c) initializations are hard to converge well, whereas taking the physical topology as the initial graph (Tab. 5a) achieves better results (32.1mm). Learning only the dynamic offsets (Tab. 5d) leads to a severe performance drop, since relying solely on input features weakens the capability of dealing with data noise. Adding the weighted graph M_k and the dynamic offsets O_k (Tab. 5e) obtains a 0.3mm gain over the baseline. Moreover, although the dynamic graph is good at representing motion-specific relations, it is vulnerable to single-frame outliers; after considering the temporal context, we further improve the baseline from 30.8mm to 29.7mm.

Table 7: The influence of hop S and hop L.

L | 1 | 2 | 3 | 4 | 5 | 6
S = 0 | 35.5 | 34.6 | 34.8 | 34.7 | 35.4 | 36.1
S = 1 | - | 32.6 | 33.4 | 34.0 | 34.7 | 35.7
S = 2 | - | - | 34.9 | 35.1 | 35.6 | 36.2
LCN [6] | 38.0 | 35.7 | 36.2 | 37.7 | 37.4 | 39.9

4.3. Skeleton-based Action Recognition

As a plug-and-play module, we integrate the proposed HCSF layer and temporal-aware dynamic graph learning into skeleton-based action recognition. Given a sequence of human skeleton coordinates in videos, this task categorizes them into a predefined action class. We conduct experiments on two commonly used datasets, NTU RGB+D 60 [35] and NTU RGB+D 120 [18]. Due to space limits, detailed dataset descriptions, implementation details, and the ablation study are in the supplemental material. Following related works [42, 36, 37, 21, 5, 43], we employ a GNN model with ten spatial-temporal convolution layers as the baseline framework and adapt the proposed framework shown in Fig. 3 to this task by changing the final regression branch into a classification branch. Our results use the multi-stream framework as in [37, 5]; single-stream results are in the supplemental material. Tab. 8 shows that our solution outperforms SOTA on both benchmarks. In particular, the top-1 accuracy is 89.2% and 87.5% in the X-Set and X-Sub settings on the more complex dataset [18], surpassing state-of-the-art solutions despite their sophisticated attention modules [37] and per-layer temporal-dilated combinations [21].
Our method constructs robust dynamic relations through the proposed HCSF layer and temporal-aware scheme. Therefore, the inherent relations among joints are better captured, enhancing the capability to distinguish different actions.

Table 8: Comparison against state-of-the-art methods on the NTU RGB+D 60 and 120 skeleton datasets in terms of top-1 accuracy (%). Best results in bold.

Method | NTU 60 X-Sub (%) | NTU 60 X-View (%) | NTU 120 X-Sub (%) | NTU 120 X-Set (%)
ST-GCN [42] | 84.3 | 92.7 | 71.3 | 72.4
AS-GCN [14] | 86.8 | 94.2 | 77.7 | 78.9
SGN [48] | 89.0 | 94.5 | 79.2 | 81.5
2s-AGCN [36] | 88.9 | 95.1 | 82.9 | 84.9
NAS-GCN [30] | 89.4 | 95.7 | - | -
Mix-dim [31] | 89.7 | 96.0 | 80.5 | 83.2
ST-Transformer [33] | 89.3 | 96.1 | 82.7 | 84.7
MS-AAGCN [37] | 90.0 | 96.2 | - | -
Shift-GCN [5] | 90.7 | 96.5 | 85.9 | 87.6
MMDGCN [40] | 90.8 | 96.5 | 86.8 | 88.0
DecoupleGCN [4] | 90.8 | 96.6 | 86.5 | 88.1
MS-G3D [21] | 91.5 | 96.2 | 86.9 | 88.4
Ours | 91.6 | 96.7 | 87.5 | 89.2

5." + }, + { + "url": "http://arxiv.org/abs/2105.14490v4", + "title": "Relational Graph Neural Network Design via Progressive Neural Architecture Search", + "abstract": "We propose a novel solution to addressing a long-standing dilemma in the\nrepresentation learning of graph neural networks (GNNs): how to effectively\ncapture and represent useful information embedded in long-distance nodes to\nimprove the performance of nodes with low homophily without leading to\nperformance degradation in nodes with high homophily. This dilemma limits the\ngeneralization capability of existing GNNs. Intuitively, interactions with\ndistant nodes introduce more noise for a node than those with close neighbors.\nHowever, in most existing works, messages being passed among nodes are mingled\ntogether, which is inefficient from a communication perspective. Our solution\nis based on a novel, simple, yet effective aggregation scheme, resulting in a\nladder-style GNN architecture, namely LADDER-GNN. Specifically, we separate\nmessages from different hops, assign different dimensions for them, and then\nconcatenate them to obtain node representations. Such disentangled\nrepresentations facilitate improving the information-to-noise ratio of messages\npassed from different hops. To explore an effective hop-dimension relationship,\nwe develop a conditionally progressive neural architecture search strategy.\nBased on the searching results, we further propose an efficient approximate\nhop-dimension relation function to facilitate the rapid configuration of the\nproposed LADDER-GNN. We verify the proposed LADDER-GNN on seven diverse\nsemi-supervised node classification datasets. Experimental results show that\nour solution achieves better performance than most existing GNNs. We further\nanalyze our aggregation scheme with two commonly used GNN architectures, and\nthe results corroborate that our scheme outperforms existing schemes in\nclassifying low homophily nodes by a large margin.", + "authors": "Ailing Zeng, Minhao Liu, Zhiwei Liu, Ruiyuan Gao, Jing Qin, Qiang Xu", + "published": "2021-05-30", + "updated": "2022-02-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "Introduction Recently, a large number of research efforts have been dedicated to applying deep learning methods to graphs, known as graph neural networks (GNNs) [15,36], achieving great success in modelling non-structured data, e.g., social networks [25] and recommendation systems [26].
Learning an effective low-dimensional embedding to represent each node in the graph is arguably the most important task in GNN learning, wherein a node embedding is obtained by aggregating information from its direct and indirect neighboring nodes passed through GNN layers [12]. Earlier GNN works usually aggregate neighboring nodes within a short range (1-2 hops). For many graphs, this may cause the so-called under-reaching issue [3]: distant yet informative nodes are not involved, leading to unsatisfactory results, particularly when the homophily level of the graph is relatively low [31]. Consequently, many techniques attempt to aggregate long-distance neighbors by deepening or widening the network [19, 20, 44, 45, 52]. However, when aggregating information from too many long-distance neighbors, the so-called over-smoothing problem [20] may occur, causing nodes to become less distinguishable from each other [7]. The inference performance then often degrades greatly, particularly at nodes with a high homophily level. This situation puts GNN learning in a dilemma. On the one hand, to enhance the generalization capability of existing GNNs, particularly on graphs with low homophily, we should integrate information embedded in long-distance nodes into the learning process. On the other hand, such integration under existing aggregation schemes often causes obvious performance degradation, particularly on graphs with a high level of homophily [53]. To alleviate this problem, several hop-aware GNN aggregation schemes have been proposed in the literature [2, 38, 40, 49, 54]. While showing promising results, the messages passed among nodes are mingled together in these approaches. From a communication perspective, mixing information from clean sources (mostly low-order neighbors) and noisy sources (mostly high-order neighbors) inevitably makes it difficult for the receiver (i.e., the target node) to extract information. Motivated by the above observations and considerations, we propose a novel, simple, yet effective hop-aware aggregation scheme, resulting in a ladder-style GNN architecture, namely LADDER-GNN, to comprehensively address this dilemma.

Figure 1. The histogram of the number of nodes (y-axis) with different homophily ratios, i.e., the percentage of neighbors with the same label as the target node at hop k (panels for hops 1, 2, 4, and 8), on the Pubmed dataset. It indicates a diminishing information-to-noise ratio for messages ranging from low-order hops to high-order hops.

The contributions of this paper include:

• We take a communication perspective on GNN message passing. That is, we regard the target node for representation learning as the receiver and group the set of neighboring nodes with the same distance to it as a transmitter that carries both information and noise. The dimension of the message can be regarded as the capacity of the communication channel. Then, aggregating neighboring information from multiple hops becomes a multi-source communication problem with multiple transmitters over the communication channel.
• To improve node representation learning, we propose to separate the messages from different transmitters (i.e., hop-k neighbors), each occupying a proportion of the communication channel (i.e., disjoint message dimensions). As the homophily ratio of high-order neighbors is often lower than that of low-order neighbors, the resulting hop-aware representation is unbalanced, with more dimensions allocated to low-order neighbors, leading to a ladder-style aggregation scheme.

• To explore the dimension allocation for neighboring nodes at different hops effectively, we propose a conditionally progressive neural architecture search (NAS) strategy. Motivated by the search results, we introduce an approximate hop-dimension relation function, which generates results close to the NAS solution without applying compute-expensive NAS.

To verify the effectiveness of the proposed GNN representation learning solution, we demonstrate it on seven semi-supervised node classification datasets for both homogeneous and heterogeneous graphs with different homophily levels. Experimental results show that our solution achieves better performance than most existing GNNs. We further analyze our aggregation scheme with two commonly used GNN architectures, and the results corroborate that our scheme outperforms existing schemes in classifying low homophily nodes by a large margin.

2. Related Work and Motivation

GNNs adopt message passing to learn node embeddings, which involves two steps for each node: neighbor aggregation and linear transformation [12]. The following formula presents the mathematical form of message passing in a graph convolutional network (GCN) [15]. Given an undirected graph G = (V, E) with N nodes and adjacency matrix A, the node features at the l-th layer, H^{(l)} \in R^{N \times C_i^{(l)}}, are aggregated as:

H^{(l+1)} = \sigma(\hat{A} H^{(l)} W^{(l)}),   (1)

where \hat{A} = D^{-1/2}(A + I_N) D^{-1/2} is the augmented normalized adjacency matrix of G, I_N is the identity matrix, and D_{ii} = \sum_j (A + I_N)_{ij}. W^{(l)} \in R^{C_i^{(l)} \times C_h^{(l)}} is the trainable weight matrix at the l-th layer used to update node embeddings, where C_i^{(l)} and C_h^{(l)} are the channel sizes of the input and hidden layer, respectively, and \sigma(\cdot) is an activation function. By taking the importance of different neighbors into consideration, the graph attention network (GAT) [36] applies a multi-head self-attention mechanism during aggregation and achieves higher performance than GCN on many datasets. Recent GNN works improve GCN or GAT from two directions. (i) Some make changes to the input graphs [7, 27, 32, 46, 50]; for example, [27] proposes to drop task-irrelevant or noisy edges to achieve higher generalization capability. The underlying idea of such changes is to increase the homophily level of the input graph so that satisfactory results can be achieved even when aggregating only a short range of neighboring nodes. Unfortunately, for many practical applications it is difficult, if not impossible, to design a high-homophily graph. (ii) Without changing the graph, some GNN works try to further extract relevant information from high-order neighbors; our work belongs to this category. To aggregate information from high-order neighbors, some earlier works [19, 44, 45] simply stack deeper networks to retrieve such information recursively. To mitigate the possible over-fitting issues caused by model complexity, SGC [41] removes the nonlinear operations and directly aggregates node features from multiple hops.
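For reference, here is a minimal sketch of the propagation pieces just discussed: the augmented normalized adjacency and one GCN step of Eq. 1, plus the SGC-style multi-hop pre-aggregation. Function names are illustrative.

```python
import torch

def normalized_adjacency(a):
    """A_hat = D^{-1/2} (A + I_N) D^{-1/2} from Eq. 1."""
    a_tilde = a + torch.eye(a.size(0))       # add self-loops
    d_inv_sqrt = a_tilde.sum(dim=1).pow(-0.5)  # degrees >= 1 due to self-loops
    return a_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_layer(h, a_hat, w):
    """One GCN step: H^{(l+1)} = sigma(A_hat H^{(l)} W^{(l)}) (Eq. 1)."""
    return torch.relu(a_hat @ h @ w)

def sgc_features(x, a_hat, k):
    """SGC-style aggregation: drop nonlinearities, propagate k hops directly."""
    for _ in range(k):
        x = a_hat @ x
    return x
```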
To relieve the potential over-smoothing problem, which results in less discriminative node representations due to over-mixing [20], various hop-aware aggregation solutions have been proposed. Some of them (e.g., HighOrder [29], MixHop [2], N-GCN [1], GB-GNN [30]) employ multiple convolutional branches to aggregate neighbors from different hops. Others (e.g., AM-GCN [39], HWGCN [23], MultiHop [54]) try to learn adaptive attention scores when aggregating neighboring nodes from different hops.

Figure 2. An illustration of GNN message passing and representation learning from a communication perspective. (a) A communication system contains transmitters that encode source information, a communication channel, and receivers that decode the original information; (b) GNN representation learning with an existing node aggregation scheme; (c) GNN representation learning with an existing hop-aware aggregation scheme; (d) GNN representation learning with the proposed ladder-style aggregation scheme.

As our scheme is closely related to the homophily level of a node, we first define the homophily ratio and then explain our motivation based on this definition. Given a graph G = (V, E), we define the homophily ratio of node i as

r_i = \frac{\sum_{(i,j) \in E} \mathbb{1}[C(V_i) = C(V_j)]}{\sum_{(i,j) \in E} 1},

where j ranges over the neighboring nodes of i and C denotes the class label; r_i is thus the percentage of node i's neighbors that share its class. In Figure 1, we plot the homophily ratio for nodes in the Pubmed dataset at different hops. As can be observed, with increasing hop distance, the percentage of neighboring nodes with the same label decreases, indicating a diminishing information-to-noise ratio for messages ranging from low-order to high-order neighbors. This is a common phenomenon when the graph is well designed. Therefore, the critical issue in GNN message passing is how to retrieve information effectively while simultaneously suppressing noise. However, none of the existing aggregation schemes explicitly and sufficiently consider this issue, making it challenging to achieve good performance on both high-homophily and low-homophily nodes. This observation motivates us to propose our LADDER-GNN architecture.
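The homophily ratio above can be computed directly from an edge list; below is a minimal sketch, assuming a COO-format edge_index that lists each undirected edge in both directions (a common convention, not something mandated by the paper).

```python
import torch

def homophily_ratio(edge_index, labels):
    """Per-node homophily: fraction of a node's neighbors sharing its label."""
    src, dst = edge_index                       # [2, E] COO edge list
    same = (labels[src] == labels[dst]).float() # 1[C(V_i) == C(V_j)] per edge
    n = labels.size(0)
    agree = torch.zeros(n).index_add_(0, src, same)
    degree = torch.zeros(n).index_add_(0, src, torch.ones_like(same))
    return agree / degree.clamp(min=1)          # avoid div-by-zero for isolated nodes
```

Binning the returned ratios per hop-k adjacency reproduces the kind of histogram shown in Figure 1.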
3. Method

In Sec. 3.1, we take a communication perspective on GNN message passing and representation learning. Then, we give an overview of the proposed LADDER-GNN framework in Sec. 3.2. Next, we explore the dimensions of different hops with an RL-based NAS strategy in Sec. 3.3 and introduce the approximate hop-dimension relation function in Sec. 3.4.

3.1. GNN Representation Learning from a Communication Perspective

In GNN representation learning, messages are passed from neighboring nodes to the target node to update its embedding. Figure 2 presents a communication perspective on GNN message passing, wherein we regard the target node as the receiver. Considering that neighboring nodes at different hops tend to contribute unequally (see Figure 1), we group the set of neighboring nodes at the same distance into one transmitter; hence, we have K transmitters if we aggregate up to K hops. The dimension of the message can be regarded as the communication channel capacity. Then, GNN message passing becomes a multi-source communication problem. Some existing GNN message-passing schemes (e.g., SGC [41], JKNet [45], and S2GC [52]) aggregate neighboring nodes before transmission, as shown in Figure 2(b), which directly mixes clean and noisy information sources. The other hop-aware GNN message-passing schemes (e.g., AM-GCN [39], MultiHop [54], and MixHop [2]), shown in Figure 2(c), first conduct aggregation within each hop (i.e., using a separate weight matrix) before transmission over the communication channel, but the messages are again mixed afterward. Unlike a conventional communication system, which employs a well-developed encoder for the information source, one of the primary tasks in GNN representation learning is to learn an effective encoder that extracts useful information with the help of supervision. Consequently, mixing clean information sources (mostly low-order neighbors) with noisy ones (mostly high-order neighbors) makes the extraction of discriminative features challenging. The above motivates us to perform GNN message passing without mixing up messages from different hops, as shown in Figure 2(d). At the receiver, we concatenate the messages from the various hops, and such disentangled representations facilitate extracting useful information from each hop with little impact on the others.

Figure 3. Illustration of the process of LADDER-GNN for one target node (in-hop aggregation, dimension search, embedding concatenation, and embedding transformation). The height of the bars represents the dimension of the embeddings from each node; colors correspond to hops.

Moreover, dimensionality significantly impacts the generalization and representation capabilities of any neural network [3, 4, 22, 34], as it controls the amount of quality information learned from data. In GNN message passing, the information-to-noise ratio of low-order neighbors is usually higher than that of high-order neighbors. Therefore, we tend to allocate more dimensions to close neighbors than to distant ones, leading to a ladder-style aggregation scheme.

3.2. Ladder-Aggregation Framework

With the above, Figure 3 shows the node representation update procedure in the proposed LADDER-GNN architecture. For a particular target node (the center node in the figure), we first aggregate nodes within each hop, which can be done with existing node-wise aggregation methods (e.g., GCN or GAT). Next, we determine the dimensions of the aggregated messages from the different hops and then concatenate them, instead of mixing them up, for inter-hop aggregation. Finally, we perform a linear transformation to generate the updated node representation. Specifically, let K be the maximum number of neighboring hops for node aggregation. For each group of neighboring nodes at hop k, we determine their respective optimal dimensions and then concatenate their embeddings into H as follows:

H = \Vert_{k \in [1,K]} (\hat{A}_k X W_k),   (2)

where \hat{A}_k is the normalized adjacency matrix of the k-th hop, X is the input feature, the learnable matrix W_k \in R^{C_i \times C_{o,k}} controls the output dimension C_{o,k} of the k-th hop, and \Vert denotes concatenation.
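A minimal sketch of Eq. 2 follows: each hop gets its own projection of the width assigned to it, and the per-hop messages are concatenated rather than summed. The hop-k adjacency construction and module names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LadderAggregation(nn.Module):
    """Ladder-style hop-aware aggregation (Eq. 2): per-hop projections of
    (typically decreasing) width, concatenated over hops."""
    def __init__(self, c_in, hop_dims):            # hop_dims: [C_{o,k} per hop]
        super().__init__()
        self.proj = nn.ModuleList(nn.Linear(c_in, c, bias=False) for c in hop_dims)

    def forward(self, x, a_hats):                  # a_hats: list of hop-k adjacency
        # H = ||_k  A_hat_k @ X @ W_k   (concatenation over hops)
        return torch.cat([a @ p(x) for a, p in zip(a_hats, self.proj)], dim=-1)
```

Feeding an unbalanced hop_dims list such as [256, 256, 64, 16] realizes the "ladder" of Figure 3, with more channel capacity reserved for the cleaner low-order transmitters.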
Encoding messages from different hops with distinct W_k avoids the over-mixing of neighbors, thereby alleviating the impact of noisy information sources on clean ones during GNN message passing. Accordingly, H is a hop-aware disentangled representation of the target node. Then, with a classifier f after the linear layer W_U, we have:

\hat{Y} = f(H W_U),   (3)

where \hat{Y} contains the output softmax values. Given the supervision Y of some nodes, we can use a cross-entropy loss to calculate gradients and optimize the above weights in an end-to-end manner. With the above, if the adjacency matrices are built as in the original GCN architecture, the resulting GNN architecture with our ladder-aggregation framework is named Ladder-GCN. Similarly, when we employ a self-attention scheme within hops to obtain attention-based adjacency matrices \hat{A} as in the original GAT architecture, the resulting architecture is named Ladder-GAT. Note that our ladder-aggregation scheme can also be integrated into other GNN architectures (e.g., [27]).

3.3. Hop-Aware Dimension Search

Allocating different dimensions to messages from different hops is the key to LADDER-GNN design. As there are numerous hop-dimension allocation possibilities, determining an appropriate allocation is a non-trivial task. In recent years, neural architecture search (NAS) has been extensively researched; it automatically designs deep neural networks with comparable or even higher performance than manual designs by experts (e.g., [5, 21, 24, 35, 55]). Existing NAS works on GNNs [11, 33, 51] search graph architectures (e.g., 1-hop aggregators, activation functions, aggregation types, attention types) and hyper-parameters for better performance, but they ignore the aggregation of multi-hop neighbors, let alone the dimensionality of each hop. In the following, we introduce our proposed NAS solution.

Search Space: Different from previous NAS works on GNNs [11, 48, 51], our search space focuses on the dimension of each hop, i.e., on hop-dimension combinations. To limit the possible search space O of hop-dimension combinations, we apply an exponential sampling strategy for the dimensions, {2^0, 2^1, 2^2, ..., C_i, ..., 2^{n-1}, 2^n}, where the hyper-parameter n controls the index range and sampling granularity needed to cover the possible dimensions. For each strategy, the search space also covers the dimension C_i of the initial input feature.

Figure 4. An illustration of RL-based NAS for hop-dimension exploration. A recurrent network (controller) generates a description of the dimension for each hop. Once the controller generates a framework M, it is trained on the training set and then tested on the validation set D; the validation result R_D(M) is taken as the reward to update the recurrent controller.

Basic Search Algorithm: Given the search space O, we aim to find the best model M^* \in M that maximizes the expected validation accuracy. We choose a reinforcement learning strategy since its reward is easy to customize for our problem. As shown in Figure 4, an LSTM controller with parameters \theta generates a sequence of actions a_{1:K} of length K, where each hop dimension C_k (1 \le k \le K) is sampled from the search space above. Then, we build the model M described in Sec. 3.2 and train it with a cross-entropy loss function.
Then, we test it on the validation set D to get an accuracy R_D(M). Next, we use this accuracy as a reward signal and perform a policy-gradient algorithm to update the parameters \theta, so that the controller generates better hop-dimension combinations iteratively. The objective function of the model is:

M^* = \arg\max_{M} \mathbb{E}_{P(a_{1:K};\,\theta)}[R_D(M)].   (4)

Conditionally Progressive Search Algorithm: The basic search algorithm faces an extremely large search space; for example, the space size is (n+1)^K for exponential sampling with K hops. This makes it challenging to search for the optimal combinations with limited computational resources. Moreover, we find that our search space contains a large number of redundant actions. To improve the efficiency and effectiveness of the search procedure, we propose a conditionally progressive search algorithm. Instead of searching the entire space at once, we divide the search into multiple phases, starting with a relatively small number of hops, e.g., K = 3. After obtaining the results, we keep only the promising hop-dimension combinations, i.e., those with high R_D(M); these form the conditional search space. Next, we search the dimension of the (K+1)-th hop on top of the conditional search space filtered in the previous step and, again, keep the combinations with high R_D(M). This procedure is conducted progressively until aggregating more hops no longer boosts performance. With this algorithm, we largely prune the redundant search space and enhance search efficiency.

3.4. Hop-Dimension Relation Function

The computational resources required for NAS are extremely expensive for large graphs, even with the proposed progressive search algorithm. It is therefore essential to have an efficient method to determine the dimensionality of every hop in practical applications. From our NAS experimental results (Sec. 4.1), we observe that low-order neighbors within L hops are usually aggregated directly with the original feature dimension, while high-order neighbors are associated with approximately exponentially decreasing dimensions. This motivates us to propose a simple yet effective hop-dimension relation function that approximates the NAS solutions. The output dimension of the k-th hop is:

C_{o,k} = d^{\max\{k-L,\,0\}} \cdot C_i,   (5)

where 0 < d < 1 is the dimension compression ratio and C_i is the dimension of the input feature. With this approximate function, there is only one hyper-parameter to determine, significantly reducing the computational cost.
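Eq. 5 is simple enough to state as a one-liner; the sketch below (names and the rounding rule are illustrative) shows how a full hop-dimension schedule would be generated from it.

```python
def hop_dims(c_i, n_hops, d=0.0625, l_keep=2):
    """Approximate hop-dimension relation (Eq. 5):
    C_{o,k} = d ** max(k - L, 0) * C_i, keeping the full C_i up to hop L."""
    return [max(1, round(d ** max(k - l_keep, 0) * c_i))
            for k in range(1, n_hops + 1)]

# Example: hop_dims(256, 5, d=0.5, l_keep=2) -> [256, 256, 128, 64, 32]
```

A schedule produced this way can be passed directly as hop_dims to the LadderAggregation sketch of Sec. 3.2.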
4. Experiment

In this section, we validate the effectiveness of LADDER-GNN on seven widely used semi-supervised node classification datasets. We first analyze the NAS results in Sec. 4.1. Then, we combine the proposed hop-aware aggregation scheme, with the approximate function, with existing GNNs in Sec. 4.2. As a new hop-aware aggregation scheme, we quantitatively compare with existing works in Sec. 4.3. Furthermore, we conduct experiments on heterogeneous graphs in Sec. 4.4. Last, we present an ablation study on the proposed hop-dimension relation function in Sec. 4.5.

Data description: For the semi-supervised node classification task on homogeneous graphs, we evaluate our method on five datasets: Cora [47], Citeseer [47], Pubmed [47], OGB-Arxiv [14], and OGB-Products [14]. We split the training, validation, and test sets following earlier works [2, 15, 36, 39]. Furthermore, on heterogeneous graphs, we verify the methods on two datasets: ACM and IMDB [38]. Due to page limits, more details on dataset descriptions, the data pre-processing procedure, and further comparisons with existing methods are given in the Appendix.

Figure 5. The histogram of the filtered dimensions for different hops on the Cora dataset. The x-axis shows hop k with its potential dimension assignments, while the y-axis shows the rate (%) of each dimension for the corresponding hop; the colored bars represent dimension sizes from C = 1 to C = 2048.

4.1. Results from Neural Architecture Search

To study the impact of the dimensions across hops, we run NAS on different datasets to find the optimal hop-dimension combinations. There exist several NAS approaches for GNN models, including random search (e.g., AGNN [51]), reinforcement-learning-based (RL) solutions (e.g., GraphNAS [11] and AGNN [51]), and evolutionary algorithms (e.g., Genetic-GNN [33]), wherein the RL-based solutions are more effective than the others. Thus, in this work, we follow the same RL-based strategy to search for appropriate dimensions for each hop.

Table 1. Accuracy (%) comparison of different NAS strategies. "Ours-w/ cond." uses the conditional search strategy; "Ours-Approx." is obtained from the hop-dimension relation function; "share" means training the architecture with shared weights. The best result is in bold; the second-place result is underlined.

Method | Cora | Citeseer | Pubmed
Random-w/o share [51] | 81.4 | 72.9 | 77.9
Random-with share [51] | 82.3 | 69.9 | 77.9
GraphNAS-w/o share [11] | 82.7 | 73.5 | 78.8
GraphNAS-with share [11] | 83.3 | 72.4 | 78.1
AGNN-w/o share [51] | 83.6 | 73.8 | 79.7
AGNN-with share [51] | 82.7 | 72.7 | 79.0
Genetic-GNN [33] | 83.8 | 73.5 | 79.2
Ours-w/o cond. | 82.0 | 72.9 | 79.6
Ours-w/ cond. | 83.5 | 74.8 | 80.9
Ours-Approx. | 83.3 | 74.7 | 80.0

In particular, we search the hop-dimension combinations over 10 hops on the Cora, Citeseer, and Pubmed datasets and show the experimental results in Table 1. Compared with existing NAS methods, our NAS method with the conditionally progressive search achieves better results on Citeseer and Pubmed, improving over Genetic-GNN by 1.4% and 2.1%, respectively, while achieving comparable accuracy on Cora by considering only the hop-dimension combinations. Moreover, compared with "w/o cond.", the conditionally progressive search brings a 2.6% improvement, indicating its effectiveness in finding optimal hop-dimension combinations under a limited search budget. The Approx. method shows results competitive with the NAS-based ones, especially on the Cora and Citeseer datasets. Finally, we show the histogram of the possible dimension assignments for different hops in Figure 5.
We can make three observations: (i) for low-order neighbors (here, hops less than 3), most of the high-accuracy solutions keep the initial feature dimension; (ii) most candidate dimensions appear in only a single-digit share of the retained solutions, which verifies the necessity of the proposed conditional strategy for greatly reducing the search space; (iii) the dimensionality tends to be reduced for high-order neighbors, and approximating it with exponentially decreasing dimensions covers a relatively large proportion of the solutions. These results serve two purposes: they facilitate and support the design of the proposed approximate hop-dimension relation function, and they validate its effectiveness. Accordingly, we can use the approximate relation function, with only one parameter, to find proper hop dimensions with performance comparable to the NAS solution.

4.2. Comparison with General GNNs

Quantitative Results: Table 2 compares the accuracy of general GNNs, such as GCN, GAT, and GraphSage, on five popular datasets. As an effective hop-aware aggregation, we take GCN and GAT as examples and integrate them into our aggregation framework. The results show that our method boosts the performance of GCN and GAT by up to 4.7%, indicating that the proposed aggregation is beneficial and robust across datasets.

Table 2. GNN performance comparison.

Method | Cora | Citeseer | Pubmed | Arxiv | Products | Average
GCN | 81.5 | 70.3 | 79.0 | 71.7 | 75.6 | 75.6
GraphSage | 81.3 | 70.6 | 75.2 | 71.5 | 78.3 | 75.4
GAT | 78.9 | 71.2 | 79.0 | 73.6 | 79.5 | 76.4
SGC | 81.0 | 71.9 | 78.9 | 68.9 | 68.9 | 73.9
Ladder-GCN | 83.3 | 74.7 | 80.0 | 72.1 | 78.7 | 77.8
Ladder-GAT | 82.6 | 73.8 | 80.6 | 73.9 | 80.8 | 78.3

Detailed Analysis: To understand the benefits provided by LADDER-GNN, Figure 6 shows the classification accuracy of nodes with different homophily ratios. As expected, high-homophily nodes (e.g., ratio above 50%) are relatively easy to classify with high accuracy, and there is little difference between our approach and existing methods. For low-homophily nodes (e.g., ratio below 25%), LADDER-GNN clearly excels, with much higher classification accuracy, even sustaining accuracy similar to that of nodes with a homophily ratio of 60%.

Figure 6. Results under different homophily ratios for the Citeseer and Cora datasets. LADDER-GNN outperforms GCN and GAT by a large margin when classifying low-homophily nodes, while keeping competitive performance on high-homophily nodes.

Table 3. Efficiency comparison on the Citeseer dataset.

Method | GCN | GAT | SGC | MixHop | Ours
Accuracy (%) | 70.3±0.2 | 71.2±0.7 | 71.9±0.5 | 73.0±0.1 | 74.7±0.3
Parameters (K) | 118.7 | 237.5 | 22.2 | 177.9 | 107.6
Train time (ms) | 5.9±1.0 | 525.0±5.5 | 1.2±0.4 | 4.9±0.3 | 2.0±0.5
Test time (ms) | 9.8±0.8 | 181.4±0.5 | 4.6±0.3 | 10.5±2.43 | 6.2±0.9
Memory (MiB) | 723.0 | 10,565.0 | 753.0 | 789.0 | 779.0

Efficiency Exploration: A comparison of our Ladder-GCN model with other related works (GCN, SGC, GAT, and MixHop) is shown in Table 3. Experiments are performed on a TITAN-Xp machine.
We average ten runs on the Citeseer dataset to obtain the average training time per epoch, the average test time, and the model accuracy. The settings of the other methods follow their respective papers, and we set K = 5 and d = 0.0625 for Ladder-GCN. As can be seen from Table 3, the computational time and memory costs of Ladder-GCN are moderately larger than those of SGC, with higher model accuracy.

4.3. Comparison with Hop-aware GNNs

Our proposed solution improves the hop-aware GNN representation learning capability when aggregating high-order neighbors during message passing. Table 4 presents the comparison with other hop-aware solutions. In terms of top-1 accuracy (%), our LADDER-GNN with the simple hop-dimension relation function achieves competitive results on all three datasets, indicating the effectiveness of the proposed solution.

Table 4. Comparison with hop-aware GNNs.

Method | Cora | Citeseer | Pubmed | Average
HighOrder [29] | 76.6 | 64.2 | 75.0 | 71.9
MixHop [2] | 80.5 | 69.8 | 79.3 | 76.5
GB-GNN [30] | 80.8 | 70.8 | 79.2 | 76.9
HWGCN [23] | 81.7 | 70.7 | 79.3 | 77.2
MultiHop [54] | 82.4 | 71.5 | 79.4 | 77.8
HLHG [18] | 82.7 | 71.5 | 79.3 | 77.8
AM-GCN [39] | 82.6 | 73.1 | 79.6 | 78.4
N-GCN [1] | 83.0 | 72.2 | 79.5 | 78.2
TDGNN-w [40] | 83.4 | 70.3 | 79.8 | 77.8
TDGNN-s [40] | 83.8 | 71.3 | 80.0 | 78.4
Ladder-GCN | 83.3 | 74.7 | 80.0 | 79.3

Specifically, our method shows larger improvements on Citeseer than on Cora and Pubmed. We attribute this to the fact that Citeseer has a relatively low graph homophily ratio of 0.74, while the homophily ratios of Cora and Pubmed are 0.81 and 0.80, respectively. Because LADDER-GNN can extract discriminative information from a noisy graph, the improvement on Citeseer is more significant.

4.4. Heterogeneous Graph Representation Learning

Heterogeneous graphs, which consist of different types of entities (i.e., nodes) and relations, are ubiquitous. In heterogeneous graph representation learning, meta-path-based solutions are proposed to model the semantics of different relations among entities, e.g., Movie-Actor-Movie (MAM) and Movie-Director-Movie (MDM) in the IMDB dataset. In this section, we apply our method to heterogeneous semi-supervised classification on two popular datasets, ACM and IMDB, as extracted by HAN [38]. For a fair comparison, we follow the same experimental settings.

Meta-path-based baseline: The heterogeneous graph attention network (HAN) introduces hierarchical attention, including node-level and semantic-level attention. It learns the importance between a node and its meta-path-based neighbors, as well as the importance of different meta-paths, for heterogeneous graph representation learning.

Ladder aggregation at the semantic level: Since distinct meta-paths contribute differently to the target node representation, we propose to use LADDER-GNN for semantic-level aggregation. In particular, with prior knowledge of the ordinal importance of meta-paths (e.g., MDM is more important than MAM for the movie type), we can allocate dimensions accordingly. The hop-dimension relation function is thus reused here for semantics-dimension relations, wherein more/less relevant semantic embeddings receive higher/lower dimensions. Again, the only hyper-parameter in our method is the compression ratio d.

Comparing LADDER-GNN with attention-based aggregation: We use eight kinds of meta-path combinations and compare with GAT and HAN (with GAT, we treat the heterogeneous meta-paths as homogeneous edges by summing up the adjacency matrices of the different attributes). Experimental results on top-1 accuracy are shown in Table 5.
Table 5. Accuracy (%) comparison of the two methods on two datasets (higher is better). GAT and HAN are re-implemented by us with identical hyper-parameters. "All" uses all meta-paths; "IMP." abbreviates improvement.

Dataset | ACM: PAP&PSP | ACM: PAP&PTP | ACM: PSP&PTP | ACM: All | IMDB: MAM&MDM | IMDB: MAM&MYM | IMDB: MDM&MYM | IMDB: All
GAT | 83.5 | 79.7 | 79.7 | 79.7 | 53.0 | 40.9 | 48.9 | 48.2
HAN | 87.9 | 84.9 | 82.1 | 88.7 | 52.4 | 48.6 | 51.8 | 53.4
Ours | 89.2 | 86.2 | 84.7 | 89.6 | 58.4 | 50.4 | 53.1 | 55.9
IMP. (%) | 1.48 | 1.53 | 3.17 | 1.01 | 11.5 | 3.57 | 2.45 | 4.12

From the results, we observe: (i) distinguishing heterogeneous paths is essential, as the performance of GAT is always the worst; (ii) by allocating distinct dimensions to different semantic embeddings, the proposed LADDER-GNN produces better results than the state-of-the-art solution (i.e., HAN); more detailed analyses are given in the appendix.

4.5. Ablation Study

To analyze the impact of the hop-dimension function in Eq. (5), we conduct experiments varying its two dominant hyper-parameters, the furthest hop K and the dimension compression ratio d, as well as the aggregation method among hops. We present the results on Citeseer.

Table 6. Comparison of different compression ratios d under different furthest hops K for the proposed LADDER-GNN.

K | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
d=2 | 60.7 | 67.6 | 64.8 | 63.0 | 61.2 | 58.9 | 55.8 | 50.2
d=0.5 | 65.5 | 71.5 | 72.3 | 72.8 | 73.0 | 73.1 | 73.2 | 73.7
d=0.25 | 68.8 | 72.9 | 73.3 | 73.9 | 73.6 | 73.5 | 73.4 | 73.0
d=0.125 | 71.0 | 73.5 | 74.1 | 74.0 | 74.2 | 74.3 | 74.0 | 72.8
d=0.0625 | 69.3 | 73.1 | 73.6 | 74.7 | 74.3 | 74.3 | 73.8 | 74.2
d=0.03125 | 67.2 | 71.4 | 72.7 | 73.0 | 73.2 | 73.8 | 73.5 | 73.3

Impact of the largest hop K and compression ratio d. In Table 6, the compression ratio d varies from 2 down to 0.03125 (1/32). We observe that: (i) increasing the furthest hop K with a fixed d improves performance until saturation when d < 1; as K grows, more information is aggregated from neighbors, and LADDER-GNN's ability to suppress noise through dimension compression lets the performance saturate rather than degrade. (ii) Decreasing d at a fixed K first improves and then hurts performance in most situations: a smaller compression ratio maps distant nodes into a lower, more suitable dimensional space, suppressing the interference of noise from distant nodes; however, there is an upper bound to these improvements for a given K, and when d < 0.0625 the reduced dimension is too low to preserve the overall structural information, leading to worse performance in most cases. (iii) The effective ratios d lie mainly in {0.125, 0.0625}, which achieve better results for most K; with K = 5 and d = 0.0625, we obtain the best accuracy of 74.7%. (iv) The significant gains of dimension compression over dimension expansion (d = 2) validate the basic principle of dimension compression.

Impact of the aggregation operators. For aggregation among hops, most existing solutions (e.g., GCN, SGC, and GAT) mix the information from multiple hops, while LADDER-GNN disentangles and concatenates it. To further demonstrate the effectiveness of the proposed method, we conduct an experiment substituting concatenation with addition in LADDER-GNN; to accommodate the dimensional differences between features from different hops, we use zero padding to fill the vacant positions before addition. As can be seen from Table 7, concatenating features shows consistently better results.
Table 7. Comparison of different aggregation operators for the proposed LADDER-GNN.

Methods | Citeseer | Cora | Pubmed
Concatenation | 74.70±0.34 | 83.34±0.38 | 80.08±0.45
Addition | 73.15±0.40 | 80.06±0.51 | 76.20±0.46

5." + } + ] + }, + "edge_feat": {} + } +} \ No newline at end of file