diff --git "a/abs_29K_G/test_abstract_long_2405.00902v1.json" "b/abs_29K_G/test_abstract_long_2405.00902v1.json" new file mode 100644--- /dev/null +++ "b/abs_29K_G/test_abstract_long_2405.00902v1.json" @@ -0,0 +1,207 @@ +{ + "url": "http://arxiv.org/abs/2405.00902v1", + "title": "MESA: Cooperative Meta-Exploration in Multi-Agent Learning through Exploiting State-Action Space Structure", + "abstract": "Multi-agent reinforcement learning (MARL) algorithms often struggle to find\nstrategies close to Pareto optimal Nash Equilibrium, owing largely to the lack\nof efficient exploration. The problem is exacerbated in sparse-reward settings,\ncaused by the larger variance exhibited in policy learning. This paper\nintroduces MESA, a novel meta-exploration method for cooperative multi-agent\nlearning. It learns to explore by first identifying the agents' high-rewarding\njoint state-action subspace from training tasks and then learning a set of\ndiverse exploration policies to \"cover\" the subspace. These trained exploration\npolicies can be integrated with any off-policy MARL algorithm for test-time\ntasks. We first showcase MESA's advantage in a multi-step matrix game.\nFurthermore, experiments show that with learned exploration policies, MESA\nachieves significantly better performance in sparse-reward tasks in several\nmulti-agent particle environments and multi-agent MuJoCo environments, and\nexhibits the ability to generalize to more challenging tasks at test time.", + "authors": "Zhicheng Zhang, Yancheng Liang, Yi Wu, Fei Fang", + "published": "2024-05-01", + "updated": "2024-05-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.MA" + ], + "label": "Original Paper", + "paper_cat": "Multi AND Agent AND Reinforcement AND Learning", + "gt": "Multi-agent reinforcement learning (MARL) algorithms often struggle to find\nstrategies close to Pareto optimal Nash Equilibrium, owing largely to the lack\nof efficient exploration. The problem is exacerbated in sparse-reward settings,\ncaused by the larger variance exhibited in policy learning. This paper\nintroduces MESA, a novel meta-exploration method for cooperative multi-agent\nlearning. It learns to explore by first identifying the agents' high-rewarding\njoint state-action subspace from training tasks and then learning a set of\ndiverse exploration policies to \"cover\" the subspace. These trained exploration\npolicies can be integrated with any off-policy MARL algorithm for test-time\ntasks. We first showcase MESA's advantage in a multi-step matrix game.\nFurthermore, experiments show that with learned exploration policies, MESA\nachieves significantly better performance in sparse-reward tasks in several\nmulti-agent particle environments and multi-agent MuJoCo environments, and\nexhibits the ability to generalize to more challenging tasks at test time.", + "main_content": "INTRODUCTION Reinforcement learning (RL) algorithms often adopt a trial-anderror learning paradigm and optimize the policy based on the reward signals given by the environment. The e\ufb00ectiveness of RL relies on e\ufb03cient exploration, especially in sparse reward settings, as it is critical to get su\ufb03cient experiences with high rewards to guide the training. \u2217Equal contribution. Proc. of the 23rd International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2024), N. Alechina, V. Dignum, M. Dastani, J.S. Sichman (eds.), May 6 \u2013 10, 2024, Auckland, New Zealand. 
\u00a9 2024 International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). This work is licenced under the Creative Commons Attribution 4.0 International (CC-BY 4.0) licence. Figure 1: Illustration of structured exploration and unstructured exploration behavior in the 2-player climb game. The rows and columns indicate the players\u2019 action space. While unstructured exploration aims to visit novel states, structured exploration exploits structures in the joint stateaction space, helping agents coordinatedly and more e\ufb03ciently explore the potential high-reward subspace. The exploration challenge has been studied extensively and existing works can be categorized mainly into two streams. One core idea with great success is to incentivize the agent to visit the underexplored states more frequently by adding an intrinsic reward based on a visitation measure [3, 25, 28, 37] or some other heuristics [17, 39]. However, in multi-agent settings, due to the exponential growth of the joint state-action space, simply visiting more novel states can be increasingly ine\ufb00ective. Exploration policies need to better capture the low-dimensional structure of the tasks and leverage the structural knowledge for higher exploration e\ufb03ciency. Another line of work speci\ufb01cally learns exploration strategies. However, these works do not explicitly consider the underlying task structure. For example, Mahajan et al. conditions the policy on a shared latent variable [24] learned via mutual information maximization. Liu et al. adopts a goal-conditioned exploration strategy by setting state features as goals [21]. Other works in the singleagent settings [6, 26, 35] learn exploration policies through a prede\ufb01ned intrinsic reward. All these works train the exploration policy using task-agnostic exploration-speci\ufb01c rewards. In Section 4, we will present a simple matrix game to show that popular exploration methods can have di\ufb03culties \ufb01nding the optimal solution due to the reward structure of the game. \fHow can we enable the agents to more e\ufb00ectively explore by leveraging the intrinsic structure of the environment? We adopt a meta-exploration framework (i.e., learning to explore) for MARL: we \ufb01rst train multiple structured exploration policies from a set of training tasks (referred to as the meta-training stage), and then use these exploration policies to facilitate agents\u2019 learning in a testtime task, which is typically a new task sampled from the task distribution (referred to as meta-testing stage). We develop a multiagent meta-exploration method, Cooperative Meta-Exploration in Multi-Agent Learning through Exploiting State-Action Space Structure (MESA) for fully cooperative settings. MESA leverages the task structures by explicitly identifying the agents\u2019 high-rewarding joint state-action subspace in the training tasks. It then trains a set of diverse exploration policies to cover this identi\ufb01ed subspace. The exploration policies are trained with a reward scheme induced by the distance to the high-rewarding subspace. The meta-learned exploration policies can be combined with any o\ufb00-policy MARL algorithm during the meta-testing stage by randomly selecting learned exploration policies to collect valuable experiences. Such structured exploration can help the agents to learn good joint policies e\ufb03ciently (Figure 1). We empirically show the success of MESA on the matrix climb game and its harder multi-stage variant. 
In addition, we evaluate MESA in two continuous control tasks, i.e., the MPE environment [23] and the multi-agent MuJoCo benchmark [29]. We demonstrate the superior performance of MESA compared to existing multi-agent learning and exploration algorithms. Furthermore, we show that MESA is capable of generalizing to unseen test-time tasks that are more challenging than any of the training tasks. 2 RELATED WORK Exploration has been a long-standing challenge in RL with remarkable progress achieved in the single-agent setting [3, 5, 10, 25, 28, 34, 37]. Most of these works maintain pseudo-counts over states and construct intrinsic rewards to encourage the agents to visit rarely visited states more frequently [3, 25, 28, 37]. These countbased methods have been extended to the multi-agent setting by incentivizing intra-agent interactions or social in\ufb02uence [17\u201319, 39]. However, in the multi-agent setting, a simple count-based method can be less e\ufb00ective due to the partial observability of each agent, an exponentially large joint state-action space, and the existence of multiple non-Pareto-optimal NE. Therefore, recent works focus on discovering the structures of possible multi-agent behaviors. For example, [24] adopts variational inference to learning structured latent-space-policies; [15] generates similar tasks with simpler reward functions to promote cooperation; [21] learns to select a subset of state dimensions for e\ufb03cient exploration. We follow a metalearning framework and learn structured exploration strategies by exploiting high-rewarding subspace in the joint state-action space. Our method also leverages a count-based technique as a subroutine during the meta-training phase to prevent over-exploitation and mode collapse. Meta reinforcement learning (meta-RL) is a popular RL paradigm that focuses on training a policy that can quickly adapt on an unseen task at test time [9, 12, 14, 20, 32, 40, 42, 44]. Such a paradigm has been extended to the setting of learning to explore. The key idea is to meta-learn a separate exploration policy that can be used in the testing task. Most closely related to our work is [26], where an exploration policy is pretrained on a set of training tasks. However, their method is designed for the single-agent setting and learns the exploration policy by using a task-agnostic intrinsic reward to incentivize visitation of interesting states , while we directly utilize the task reward to learn the structure of the environments. Other existing works in meta-exploration propose to learn a latent-space exploration policy that is conditioned on a task variable, which can be accomplished by meta-policy gradient [14, 20, 40], variational inference [32] or information maximization [42] over the training tasks. Therefore, at test time, posterior inference can be performed for the latent variable towards fast exploration strategy adaption. Our approach follows a similar metaexploration paradigm by learning additional exploration policies. However, existing meta-exploration methods focus on the singleagent setting while we consider much more challenging multi-agent games with a distribution of similarly-structured tasks, for example, the MPE environment [23] with a distribution of target landmarks that the agents need to reach. In addition, we meta-learn a discrete set of exploration policies through an iterative process, which results in a much simpler meta-testing phase without the need for posterior sampling or gradient updates on exploration policies. 
Besides, some other methods pretrain exploration policies from an o\ufb04ine dataset [7, 31, 36], which is beyond the scope of this paper. Finally, our approach largely di\ufb00ers from the setting of multitask learning [1, 2, 11, 16, 27], which are commonly evaluated in environments with heterogeneous tasks or scenarios. Our exploration policies are not trained to achieve high returns in the training tasks. Instead, they are trained to reach as many high-reward state-action pairs as possible collectedin a diverse set of tasks. Therefore, the state-action pairs covered by a single exploration policy are very likely to be distributed across di\ufb00erent training tasks. 3 PRELIMINARIES Dec-POMDP. We consider fully-cooperative Markov games described by a decentralized partially observable Markov decision process (Dec-POMDP), which is de\ufb01ned by \u27e8S, A, \ud443, \ud445, \u03a9, O,\ud45b,\ud6fe\u27e9. S is the state space. A \u2261A1\u00d7...\u00d7A\ud45bis the joint action space. The dynamics is de\ufb01ned by the transition function \ud443(\ud460\u2032 | \ud460, \ud482). Agents share a reward function \ud445(\ud460, \ud482), and\ud6fe\u2208(0, 1) is the discount factor. \u03a9 \u2261\u03a91 \u00d7 .. \u00d7 \u03a9\ud45bis the joint observation space, where \u03a9\ud456is the observation space for agent \ud456. At each timestep, each agent \ud456only has access to its own observation \ud45c\ud456\u2208\u03a9\ud456de\ufb01ned by the function O : S \u00d7A \u21a6\u2192\u03a9. The goal of agents in Dec-POMDP is to maximize the common expected discounted return under the joint policy \ud745: J (\ud745) = E\ud745 \u0002\u00cd \ud461\ud6fe\ud461\ud445(\ud460\ud461, \ud482\ud461) \u0003 . Learning to Explore. Meta-RL assumes a task distribution\ud45d(T) over tasks, and an agent aims to learn to quickly adapt to a testtime task T test drawn from \ud45d(T) after training in a batch of training tasks {T \ud456| T \ud456\u223c\ud45d(T)}\ud435 \ud456=1. Inspired by the explicit exploration methods [6, 42], we adopt a meta-exploration framework for MARL: we learn joint exploration policies \ud745\ud452from training tasks {T \ud456| T \ud456\u223c\ud45d(T)}\ud435 \ud456=1 and use \ud745\ud452to collect experiences for the training of the agents\u2019 policy pro\ufb01le \ud745in task T test, denoted as \f\ud745(\ud745\ud452, T test). Formally, the objective of meta-exploration is max \ud745\ud452ET test\u223c\ud45d(T) \" E\ud745(\ud745\ud452,T test) \"\u00d5 \ud461 \ud6fe\ud461\ud445\ud456(\ud460\ud461, \ud482\ud461) ## . (1) Nash Equilibrium and Pareto Optimality. A joint policy \ud745 is an NE if each agent\u2019s policy \ud70b\ud456is a best response to the other agents\u2019 policies \ud745\u2212\ud456. That is, for any agent \ud456\u2019s alternative policy \ud70b\u2032 \ud456, we have \ud444\ud456(\ud745) \u2265\ud444\ud456(\ud70b\u2032 \ud456, \ud745\u2212\ud456), where \ud444\ud456is the value function for agent \ud456. A joint policy \ud745is Pareto optimal if there does not exist an alternative joint policy \ud745\u2032 such that \u2200\ud456, \ud444\ud456(\ud745\u2032) \u2265\ud444\ud456(\ud745) and \u2203\ud456, \ud444\ud456(\ud745\u2032) > \ud444\ud456(\ud745). 4 A MOTIVATING EXAMPLE: CLIMB GAME We analyze a fully cooperative matrix game known as Climb Game. In Section 4.1, we show how popular exploration strategies, including unstructured strategies like uniform exploration and taskspeci\ufb01c strategies like\ud716\u2212greedy, fail to e\ufb03ciently explore the climb game. 
By contrast, we show in Section 4.2 that a simple structured exploration strategy can substantially improve the exploration ef\ufb01ciency. A climb game \ud43a\ud453(\ud45b,\ud462,\ud448) is a \ud45b-player game with action space A\ud456= {0, . . . ,\ud448\u22121} for any player\ud456. The reward of a joint action \ud482\u2208 A is determined by the number of players performing a speci\ufb01c action \ud462(denoted as #\ud462), which is \ud445(\ud482) = \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f3 1, if #\ud462= \ud45b, 1 \u2212\ud6ff(0 < \ud6ff< 1), if #\ud462= 0, 0, otherwise. . (2) 4.1 Exploration Challenge A climb game \ud43a\ud453(\ud45b,\ud462,\ud448) has three groups of NE: the Pareto optimal NE (\ud462,\ud462, . . . ,\ud462), the sub-optimal NEs {(\ud44e1,\ud44e2, . . . , \ud44e\ud45b) | \u2200\ud456, \ud44e\ud456\u2260 \ud462}, and the zero-reward NEs {(\ud44e1, \ud44e2, . . . , \ud44e\ud45b) | 1 < #\ud462< \ud45b}. The sheer di\ufb00erence in the size of the three subsets of NEs makes it particularly challenging for RL agents to learn the optimal policy pro\ufb01le without su\ufb03cient exploration, as evidenced by the theoretical analysis below and empirical evaluation in Section 6. Consider a 2-agent climb game \ud43a\ud453(2, 0,\ud448). A joint action \ud482can be represented by a pair of one-hot vectors [e\ud456, e\ud457] \u2208{0, 1}2\ud448. Let \ud45e(x, y;\ud703) be a joint Q function parameterized by \ud703that takes input x, y \u2208{0, 1}\ud448and is learned to approximate the reward of the game. We hope the joint Q function has the same optimal policy pro\ufb01le. De\ufb01nition 4.1. We call a joint \ud444function \ud45e(x, y;\ud703) equivalently optimal when \ud45e(e0, e0;\ud703) = max0\u2264\ud456,\ud457<\ud448\ud45e(e\ud456, e\ud457;\ud703). When a joint \ud444function is equivalently optimal, one can use it to \ufb01nd the optimal policy. Since neural networks are di\ufb03cult to analyze in general [4], we parameterize the joint \ud444function in a quadratic form: \ud45e(x, y; W, b, c,\ud451) = x\u22a4Wy + b\u22a4x + c\u22a4y + \ud451 (3) A Gaussian prior \ud45d(W) = N (W; 0,\ud70e2 \ud464\ud43c) is introduced under the assumption that a non-linear W is harder and slower to learn. Quadratic functions have been used in RL [13, 38] as a replacement for the commonly-used multi-layer perceptron, and there are also theoretical results [8] analyzing neural networks with quadratic activation. For the climb game, it is easy to verify that the quadratic coe\ufb03cients make the joint\ud444function su\ufb03ciently expressive to perfectly \ufb01t the reward function by setting W to be the reward matrix. Therefore, the learning process of \ud444is mainly a\ufb00ected by how the exploration policy samples the data. Consider an exploration policy \ud45d(\ud461) \ud452 that selects joint action \ud482= (\ud456, \ud457) at step \ud461with probability \ud45d(\ud461) \ud452 (\ud456, \ud457). The e\ufb03ciency of an exploration policy can be measured by the required number of steps for learning an equivalently optimal \ud444function using the maximum likelihood estimator over the data sampled from \ud45d(\ud461) \ud452 . The learning objective includes both the prior\ud45d(W) and the likelihood of prediction error \ud45d(\ud438\ud456\ud457), where the prediction error \ud438\ud456\ud457= \ud45e(e\ud456, e\ud457; \u00b7) \u2212\ud445\ud456\ud457. 
If the prediction error is assumed to be depicted by a Gaussian distribution \ud45d(\ud438\ud456\ud457) = N (\ud438\ud456\ud457; 0,\ud70e2 \ud452) for every visited joint action (\ud456, \ud457), then the learning objective for the \ud444function can be formulated as: J (\ud447) (W, b, c,\ud451) =E{ (\ud456(\ud461),\ud457(\ud461) )\u223c\ud45d(\ud461) \ud452 }\ud447 \ud461=1 log \ud45d(W) \ud447 \u00d6 \ud461\u2032=1 \ud45d(\ud438\ud456(\ud461) \ud457(\ud461) ) ! = \ud447 \u00d5 \ud461=1 E(\ud456,\ud457)\u223c\ud45d(\ud461) \ud452 \u0002 log N (\ud45e(e\ud456, e\ud457; W, b, c,\ud451) \u2212\ud445\ud456\ud457; 0,\ud70e2 \ud452) \u0003 + log N (W; 0,\ud70e2 \ud464\ud43c) + Const. (4) We use \ud45eJ (\ud447) (W, b, c,\ud451) to denote the learned joint \ud444function that maximizes J (\ud447) at step \ud447. \ud45eJ (\ud447) (W, b, c,\ud451) is determined by the exploration policy \ud45d(\ud461) \ud452 and the exploration steps \ud447. Then we have the following theorem for the uniform exploration strategy. Theorem 4.2 (uniform exploration). Assume \ud6ff\u22641 6,\ud448\u22653. Using a uniform exploration policy in the climb game \ud43a\ud453(2, 0,\ud448), it can be proved that \ud45eJ (\ud447) (W, b, c,\ud451) will become equivalently optimal only after \ud447= \u03a9(|A|\ud6ff\u22121) steps. When \ud6ff= 1, \ud447= \ud442(1) steps su\ufb03ce to learn the equivalently optimal joint Q function, suggesting the inef\ufb01ciency of uniform exploration is due to a large set of sub-optimal NEs. The intuition behind Theorem 4.2 is that the hardness of exploration in climb games largely comes from the sparsity of solutions: a set of sub-optimal NEs exist but there is only a single Pareto optimal NE. Learning the joint \ud444function can be in\ufb02uenced by the sub-optimal NEs. And if the exploration attempts are not well coordinated, a lot of zero reward would be encountered, making it hard to \ufb01nd the Pareto optimal NE. We also remark that uniform exploration can be particularly ine\ufb03cient since the term |A| can be exponentially large in a multi-agent system. This indicates that more e\ufb03cient exploration can potentially be achieved by reducing the search space and identifying a smaller \u201ccritical\u201d subspace. To formally prove Theorem 4.2, we de\ufb01ne \ud4531, \ud4532, \ud4533 as the stepaveraged probability of taking the joint action in optimal NE, suboptimal NE and zero-reward, respectively. We show that to make the joint \ud444function equivalently optimal, there is a necessary condition that \ud4531, \ud4532, \ud4533 should follow. When\ud447is not large enough, this condition cannot be satis\ufb01ed. Detailed proof is in Appendix A.2. \fFigure 2: MESA\u2019s meta-learning framework. In the meta-training stage, MESA learns exploration policies to cover the highrewarding subspace. In the meta-testing stage, MESA uses the learned exploration policies to assist the learning in an unseen task. Each color corresponds to a di\ufb00erent task, and the colored points represent the high-rewarding joint state-action pairs collected in that task. Next, we consider the case of another popular exploration paradigm, \ud716-greedy exploration. Theorem 4.3 (\ud716-greedy exploration). Assume \ud6ff\u22641 32,\ud448\u22654,\ud448\u2265 \ud70e\ud464\ud70e\u22121 \ud452. 
In the climb game \ud43a\ud453(2, 0,\ud448), under \ud716-greedy exploration with \ufb01xed \ud716\u22641 2, \ud45eJ (\ud447) (W, b, c,\ud451) will become equivalently optimal only after \ud447= \u03a9(|A|\ud6ff\u22121\ud716\u22121) steps. If \ud716(\ud461) = 1/\ud461, it requires \ud447= exp \u0000\u03a9 \u0000|A|\ud6ff\u22121\u0001\u0001 exploration steps to be equivalently optimal. The proof is similar to that of Theorem 4.2 (detailed in Appendix A.3). By comparing 4.2 and 4.3, \ud716-greedy results in even poorer exploration e\ufb03ciency than uniform exploration. Note the\ud716-greedy strategy is training policy speci\ufb01c, i.e., the exploration behavior varies as the training policy changes. Theorem 4.3 suggests that when the policy is sub-optimal, the induced \ud716-greedy exploration strategy can be even worse than uniform exploration. Hence, it can be bene\ufb01cial to adopt a separate exploration independent from the training policy. The above analysis shows that common exploration strategies like uniform exploration or \ud716-greedy exploration are ine\ufb03cient for such a simple game and the main reason is that it requires coordination between di\ufb00erent agents to reach high-rewarding states, but naive exploration strategies lack such cooperation. 4.2 Structured Exploration We will show that it is possible to design a better exploration strategy with some prior knowledge of the climb game structure. Consider a speci\ufb01c structuredexploration strategy \ud45d(\ud461) \ud452 (\ud456, \ud457) = \ud448\u22121 \u0002 1\ud456=\ud457 \u0003 , where both agents always choose the same action. With such a strategy, we can quickly \ufb01nd the optimal solution to the game. More formally, we have the following theorem. Theorem 4.4 (structuredexploration). In the climb game\ud43a\ud453(2, 0,\ud448), under structured exploration \ud45d(\ud461) \ud452 (\ud456, \ud457) = \ud448\u22121 \u0002 1\ud456=\ud457 \u0003 ,\ud45eJ (\ud447) (W, b, c,\ud451) is equivalently optimal at step \ud447= \ud442(1). Theorem 4.4 shows the e\ufb03ciency of exploration can be greatly improved if the exploration strategy captures a proper structure of the problem, i.e., all agents taking the same action. We further remark that by considering a set of similar climb games G, where G = {\ud43a\ud453(2,\ud462,\ud448)}\ud448\u22121 \ud462=0 , the structured exploration strategy \ud45d(\ud461) \ud452 (\ud456, \ud457) = \ud448\u22121 \u0002 1\ud456=\ud457 \u0003 can be interpreted as a uniform distribution over the optimal policies of this game set G. This interesting fact suggests that we can \ufb01rst collect a set of similarly structuredgames and then derive e\ufb00ective exploration strategies from these similar games. Once a set of structuredexploration strategies are collected,we can further adopt them for fast learning in a novel game with a similar problem structure. We take the inspiration here and develop a general meta-exploration algorithm in the next section. 5 METHOD We detail our method Cooperative Meta-Exploration in Multi-Agent Learning through Exploiting State-Action Space Structure (MESA) for cooperative multi-agent learning. As shown in Figure 2, MESA consists of a meta-training stage (Algo. 1) and a meta-testing stage (Algo. 2). In the meta-training stage, MESA learns exploration policies by training in a batch of training tasks that share intrinsic structuresin the state-action space. 
In the meta-testing stage, MESA utilizes the meta-learned exploration policies to assist learning in an unseen task sampled from the distribution of the training tasks. 5.1 Meta-Training The meta-training stage contains two steps: 1) identify the highrewarding state-action subspace, and 2) train a set of exploration policies using the subspace-induced rewards. 5.1.1 Identifying High-Rewarding Joint State-Action Subspace. For each training task T \ud456, we collect experiences D\ud456= {(\ud460\ud461, \ud482\ud461,\ud45f\ud461,\ud460\ud461+1)}. If the reward \ud45f\ud461is higher than a threshold \ud445\u2605, we call this joint state-action pair (\ud460\ud461, \ud482\ud461) valuable and store it into a dataset M\u2217. For goal-oriented tasks where \ud45f= 1\ud460=\ud454\ud45c\ud44e\ud459, the threshold can be \fAlgorithm 1 MESA: Meta-Training Input: Meta-training tasks {T \ud456}\ud435 \ud456=1 \u223c\ud45d(T), o\ufb00-policy MARL algorithm \ud453, distance metric \u2225\u00b7 \u2225F Parameter: #policies \ud438, threshold \ud445\u2605, horizon \u210e Output: Exploration policies {\ud745\ud456 \ud452}\ud438 \ud456=1 1: M\u2217\u2190\u2205, global pseudo-count \u02c6 \ud441\u21900 2: for i = 1 to B do 3: Initialize policy \ud745\ud703 4: Train \ud745\ud703with \ud453and collect dataset \ud437\ud456= {(\ud494\ud461, \ud482\ud461,\ud45f\ud461, \ud494\ud461+1)} 5: M\u2217\u2190M\u2217\u222a{\ud70f| \ud445(\ud70f) \u2265\ud445\u2605,\ud70f\u2208\ud437\ud456} 6: end for 7: for i = 1 to E do 8: Initialize exploration policy \ud745\ud456 \ud452 9: while \ud745\ud456 \ud452\u2019s training not converged do 10: Initialize \ud441as \u02c6 \ud441, D \u2190\u2205 11: for t = 0 to h-1 do 12: Execute \ud482\ud461\u223c\ud745\ud456 \ud452(\ud460\ud461), and observe (\ud460\ud461, \ud482\ud495,\ud45f\ud461,\ud460\ud461+1) 13: Calculate \u02c6 \ud45f\ud461based on Eq. 5 or 6 14: Store (\ud460\ud461, \ud482\ud461, \u02c6 \ud45f\ud461,\ud460\ud461+1) into D 15: \ud441(\ud719(\ud460\ud461, \ud482\ud461)) \u2190\ud441(\ud719(\ud460\ud461, \ud482\ud461)) + 1 16: end for 17: Optimize policy \ud745\ud456 \ud452with algorithm \ud453 18: end while 19: Update \u02c6 \ud441using D 20: end for 21: return {\ud745\ud456 \ud452}\ud438 \ud456=1 set as \ud445\u2605= 1. For other tasks, the threshold can be set as a hyperparameter, for example, a certain percentile of all collectedrewards. A smaller \ud445\u2605results in a larger identi\ufb01ed subspace but a less e\ufb03cient exploration policy. The data stored in M\u2217is highly diversi\ufb01ed since it comes from all the \ud435training tasks, which are expected to share an intrinsic structure. We expect that with this intrinsic structure, the highrewarding joint state-action pairs fall into some low-dimensional subspace. In the simplest case, they may form several dense clusters, or many of them lie in a hyperplane. Even if the subspace is not easily interpretable to humans, it may still be e\ufb00ectively \u201ccovered\u201d by a set of exploration policies (to be found in the subsequent step). We also explicitly deal with the reward sparsity problem by assigning a positive reward to a joint state-action pair (\ud460\ud461, \ud482\ud461) if it has zero reward but leads to a valuable state-action pair (\ud460\ud461\u2032, \ud482\ud461\u2032) later in the same trajectory. We also put these relabeled pairs into the dataset M\u2217. 
Let \ud461\u2032 = arg min\ud461\u2032>\ud461[\ud45f\ud461\u2032 > 0], we therefore have the following densi\ufb01ed reward function \u02c6 \ud45f\ud461= ( \ud6fe\ud461\u2032\u2212\ud461\u00b7 \ud45f\ud461\u2032, \ud45f\ud461= 0, \ud45f\ud461, \ud45f\ud461> 0. (5) 5.1.2 Learning Exploration Policies. In this step, we aim to learn a diverse set of exploration policies to cover the identi\ufb01ed highrewarding joint state-action subspace. We use a distance metric \u2225\u00b7 \u2225F (e.g., \ud4592 distance) to determine whether two state-action Algorithm 2 MESA: Meta-Testing Input: Test task \u02c6 T , meta-trained exploration policies {\ud745\ud456 \ud452}\ud438 \ud456=1, o\ufb00-policy MARL algorithm \ud453 Parameter: horizon \u210e Output: Policy \ud745\ud703for task \u02c6 T 1: Initialize policy \ud745\ud703, D = \u2205, annealing \ud716 2: while not converged do 3: Determine \ud45d\ud452under annealing probability schedule \ud716 4: Choose policy to perform rollouts by \ud745\ud451= ( \ud745\ud452\u223cU({\ud745\ud456 \ud452}\ud438 \ud456=1), w.p. \ud45d\ud452 \ud745\ud703, otherwise. 5: for t = 0 to h-1 do 6: Execute \ud482\ud461\u223c\ud745\ud451(\ud460\ud461). 7: Observe transition (\ud460\ud461, \ud482\ud461,\ud45f\ud461,\ud460\ud461+1). 8: D \u2190D \u222a(\ud460\ud461, \ud482\ud461,\ud45f\ud461,\ud460\ud461+1) 9: end for 10: Optimize \ud745\ud703with algorithm \ud453on replay bu\ufb00er D 11: end while 12: return \ud745\ud703 pairs are close. Then if a visited joint state-action pair (\ud460, \ud482) is close enough to the identi\ufb01ed subspace M\u2217, i.e., min\ud451\u2208M\u2217\u2225(\ud460, \ud482),\ud451\u2225F < \ud716, it would be assigned a derived positive reward \u02c6 \ud45f. Increasing the value of \ud435in the collection step would generally result in a more accurate distance measurement. However, this comes at the cost of making the minimization calculation more computationally expensive. To encourage a broader coverage of the subspace and to avoid mode collapse, the reward assignment scheme ensures that repeated visits to similar joint state-action pairs within one trajectory would result in a decreasing reward for each visit. Similar to [37], we adopt a pseudo-count function \ud441with a hash function \ud719(\ud494, \ud482) to generalize between similar joint state-action pairs. We then apply a decreasing function \ud453\ud451: N \u21a6\u2192[0, 1] on the trajectory-level pseudocount \ud441(\ud719((\ud460, \ud482)). The resulted reward assignment scheme is de\ufb01ned as follows: \u02dc \ud45f\ud461= \u02c6 \ud45f\ud461\ud453\ud451(\ud441(\ud719((\ud460\ud461, \ud482\ud461))) h 1min\ud451\u2208M\u2217\u2225(\ud460\ud461,\ud482\ud461),\ud451\u2225F<\ud716 i (6) After one exploration policy is trained with this reward, we will train a new policy to cover the part of the identi\ufb01ed subspace that has not yet been covered. This is achieved by having a global pseudo-count \u02c6 \ud441which is updated after training each exploration policy using its visitation counts and is maintained throughout the training of all exploration policies. This iterative process continues until the subspace is well-covered by the set of trained exploration policies. 5.2 Meta-Testing During meta-testing, MESA uses the meta-learned exploration policies {\ud745\ud456 \ud452}\ud438 \ud456=1 to assist the training of any generic o\ufb00-policy MARL algorithm on a test-time task \u02c6 T . 
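The meta-testing procedure described next (Algorithm 2) can be sketched in a few lines. This is a minimal, illustrative version only: it assumes gym-style `env.reset()`/`env.step()` calls, a joint `act()` interface for policies, and a generic off-policy `learner` with a replay `buffer`; all of these names are placeholders, and the linear annealing schedule is just one example of the ε schedule mentioned below, not the paper's exact choice.

```python
import random

def meta_test(env, explore_policies, policy, learner, buffer,
              episodes=1000, horizon=25,
              eps_schedule=lambda ep: max(0.0, 1.0 - ep / 500)):
    """Sketch of MESA's meta-testing stage (Algorithm 2): with an annealed
    probability eps, roll out one uniformly sampled meta-learned exploration
    policy instead of the learning policy, then train on the shared buffer."""
    for ep in range(episodes):
        eps = eps_schedule(ep)  # exploration policies are used mostly early in training
        rollout_policy = random.choice(explore_policies) if random.random() < eps else policy
        obs = env.reset()
        for _ in range(horizon):
            actions = rollout_policy.act(obs)            # joint action for all agents
            next_obs, reward, done, _ = env.step(actions)
            buffer.add(obs, actions, reward, next_obs)   # task reward only, no shaping at test time
            obs = next_obs
            if done:
                break
        learner.update(policy, buffer)  # any off-policy MARL update, e.g. MADDPG
    return policy
```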
Speci\ufb01cally, for each rolloutepisode, \fwe choose with probability \ud716to execute one uniformly sampled exploration policy \ud745\ud452\u223cU({\ud745\ud456 \ud452}\ud438 \ud456=1). For the best empirical performance, we also adopt an annealing schedule \ud716: \ud447\u21a6\u2192[0, 1] so that the exploration policies provide more rollouts at the initial stage of the training and are gradually turned o\ufb00later. Here we further provide some analysis of deploying the metalearned exploration policy on unseen testing tasks. Theorem 5.1 (Exploration during Meta-Testing). Consider goaloriented tasks with goal space G \u2286S. Assume the training and testing goals are sampled from the distribution \ud45d(\ud465) on G, and the dataset has \ud441i.i.d. goals sampled from a distribution \ud45e(\ud465) on S. If the exploration policy generalizes to explore \ud716nearby goals for every training sample, we have that the testing goal is not explored with probability at most \ud443fail \u2248 \u222b \ud45d(\ud465)(1 \u2212\ud716\ud45e(\ud465))\ud441\ud451\ud465\u2264\ud442 \u0012\ud43e\ud43f(\ud45d||\ud45e) + H (\ud45d) log(\ud716\ud441) \u0013 . (7) Theorem 5.1 shows that the good performance of meta-learned exploration policy relies on 1) a small di\ufb00erence between the training and testing distribution; and 2) a structured,e.g., low-dimensional, high-rewarding subspace G to reduce H (\ud45d). And when uniformly sampling the training data, \ud43e\ud43f(\ud45d||\ud45e) is bounded by log \u03a9G in our method. This term, however, can be up to log \u03a9S with an uncoordinated exploration on the joint state space S, where \u03a9S can be exponentially larger than \u03a9G. 5.3 Implementation Detail of MESA We choose MADDPG, following the centralized training with decentralized execution (CTDE) paradigm, as the o\ufb00-policy MARL algorithm for MESA since it can be applied to both discrete and continuous action space, as shown in its original paper [23]. We use a clustering mapping \ud453\ud450as the hash function \ud719so that the dataset M\u2217is clustered into \ud436clusters de\ufb01ned by the clustering function \ud453\ud450: S \u00d7 A \u21a6\u2192[\ud436]. The cluster mapping is implemented with the KMeans clustering algorithm [22]. The number of exploration policies to learn is viewed as a hyperparameter. See the Appendix for detailed hyperparameter settings. 6 EXPERIMENTS Our experimental evaluation aims to answer the following questions: (1) Are the meta-learned exploration policies capable of achieving more e\ufb03cient exploration during meta-testing on newly sampled tasks in matrix climb game variants (Section 6.2) and highdimensional domains (Section 6.3 and 6.4)? (2) Can these metalearned exploration policies successfully generalize to unseen testtime tasks from a more challenging (e.g., with more agents) test task distribution which is di\ufb00erent the training task distribution (Section 6.5)? 6.1 Evaluation Setup Compared Methods. We compare to 3 multi-agent reinforcement learning algorithms: MADDPG [23], MAPPO [41], and QMIX [33], to measure the e\ufb00ectiveness of our exploration policies. We also compare to 3 multi-agent exploration algorithm: MAVEN [24], MAPPO with RND exploration [5], and EMC [43]. To compare with baselines that adopt a similar meta-training stage, we add two naive Figure 3: Learning curve of the two climb game variants w.r.t number of environment steps. The return is averaged over timesteps for the multi-stage games. 
The dotted lines indicate the suboptimal return of 0.5 (purple) and the optimal return 1 (blue) for each agent. meta-learning baselines, including one with an unconditioned shared policy, which is trained over all training tasks, and one with a goal-conditioned policy, which takes the target landmarks as parts of the input. We also adapt the single-agent meta-RL algorithm MAESN [14] to the multi-agent setting. Finally, we adapt the singleagent C-BET [26] to multi-agent settings based on MAPPO. The training and testing tasks are as de\ufb01ned in Section 6.1. Please refer to the Appendix for more visualization and experimental results. Environments.We experiment on the Climb Game, Multi-agent Particle Environment (MPE) [23], and multi-agent MuJoCo [29], on which generating a distribution of meta-training tasks \ud45d(T) is feasible. 6.2 Climb Game Variants First, we consider task spaces consisting of variants of the aforementioned climb games. We extend previous climb game to (1) one-step climb game\ud43a(\ud45b,\ud458,\ud462,\ud448), which is a\ud45b-player game with \ud448actions for each player, and the joint reward is 1 if #\ud462= \ud458, 1 \u2212\ud6ff if #\ud462= 0, and 0 otherwise. The task space T one \ud448 consists of all one-step climb games that contain two players and \ud448actions; (2) multi-stage climb game, which is an \ud446-stage game where each stage is a one-stage climb game with the same number of available actions. Each stage \ud461has its own con\ufb01guration (\ud458\ud461,\ud462\ud461) of the one-stage climb game \ud43a(2,\ud458\ud461,\ud462\ud461,\ud448). Agents observe the history of joint actions and the current stage \ud461. The task space T multi \ud446,\ud448 consists of all multi-stage climb games with \ud446stages and \ud448actions. In our experiments, we use T one 10 and T multi 5,10 as the task space for the one-step and multi-stage Climb Games. We choose uniformly at random ten training tasks and three di\ufb00erent test tasks from the task space T , and we keep \ud6ff= 1 2 as in the classic climb games. Results on Climb Game Variants. For the matrix games, we additionally compare with MA-MAESN, which is our adaptation of the original single-agent meta-learning algorithm MAESN [14] to the multi-agent scenario In the single-step matrix game, MESA exhibits better performance, being able to \ufb01nd the optimal reward in some harder tasks when \ud458= 2, while other baselines are stuck at the sub-optimal reward for almost all tasks. \fFigure 4: Learning curves of MESA and the compared baselines w.r.t the number of environment interactions during the metatesting stage in the MPE domain and the multi-agent MuJoCo environment Swimmer. The two dotted lines indicate the ideal optimal (purple) and sub-optimal (blue) return summed over timesteps. A return above the blue line would typically indicate that the agents are able to learn the optimal strategy. In the more challenging 10-action multi-stage game where task space is exponentially larger, MESA outperforms all compared algorithms by a large margin. With the help of the exploration policies that have learned the high-rewarding joint action pairs, MESA quickly learns the optimal joint action for each stage and avoids being stuck at the sub-optimal. Figure 5: Visualizations of a 2-player 3-landmark MPE climb game. 6.3 MPE Domain We extend the matrix climb games to MPE [23], which has a continuous high-dimensional state space. 
Agents must \ufb01rst learn to reach the landmarks under sparse rewards and then learn to play the climb games optimally. In a MPE Climb Game \u00af \ud43a(\ud45b,\ud458,\ud462,\ud448, {\ud43f\ud457}\ud448\u22121 0 ) (Figure 5), there are \ud448non-overlapping landmarks with positions {\ud43f\ud457}\ud448\u22121 \ud457=0 . The reward is non-zero only when every agent is on some landmark. Agents will be given a reward of 1 if there are exactly \ud458agents located on the \ud462-th landmark (target landmark), and a suboptimal reward of 1 \u2212\ud6ffwill be given when none of the agents are located on the target landmark. Otherwise, the reward will be zero. As before, \ud462 and \ud458are not present in the observation and can only be inferred from the received reward. A task space T MPE \ud45b,\ud448 consists of all MPE climb games with \ud45bplayers and \ud448landmarks. We evaluate MESA Figure 6: Visualization of structured exploration behaviors discovered by the meta-trained exploration policy in MESA. on the 2-agent tasks (T MPE 2,5 and T MPE 2,6 ) and 3-agent tasks (T MPE 3,5 and T MPE 3,6 ) while \ufb01xing \ud458= 2. Each sampled training and testing task has a di\ufb00erent con\ufb01guration of landmark positions. Adaptation Performance in MPE. We show in Figure 4 the learning curve of our approach MESA compared with the aforementioned baseline methods. MESA outperforms the compared baselines by a large margin, being able to coordinately reach the task landmark quickly, as evidenced by the near-optimal reward. Even when combined with RND-based exploration, MAPPO easily sticks to the sub-optimal equilibrium. Value-based methods like QMIX and MAVEN are unable to learn the correct \ud444-function because the reward is quite sparse before agents can consistently move themselves to a landmark. EMC sometimes jumps out of the suboptimal equilibrium with curiosity-driven exploration, but the performance is not robust. Furthermore, as the meta-learning baselines only learn the sub-optimal behavior during meta-training, they fail to learn the optimal equilibrium during test time and quickly converge to the suboptimal equilibrium. Visualization of Exploration Policies. To answer question (2), we visualize the learned exploration policies in a 2-agent 3landmark MPE task in Figure 6. We can see that the learned exploration policy consecutively visited the 3 landmarks within 20 timesteps in one trajectory. 6.4 Multi-agent MuJoCo Environments We also extend the matrix climb games to multi-agent MuJoCo environments [29]. We consider speci\ufb01cally the 2-agent Swimmer environment where each agent is a hinge on the swimmer\u2019s body, and each agent\u2019s action is the amount of torque applied to hinge rotors. The extension considers the angles between the two hinges and the \fbody segments. Each task in the task space is a target angle such that a reward of 1 will be given only if the two angles are both close to the target angles, a 0.5 suboptimal reward is given if none of two angles are close to the target, and a reward of 0 if only one of the two angles are close. This multi-agent environment is extremely hard as agents are very likely to converge to the suboptimal reward of 0.5, which is con\ufb01rmed by the results that none of the baselines were able to \ufb01nd the optimal equilibrium in Figure 4. Therefore, MESA vastly outperforms all the compared baselines by learning a \ufb01nal policy that frequently reaches the target angle. 
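All of the environments in Sections 6.2-6.4 share the climb-game reward structure. As a concrete reference, the following is a minimal sketch of the MPE climb-game reward described in Section 6.3; the function signature and the on-landmark radius are assumptions made for illustration, not taken from the paper.

```python
import numpy as np

def mpe_climb_reward(agent_pos, landmarks, target_idx, k, delta=0.5, on_radius=0.1):
    """Illustrative reward for the MPE climb game of Section 6.3: non-zero only
    when every agent sits on some landmark; 1 if exactly k agents occupy the
    target landmark, 1 - delta if none do, and 0 otherwise."""
    agent_pos = np.asarray(agent_pos)   # shape (n_agents, 2)
    landmarks = np.asarray(landmarks)   # shape (n_landmarks, 2)
    dists = np.linalg.norm(agent_pos[:, None, :] - landmarks[None, :, :], axis=-1)
    nearest = dists.argmin(axis=1)
    on_landmark = dists.min(axis=1) < on_radius
    if not on_landmark.all():           # some agent is off every landmark
        return 0.0
    on_target = int(((nearest == target_idx) & on_landmark).sum())
    if on_target == k:
        return 1.0
    if on_target == 0:
        return 1.0 - delta
    return 0.0
```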
6.5 Generalization Performance of MESA In this section, our goal is to evaluate the generalization performance of the meta-trained exploration policy in scenarios where the meta-training and meta-testing task distributions are di\ufb00erent. In particular, we focus on the setting where the test-time tasks are more challenging than the training-time tasks and examine how an exploration policy learned from simpler tasks can boost training performances on harder tasks. The test task here is uniform on the 3-agent high-di\ufb03culty MPE Climb games. The task di\ufb03culty is de\ufb01ned by the average pairwise distances between the landmark positions and the initial positions of the agents. We consider two simpler training task distributions, including (1) a 2-agent setting with the same di\ufb03culty, and (2) a 3-agent setting with a lower di\ufb03culty. In both settings, the metatraining tasks are less challenging than the test-time tasks. For evaluation, the meta-trained exploration policy from each setting will be directly applied to assist the training on the more challenging test-time tasks, without any \ufb01ne-tuning. We modi\ufb01ed the neural network architecture by adopting an attention layer in both actor and critic to ensure they are compatible with a varying number of agents. The attention mechanism acts as an aggregation function between the relative positions of the other agents and its own relative position to the landmarks to handle the varying observation dimensions. Additionally, we employed behavior cloning (BC) [30] on the rollouts of the exploration policies as a warm-up to accelerate learning of the \ufb01nal policy. In Figure 7, we present the generalization results from our study. We evaluate the zero-shot generalization ability of the meta-exploration policy by measuring the average number of high-reward transitions hit in a test task randomly sampled from the test task distribution. As shown on the left of Figure 7, the meta-exploration policies are able to explore the test-time tasks much more e\ufb03ciently than a random exploration policy, even on test-time tasks that are drawn from a harder task distribution. Notably, the generalization ability increases with the number of exploration policies (\ud435). Using the meta-exploration policies trained on the simpler tasks, MESA is able to consistently reach the high-reward region in the unseen hard 3-agent tasks, as opposed to the vanilla MADDPG algorithm that only learns the sub-optimal equilibrium. We also see that with an increasing number of meta-exploration policies, the performance of MESA increases, but the improvement becomes marginal, while the meta-training time increases linearly with E. Figure 7: Generalization results of MESA on the hard 3-agent MPE Climb game. Left: Zero-shot generalizability of the meta-exploration policies, measured by the number of visitations on high-reward transitions per episode on the test tasks. The purple dotted line corresponds to the random exploration policy. The plot shows the concatenated training curves for all exploration policies. Right: Learning curves of MESA under di\ufb00erent settings using the meta-exploration policies trained on the two di\ufb00erent training-task distributions. 
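For reference, the zero-shot metric reported on the left of Figure 7 (the average number of high-reward transitions hit per test episode by a frozen exploration policy) can be computed along the following lines. The environment and policy interfaces, as well as reusing the meta-training threshold R* as the hit criterion, are assumptions of this sketch rather than details from the paper.

```python
def zero_shot_hit_rate(env, explore_policy, reward_threshold, episodes=100, horizon=25):
    """Illustrative version of the zero-shot metric in Section 6.5: average number
    of high-reward transitions a frozen exploration policy hits per test episode."""
    total_hits = 0
    for _ in range(episodes):
        obs = env.reset()                      # a test task is sampled inside env.reset()
        for _ in range(horizon):
            actions = explore_policy.act(obs)  # no fine-tuning on the test task
            obs, reward, done, _ = env.step(actions)
            if reward >= reward_threshold:     # e.g. the R* threshold used in meta-training
                total_hits += 1
            if done:
                break
    return total_hits / episodes
```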
7", + "additional_graph_info": { + "graph": [ + [ + "Zhicheng Zhang", + "Yancheng Liang" + ], + [ + "Yancheng Liang", + "Jiajie Zhang" + ], + [ + "Yancheng Liang", + "Xiaochen Liu" + ], + [ + "Yancheng Liang", + "Yi Hu" + ] + ], + "node_feat": { + "Zhicheng Zhang": [ + { + "url": "http://arxiv.org/abs/2405.00902v1", + "title": "MESA: Cooperative Meta-Exploration in Multi-Agent Learning through Exploiting State-Action Space Structure", + "abstract": "Multi-agent reinforcement learning (MARL) algorithms often struggle to find\nstrategies close to Pareto optimal Nash Equilibrium, owing largely to the lack\nof efficient exploration. The problem is exacerbated in sparse-reward settings,\ncaused by the larger variance exhibited in policy learning. This paper\nintroduces MESA, a novel meta-exploration method for cooperative multi-agent\nlearning. It learns to explore by first identifying the agents' high-rewarding\njoint state-action subspace from training tasks and then learning a set of\ndiverse exploration policies to \"cover\" the subspace. These trained exploration\npolicies can be integrated with any off-policy MARL algorithm for test-time\ntasks. We first showcase MESA's advantage in a multi-step matrix game.\nFurthermore, experiments show that with learned exploration policies, MESA\nachieves significantly better performance in sparse-reward tasks in several\nmulti-agent particle environments and multi-agent MuJoCo environments, and\nexhibits the ability to generalize to more challenging tasks at test time.", + "authors": "Zhicheng Zhang, Yancheng Liang, Yi Wu, Fei Fang", + "published": "2024-05-01", + "updated": "2024-05-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.MA" + ], + "main_content": "INTRODUCTION Reinforcement learning (RL) algorithms often adopt a trial-anderror learning paradigm and optimize the policy based on the reward signals given by the environment. The e\ufb00ectiveness of RL relies on e\ufb03cient exploration, especially in sparse reward settings, as it is critical to get su\ufb03cient experiences with high rewards to guide the training. \u2217Equal contribution. Proc. of the 23rd International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2024), N. Alechina, V. Dignum, M. Dastani, J.S. Sichman (eds.), May 6 \u2013 10, 2024, Auckland, New Zealand. \u00a9 2024 International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). This work is licenced under the Creative Commons Attribution 4.0 International (CC-BY 4.0) licence. Figure 1: Illustration of structured exploration and unstructured exploration behavior in the 2-player climb game. The rows and columns indicate the players\u2019 action space. While unstructured exploration aims to visit novel states, structured exploration exploits structures in the joint stateaction space, helping agents coordinatedly and more e\ufb03ciently explore the potential high-reward subspace. The exploration challenge has been studied extensively and existing works can be categorized mainly into two streams. One core idea with great success is to incentivize the agent to visit the underexplored states more frequently by adding an intrinsic reward based on a visitation measure [3, 25, 28, 37] or some other heuristics [17, 39]. However, in multi-agent settings, due to the exponential growth of the joint state-action space, simply visiting more novel states can be increasingly ine\ufb00ective. 
Exploration policies need to better capture the low-dimensional structure of the tasks and leverage the structural knowledge for higher exploration e\ufb03ciency. Another line of work speci\ufb01cally learns exploration strategies. However, these works do not explicitly consider the underlying task structure. For example, Mahajan et al. conditions the policy on a shared latent variable [24] learned via mutual information maximization. Liu et al. adopts a goal-conditioned exploration strategy by setting state features as goals [21]. Other works in the singleagent settings [6, 26, 35] learn exploration policies through a prede\ufb01ned intrinsic reward. All these works train the exploration policy using task-agnostic exploration-speci\ufb01c rewards. In Section 4, we will present a simple matrix game to show that popular exploration methods can have di\ufb03culties \ufb01nding the optimal solution due to the reward structure of the game. \fHow can we enable the agents to more e\ufb00ectively explore by leveraging the intrinsic structure of the environment? We adopt a meta-exploration framework (i.e., learning to explore) for MARL: we \ufb01rst train multiple structured exploration policies from a set of training tasks (referred to as the meta-training stage), and then use these exploration policies to facilitate agents\u2019 learning in a testtime task, which is typically a new task sampled from the task distribution (referred to as meta-testing stage). We develop a multiagent meta-exploration method, Cooperative Meta-Exploration in Multi-Agent Learning through Exploiting State-Action Space Structure (MESA) for fully cooperative settings. MESA leverages the task structures by explicitly identifying the agents\u2019 high-rewarding joint state-action subspace in the training tasks. It then trains a set of diverse exploration policies to cover this identi\ufb01ed subspace. The exploration policies are trained with a reward scheme induced by the distance to the high-rewarding subspace. The meta-learned exploration policies can be combined with any o\ufb00-policy MARL algorithm during the meta-testing stage by randomly selecting learned exploration policies to collect valuable experiences. Such structured exploration can help the agents to learn good joint policies e\ufb03ciently (Figure 1). We empirically show the success of MESA on the matrix climb game and its harder multi-stage variant. In addition, we evaluate MESA in two continuous control tasks, i.e., the MPE environment [23] and the multi-agent MuJoCo benchmark [29]. We demonstrate the superior performance of MESA compared to existing multi-agent learning and exploration algorithms. Furthermore, we show that MESA is capable of generalizing to unseen test-time tasks that are more challenging than any of the training tasks. 2 RELATED WORK Exploration has been a long-standing challenge in RL with remarkable progress achieved in the single-agent setting [3, 5, 10, 25, 28, 34, 37]. Most of these works maintain pseudo-counts over states and construct intrinsic rewards to encourage the agents to visit rarely visited states more frequently [3, 25, 28, 37]. These countbased methods have been extended to the multi-agent setting by incentivizing intra-agent interactions or social in\ufb02uence [17\u201319, 39]. However, in the multi-agent setting, a simple count-based method can be less e\ufb00ective due to the partial observability of each agent, an exponentially large joint state-action space, and the existence of multiple non-Pareto-optimal NE. 
Therefore, recent works focus on discovering the structures of possible multi-agent behaviors. For example, [24] adopts variational inference to learning structured latent-space-policies; [15] generates similar tasks with simpler reward functions to promote cooperation; [21] learns to select a subset of state dimensions for e\ufb03cient exploration. We follow a metalearning framework and learn structured exploration strategies by exploiting high-rewarding subspace in the joint state-action space. Our method also leverages a count-based technique as a subroutine during the meta-training phase to prevent over-exploitation and mode collapse. Meta reinforcement learning (meta-RL) is a popular RL paradigm that focuses on training a policy that can quickly adapt on an unseen task at test time [9, 12, 14, 20, 32, 40, 42, 44]. Such a paradigm has been extended to the setting of learning to explore. The key idea is to meta-learn a separate exploration policy that can be used in the testing task. Most closely related to our work is [26], where an exploration policy is pretrained on a set of training tasks. However, their method is designed for the single-agent setting and learns the exploration policy by using a task-agnostic intrinsic reward to incentivize visitation of interesting states , while we directly utilize the task reward to learn the structure of the environments. Other existing works in meta-exploration propose to learn a latent-space exploration policy that is conditioned on a task variable, which can be accomplished by meta-policy gradient [14, 20, 40], variational inference [32] or information maximization [42] over the training tasks. Therefore, at test time, posterior inference can be performed for the latent variable towards fast exploration strategy adaption. Our approach follows a similar metaexploration paradigm by learning additional exploration policies. However, existing meta-exploration methods focus on the singleagent setting while we consider much more challenging multi-agent games with a distribution of similarly-structured tasks, for example, the MPE environment [23] with a distribution of target landmarks that the agents need to reach. In addition, we meta-learn a discrete set of exploration policies through an iterative process, which results in a much simpler meta-testing phase without the need for posterior sampling or gradient updates on exploration policies. Besides, some other methods pretrain exploration policies from an o\ufb04ine dataset [7, 31, 36], which is beyond the scope of this paper. Finally, our approach largely di\ufb00ers from the setting of multitask learning [1, 2, 11, 16, 27], which are commonly evaluated in environments with heterogeneous tasks or scenarios. Our exploration policies are not trained to achieve high returns in the training tasks. Instead, they are trained to reach as many high-reward state-action pairs as possible collectedin a diverse set of tasks. Therefore, the state-action pairs covered by a single exploration policy are very likely to be distributed across di\ufb00erent training tasks. 3 PRELIMINARIES Dec-POMDP. We consider fully-cooperative Markov games described by a decentralized partially observable Markov decision process (Dec-POMDP), which is de\ufb01ned by \u27e8S, A, \ud443, \ud445, \u03a9, O,\ud45b,\ud6fe\u27e9. S is the state space. A \u2261A1\u00d7...\u00d7A\ud45bis the joint action space. The dynamics is de\ufb01ned by the transition function \ud443(\ud460\u2032 | \ud460, \ud482). 
Agents share a reward function \ud445(\ud460, \ud482), and\ud6fe\u2208(0, 1) is the discount factor. \u03a9 \u2261\u03a91 \u00d7 .. \u00d7 \u03a9\ud45bis the joint observation space, where \u03a9\ud456is the observation space for agent \ud456. At each timestep, each agent \ud456only has access to its own observation \ud45c\ud456\u2208\u03a9\ud456de\ufb01ned by the function O : S \u00d7A \u21a6\u2192\u03a9. The goal of agents in Dec-POMDP is to maximize the common expected discounted return under the joint policy \ud745: J (\ud745) = E\ud745 \u0002\u00cd \ud461\ud6fe\ud461\ud445(\ud460\ud461, \ud482\ud461) \u0003 . Learning to Explore. Meta-RL assumes a task distribution\ud45d(T) over tasks, and an agent aims to learn to quickly adapt to a testtime task T test drawn from \ud45d(T) after training in a batch of training tasks {T \ud456| T \ud456\u223c\ud45d(T)}\ud435 \ud456=1. Inspired by the explicit exploration methods [6, 42], we adopt a meta-exploration framework for MARL: we learn joint exploration policies \ud745\ud452from training tasks {T \ud456| T \ud456\u223c\ud45d(T)}\ud435 \ud456=1 and use \ud745\ud452to collect experiences for the training of the agents\u2019 policy pro\ufb01le \ud745in task T test, denoted as \f\ud745(\ud745\ud452, T test). Formally, the objective of meta-exploration is max \ud745\ud452ET test\u223c\ud45d(T) \" E\ud745(\ud745\ud452,T test) \"\u00d5 \ud461 \ud6fe\ud461\ud445\ud456(\ud460\ud461, \ud482\ud461) ## . (1) Nash Equilibrium and Pareto Optimality. A joint policy \ud745 is an NE if each agent\u2019s policy \ud70b\ud456is a best response to the other agents\u2019 policies \ud745\u2212\ud456. That is, for any agent \ud456\u2019s alternative policy \ud70b\u2032 \ud456, we have \ud444\ud456(\ud745) \u2265\ud444\ud456(\ud70b\u2032 \ud456, \ud745\u2212\ud456), where \ud444\ud456is the value function for agent \ud456. A joint policy \ud745is Pareto optimal if there does not exist an alternative joint policy \ud745\u2032 such that \u2200\ud456, \ud444\ud456(\ud745\u2032) \u2265\ud444\ud456(\ud745) and \u2203\ud456, \ud444\ud456(\ud745\u2032) > \ud444\ud456(\ud745). 4 A MOTIVATING EXAMPLE: CLIMB GAME We analyze a fully cooperative matrix game known as Climb Game. In Section 4.1, we show how popular exploration strategies, including unstructured strategies like uniform exploration and taskspeci\ufb01c strategies like\ud716\u2212greedy, fail to e\ufb03ciently explore the climb game. By contrast, we show in Section 4.2 that a simple structured exploration strategy can substantially improve the exploration ef\ufb01ciency. A climb game \ud43a\ud453(\ud45b,\ud462,\ud448) is a \ud45b-player game with action space A\ud456= {0, . . . ,\ud448\u22121} for any player\ud456. The reward of a joint action \ud482\u2208 A is determined by the number of players performing a speci\ufb01c action \ud462(denoted as #\ud462), which is \ud445(\ud482) = \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f3 1, if #\ud462= \ud45b, 1 \u2212\ud6ff(0 < \ud6ff< 1), if #\ud462= 0, 0, otherwise. . (2) 4.1 Exploration Challenge A climb game \ud43a\ud453(\ud45b,\ud462,\ud448) has three groups of NE: the Pareto optimal NE (\ud462,\ud462, . . . ,\ud462), the sub-optimal NEs {(\ud44e1,\ud44e2, . . . , \ud44e\ud45b) | \u2200\ud456, \ud44e\ud456\u2260 \ud462}, and the zero-reward NEs {(\ud44e1, \ud44e2, . . . , \ud44e\ud45b) | 1 < #\ud462< \ud45b}. 
The sheer di\ufb00erence in the size of the three subsets of NEs makes it particularly challenging for RL agents to learn the optimal policy pro\ufb01le without su\ufb03cient exploration, as evidenced by the theoretical analysis below and empirical evaluation in Section 6. Consider a 2-agent climb game \ud43a\ud453(2, 0,\ud448). A joint action \ud482can be represented by a pair of one-hot vectors [e\ud456, e\ud457] \u2208{0, 1}2\ud448. Let \ud45e(x, y;\ud703) be a joint Q function parameterized by \ud703that takes input x, y \u2208{0, 1}\ud448and is learned to approximate the reward of the game. We hope the joint Q function has the same optimal policy pro\ufb01le. De\ufb01nition 4.1. We call a joint \ud444function \ud45e(x, y;\ud703) equivalently optimal when \ud45e(e0, e0;\ud703) = max0\u2264\ud456,\ud457<\ud448\ud45e(e\ud456, e\ud457;\ud703). When a joint \ud444function is equivalently optimal, one can use it to \ufb01nd the optimal policy. Since neural networks are di\ufb03cult to analyze in general [4], we parameterize the joint \ud444function in a quadratic form: \ud45e(x, y; W, b, c,\ud451) = x\u22a4Wy + b\u22a4x + c\u22a4y + \ud451 (3) A Gaussian prior \ud45d(W) = N (W; 0,\ud70e2 \ud464\ud43c) is introduced under the assumption that a non-linear W is harder and slower to learn. Quadratic functions have been used in RL [13, 38] as a replacement for the commonly-used multi-layer perceptron, and there are also theoretical results [8] analyzing neural networks with quadratic activation. For the climb game, it is easy to verify that the quadratic coe\ufb03cients make the joint\ud444function su\ufb03ciently expressive to perfectly \ufb01t the reward function by setting W to be the reward matrix. Therefore, the learning process of \ud444is mainly a\ufb00ected by how the exploration policy samples the data. Consider an exploration policy \ud45d(\ud461) \ud452 that selects joint action \ud482= (\ud456, \ud457) at step \ud461with probability \ud45d(\ud461) \ud452 (\ud456, \ud457). The e\ufb03ciency of an exploration policy can be measured by the required number of steps for learning an equivalently optimal \ud444function using the maximum likelihood estimator over the data sampled from \ud45d(\ud461) \ud452 . The learning objective includes both the prior\ud45d(W) and the likelihood of prediction error \ud45d(\ud438\ud456\ud457), where the prediction error \ud438\ud456\ud457= \ud45e(e\ud456, e\ud457; \u00b7) \u2212\ud445\ud456\ud457. If the prediction error is assumed to be depicted by a Gaussian distribution \ud45d(\ud438\ud456\ud457) = N (\ud438\ud456\ud457; 0,\ud70e2 \ud452) for every visited joint action (\ud456, \ud457), then the learning objective for the \ud444function can be formulated as: J (\ud447) (W, b, c,\ud451) =E{ (\ud456(\ud461),\ud457(\ud461) )\u223c\ud45d(\ud461) \ud452 }\ud447 \ud461=1 log \ud45d(W) \ud447 \u00d6 \ud461\u2032=1 \ud45d(\ud438\ud456(\ud461) \ud457(\ud461) ) ! = \ud447 \u00d5 \ud461=1 E(\ud456,\ud457)\u223c\ud45d(\ud461) \ud452 \u0002 log N (\ud45e(e\ud456, e\ud457; W, b, c,\ud451) \u2212\ud445\ud456\ud457; 0,\ud70e2 \ud452) \u0003 + log N (W; 0,\ud70e2 \ud464\ud43c) + Const. (4) We use \ud45eJ (\ud447) (W, b, c,\ud451) to denote the learned joint \ud444function that maximizes J (\ud447) at step \ud447. \ud45eJ (\ud447) (W, b, c,\ud451) is determined by the exploration policy \ud45d(\ud461) \ud452 and the exploration steps \ud447. Then we have the following theorem for the uniform exploration strategy. Theorem 4.2 (uniform exploration). 
Assume \ud6ff\u22641 6,\ud448\u22653. Using a uniform exploration policy in the climb game \ud43a\ud453(2, 0,\ud448), it can be proved that \ud45eJ (\ud447) (W, b, c,\ud451) will become equivalently optimal only after \ud447= \u03a9(|A|\ud6ff\u22121) steps. When \ud6ff= 1, \ud447= \ud442(1) steps su\ufb03ce to learn the equivalently optimal joint Q function, suggesting the inef\ufb01ciency of uniform exploration is due to a large set of sub-optimal NEs. The intuition behind Theorem 4.2 is that the hardness of exploration in climb games largely comes from the sparsity of solutions: a set of sub-optimal NEs exist but there is only a single Pareto optimal NE. Learning the joint \ud444function can be in\ufb02uenced by the sub-optimal NEs. And if the exploration attempts are not well coordinated, a lot of zero reward would be encountered, making it hard to \ufb01nd the Pareto optimal NE. We also remark that uniform exploration can be particularly ine\ufb03cient since the term |A| can be exponentially large in a multi-agent system. This indicates that more e\ufb03cient exploration can potentially be achieved by reducing the search space and identifying a smaller \u201ccritical\u201d subspace. To formally prove Theorem 4.2, we de\ufb01ne \ud4531, \ud4532, \ud4533 as the stepaveraged probability of taking the joint action in optimal NE, suboptimal NE and zero-reward, respectively. We show that to make the joint \ud444function equivalently optimal, there is a necessary condition that \ud4531, \ud4532, \ud4533 should follow. When\ud447is not large enough, this condition cannot be satis\ufb01ed. Detailed proof is in Appendix A.2. \fFigure 2: MESA\u2019s meta-learning framework. In the meta-training stage, MESA learns exploration policies to cover the highrewarding subspace. In the meta-testing stage, MESA uses the learned exploration policies to assist the learning in an unseen task. Each color corresponds to a di\ufb00erent task, and the colored points represent the high-rewarding joint state-action pairs collected in that task. Next, we consider the case of another popular exploration paradigm, \ud716-greedy exploration. Theorem 4.3 (\ud716-greedy exploration). Assume \ud6ff\u22641 32,\ud448\u22654,\ud448\u2265 \ud70e\ud464\ud70e\u22121 \ud452. In the climb game \ud43a\ud453(2, 0,\ud448), under \ud716-greedy exploration with \ufb01xed \ud716\u22641 2, \ud45eJ (\ud447) (W, b, c,\ud451) will become equivalently optimal only after \ud447= \u03a9(|A|\ud6ff\u22121\ud716\u22121) steps. If \ud716(\ud461) = 1/\ud461, it requires \ud447= exp \u0000\u03a9 \u0000|A|\ud6ff\u22121\u0001\u0001 exploration steps to be equivalently optimal. The proof is similar to that of Theorem 4.2 (detailed in Appendix A.3). By comparing 4.2 and 4.3, \ud716-greedy results in even poorer exploration e\ufb03ciency than uniform exploration. Note the\ud716-greedy strategy is training policy speci\ufb01c, i.e., the exploration behavior varies as the training policy changes. Theorem 4.3 suggests that when the policy is sub-optimal, the induced \ud716-greedy exploration strategy can be even worse than uniform exploration. Hence, it can be bene\ufb01cial to adopt a separate exploration independent from the training policy. 
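The intuition behind Theorems 4.2 and 4.3 can be checked with a quick simulation: under uniform or ε-greedy sampling (with the greedy joint action sitting at a sub-optimal NE, the regime Theorem 4.3 is concerned with), the fraction of experience that actually contains the Pareto-optimal joint action (u, u) scales like 1/|A|, while sub-optimal-NE and zero-reward samples dominate. A small Monte-Carlo sketch, purely illustrative and not the maximum-likelihood estimator analyzed above:

```python
import numpy as np

rng = np.random.default_rng(0)
U, u, T = 10, 0, 100_000          # 2-player climb game G_f(2, u, U)

def classify(a1, a2):
    if a1 == u and a2 == u:
        return "optimal NE"
    if a1 != u and a2 != u:
        return "sub-optimal NE"
    return "zero reward"

def rollout(policy, T):
    counts = {"optimal NE": 0, "sub-optimal NE": 0, "zero reward": 0}
    for _ in range(T):
        counts[classify(*policy())] += 1
    return {k: v / T for k, v in counts.items()}

uniform = lambda: rng.integers(U, size=2)

# epsilon-greedy around a sub-optimal greedy joint action (1, 1):
# each agent independently explores uniformly with probability eps.
eps = 0.5
def eps_greedy():
    greedy = np.array([1, 1])
    explore = rng.random(2) < eps
    return np.where(explore, rng.integers(U, size=2), greedy)

print("uniform   :", rollout(uniform, T))
print("eps-greedy:", rollout(eps_greedy, T))
# Uniform hits the optimal joint action with probability 1/U^2 = 1%;
# eps-greedy (eps = 0.5) with (eps/U)^2 = 0.25%.  In both cases the data is
# dominated by sub-optimal-NE and zero-reward samples, matching the theorems.
```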
The above analysis shows that common exploration strategies like uniform exploration or \ud716-greedy exploration are ine\ufb03cient for such a simple game and the main reason is that it requires coordination between di\ufb00erent agents to reach high-rewarding states, but naive exploration strategies lack such cooperation. 4.2 Structured Exploration We will show that it is possible to design a better exploration strategy with some prior knowledge of the climb game structure. Consider a speci\ufb01c structuredexploration strategy \ud45d(\ud461) \ud452 (\ud456, \ud457) = \ud448\u22121 \u0002 1\ud456=\ud457 \u0003 , where both agents always choose the same action. With such a strategy, we can quickly \ufb01nd the optimal solution to the game. More formally, we have the following theorem. Theorem 4.4 (structuredexploration). In the climb game\ud43a\ud453(2, 0,\ud448), under structured exploration \ud45d(\ud461) \ud452 (\ud456, \ud457) = \ud448\u22121 \u0002 1\ud456=\ud457 \u0003 ,\ud45eJ (\ud447) (W, b, c,\ud451) is equivalently optimal at step \ud447= \ud442(1). Theorem 4.4 shows the e\ufb03ciency of exploration can be greatly improved if the exploration strategy captures a proper structure of the problem, i.e., all agents taking the same action. We further remark that by considering a set of similar climb games G, where G = {\ud43a\ud453(2,\ud462,\ud448)}\ud448\u22121 \ud462=0 , the structured exploration strategy \ud45d(\ud461) \ud452 (\ud456, \ud457) = \ud448\u22121 \u0002 1\ud456=\ud457 \u0003 can be interpreted as a uniform distribution over the optimal policies of this game set G. This interesting fact suggests that we can \ufb01rst collect a set of similarly structuredgames and then derive e\ufb00ective exploration strategies from these similar games. Once a set of structuredexploration strategies are collected,we can further adopt them for fast learning in a novel game with a similar problem structure. We take the inspiration here and develop a general meta-exploration algorithm in the next section. 5 METHOD We detail our method Cooperative Meta-Exploration in Multi-Agent Learning through Exploiting State-Action Space Structure (MESA) for cooperative multi-agent learning. As shown in Figure 2, MESA consists of a meta-training stage (Algo. 1) and a meta-testing stage (Algo. 2). In the meta-training stage, MESA learns exploration policies by training in a batch of training tasks that share intrinsic structuresin the state-action space. In the meta-testing stage, MESA utilizes the meta-learned exploration policies to assist learning in an unseen task sampled from the distribution of the training tasks. 5.1 Meta-Training The meta-training stage contains two steps: 1) identify the highrewarding state-action subspace, and 2) train a set of exploration policies using the subspace-induced rewards. 5.1.1 Identifying High-Rewarding Joint State-Action Subspace. For each training task T \ud456, we collect experiences D\ud456= {(\ud460\ud461, \ud482\ud461,\ud45f\ud461,\ud460\ud461+1)}. If the reward \ud45f\ud461is higher than a threshold \ud445\u2605, we call this joint state-action pair (\ud460\ud461, \ud482\ud461) valuable and store it into a dataset M\u2217. 
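A minimal sketch of this filtering step in Section 5.1.1, assuming the collected experience is available as flat arrays; the function and variable names (including collect_experience) are illustrative, not the paper's code:

```python
import numpy as np

def identify_valuable_pairs(states, actions, rewards, r_star):
    """Collect joint state-action pairs whose reward reaches the threshold R*.

    states : (T, state_dim) array of joint states s_t
    actions: (T, act_dim)   array of joint actions a_t
    rewards: (T,)           array of shared rewards r_t
    Returns a (K, state_dim + act_dim) array, i.e. a slice of the dataset M*.
    """
    valuable = rewards >= r_star
    return np.concatenate([states[valuable], actions[valuable]], axis=1)

# Pooling over all B training tasks (collect_experience is a placeholder for
# rolling out a policy in one task):
# M_star = np.concatenate(
#     [identify_valuable_pairs(*collect_experience(task), r_star=1.0)
#      for task in training_tasks], axis=0)
```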
For goal-oriented tasks where \ud45f= 1\ud460=\ud454\ud45c\ud44e\ud459, the threshold can be \fAlgorithm 1 MESA: Meta-Training Input: Meta-training tasks {T \ud456}\ud435 \ud456=1 \u223c\ud45d(T), o\ufb00-policy MARL algorithm \ud453, distance metric \u2225\u00b7 \u2225F Parameter: #policies \ud438, threshold \ud445\u2605, horizon \u210e Output: Exploration policies {\ud745\ud456 \ud452}\ud438 \ud456=1 1: M\u2217\u2190\u2205, global pseudo-count \u02c6 \ud441\u21900 2: for i = 1 to B do 3: Initialize policy \ud745\ud703 4: Train \ud745\ud703with \ud453and collect dataset \ud437\ud456= {(\ud494\ud461, \ud482\ud461,\ud45f\ud461, \ud494\ud461+1)} 5: M\u2217\u2190M\u2217\u222a{\ud70f| \ud445(\ud70f) \u2265\ud445\u2605,\ud70f\u2208\ud437\ud456} 6: end for 7: for i = 1 to E do 8: Initialize exploration policy \ud745\ud456 \ud452 9: while \ud745\ud456 \ud452\u2019s training not converged do 10: Initialize \ud441as \u02c6 \ud441, D \u2190\u2205 11: for t = 0 to h-1 do 12: Execute \ud482\ud461\u223c\ud745\ud456 \ud452(\ud460\ud461), and observe (\ud460\ud461, \ud482\ud495,\ud45f\ud461,\ud460\ud461+1) 13: Calculate \u02c6 \ud45f\ud461based on Eq. 5 or 6 14: Store (\ud460\ud461, \ud482\ud461, \u02c6 \ud45f\ud461,\ud460\ud461+1) into D 15: \ud441(\ud719(\ud460\ud461, \ud482\ud461)) \u2190\ud441(\ud719(\ud460\ud461, \ud482\ud461)) + 1 16: end for 17: Optimize policy \ud745\ud456 \ud452with algorithm \ud453 18: end while 19: Update \u02c6 \ud441using D 20: end for 21: return {\ud745\ud456 \ud452}\ud438 \ud456=1 set as \ud445\u2605= 1. For other tasks, the threshold can be set as a hyperparameter, for example, a certain percentile of all collectedrewards. A smaller \ud445\u2605results in a larger identi\ufb01ed subspace but a less e\ufb03cient exploration policy. The data stored in M\u2217is highly diversi\ufb01ed since it comes from all the \ud435training tasks, which are expected to share an intrinsic structure. We expect that with this intrinsic structure, the highrewarding joint state-action pairs fall into some low-dimensional subspace. In the simplest case, they may form several dense clusters, or many of them lie in a hyperplane. Even if the subspace is not easily interpretable to humans, it may still be e\ufb00ectively \u201ccovered\u201d by a set of exploration policies (to be found in the subsequent step). We also explicitly deal with the reward sparsity problem by assigning a positive reward to a joint state-action pair (\ud460\ud461, \ud482\ud461) if it has zero reward but leads to a valuable state-action pair (\ud460\ud461\u2032, \ud482\ud461\u2032) later in the same trajectory. We also put these relabeled pairs into the dataset M\u2217. Let \ud461\u2032 = arg min\ud461\u2032>\ud461[\ud45f\ud461\u2032 > 0], we therefore have the following densi\ufb01ed reward function \u02c6 \ud45f\ud461= ( \ud6fe\ud461\u2032\u2212\ud461\u00b7 \ud45f\ud461\u2032, \ud45f\ud461= 0, \ud45f\ud461, \ud45f\ud461> 0. (5) 5.1.2 Learning Exploration Policies. In this step, we aim to learn a diverse set of exploration policies to cover the identi\ufb01ed highrewarding joint state-action subspace. 
We use a distance metric \u2225\u00b7 \u2225F (e.g., \ud4592 distance) to determine whether two state-action Algorithm 2 MESA: Meta-Testing Input: Test task \u02c6 T , meta-trained exploration policies {\ud745\ud456 \ud452}\ud438 \ud456=1, o\ufb00-policy MARL algorithm \ud453 Parameter: horizon \u210e Output: Policy \ud745\ud703for task \u02c6 T 1: Initialize policy \ud745\ud703, D = \u2205, annealing \ud716 2: while not converged do 3: Determine \ud45d\ud452under annealing probability schedule \ud716 4: Choose policy to perform rollouts by \ud745\ud451= ( \ud745\ud452\u223cU({\ud745\ud456 \ud452}\ud438 \ud456=1), w.p. \ud45d\ud452 \ud745\ud703, otherwise. 5: for t = 0 to h-1 do 6: Execute \ud482\ud461\u223c\ud745\ud451(\ud460\ud461). 7: Observe transition (\ud460\ud461, \ud482\ud461,\ud45f\ud461,\ud460\ud461+1). 8: D \u2190D \u222a(\ud460\ud461, \ud482\ud461,\ud45f\ud461,\ud460\ud461+1) 9: end for 10: Optimize \ud745\ud703with algorithm \ud453on replay bu\ufb00er D 11: end while 12: return \ud745\ud703 pairs are close. Then if a visited joint state-action pair (\ud460, \ud482) is close enough to the identi\ufb01ed subspace M\u2217, i.e., min\ud451\u2208M\u2217\u2225(\ud460, \ud482),\ud451\u2225F < \ud716, it would be assigned a derived positive reward \u02c6 \ud45f. Increasing the value of \ud435in the collection step would generally result in a more accurate distance measurement. However, this comes at the cost of making the minimization calculation more computationally expensive. To encourage a broader coverage of the subspace and to avoid mode collapse, the reward assignment scheme ensures that repeated visits to similar joint state-action pairs within one trajectory would result in a decreasing reward for each visit. Similar to [37], we adopt a pseudo-count function \ud441with a hash function \ud719(\ud494, \ud482) to generalize between similar joint state-action pairs. We then apply a decreasing function \ud453\ud451: N \u21a6\u2192[0, 1] on the trajectory-level pseudocount \ud441(\ud719((\ud460, \ud482)). The resulted reward assignment scheme is de\ufb01ned as follows: \u02dc \ud45f\ud461= \u02c6 \ud45f\ud461\ud453\ud451(\ud441(\ud719((\ud460\ud461, \ud482\ud461))) h 1min\ud451\u2208M\u2217\u2225(\ud460\ud461,\ud482\ud461),\ud451\u2225F<\ud716 i (6) After one exploration policy is trained with this reward, we will train a new policy to cover the part of the identi\ufb01ed subspace that has not yet been covered. This is achieved by having a global pseudo-count \u02c6 \ud441which is updated after training each exploration policy using its visitation counts and is maintained throughout the training of all exploration policies. This iterative process continues until the subspace is well-covered by the set of trained exploration policies. 5.2 Meta-Testing During meta-testing, MESA uses the meta-learned exploration policies {\ud745\ud456 \ud452}\ud438 \ud456=1 to assist the training of any generic o\ufb00-policy MARL algorithm on a test-time task \u02c6 T . Speci\ufb01cally, for each rolloutepisode, \fwe choose with probability \ud716to execute one uniformly sampled exploration policy \ud745\ud452\u223cU({\ud745\ud456 \ud452}\ud438 \ud456=1). For the best empirical performance, we also adopt an annealing schedule \ud716: \ud447\u21a6\u2192[0, 1] so that the exploration policies provide more rollouts at the initial stage of the training and are gradually turned o\ufb00later. Here we further provide some analysis of deploying the metalearned exploration policy on unseen testing tasks. 
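The meta-testing rollout scheme of Section 5.2 (Algorithm 2) reduces to an ε-mixture between the task policy and a uniformly sampled exploration policy. A minimal sketch follows, where the concrete annealing schedule is an assumption on our part (the paper only states that ε is annealed):

```python
import random

def choose_rollout_policy(task_policy, exploration_policies, eps_t):
    """With probability eps_t act with a uniformly sampled meta-learned
    exploration policy, otherwise with the current task policy."""
    if random.random() < eps_t:
        return random.choice(exploration_policies)
    return task_policy

def linear_anneal(step, total_steps, eps_start=1.0, eps_end=0.0):
    """One possible schedule eps: T -> [0, 1]; illustrative only."""
    frac = min(step / total_steps, 1.0)
    return eps_start + frac * (eps_end - eps_start)

# Per episode: policy = choose_rollout_policy(pi_theta, pi_e_list,
#                                             linear_anneal(t, total_steps))
# Transitions collected under either policy go into the same replay buffer,
# on which the off-policy MARL learner (e.g., MADDPG) is updated.
```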
Theorem 5.1 (Exploration during Meta-Testing). Consider goaloriented tasks with goal space G \u2286S. Assume the training and testing goals are sampled from the distribution \ud45d(\ud465) on G, and the dataset has \ud441i.i.d. goals sampled from a distribution \ud45e(\ud465) on S. If the exploration policy generalizes to explore \ud716nearby goals for every training sample, we have that the testing goal is not explored with probability at most \ud443fail \u2248 \u222b \ud45d(\ud465)(1 \u2212\ud716\ud45e(\ud465))\ud441\ud451\ud465\u2264\ud442 \u0012\ud43e\ud43f(\ud45d||\ud45e) + H (\ud45d) log(\ud716\ud441) \u0013 . (7) Theorem 5.1 shows that the good performance of meta-learned exploration policy relies on 1) a small di\ufb00erence between the training and testing distribution; and 2) a structured,e.g., low-dimensional, high-rewarding subspace G to reduce H (\ud45d). And when uniformly sampling the training data, \ud43e\ud43f(\ud45d||\ud45e) is bounded by log \u03a9G in our method. This term, however, can be up to log \u03a9S with an uncoordinated exploration on the joint state space S, where \u03a9S can be exponentially larger than \u03a9G. 5.3 Implementation Detail of MESA We choose MADDPG, following the centralized training with decentralized execution (CTDE) paradigm, as the o\ufb00-policy MARL algorithm for MESA since it can be applied to both discrete and continuous action space, as shown in its original paper [23]. We use a clustering mapping \ud453\ud450as the hash function \ud719so that the dataset M\u2217is clustered into \ud436clusters de\ufb01ned by the clustering function \ud453\ud450: S \u00d7 A \u21a6\u2192[\ud436]. The cluster mapping is implemented with the KMeans clustering algorithm [22]. The number of exploration policies to learn is viewed as a hyperparameter. See the Appendix for detailed hyperparameter settings. 6 EXPERIMENTS Our experimental evaluation aims to answer the following questions: (1) Are the meta-learned exploration policies capable of achieving more e\ufb03cient exploration during meta-testing on newly sampled tasks in matrix climb game variants (Section 6.2) and highdimensional domains (Section 6.3 and 6.4)? (2) Can these metalearned exploration policies successfully generalize to unseen testtime tasks from a more challenging (e.g., with more agents) test task distribution which is di\ufb00erent the training task distribution (Section 6.5)? 6.1 Evaluation Setup Compared Methods. We compare to 3 multi-agent reinforcement learning algorithms: MADDPG [23], MAPPO [41], and QMIX [33], to measure the e\ufb00ectiveness of our exploration policies. We also compare to 3 multi-agent exploration algorithm: MAVEN [24], MAPPO with RND exploration [5], and EMC [43]. To compare with baselines that adopt a similar meta-training stage, we add two naive Figure 3: Learning curve of the two climb game variants w.r.t number of environment steps. The return is averaged over timesteps for the multi-stage games. The dotted lines indicate the suboptimal return of 0.5 (purple) and the optimal return 1 (blue) for each agent. meta-learning baselines, including one with an unconditioned shared policy, which is trained over all training tasks, and one with a goal-conditioned policy, which takes the target landmarks as parts of the input. We also adapt the single-agent meta-RL algorithm MAESN [14] to the multi-agent setting. Finally, we adapt the singleagent C-BET [26] to multi-agent settings based on MAPPO. The training and testing tasks are as de\ufb01ned in Section 6.1. 
Please refer to the Appendix for more visualization and experimental results. Environments.We experiment on the Climb Game, Multi-agent Particle Environment (MPE) [23], and multi-agent MuJoCo [29], on which generating a distribution of meta-training tasks \ud45d(T) is feasible. 6.2 Climb Game Variants First, we consider task spaces consisting of variants of the aforementioned climb games. We extend previous climb game to (1) one-step climb game\ud43a(\ud45b,\ud458,\ud462,\ud448), which is a\ud45b-player game with \ud448actions for each player, and the joint reward is 1 if #\ud462= \ud458, 1 \u2212\ud6ff if #\ud462= 0, and 0 otherwise. The task space T one \ud448 consists of all one-step climb games that contain two players and \ud448actions; (2) multi-stage climb game, which is an \ud446-stage game where each stage is a one-stage climb game with the same number of available actions. Each stage \ud461has its own con\ufb01guration (\ud458\ud461,\ud462\ud461) of the one-stage climb game \ud43a(2,\ud458\ud461,\ud462\ud461,\ud448). Agents observe the history of joint actions and the current stage \ud461. The task space T multi \ud446,\ud448 consists of all multi-stage climb games with \ud446stages and \ud448actions. In our experiments, we use T one 10 and T multi 5,10 as the task space for the one-step and multi-stage Climb Games. We choose uniformly at random ten training tasks and three di\ufb00erent test tasks from the task space T , and we keep \ud6ff= 1 2 as in the classic climb games. Results on Climb Game Variants. For the matrix games, we additionally compare with MA-MAESN, which is our adaptation of the original single-agent meta-learning algorithm MAESN [14] to the multi-agent scenario In the single-step matrix game, MESA exhibits better performance, being able to \ufb01nd the optimal reward in some harder tasks when \ud458= 2, while other baselines are stuck at the sub-optimal reward for almost all tasks. \fFigure 4: Learning curves of MESA and the compared baselines w.r.t the number of environment interactions during the metatesting stage in the MPE domain and the multi-agent MuJoCo environment Swimmer. The two dotted lines indicate the ideal optimal (purple) and sub-optimal (blue) return summed over timesteps. A return above the blue line would typically indicate that the agents are able to learn the optimal strategy. In the more challenging 10-action multi-stage game where task space is exponentially larger, MESA outperforms all compared algorithms by a large margin. With the help of the exploration policies that have learned the high-rewarding joint action pairs, MESA quickly learns the optimal joint action for each stage and avoids being stuck at the sub-optimal. Figure 5: Visualizations of a 2-player 3-landmark MPE climb game. 6.3 MPE Domain We extend the matrix climb games to MPE [23], which has a continuous high-dimensional state space. Agents must \ufb01rst learn to reach the landmarks under sparse rewards and then learn to play the climb games optimally. In a MPE Climb Game \u00af \ud43a(\ud45b,\ud458,\ud462,\ud448, {\ud43f\ud457}\ud448\u22121 0 ) (Figure 5), there are \ud448non-overlapping landmarks with positions {\ud43f\ud457}\ud448\u22121 \ud457=0 . The reward is non-zero only when every agent is on some landmark. Agents will be given a reward of 1 if there are exactly \ud458agents located on the \ud462-th landmark (target landmark), and a suboptimal reward of 1 \u2212\ud6ffwill be given when none of the agents are located on the target landmark. 
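For reference, the one-step task space T^one_U used in these experiments can be instantiated by sampling the hidden configuration (k, u) of G(n, k, u, U); the sketch below is illustrative and not the paper's task generator:

```python
import numpy as np

rng = np.random.default_rng(0)

def one_step_reward(joint_action, k, u, delta=0.5):
    """Reward of the one-step climb game G(n, k, u, U): 1 if exactly k players
    play u, 1 - delta if none do, and 0 otherwise."""
    count_u = sum(1 for a in joint_action if a == u)
    if count_u == k:
        return 1.0
    if count_u == 0:
        return 1.0 - delta
    return 0.0

def sample_task(n=2, U=10):
    """Draw one task from T^one_U by sampling its hidden configuration (k, u)."""
    return {"k": int(rng.integers(1, n + 1)), "u": int(rng.integers(U))}

train_tasks = [sample_task() for _ in range(10)]   # ten training tasks
test_tasks = [sample_task() for _ in range(3)]     # three held-out test tasks
```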
Otherwise, the reward will be zero. As before, \ud462 and \ud458are not present in the observation and can only be inferred from the received reward. A task space T MPE \ud45b,\ud448 consists of all MPE climb games with \ud45bplayers and \ud448landmarks. We evaluate MESA Figure 6: Visualization of structured exploration behaviors discovered by the meta-trained exploration policy in MESA. on the 2-agent tasks (T MPE 2,5 and T MPE 2,6 ) and 3-agent tasks (T MPE 3,5 and T MPE 3,6 ) while \ufb01xing \ud458= 2. Each sampled training and testing task has a di\ufb00erent con\ufb01guration of landmark positions. Adaptation Performance in MPE. We show in Figure 4 the learning curve of our approach MESA compared with the aforementioned baseline methods. MESA outperforms the compared baselines by a large margin, being able to coordinately reach the task landmark quickly, as evidenced by the near-optimal reward. Even when combined with RND-based exploration, MAPPO easily sticks to the sub-optimal equilibrium. Value-based methods like QMIX and MAVEN are unable to learn the correct \ud444-function because the reward is quite sparse before agents can consistently move themselves to a landmark. EMC sometimes jumps out of the suboptimal equilibrium with curiosity-driven exploration, but the performance is not robust. Furthermore, as the meta-learning baselines only learn the sub-optimal behavior during meta-training, they fail to learn the optimal equilibrium during test time and quickly converge to the suboptimal equilibrium. Visualization of Exploration Policies. To answer question (2), we visualize the learned exploration policies in a 2-agent 3landmark MPE task in Figure 6. We can see that the learned exploration policy consecutively visited the 3 landmarks within 20 timesteps in one trajectory. 6.4 Multi-agent MuJoCo Environments We also extend the matrix climb games to multi-agent MuJoCo environments [29]. We consider speci\ufb01cally the 2-agent Swimmer environment where each agent is a hinge on the swimmer\u2019s body, and each agent\u2019s action is the amount of torque applied to hinge rotors. The extension considers the angles between the two hinges and the \fbody segments. Each task in the task space is a target angle such that a reward of 1 will be given only if the two angles are both close to the target angles, a 0.5 suboptimal reward is given if none of two angles are close to the target, and a reward of 0 if only one of the two angles are close. This multi-agent environment is extremely hard as agents are very likely to converge to the suboptimal reward of 0.5, which is con\ufb01rmed by the results that none of the baselines were able to \ufb01nd the optimal equilibrium in Figure 4. Therefore, MESA vastly outperforms all the compared baselines by learning a \ufb01nal policy that frequently reaches the target angle. 6.5 Generalization Performance of MESA In this section, our goal is to evaluate the generalization performance of the meta-trained exploration policy in scenarios where the meta-training and meta-testing task distributions are di\ufb00erent. In particular, we focus on the setting where the test-time tasks are more challenging than the training-time tasks and examine how an exploration policy learned from simpler tasks can boost training performances on harder tasks. The test task here is uniform on the 3-agent high-di\ufb03culty MPE Climb games. 
The task di\ufb03culty is de\ufb01ned by the average pairwise distances between the landmark positions and the initial positions of the agents. We consider two simpler training task distributions, including (1) a 2-agent setting with the same di\ufb03culty, and (2) a 3-agent setting with a lower di\ufb03culty. In both settings, the metatraining tasks are less challenging than the test-time tasks. For evaluation, the meta-trained exploration policy from each setting will be directly applied to assist the training on the more challenging test-time tasks, without any \ufb01ne-tuning. We modi\ufb01ed the neural network architecture by adopting an attention layer in both actor and critic to ensure they are compatible with a varying number of agents. The attention mechanism acts as an aggregation function between the relative positions of the other agents and its own relative position to the landmarks to handle the varying observation dimensions. Additionally, we employed behavior cloning (BC) [30] on the rollouts of the exploration policies as a warm-up to accelerate learning of the \ufb01nal policy. In Figure 7, we present the generalization results from our study. We evaluate the zero-shot generalization ability of the meta-exploration policy by measuring the average number of high-reward transitions hit in a test task randomly sampled from the test task distribution. As shown on the left of Figure 7, the meta-exploration policies are able to explore the test-time tasks much more e\ufb03ciently than a random exploration policy, even on test-time tasks that are drawn from a harder task distribution. Notably, the generalization ability increases with the number of exploration policies (\ud435). Using the meta-exploration policies trained on the simpler tasks, MESA is able to consistently reach the high-reward region in the unseen hard 3-agent tasks, as opposed to the vanilla MADDPG algorithm that only learns the sub-optimal equilibrium. We also see that with an increasing number of meta-exploration policies, the performance of MESA increases, but the improvement becomes marginal, while the meta-training time increases linearly with E. Figure 7: Generalization results of MESA on the hard 3-agent MPE Climb game. Left: Zero-shot generalizability of the meta-exploration policies, measured by the number of visitations on high-reward transitions per episode on the test tasks. The purple dotted line corresponds to the random exploration policy. The plot shows the concatenated training curves for all exploration policies. Right: Learning curves of MESA under di\ufb00erent settings using the meta-exploration policies trained on the two di\ufb00erent training-task distributions. 7" + }, + { + "url": "http://arxiv.org/abs/2303.15175v2", + "title": "Sparse Feedback Controller: From Open-loop Solution to Closed-loop Realization", + "abstract": "In this paper, we explore the discrete time sparse feedback control for a\nlinear invariant system, where the proposed optimal feedback controller enjoys\ninput sparsity by using a dynamic linear compensator, i.e., the components of\nfeedback control signal having the smallest possible nonzero values. The\nresulting augmented dynamics ensures closed-loop stability, which infers sparse\nfeedback controller from open-loop solution to closed-loop realization. In\nparticular, we show that the implemented sparse feedback (closed-loop) control\nsolution is equivalent to the original sparse (open-loop) control solution\nunder a specified basis. 
We then extend the dynamic compensator to a\nfeedforward tracking control problem. Finally, numerical examples demonstrate\nthe effectiveness of proposed control approach.", + "authors": "Zhicheng Zhang, Yasumasa Fujisaki", + "published": "2023-03-27", + "updated": "2023-07-29", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.SY", + "math.OC", + "93-XX Systems theory, control, 90C25 Convex programming" + ], + "main_content": "Introduction Sparse control is closely related to sparse optimization [1], which penalizes sparsity on controller so as to schedule resource-aware allocation. In control design, the sparsitypromoting idea has thrived in various directions to distributed control [2\u20134], tracking control [5], and predictive control [6\u20138]. When the sparsity is imposed on the control structure (i.e., structured sparsity) whose controller depends on a static state feedback [9\u201311], then the sparse control is recast into a distributed control, which attempts to reduce the number of communication links [2] in networked control system. Another appealing alternative is to penalize sparsity on control signal (i.e., input sparsity) to implement a sparse control that maximizes the time duration over which the control value is exactly zero [12]. In this paper, we are interested in the generation of control signals, which emphasizes on the latter sparse control, namely, reducing control effort or fuel consumption, also known as \u201c\u21131 optimal control\u201d or \u201cmaximum hands-o\ufb00 control\u201d [6,13]. The optimal control design of sparse signals is of great signi\ufb01cance for control system and directly a\ufb00ects the dynamic process of the system. Many practical systems of interest are dependent on a feedback mechanism to achieve closed-loop stability. However, closed-loop realization is more challenging to optimal control problem because Emails: {zhicheng-zhang;fujisaki}@ist.osaka-u.ac.jp \fdetermining the feedback gain (matrix) is a non-trivial task [14]. Indeed, in closed-loop sparse control scenarios, almost all existing literature has been focused on discussing \u201cstructured sparsity\u201d [2\u20134] by optimizing linear quadratic state feedback cost [9\u201311] rather than pursuing our expected sparse control inputs (i.e., \u21131 optimal control) [6,15]. Furthermore, most successful stories on sparse control taking \u21131 cost of discrete (resp., L1 cost of continuous) systems have been extensively treated in open-loop solutions [6,12,13,16]. These recent advances motivate us to study the closed-loop \u21131 optimal control problem. Although \u201creal-time control\u201d bridges the gap between the open-loop and closedloop solutions, schemes such as self-triggered sparse control [12, Sec. VI] and sparse predictive control [6\u20138,17] can, and often do, emerge feedback solutions. On the other hand, these iterative feedback algorithms perform online optimization, leading to the computationally burden, especially when the decision variable is high dimension. Neither exploring sparsity on the structure of feedback gain matrix or exploiting real-time control, we immediately promotes sparsity on the control inputs with a closed-loop response, called sparse feedback control realization. Inspired by seminal works [14,18], a relatively optimal control technique paves the way towards the open-loop solution to the closed-loop solution by means of linear implementation. 
In this paper, we focus on closed-loop realization for sparse optimal control design for a discrete linear time invariant system, where the state feedback controller is linear dynamic, as well as enjoys input sparsity. We become aware of the result [19] that induces a desired sparse input by taking a row-sparsity on the static state feedback gain, which implies a structured sparsity on channels. In contrast, we here leverage a dynamic state feedback controller and keep the standard optimal control framework. We believe that such a direct sparse optimization of control signal is more convenient to synthesizing sparse feedback control. One of its bene\ufb01ts is that it relies solely on o\ufb04ine optimization. Thus, the proposed controller can often o\ufb00er a signi\ufb01cant computational saving and control e\ufb00ort minimization. Encouraged by simple theoretical and numerical results by a position paper [20] of the present authors, we extend the results to a more comprehensive version, including problem setup, proofs, tracking control, and simulations. The main contributions of this article are summarized as follows. \u2022 This paper gives feedback realization of sparse optimal control via a dynamic linear compensator. In other words, the sparse feedback controller is derived from open-loop solutions of sparse optimal control. Besides, we propose the computationally tractable analytical and explicit feedback solution for the sparse control problem (Problems 1 and 2). \u2022 We provide the stability, optimaility, and sparsity for the closed-loop augmented system, and display that the designed sparse feedback control is essentially a deadbeat control (Theorem 3.1 and Corollary 3.3). \u2022 In particular, we show that an equivalence for the open-loop sparse control and the closed-loop sparse control under a speci\ufb01ed basis (Corollary 3.2). \u2022 Furthermore, we demonstrate that the sparse feedback control can in fact be extended successfully for tracking problem (Lemma 4.1 and Proposition 4.2). This paper is organized as follows. In Section 2, we introduce the problem formulation and the basic preliminaries. Section 3 gives the main result for stabilizing sparse feedback control using a dynamic linear compensator, which can be divided into two steps, that is, sparse optimization and feedback realization, respectively. Section 4 extends the result to a tracking problem by devising a dynamic tracking controller. The numerical examples are illustrated in Section 5. Section 6 concludes this paper. 2 \fNotation. Throughout this paper, let R, Rn, and Rn\u00d7m denote the sets of real numbers, n dimension of real vectors, and n \u00d7 m size real matrices, respectively. We use In (or 0n) to denote the identity (or zero) matrix of size n \u00d7 n, 0n\u00d7m to denote the zero matrix of size n \u00d7 m; and for brevity, we sometimes abbreviate I (or 0) to represent the identify (or zero) matrix with appropriate size. Let 1n = [1 \u00b7 \u00b7 \u00b7 1]\u22a4\u2208Rn be the all one vector, and ei \u2208Rq stands for an q-tuple basis vector whose all entries equal to 0, except the ith, which is 1. Given a vector x = [x1 x2 \u00b7 \u00b7 \u00b7 xn]\u22a4\u2208Rn, we de\ufb01ne the \u21131 and \u21132 norms, respectively, by \u2225x\u22251 = Pn i=1 |xi|, \u2225x\u22252 = pPn i=1 |xi|2. Similarity, the \u21131 norm of a matrix X \u2208Rn\u00d7m is de\ufb01ned by \u2225X\u22251 = Pn i=1 Pm j=1 |xij|. 2. Problem formulation 2.1. 
Review of sparse optimal control Consider a discrete linear time invariant (LTI) system described by x(t + 1) = Ax(t) + Bu(t), x(0) = x0, y(t) = Cx(t) + Du(t), (1) where u(t) \u2208Rm is the control input with m \u2264n, x(t) \u2208Rn is the state with an initial value x0, y(t) \u2208Rp is the output, and A, B, C, and D are real constant matrices of appropriate sizes. Throughout this paper, we assume that the pair (A, B) is reachable. In this paper, we are interested in sparse optimal control problem, and the control objective is to seek a control sequence {u(t)}N\u22121 t=0 such that it drives the resultant state x(t) from an initial state x(0) = x0 to the origin in a \ufb01nite N steps (i.e., x(N) = 0) with minimum or sparse control e\ufb00ort. As indicated in [1], an exact sparsity is achieved by penalizing an \u21130 \u201cquasi-norm\u201d on decision variables. However, computing the \u21130 norm precisely is challenging due to its non-convex and non-smooth nature, often resulting in an NP-hard problem. It suggests replacing the \u21130 cost with convex relaxation using an \u21131 norm, which still generates the sparse solution. In particular, the restricted isometry property reveals an equivalence between \u21130 (sparse) optimal control and \u21131 optimal control for discrete systems [6]. In this context, we shift the idea from compressed sensing to sparse control problem, which seeks an \u201copen-loop\u201d \u21131 optimal control action u\u2217in control system [6,13] de\ufb01ned as u\u2217= arg min u\u2208U \u2225u\u22251 = arg min u\u2208U N\u22121 X t=0 \u2225u(t)\u22251, (2) where u = \u0002 u\u22a4(0) u\u22a4(1) \u00b7 \u00b7 \u00b7 u\u22a4(N \u22121) \u0003\u22a4\u2208RmN, and \u2225u\u22251 indicates the \u21131 norm of input vector u that sums the absolute values of its elements. Meanwhile, a feasible control set is described by U = {u \u2208RmN : \u03a6Nu = \u2212ANx0}, in which \u03a6N = [AN\u22121B \u00b7 \u00b7 \u00b7 AB B] \u2208Rn\u00d7mN is an N step reachability matrix and satis\ufb01es full row rank, i.e., rank(\u03a6N) = n. Occasionally, the state and input constraints are necessarily taken into account, for instance, y(t) \u2208Y, where Y is a convex and closed set. Besides, the horizon N should be su\ufb03ciently long so that the admissible set of u is nonempty. 3 \f2.2. Dynamic linear compensator Before proceeding with the \u201cfeedback realization\u201d, a compensator K which we demand to design is a dynamic and linear state feedback controller, depending on the evolution K : z(t + 1) = Fz(t) + Gx(t), z(0) = 0, u(t) = Hz(t) + Kx(t), (3) where z(t) is the state of the compensator and F, G, H, and K are real matrices of compatible sizes. Note that we set the initial state z(0) of the compensator as zero (i.e., z(0) = 0). The advantages of such a dynamic compensator K of (3) are threefold. First, it brings a linear dynamic fashion to the sparse feedback control realization, which allows for computationally tractable compensator gain matrices. Second, it promotes input/temporal sparsity [6,12,13], rather than structured/spatial sparsity for the controller [2\u20134,9\u201311]. Lastly, it ensures internal stability for the closed-loop system. Notice that the requirement z(0) = 0 is not overly restrictive in our context. In fact, in the following section, we further impose z(N) = 0, along with x(N) = 0, as part of the sparse control implementation. This means that we consider the closedloop system through deadbeat control. 
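For concreteness, simulating the plant (1) in closed loop with the compensator (3) only requires the four gain matrices; a minimal sketch assuming F, G, H, K are already available as numpy arrays of compatible sizes:

```python
import numpy as np

def closed_loop_rollout(A, B, F, G, H, K, x0, N):
    """Roll out x(t+1) = A x(t) + B u(t) under the dynamic compensator (3):
    u(t) = H z(t) + K x(t),  z(t+1) = F z(t) + G x(t),  z(0) = 0."""
    x = np.array(x0, dtype=float)
    z = np.zeros(F.shape[0])
    xs, us = [x.copy()], []
    for _ in range(N):
        u = H @ z + K @ x
        # the right-hand side is evaluated before assignment, so the z update
        # uses the old x(t), as required by (3)
        x, z = A @ x + B @ u, F @ z + G @ x
        xs.append(x.copy())
        us.append(u)
    return np.array(xs), np.array(us)
```

With the gains constructed in the next section, the closed-loop matrix A + BK turns out to be nilpotent, so this rollout reaches x(N) = 0 and z(N) = 0 exactly.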
In this case, once the stable closed-loop reaches the zero state within a \ufb01nite time, the condition z(0) = 0 is automatically ful\ufb01lled whenever we have another x(0) \u0338= 0 due to a new disturbance. 3. Sparse feedback control realization In this section, we focus on optimal sparse feedback control synthesis from open-loop solution to closed-loop realization. Determining the explicit matrices of F, G, H, K for dynamic compensator (3) is of primary interest in designing sparse feedback controller. Let us formally state the constrained sparse optimal control problem with a general initial condition x0 \u2208X0, where X0 . = {e1, e2, . . . , en} is used to generate all n possible input-state trajectories, and ei \u2208Rn represents the standard basis vector, e.g., e1 = [1 0 \u00b7 \u00b7 \u00b7 0]\u22a4\u2208Rn. The problem is as follows: Find F, G, H, and K of (3) such that (i) the dynamic compensator (3) stabilizes the plant (1) and (ii) for any x0 \u2208X0 with z(0) = 0, the controller (3) generates an input sequence {u(t)}N\u22121 t=0 , which minimizes PN\u22121 t=0 \u2225u(t)\u22251 subject to the terminal constraints x(N) = 0 and z(N) = 0 for a positive integer N, as well as the state and input constraints y(t) \u2208Y, Y = {y \u2208Rp : \u2212s \u2264y(t) \u2264s}, (4) where s \u2208Rp is a given positive vector. Remark 1. The input sparsity can be easily performed by minimizing the convex \u21131 norm instead of the non-convex \u21130 norm, as seen in the previous section. Moreover, although we select only a subset of the initial states of the plant, i.e., x0 \u2208X0, which is su\ufb03cient for our purposes. In fact, suppose that the given constrained sparse control problem is solved. Since the resultant closed-loop system composed of (1) and (3) is linear, it means that for any x0 \u2208Rn with z(0) = 0, the controller (3) generates a linear combination of the input sequences corresponding to x0 = e1, x0 = e2, . . ., 4 \fx0 = en, thereby achieving sparsity and satisfying x(N) = 0 and z(N) = 0. 3.1. Closed-loop augmented system In the celebrated works [14,18], the authors performed the linear implementation built from the relatively optimal technique. This oracle suggests us to investigate an augmented closed-loop system composed of the discrete dynamics (1) and the dynamic compensator (3) of the form \u03c8(t + 1) = (A + BK)\u03c8(t), \u03c8(0) = \u03c80, y(t) = (C + DK)\u03c8(t), (5) where the corresponding state and the gain matrices are given by \u03c8(t) = \u0014 x(t) z(t) \u0015 , \u03c80 = \u0014 x0 0 \u0015 A = \u0014 A 0 0 0 \u0015 , B = \u0014 B 0 0 I \u0015 , K = \u0014 K H G F \u0015 , C = \u0002 C 0\u0003 , D = \u0002 D 0\u0003 . To represent the closed-loop system dynamics in a compact way, we accordingly introduce a stable matrix P, which is an N-Jordan block associated with 0 eigenvalue, de\ufb01ned by P = \u0014 0 0 IN\u22121 0 \u0015 \u2208RN\u00d7N. (6) Based on the previous problem setup, we formulate the constrained sparse optimal control problem as the following sparse optimization. Problem 1 (Sparse Optimization). Find the matrices X \u2208Rn\u00d7nN and U \u2208Rm\u00d7nN such that the obtained U is sparse, which amounts to solve a constrained \u21131 norm input matrix optimization min X,U \u2225U\u22251 s.t. AX + BU = X(P \u2297In), In = X(e1 \u2297In), abs(CX + DU) \u2264s(1n \u22971N)\u22a4, where e1 = [1 0 \u00b7 \u00b7 \u00b7 0]\u22a4\u2208RN and abs(\u00b7) returns the absolute value of each element in a matrix. 
It is mentioned that Problem 1 is a convex optimization, and hence the solution is computationally tractable by means of the o\ufb00-the-shelf packages, such as CVX [21] or YALMIP [22] in MATLAB. Once the open-loop optimal solution (X, U) of Problem 1 is attained, we then proceed the second step, that is to say, we move on to tackling the following sparse feedback realization problem. 5 \fProblem 2 (Feedback Realization). Based on the solution (X, U) of Problem 1, solve a linear equation \u0014 K H G F \u0015 \u0014 X Z \u0015 = \u0014 U V \u0015 (7) with respect to (K, H, G, F) and determine the compensator\u2019s gain matrices, where Z = \u00020n(N\u22121)\u00d7n In(N\u22121) \u0003 , V = Z(P \u2297In). (8) 3.2. Stability analysis The approach to sparse feedback control design makes use of the above discussed sparse optimization (Problem 1) and feedback realization (Problem 2), where the dynamic compensator ensures the internally stability of the closed-loop augmented system (5). The result is summarized in the following theorem and corollary. Theorem 3.1 (Sparse Feedback Control Realization). Suppose that Problem 1 has the minimizer (X, U). Then the equation (7) has the unique solution (K, H, G, F) and the resulting compensator (3) with z(0) = 0 generates the input sequence u(t) = U(et+1 \u2297x0), t = 0, 1, . . . , N \u22121, for x0 \u2208X0, which drives the plant state x(t) from x(0) = x0 to x(N) = 0 under the output constraint (4). Furthermore, the closed-loop system (5) is internally stable. Proof. We \ufb01rst describe the matrices X and U as X = \u0002 X0 X1 \u00b7 \u00b7 \u00b7 XN\u22121 \u0003 , U = \u0002 U0 U1 \u00b7 \u00b7 \u00b7 UN\u22121 \u0003 , where Xt \u2208Rn\u00d7n, Ut \u2208Rm\u00d7n, and t = 0, 1, . . . , N \u22121. Notice that checking the second constraint of Problem 1 gives rise to the result X0 = In. With this fact, it admits that the matrix \u03a8 is non-singular. In other words, we claim that det \u03a8 \u0338= 0, \u03a8 = \u0014 X Z \u0015 = \u0014 In X1 \u00b7 \u00b7 \u00b7 XN\u22121 0n(N\u22121)\u00d7n In(N\u22121) \u0015 . (9) Since the matrix \u03a8 is invertible, it follows that the equation (7) has the unique solution (K, H, G, F) associated with the dynamic compensator (3). Meanwhile, the \ufb01rst constraint of Problem 1 with (7) and (8) asserts that (A + BK)\u03a8 = \u0014 A 0 0 0 \u0015 \u0014 X Z \u0015 + \u0014 B 0 0 I \u0015 \u0014 U V \u0015 = \u0014 AX + BU V \u0015 = \u0014 X Z \u0015 (P \u2297In) = \u03a8(P \u2297In). This implies that the closed-loop matrix A + BK = \u03a8(P \u2297In)\u03a8\u22121, so that it is similar to a nilpotent matrix (P \u2297In), hence, the closed-loop system (5) is internally stable 6 \fand the zero terminal state x(N) = 0 is achieved for any initial state \u03c8(0) of the system. Moreover, we see that the sequences x(t) = X(et+1 \u2297x0) = Xtx0, u(t) = U(et+1 \u2297 x0) = Utx0, and z(t) = Z(et+1 \u2297x0) = Ztx0 are indeed generated by the system (1) with x(0) = x0 and the controller (3) with z(0) = 0. As a matter of fact, apparently x(0) = X(e1 \u2297x0) = X0x0 = x0 and z(0) = Z(e1 \u2297x0) = 0. Furthermore, based on the fact that (P \u2297In)(et+1 \u2297x0) = et+2 \u2297x0, we have Ax(t) + Bu(t) = (AX + BU)(et+1 \u2297x0) = X(P \u2297In)(et+1 \u2297x0) = X(et+2 \u2297x0) = x(t + 1), (10) Fz(t) + Gx(t) = (FZ + GX)(et+1 \u2297x0) = Z(P \u2297In)(et+1 \u2297x0) = Z(et+2 \u2297x0) = z(t + 1), (11) Hz(t) + Kx(t) = (HZ + KX)(et+1 \u2297x0) = U(et+1 \u2297x0) = u(t). 
(12) We next consider the third constraint of Problem 1, whose validity can be inspected by assessing the inequality \u2212abs(CX + DU) \u2264CX + DU \u2264abs(CX + DU) holds true. Therefore, for a given positive vector s \u2208Rp and x0 \u2208X0, we have (CX + DU)(et+1 \u2297x0) \u2264abs(CX + DU)(et+1 \u2297x0) \u2264s(1n \u22971N)\u22a4(et+1 \u2297x0) = s. (13) Notice that Cx(t)+Du(t) = (CX +DU)(et+1 \u2297x0) = y(t), then the output constraint of the form {\u2212s \u2264y(t) \u2264s} in (4) is veri\ufb01ed. According to the above arguments, we claim that the sequences (10), (11), (12), and (13) indeed satisfy the input-state trajectories of LTI dynamics (1) under output constraint (4), which proves the theorem for realizing sparse feedback control. Remark 2 (O\ufb04ine vs. Online). It is clear that realizing sparse feedback control in Theorem 3.1 is based on o\ufb04ine computation, and hence the computational complexity is low. Compared with sparse predictive control [6,8,17], a real-time feedback iterations is employed to ensure closed-loop dynamics and online optimization is repeatedly performed as a feedback controller to calculate sparse solutions. Beyond all doubt, predictive feedback control naturally leads to computational burden when the sizes of controlled system is high (e.g., the curse of dimensionality), even for using a fast ADMM (alternating direction method of multipliers) algorithm [6,7]. 7 \fBased on the proposed Theorem 3.1, we can directly give a corollary that establishes the connection between the open-loop sparse optimal control solution and the closedloop sparse optimal control solution. Corollary 3.2 (Equivalence). Suppose that Problem 1 has the minimizer (X, U). Let u\u2217 K be optimal sparse feedback control (i.e., closed-loop \u21131 optimal control) solution using a dynamic linear compensator K (3), and u\u2217be open-loop \u21131 optimal control solution u\u2217of program (2) with output constraint (4), respectively. Then, for x0 \u2208X0, it holds that u\u2217= u\u2217 K = Hz + Kx\u2217. (14) Corollary 3.3 (Deadbeat Control). Suppose that Problems 1 and 2 have solved, then the implemented sparse feedback controller u\u2217 K = Hz+Kx\u2217(i.e., closed-loop \u21131 optimal control) of discrete LTI plant (1) is essentially an N-step deadbeat controller. Remark 3. Regarding the deadbeat control, since the designed compensator K brings the state x(t) to the origin in N steps (satisfying x(N) = 0), which places all of the eigenvalues of the augmented closed-loop system matrix A + BK at the origin in the complex plane. 4. Extension: Tracking problem In this section, we extend the above result of sparse feedback control to tracking control problem [23, Chapter 8]. We start by giving a step-type reference signal r(t) \u2208Rp as r(t) = \u001ar\u2212, t < 0 r+, t \u22650, (15) where r\u2212\u2208Rp and r+ \u2208Rp are constant vectors. The purpose of tracking problem is to design a dynamic tracking compensator such that the performance output tracks a reference input with zero steady-state error by using additional feedforward gains. For this reason, we de\ufb01ne the tracking error by e(t) = y(t) \u2212r(t), where y(t) is a performance output signal stated in LTI plant (1). Meanwhile, we make the following assumption before giving an e\ufb00ective tracking controller. Assumption 1. 
For a tracking problem, assume that the performance output signal y(t) \u2208Rp and the control signal u(t) \u2208Rm in LTI dynamics (1) be of the same size (i.e., p = m) and take the matrix D = 0m. It is known that the performance output y(t) \u2208Rm can track any reference signal r(t) \u2208Rm of (15) in the steady-state if rank \u0014 I \u2212A B C 0 \u0015 = n + m. (16) As already reported in Section 3, an analogous dynamic tracking compensator Kr can be applied to the discrete LTI plant (1) by adding a prescribed reference input r(t) \u2208Rm to the control actuator (3). To this end, a dynamic tracking compensator 8 \fFigure 1. Feedforward tracking control system: r(t) is reference signal; y(t) is the performance output which must track a speci\ufb01ed reference input r(t); \u201cComp\u201d represents a dynamic tracking compensator (17) applied to a discrete LTI plant (1). Kr for the plant can be designed as Kr : zr(t + 1) = Fzr(t) + Gx(t) + Lr(t), zr(0) = 0, u(t) = Hzr(t) + Kx(t) + Mr(t), (17) where L and M represent the feedforward gain matrices with suitable sizes, and r(t) \u2208 Rm is a speci\ufb01c reference input (15). Notice that here the initial value zr(0) of tracking compensator Kr is set to zero (i.e., zr(0) = 0). Figure 1 shows the closed-loop system composed of the plant (1) and the controller (17). For a preferable reference input tracking, we employ the di\ufb00erence or variation of the control inputs PN\u22121 t=0 \u2225u(t + 1) \u2212u(t)\u22251 as the performance index, which is referred to as minimum attention control [24\u201326]. We slightly relax the constraints in the previous sections by removing the state and input constraints (4). As a result, we formulate the following tracking (minimum attention) control problem that we aim to solve here: Find F, G, H, and K of (17) such that (i) the dynamic tracking compensator (17) stabilizes the plant (1) and (ii) for any x0 \u2208X0 with zr(0) = 0 and r(t) \u22610, the controller (17) generates an input sequence {u(t)}N\u22121 t=0 , which minimizes PN\u22121 t=0 \u2225u(t + 1) \u2212u(t)\u22251 subject to x(N) = 0 and z(N) = 0 for a positive N. Then, determine L and M of (17) such that the steady state gain of the closed-loop system from r(t) to y(t) is the identity and that from r(t) to zr(t) is zero. Remark 4. When we have a solution to the above problem, we see that y(t) tracks r(t) without steady state error owing to the selected steady state gain. We also observe that u(t) achieves minimum attention for any x0 \u2208Rn due to linearity of the system. Moreover, since z(N) = 0 is achieved in the steady state for any r+, the condition z(0) = 0 is automatically satis\ufb01ed whenever we have another r+ as a new reference signal. The closed-loop behavior can be described in an augmented description \u03c8r(t + 1) = (A + BK)\u03c8r(t) + Mrr(t), \u03c8r(0) = \u03c8r0, y(t) = C\u03c8r(t), (18) where \u03c8r(t) = \u0014 x(t) zr(t) \u0015 , Mr = \u0014 BM L \u0015 , 9 \fand the other matrices A, B, K, and C have been de\ufb01ned for (5). Based on the problem setup above, we \ufb01rst consider the following minimum attention control problem. Problem 3 (Minimum Attention Control). Find the matrices X \u2208Rn\u00d7nN and U \u2208 Rm\u00d7nN such that the obtained U solves a minimum attention control problem min X,U \u2225U(P \u2297In) \u2212U\u22251 s.t. AX + BU = X(P \u2297In) In = X(e1 \u2297In). Looking for the solution (X, U) of Problem 3 is always accessible because, the above problem is a convex program. 
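As noted above, Problem 3 is convex; a compact sketch of setting it up with a generic convex-optimization layer is given below. Here cvxpy is used only for illustration, in place of the CVX/YALMIP route mentioned earlier, and ‖·‖₁ is the entrywise sum of absolute values as defined in the Notation.

```python
import numpy as np
import cvxpy as cp

def minimum_attention_open_loop(A, B, N):
    """Sketch of Problem 3: minimize ||U (P kron I_n) - U||_1 subject to
    A X + B U = X (P kron I_n) and X (e_1 kron I_n) = I_n."""
    n, m = B.shape
    P = np.eye(N, k=-1)                            # N-Jordan block of Eq. (6)
    PI = np.kron(P, np.eye(n))                     # P kron I_n
    E1 = np.kron(np.eye(N)[:, [0]], np.eye(n))     # e_1 kron I_n  (nN x n)
    X = cp.Variable((n, n * N))
    U = cp.Variable((m, n * N))
    cons = [A @ X + B @ U == X @ PI, X @ E1 == np.eye(n)]
    # column block t of U (P kron I_n) is U_{t+1}, so the objective sums the
    # input variations ||u(t+1) - u(t)||_1 over the basis initial states
    prob = cp.Problem(cp.Minimize(cp.sum(cp.abs(U @ PI - U))), cons)
    prob.solve()
    return X.value, U.value
```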
Then, with (K, H, G, F) of (7) and (8), the resultant closed-loop system (18) always assures internal stability, that is, the condition A + BK = \u03a8(P \u2297In)\u03a8\u22121 holds, as discussed in Section 3. We next deal with tracking problem. Due to the fact that the closed-loop augmented system (18) is internally stable with matrices (K, H, G, F), it admits a unique steadystate \u03c8\u221e= [x\u22a4 \u221ez\u22a4 \u221e]\u22a4for a desired reference input r+ = limt\u2192\u221er(t). More precisely, we have the following matrix equation \u0014 \u03c8\u221e y\u221e \u0015 = \u0014 A + BK Mr C 0 \u0015 \u0014 \u03c8\u221e r+ \u0015 . (19) If y\u221e= Cx\u221e= r+ for any reference r+, the output y(t) tracks reference r(t) with no steady-state tracking error. If z\u221e= 0 for any r+, we can have z(0) = 0 whenever the reference signal changes again after a steady-state is achieved. In what follows, we are going to achieve tracking error elimination with z\u221e= 0 by assigning feedforward tracking gains M and L, leading to the following lemma. Lemma 4.1 (Steady-State Tracking). Supposed that Assumption 1 and rank condition (16) of steady-state tracking hold, and Problem 3 has the miniminer (X, U). Then, the priori gain matrices (K, H, G, F) of compensator Kr (17) can be uniquely determined by feedback realization (7) and (8). In addition, if the following conditions hold (i) det (I \u2212(A + BK)) \u0338= 0, (ii) det (I \u2212F) \u0338= 0, then the feedforward tracking gain matrices (M, L) in compensator Kr (17) can be derived by M = \u0000C \u0000I \u2212(A + BK) \u0001\u22121B \u0001\u22121, (20) L = \u2212G \u0000I \u2212(A + BK) \u0001\u22121BM. (21) Based on the obtained matrices (K, H, G, F, M, L), for all initial state x0 \u2208Rn and any reference r+ \u2208Rm, there exist the unique steady-state values (x\u221e, z\u221e) such that y\u221e= r+ and z\u221e= 0 are achieved in the steady-state. Proof. Since the gain matrices (K, H, G, F) can previously be calculated by (7) and (8), the augmented closed-loop system (18) is internally stable, as reported in Theo10 \frem 3.1. We here reformulate the augmented system (19) as \u0014 x\u221e z\u221e \u0015 = \u0014 A + BK BH G F \u0015 \u0014 x\u221e z\u221e \u0015 + \u0014 BM L \u0015 r+. (22) Suppose that the zero steady-state z\u221e= 0, we then have x\u221e= (A + BK)x\u221e+ BMr+. This implies that, if the matrix I \u2212(A + BK) is invertible, x\u221e= (I \u2212(A + BK))\u22121BMr+. Therefore, we see the result y\u221e= Cx\u221e= r+ if M is selected as (20). On the other hand, the steady-state of the tracking compensator z\u221ein (22) satis\ufb01es z\u221e= Gx\u221e+ Fz\u221e+ Lr+. Consequently, we derive z\u221e= (I \u2212F)\u22121(Gx\u221e+ Lr+) = (I \u2212F)\u22121\u0000G(I \u2212(A + BK))\u22121BM + L\u0001 r+ when the matrix (I \u2212F) is invertible. Thus, we see that z\u221e= 0 if the feedforward gain matrix L meets (21). Therefore, the feedforward gains (20) and (21) make error cancellation for achieving tracking. Remark 5 (Illustrative conditions). Checking determinant conditions (i) and (ii) in Lemma 4.1 can be equivalently formulated with optimal solution X to Problem 3. In fact, we have I \u2212(A + BK) = In \u2212X1, I \u2212F = In(N\u22121) \u2212 \u0014 \u2212X1 \u00b7 \u00b7 \u00b7 \u2212XN\u22122 \u2212XN\u22121 In(N\u22122) 0n(N\u22122)\u00d7n \u0015 with A + BK = \u03a8(P \u2297In)\u03a8\u22121 and (9). Proposition 4.2. 
Under Assumption 1 and Lemma 4.1, the output y(t) tracks reference r(t) with no steady-state error via dynamic tracking compensator (17), where the tracking control input realizes minimum attention. 5. Numerical simulations In this section, we perform several numerical examples to illustrate the e\ufb00ectiveness of the designed sparse feedback control, which gives a closed-loop optimal solution by using a dynamic linear compensator. 11 \f5.1. Single input At \ufb01rst, we consider a single-input control system modeled as a linearized cart-pole system, and the parameters benchmark are similar to [14, Sec. VI], in which the mass of the cart is 0.29 [kg], mass of the pole is 0.1 [kg], length of the pole is 1 [m], gravity acceleration is 9.81 [m/s2], and friction is neglected. We then execute time-discretization of this continuous system using zero-order-hold (ZoH) with sampling \u2206t = 0.3 [s], then the system matrices of form (1) are given by A = \uf8ee \uf8ef \uf8ef \uf8f0 1 0.3 0.1377 0.0143 0 1 0.8256 0.1377 0 0 0.4628 0.2441 0 0 \u22123.2198 0.4628 \uf8f9 \uf8fa \uf8fa \uf8fb, B = \uf8ee \uf8ef \uf8ef \uf8f0 0.1514 0.9850 \u22120.1404 \u22120.8416 \uf8f9 \uf8fa \uf8fa \uf8fb. Meanwhile, the output matrices with respect to state-input constraints (4) are set to C = \u0014 0 0 1 0 0 0 0 0 \u0015 , D = \u0014 0 1 \u0015 , then we have y\u22a4(t) = \u0002 x3(t) u(t) \u0003 , which means that the enforced constrained state is only third component of the state x3(t) and the imposed constrained input is u(t). By selecting the suitable variable s, it gives state and input constraints as follows |x3(t)| \u22641, |u(t)| \u22641. Next, we simulate the discrete-time controlled system, the target is to drive the cart state from a non-zero initial state x\u22a4 0 = [0.9453, 0.7465, 0.7506, 0.4026] to the zero terminal state x(N) = 0 in a \ufb01nite N = 40 steps (i.e, taking T\ufb01nal = (5 \u2217N \u2217\u2206t)/4 = 15 [s] in a state-space representation). In order to realize the sparse feedback controller, we need to solve Problems 1 and 2 to seek the closed-loop \u21131 optimal solution. By computing, the total CPU time in PC is 0.38 [s] in MATLAB R2020b using CVX [21], and the found optimal value \u2225U \u2217\u22251 = 6.4204. Figure 2 illustrates the related closed-loop optimal input and state trajectories, respectively. It re\ufb02ects the input sparsity on sparse feedback control, in which the control sequence is with less active components, and the optimal control meets constraint |u(t)| \u22641. In addition, it plots the optimal state trajectories, where the pole angle x3 \ufb02uctuates between the bounds \u22121 and 1, satisfying the prescribed state constraint |x3(t)| \u22641. Meanwhile, the trajectories of four di\ufb00erent states start from an initial state x0 and eventually converge to zero state as time tends to a \ufb01xed steps under the dynamic compensator (3), this implies that the closed-loop stabilization is achieved. 5.2. Multiple inputs numerical benchmark As a second numerical simulation, we show the result that the synthesized dynamic controller (3) is useful for sparse feedback control of multi-input control system. We here consider a discretized version of third-order system with two control inputs. Using 12 \f-0.8 -0.5 0 0.5 0.8 Sparsity 0 5 10 15 Tfinal Sparse Feedback Control 0 5 10 15 Tfinal -2.2 -1 0 1 2.2 State Trajectories Figure 2. Single input case: optimal sparse feedback control u\u2217(t) (top) and the regarding optimal state trajectories x\u2217(t) (bottom). 
The black dot line represents the constraint on the pole angle |x3| \u22641. 1 2 3 4 5 6 7 8 9 10 11 12 Column 1 2 3 4 5 6 7 8 9 10 11 Row -8 -6 -4 -2 0 2 4 Figure 3. The pattern of dynamic linear Compensator K. a ZoH sampling time of \u2206t = 0.1 [s], the discrete system matrices are given by A = \uf8ee \uf8f0 1.1133 0.0177 \u22120.1478 0.0177 1.4517 0.2514 0.0418 0.2758 0.9208 \uf8f9 \uf8fb, B = \uf8ee \uf8f0 0.0031 0.5218 0.0121 0.1486 0.0957 0.1202 \uf8f9 \uf8fb, and output matrices is chosen as C = 0 and D = I, which means that only the restriction on the control inputs |ui(t)| \u226410 by taking si = 10, \u2200i = 1, 2. We now randomly generate initial data x0 \u2208[\u22121, 1]3 and each input channel of time length is 13 \f0 0.1 0.2 0.3 0.4 0.5 Tfinal -3 -2 -1 0 1 2 3 Multiple Inputs: Sparse Feedback Control 0 0.1 0.2 0.3 0.4 0.5 Tfinal -1 -0.5 0 0.5 State Trajectories Figure 4. Multiple control inputs case: optimal sparse feedback control inputs u\u2217 K(t) (top) and the corresponding optimal state trajectories x\u2217(t) (bottom). N = 4, then the related \ufb01nal time in state-space is as T\ufb01nal = 0.5 [s]. After the state-input matrices X and U were calculated (see (A.1) in Appendix), we obtained the optimal value \u2225U \u2217\u22251 = 24.1544. Due to the fact that the augmented matrix \u03a8 is invertible (9), then the controller K with real matrices (K, H, G, F) is computationally e\ufb03cient. Figure 3 reveals the pattern of the compensator K, in which the color-bar reports the level of real values of the correlation elements in matrix K. Theorem 3.1 implies that we require the knowledge of the matrices H and K (see (A.2) in Appendix) to synthesize the sparse feedback control, as follows u\u2217 K = \u0014 1.8798 0.0000 \u22120.0000 2.8111 0.0000 0.0000 \u22122.7970 0.0000 \u22120.0000 1.7122 \u22120.0000 0.0000 \u0015 . As shown in Figure 4, the optimal feedback control signals contain two components, where both control inputs are along the input constraints |uK,i(t)| \u226410, \u2200i = 1, 2. Clearly, the inferred feedback control sequences are sparse as desired. From this \ufb01gure, it appears that the optimal state trajectories converge to zeros with minimum control e\ufb00ort. 5.3. Tracking problem Finally, we show a numerical example to illustrate the e\ufb00ectiveness of our extended dynamic tracking compensator (17) for tracking problem, in Section 4. By taking a 14 \f0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 Tfinal -4 -2 0 2 4 6 8 10 Minimum attention control attention tracking control 0 0.2 0.4 0.6 0.8 1 Tfinal -1.5 -1 -0.5 0 0.5 1 Tracking Figure 5. Minimum attention tracking control (i.e., the di\ufb00erence of control input) (top) and the tracking trajectories of performance y with respect to a step reference r (bottom). continuous second-order harmonic oscillator as \u02d9 x(t) = \u0014 0 1 1 0 \u0015 x(t) + \u0014 \u22122 1 \u0015 u(t), x(0) = \u0014 0.5 \u22120.5 \u0015 , y(t) = \u0002 0 1\u0003 x(t), and discretized it with a ZoH sampling with time period \u2206t = 0.2 [s]. The time horizon is N = 5. For convenience, we model a step reference signal (15) as r+ = 1 for all t \u22650. In this case, the default value of steady-state of reference is r+ = r0 = 1. Focusing on the solution (X, U) determined by Problem 3, we then make use of the feedback realization technique to derive the feedback gain matrices (K, H, G, F), where the relevant numerical results are shown in (A.3) in Appendix. 
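As a complement to the numerical values reported for this example, the following sketch shows how the feedforward gains of Lemma 4.1 might be computed once the feedback realization has produced (K, H, G, F). The inputs are assumed to be the (augmented) matrices that appear in (20) and (21), and the two determinant conditions of the lemma are checked explicitly; this is an illustrative sketch, not the authors' code.

```python
import numpy as np

def feedforward_gains(A, B, K, C, G, F):
    """Sketch of (20)-(21) in Lemma 4.1: feedforward gains (M, L) of the
    tracking compensator Kr, assuming (K, H, G, F) were already obtained
    from the feedback realization (7)-(8)."""
    S = np.eye(A.shape[0]) - (A + B @ K)
    # Conditions (i)-(ii) of Lemma 4.1
    assert abs(np.linalg.det(S)) > 1e-12, "I - (A + BK) must be invertible"
    assert abs(np.linalg.det(np.eye(F.shape[0]) - F)) > 1e-12, "I - F must be invertible"
    M = np.linalg.inv(C @ np.linalg.solve(S, B))   # (20): M = (C (I-(A+BK))^{-1} B)^{-1}
    L = -G @ np.linalg.solve(S, B @ M)             # (21): L = -G (I-(A+BK))^{-1} B M
    return M, L
```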
The problem at hand is to achieve tracking, we found that the determinants det(I \u2212 (A + BK)) = det(I \u2212X1) = 0.2475 \u0338= 0 and det(I \u2212F) = 2.4902 \u0338= 0, \ufb01tting two conditions in Lemma 4.1, and the related feedforward gains can be selected as M = \u22123.0837, L = \u0002 0.5 \u22121 0 0 0 0\u0003\u22a4. Based on the above analysis, the implemented dynamic tracking compensator u\u2217 Kr = \u0002 \u22123.6367 \u22123.6367 4.4087 4.4087 0.5000 0.5000\u0003 15 \fcan be easily applied to the discrete LTI plant so as to achieve tracking. Figure 5 depicts the evolution of minimum attention tracking control (top) and the corresponding tracking trajectories. It can be see that the performance output signal y(t) gradually tracks a step reference signal r(t) = 1 under the speci\ufb01ed time steps. 6." + }, + { + "url": "http://arxiv.org/abs/2210.15152v1", + "title": "Robust output regulation of linear system subject to modeled and unmodeled uncertainty", + "abstract": "In this paper, a novel robust output regulation control framework is proposed\nfor the system subject to noise, modeled disturbance and unmodeled disturbance\nto seek tracking performance and robustness simultaneously. The output\nregulation scheme is utilized in the framework to track the reference in the\npresence of modeled disturbance, and the effect of unmodeled disturbance is\nreduced by an $\\mathcal{H}_\\infty$ compensator. The Kalman filter can be also\nintroduced in the stabilization loop to deal with the white noise. Furthermore,\nthe tracking error in the presence/absence of noise and disturbance is\nestimated. The effectiveness and performance of our proposed control framework\nis verified in the numerical example by applying in the Furuta Inverted\nPendulum system.", + "authors": "Zhicheng Zhang, Zhiqiang Zuo, Xiang Chen, Ying Tan, Yijing Wang", + "published": "2022-10-27", + "updated": "2022-10-27", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.SY" + ], + "main_content": "Introduction The objective of tracking a reference trajectory in the presence of disturbance is an essential problem running through the development of control theory, and the achievements contribute to the applications including robotic manipulators [1], quadrotors [2], autonomous vehicles [3], [4], etc. As disturbances are unavoidable, the exploration of disturbance rejection has attracted comprehensive focus, and lots of outstanding algorithms have been proposed. Despite the inherent robustness to tolerate uncertainty in a small range, the investigation of controller design approaches that are dedicated to disturbance rejection is indispensable. The disturbance rejection methods can be roughly divided into two categories, namely, the nonlinear and linear approaches. The former, such as sliding mode control [5], [6], active disturbance rejection control \u22c6This paper was not presented at any IFAC meeting. Corresponding author Zhicheng Zhang. Email addresses: zczhang@tju.edu.cn (Zhicheng Zhang), zqzuo@tju.edu.cn (Zhiqiang Zuo), xchen@uwindsor.ca (Xiang Chen), yingt@unimelb.edu.au (Ying Tan), yjwang@tju.edu.cn (Yijing Wang). (ADRC) [7], etc., play a large enough control torque in suppressing the undesired e\ufb00ect caused by disturbance, and as a result, the zero steady-state error can be acted. Compared with targeting at zero static error, the H\u221e control method, which keeps the objective onto minimize the impact of disturbance on the system, has more practical signi\ufb01cance. 
Adopting the idea of robust control in the tracking problem, the robust output regulation scheme provides a uni\ufb01ed framework for tracking control and disturbance rejection of the system, in which, an external system model is introduced to characterize the disturbance and expected trajectory of the system. The research on robust output regulation scheme has spanned from single systems to multi-agent systems (cf., [8], [9], [10], [11], [12], [13], [14]). However, this method fails to deal with the model-free disturbance and does not provide more robustness than the inherent robustness of the matrix equation solution, so it is di\ufb03cult to give a quantitative description of the disturbance rejection performance, which spurs us to investigate the output regulation together with an H\u221econtrol loop reducing the e\ufb00ect of disturbance. This paper proposes a novel control framework consistPreprint submitted to Automatica 28 October 2022 \fing of multiple loops to solve an output tracking problem in the presence of modeled/unmodeled disturbance and noise. Both tracking performance and disturbance rejection performance are considered, and the tracking error under the proposed control structure is estimated. A distinct feature between our framework and robust output regulation is that the system is subject to the disturbance formulated by an exo-system with known dynamics as described in [15] and the unknown disturbance signal simultaneously. Moreover, the upper bound of the in\ufb02uence can be formulated. This problem formulation is applicable to a more general class of dynamic systems. This paper is organized as follows. The preliminaries and formulation of the main problem are presented in Section 2. In Section 3, we will design the robust output regulation controller for the system with unmodeled disturbance. Then the extensions to the noisy system and the system subject to both modeled and unmodeled disturbances as well will be presented in Section 4. An application example using Furuta Inverted Pendulum is given in Section 5, where the comparison between classical output regulation and our proposed design is depicted in order to validate the e\ufb00ectiveness of the proposed design. The paper is concluded in Section 6. Notations. Throughout this paper, all matrices are assumed to have appropriate dimensions, so when it does not cause confusion, some dimensions of the matrices will not be explicitly speci\ufb01ed. The transpose of matrix A is expressed as AT. \u00af \u03c3(A) stands for the maximum singular value of matrix A. Let \u2225\u00b7 \u2225be the 2-norm of a vector. De\ufb01ne the power norm of stochastic signal w as \u2225w\u2225P = q limT \u2192\u221e1 T R T 0 E (w(t)Tw(t)) dt, where E(\u00b7) is expectation. For non-stochastic signal x, it reads \u2225x\u2225P = q limT \u2192\u221e1 T R T 0 x(t)Tx(t) dt. For transfer functions \u039e1 = \" \u039e11 \u039e12 \u039e21 \u039e22 # , \u039e11 \u2208Cp1\u00d7q1, \u039e22 \u2208Cp2\u00d7q2, and \u039e2 \u2208Cq2\u00d7p2, the linear fractional transformation is F\u2113(\u039e1, \u039e2) := \u039e11 + \u039e12\u039e2(I \u2212\u039e22\u039e2)\u22121\u039e21, provided that (I \u2212\u039e22\u039e2)\u22121 exists. 
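The lower linear fractional transformation used throughout the paper can be written as a small helper; the block partition of Xi1 into (Xi11, Xi12, Xi21, Xi22) is assumed to be given, and invertibility of (I - Xi22 Xi2) is taken for granted exactly as in the definition above.

```python
import numpy as np

def lower_lft(Xi11, Xi12, Xi21, Xi22, Xi2):
    """Sketch of the lower LFT from the Notations:
    F_l(Xi1, Xi2) = Xi11 + Xi12 Xi2 (I - Xi22 Xi2)^{-1} Xi21."""
    I = np.eye(Xi22.shape[0])
    return Xi11 + Xi12 @ Xi2 @ np.linalg.solve(I - Xi22 @ Xi2, Xi21)
```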
2 Preliminaries and Formulation of Main Problem In this paper, we are motivated to investigate the robust output regulation problem for linear systems modeled by \u02d9 x = Ax + B0w0 + B1w1 + B2w2 + Bu, y = Cx + D0w0 + D1w1 + D2w2, (1) where x \u2208Rn, u \u2208Rm, and y \u2208Rp stand for the state, input and output of the system, respectively. w0 \u2208Rm0 denotes the white noise signal satisfying E (w0(t)) = 0 and E \u0000w0(t)wT 0 (\u03c4) \u0001 = \u03b4(t \u2212\u03c4)I. w1 \u2208Rm1 is the unknown disturbance representing the uncertainty and/or unmodeled dynamics, while w2 \u2208Rm2 is the measurable disturbance with the dynamics \u02d9 xw = Awxw, w2 = Cwxw. (2) It is assumed that (A1) (A, B) is stabilizable and (C, A) is detectable; (A2) \" A \u2212j\u03c9I B0 C D0 # has full column rank for all \u03c9 \u2208R; (A3) D0DT 0 > 0; (A4) \" A \u2212j\u03c9I B1 C D1 # has full row rank for all \u03c9 \u2208R; (A5) rank(D1) = p. (A6) \" A \u2212j\u03c9I B C 0 # has full column rank for all \u03c9 \u2208R; Assumptions (A1)\u2013(A6) are basic requirements in LQG and H\u221econtrol design [16]. The main objective of this paper is to \ufb01nd a control u such that the output y tracks a reference signal r \u2208Rp, which is derived from a given model \u02d9 xr = Arxr, xr(0) = xr0, r = Crxr, (3) that is, the tracking error e = y \u2212Crxr satis\ufb01es \u2225e\u2225P is bounded in the presence of noise w0, bounded uncertain disturbance w1 and modeled disturbance w2, and, in particular, limt\u2192\u221e\u2225e(t)\u2225P = 0 for w0 = 0, w1 = 0 and w2 = 0. As commonly appeared in the theory of output regulation, the following assumptions hold in subsequent discussions. (A7) (Cr, Ar) and (Cw, Aw) are both detectable; (A8) Ar and Aw have no eigenvalues with negative real parts; (A9) \" A \u2212\u03bbI B C 0 # has full row rank for all eigenvalues \u03bb of Ar and Aw; In order to present our control design idea clearly, the tracking control of the noiseand disturbance-free system is \ufb01rst discussed, and we have the following theorem. 2 \fTheorem 1 [15] For system \u02d9 x = Ax + Bu, y = Cx, satisfying Assumption (A1), and reference dynamics (3) satisfying Assumptions (A7)\u2013(A9), the output regulation problem is solved by controller \u02d9 \u02c6 x = A\u02c6 x + Bu \u2212Lt (y \u2212C\u02c6 x) , (4) \u02d9 \u02c6 xr = Ar\u02c6 xr \u2212Lr(r \u2212Cr\u02c6 xr), (5) u = ut = Kt\u02c6 x + Kr\u02c6 xr, (6) where Kt, Lr and Lt are gains such that (A + BKt), (A+LtC) and (Ar +LrCr) are Hurwitz, and Kr satis\ufb01es XtAr = AXt + BKr + BKtXt, (7) 0 = CXt \u2212Cr, (8) for an appropriate matrix Xt. For systems with uncertainties or disturbances, the inherent robustness of the system is not su\ufb03cient to maintain tracking performance. Therefore, the robust control method is considered. While traditional robust methods, such as H\u221econtrol[17] or mixed H2/H\u221econtrol [18], could be applied to address this robust output regulation problem, it is also well-known that these methods could be quite conservative which may not provide good tracking performance and robustness simultaneously. Noteworthy, a new development is reported in [19] which presents a complementary control structure derived based on Youla parameterization of all stabilizing controllers as shown in Fig. 1, where this new structure e\ufb00ectively combines an LQG control and an H\u221econtrol to achieve noncompromised optimal performance and robustness, as summarized in Theorem 2. \u2014 Fig. 1. 
Complementary Control Structure Theorem 2 [19] For system \u02d9 x = Ax + B0w0 + B1w1 + Bu, z = C1x + D12u, y = Cx + D0w0 + D1w1, satisfying Assumptions (A1)\u2013(A5), the combined LQG/H\u221econtroller can be designed as u = ul + u\u221e with the LQG controller \u02d9 \u02c6 x = (A + LC) \u02c6 x + Bu \u2212Ly, ul = F \u02c6 x, and H\u221econtroller \u02dc Q, which is of the form \u02d9 x\u221e= A\u221ex\u221e+ B\u221e\u01eb\u221e, u\u221e= F\u221ex\u221e, using residual signal \u01eb\u221e= C\u02c6 x \u2212y, where F = \u2212R\u22121 1 \u0000DT 12C1 + BTP10 \u0001 , L = \u2212 \u0000B0DT 0 + P20CT\u0001 R\u22121 0 , A\u221e= \u00af A + \u03b3\u22122 \u00af B1 \u00af BT 1 \u00af P1 + \u00af B2F\u221e \u2212B\u221e \u0000 \u00af C2 \u2212\u03b3\u22122D1 \u00af BT 1 \u00af P1 \u0001 , B\u221e= \u2212 \u0000I \u2212\u03b3\u22122 \u00af P2 \u00af P1 \u0001\u22121 L\u221e, F\u221e= \u2212 \u0002 F + R\u22121 1 \u0000BTP1 + DT 12C1 \u0001 , \u2212F \u0003 , L\u221e= \" \u0000P2CT + B1DT 1 \u0001 R\u22121 2 \u0000P2CT + B1DT 1 \u0001 R\u22121 2 + L # , and P10 \u22650, P20 \u22650, P1 \u22650, P2 \u22650, satisfying \u03c1(P1P2) < \u03b32, solve P10A1 + AT 1 P10 \u2212P10BR\u22121 1 BTP10 + Q1 = 0, P20AT 0 + A0P20 \u2212P20CTR\u22121 0 CP20 + Q0 = 0, P1A1 + AT 1 P1 + P1 \u0012B1BT 1 \u03b32 \u2212BR\u22121 1 BT \u0013 P1 + Q1 = 0, P2AT 2 + A2P2 + P2 \u0012CT 1 C1 \u03b32 \u2212CTR\u22121 2 C \u0013 P2 + Q2 = 0, \u00af P1 = \" P1 0 0 0 # , \u00af P2 = \" P2 P2 P2 P2 # , with \u00af A = \" A + BF \u2212BF 0 A + LC # , \u00af B1 = \" B1 B1 + LD1 # , \u00af B2 = \" B 0 # , \u00af C2 = [0, \u2212C], 3 \fA0 = A \u2212B0DT 0 R\u22121 0 C, A1 = A \u2212BR\u22121 1 DT 12C1, A2 = A \u2212B1DT 1 R\u22121 2 C, Q0 = B0 \u0000I \u2212DT 0 R\u22121 0 D0 \u0001 BT 0 , Q1 = CT 1 \u0000I \u2212D12R\u22121 1 DT 12 \u0001 C1, Q2 = B1 \u0000I \u2212DT 1 R\u22121 2 D1 \u0001 BT 1 , R1 = DT 12D12, R2 = D1DT 1 . Furthermore, \u2225z\u2225P/\u2225w1\u2225P < \u03b3 with \u03b3 > 0. In the following, a novel control strategy is proposed for the system in the presence of w1 is discussed \ufb01rst, where the main advantages of our proposed framework are manifested. The situation with noise and modeled disturbance will then be analyzed respectively. At last, we extend the control framework to the system that is not fully detectable. 3 Design of Robust Output Regulation Control Motivated by Theorems 1 and 2, we present a novel robust output regulation control strategy for disturbed system, as shown in Fig. 2, whose dynamics is characterized as \u02d9 x = Ax + B1w1 + Bu, y = Cx + D1w1. \u2014 \u2014 Fig. 2. Robust Output Regulation Control Structure In this control strategy, both output regulation performance and robustness are considered and handled by the output regulation control (Kr, Lr) and \u02dc Q respectively. The measured and estimated tracking errors are used to \ufb01nd the residual signal \u01ebf re\ufb02ecting the impact of disturbance w1. From Theorem 1, the output regulation scheme can be designed as (4)\u2013(6), which will give a control singal ut to perfectly track the reference r in the absence of unmodeled disturbance w1. In order to evaluate the tracking performance, a vector variable z = \" e + Dzu e \u2212Dzu # \u2208R2p is introduced into the system, namely, \u02d9 x = Ax + B1w1 + Bu, z = \" e + Dzu e \u2212Dzu # , y = Cx + D1w1, where Dz is a matrix with rankDz = m. To design the controller \u02dc Q, we \ufb01rst reformulate the system. 
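As an illustration of the LQG half of Theorem 2, the sketch below computes the gains F and L from the two standard Riccati equations for P10 and P20 using SciPy; the H-infinity half (P1, P2, F_infinity, L_infinity) involves the additional gamma-dependent equations and the coupling condition and is not reproduced here. Matrix names follow the theorem, and the stabilizability/detectability assumptions (A1)-(A3) are presumed so that stabilizing solutions exist.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqg_gains(A, B, B0, C, C1, D0, D12):
    """Sketch of the LQG gains F and L of Theorem 2 (H-infinity part omitted)."""
    R1 = D12.T @ D12
    R0 = D0 @ D0.T
    A1 = A - B @ np.linalg.solve(R1, D12.T @ C1)
    A0 = A - B0 @ D0.T @ np.linalg.solve(R0, C)
    Q1 = C1.T @ (np.eye(D12.shape[0]) - D12 @ np.linalg.solve(R1, D12.T)) @ C1
    Q0 = B0 @ (np.eye(D0.shape[1]) - D0.T @ np.linalg.solve(R0, D0)) @ B0.T
    # P10 A1 + A1' P10 - P10 B R1^{-1} B' P10 + Q1 = 0
    P10 = solve_continuous_are(A1, B, Q1, R1)
    # P20 A0' + A0 P20 - P20 C' R0^{-1} C P20 + Q0 = 0  (dual/filtering form)
    P20 = solve_continuous_are(A0.T, C.T, Q0, R0)
    F = -np.linalg.solve(R1, D12.T @ C1 + B.T @ P10)
    L = -(B0 @ D0.T + P20 @ C.T) @ np.linalg.inv(R0)
    return F, L
```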
Let xf = \" \u02c6 x \u2212Xtxr x \u2212\u02c6 x # with Xt being de\ufb01ned in (7) and (8), and w1 = xr \u2212\u02c6 xr, then the augmented system together with the output regulation scheme is of the form \u02d9 xf = \" A + BKt \u2212LtC 0 A + LtC # xf + \" \u2212BKr 0 # w1 + \" \u2212LtD1 B1 + LtD1 # w1 + \" B 0 # uf, z = \" C C C C # xf + \" D1 D1 # w1 + \" Dz \u2212Dz # uf, \u01ebf = h 0 \u2212C i xf + Crw1 \u2212D1w1, (9) where residual signal \u01ebf = C\u02c6 x \u2212Cr\u02c6 xr \u2212e describes the di\ufb00erence between the nominal model and the real system. Since limt\u2192\u221ew1 = 0, we can ignore w1 and keep focus on the unknown disturbance w1. By introducing linear transformation uf = V T 2 \u03a3\u22121 2 \u02dc uf, w1 = V T 1 \u02dc w, \u01ebf = U T 1 \u03a31\u02dc yf, z = U T 2 \u02dc z, where the matrices U1, U2, V1, V2, \u03a31 and \u03a32 satisfy \u2212D1 = U T 1 h 0 \u03a31 i V1, \" Dz \u2212Dz # = U T 2 \" 0 \u03a32 # V2, with rank(\u03a31) = p, rank(\u03a32) = m. Under Assumption 4 \f(A5), the system (9) can be rewritten as \u02d9 xf = Afxf + B1f \u02dc w + B2f \u02dc uf, \u02dc z = C1fxf + D11f \u02dc w + D12f \u02dc uf, \u02dc yf = C2fxf + D21f \u02dc w, (10) where Af = \" A + BKt \u2212LtC 0 A + LtC # , B1f = \" \u2212LtD1 B1 + LtD1 # V T 1 , B2f = \" B 0 # V T 2 \u03a3\u22121 2 , C1f = U2 \" C C C C # , C2f = \u03a3\u22121 1 U1 h 0 \u2212C i , D12f = \" 0 Im # , D21f = h 0 Ip i , and D11f = U2 \u0002 DT 1 , DT 1 \u0003T V T 1 can be decomposed into D11f = \" 0 D1112 0 D1122 # , D1122 \u2208Rm\u00d7p. Given positive scalar \u03b3, denote \u0393 = \u0000DT 1112D1112 \u2212\u03b32Ip \u0001\u22121 , D = DT 1112D1112 + DT 1122D1122, and symmetric matrices R = \uf8ee \uf8ef \uf8ef \uf8f0 \u2212\u03b3\u22122Im1\u2212p 0 0 0 \u0393 \u2212\u0393DT 1122 0 \u2212D1122\u0393 Im + D1122\u0393DT 1122 \uf8f9 \uf8fa \uf8fa \uf8fb, \u02dc R = \uf8ee \uf8ef \uf8ef \uf8f0 \u2212\u03b3\u22122I2p\u2212m 0 \u03b3\u22122D1112 0 \u2212\u03b3\u22122Im \u03b3\u22122D1122 \u03b3\u22122DT 1112 \u03b3\u22122DT 1122 Ip \u2212\u03b3\u22122D \uf8f9 \uf8fa \uf8fa \uf8fb. Then with the controller gain Kf = \u2212R \u0010 [D11f, D12f]T C1f + [B1f, B2f]T Xf \u0011 := \u0002 KT 11f, KT 12f, KT 2f \u0003T , Lf = \u2212 \u0010 B1f \u0002 DT 11f, DT 21f \u0003 + Yf \u0002 CT 1f, CT 2f \u0003 \u0011 \u02dc R := [L11f, L12f, L2f] , where K11f \u2208R(m1\u2212p)\u00d72n, K12f \u2208Rp\u00d72n, K2f \u2208 Rm\u00d72n, L11f \u2208R2n\u00d7(2p\u2212m), L12f \u2208R2n\u00d7m, L2f \u2208 R2n\u00d7p, Xf and Yf are, respectively, the solutions to the Ricatti equations HT 11Xf + XfH11 + XfH12Xf + H21 = 0, JT 11Yf + YfJ11 + YfJ12Yf + J21 = 0, with H11 = Af + [B1f, B2f] \uf8ee \uf8ef \uf8ef \uf8f0 0(m1\u2212p)\u00d7(2p\u2212m2) 0 \u2212\u0393DT 1112 0 D1122\u0393DT 1112 \u2212Im \uf8f9 \uf8fa \uf8fa \uf8fbC1f, H12 = \u2212[B1f, B2f] R [B1f, B2f]T , H21 = CT 1f \" I2p\u2212m \u2212D1112\u0393DT 1112 0 0 0m # C1f, J11 = AT f \u2212 h 0n\u00d7(m1\u2212p) CT 2f i BT 1f, J12 = \u2212 \" C1f C2f #T \u02dc R \" C1f C2f # , J21 = B1f \" Im1\u2212p 0 0 0p # BT 1f, the controller can be designed as F\u2113(Kf, \u039e) , (11) where \u039e is any dynamics with input \u039eu and output \u039ey satisfying that \u039e \u2208RH\u221e, \u2225\u039e\u2225\u221e< \u03b3. 
Kf can be formulated as \u02d9 \u02c6 xf = \u02c6 A\u02c6 xf + \u02c6 B1\u03a3\u22121 1 U1\u01ebf + \u02c6 B2\u039ey, uf = V T 2 \u03a3\u22121 2 \u02c6 C1\u02c6 xf + V T 2 \u03a3\u22121 2 \u02c6 D11\u03a3\u22121 1 U1\u01ebf + V T 2 \u03a3\u22121 2 \u02c6 D12\u039ey, \u039eu = \u02c6 C2\u02c6 xf + \u02c6 D21\u03a3\u22121 1 U1\u01ebf, with \u02c6 D11 = \u2212D1122, \u02c6 D12 \u2208Rm\u00d7m and \u02c6 D21 \u2208Rp\u00d7p being any matrices (e.g. Cholesky factors) satisfying \u02c6 D12 \u02c6 DT 12 = I, \u02c6 DT 21 \u02c6 D21 = I \u2212\u03b3\u22122DT 1112D1112, \u039eu and \u039ey being respectively the input and output variables of \u039e, and \u02c6 B2 = (B2f + L12f) \u02c6 D12, \u02c6 C2 = \u2212\u02c6 D21 (C2f + K12f) Z, \u02c6 B1 = \u2212L2f + \u02c6 B2 \u02c6 D\u22121 12 \u02c6 D11, \u02c6 C1 = K2fZ + \u02c6 D11 \u02c6 D\u22121 21 \u02c6 C2, \u02c6 A = Af + Lf \u0002 CT 1f, CT 2f \u0003T + \u02c6 B2 \u02c6 D\u22121 12 \u02c6 C1, Z = \u0000I \u2212\u03b3\u22122YfXf \u0001\u22121 . 5 \fThe following theorem indicates the stability and performance of the closed-loop system. Theorem 3 For system (9) satisfying Assumptions (A1), (A4)\u2013(A6) and the reference dynamics (3) satisfying Assumptions (A7)\u2013(A9), if for given positive constant \u03b3 > \u00af \u03c3(D1112), there exist Xf and Yf such that \u03c1(XfYf) < \u03b32, then the controller (11) stabilizes the system (9). Furthermore, the tracking error satis\ufb01es \u2225e\u2225P < \u03b3\u2225w1\u2225P. (12) PROOF. First of all, we will prove that the conditions for H\u221econtrol hold. Under the controller (4)\u2013(6), Af is Hurwitz, which implies that the system is stabilizable and detectable. Since rank (Dz) = m, one has \uf8ee \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8f0 A + BKt \u2212j\u03c9I \u2212LtC B 0 A + LtC \u2212j\u03c9I 0 C C Dz C C \u2212Dz \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fb has full column rank for any \u03c9 \u2208R. Similarly, assumption (A4) ensures \uf8ee \uf8ef \uf8ef \uf8f0 A + BKt \u2212j\u03c9I \u2212LtC \u2212LtD1 0 A + LtC \u2212j\u03c9I B1 + LtD1 0 \u2212C \u2212D1 \uf8f9 \uf8fa \uf8fa \uf8fb has full row rank. Then to analyze the stability of the closed-loop system, denote R = \" DT 11f DT 12f # h D11f D12f i \u2212 \" \u03b32Im1 0 0 0 # , \u02dc R = \" D11f D21f # h DT 11f DT 21f i \u2212 \" \u03b32I2p 0 0 0 # , and it is clear that RR = RR = I, \u02dc R \u02dc R = \u02dc R \u02dc R = I. According to the main result in [20], Kf and Lf with respect to Xf and Yf are proper controller gains to solve the H\u221econtrol problem. Thereby, the controller F\u2113 \u0010 e Kf, \u039e \u0011 with e Kf being designed as \u02d9 \u02c6 xf = \u02c6 A\u02c6 xf + \u02c6 B1\u02dc yf + \u02c6 B2\u039ey, \u02dc uf = \u02c6 C1\u02c6 xf + \u02c6 D11\u02dc yf + \u02c6 D12\u039ey, \u039eu = \u02c6 C2\u02c6 xf + \u02c6 D21\u02dc yf, is an admissible stabilizer for the system (10), and equivalently, the system (9) complete with controller (11) achieves \u2225z\u2225P \u2225w1\u2225P < \u03b3 if \u2225\u02dc z\u2225P \u2225\u02dc w\u2225P = \u2225z\u2225P \u2225w1\u2225P is noticed. To estimate the e\ufb00ect of the disturbance on the output, consider that \u2225z\u22252 P = \r \r \r \r \r \" e + Dzu e \u2212Dzu #\r \r \r \r \r 2 P = \u2225e\u22252 P + \u2225Dzu\u22252 P \u2265\u2225e\u22252 P. Then (12) is obtained, which completes the proof. Theorem 3 provides an approach to suppress the e\ufb00ect of the uncertainty/disturbance using the residual signal. 
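The existence conditions of Theorem 3 are easy to verify numerically once candidate Riccati solutions are available; the sketch below checks gamma > sigma_max(D1112) and the coupling condition rho(Xf Yf) < gamma^2, with Xf, Yf and the block D1112 assumed already computed.

```python
import numpy as np

def theorem3_conditions(Xf, Yf, gamma, D1112):
    """Numerical check of the existence conditions in Theorem 3."""
    gamma_ok = gamma > np.linalg.norm(D1112, 2)                       # gamma > sigma_max(D1112)
    coupling_ok = max(abs(np.linalg.eigvals(Xf @ Yf))) < gamma ** 2   # rho(Xf Yf) < gamma^2
    return gamma_ok and coupling_ok
```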
Combined with Theorem 1, the robust tracking objective can be ful\ufb01lled by focus on the tracking performance \ufb01rst to get ut de\ufb01ned in 6, and then, if the system deviates from the nominal model, the H\u221econtrol \u02dc Q is designed to get uf. Then one gets the complemental control u = ut + uf. (13) If there is no uncertainty/disturbance in the system, \u02dc Q will not operate and the tracking performance appears. Remark 4 The performance output z in this paper includes direct disturbances term D1w1, \u03b3 > \u00af \u03c3(D1112) is hence a su\ufb03cient condition for H\u221econtrol. Specially, when m = 2p, the condition degenerates into \u03b3 > 0, which is in coincidence with that in [19]. 4 Robust Output Regulation for Linear Systems with Noise And Modeled/Unmodeled Disturbance In this section, the robust output regulation control for linear systems subject to, simultaneously, white noise, model disturbance and unmodeled disturbance is discussed. 4.1 Linear System with White Noise The white noise is addressed in this subsection, for which, the control structure is depicted in Fig. 3. For the noisy system \u02d9 x = Ax + B0w0 + B1w1 + Bu, y = Cx + D0w0 + D1w1, (14) an observer acting as an Kalman \ufb01lter [17] is employed in (4), which is redesigned as \u02d9 \u02c6 x = A\u02c6 x + But \u2212Lk (y \u2212C\u02c6 x) , (15) 6 \f\u2014 \u2014 Fig. 3. Control Structure for System with Noise where Lk = \u2212(B0DT 0 + P1CT)R\u22121 0 with R0 = D0DT 0 , P1 > 0 is the solution to the Ricatti equation (A \u2212B0DT 0 R\u22121 0 C)P1 + P1(A \u2212B0DT 0 R\u22121 0 C)T \u2212P1CTR\u22121 0 CP1 + B0(I \u2212DT 0 R\u22121 0 D0)B0 = 0. (16) Denote P2 > 0 meets (A + BKt)P2 + P2(A + BKt)T + LkD0DT 0 LT k = 0, then we have the following result. Theorem 5 Given reference dynamics (3) satisfying Assumptions (A7)\u2013(A9). Consider system (14) satisfying Assumptions (A1)\u2013(A6), under controller (5), (11), (15) and (13). The tracking error e satis\ufb01es \u2225e\u2225P = q tr \u0000C(P1 + P2)CT + D0DT 0 \u0001 + \u03b32\u2225w1\u22252 P. (17) In particular, for the case where w0 = 0, (12) holds. PROOF. First, we will show that Lk is a proper gain for the output regulation scheme. Rewrite (16) as (A + LkC)P1 + P1(A + LkC)T + (B + LkD0)(B + LkD0)T = 0, which means A + LkC is Hurwitz and Lk meets the requirement of output regulation design. Second, the baseline performance is discussed, and the impact of noise w0 on tracking error is deduced in the absence of w1. Let \u02dc xt = \" \u02c6 x \u2212Xtxr x \u2212\u02c6 x # , then, one has \u02d9 \u02dc xt = \u02dc At\u02dc xt + \u02dc Brwr + \u02dc B0w0, e = \u02dc Ct\u02dc xt + D0w0, (18) with w1 = \u02c6 xr \u2212xr, and \u02dc At = \" A + BKt \u2212LkC 0 A + LkC # , \u02dc B0 = \" \u2212LkD0 B0 + LkD0 # , \u02dc Br = \" BKr 0 # , \u02dc Ct = h C C i . Since w1 vanishes exponentially, \u02dc Brw1 can be ignored. The tracking error is measured by e = \u02dc Ct Z t 0 \u03a6(t, \u03c4) \u02dc B0w0(\u03c4) d\u03c4 + D0w0, where \u03a6(t, \u03c4) = e\u2212\u02dc At(t\u2212\u03c4). 
Then one has \u2225e\u22252 P = lim T \u2192\u221e 1 T E \u0012 Z T 0 \u0012 Z t 0 Z t 0 w0(\u03c4)T \u02dc BT 0 \u03a6(t, \u03c4)T \u02dc CT t \u00d7 \u02dc Ct\u03a6(t, s) \u02dc B0w0(s) ds d\u03c4 + wT 0 DT 0 D0w0 + 2wT 0 DT 0 \u02dc Ct Z t 0 \u03a6(t, \u03c4) \u02dc B0w0(\u03c4) d\u03c4 \u0013 dt \u0013 = lim T \u2192\u221e 1 T tr \u0012 Z T 0 \u0012 Z t 0 Z t 0 \u02dc Ct\u03a6(t, s) \u02dc B0 \u00d7 E(w0(s)w0(\u03c4)T) \u02dc BT 0 \u03a6(t, \u03c4)T \u02dc CT t ds d\u03c4 + 2 Z t 0 \u03a6(t, \u03c4) \u02dc B0E \u0000w0(\u03c4)w0(t)T\u0001 DT 0 \u02dc Ct d\u03c4 + D0E(w0(t)w0(t)T)DT 0 \u0013 dt \u0013 = lim T \u2192\u221e 1 T \u0012 tr \u0012 Z T 0 Z t 0 \u02dc Ct\u03a6(t, s) \u02dc B0 \u00d7 \u02dc BT 0 \u03a6(t, s)T \u02dc CT t ds dt \u0013 + T tr \u0000D0DT 0 \u0001 \u0013 = lim T \u2192\u221e 1 T tr Z T 0 \u02dc CtY \u02dc CT t dt ! + tr \u0000D0DT 0 \u0001 , where Y = R t 0 \u03a6(t, s) \u02dc B0 \u02dc BT 0 \u03a6(t, s)T ds is the solution to \u02d9 Y = \u02dc AtY + Y \u02dc AT t + \u02dc B0 \u02dc BT 0 . Provided P1 > 0 and P2 > 0, it is easy to verify that a proper solution has lim t\u2192\u221eY (t) = \" P2 0 0 P1 # , which is followed by \u2225e\u2225P = q tr \u0000C(P1 + P2)CT + D0DT 0 \u0001 . (19) 7 \fIf the non-noisy system is su\ufb00ered from the disturbance w1, we third introduce the controller (11) using a similar procedure, except that Lt is replaced by Lk, and tracking error is measured by (12) according to Theorem 3. In summary, one can conclude that the tracking error is estimated by (1) Eq. (19) if r \u0338= 0, w0 \u0338= 0, w1 = 0, (2) Eq. (12) if r = 0, w0 = 0, w1 \u0338= 0. Notice that r, w0 and w1 are independent signals, which, together with the de\ufb01nition of power norm, gives (17) and completes this proof. Remark 6 In the proof, it can be seen that undesirable factors are handled by di\ufb00erent control loops, and if nonnominal items do not exist, the nominal performance appears. 4.2 Linear Systems with Additional Modeled Disturbance For the case where partial disturbance is modeled, we denote w2 \u2208Rm2 as the modeled disturbance that is governed by (2), and the control structure is shown in Fig. 4. \u2014 \u2014 Fig. 4. Control Structure for System with Modeled Disturbance The rejection of w2 can be handled in the output regulation scheme (4), (5) and \u02d9 \u02c6 xw = Aw\u02c6 xw + Lw(Cw\u02c6 xw \u2212w2), (20) ut = Kt\u02c6 x + Kr\u02c6 xr + Kw\u02c6 xw, (21) where Kt, Lr and Lk are the same as that in (5), (6) and (15), Lw is designed such that (Aw + LwCw) is Hurwitz, and Kr and Kw satisfy (7), (8) and YtAw = AYt + BKw + BKtYt + B2Cw, 0 = CYt + D2Cw, for some matrices Xt and Yt. Theorem 7 Given reference dynamics (3) satisfying Assumptions (A7)\u2013(A9), consider system (1) satisfying Assumptions (A1)\u2013(A6) under ontroller (15), (5), (20), (21) and (11). The tracking error e satis\ufb01es (17). PROOF. The proof of tracking performance and disturbance rejection is similar to that of Theorem 3 and 5. Thus it is omitted here. 4.3 Linear System when (C, A) is Not Detectable If (C, A) is not fully detectable, it is reasonable to use all the output in the stabilization. In this subsection, we extend our control structure to a more general circumstance, where the output of the system is more than the reference dynamics by separating the stabilization out of the output regulation scheme. The structure of which is shown in Fig. 5. \u2014 \u2014 Fig. 5. 
Three Loops Control Structure Consider the system \u02d9 x = Ax + B0w0 + B1w1 + B2w2 + Bu, yp = Cpx + Dp0w0 + Dp1w1 + Dp2w2, y = Cx + D0w0 + D1w1 + D2w2, z = \" e + Dzu e \u2212Dzu # , e = y \u2212r, u = up + ut + uf, (22) where yp \u2208Rq collects all measurable signals that can be used for stabilization, while y contains the output that should track the reference r. up, ut and uf are designed for stabilization, output regulation and unmodeled disturbance rejection. Instead of Assumption (A1), we assume in the following that (A1)\u2032 (A, B) is stabilizable and (Cp, A) is detectable. 8 \fAssumption (A1)\u2032 decreases the conservatism of the Assumption (A1). Then we design the stabilization controller \u02d9 \u02c6 x = A\u02c6 x + Bu \u2212Lp(yp \u2212Cp\u02c6 x) + (B2 + LpDp2)Cw\u02c6 xw, up = Kp\u02c6 x, (23) the output regulation controller (5), (20) and ut = Kr\u02c6 xr + Kw\u02c6 xw, (24) and the disturbance rejection controller \u02dc Q designed for \u02d9 \u02dc x = \" A + BKp \u2212LpCp 0 A + LpCp # \u02dc x + \" \u2212LpDp0 B0 + LpDp0 # w0 + \" \u2212LpDp1 B1 + LpDp1 # w1 + \" B 0 # uf + \" \u2212BKr \u2212BKw \u2212(B2 + LpDp2)Cw 0 (B2 + LpDp2)Cw # w1, z = \" C C C C # \u02dc x + \" D0 D0 # w0 + \" D1 D1 # w1 + \" Dz \u2212Dz # uf, \u01ebf = h 0 \u2212C i \u02dc x \u2212D0w0 \u2212D1w1 + h CXt CYt i w1, where w1 = \u0002 (xr \u2212\u02c6 xr)T, (xw \u2212\u02c6 xw)T\u0003T, Lp = \u2212(B0DT p0+ P3CT)R\u22121 p , Rp = Dp0DT p0, P3 > 0 and P4 > 0 satisfy (A + LpCp)P3 + P3(A + LpCp)T + (B + LpDp0)(B + LpDp0)T = 0, (A + BKp)P4 + P4(A + BKp)T + LpDp0DT p0LT p = 0, and Kp meets (1) A + BKp is a Hurwitz matrix; (2) A + BKp + LpCp \u2212\u03bbI has full row rank for all eigenvalues \u03bb of Ar and Aw. Theorem 8 Given reference dynamics (3) satisfying Assumptions (A7)\u2013(A9). Consider system (22) under controller (23), (24), together with H\u221econtroller \u02dc Q. Suppose that Assumptions (A1)\u2032, (A2)\u2013(A6) hold. The tracking error e satis\ufb01es \u2225e\u2225P = q tr \u0000C(P3 + P4)CT + D0DT 0 \u0001 + \u03b32\u2225w1\u22252 P. PROOF. Using similar skills in the proof of Theorem 3 and 5, one can easily get this conclusion. 5 An Illustrative Example In this section, our proposed robust tracking control framework is used in Furuta Inverted Pendulum, which is reported in [21]. The system dynamics is governed by (22), where A = \uf8ee \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8f0 0 0 1.0000 0 0 0 0 1.0000 0 149.2673 \u22127.0611 \u22120.9829 0 523.1909 \u22126.9788 \u22121.7226 \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fb , B = \uf8ee \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8f0 0 0 49.7260 49.1467 \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fb , B0 = \uf8ee \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8f0 0.012 0 0 0 0 0.012 0 0 0 0 1 0 0 0 0 1 \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fb , B1 = \uf8ee \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8f0 0.05 0 0 0 0 0.05 0 0 0 0 0.8 0 0 0 0 0.8 \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fb , Cp = \" 1 0 0 0 0 1 0 0 # , C = [1, 0, 0, 0] , Dp0 = Dp1 = \" 0 0 1 0 0 0 0 1 # \u00d7 10\u22124, D0 = D1 = [0 0 1 0] \u00d7 10\u22124, and state variable x = h \u03b81, \u03b82, \u02d9 \u03b81, \u02d9 \u03b82 i represents the angles of jointed arms and the angular velocities. 
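Assumption (A1)' can be checked numerically for concrete data such as the Furuta pendulum matrices above; the sketch below uses the sufficient full-rank tests on the controllability matrix of (A, B) and the observability matrix of (Cp, A), which imply stabilizability and detectability respectively. The helper names are illustrative.

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix [B, AB, ..., A^{n-1}B]."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

def check_assumption_A1_prime(A, B, Cp):
    """Sufficient check: full controllability rank => (A, B) stabilizable,
    full observability rank of (Cp, A) => detectability."""
    n = A.shape[0]
    controllable = np.linalg.matrix_rank(ctrb(A, B)) == n
    observable = np.linalg.matrix_rank(ctrb(A.T, Cp.T)) == n
    return controllable, observable
```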
The objective is to control the angle of the horizontal arm to track the reference signal r = sin(0.5\u03c0t) in radian, which is presented by the dynamics \u02d9 xr = \" 0 1 \u2212(0.5\u03c0)2 0 # xr, xr(0) = \" 0 0.5\u03c0 # , r = h 1 0 i xr, in the presence of white noise, unmodeled disturbance as shown in Fig. 6, and modeled disturbance governed by w2(t) = cos(2\u03c0t) + 0.5 cos(5\u03c0t). 0 2 4 6 8 10 \u22122 \u22121 0 1 2 time (s) w1 Fig. 6. Unmodeled Disturbance 9 \f0 2 4 6 8 10 \u2212500 0 500 1000 time (s) y1 (\u25e6) 2 4 6 8 10 \u2212100 0 100 0 0.5 1 \u2212200 0 200 Reference Output Regulation Our Approach Fig. 7. Angle of the horizontal arm 0 2 4 6 8 10 \u2212600 \u2212400 \u2212200 0 200 400 time (s) y2 (\u25e6) 2 4 6 8 10 \u22125 0 5 0 0.5 1 \u2212200 0 200 Output Regulation Our Approach Fig. 8. Angle of the vertical arm 0 2 4 6 8 10 \u2212500 0 500 time (s) e (\u25e6) 2 4 6 8 10 \u22120.5 0 0.5 Output Regulation Our Approach Fig. 9. Tracking error We will reveal the performance of our proposed control framework by comparison with output regulation control that is reported in [15]. In the controller design, the angles of both arms are used in the stabilization, and the LMI based approach that was reported in [22] is adopted to assign poles into region S that is de\ufb01ned by S(q, \u03b8, r) = n z :\u211cz < q, |\u2111z| < tan \u03b8 |\u211cz|, |z \u2212q| < r, q < 0, 0 \u2264\u03b8 < \u03c0 2 , r > 0 o . Speci\ufb01cally, the poles of A+BKp locates in S(\u221215, 0.5, 15) and Ar +LrCr and Aw +LrCw in S(\u221213, 0.5, 13). Then the controller reported in [15] can be designed as Kp = \uf8ee \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8f0 4.7900 \u221251.2190 1.2218 \u22122.4165 \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fb T , Lp = \uf8ee \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8f0 \u22120.0016 \u22120.0460 \u22120.1529 \u22121.0082 \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fb \u00d7 105, Kr = h \u22124.5287 \u22121.0635 i , Lr = h \u221239.8623 \u2212421.1467 iT , Kw = h \u22120.0891 \u22120.0013 \u22120.0878 \u22120.0012 i , Lw = \uf8ee \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8f0 0.4801 \u22120.1602 \u22120.1600 \u22120.1599 2.1791 \u22120.7273 \u22120.7263 \u22120.7255 \u22120.3755 0.1253 0.1251 0.1250 3.3876 \u22121.1303 \u22121.1291 \u22121.1284 \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fb \u00d7 107. Let \u03b3 = 0.34 and Dz = 0.001. 
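The pole region S(q, theta, r) defined above can be checked pointwise for a computed closed-loop spectrum; the helper below simply evaluates the three inequalities in the definition, with theta interpreted in radians, and the commented usage shows how one might verify the poles of A + BKp against S(-15, 0.5, 15). This is an illustrative check, not part of the LMI design itself.

```python
import numpy as np

def in_region_S(z, q, theta, r):
    """Membership test for S(q, theta, r) = {z : Re z < q,
    |Im z| < tan(theta)|Re z|, |z - q| < r}, with q < 0, 0 <= theta < pi/2, r > 0."""
    return (z.real < q
            and abs(z.imag) < np.tan(theta) * abs(z.real)
            and abs(z - q) < r)

# Illustrative usage with the closed-loop matrix of the example:
# poles = np.linalg.eigvals(A + B @ Kp)
# assert all(in_region_S(p, -15.0, 0.5, 15.0) for p in poles)
```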
According to Theorem 1 and 8 the gains of our proposed control strategy can be designed as Lp = \uf8ee \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8f0 \u22120.1134 0.0035 0.0035 \u22120.1187 \u22129.2327 0.3394 0.4767 \u22129.8542 \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fb \u00d7 103, Kf = \uf8ee \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8f0 \u22122.5832 2.3056 \u22122.0880 2.0983 1.4468 3.0666 \u22122.5820 2.4753 \u22122.4322 \u22123.3498 \u22120.1524 0.1287 \u22120.1230 0.1209 0.1590 0.1552 \u22120.1307 0.1253 \u22120.1231 \u22120.1695 \u22122.5864 2.3072 \u22122.0895 2.1000 1.4308 3.0756 \u22122.5864 2.4805 \u22122.4375 \u22123.3547 \u22120.1524 0.1287 \u22120.1231 0.1209 0.1601 0.1552 \u22120.1307 0.1253 \u22120.1231 \u22120.1721 \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fb T , Lf = \uf8ee \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8f0 2.8192 0 \u22120.0243 0.2664 0 \u22120.0019 \u22125.5855 0 \u22121.0247 \u22121.2274 0 \u22120.0236 0.0004 0 \u22120.0381 0.0033 0 \u22120.0004 0.1629 0 0.3987 0.2038 0 0.0935 \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fb , 10 \fand Kp, Kr, Lr, Kw, Lw are the same as that in the method of [15]. The angle trajectories of horizontal and vertical arms of the Inverted Pendulum are depicted in Fig. 7 and 8, respectively. From Fig. 7 and 8, one can see that the horizontal arm can track the reference signal under our proposed control strategy, and the maximum deviation of the vertical arm from the equilibrium point is less than 43\u25e6. In contrast, the system under the control scheme in [15] exhibits a large overshoot (520\u25e6of the vertical arm), leading to possible unstable performance. The tracking error shown in Fig. 9 indicates that the controller \u02dc Q narrows down the error in the transient and steady state, and the maximum tracking error is less than 0.5\u25e6, while the tracking error under controller in [15] is over 27\u25e6. 6" + } + ], + "Yancheng Liang": [ + { + "url": "http://arxiv.org/abs/2308.03704v1", + "title": "DeRisk: An Effective Deep Learning Framework for Credit Risk Prediction over Real-World Financial Data", + "abstract": "Despite the tremendous advances achieved over the past years by deep learning\ntechniques, the latest risk prediction models for industrial applications still\nrely on highly handtuned stage-wised statistical learning tools, such as\ngradient boosting and random forest methods. Different from images or\nlanguages, real-world financial data are high-dimensional, sparse, noisy and\nextremely imbalanced, which makes deep neural network models particularly\nchallenging to train and fragile in practice. In this work, we propose DeRisk,\nan effective deep learning risk prediction framework for credit risk prediction\non real-world financial data. DeRisk is the first deep risk prediction model\nthat outperforms statistical learning approaches deployed in our company's\nproduction system. 
We also perform extensive ablation studies on our method to\npresent the most critical factors for the empirical success of DeRisk.", + "authors": "Yancheng Liang, Jiajie Zhang, Hui Li, Xiaochen Liu, Yi Hu, Yong Wu, Jinyao Zhang, Yongyan Liu, Yi Wu", + "published": "2023-08-07", + "updated": "2023-08-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "q-fin.ST" + ], + "main_content": "Introduction Credit risk is the risk of loan default or loan delinquency when a borrower fails to repay on time. Credit risk prediction is an analytical problem that is vital for financial institutions when they are formulating lending strategies for loan applications. It helps make lending decisions by assessing the solvency of the applicants from their credit information. Accurate prediction keeps bad debts at a low level, which directly saves substantial financial loss for the multi-billion dollar credit loan industry (Malekipirbazari and Aksakalli, 2015; Tan et al., 2018). As credit risk is one major threat to financial institutions (Buehler et al., 2008; Li et al., 2015; Ma et al., 2018; Tan et al., 2018), better credit risk prediction also improves the risk management capacity of banks and financial technology companies. Although credit scores, such as FICO Score, have been widely used as mainstream risk indicators by many financial institutions, data-driven \u2217 Both authors contributed equally to this research. methods have recently shown their great potential and superior practical performances (Xu et al., 2021). Deep learning (DL), the dominating modeling technique in various domains such as computer vision, natural language processing, and recommendation system, has been a promising and increasingly popular tool considered to tackle financial problems. Recent attempts include market prediction (Ding et al., 2015; Minh et al., 2018), stock trading (Sezer et al., 2017) and exchange rate prediction (Shen et al., 2015). Despite the recent trend of using deep models, non-DL methods, such as XGBoost and logistic regression, remain the most effective techniques so far for credit risk prediction in the financial industry. Many existing studies have shown that neural network models lead to similar or even worse performances than non-DL methods (Fu, 2017; Kvamme et al., 2018; Varmedja et al., 2019; Li et al., 2020; Moscato et al., 2021). Credit risk prediction can be formulated as a binary classification problem, where the goal is to learn a function f\u03b8 : X \u2192[0, 1] to map the credit information x \u2208X of an applicant to a risk score y \u2208[0, 1] that represents the probability of default. Despite such a simple problem formulation, credit risk prediction can be particularly challenging. Existing deep-learning-based solutions mainly focus on e-commerce consumer data (Liang et al., 2021), which typically include dense features and highly frequent user activities, such as clicks and payments, on e-commerce platforms. However, these fine-grained data are not commonly available to financial institutions. Specifically, in our application, we adopt the official credit reports provided by the Credit Reference Center (CRC) of the People\u2019s Bank of China. These financial data are of much lower quality, i.e., containing much higher dimensions (over 4k) with a large portion of missing entries and extreme values, due to low-frequency credit records. End-to-end training neural networks on these data can be substantially more challengarXiv:2308.03704v1 [cs.LG] 7 Aug 2023 \f... 
Credit Report Time Line time 1 time 2 time L ... ... Sequential Features ... ... ... ... ... Non-sequential Features MLM Pre-training Training with Oversampling Transformer Encoder DNN Training with\u00a0 Weighted Loss ... ... Concatenate Linear Head 0.72 Final Score Stage I: Data Pre-processing Stage II: Separate Training Stage III: Joint Fine-tuning \u00a0f \u221a \u00d7 \u00d7 ... f = 0? \u00d7 \u00d7 \u221a ... f = NAN? + + Feature Selection Score 0.68 Score \ud835\udcdb\u2193 Linear Head 0.75 \ud835\udcdb\u2193 Linear Head \ud835\udcdb\u2193 Figure 1: The three-stage pipeline of our DeRisk framework. First, in data pre-processing, feature selection and DL-specific data argumentation are adopted to benefit the optimization of DL models. Then we separately train two models for non-sequential data and sequential data, respectively. Finally, we combine them and fine-tune the joint model on the whole multi-format data. In contrast to the end-to-end paradigm in conventional DL applications, we remark the multi-stage process is critical to the overall success of DeRisk on real-world financial data. ing and brittle (Poole et al., 2016; Borisov et al., 2021). Therefore, to the best of our knowledge, most financial institutions (e.g., banks) still adopt non-DL-based methods. In this work, we present a successful industrial case study by developing an effective deep learning framework, DeRisk, which outperforms our production decision-tree-based system, on real-world financial data. Our DeRisk framework consists of three major stages including data pre-processing, separate training of non-sequential and sequential models, and joint fine-tuning. We also design a collection of practical techniques to stabilize deep neural network training under the aforementioned challenges. Specifically for the low-quality realworld financial data, we observe that a multi-stage process with feature selection and DL-specific engineering processing can be critical to the overall success of our framework. Main contributions. (1) We develop a comprehensive workflow that considers all the model training aspects for risk prediction. (2) We implement DeRisk, the first deep risk prediction model that outperforms statistical learning approaches on real-world financial data. (3) We conduct extensive ablation studies on the effect of different technical components of DeRisk, which provides useful insights and practical suggestions for the research community and relevant practitioners. 2 Related Work There have been extensive studies using machine learning techniques for credit risk prediction, including linear regression (Puro et al., 2010; Guo et al., 2016), SVM (Jadhav et al., 2018; Kim and Cho, 2019), decision tree based methods like Random Forest (RF) (Malekipirbazari and Aksakalli, 2015; Varmedja et al., 2019; Xu et al., 2021) or Gradient Boost Decision Tree (GBDT) (Xia et al., 2017a; He et al., 2018), deep learning (Byanjankar et al., 2015; Kvamme et al., 2018; Yang et al., 2018; Yotsawat et al., 2021), or an ensemble of them (Fu, 2017; Li et al., 2020). Most of these works use data with non-sequential features. Although deep learning is applied, empirical results find that XGBoost or other GBDT approaches usually outperforms deep learning (Fu, 2017; Kvamme et al., 2018; Varmedja et al., 2019; Xu et al., 2021). On the other hand, deep learning has shown its superiority beyond tabular data through the flexibility of deep neural networks. 
Convolutional Neural Network (CNN) (Kvamme et al., 2018), Long Short-Term Memory (LSTM) (Yang et al., 2018) and Graph Neural Network (GNN) (Wang et al., 2021a) are adopted for sequential data or graph data since other machine learning techniques like GBDT fail to properly model non-tabular data. According to (Liang et al., 2021), deep learning outperforms conventional methods on multimodal e-commerce data for credit risk prediction. Many data challenges in financial applications are also common in other machine learning fields. (1) For high-dimensional data, many feature selection methods have been proposed, including filter methods (Gu et al., 2011), wrapper methods (Yamada et al., 2014) and embedded methods (Feng and Simon, 2017). Many risk prediction works \fhave adopted feature selection for better performance (Xia et al., 2017a; Ha et al., 2019; Li et al., 2020) or interpretability (Ma et al., 2018; Xu et al., 2021). (2) Handling multiple data formats and feature types is related to the field of deep learning for tabular data (Gorishniy et al., 2021; Borisov et al., 2021). There are typical three popular deep neural network architectures for tabular data (Klambauer et al., 2017; Huang et al., 2020; Ar\u0131k and Pfister, 2021), including Multi-Layer Perception (MLP), Residual Network (ResNet) (He et al., 2016) and Transformer (Vaswani et al., 2017). Similar to the financial domain, it is also reported that deep models are not universally superior to GBDT models (Gorishniy et al., 2021) on tabular data. (3) For the out-of-time distribution shift issue, it is common to split training and test data according to the temporal order (Kvamme et al., 2018; Jiang et al., 2021). (4) Furthermore, data imbalance is also a long-standing problem in machine learning research. Among the popular over-sampling and under-sampling strategies (He et al., 2018; Bastani et al., 2019; Mahbobi et al., 2021), Synthetic Minority Over-sampling Technique (SMOTE) (Chawla et al., 2002) is a widespread technique for synthetic minority data, which is also reported be effective for credit risk prediction (Bastani et al., 2019). Generative adversarial networks can also be used to generate additional minority data (Mariani et al., 2018) and this method can be applied to financial data (Liu et al., 2020) for risk prediction. However, these methods are limited to non-sequential data generation, while our financial data has multiple formats. Class-balanced loss is another method to make the model attend more to the minority samples (Lin et al., 2017; Xia et al., 2017b; Cui et al., 2019; Ren et al., 2022). Comparative experiments (Kaur et al., 2019; Moscato et al., 2021) show that all strategies have their pros and cons. In our work, we use a class-balanced loss to mitigate the problem of data imbalance, and different strategies are used for non-sequential data and sequential data thanks to their great difference in data dimension. 3 Preliminary In this section, we first present the problem statement for the credit risk prediction task, and then introduce the credit information and labels used in the task. 3.1 Task Formulation The credit risk prediction task aims to decide whether a loan can be granted to the applicant according to his/her credit information. 
To be more specific, the risk prediction model needs to learn a function f\u03b8 : X \u2192[0, 1], which takes the credit information x \u2208X of an applicant as input and produces a risk score y \u2208[0, 1] that represents the probability of delinquent on the applicant\u2019s payments. 3.2 Multi-format Credit Information. In this work, we adopt the credit information in the credit report data that is generally available in financial institutions. The credit report data of an applicant consists of two parts: non-sequential features and sequential features. Specifically, the non-sequential part usually contains thousands of stable profiles of the applicant, including age, marital status, industry, property status, etc. We remark that the non-sequential data of a credit report can be extremely high-dimensional and sparse, which requires further processing to successfully train deep neural network models. The sequential part contains dozens of features and consists of three components of the applicant\u2019s financial behavior organized by time: (1) applicant\u2019s past loan information (loan), including the date of loan issuing, type of lending institution, loan amount, etc.; (2) the records that applicant\u2019s credit report was inquired in the past (inquiry), including inquiry time, inquiry institutions, inquiry reasons, etc.; (3) applicant\u2019s credit card information (card), including card application date, credit card type, currency, etc. Note that the number of sequential features is much smaller than non-sequential features. 3.3 Multiple Labels and Imbalanced Data Loan repayments naturally generate multiple labels because of installment (e.g., the first or the second month to pay back) and different degrees of delinquency (e.g., one-week or one-month delay). These labels are roughly categorized into short-term labels (e.g., the first/second/third installment is more than 30 days overdue) and long-term labels (e.g., any installment in recent 12 months is more than 5/15/30 days overdue). Due to the general priority of short-term benefits and the convenience of subse\fquent collection, financial institutions typically use short-term labels for evaluation. However, directly using this short-term evaluation label as the training label can be suboptimal. The choice of training label needs careful consideration for the best practice. Note that all these labels are particularly imbalanced (10% or even 1% for minority samples) because applicants who pay on time are much more than applicants who are overdue. Therefore, different choices of labels may lead to drastically different model performances in practice, as shown in our ablation study in Section 7.2. 4 Methodology 4.1 Overall Pipeline The overall pipeline of our DeRisk framework is shown in Figure 1. Firstly, we apply careful data processing to turn noisy and irregular input features into a neatly structured format, which is indispensable for training deep networks. Secondly, to well utilize both sequential and non-sequential features, we design two main sub-models: a DNN model for processing non-sequential features and a Transformer-based model MS for processing sequential features. We train them separately in the second stage. In the last stage, we fuse MNS and MS by concatenating the final hidden layers from both models and applying another linear head to give the final prediction score. We jointly fine-tune this whole model to get improved performance. 4.2 Selection of Training Label As we mentioned in Sec. 
3, there are multiple labels in risk prediction tasks that record an applicant\u2019s repayment behavior in different time periods. Among these labels, we choose a long-term label to train our model for two reasons. First, long-term labels are more balanced than short-term labels. Second, the data distribution (e.g., the ratio of negative and positive data) varies over time (see Appendix A.3) because of economic changes and the continual improvement of our deployed model. The long-term label is less sensitive to these influences and is more stable because it summarizes an applicant\u2019s behavior in the last 12 months, conceptually performing a smoothing operator over the timeline. We believe this will make our model more generalizable and perform better on the out-of-time test set, though predicting long-term risk is inherently more difficult. 4.3 Data Pre-Processing The credit report data, especially the non-sequential data, is extremely complex and noisy, as it contains many missing values and outlier values. This low-quality input can make the learning process unstable and hurts the final performance. Therefore, proper data pre-preprocessing can be significantly beneficial for the optimization of DL models. Both sequential and non-sequential features can be divided into three types: time features (i.e., features about time such as credit card issue date), real-value features (e.g., age, loan amount), and category features (e.g., industry, type of lending institution). For the time features, we always use a relative date difference to avoid the models memorizing input data according to the date. We also apply normalization for the numerical time features and real-value features, and discard minor classes in the category features. In addition, we adopt specific techniques for non-sequential features. We found that lots of non-sequential real-value features are useless noise and even harmful for training. Hence we adopt a commonly-used feature selection technique that utilizes XGBoost (Chen et al., 2015) to select the most important 500 features among thousands of non-sequential real-value features and discard the others. Besides, most non-sequential features have many 0s and missing values (NAN) that naturally arise from the financial behaviors and data collection processes, which makes non-sequential data sparse, noisy, and problematic for DL training. These 0s and NANs are not necessarily meaningless, e.g., a NAN in \u201cThe time of first application for a mortgage\" may imply that this applicant has never applied for a mortgage. Besides, if we simply fill these entries with a constant c, it will influence those true entries close to c and significantly influence the learned model. So, we treat these 0s and NANs carefully. For every category feature, we add a category \u27e8NAN\u27e9, and for every real-value and time feature, besides replacing all NANs with 0s, we also create two indicators that directly tell whether a value is 0 and is NAN. With explicit indicators, DL models can therefore directly utilize the information implied by meaningful 0s and NANs and learn to ignore those 0s and NANs that are harmful to training. \fTime Feature Real-value Feature Category Feature Category Embedding Concat ReLU MLP Linear Sigmoid Figure 2: Non-sequential DNN model. 
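The pre-processing of Sec. 4.3 can be summarized by a short sketch: is-zero and is-NAN indicators for real-value features, a dedicated <NAN> category for categorical features, and XGBoost-based selection of the top real-value features. Column names, the number of boosting rounds, and the pandas/XGBoost calls below are illustrative choices rather than the exact production pipeline; normalization of time and real-value features is omitted for brevity.

```python
import pandas as pd
from xgboost import XGBClassifier

def preprocess_nonseq(df, real_cols, cat_cols, labels, top_k=500):
    """Sketch of the non-sequential pre-processing described in Sec. 4.3."""
    out = df.copy()
    for c in real_cols:
        out[c + "_is_zero"] = (out[c] == 0).astype(int)   # explicit zero indicator
        out[c + "_is_nan"] = out[c].isna().astype(int)    # explicit NAN indicator
        out[c] = out[c].fillna(0.0)
    for c in cat_cols:
        out[c] = out[c].fillna("<NAN>").astype("category")
    # rank real-value features by XGBoost importance and keep only the top-k
    xgb = XGBClassifier(n_estimators=200, max_depth=6)
    xgb.fit(out[real_cols], labels)
    keep = set(pd.Series(xgb.feature_importances_, index=real_cols).nlargest(top_k).index)
    drop = [c for c in real_cols if c not in keep]
    return out.drop(columns=drop)
```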
Figure 3: Sequential Transformer-based model (per-feature embeddings merged by feature attention, a time net producing time embeddings, a shared Transformer encoder, sequence attention pooling, and a linear-sigmoid head).
4.4 Modeling Non-sequential Features We adopt a simple but effective neural network for non-sequential features. The architecture is shown in Figure 2. Firstly, it uses an embedding layer to convert category features into dense vectors and concatenates them with the time and real-value features to form the dense input $x^{NS}_{dense} \in \mathbb{R}^{m_1}$. Then $x^{NS}_{dense}$ is fed into an MLP (multi-layer perceptron) with ReLU activation to get the non-sequential output hidden state $x^{NS}_{final} \in \mathbb{R}^{m_2}$. The final prediction $\hat{y}^{NS}$ is computed as $\hat{y}^{NS} = \sigma((w^{NS}_{logit})^\top x^{NS}_{final} + b^{NS}_{logit})$, where $w^{NS}_{logit} \in \mathbb{R}^{m_2}$ and $b^{NS}_{logit}$ are the weight vector and bias for the logit, respectively, $z^{NS} = (w^{NS}_{logit})^\top x^{NS}_{final} + b^{NS}_{logit}$ is the logit, and $\sigma(x) = 1/(1 + \exp(-x))$ is the sigmoid activation. 4.5 Modeling Sequential Features 4.5.1 Architecture We adopt a Transformer-based model (Vaswani et al., 2017) for its strong modeling capacity. The architecture is shown in Figure 3. Three such models, M_card, M_inquiry, and M_loan, are used for card, inquiry, and loan features, respectively. Suppose the sequence length is $l$ and the embedding size is $e$. Firstly, a time net converts the time feature into a time embedding $E_t \in \mathbb{R}^{l \times e}$, which plays the role of a position embedding, and attention is used to merge the different feature embeddings into one, i.e., $E_f \in \mathbb{R}^{l \times e}$. Then a Transformer encoder encodes the sequential embeddings $E = E_t + E_f$ into the hidden feature $x_h \in \mathbb{R}^{l \times e}$, which is pooled by another attention layer into the output feature $x^{*}_{final} \in \mathbb{R}^{e}$, where $*$ refers to card, inquiry, or loan. We concatenate $x^{card}_{final}$, $x^{inquiry}_{final}$, and $x^{loan}_{final}$ to obtain $x^{S}_{final} \in \mathbb{R}^{3e}$. Finally, similar to the non-sequential case, we have the logit $z^{S} = (w^{S}_{logit})^\top x^{S}_{final} + b^{S}_{logit}$ and the final prediction $\hat{y}^{S} = \sigma(z^{S})$. To improve the generalization ability of the sequential model, we share the time net and Transformer encoder among M_card, M_inquiry, and M_loan. 4.5.2 Mask Language Model Pre-training During training, we found that optimization of the sequential model is much harder than that of the non-sequential model (the left part of Figure 4) due to the scarcity of sequential features compared with non-sequential features. To ease the training of the sequential model, we adopt masked language model (MLM) pre-training, as in BERT (Devlin et al., 2019), to make the model first learn informative and general features from the sequential data. We randomly mask the input sequential features, where 80% of masked values are replaced with a <MASK> token (for category features) or 0 (for time and real-value features), 10% are replaced with a random value, and 10% remain unchanged. The three output hidden features of the Transformer encoder, i.e., $x^{card}_{h}$, $x^{inquiry}_{h}$, and $x^{loan}_{h}$, are fed into different classification heads to predict the different types of original values at the masked positions. After pre-training, we fine-tune M_S on the downstream classification task. 4.6 Weighted BCE Loss We also adopt a weighted BCE loss to deal with data imbalance. The BCE (binary cross-entropy) loss is commonly used in binary classification tasks: $\mathrm{BCE} = -\frac{1}{n}\sum_{i=1}^{n}\left[y_i \log(\hat{y}_i) + (1 - y_i) \log(1 - \hat{y}_i)\right]$.
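Before turning to how the loss handles imbalance, the 80/10/10 masking scheme of Section 4.5.2 can be sketched as follows, assuming PyTorch, integer-encoded category features with a reserved <MASK> id, and an illustrative overall masking probability; this is an assumption-laden sketch, not the authors' implementation.

```python
# Sketch of the MLM masking scheme (Section 4.5.2): 80% <MASK>/0, 10% random, 10% unchanged.
import torch

def mask_sequence(cat_ids: torch.Tensor, num_vals: torch.Tensor, mask_id: int,
                  vocab_size: int, mask_prob: float = 0.15):
    """cat_ids: (batch, seq_len) int64 category ids; num_vals: (batch, seq_len) float values."""
    masked = torch.rand(cat_ids.shape) < mask_prob               # positions to be predicted
    decision = torch.rand(cat_ids.shape)
    replace_mask = masked & (decision < 0.8)                      # 80%: <MASK> token / 0
    replace_rand = masked & (decision >= 0.8) & (decision < 0.9)  # 10%: random replacement
    # the remaining 10% of masked positions keep their original values
    cat_in = cat_ids.clone()
    cat_in[replace_mask] = mask_id
    cat_in[replace_rand] = torch.randint(0, vocab_size, (int(replace_rand.sum()),))
    num_in = num_vals.clone()
    num_in[replace_mask] = 0.0
    num_in[replace_rand] = torch.randn(int(replace_rand.sum()))
    return cat_in, num_in, masked  # `masked` marks positions whose original values are predicted
```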
However, when negative samples far outnumber positive samples, the naive BCE loss above induces the model to output $\hat{y}_i = 0$. To avoid this, we can give more weight to positive samples by using a weighted BCE loss: $\mathrm{WBCE} = -\frac{1}{|D_-|}\sum_{i \in D_-}\log(1 - \hat{y}_i) - \frac{1}{|D_+|}\sum_{i \in D_+}\log(\hat{y}_i)$, where $D_+ = \{i : y_i = 1\}$ is the set of positive samples and $D_- = \{i : y_i = 0\}$ is the set of negative samples.
Figure 4: Left: valid AUC of non-sequential and sequential models with training epoch. Right: change of valid AUC of the sequential model with and without MLM pre-training or oversampling.
Another implementation of the above-mentioned weighted loss is oversampling, i.e., adjusting the ratio $|D_-| : |D_+|$ to 1 : 1 by re-sampling positive samples. We use oversampling for the sequential model and the normal weighted BCE loss for the non-sequential model and the joint fine-tuning stage. This is because the optimization of the sequential model is much harder and slower than that of the non-sequential model due to the small number of sequential features, while oversampling enables the model to see rare samples multiple times in one epoch and thus accelerates optimization. On the other hand, since the number of non-sequential features is large and the optimization of the non-sequential model is already fast enough, oversampling may instead lead to overfitting on the minority samples. 4.7 Separate Training & Joint Fine-tuning To fuse the sequential and non-sequential features, we use a concatenation layer (Concat Net) on top of them to concatenate their output hidden states and predict the final score, i.e., $\hat{y} = \sigma((w_{logit})^\top x_{final} + b_{logit})$, where $x_{final} = [x^{NS}_{final}, x^{S}_{final}]$. Note that the difficulty of optimizing the non-sequential and sequential models differs, so if we train them together with the concatenation layer from scratch, the overall model will rely entirely on the non-sequential outputs, which are easier to train on, while ignoring the output of the sequential model. To avoid this and to better utilize the sequential features, we adopt a two-stage training strategy: separately train the sequential and non-sequential models first and then jointly fine-tune them with the Concat Net.
Table 1: Number of time, real-value, and category features in each sample of sequential and non-sequential data. The card, inquiry, and loan sequences for each user are clipped to lengths 32, 64, and 128, respectively.
Data           | Time | Real-value | Category
card           | 1    | 2          | 5
inquiry        | 1    | 0          | 2
loan           | 1    | 4          | 5
non-sequential | 13   | 4098       | 9
5 Experiment Setup 5.1 Notation We mainly use a long-term label Y_long for training and a short-term label Y_short^eval for evaluation. There are also three other short-term labels, Y_short^other1, Y_short^other2, and Y_short^other3, used in our experiments. The descriptions of these labels are in Sec. A.2. 5.2 Dataset Statistics We sample 582,996 Yanqianguan users and use their credit report data and repayment behavior from August 2020 to July 2021 as the dataset. To simulate the out-of-time prediction in real business scenarios, we take the 430,865 data pieces from August 2020 to May 2021 as the training set and 152,131 data pieces from June 2021 to July 2021 as the test set.
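As a small illustration of the out-of-time split and the label-imbalance check described here, the following sketch assumes a pandas DataFrame with a month column and binary label columns named y_long / y_short_eval; these names are hypothetical, not the dataset's actual schema.

```python
# Sketch of the out-of-time train/test split (Section 5.2) and negative:positive ratio check.
import pandas as pd

def out_of_time_split(df: pd.DataFrame):
    # Train on Aug 2020 - May 2021, test on Jun - Jul 2021, mirroring the paper's setup.
    train = df[(df["month"] >= "2020-08") & (df["month"] <= "2021-05")]
    test = df[(df["month"] >= "2021-06") & (df["month"] <= "2021-07")]
    return train, test

def neg_pos_ratio(df: pd.DataFrame, label: str) -> float:
    # E.g., roughly 50:1 for the short-term evaluation label, 10:1 for the long-term label.
    pos = (df[label] == 1).sum()
    return (df[label] == 0).sum() / max(pos, 1)
```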
The ratio of negative and positive samples is about 50 : 1 according to the short-term label used for evaluation and about 10 : 1 according to the long-term label used for training (we keep the exact ratio numbers confidential due to commercial and security concerns). For sequential data, we set the maximum sequence lengths of the card, inquiry, and loan data to 32, 64, and 128, respectively, according to the distribution of data lengths. Only the latest data will be included for training and evaluation. Some statistics are summarized in Table 1.
Table 2: All models are evaluated by AUC scores on three different short-term labels.
Model                 | Y_short^eval | Y_short^other2 | Y_short^other3
non-seq model over non-seq data only
  XGBoost             | 0.6418       | 0.6282         | 0.6187
  DeepFM              | 0.5700       | 0.5508         | 0.5478
  SDCN                | 0.6450       | 0.6319         | 0.6236
  PDCN                | 0.6483       | 0.6343         | 0.6254
  AutoInt             | 0.6454       | 0.6325         | 0.6238
  DNN                 | 0.6499       | 0.6349         | 0.6254
seq model over seq data only
  Pooled MLP          | 0.5996       | 0.5821         | 0.5749
  LSTM                | 0.6108       | 0.5936         | 0.5859
  Transformer         | 0.6132       | 0.5941         | 0.5871
  Transformer+MLM     | 0.6156       | 0.5971         | 0.5885
joint model over the entire data
  Add-Attn Net        | 0.6504       | 0.6369         | 0.6285
  Mul-Attn Net        | 0.6520       | 0.6377         | 0.6278
  DeRisk (ours)       | 0.6546       | 0.6398         | 0.6297
Note that all the above data are explicitly authorized by the customers, since they apply for loans on our platform and must grant access to their credit reports. We also anonymized the names of people and organizations on the credit reports to protect customers' privacy. 5.3 Evaluation Metric The metric commonly used to evaluate credit risk prediction models is the AUC (Area Under the ROC Curve) score. We remark that this is a challenging task and an increase of 0.01 in AUC can be significant, as it translates into a roughly 5% reduction in real-world bad debt. 6 Main Results 6.1 Baselines For the non-sequential model, the baselines include (1) XGBoost (Chen et al., 2015), the currently popular traditional ML model (our main baseline), and several more complicated deep models: (2) DeepFM (Guo et al., 2017): the final score is $\hat{y}^{NS} = \sigma(z^{NS}_{DNN} + z^{NS}_{FM})$, where $z^{NS}_{DNN}$ is the logit of the DNN and $z^{NS}_{FM}$ is the logit obtained by an FM (factorization machine (Rendle, 2010)) layer. (3) DCNv2 (Wang et al., 2021b): uses a cross-network (multiple cross layers) to obtain high-order cross features; a DNN can be stacked on top of the cross-network (SDCN), or the two can be placed in parallel (PDCN). (4) AutoInt (Song et al., 2019): uses multi-head self-attention to learn interacted features. For the sequential model, our baselines are Pooled MLP (which uses a pooling layer to average the hidden states of different time steps that are individually produced by an MLP) and LSTM (Hochreiter and Schmidhuber, 1997). For the final module that fuses the output hidden states of the non-sequential and sequential models, we compare our simple Concat Net with an additive attention layer (Add-Attn Net) and a multiplicative attention layer (Mul-Attn Net) that use x_final^NS as a query vector to pool the output hidden features x_h of the Transformer encoder by additive and multiplicative attention, respectively. 6.2 Evaluation and Analysis Since our dataset has multiple formats, we first test separated models for single-format data modeling. For non-sequential data, we compare the DNN module in DeRisk with XGBoost, a widely-used decision-tree model in our production system. We aim to show whether our DeRisk system and techniques can make its DNN module outperform other non-DL methods on real-world financial data.
Other popular models in recommendation systems, such as DeepFM, DCN, and AutoInt, are also tested as DL competitors. For sequential data, we consider different sequential models including Pooled MLP, LSTM, and Transformer for evaluation. Our DeRisk adopts the Transformer and additionally adopts MLM pre-training to accelerate training. Finally, we consider joint models trained over the entire dataset with both formats by fusing the best non-sequential model (DNN) and the best sequential model (the MLM-pretrained Transformer) to obtain the best evaluation results. With more data, the joint models outperform either separated model, but we also find that different fusing techniques lead to different performances. We compare our Concat Net with two different attention-based methods. Table 2 summarizes the main results. All models are evaluated on three different labels to show consistent results. From the results we can see that: (1) Our non-sequential model DNN and sequential model MLM+Transformer outperform all baselines, respectively. Specifically, compared with the currently popular XGBoost model, our DNN model M_NS and best joint model DeRisk (with Concat Net) improve the Y_short^eval AUC score by 0.0081 and 0.0128, respectively. (2) Joint fine-tuning of the non-sequential and sequential models achieves better results than using a single non-sequential or sequential model alone. (3) Complex models do not necessarily perform better: the simplest DNN and Concat Net outperform other more complicated models. This indicates that the high-order features created by those additional networks, such as FM and cross layers, are not that helpful for the credit risk prediction task.
Table 3: Y_short^eval AUC scores with different training strategies.
Change                              | AUC
No (ours)                           | 0.6546
w/o Separate Training (end-to-end)  | 0.6487
w/ Freeze Sub-models                | 0.6512
7 Ablation Study In this section, we conduct a series of experiments to demonstrate the effect of each part of our DeRisk framework. We mainly use Y_short^eval for evaluation since we find it shows consistent results with the other short-term labels, as in Table 2. We test the effectiveness of different modules in our multi-stage process, including separate training & joint fine-tuning, feature selection, indicator features, and MLM pre-training. Many different techniques for data imbalance are also studied in this section. With our ablation studies, we also present best practices for training deep neural network models over real-world financial data. 7.1 Effect of Multi-stage Training Because the difficulty of optimization on non-sequential data and sequential data is different, as shown in Figure 4, we first separately train M_NS and M_S and then jointly fine-tune them. We also tried jointly training them from scratch (end-to-end), or freezing M_NS and M_S and only tuning the concatenation layer during joint fine-tuning. The results are reported in Table 3. We can see that separate training outperforms the other two training strategies.
Table 4: Experiment results of selecting different training labels on non-sequential and sequential models.
Model          | Training Label  | Test Label    | AUC
non-seq model  | Y_long (Ours)   | Y_short^eval  | 0.6499
               | Y_short^other1  | Y_short^eval  | 0.6392
               | Y_short^eval    | Y_short^eval  | 0.6363
seq model      | Y_long (Ours)   | Y_short^eval  | 0.6156
               | Y_short^other1  | Y_short^eval  | 0.6113
               | Y_short^eval    | Y_short^eval  | 0.6105
Table 5: Analysis experiment results on the non-sequential DNN model, where |F_R| is the number of selected features.
Change         | AUC
No (Ours)      | 0.6499
|F_R| = 4098   | 0.6415
|F_R| = 100    | 0.6390
w/o Indicator  | 0.6426
w/ BCE Loss    | 0.6454
w/ Focal Loss  | 0.6403
w/ Oversample  | 0.6458
Suggestion#1: It is beneficial to first perform separate training and then joint tuning for multi-format data. The additional tunable parameters introduced in the fine-tuning process should be sufficiently large for effective multi-format fusion. 7.2 Effect of Different Training Labels We tried taking two short-term labels (Y_short^other1 and Y_short^eval) and a long-term label (Y_long) as the training label, respectively. The results in Table 4 demonstrate that the long-term label is the best choice for both non-sequential and sequential models, even when the model is evaluated on a short-term label. Suggestion#2: It is better to choose a balanced and stable signal that measures the long-term behaviors as the training label. 7.3 Effect of Real-value Feature Selection To show the effect of selecting real-value features with XGBoost, we compare the following three cases: no selection, selecting 500 real-value features (Ours), and selecting 100 real-value features. The results in Table 5 show that selecting 500 features performs the best. This indicates that (1) by selecting real-value features with XGBoost, we can drop less useful features and improve the performance, and (2) dropping too many features would lead to worse predictions.
Table 6: Analysis experiment results on the sequential Transformer-based model.
Change                | AUC
No (Ours)             | 0.6156
w/o MLM Pre-training  | 0.6132
w/o Oversampling      | 0.6153
Suggestion#3: It is important to perform feature selection before deep learning training. The dimension of selected features should be chosen carefully. 7.4 Effect of Indicator Features To show the effect of the NAN and zero indicators, we compare the cases with and without them. As shown in Table 5, after removing the indicators, the AUC score decreases by 0.0073. Suggestion#4: Some NANs and 0s can be meaningful, and it is better to use indicator features rather than simply filling these missing values with a constant or discarding them. 7.5 Comparison of Different Loss Functions We compared the performance of using the weighted BCE loss (Ours) with using the naive BCE loss on the DNN model. In addition, we also tried Focal loss (Lin et al., 2017), which is designed for the data-imbalance case, but the result in Table 5 shows that it is not helpful for our task, and weighted BCE achieves the best performance. Suggestion#5: Adding more weight to rare positive samples is critical to prevent the model from biasing toward the overwhelming negative outputs. 7.6 Effect of Oversampling We compared the cases with and without oversampling on both the non-sequential model and the sequential model to demonstrate the effect of oversampling. We can see from Table 6 and the right of Figure 4 that for the sequential model, oversampling (1) improves AUC and (2) accelerates optimization. By enabling the model to see rare positive samples more times in each epoch, oversampling reduces the training difficulty of the sequential model. On the other hand, oversampling also makes the non-sequential model, the one that is easier to optimize, overfit more quickly on the training data, so it cannot achieve good performance, as shown in Table 5. In practice, the DNN with oversampling usually overfits after the first epoch. Suggestion#6: Oversampling makes optimization of the sequential model easier and improves performance. Considering the difference between non-sequential data and sequential data, each separated model should be optimized with a different sampling strategy.
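The two imbalance-handling options compared above (the weighted BCE loss of Section 4.6 and oversampling) can be sketched as follows, assuming PyTorch; batch sizes, hyperparameters, and dataset wiring are illustrative, and this is a sketch rather than the authors' implementation.

```python
# Sketch of the weighted BCE loss and an oversampling data loader (Sections 4.6, 7.5, 7.6).
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

def weighted_bce(y_hat: torch.Tensor, y: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Average the positive and negative log-losses separately, as in the WBCE formula."""
    y_hat = y_hat.clamp(eps, 1 - eps)
    pos, neg = y == 1, y == 0
    loss_pos = -torch.log(y_hat[pos]).mean() if pos.any() else y_hat.new_zeros(())
    loss_neg = -torch.log(1 - y_hat[neg]).mean() if neg.any() else y_hat.new_zeros(())
    return loss_pos + loss_neg

def oversampled_loader(dataset, labels: torch.Tensor, batch_size: int = 256) -> DataLoader:
    # Re-sample rare positives so mini-batches are roughly balanced (used for the sequential model).
    class_counts = torch.bincount(labels.long(), minlength=2).float()
    sample_weights = 1.0 / class_counts[labels.long()]
    sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels), replacement=True)
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)
```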
7.7 Effect of MLM Pre-training of Sequential Model From Table 6 and the right of Figure 4 that MLM pre-training of the sequential model (1) improves performance. (2) accelerates optimization. This indicates that the pre-trained model has learned some knowledge of sequential data that are useful for the risk prediction task. Suggestion#7: MLM pre-training benefits the optimization of the sequential model on credit risk prediction. 8" + } + ], + "Jiajie Zhang": [ + { + "url": "http://arxiv.org/abs/2402.01619v1", + "title": "KB-Plugin: A Plug-and-play Framework for Large Language Models to Induce Programs over Low-resourced Knowledge Bases", + "abstract": "Program induction (PI) has become a promising paradigm for using knowledge\nbases (KBs) to help large language models (LLMs) answer complex\nknowledge-intensive questions. Nonetheless, PI typically relies on a large\nnumber of parallel question-program pairs to make the LLM aware of the schema\nof the given KB, and is thus challenging for many low-resourced KBs that lack\nannotated data. To this end, we propose KB-Plugin, a plug-and-play framework\nthat enables LLMs to induce programs over any low-resourced KB. Firstly,\nKB-Plugin adopts self-supervised learning to encode the detailed schema\ninformation of a given KB into a pluggable module, namely schema plugin.\nSecondly, KB-Plugin utilizes abundant annotated data from a rich-resourced KB\nto train another pluggable module, namely PI plugin, which can help the LLM\nextract question-relevant schema information from the schema plugin of any KB\nand utilize this information to induce programs over this KB. Experiments on\nfive heterogeneous KBQA datasets show that KB-Plugin achieves better or\ncomparable performance with 25$\\times$ smaller backbone LLM compared to SoTA PI\nmethods for low-resourced KBs, and even approaches the performance of\nsupervised methods. Our code and data are available at\nhttps://github.com/THU-KEG/KB-Plugin.", + "authors": "Jiajie Zhang, Shulin Cao, Linmei Hu, Ling Feng, Lei Hou, Juanzi Li", + "published": "2024-02-02", + "updated": "2024-02-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "main_content": "Introduction Recently, the usage of knowledge bases (KBs) as external resources to assist large language models (LLMs) (Brown et al., 2020; Zhao et al., 2023) in answering complex knowledge-intensive questions has gained increasing study (Pan et al., 2023; Li et al., 2023b; Jiang et al., 2023). Among various methods, program induction (PI) has emerged as a promising paradigm due to its good interpretability * Corresponding author. LLM \ud835\udc5a!\" Q: Semaphore railway line is on the rail network named what? Program: Find (Semaphore railway line) Relate(part of network) FilterConcept(rail network) Answer: TransAdelaide \ud835\udc5a#$ % Q: Who is taller, LeBron James Jr. or his father? Q: Citation count of Yuta Saito at Cornell University Program: Find(Cornell University) ReverseRelate(organization) Find(Yuta Saito) And() Relate(citation count) Answer: 464 Program: Find (LeBron James Jr.) Find (LeBron James Jr.) Relate (father) Or() Argmax(height) Answer: LeBron James Schema Plugin Program Induction Plugin \ud835\udc5a#$ & \ud835\udc5a#$ ' \ud835\udc5a#$ ( \ud835\udc5a!\" Figure 1: Illustration of KB-Plugin. By simply plugging the schema plugin of a KB and the PI plugin, the LLM is injected with the schema information of this KB and the ability to induce programs over it. 
and capacity to support complex reasoning operations (Cao et al., 2022a; Gu et al., 2023; Li et al., 2023b). Given a KB, PI methods employ LLMs to convert a question into a multi-step program (e.g., KoPL (Cao et al., 2022a) and S-expression (Su et al., 2016)), whose execution against the KB produces the answer. Despite strong capacity, most PI methods rely on individual training for each KB using a large number of manually annotated questionprogram pairs (Xie et al., 2022; Li et al., 2023b; Luo et al., 2023). As for many low-resourced KBs that lack program annotations, how to enable LLMs to utilize their knowledge via PI remains a challenging problem. Recent studies (Cao et al., 2022b; Li et al., 2023a) have indicated that the mapping from questions to program sketches (i.e., composed functions without arguments, such as Find\u2192Relate\u2192 FilterConcept) primarily correlates with language compositional structures and is thus transferable across KBs. Hence the main challenge for PI over low-resourced KBs is to determine the argument for each function (Gu and Su, 2022), which requires LLMs to link natural language in a question 1 arXiv:2402.01619v1 [cs.CL] 2 Feb 2024 \fto corresponding schema items (i.e., pre-defined relations and concepts) in the KB (e.g., in Fig 1, the relation \u201cpart of network\u201d and the concept \u201crail network\u201d are arguments of function Relate and FilterConcept, respectively), so it is important to provide LLMs adequate information of each schema item. A straightforward approach is to directly feed all the schema information to the LLM via a prompt. However, the broad schema of KBs and limited context windows of LLMs make this infeasible (Li et al., 2023a). Regarding the above challenges, we are inspired by recent studies that claim that the parameters of LLMs can encode task-specific knowledge (Saxena et al., 2022; Moiseev et al., 2022; Wang et al., 2022). Our basic idea is to encode schema information of a KB into the parameters of a pluggable module (e.g., LoRA (Hu et al., 2022)), namely schema plugin, and use another pluggable module, namely PI plugin, to help the LLM capture question-relevant schema information from the schema plugin and utilize this information to induce programs. As illustrated in Fig. 1, by simply plugging the schema plugin of a KB and the PI plugin, the LLM is injected with the schema information of this KB and the ability to induce programs over it. We name this framework KBPlugin. To implement KB-Plugin, there remain two key problems: (1) By what task can sufficient information about each schema item in a KB be encoded into its schema plugin? (2) Without annotated data from the low-resource KBs, how can the PI plugin learn to extract and utilize questionrelevant schema information from their schema plugins to induce programs over these KBs? To address the above problems, we propose a novel plugin learning and transfer framework. First, inspired by prior studies (Bordes et al., 2013; Lin et al., 2015) which show that schema items in a KB can be well represented by fact triples involving them, we propose to learn schema plugins via a self-supervised triple completion task. Specifically, given a KB, we plug a schema plugin into the LLM and tune the plugin to enable the LLM to complete relevant triples for each schema item in the KB. In this way, the detailed schema information can be encoded into this schema plugin. As for PI plugin learning, inspired by Cao et al. 
(2022b), we utilize abundant program annotations from a rich-resourced KB. Specifically, we use this KB to generate multiple KBs with different schemas via alias replacement and train a schema plugin for each of them. Given a training question, we plug these schema plugins alone with the PI plugin into the LLM in turn and train the PI plugin to make the LLM generate the correct program whose arguments conform to the currently plugged schema plugin. In this way, the PI plugin is forced to learn the skills of extracting and utilizing question-relevant schema information from the plugged schema plugin for PI over the corresponding KB. Besides, since the PI plugin is trained to be compatible with different schema plugins, it can be directly transferred to other low-resourced KBs and generalize well with their schema plugins, even if most schema items in these KBs are unseen during its training. In experiments, we take Wikidata-based KQA Pro as the rich-resourced KB to train the PI plugin and evaluate our framework on three Freebasebased datasets (WebQSP, GraphQ, and GrailQA) and two domain-specific datasets (MetaQA for movie domain and SoAyBench for academic domain). The results show that KB-Plugin achieves better or comparable performance with 25\u00d7 smaller backbone LLM compared to SoTA PI methods for low-resource KBs. On GraphQ, GrailQA, and MetaQA, KB-Plugin even surpasses the performance of several supervised methods. Our contributions include: (1) proposing KBPlugin, a novel plug-and-play framework that enables LLMs to induce programs over any lowresourced KB; (2) empirical validation of the efficacy of KB-Plugin through comprehensive experiments on five heterogeneous KBQA datasets. 2 Related Work Low-resourced Program Induction. Recently, there have emerged three types of PI methods for low-resourced KBs that lack program annotations, but each of them has limitations: (1) Few-shot program generation methods (Gu et al., 2023; Li et al., 2023a) utilize in-context learning ability of LLMs to induce programs with a handful of demonstrations. However, they can only determine function arguments based on the schema item names due to limited context windows, so they face challenges in distinguishing similar schema items. They also suffer from long inference time due to excessive LLM calls or executing a vast number of potential programs; (2) Few-shot data generation methods (Li et al., 2023c) also employ in-context learning with LLMs to convert automatically sampled 2 \fprograms into questions, and train a smaller PI model using the generated question-program pairs. Nonetheless, the generated questions may not align with programs and often lack diversity due to the limited number of program templates; (3) Similar to us, program transfer methods (Cao et al., 2022b) also leverage program annotations from a rich-resourced KB to aid PI for low-resourced KBs. However, they mainly focus on program sketch transfer and perform poorly without fine-tuning using annotated question-answer pairs from lowresourced KBs to adapt to their schemas. While KB-plugin obviates the reliance on any annotated data from low-resourced KBs, thereby enabling LLMs to easily utilize their knowledge. Plug-and-Play Modules for LLMs. In recent years, various parameter-efficient modules have been proposed to adapt LLMs to different downstream tasks (Lester et al., 2021; Hu et al., 2022; Li and Liang, 2021; Pfeiffer et al., 2021) . 
These modules show plug-and-play characteristics and can inject task-specific knowledge and skills into LLMs (Xiao et al., 2023; Zhang et al., 2023). Some researchers also found that pluggable modules for similar tasks encode knowledge and skills into the parametric space in similar ways (Qin et al., 2021; Su et al., 2022), providing basic rationality for the transferability of our PI plugin. 3 Problem Formulation In this section, we first provide some necessary definitions and then formulate our task. Knowledge Base. A knowledge base (KB) can be formalized as KB = {C, E, R, T }, where C, E, R and T represent the sets of concepts, entities, relations and fact triples, respectively. Specifically, R = {re, rc} \u222aRl, where re is \u201cinstance of\u201d, rc is \u201csubclass of\u201d, and Rl is the set of other general relations. Correspondingly, T can be divided into there disjoint subsets: (1) \u201cinstance of\u201d triples Te = {(e, re, c)|e \u2208E, c \u2208C}; (2) \u201csubclass of\u201d triples Tc = {(ci, rc, cj)|ci, cj \u2208C}; (3) relational triples Tl = {(ei, r, ej)|ei, ej \u2208E, r \u2208Rl}. Elements in C and R are also called the schema items of KB. Program Induction. Given a KB KB and a natural language question x = w1, w2, \u00b7 \u00b7 \u00b7 , w|x| \u000b , program induction (PI) aims to convert x into a program y, which would return the correct answer when executed against KB. Formally, y is composed of functions that take a specific type of arguments, and can be serialized as y = f1(arg1), \u00b7 \u00b7 \u00b7 , ft(argt), \u00b7 \u00b7 \u00b7 , f|y|(arg|y|) \u000b , ft \u2208 F, argt \u2208E \u222aC \u222aR \u222a{\u2205}. Here, F is a set of pre-defined functions that cover basic reasoning operations on KBs. In this work, we use KoPL (Cao et al., 2022a) as our programming language. Task Formulation. Suppose we have access to (1) source KB KBS and source domain data DS = {(xS i , yS i )}nS i=1, which are question-program pairs for KBS; (2) target KB KBT , which is lowresourced and has no annotated data. The goal is to learn a PI model MT PI that can translate a question xT for KBT into program yT , whose execution on KBT produces the correct answer. 4 Methodology As mentioned in the introduction, to enable a LLM M to induce programs over low-resourced KBT , KB-Plugin learns two types of pluggable modules for M: (1) KB-specific schema plugin msc, which stores information of schema items of a given KB within its parameters; (2) KB-transferable PI plugin mPI, which encodes the skill of inducing programs over any KB by extracting and utilizing question-relevant schema information from the schema plugin of this KB. It is trained with KBS and DS but can be directly transferred to KBT . The final PI model for KBT can be formulated as MT PI = plug(M, {mT sc, mPI}), (1) where mT sc is the schema plugin of KBT and plug(M, {\u00b7}) means plugging the plugins in {\u00b7} into M. In the following, we will first introduce the architecture of two types of plugins, then present our plugin learning and transfer framework. 4.1 Plugin Architecture A host of studies have demonstrated that knowledge and skills can be encapsulated within the parameters of LLMs (Saxena et al., 2022; Moiseev et al., 2022; Wang et al., 2022). Inspired by this, we implement both schema plugin and PI plugin with LoRA (Hu et al., 2022), a popular type of pluggable module for LLMs with a few trainable parameters. 
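As a concrete illustration, the following is a minimal sketch of a LoRA-augmented linear layer of the kind such plugins are built from, assuming PyTorch; the initialization scale and the default rank are illustrative assumptions, and the paper's formal definition follows below.

```python
# Sketch of a LoRA-style pluggable low-rank update on top of a frozen linear layer.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 16):
        super().__init__()
        self.base = base                      # frozen pretrained weight (the backbone LLM's W_i)
        for p in self.base.parameters():
            p.requires_grad = False
        in_f, out_f = base.in_features, base.out_features
        self.A = nn.Parameter(torch.randn(in_f, r) * 0.01)  # trainable low-rank factor
        self.B = nn.Parameter(torch.zeros(r, out_f))        # trainable low-rank factor

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Output = frozen projection plus the low-rank plugin update (h = (W + AB)x up to
        # transposition conventions); only A and B, i.e., the plugin, receive gradients.
        return self.base(x) + (x @ self.A) @ self.B
```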
Specifically, let LM be the set of weight matrices in the self-attention modules and MLP modules of a LLM M. For each Wi \u2208Rd\u00d7k in LM, LoRA modifies its forward pass from h = Wix to h = (Wi + AiBi)x, where Ai \u2208Rd\u00d7r and Bi \u2208Rr\u00d7k 3 \fLLM \ud835\udc5a!\" L.A. Lakers || instance of basketball team || contains instance basketball team || subclass of sports team || contains subclass Lebron James | human || member of sports team | forward L.A. Lakers | basketball team || member of sports team | backward Lebron James | human || what relation || basketball team | L.A. Lakers basketball team L.A. Lakers sports team basketball team basketball team | L.A. Lakers human | Lebron James member of sports team L.A. Lakers sports team Lebron James human member of sports team instance of instance of subclass of basketball team Program: Find(Lebron James) Relate(member of sports team) FilterConcept(basketball team) \ud835\udc3e\ud835\udc35# LLM \ud835\udc5a$% \ud835\udc5a!\" #! \ud835\udc3e\ud835\udc35# \ud835\udc3e\ud835\udc35#\" \ud835\udc3e\ud835\udc35## \ud835\udc3e\ud835\udc35#! \u2026 \ud835\udc5a!\" ## Q: Which basketball team does Lebron James play for? Find(Lebron James) Relate(member of sports team) FilterConcept(basketball team) Find(Lebron James) Relate(plays for) FilterConcept(basket club) Find(Lebron James) Relate(player of) FilterConcept(basketball team) \ud835\udc5a!\" #\" \u2026 \u2026 \ud835\udc3e\ud835\udc35& LLM \ud835\udc5a$% \ud835\udc5a!\" & Q: Semaphore railway line is on the rail network named what? Find(Semaphore railway line) Relate(part of network) FilterConcept(rail network) Constrained Decoding L.A. Lakers athletic team Lebron James person plays for instance of instance of subclass of basket club Program: Find(Lebron James) Relate(plays for) FilterConcept(basket club) \ud835\udc3e\ud835\udc35## (a) KB Generation and Data Augmentation (b) Learning of Schema Plugin via Schema-relevant Triple Completion (c) Learning of PI Plugin (d) Plugin Transfer Alias Replacement Figure 2: Overview of our plugin learning and transfer framework: (a) Generate multiple source KBs with different schemas and augmented source domain data via alias replacement; (b) Learn an individual schema plugin for each source KB and the target KB via self-supervised schema-relevant triple completion task; (c) Train the PI plugin by inducing program for each source KB when plugging it into the LLM along with the corresponding schema plugin. (d) Transfer the PI plugin by plugging it into the LLM with the schema plugin of the target KB and inducing programs over the target KB with constrained decoding. are two matrices with rank r \u226amin(d, k). A LoRA plugin mj is thus defined as mj = {(Amj i , Bmj i )|Wi \u2208LM}, (2) and plug(M, {m1, . . . , mN}) means replacing all Wi \u2208 LM with Wi + PN j=1 Amj i Bmj i . If we train M\u2032 = plug(fz(M), {fz(m1), . . . , fz(mN\u22121), mN}) on a certain task, where fz(\u00b7) represents parameter freezing, knowledge and skills related to this task will be encoded within mN. Although other parameter-efficient pluggable modules such as prefix-tuning (Li and Liang, 2021) can also serve as our plugin modules, the advantages of LoRA are that it does not increase input length or inference latency. 4.2 Plugin Learning and Transfer Framework There are two primary challenges for learning schema plugins and the PI plugin: (1) How to encode sufficient information about each schema item of a KB into a schema plugin? 
(2) How to ensure that the PI plugin can extract and utilize useful schema information for program induction from schema plugins of different KBs, instead of ignoring the schema plugin entirely, directly learning to induce program over source KB during training, and consequently losing transferability? To handle these challenges, we propose a novel plugin learning and transfer framework, which is illustrated in Fig. 2 and contains four steps: (1) Generate multiple source KBs KBS1, . . . , KBSN with different schemas and augmented data DS a = {(xS j , yS1 j , . . . , ySN j )}nS j=1 based on KBS and DS via alias replacement, where ySi j is the golden program for question xS j on KBSi; (2) Learn individual schema plugin mSi sc for each KBSi via self-supervised schemarelevant triple-completion task; (3) Train PI plugin mPI by requiring MS1 PI, . . . , MSN PI to generate yS1 j , . . . , ySN j given xS j , respectively, where MSi PI = Plug(fz(M), {fz(mSi sc), mPI}), so that mPI is forced to extract and utilize schema information from each mSi sc; (4) Learn schema plugin mT sc for KBT using the same method in (2) and take MT PI = plug(M, {mT sc, mPI}) as the final PI model for KBT . We will introduce each step in detail in the following. 4.2.1 KB Generation and Data Augmentation We utilize the aliases of each schema item to generate multiple KBs with different schemas based on KBS = {CS, ES, RS, T S}. As shown in Fig. 2(a), for each schema item v \u2208CS \u222aRS, we replace v with vi, a randomly chosen alias of v, and record ai(v) = vi. For example, the concept \u201cbasketball team\u201d can be replaced with \u201cbasket club\u201d and the relation \u201cmember of sports team\u201d can be replaced with \u201cplays for\u201d. Relevant triples in T S are also 4 \fmodified with the same alias. In this way, KBSi that has a different schema than KBS is created. In practice, we let KBS1 = KBS and repeat above process N \u22121 times to generate KBS2, . . . , KBSN . Similarly, for each question-program pair (xS j , yS j ) \u2208 DS, suppose yS j = D f1(arg1), \u00b7 \u00b7 \u00b7 , ft(argt), \u00b7 \u00b7 \u00b7 , f|yS j |(arg|yS j |) E , we replace every argt \u2208CS \u222aRS with ai(argt) to obtain ySi j , which is the correct program for xS j executable on KBSi. We repeat the process for KBS1, . . . , KBSN to obtain augmented data DS a = {(xS j , yS1 j , . . . , ySN j )}nS j=1. 4.2.2 Learning of Schema Plugin Many studies about knowledge graph embedding show that the information of schema items in a KB can be represented by not only their names but also triples containing them (Bordes et al., 2013; Lv et al., 2018). Inspired by this, we propose to encode schema information into schema plugins via a self-supervised triple completion task. As illustrated in Fig. 2(b), to learn the schema plugin msc for a given KB KB = {C, E, R, T }, where T = Te \u222aTc \u222aTl, we train Msc = Plug(fz(M), msc) to complete relevant triples for each concept and relation in KB in sequence-to-sequence form as follows. First, for each concept c \u2208C, we require Msc to complete relevant \u201cinstance of\u201d triples to aggregate the semantic features of entities belonging to c. 
Specifically, we sample K triples (ek, instance of, c) from Te (see Appendix B for detailed sampling strategy), and use each sampled triple to construct two pairs of verbalized queries and answer as the inputs and expected outputs for Msc: \u2022 \u201c\u27e8ek\u27e9|| instance of\u201d \u2192\u201c\u27e8c\u27e9\u201d; \u2022 \u201c\u27e8c\u27e9|| contains instance\u201d \u2192\u201c\u27e8ek\u27e9\u201d. Here, \u27e8ek\u27e9and \u27e8c\u27e9means filling in the names of ek and c, respectively. Besides, the information of a concept is also related to its suband super-concepts. Therefore, for each triple (ci, subclass of, cj) \u2208Tc, we also construct two queries with answers for Msc: \u2022 \u201c\u27e8ci\u27e9|| subclass of\u201d \u2192\u201c\u27e8cj\u27e9\u201d; \u2022 \u201c\u27e8cj\u27e9|| contains subclass\u201d \u2192\u201c\u27e8ci\u27e9\u201d. Finally, the information of a relation can be learned from its name and the elements connected by it. Therefore, for each r \u2208Rl, we sample K triples (ei, r, ej) from Tl, choose ci, cj such that (ei, instance_of, ci), (ej, instance_of, cj) \u2208Te, and use each (ei, ci, r, ej, cj) to construct three queries with answers: \u2022 \u201c\u27e8ei\u27e9| \u27e8ci\u27e9|| \u27e8r\u27e9| forward\u201d \u2192\u201c\u27e8cj\u27e9| \u27e8ej\u27e9\u201d; \u2022 \u201c\u27e8ej\u27e9| \u27e8cj\u27e9|| \u27e8r\u27e9| backward\u201d \u2192\u201c\u27e8ci\u27e9| \u27e8ei\u27e9\u201d; \u2022 \u201c\u27e8ei\u27e9| \u27e8ci\u27e9|| what relation || \u27e8cj\u27e9| \u27e8ej\u27e9\u201d \u2192 \u201c\u27e8r\u27e9\u201d. We empirically find that including ci, cj benefits the information encoding for both concepts and relations. Let the set of all generated queries and answers be Dsc = {(qi, ai)}l i=1, then msc is trained to minimize Lsc = \u2212 X (qi,ai)\u2208Dsc log P(ai|qi), (3) where P(ai|qi) is the likelihood of Msc generating ai given qi, defined by token-level cross entropy. Note that the learning of msc does not rely on any additional data except the KB itself, so we can train a schema plugin for any KB. 4.2.3 Learning of PI Plugin As illustrated in Fig. 2(c), to learn the PI plugin mPI, we first train individual schema plugin mSi sc for each KBSi. After that, given (xS j , yS1 j , . . . , ySN j ) \u2208DS a , where xS i is a question and ySi j is the golden program for xS j on KBSi, we train mPI by feeding xS i to MS1 PI, . . . , MSN PI and requiring them to generate yS1 j , . . . , ySN j , respectively. Here, MSi PI = Plug(fz(M), {fz(mSi sc), mPI}). The overall objective can be formulated as: LPI = \u2212 X (xS j ,yS1 j ,...,ySN j )\u2208DS a N X i=1 log Pi(ySi j |xS j ), (4) where Pi(ySi j |xS j ) is the likelihood of MSi PI generating ySi j given xS j , defined by token-level cross entropy. To generate programs conforming to different schemas given the same question, mPI must learn to (1) choose correct functions according to the compositional structure of the question; (2) extract and utilize question-relevant schema information for argument determination from the corresponding schema plugin, because it is the only difference among MS1 PI, . . . , MSN PI . 4.2.4 Plugin Transfer Once the PI plugin mPI is trained, we directly transfer it to KBT as in Fig 2 (d), and let MT PI = 5 \fplug(M, {mT sc, mPI}) be the PI model for KBT . Here, mT sc is the trained schema plugin for KBT using the method in Sec. 4.2.2. 
Since mT sc and mSi sc are trained with the same tasks, we expect that they encode schema information into their parameters in similar ways (Qin et al., 2021; Su et al., 2022), so mPI can also extract schema information from mT sc to help PI over KBT . Besides, to guarantee MT PI generating valid programs which do not cause execution error or return an empty answer, we adopt constrained decoding, i.e., after MT PI generates f1(arg1), . . . , ft(argt), we enumerate all the valid ft+1(argt+1) following the method of Gu et al. (2023) and restrict MT PI to only generate one of them. More details are in Appendix C. We also use beam search to retain top-k programs during decoding to provide MT PI with a more global view. 5 Experiments 5.1 Datasets Source Domain. We use KQA Pro (Cao et al., 2022a) as the source domain datasets. It provides 117,970 questions with diverse compositional structures and corresponding programs based on a subset of Wikidata (Vrandecic and Kr\u00f6tzsch, 2014). Target Domain. We use WebQSP (Yih et al., 2016), GraphQ (Su et al., 2016), GrailQA (Gu et al., 2021), MetaQA (Zhang et al., 2018) and SoAyBench (Anonymous, 2024) as the target domain datasets. Among them, WebQSP, GraphQ, and GrailQA are based on Freebase (Bollacker et al., 2008). Their KBs contain a large number of schema items and can evaluate the effectiveness of KB-Plugin for large-scale KBs. MetaQA and SoAyBench are two datasets in movie and academic domains, respectively, and can evaluate the effectiveness for specific domains. For MetaQA, since most of the relations in its KB have been covered by KQA Pro, we remove these relations and relevant question-program pairs from KQA Pro to avoid data leakage. For SoAyBench which is originally a tool-using dataset based on Aminer (Tang et al., 2008) APIs, we construct its KB by collecting relevant data from these APIs. Table 1 shows the statistics of these datasets and their overlap with source KBs generated from KQA Pro. Most schema items in the target KBs are unseen in source KBs and most test cases also involve unseen schema items. Dataset |R| |Ru| |C| |Cu| |Dtest| |Dtest u | KQA Pro 1209 794 WebQSP 412 296 446 363 1639 1083 GraphQ 9569 8931 7298 7004 2395 2340 GrailQA(dev) 3938 3524 2018 1868 6763 6578 GrailQA(test) 3938 3524 2018 1868 13231 MetaQA 9 9 9 3 39093 39093 SoAyBench 17 11 5 3 792 756 Table 1: Statistics for source and target domain datasets and their overlaps with 16 source KBs generated from KQA Pro. |R| / |C| denotes the number of relations / concepts in their KBs. |Ru| / |Cu| denotes the number of relations / concepts unseen in the source KBs. |Dtest| and |Dtest u | denotes the numbers of test cases and test cases that involve unseen schema items, respectively. 5.2 Baselines For WebQSP, GraphQ, GrailQA, and MetaQA, we mainly compare KB-Plugin with low-resourced PI methods including (1) few-shot program generation methods Pangu (Gu et al., 2023) and KBBINDER (Li et al., 2023a); (2) few-shot data generation method APS (Li et al., 2023c); (3) program transfer method ProgramTrans (Cao et al., 2022b), where we adopt its results without fine-tuning on target KBs for fair comparison. In addition, we also provide the results of several representative supervised models for comparison. For SoAyBench, we choose tool-using methods that were evaluated on it as baselines, including DFSDT (Qin et al., 2023) and SoAy (Anonymous, 2024). These methods solve questions by prompting LLMs to call Aminer APIs in specific orders via in-context learning. 
Their processes of determining the composition of APIs and filling in arguments for each API can also be viewed as program induction. We provide detailed descriptions of all the baselines and our evaluation metrics in Appendix D.1. 5.3 Implementation Details In experiments, we use Llama2-7B (Touvron et al., 2023) as the backbone LLM of KB-Plugin and set the rank r of LoRA to 16. The number of parameters of each plugin is consequently 40M, which is extremely lightweight. The number of generated source KBs is set to 16 to balance performance and training efficiency. The sampling number K in schema plugin learning is set to be 500, 500, 50, 100, 3000, and 1000 for KQA Pro, WebQSP, GraphQ, GrailQA, MetaQA, and SoAyBench, respectively, to limit the size of the constructed data 6 \fMethod WebQSP GraphQ GrailQA Test Dev Supervised QGG 74.0 36.7 BERT+Ranking 25.0 58.0 ArcaneQA 75.6 31.8 73.7 76.8 RnG-KBQA 75.6 74.4 76.9 Low-resourced ProgramTrans 53.8\u2217 APS 51.1 57.7 62.1 KB-BINDER 53.2 39.5 56.0 Pangu 54.5 43.3 62.7 KB-Plugin 57.2 / 61.1\u2217 49.5 62.7 65.0 w/o schema plugin 41.0 42.8 57.5 w/ mS0 sc 48.0 37.9 51.0 Table 2: F1 results on WebQSP, GraphQ, and GrailQA. \u2217means using oracle topic entities. Method 1-hop 2-hop 3-hop Supervised KV-Mem 96.2 82.7 48.9 PullNet 97.0 99.9 91.4 EmbedKGQA 97.5 98.8 94.8 TransferNet 97.5 100.0 100.0 Low-resourced KB-BINDER 93.5 99.6 96.4 KB-Plugin 97.1 100.0 99.3 w/o schema plugin 92.6 99.0 98.9 w/ mS0 sc 90.4 93.6 88.6 Table 3: Hit@1 results on MetaQA. for schema plugin learning. We use beam size 5 for all experiments. More details can be found in Appendix D.2. 5.4 Main Results The results are presented in Table 2, 3 and 4. Compared with Pangu, the SoTA PI method for lowresourced KBs, KB-Plugin improves the F1 score by 2.7% and 6.2% on WebQSP and GraphQ, respectively, and achieves comparable performance on GrailQA, despite Pangu using 25\u00d7 larger model (175B Codex) and 100 annotated examples from each dataset. Moreover, Pangu needs to call Codex hundreds of times for a question to score each candidate program, while our model selects the optimal program via beam search, which is significantly faster and less costly. Besides, since ProgramTrans, KB-BINDER, and Pangu all link questions to schema items according to their names only, the superiority of KB-Plugin also demonstrates the benefits of aggregating additional schema information from relevant triples via schema plugin learnMethod Acc DFSDT (gpt-3.5-turbo) 45.7 DFSDT (gpt-4) 59.7 SoAy (gpt-3.5-turbo) 67.7 SoAy (gpt-4) 88.7 KB-Plugin 90.8 w/o schema plugin 70.8 w/ mS0 sc 64.0 Table 4: Accuracy results on SoAyBench. Dataset Method Dtest seen Dtest unseen WebQSP KB-Plugin 64.9 53.3 w/o schema plugin 47.6 37.6 Gain +17.4 +15.7 GraphQ KB-Plugin 40.0\u2217 49.7 w/o schema plugin 70.9\u2217 42.2 Gain -30.9\u2217 +7.5 GrailQA-dev KB-Plugin 69.0 64.8 w/o schema plugin 64.9 57.3 Gain +4.1 +7.5 Table 5: F1 Results of KB-Plugin with and without schema plugin. Dtest unseen and Dtest seen denote the sets of test cases that involve and do not involve schema items unseen in the source KBs, respectively. \u2217means the results may not be indicative since there are only 55 cases in Dtest seen of GraphQ. ing. KB-Plugin even surpasses several supervised models on GraphQ and GrailQA, which demand training using thousands of annotated samples from target KBs, showing the effectiveness of transferring prior knowledge from rich-resourced KBs. 
On MetaQA and SoAyBench, KB-Plugin outperforms all the low-resourced baselines even though they use more powerful LLMs (i.e., Codex, gpt-3.5turbo, and gpt-4), indicating that our framework also performs well for domain-specific KBs. In particular, KB-Plugin achieves strong performance on par with supervised SoTAs on MetaQA even if it does not see any target relations from the source domain. 5.5 Ablation Study To demonstrate the effect of schema plugins, we remove them from our framework, i.e., we directly train a PI plugin using the source domain data and transfer it to the target KBs without training any schema plugins. According to Table 2, 3, 4, and 5, the performance of KB-Plugin without schema plugins is severely degraded, especially on the test cases that involve schema items unseen in the source KBs. The experimental results illustrate that (1) direct PI transfer is difficult due to the sub7 \fQuestion I Which airport to fly into Rome? Pangu Find(Rome) Relate(tourist attractions) (%) KB-Plugin w/o schema plugin Find(Rome) Relate(country) FilterConcept(sovereign state) (%) KB-Plugin Find(Rome) Relate(transport terminus) FilterConcept(airport) (!) Relevant Triples (London, transport terminus, Luton airport), (London, instance of, citytown), (Luton airport, instance of, airport) Question II What role did Paul Mccartney play in the Beatles? Pangu Find(Paul Mccartney) Relate(instruments played) (%) KB-Plugin Find(Beatles) Relate(member) Find(Paul Mccartney) ReverseRelate(member) And() Relate(role) (!) Source Domain Data Pair What is Jane Lynch\u2019s role in Glee? Find(Glee) Relate(starring) Find(Jane Lynch) ReverseRelate(starring) And() Relate(character role) Table 6: Two typical questions from the test set of WebQSP that KB-Plugin succeeds while Pangu fails. The incorrect functions and arguments are marked as red, while the correct ones are marked as green. 40.6 47.2 48.2 51 57.2 36.7 45 46.8 46.4 49.5 54.9 60.7 63.7 64.2 65 30 35 40 45 50 55 60 65 70 0 2 4 6 8 10 12 14 16 F1 Number of Generated Source KBs WebQSP GraphQ GrailQA-dev Figure 3: KB-Plugin performance with different numbers of generated source KBs. stantial difference between the schemas of source and target KBs; (2) schema plugins of target KBs effectively encode adequate schema information via the triple completion task, and the PI plugin can extract and utilize question-relevant schema information from these schema plugins even though it is never trained with them. In addition, if we adopt the schema plugin of a source KB, e.g., mS0 sc , for the target KBs, the performance of KB-Plugin also drops heavily, showing the necessity of using matched schema plugin. To show the rationality of our PI plugin learning method, we evaluate the performance of PI plugins trained with different numbers of generated source KBs on WebQSP, GraphQ, and GrailQA, and present the results in Fig. 3. The PI plugin trained with only one source KB performs poorly, implying that it ignores the schema plugin entirely and directly learns PI over this source KB. Once there emerges a new source KB with a different schema, the performance of the trained PI plugin increases substantially, and there is an apparent trend that the performance will increase with more generated source KBs. These results prove that training the PI plugin over multiple source KBs succeeds in forcing the PI plugin to learn to extract and utilize schema information from different schema plugins, and the learned skill can be transferred to target KBs. 
5.6 Case Study To better showcase the advantages of KB-Plugin over in-context learning PI methods, we present a case comparison between KB-Plugin and Pangu in Table 6. Question I shows the effect of schema plugin learning and utilization. Both Pangu and KB-Plugin without schema plugin struggle to predict the correct relation \u201ctransport terminus\u201d because it is unseen in the demo examples or source KBs. The complete KB-Plugin, however, effectively encodes the information that \u201ctransport terminus\u201d is a possible relation between \u201ccitytown\u201d and \u201cairport\u201d into the schema plugin via completing relevant triples, and succeeds in predicting this relation by utilizing above information. Question II demonstrates the benefits of harnessing abundant program annotations from the source domain, where Pangu produces a program with incorrect function composition because none of its demo examples has a similar compositional structure, while KB-Plugin induces the correct program by utilizing prior knowledge learned from the source domain. Further analysis can be found in Appendix E and F. 6" + }, + { + "url": "http://arxiv.org/abs/2305.15056v1", + "title": "Reasoning over Hierarchical Question Decomposition Tree for Explainable Question Answering", + "abstract": "Explainable question answering (XQA) aims to answer a given question and\nprovide an explanation why the answer is selected. Existing XQA methods focus\non reasoning on a single knowledge source, e.g., structured knowledge bases,\nunstructured corpora, etc. However, integrating information from heterogeneous\nknowledge sources is essential to answer complex questions. In this paper, we\npropose to leverage question decomposing for heterogeneous knowledge\nintegration, by breaking down a complex question into simpler ones, and\nselecting the appropriate knowledge source for each sub-question. To facilitate\nreasoning, we propose a novel two-stage XQA framework, Reasoning over\nHierarchical Question Decomposition Tree (RoHT). First, we build the\nHierarchical Question Decomposition Tree (HQDT) to understand the semantics of\na complex question; then, we conduct probabilistic reasoning over HQDT from\nroot to leaves recursively, to aggregate heterogeneous knowledge at different\ntree levels and search for a best solution considering the decomposing and\nanswering probabilities. The experiments on complex QA datasets KQA Pro and\nMusique show that our framework outperforms SOTA methods significantly,\ndemonstrating the effectiveness of leveraging question decomposing for\nknowledge integration and our RoHT framework.", + "authors": "Jiajie Zhang, Shulin Cao, Tingjia Zhang, Xin Lv, Jiaxin Shi, Qi Tian, Juanzi Li, Lei Hou", + "published": "2023-05-24", + "updated": "2023-05-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "main_content": "Introduction Explainable question answering (XQA) is the task of (i) answering a question and (ii) providing an explanation that enables the user to understand why the answer is selected (Neches et al., 1985; Schuff et al., 2020). It provides a qualified way to test the reasoning ability and interpretability of intelligent systems, and plays an important role in artificial intelligence (Lu et al., 2022). Recent work in XQA can be grouped into two directions: 1) neuro-symbolic methods (Berant et al., \u2217Indicates equal contribution. \ud835\udc92\ud835\udfce: Which is higher, the highest mountain in North America or the highest mountain in Africa? 
\ud835\udc92\ud835\udfcf: How high is the highest mountain in North America? \ud835\udc92\ud835\udfd0: How high is the highest mountain in Africa? \ud835\udc92\ud835\udfd3: How high is #4? \ud835\udc92\ud835\udfd4: Which mountain is the highest in Africa? \ud835\udc92\ud835\udfd5: How high is #6? \ud835\udc92\ud835\udfd2: Which mountain is the highest in North America? \ud835\udc92\ud835\udfd1:[SelectBetween] [greater] #1 #2 Figure 1: An example of Hierarchical Question Decomposition Tree (HQDT). qi represents the index of node in its BFS ordering enumeration. 2013; Liang et al., 2017; Cao et al., 2022b) translate natural language questions into formal representations (e.g., SPARQL (Sun et al., 2020), KoPL (Cao et al., 2022a), lambda-DCS (Liang, 2013), etc.), whose execution on structured knowledge bases (KBs) gives the answer. Here, the formal representation acts as an explanation of the final answer. 2) Decompose-based models generate natural language intermediate steps that lead to the final answer (e.g., question decomposing which decomposes a complex question into sub-questions (Min et al., 2019; Perez et al., 2020; Deng et al., 2022), chain-of-thought prompting (Wei et al., 2022; Dua et al., 2022; Khot et al., 2022), etc.). Here, the intermediate steps shows the rationale of reasoning. Although achieving significant results, both directions have key limitations. For neuro-symbolic methods, the formal representation can only be executed on KBs. However, even the largest KBs are incomplete, thus limits the recall of model. For decompose-based methods, they employ free-text corpora as the knowledge source, and the diversity of natural language makes XQA difficult. In fact, integrating knowledge from heterogeneous sources is of great importance to QA (Wolfson et al., 2020), especially for answering complex questions. Several attempts have been made for knowledge integration (e.g., KBs, text corpora) (Sun et al., 2018, 2019; Shi et al., 2021). Although promising, these graph-based methods suffer from lacking explainarXiv:2305.15056v1 [cs.CL] 24 May 2023 \fability or are constrained to limited reasoning capability. Intuitively, leveraging question decomposing to integrate heterogeneous knowledge sources is a promising direction, since we can flexibly select the appropriate knowledge source for each subquestion. The challenges lie in: 1) How to determine the granularity of question decomposing, since certain complex questions can be directly answered with a knowledge source, and further decomposition increases the possibility of error. For example, in Figure 1, q1 can be answered with the Wikipedia corpus without further decomposition. 2) How to find the optimal solution among various possible ones, since question decomposing and answering are both uncertain. For example, q0 can also be decomposed as \u201cWhich mountains are in North America or Afirica\u201d, \u201cWhat\u2019s the height of #1\u201d, \u201c[SelectAmong] [largest] #2\u201d. To this end, we propose a novel two-stage XQA framework Reasoning over Hierarchical Question Decomspotion Tree, dubbed RoHT. First, we propose to understand the complex question by building its hierarchical question decomposition tree (HQDT). In this tree, the root node is the original complex question, and each non-root node is a subquestion of its parent. The leaf nodes are atomic questions that cannot be further decomposed. 
Compared with existing representations that directly decompose a question into the atomic ones, e.g., QDMR (Wolfson et al., 2020), our tree structure provides the flexibility to determine solving a question whether by directly answering or further decomposing. Second, we propose probabilistic reasoning over HQDT, to fuse the knowledge from KB and text at different levels of the tree, and take into consideration the probability score of both tree generation and answering. The reasoning process is recursive, from the root to leaves, and constitues three steps: 1) a scheduler determines the appropriate knowledge sources for a particular question (from KB, text, or solving its children sequentially); 2) the corresponding executors output the answers with probabilities; 3) an aggregator aggregates the candidate answers from all the knowledge sources and outputs the best ones. In evaluation, we instantiate our RoHT framework on two complex QA datasets: KQA Pro (Cao et al., 2022a), where we remove half of the triples in its KB and supplement it with Wikipedia corpus, and Musique (Trivedi et al., 2022), where we take Wikidata (Vrandecic and Kr\u00f6tzsch, 2014) as additional KB besides the given text paragraphs. Experimental results show that, RoHT improves the performance significantly under the KB+Text setting, by 29.7% and 45.8% EM score on KQA Pro and Musique compared with existing SOTA model. In addition, compared with the decompose-based methods, RoHT improves the SOTA by 11.3% F1 score on Musique. Our contributions include: 1) proposing to leverage question decomposing to integrate heterogeneous knowledge sources for the first time; 2) designing a novel two-stage XQA famework RoHT by first building HQDT and then reasoning over HQDT; 3) demonstrating the effectiveness of our RoHT framework through extensive experiments and careful ablation studies on two benchmark datasets. 2 Related Work 2.1 QA over Text and KB Over time, the QA task has evolved into two main streams: 1) QA over unstructured data (e.g., freetext corpora like Wikipedia); 2) QA over structured data (e.g., large structured KBs like DBpedia (Lehmann et al., 2015), Wikidata (Vrandecic and Kr\u00f6tzsch, 2014)). As structured and unstructured data are intuitively complementary information sources (Oguz et al., 2022), several attempts have been made to combines the best of both worlds. An early approach IBM Watson (Ferrucci, 2012) combines multiple expert systems and re-ranks them to produce the answer. (Xu et al., 2016) maps relational phrases to KB and text simultaneously, and use an integer linear program model to provide a globally optimal solution. Universal schema based method (Das et al., 2017) reasons over both KBs and text by aligning them in a common embedded space. GraftNet (Sun et al., 2018) and its successor PullNet (Sun et al., 2019) incorporate free text into graph nodes to make texts amenable to KBQA methods. TransferNet (Shi et al., 2021) proposes the relation graph to model the label-form relation from KBs and text-form relation from corpora uniformly. Although achieving promising results, these methods lack interpretability or are constrained to limited question type, i.e., TransferNet shows interpretability with transparent step transfering, however, it can only answer multi-hop questions, \fand cannot deal with questions that require attribute comparison or value verification. In contrast, our proposed framework shows great interpretability with HQDT and cover more question types. 
2.2 Question Decomposing For datasets, KQA Pro (Cao et al., 2022a) proposes to decompose a complex question into a multi-step program KoPL, which can be executed on KBs. BREAK (Wolfson et al., 2020) proposes to decompose questions into QDMR, which constitutes the ordered list of steps, expressed through natural language. Musique (Trivedi et al., 2022) is a QA dataset constructed by composing single-hop questions obtained from existing datasets, and thus naturally provides question decompositions. For models, several attempts have been made for learning to decompose with weak-supervision, such as span prediction based method (Min et al., 2019), unsupervised sequence transduction method ONUS (Perez et al., 2020), AMR-based method QDAMR (Deng et al., 2022). Another line of work is to employ large language models with in-context learning, such as Least-to-most Prompting (Zhou et al., 2022), decomposed prompting (Khot et al., 2022), successive prompting (Dua et al., 2022). Compared with existing works, we are the first to design a hierarchical question decomposition tree for integrating information from multiple knowledge sources. 3 Definition of HQDT Formally, given a complex question, its HQDT is a tree T. Each node qi \u2208T represents a question. For root node, it represents the given complex question, and for non-root nodes, it represents a sub-question of its parent node. The leaf nodes are simple (\"atomic\") questions that cannot be decomposed. Note that HQDT is a 3-ary ordered tree. As shown in Figure 1, we enumerate the nodes of T with BFS ordering, and q0 is the root question. A question qi = w1, \u00b7 \u00b7 \u00b7 , wj, \u00b7 \u00b7 \u00b7 , w|qi| \u000b can be categorized into one of the three types according to the token vocabulary: 1) natural language question (e.g., q4: \u201cWhich mountain is the highest in North America?\u201d), here, wj \u2208V, and V is the word vocabulary; 2) bridge question (e.g., q5: \u201cHow high is #4?\u201d), here, wj \u2208V \u222aR, and R is the reference token vocabulary. In this question, \u201c#4\u201d refers to the answer of q4, which is the sibling question of q5; 3) symbolic operation question (e.g., q3: \u201c[SelectBetween][greater] #1 #2\u201d), here, wj \u2208V \u222aR \u222aO, and O is the vocabulary of pre-defined symbolic operations, which are designed for supporting various reasoning capacity (e.g., attribute comparison and set operation) and are shown in appendix A in details. Note that all the bridge questions and symbolic operation questions are atomic questions and can only appear in leaf nodes. For every non-leaf question qi, we define two ordered lists: \u2022 qi.children = qsti, \u00b7 \u00b7 \u00b7 , qedi\u000b , which are children of qi, successively indexed from sti to edi. For example, for question q1 in Figure 1, q1.children is q4, q5\u000b . \u2022 qi.atoms = ai 1, \u00b7 \u00b7 \u00b7 , ai ni \u000b , which is a list of atomic questions deduced from the ni leaf nodes of the sub-tree rooted by qi, by rearranging the reference tokens. For example, for q0 in Figure 1, its leaf nodes is q4, q5, q6, q7, q3\u000b , and correspondingly, q0.atoms is q4, \u02dc q5, q6, \u02dc q7, \u02dc q3\u000b , with \u02dc q5 as \u201cHow high is #1?\u201d, \u02dc q7 as \u201cHow high is #3\u201d, and \u02dc q3 as \u201c[SelectBetween][greater] #2 #4\u201d. The detailed deduction algorithm is in appendix B due to space limit. We also call qi.atoms the atomic representation of qi. Specially, among qi.children, qsti, . . . 
, qedi\u22121 are all natural language questions, and qedi is either a bridge question or a symbolic operation question. Answering qi is semantically equivalent to answering sub-questions in qi.children or in qi.atoms sequentially. The last question in qi.children or qi.atoms returns the answer of qi. 4 Methodology Our framework RoHT is composed of two stages: 1) Building HQDT. We understand the hierarchical compositional structure of a complex question q0 by generating its HQDT T with probability, where each question qi \u2208T has a score pi g that represents the certainty of its generation. 2) Probabilistic Reasoning over HQDT. We conduct recursive probabilistic reasoning over the HQDT from root to leaves to solve q0. For each question qi, we will utilize KBs, text and its child questions together to get a list Ri, which contains answers of qi with probabilistic scores. Finally the answer with the highest score in R0 will be picked out as the final answer of q0. \fThe details are introduced as follows. 4.1 Building HQDT To build the HQDT for a complex question, we first generate its atomic representation, which corresponds the leaf nodes of HQDT, then generate every non-leaf nodes based on this atomic representation. We compute certainty score of each node based on the likelihood of each step of generation. Building Leaf Nodes Given a complex question q0, we first use a BART (Lewis et al., 2020)-based question decomposer M\u03b8 to generate its atomic representation and output the likelihood of generation: L0, ld = M\u03b8(q0). (1) Here, L0 = a0 1 \u27e8sep\u27e9a0 2 \u27e8sep\u27e9. . . \u27e8sep\u27e9a0 n0 is the serialization of q0.atoms, where \u27e8sep\u27e9is a separating token. ld = Pr(L0|q0; \u03b8) is the likelihood of generation. Since q0 is the root of T, each atomic question in q0.atoms corresponds to a leaf node in T (with the deterministic algorithm in Appendix C), and the certainty score of each leaf node in T is ld. Building Non-leaf Nodes Based on q0.atoms, we can generate all the non-leaf questions in HQDT. The root question is just q0 and thus has certainty score p0 g = 1. For every other non-leaf question qi, its atomic representation qi.atoms = \u27e8ai 1, . . . , ai ni\u27e9can be translated from a specific subset of q0.atoms by rearranging the reference tokens. The subset can be determined by considering the reference relations of a bridge or symbolic operation question a0 j \u2208q0.atoms, which corresponds to the leaf node qedi, with other questions in q0.atoms. We show the details in Appendix C. For example, q2.atoms in Figure 1 is (\u201cWhich mountain is the highest in Africa?\u201d, \u201cHow high is #1?\u201d), and it can be obtained from (a0 3, a0 4) in q0.atoms. Then we can use a BART-based question generator M\u03d5 to generate qi from qi.atoms: qi, li g = M\u03d5(Li), (2) where Li = ai 1 \u27e8sep\u27e9ai 2 \u27e8sep\u27e9. . . \u27e8sep\u27e9ai ni is the serialized qi.atoms, and li g = Pr(qi|Li; \u03d5) is the likelihood of qi given Li. The certainty score of qi is computed as: pi g = ld \u00b7 li g. (3) Learning of Question Decomposer and Generator The question decomposer M\u03b8 can be trained with paired (q0, q0.atoms) data, where the atomic representation can be from either given annotation or unsupervised construction. The question generator M\u03d5 can also be trained with the same data by exchanging the input and output. The details are shown in Section 5.2. 
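A hedged sketch of the certainty-score bookkeeping described in this subsection, with the BART-based decomposer and generator treated as black-box callables. All names are illustrative, and deriving each qi.atoms subset (Appendix C of the paper) is assumed to happen elsewhere.

```python
from typing import Callable, Dict, List, Tuple

# The decomposer maps a complex question to (atomic questions, likelihood l_d);
# the generator maps a serialized atomic representation to (question, likelihood l_g).
Decomposer = Callable[[str], Tuple[List[str], float]]
Generator = Callable[[List[str]], Tuple[str, float]]

def build_certainty_scores(q0: str,
                           decompose: Decomposer,
                           generate: Generator,
                           subrepresentations: List[List[str]]) -> Dict[str, float]:
    """Assign a certainty score to every node of the HQDT (Eq. 3).

    `subrepresentations` holds qi.atoms for each intermediate non-leaf
    question, obtained by rearranging reference tokens as described above.
    """
    scores: Dict[str, float] = {q0: 1.0}   # the root is the given question, p_g = 1
    atoms, l_d = decompose(q0)             # leaf questions share certainty l_d
    for a in atoms:
        scores[a] = l_d
    for rep in subrepresentations:         # every other non-leaf question qi
        q_i, l_g = generate(rep)
        scores[q_i] = l_d * l_g            # p_g^i = l_d * l_g^i
    return scores
```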
4.2 Probabilistic Reasoning over HQDT f(qi, pi g, G, C) \u2192Ri : {(ansi j, pi j)}, (4) where ansi j is an answer of qi, and score pi j represents the certainty of ansi j. As shown in Figure 3, the implementation of f contains tree steps: 1) a scheduler determines the suitable knowledge sources for a particular question, i.e., whether the question can be answered from KB, text, or by solving its child questions sequentially; 2) according to the suitable sources output by the scheduler, executors aim to get the answers with probabilities via executing on KB (KB executor) or retrieving from text (text executor), or answering the child questions (call f recursively); 3) an aggregator aggregates candidate answers from all the knowledge sources and outputs the top-k answers according to their probabilities. In the following, we will introduce their details when answering qi. Scheduler We formalize the scheduler as: suitkb, suittext, suitchild = Scheduler(qi, G, C), (5) Where suitkb, suittext and suitchild are 0/1 variables, respectively representing whether the answers of qi are suitable to get from the KB G, the corpus C, or by solving qi.children sequentially. Specifically, to check whether G is suitable, the scheduler employs a semantic parser (Cao et al., 2022a) Msp to parse qi into a program K with probability pparse: K, pparse = Msp(qi). (6) Then it classifies the type of qi according to the function skeleton of K. For example, the function skeleton of K in Figure 2 is \u201cFind-RelateFilterConcept-SelectAmong\u201d. If the precision of G on the questions that have the same function skeleton with K is larger than a predefined threshold \u03b3 1, the scheduler will set suitkb to be 1. 1The precision of KB is calculated with questions in training set \f\ud835\udc92\ud835\udc8a: How high is the highest mountain in North America? \ud835\udc91\ud835\udc88 \ud835\udc8a: 0.98 Scheduler \ud835\udc94\ud835\udc96\ud835\udc8a\ud835\udc95\ud835\udc2d\ud835\udc1e\ud835\udc31\ud835\udc2d: Yes \ud835\udc6a\ud835\udc86 \ud835\udc8a: [ \u201cDenali is the highest mountain peak in North America, with a summit elevation of 20,310 feet above sea level\u201d , \u2026] \ud835\udc94\ud835\udc96\ud835\udc8a\ud835\udc95\ud835\udc24\ud835\udc1b: No \ud835\udc72: [Find] North America [Relate] mountain range backward [FilterConcept] mountain range [SelectAmong] elevation above sea level largest \ud835\udc91\ud835\udc91\ud835\udc82\ud835\udc93\ud835\udc94\ud835\udc86: 0.96 \ud835\udc94\ud835\udc96\ud835\udc8a\ud835\udc95\ud835\udc1c\ud835\udc21\ud835\udc22\ud835\udc25\ud835\udc1d: Yes \ud835\udc79\ud835\udc95\ud835\udc86\ud835\udc99\ud835\udc95 \ud835\udc8a : [ (\u201c20310 feet\u201d, 0.96) ] Text Executor KB Executor \ud835\udc79\ud835\udc8c\ud835\udc83 \ud835\udc8a: [] Aggregator \ud835\udc79\ud835\udc86\ud835\udc85\ud835\udc8a: [ (\u201c6190m\u201d, 0.95), (\u201c20310 feet\u201d, 0.92) ] \ud835\udc92\ud835\udc94\ud835\udc95\ud835\udc8a Scheduler \u2026 Aggregator \ud835\udc79\ud835\udc94\ud835\udc95\ud835\udc8a \ud835\udc92\ud835\udc86\ud835\udc85\ud835\udc8a Scheduler Aggregator \u2026 \ud835\udc92\ud835\udc94\ud835\udc95\ud835\udc8a6\ud835\udfcf Scheduler Aggregator \ud835\udc79\ud835\udc94\ud835\udc95\ud835\udc8a6\ud835\udfcf \u2026 \u2026 \ud835\udc79\ud835\udc8a: [ (\u201c20310 feet\u201d, 0.96), (\u201c6190m\u201d, 0.95) ] Return empty set Return empty set if suitable if suitable else else \u2026 Replace reference tokens Knowledge sources: knowledge base \ud835\udc3a, text corpus \ud835\udc36. 
Figure 2: Illustration of the recursive reasoning function f. For a question qi, f uses the scheduler to determine suitable knowledge sources and calls executors to retrieve answers from them. f also recursively calls itself to get answers from the children of qi. Finally the answers from different sources are fused by the aggregator. To check whether the corpus C is suitable, the scheduler tries to find a set of evidence paragraphs for qi. If C is too large, the scheduler will first use BM25 (Robertson and Zaragoza, 2009) to recall dozens of most relevant paragraphs. For each paragraphs, we train a RoBERTa (Liu et al., 2019)based selector Msl to classify whether it is an evidence paragraph for qi. Suppose the set of selected evidence paragraphs, Ce is not empty, the scheduler will set suittext as 1. To make best use of knowledge from all levels, the scheduler simply set suitchild to be 1 if qi is a non-leaf question otherwise 0. Executors For the KB executor, it takes the program K in Equation 6 on KB G to get the answers, and takes the parsing score pparse in Equation 6 to calculate the probability score for each answer: Ri kb = {(ansi kb,j, pi g \u00b7 pparse)}. (7) For the text executor, it takes the selected paragraph set Ce as described above, and employs a Transformer-based reading comprehension model Mrc to extract answers from Ce: {(ansi text,j, pi ex,j)} = Mrc(qi, Ce), Ri text = {(ansi text,j, pi g \u00b7 pi ex,j)}. (8) where pi ex,j is the extraction probability of ansi text,j given by Mrc. For solving qi by answering its children, f will recursively call itself to solve qsti, . . . , qedi in order: Rsti = f(qsti, psti g , G, C), Rsti+1 = f(qsti+1, psti+1 g , G, C), (9) . . . Redi = fref(qedi, pedi g , G, C, [Rsti, . . . , Redi\u22121]), and let Ri child = Redi. (10) Here, fref is a variant of f to solve bridge and symbolic questions, which refer to the answers of their sibling questions. Suppose qedi refers to the answers of its siblings qr1, . . . , qrhi in order. If qedi is a bridge question, fref will 1) convert qedi into several possible natural language question q1 nl, . . . , qK nl by replacing the reference tokens with every combination ((xk 1, vk 1), . . . , (xk hi, vk hi)) \u2208Rr1 \u00d7 \u00b7 \u00b7 \u00b7 \u00d7 Rrhi, 2) call f to solve each qk nl and 3) fuse the answers from each Rk nl and select the top-k answers with the highest scores: {(ansk nl,j, pk nl,j)} = f(qj nl, pi g, G, C), Rk nl = {(ansk nl,j, Avg(pk nl,j, vk 1, . . . , vk hi))}, Redi = Select(R1 nl, . . . , RK nl ) (11) Note that the score of answer ansk nl,j is computed by averaging pk nl,j and vk 1, . . . , vk hi, instead of multiplying them, to avoid exponential shrink during recursion. If qedi is a symbolic operation question with operation op and arguments, fref will execute simple program to apply the operation op over Rr1, . . . , Rrhi to get Redi. The score of each answer ansedi j is computed as the average of pedi g and the scores of answers in Rr1, . . . , Rrhi used by the program to get ansedi j . \fAggregator The aggregator fuses Ri kb, Ri text and Ri child by selecting the top-k answers with the highest scores from them. If several answers have the same surface form, only the one with the highest score will be preserved. Ri = Aggregator(Ri kb, Ri text, Ri child). (12) 5 Experiments 5.1 Datasets Currently, there are few high-quality complex QA datasets based on both KBs and text. 
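For reference, a compact sketch of the recursive reasoning function f and the aggregator of Section 4.2. The scheduler and the two executors are abstracted as callables, the question object is assumed to expose `children` and `certainty` attributes, and the reference-token substitution of f_ref (Eq. 9-11) is omitted; this is an illustration of the control flow, not the authors' implementation.

```python
from typing import Dict, List, Tuple

Answer = Tuple[str, float]  # (answer string, certainty score)

def aggregate(answer_lists: List[List[Answer]], k: int = 5) -> List[Answer]:
    """Fuse candidate answer lists (Eq. 12): keep the highest score per
    surface form, then return the k best-scored answers."""
    best: Dict[str, float] = {}
    for answers in answer_lists:
        for ans, score in answers:
            best[ans] = max(score, best.get(ans, 0.0))
    return sorted(best.items(), key=lambda x: -x[1])[:k]

def solve(question, p_g, kb, corpus, scheduler, kb_executor, text_executor):
    """Recursive reasoning over one HQDT node (the function f of Eq. 4)."""
    suit_kb, suit_text, suit_child = scheduler(question, kb, corpus)
    candidates = []
    if suit_kb:
        candidates.append(kb_executor(question, p_g, kb))        # Eq. 7
    if suit_text:
        candidates.append(text_executor(question, p_g, corpus))  # Eq. 8
    if suit_child and question.children:
        # Children are solved in order; the last child is a bridge or symbolic
        # operation question whose reference tokens should be filled with its
        # siblings' answers (f_ref). That substitution is omitted here; the
        # sketch simply keeps the answers returned for the last child.
        last: List[Answer] = []
        for child in question.children:
            last = solve(child, child.certainty, kb, corpus,
                         scheduler, kb_executor, text_executor)
        candidates.append(last)
    return aggregate(candidates)
```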
Previous methods (Sun et al., 2018, 2019; Shi et al., 2021) evaluated their models on MetaQA (Zhang et al., 2018) by pairing its KB with the text corpus of WikiMovies (Miller et al., 2016). However, the questions in MetaQA are too simple since there are only 9 relations in its KB. Therefore, we conduct our experiments on two more challenging complex QA datasets: KQA Pro and Musique, and their details are as follows. KQA Pro (Cao et al., 2022a) is a large scale complex QA dataset, including 120k diverse natural language questions up to 5 hops over KB. Its KB is a subset of Wikidata (Vrandecic and Kr\u00f6tzsch, 2014), and consists of 16k entities, 363 predicates, 794 concepts and 890k triple facts. For each question, KQA Pro also provides the corresponding KoPL program. To simulate the realistic case where KB is incomplete, following (Sun et al., 2019; Shi et al., 2021), we randomly discard 50% triples in the KB and take Wikipedia as supplementary text corpus. Musique (Trivedi et al., 2022) is a multi-hop QA dataset over text, including 25k 2-4 hop questions. We evaluate our framework under Musique-Ans setting where all the questions are answerable. Its questions are carefully constructed from several single-hop QA datasets via manually composition and paraphrase, and are hard to cheat via reasoning shortcut. For each complex question, Musique gives 20 paragraphs (including annotated evidence paragraphs and distractor paragraphs) as the corpus. Specially, for each question in the training set, Musique also provides a golden atomic representation, together with the answer and the evidence paragraph of each atomic question. In addition to the given paragraphs, we choose Wikidata as the KB to acquire additional knowledge. 5.2 Implementations KQA Pro For the experiments of KQA Pro, a key challenge is that there are no annotations for atomic representation, which are required for training the question decomposer and generator in RoHT. Because the KoPL program of a complex question follows context free grammar, every atomic question will correspond to a specific span of the program. Therefore we first split the KoPL program into subprograms according to the grammar, then use each sub-program to generate the atomic question by applying BART model fintuned with the (KoPL, question) pairs from the original dataset. For the answers for each atomic question, we execute the corresponding sub-programs on the KB to get corresponding answers. Using these constructed atomic representations, we train two BART-base models as the question decomposer and generator, respectively. For the scheduler, we directly use the semantic parser trained by (Cao et al., 2022a) on KQAPro, and set the precision threshold \u03b3 to be 0.7. We train a RoBERTa-large as the evidence selector via weak supervised method: for each question in the training set and constructed atomic representations, we first use BM25 to recall 10 related paragraphs from wikipedia, then take the paragraphs that contain the answer as positive samples and take other recalled paragraphs as negative samples. For the text executor, we also train a BART-large reading comprehension model on these positive samples. Musique Since Musique provides golden atomic representation for every complex question in the training set, we directly use them to train BARTbase models as question decomposer and generator. For the scheduler, we adapt semantic parser trained by (Cao et al., 2022a) on Wikidata. 
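A sketch of the weakly supervised construction of selector training data described above for KQA Pro. The `rank_bm25` package is used here as one possible BM25 implementation (the paper does not name a specific library), and labeling positives by answer-string containment is a simplification of the procedure.

```python
from rank_bm25 import BM25Okapi  # one possible BM25 implementation; assumed, not specified in the paper

def build_selector_examples(question: str, answer: str, paragraphs: list, n: int = 10):
    """Weakly supervised data for the RoBERTa evidence selector: recall the n
    most relevant paragraphs with BM25, label those containing the answer
    string as positives and the remaining recalled paragraphs as negatives."""
    bm25 = BM25Okapi([p.lower().split() for p in paragraphs])
    recalled = bm25.get_top_n(question.lower().split(), paragraphs, n=n)
    positives = [p for p in recalled if answer.lower() in p.lower()]
    negatives = [p for p in recalled if p not in positives]
    return positives, negatives
```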
The KB precision threshold \u03b3 is set to be 0.4, which is determined by the top-10 types of questions with the highest precision. We train the RoBERTa selector model on complex and atomic questions in the training set together, taking annotated evidence paragraphs as positive samples and distractor paragraphs as negative samples. For the text executor, we pre-train a Longformer-large (Beltagy et al., 2020) reading comprehension model on SQUAD (Rajpurkar et al., 2016), then finetune it on complex questions and atomic questions of Musique. \fModel Overall Multihop Qualifier Comparison Logical Count Verify Zero-shot 50% KB KVMemNN 17.72 17.63 18.53 1.39 15.48 28.38 59.30 0.06 RGCN 34.77 33.71 28.44 31.46 35.39 39.76 64.27 0.06 BART KoPL 38.04 33.10 29.40 51.81 29.92 33.69 60.12 29.03 RoHTKB 38.94 34.16 31.54 50.91 31.61 33.69 60.4 30.52 50%KB + Text TransferNet 16.80 15.94 17.93 45.35 14.84 10.47 0.00 8.43 RoHTmix 46.45 41.76 41.73 52.21 41.95 31.26 65.45 38.76 Table 1: EM results on the dev set of KQA Pro. RoHT outperforms all the baselines by a large margin and achieves the best performance on most types of questions. 5.3 Baselines we compare RoHT with several representative methods for complex QA, including memory-based methods, graph-based methods, and XQA methods. KVMemNN (Miller et al., 2016) stores encoded knowledge in key-value memory and iteratively reads the memory to update the query vector to conduct multi-hop reasoning. RGCN (Schlichtkrull et al., 2018) is a variant of graph convolutional network and utilizes the graph structure of KB to tackle complex questions. BART KoPL (Cao et al., 2022a) is a BART-based semantic parser which can convert complex question into KoPL program. It achieves over 90% accuracy on KQA Pro on the complete KB. SA (Trivedi et al., 2022) is a two-stage model that first uses a RoBERTa-large selector to rank and select the K most relevant paragraphs with the question and then uses a Longformer-large answerer to predict answer based on selected paragraphs. EX(SA) (Trivedi et al., 2022) is the state-of-the-art model on Musique. It first explicitly decomposes the complex question into atomic representation and then calling SA model repeatedly to answer each atomic question in order. TransferNet (Shi et al., 2021) iteratively transfer entity scores via activated path on the relation graph that consists of both text-form relations and KB-form relations. It is existing state-of-the-art model that utilizes both KBs and text as knowledge soruces, and nearly solves MetaQA. We reimplement it on both KQA Pro and Musique, and the details are shown in Appendix D. RoHT: RoHTKB, RoHTtext and RoHTmix denote the RoHT models that only use KB, only use text and use both KB and text, respectively. 5.4 Main Results 5.4.1 Results on KQA Pro The experimental results for KQA Pro are shown in Table 1. When using only the incomplete KB, RoHTKB model respectively improves EM by 21.22, 4.17 and 0.90 compared to KVMemNN, RGCN and BART KoPL, showing the benefit of integrating the answers of sub-questions of different levels. After adding Wikipedia as supplementary text corpus, RoHTmix yields substantial improvement compared with RoHTKB (7.51 on EM), demonstrating the effectiveness of utilizing knowledge from KB and text together. RoHTmix also outperforms TransferNet, which is end-to-endly trained with a mixed relation graph, by a large margin (29.65 on EM). 
This is because unlike graphbased methods, RoHT explicitly shows the compositional structure of a complex question in natural language form via HQDT generation, and thus can retrieve answers from the KB and text with more advanced and flexible sub-modules (e.g., semantic parser and reading comprehension model). Moreover, our designed atomic operations in the HQDT also enable RoHT to solve a wide variety of complex questions: we can see that RoHTmix achieves the best results on 6 types of questions among 7 types, showing comprehensive reasoning capacity. 5.4.2 Results on Musique Table 2 presents the results on the dev set of Musique dataset. As expected, our RoHT models show significant improvement over all the baselines. With only given paragraphs, RoHTtext improves EM/F1 by 13.8/14.3 and 11.6/11.9 compared with SA and EX(SA), respectively; With both text and KB, the performance of RoHTmix is also remarkably better than TransferNet (62.3 v.s. 10.9 on F1). Comparing RoHTtext and RoHTmix, \fModel EM F1 Text SA 39.3 47.3 EX(SA) 41.5 49.7 RoHTtext 53.1 61.6 Text+KB TransferNet 8.6 10.9 RoHTmix 54.4 62.3 Table 2: EM and F1 results on the dev set of Musique. Compared with state-of-the-art methods, RoHT achieves significant improvement. Model KQA Pro Musique RoHTmix 46.5 54.4 w/o scheduler 40.7 47.0 RoATmix 32.3 47.6 Table 3: EM performance of RoHTmix with and without scheduler, and EM performance of RoATmix. we can also see some benefits of supplementing the text information with KB information, though the improvement is smaller than supplementing the KB with text on KQA Pro because KBs have lower coverage than text and the semantic parser is not specially finetuned for questions of Musique. We submit the predictions of RoHTmix on the test set and achieve 63.6 F1 score, which significantly outperforms the best public result 52.3. 5.5 Further Analysis 5.5.1 Effect of Scheduler To show the effect of the scheduler module, we remove it from the RoHTmix model, i.e, default that the KB and recalled/given text paragraphs are suitable for all questions in the HQDT, and evaluate the performance again on the dev set of KQA Pro and Musique. The results are shown in Table 3. We can see that after discarding the scheduler, the EM performance on KQA Pro and Musique drops by 5.8 and 7.4, respectively. Therefore, it is important to use the scheduler to select suitable knowledge sources for each question. 5.5.2 Effect of Hierarchical Decomposition Many existing methods generate non-hierarchical decomposition of complex questions, similar to the atomic representation, to assist reasoning (Min et al., 2019; Wolfson et al., 2020; Deng et al., 2022). To demonstrate the superiority of hierarchical decomposition, we compare our RoHTmix model with \ud835\udc92\ud835\udfce: Why did Roncalli leave the city where the painter of Venus with a Mirror died? \ud835\udc92\ud835\udfcf: Where did the creator of The Venus with a Mirror die? \ud835\udc92\ud835\udfd0: Why did Roncalli leave #1? \ud835\udc92\ud835\udfd2: Where did #3 die? \ud835\udc92\ud835\udfd1: The Venus with a Mirror was made by whom? Question: Why did Roncalli leave the city where the painter of Venus with a Mirror died? 
Suitable sources: KB, text KB ans: [ (\u201cTitian\u201d, 0.93) ] Text ans: [ (\u201cTitian\u201d, 0.97) ] Final ans: [ (\u201cTitian\u201d, 0.97) ] Suitable sources: KB, text KB ans: [] Text ans: [ (\u201cWashington\u201d, 0.95) ] Final ans: [ (\u201cWashington\u201d, 0.95) ] Suitable sources: KB, text, children KB ans: [] Text ans: [ (\u201cVenice\u201d, 0.88) ] Child ans: [ (\u201cWashington\u201d, 0.95) ] Final Ans: [ (\u201cWashington\u201d, 0.95), (\u201cVenice\u201d, 0.88) ] Suitable sources: text Text ans: [ (\u201cthe death of Pope Pius XII\u201d, 0.81), (\u201cfor the conclave in Rome\u201d, 0.93) ] Final ans: [ (\u201cfor the conclave in Rome\u201d, 0.93), (\u201cthe death of Pope Pius XII\u201d, 0.81) ] Suitable sources: text, children Text ans: [ (\u201cfor the conclave in Rome\u201d, 0.91) ] Child ans: [ (\u201cfor the conclave in Rome\u201d, 0.93), (\u201cthe death of Pope Pius XII\u201d, 0.81) ] Final Ans: [ (\u201cfor the conclave in Rome\u201d, 0.93), (\u201cthe death of Pope Pius XII\u201d, 0.81) ] RoHT RoAT \ud835\udc82\ud835\udfcf \ud835\udfce: The Venus with a Mirror was made by whom? Suitable sources: KB, text KB ans: [ (\u201cTitian\u201d, 0.93) ] Text ans: [ (\u201cTitian\u201d, 0.97) ] Final ans: [ (\u201cTitian\u201d, 0.97) ] \ud835\udc82\ud835\udfd0 \ud835\udfce: Where did #1 die? Suitable sources: KB, text KB ans: [] Text ans: [ (\u201cWashington\u201d, 0.95) ] Final ans: [ (\u201cWashington\u201d, 0.95) ] \ud835\udc82\ud835\udfd1 \ud835\udfce: Why did Roncalli leave #2? Suitable sources: text Text ans: [ (\u201cthe death of Pope Pius XII\u201d, 0.81)] Final ans: [ (\u201cthe death of Pope Pius XII\u201d, 0.81) ] replace reference tokens return children answers Figure 3: A case from Musique. We mark the correct answers in green and the wrong answers in red. RoATmix model, which uses the same scheduler, executors, and aggregator as RoHTmix, but solves the complex question by directly answering the atomic questions in its atomic representation in order. As shown in Table 3, RoHTmix outperforms RoATmix by a large margin on both KQA Pro and Musique. This is because the hierarchical structure of HQDT enables RoHT model to fuse the knowledge from KBs and text at different question levels, and to discard wrong answers via comparing the problisitic scores of answers. To further understand the reason, we show a case from Musique in Figure 3. We can see that both RoHTmix and RoATmix fail to answer the question \u201cWhere did (Titian) die?\u201d (q4 in the left, a0 2 in the right). However, RoHTmix directly extracts the correct answer of q1 from text and finally gets the correct answer of q0 with the highest score, while RoHTmix fails to solve a0 3 because it must rely on the wrong answer from a0 2. 6" + } + ], + "Xiaochen Liu": [ + { + "url": "http://arxiv.org/abs/2204.04413v2", + "title": "PSP: Pre-trained Soft Prompts for Few-Shot Abstractive Summarization", + "abstract": "Few-shot abstractive summarization has become a challenging task in natural\nlanguage generation. To support it, we designed a novel soft prompts\narchitecture coupled with a prompt pre-training plus fine-tuning paradigm that\nis effective and tunes only extremely light parameters. The soft prompts\ninclude continuous input embeddings across an encoder and a decoder to fit the\nstructure of the generation models. Importantly, a novel inner-prompt placed in\nthe text is introduced to capture document-level information. 
The aim is to\ndevote attention to understanding the document that better prompts the model to\ngenerate document-related content. The first step in the summarization\nprocedure is to conduct prompt pre-training with self-supervised pseudo-data.\nThis teaches the model basic summarizing capabilities. The model is then\nfine-tuned with few-shot examples. Experimental results on the CNN/DailyMail\nand XSum datasets show that our method, with only 0.1% of the parameters,\noutperforms full-model tuning where all model parameters are tuned. It also\nsurpasses Prompt Tuning by a large margin and delivers competitive results\nagainst Prefix-Tuning with 3% of the parameters.", + "authors": "Xiaochen Liu, Yang Gao, Yu Bai, Jiawei Li, Yinan Hu, Heyan Huang, Boxing Chen", + "published": "2022-04-09", + "updated": "2022-10-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "main_content": "Introduction Given the high labor-costs of obtaining quality abstractive summaries, few-shot abstractive summarization is very demanding and highly challenging. A widely accepted paradigm for almost all NLP tasks is to \ufb01ne-tune the entire set of parameters for a large pre-trained language model to suit the target task (Liu and Lapata, 2019; Liu et al., 2020). However, the \ufb01ne-tuning with few-shot examples usually leads to disappointing results, especially with generation tasks like abstractive summarization (Fabbri et al., 2020; Yu et al., 2021). \u2217 Corresponding author. The likely outcome is an over\ufb01t model. Further, for every speci\ufb01c task, a large number of pre-trained parameters need to be updated and stored, which is not ef\ufb01cient to use. Pre-trained language models are few-shot learners, i.e., GPT-3 (Brown et al., 2020) that surprisingly perform generation tasks from a few examples without any further gradient updates. Although it lacks a rigorously theoretical proof, prompt learning inherits the few-shot property (Li and Liang, 2021; Schick and Sch\u00fctze, 2020; Jin et al., 2021; Liu et al., 2021). Commonly, this type of learning is considered to retrieve relevant knowledge from frozen language models, only tuning continuous prompts to quickly adapt to new tasks with very few examples. More recently, Prompt Tuning (Lester et al., 2021) has received much attention. With large frozen language models (say, >10 billion parameters), Prompt Tuning simply adds a tunable soft prompt to the input of the encoder, achieving results that are comparable to full-model tuning. Yet, our empirical results, in Section 2, demonstrate that Prompt Tuning for abstractive summarization yields simply abysmal performance. Pre\ufb01xTuning (Li and Liang, 2021) extends the use of prompt learning in the natural language generation area. With this technique, continuous prompts are applied to every layer of the pre-trained model and even shows increase in few-shot generation tasks over \ufb01ne-tuning. Yet the training process is not stable and updates are required that add to the memory and training costs.1 Given the shortcomings of these two methods, we have developed a soft prompts tuning method that is speci\ufb01cally designed for summarization. The structure is given in Figure 1. The method is capable of performing few-shot language generation task (i.e., abstractive summarization) with an ef\ufb01cient amount of training parameters. Prompt 1See more related work in Section 5. arXiv:2204.04413v2 [cs.CL] 4 Oct 2022 \fFigure 1: The comparison between PSP and previous methods. 
\u201cE\u201d and \u201cD\u201d represents the encoder and the decoder, respectively. tokens are added before the decoder input tokens to guide the generation process toward the target summary. Moreover, we have designed three inner prompts \u2013 interval, sequential, and \ufb01xed-length \u2013 one of which is placed among the source input tokens. The aim is to capture the structure in the source document and aid in understanding its semantics, so as to better prompt the model to generate document-related content. Each kind of inner prompts focuses on different semantic units (e.g., phrases, sentences, and etc.), differentiating important units from non-informative ones. To bolster the summarization ability of the model and assist the prompts to understand the documents, prompt pre-training is performed before the tuning process, and leveraged by self-supervised pseudo data. As a last step, all the prompts are \ufb01ne-tuned with fewshot training examples. Experiments conducted on two commonly used datasets CNNDM (See et al., 2017) and XSum (Narayan et al., 2018) demonstrate that our method outperforms full-model tuning under few-shot settings only with 0.1% of the parameters. It also surpasses naive Prompt Tuning by a large margin. Our model also yields a performance competitive to Pre\ufb01x-Tuning with 3% of the trainable parameters. A detailed analysis shows that the designed prompt-pre-training phase and the inner prompts are effective for few-shot text summarization. Thus, the major contributions of this work include : 1) A novel soft prompt architecture for few-shot abstractive summarization. With the well-designed prompts in embedding layer, our model ful\ufb01lls the task effectively and ef\ufb01ciently; 2) It is necessary to perform prompt pre-training strategy which bene\ufb01ts soft prompts model for fewshot summarization and shows excellent zero-shot capabilities; 3) Experiments that investigate the effect of different prompts by probing the attention weights. The results show our model is able to: extract knowledge from the encoder language model; understand the discourse in the document; and guide the decoder language model to generate \ufb02uent summaries. 2 Pilot Experiments In a pilot study, we experimented with using Prompt Tuning under 300-shots settings to \ufb01nd reasonable clues as to how to design summaryprompts for the task. Our \ufb01ndings follow. Consider an encoder-decoder language model p\u03b8(y|x) based on the Transformer architecture (Vaswani et al., 2017) (e.g., BART (Lewis et al., 2020)) and parameterized by \u03b8. To conduct a few-shot summarization task, we have some few-shot training pairs of a document X = {x1, x2, . . . , x|X|} and a corresponding summary Y = {y1, y2, . . . , y|Y |}. Speci\ufb01cally, we divided X into different subsets with sentences2 as our unit, X = {x1 1, . . . xi j, . . . , xn m}, where xi j denotes the jth token in the ith sentence. First, original Prompt Tuning is applied by concatenating a series of prompt tokens Pen, parameterized by \u03b8pen, to the encoder input Xen = {e1 1, . . . , ei j, . . . en m}, where e represents the embedding of each token (the leftmost structure in Figure 1). The gradients are backpropagated through the prompts and the weights \u03b8 of language model are frozen (Lester et al., 2021). 
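A minimal PyTorch/Transformers sketch of the prompt-in-encoder setup just described: only the prepended prompt embeddings receive gradients while the BART weights stay frozen. Initialization and hyper-parameters here are illustrative, not the exact configuration used in the experiments.

```python
import torch
import torch.nn as nn
from transformers import BartForConditionalGeneration, BartTokenizer

model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
for p in model.parameters():          # freeze the whole language model
    p.requires_grad = False

n_prompt, d_model = 100, model.config.d_model
prompt = nn.Parameter(torch.randn(n_prompt, d_model) * 0.5)  # P_en, the only trained weights

def forward_with_prompt(document: str, summary: str):
    enc = tokenizer(document, return_tensors="pt", truncation=True)
    dec = tokenizer(summary, return_tensors="pt", truncation=True)
    tok_emb = model.get_input_embeddings()(enc.input_ids)            # (1, |X|, d)
    inputs_embeds = torch.cat([prompt.unsqueeze(0), tok_emb], dim=1)  # [P_en; X_en]
    # Extend the attention mask to cover the prepended prompt positions.
    attn = torch.cat([torch.ones(1, n_prompt, dtype=enc.attention_mask.dtype),
                      enc.attention_mask], dim=1)
    out = model(inputs_embeds=inputs_embeds, attention_mask=attn,
                labels=dec.input_ids)
    return out.loss                   # back-propagates into `prompt` only

optimizer = torch.optim.AdamW([prompt], lr=1e-3)  # only the prompt is updated
```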
In this way, the model maximizes the likelihood of the output Y : p\u03b8;\u03b8pen(Y |[Pen; Xen]) (1) The result of original Prompt Tuning is shown on the \ufb01rst line in Table 1, where we see it severely underperforms versus full-model tuning. In further experiments, we added a series of prompts Pde to the decoder inputs Xde following the generation p\u03b8;\u03b8pde(Y |Xen, Pde). Here, we found the results to be even worse than the last. Necessary Prompts for Generation For generation-based tasks, prompts in both the encoder and decoder are equivalently useful. Therefore, our model employs a combination of the two series of prompts mentioned above, and generates Y conditioning on Xen, Pen and Pde: p\u03b8;\u03b8pen;\u03b8pde(Y |[Pen; Xen], Pde) (2) 2Note that, throughout this work, a \u201csentence\u201d can be an arbitrary span of contiguous text (e.g., \ufb01xed length of 10 tokens), or an actual linguistic sentence. \fModel ROUGE-1 ROUGE-2 ROUGE-L Prompt in encoder 32.87 11.92 21.73 Prompt in decoder 26.77 11.73 16.71 Prompt in en.&de. 36.37 14.41 24.46 Full-Model Tuning 37.01 14.49 23.91 Table 1: Results of BART-base on CNN/DailyMail Datasets. Best results are bold. Figure 2: Visualization of the encoder-decoder attention weights. The x-axis are the encoder input, including prompts across the encoder Pen and the source document X. The y-axis are the decoder input, including prompts across the decoder Pde and the target summary Y . The area in the red box represents the attentions of Pde assigning to Pen. The area in the yellow box represents the attentions of Y assigning to X. Darker color shows the more highly related associations between tokens. The result on the third line in Table 1 again verify our hypothesis. Prompts across the encoder and decoder even achieve comparable results with fullmodel tuning under few-shot settings. This veri\ufb01es two things for us. First, prepending simple prompts to only the input embedding layer is effective and ef\ufb01cient for few-shot abstractive summarization. Second, prompts across the encoder and decoder are both necessary for generation tasks. Lack of Attention on the Document We further explored the encoder-decoder attention to investigate the effect of the prompts and freezing the language model. From Figure 2, we \ufb01nd the generating output is mainly focused on the soft prompts to come with little attention given to the document itself. This outcome is detrimental to summarization that requires to understand the semantics and inner discourse structure of documents (Wang et al., 2019). Without the associations of target summaries and source documents, it is impossible to obtain high-quality summaries using current prompt architectures. From Figure 2, we can observe that prompts in the encoder and the ones in decoder are consistently Figure 3: Architecture and training scheme of PSP. Squares in blue and red indicates frozen and tuned parameters, respectively. and directly associated with each other. We speculate that the mechanism is that encoder prompts retrieve relevant knowledge from the frozen encoder language model as a document representation, and decoder prompts copy the encoder\u2019s behaviour, guiding the decoder language model to generate text. 3 Method In light of our \ufb01ndings about the current architectures, we developed a new architecture of pretrained soft prompts, for few-shot abstractive summarization called PSP. 
The framework includes continuous prompts across the encoder and decoder inputs, as well as inner-prompts to capture the dependencies between documents and target summaries. To better understand a given document, we add a prompt pre-training process before fewshot tuning. It also brings a good initialization for the prompting. The overall architecture and training scheme are illustrated in Figure 3. 3.1 Encoder-Decoder Basic Prompts As mentioned in Section 2, in the training phase of current architectures, Pen is responsible for extracting knowledge from the encoder\u2019s frozen language model as a document representation. Meanwhile, Pde mostly copies the behavior of Pen and guides the frozen decoder\u2019s language model to generate \ufb02uent text as a summary. To strengthen the model\u2019s ability to understand a document, the dependencies and attentions given to the source document need to be embodied in the prompt architecture. \fFigure 4: Different inner prompts for one example source document. Different colors indicate different inner prompt embeddings. \u201cNO. of words\u201d means the length of the text span. 3.2 Inner-Prompts for Document Understanding To achieve our goal, we propose the notion of adding inner-prompts within the source document, denoted as Pin = {p1 in, p2 in, . . . , pn in} with the parameters \u03b8Pin to be updated. Each pi in corresponds to a single sentence. These inner-prompts are added to the corresponding token embedding, which gives rise to a new X\u2032 in: X\u2032 in = {e1 1+p1 in, e1 2+p1 in, . . . , ei j+pi in, . . . , en m+pn in} (3) We believe that by prompting different semantic units (e.g., sentences, phrases, etc.), more attention can be given to understanding the document\u2019s discourse. Furthermore, the inner-prompts help the model to quickly interpret the document by strengthening the associations between outputs and documents. What follows are three different strategies for incorporating the three different innerprompts. Note that there is more discussion on this point in Section 4.2. Interval Following Liu and Lapata (2019), the interval inner-prompts comprises two inner-prompt tokens are assigned to each sentence senti, depending on whether i is odd. Speci\ufb01cally, Pin = {p1 in, p2 in, p1 in, . . . , p(n\u22121)mod2+1 in } (4) In this way, the model can identify important sentences to encode the document at sentence level. Sequential To highlight the complex discourse structure of documents, sentence positions need to be considered. Therefore, different tokens are set in sentences by their sequences, formulated as: Pin = {p1 in, p2 in, . . . , pn in} (5) Fixed-length To discover more \ufb01ne-grained semantic units, a text span with a \ufb01xed length k is manipulated into a new \u201csentence\u201d and a corresponding sequential token is assigned to it. Further, prompts are assigned to the newly divided sentences [sent1, sent2, ..., sentn], as {p1 in, p2 in, . . . , pn in}. Figure 4 illustrates some examples where the above strategies have been used. 3.3 Self-supervised Prompt Pre-training To improve ability of the prompts to understand the documents and to help the model to adapt to the summarization tasks, soft prompts are further pretrained on the corpus using summarization-oriented self-supervised objectives. Doing this also means that the prompts are well initialized for few-shot tuning. We tested two strategies for constructing the selfsupervised data. 
Each strategy was designed to suit a particular type of writing bias in the document. These are \u201clead\u201d and \u201cgap sentences generation\u201d. Lead Lead bias is common in news articles, which usually follow an inverted pyramid structure where the \ufb01rst few sentences contain the most salient information (See et al., 2017; Yang et al., 2020). With this type of bias, we initially select the \ufb01rst three sentences as our target summary, and treated the rest of the document as the source text. With this type of prompt pre-training process, the model was able to infer the salient information based on the remaining text. GSG Gap sentences generation applies to all documents that do not follow the lead bias structure (e.g., XSum (Narayan et al., 2018)). The strategy used here follows Zhang et al. (2020) , where we used ROUGE1-F1 (Lin, 2004) between each sentence xi and the rest of the document as a proxy for the principal score, si = rouge(xi, D \\ {xi}), \u2200i. The top-m most important sentences were selected according to si, and removed from the document. Then these m sentences are concatenated in the same order as the original text in the form of a pseudo summary. The remainder of the text is treated as a pseudo document. With the constructed data, our designed prompts can be pre-trained and further tuned with few-shot examples. 3.4 Training Objective The model is trained with maximum likelihood estimation (MLE). Given a ground-truth summary \fDatasets CNNDM XSum train dev test train dev test Avg.Passage 697.45 676.64 717.92 396.53 387.62 380.55 Avg.Sum 55.91 51.97 58.62 22.90 23.29 22.11 Labled data 300 300 11,490 300 300 11,333 Table 2: Datasets statistics. \u201cAvg.Passage\u201d means the average length of passages and \u201cAvg.Sum\u201d means the average length of summaries. Y = [y1, y2, ..., y|Y |] for an input passage X, the objective is to minimize the negative log-likelihood of the target word sequence: L = \u2212 |Y | X t=1 log p\u03b8\u2217(yt|[Pen; X\u2032 in], [Pde; y1, ...yt\u22121]) \u03b8\u2217= {\u03b8; \u03b8pen; \u03b8pde; \u03b8pin} (6) Note that only these prepended-prompts parameters (\u03b8pen, \u03b8pde) and the inner-prompts parameters (\u03b8pin) are optimized, the language model parameters (\u03b8) are all frozen. 4 Experiments Datasets We experimented with the CNN/DailyMail (CNNDM) dataset (Hermann et al., 2015) and the XSum dataset (Narayan et al., 2018). We chose these datasets because they differ in abstraction level and text length, which helps to show the generalization ability of our results. We constructed the self-supervised pre-training data for CNNDM with Lead, and for XSum with GSG. We show details in Section A.1 in the appendix. Given that the lead bias structure exists only in some domain-speci\ufb01c datasets, we also conducted experiments to demonstrate the universality of the GSG to construct pseudo-data. The results are shown in Section A.3 in the appendix. Our fewshot training set Dtrain contained 300 documentsummary pairs randomly sampled from the original training data. To tune the hyper-parameters and select the best checkpoint, we composed a validation set Ddev from the original validation data. Here, we were careful to ensure that |Dtrain| = |Ddev| so that it \ufb01t into a true few-shot learning setting, following Perez et al. (2021). Since few-shot learning may have high variance, we sampled the examples with 5 different random seeds. 
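A self-contained sketch of the GSG pseudo-data construction described above. A simple unigram-overlap F1 stands in for ROUGE-1 F1, and sentence splitting is assumed to be done beforehand.

```python
from collections import Counter
from typing import List, Tuple

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap F1, a lightweight stand-in for ROUGE-1 F1."""
    c, r = Counter(candidate.lower().split()), Counter(reference.lower().split())
    overlap = sum((c & r).values())
    if overlap == 0:
        return 0.0
    prec, rec = overlap / sum(c.values()), overlap / sum(r.values())
    return 2 * prec * rec / (prec + rec)

def gsg_pseudo_pair(sentences: List[str], m: int = 2) -> Tuple[str, str]:
    """Gap-sentences generation: score each sentence against the rest of the
    document, keep the top-m as a pseudo summary (in original order), and use
    the remaining sentences as the pseudo document."""
    scores = [rouge1_f1(s, " ".join(sentences[:i] + sentences[i + 1:]))
              for i, s in enumerate(sentences)]
    top = set(sorted(range(len(sentences)), key=lambda i: -scores[i])[:m])
    pseudo_summary = " ".join(s for i, s in enumerate(sentences) if i in top)
    pseudo_document = " ".join(s for i, s in enumerate(sentences) if i not in top)
    return pseudo_document, pseudo_summary
```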
We used the original test set to report our results, including the mean value and the standard deviation. Table 2 shows the statistics of the pre-processed corpus. Setup The base version of BART was used in our work. Following Lester et al. (2021), we used 100 prompt tokens for both the encoder inputs and the decoder inputs. These prompts were randomly initialized from the set of vocabularies. The sequential and \ufb01xed-length inner-prompts require a maximum number. Hence, we counted the number of sentences in each document and divided the results into two groups \u2013 the 85% with the least sentences (Group A) and the 15% with the most sentences (Group B)3. We then set the number of prompts to the most number of sentences in Group A plus one, i.e., n + 1. For CNNDM, that number was 61 and, for XSum, it was 33. In this way, one inner-prompt token was assigned to each sentence up to n. For the excessively long documents in Group B, the text after n sentences was assigned an n + 1-th token. Further, we drew from a normal distribution N(0, 0.05) to initialize the inner-prompt embeddings4. Taking CNNDM as an example, all the tunable parameters that need to be stored amount to only 2\u00d7105. This is compared to the (1.4\u00d7108) parameters of full-model tuning. That equates to around 0.1% of the parameters for each dataset that need to be tuned and stored. Evaluation Metrics We adopted ROUGE (Lin, 2004) to measure the quality of the summaries produced in our experiments. The F1 scores for ROUGE-1, ROUGE-2, and ROUGE-L between the ground-truth and the generated summaries are each reported. Baseline Models We compared PSP to: Prompt Tuning (Lester et al., 2021), which only concatenates soft prompts into the encoder input; Pre\ufb01x Tuning (Li and Liang, 2021), which adds a pre\ufb01x to all the encoder layers, cross-attention layers, and the decoder layers; and Full-Model Tuning, which does not have any prompts and \ufb01ne-tunes all the parameters of the pre-trained language model. 4.1 Experimental Results of Our Method Table 3 presents the results of all PSP variants and baselines across CNNDM and XSum datasets. With the exception of the ROUGE-2 and ROUGEL scores for the Pre\ufb01x-Tuning on the CNNDM dataset, our proposed PSP, outperforms the others. However, PSP delivered a competitive result with only 3% of the parameters, which is an acceptable 3We made our division at 85% to ensure all embeddings of inner-prompt tokens could be fully trained, because sentences after the n-th only exist in 15% of the data. 4More information about implementation details are shown in Section A.2 in the appendix. \fCNNDM XSum Model ROUGE-1 ROUGE-2 ROUGE-L PPL ROUGE-1 ROUGE-2 ROUGE-L PPL Prompt Tuning 30.582.07 11.930.46 21.731.86 141.56 29.631.21 8.840.55 22.001.23 101.96 Pre\ufb01x-Tuning 37.120.15 16.590.09 26.280.06 52.59 32.180.16 11.130.08 25.500.14 39.58 Full-Model Tuning 38.030.56 16.010.79 25.210.70 65.73 32.850.25 10.520.24 25.150.29 51.63 PSPInterval 37.820.29 15.400.31 25.100.36 45.54 32.860.21 11.270.08 25.640.11 44.25 PSPSequential 37.820.39 15.580.32 25.160.32 48.10 32.570.11 10.970.07 25.390.05 35.70 PSPFixed\u2212k 38.310.15 15.940.21 25.410.25 58.50 32.810.10 11.150.10 25.480.13 52.10 Table 3: Results on CNNDM and XSum Datasets. The experiments are conducted with 300 training samples and 300 validation samples on each dataset. We report the mean value and the standard deviation over 5 sampled datasets. k = 10 is chosen for PSPFixed\u2212k. 
\u201cPPL\u201d represents the perplexity of generated summaries. A low perplexity indicates the summaries are \ufb02uent. Best results are bold and underline means our models outperform Full-model tuning. place to start. To our surprise, we observe that 50% of PSP\u2019s results surpass the full-model tuning, especially on XSum, as underlined in the table. Besides, results on the PPL metric show that PSP can generate more \ufb02uent summaries than other models. These results indicate that \ufb01ne-tuning large language models is not necessarily a good or ef\ufb01cient idea with few-shot generation. It also shows that soft prompts with frozen language models are effective for few-shot abstractive summarization. Moreover, it statistically veri\ufb01es that PSP with its three inner-prompt strategies is effective. Ef\ufb01ciency v.s. effectiveness. We gave an overall comparison to baseline models on effectiveness and memory-ef\ufb01ciency, evaluated by ROUGE and the number of parameters, respectively. The results are shown in Table 4. Prompt Tuning has the least number of parameters, while its capacity is limited to this and lacks control over the decoder side, hence it can not perform natural language generation tasks well. We can see that substantial gains are made when going from vanilla Prompt Tuning to PSP. However, even if Pre\ufb01x-Tuning is nearly thirty times more parameters than ours, there is either a marginal improvement or even performance decrease on some metrics. Besides, Pre\ufb01x-Tuning relies on reparameterization tricks to stabilize the training, i.e., adds a MLP with large number of parameters to the training stage. Our method provides the best effectiveness-ef\ufb01ciency trade off, and outperforms full-model tuning with only 0.1% parameters, and presents competitive results against Pre\ufb01x-Tuning with 3% parameters. Human Evaluation We conducted a human evaluation study. To this end, we randomly selected 20 instances from the test set of each dataset. Ten Model # Train # Store ROUGE-1 CNNDM XSUM PSP 2.0 \u00d7 105 2.0 \u00d7 105 38.32 32.86 Pre\ufb01x-Tuning 2.4 \u00d7 107 5.5 \u00d7 106 37.12 32.18 Prompt Tuning 7.7 \u00d7 104 7.7 \u00d7 104 30.58 29.63 Full-Model Tuning 1.4 \u00d7 108 1.4 \u00d7 108 38.03 32.85 Table 4: Comparison with baseline models on effectiveness and ef\ufb01ciency. \u201c# Train\u201d means the number of tuned parameters during training. \u201c # Store\u201d means the number of stored parameters. Best results are bold. graduate students with high levels of \ufb02uency in English were asked to assess the generated summaries and golden summaries from independent perspectives (Wang et al., 2021): Informativeness (how much useful information does the summary provide?), Relevance (how well does the summary re\ufb02ect the input document?), and Fluency (how grammatically correct are the summary sentences and how easy are they to read?). Scoring followed the Best-Worst Scaling method (Kiritchenko and Mohammad, 2017). Participants were asked to select the best and worst summaries from each perspective. The scores were computed as the percentage of times a summary was chosen as the best minus the times it was selected as the worst. The scores ranged from -1 (worst) to 1 (best). Results are shown in Table 5. Qualitatively, we show several examples generated by different models and the reference in Table 14 and Table 15 in the appendix. 
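As a small illustration of the Best-Worst Scaling scores reported above, with hypothetical counts:

```python
def best_worst_score(n_best: int, n_worst: int, n_comparisons: int) -> float:
    """Fraction of times chosen as best minus fraction chosen as worst, in [-1, 1]."""
    return (n_best - n_worst) / n_comparisons

# A system picked as best 12 times and as worst 3 times over 20 comparisons:
assert best_worst_score(12, 3, 20) == 0.45
```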
Compared with all baselines, the summaries generated by PSP are always more \ufb02uent and relevant to the source document, consistent with the results of human evaluation. Further more, we found summaries generated by PSP and Pre\ufb01xTuning are always similar in sentence patterns and expressions. However, Pre\ufb01x-Tuning tends to generate texts shorter than PSP, which often leads to \fMethods CNNDM XSum IF RL FL IF RL FL PSP 0.500 0.708 0.667 0.217 0.275 0.492 Prompt Tuning -0.317 -0.758 -0.975 -0.336 -0.400 -0.867 Pre\ufb01x-Tuning -0.233 0.067 0.158 0.017 -0.008 0.292 Full-Model Tuning 0.067 -0.025 0.075 0.117 0.092 0.075 Table 5: Human evaluation results. Best results are bold. k Ddev Dtest R-1 R-2 R-L R-1 R-2 R-L 5 34.27 11.90 26.41 31.90 10.28 24.20 10 35.31 12.88 26.85 32.89 11.13 25.51 15 34.98 11.68 26.45 32.11 10.46 24.72 30 34.48 12.57 26.55 32.20 11.03 25.30 Table 6: Results of different \ufb01xed length k on validation set Ddev and test set Dtest of XSum. \u201cR-1\u201d is short for \u201cROUGE-1\u201d, the same for \u201cR-2\u201d and \u201cR-L\u201d. lack of information. Selection of \ufb01xed length k. As shown in Table 3, PSPFixed\u2212k performs consistently well on both datasets. So we further explored the in\ufb02uence of different length k, i.e., k = 5, 10, 15, 30, for inner-prompt tokens of the PSPFixed\u2212k5. Table 6 presents the results of the variants on XSum. We observe the segmented spans with 10 tokens achieve the best performance. Interestingly, it can be induced that, to understand a document, it is possible to reorganize the sentence into several semantic units, where the number of the tokens is 10 on average. We also report results of different k on our validation set in Table 6. The ranking is consistent with the test set. From a practical perspective, when applying PSP to a new dataset, we can choose the best k based on the validation set. 4.2 Analyses on Soft Prompts Whether our model attends to understand documents? According to Figure 2, we further present the encoder-decoder attention distribution of the PSP. The comparison visualization is shown in Figure 5. We \ufb01nd the following enhancement of our model by introducing the inner prompts. First, the PSP model strengthens the associations between the encoder prompts and the decoder prompts compared to the original model. Second, the soft prompt Pen has more opportunities to be related to the output Y , indicating the semantic re5The average number of tokens per sentence in both datasets was about 18, so we did not consider \ufb01xed lengths of 20, for its similarity to the PSPSequential. Model CNNDM XSum R-1 R-2 R-L R-1 R-2 R-L Soft prompts (en.&de., 100) 36.89 14.96 24.63 29.36 9.90 22.92 Soft prompts (en.&de., 150) 35.71 14.86 23.97 28.94 9.52 22.24 Soft prompts (en.&de.&ip., 100) 37.87 15.83 25.37 31.95 10.52 24.80 Table 7: Results of different architectures of soft prompts on CNNDM and XSum, where \u201cen.\u201d \u201cde.\u201d \u201cip.\u201d are short for encoder, decoder and inner prompts, respectively. Numbers in parentheses represent the number of prompt tokens we prepended before the encoder and decoder input. Model ROUGE-1 ROUGE-2 ROUGE-L Soft prompts (en.&de., shared) 36.06 14.30 24.24 Soft prompts (en.&de., separate) 36.37 14.41 24.46 Table 8: Results of basic soft prompts on the CNNDM. lations between them. Third, the output Y assigns more attention to the source document X. 
This suggests that the hidden structure of the document is emphasized, increasing the capability of understanding its semantics. As such, these prompts can properly elect salient information from the document and prompt the model to generate the output. Figure 5: Visualization of the encoder-decoder attention weights of the model with only prompts across the encoder and the decoder (left) and PSP (right). Detailed descriptions refer to Figure 2. Do inner prompts assist the model to understand the content of documents or simply increase the model\u2019s capacity? Instead of using inner-prompts, we prepended additional tunable tokens (i.e. 150 tokens) in front of the encoder and the decoder inputs. Comparison results are shown in Table 7. Despite the larger capacity, soft prompts with 150 tunable tokens before the input performed the worst, denoted as soft prompts (en.&de., 150). This suggests the inner-prompts with a few parameters do help to understand the document by prompting the structures, rather than simply add more trainable parameters to increase the model\u2019s capacity. Further insight on soft prompts across the encoder and the decoder. To verify our hypothe\fFigure 6: k-shot summarization results on XSum. Model ROUGE-1 ROUGE-2 ROUGE-L Full-Model Tuning 11.69 2.67 7.74 Pre\ufb01x-Tuning 11.76 2.63 7.93 Prompt Tuning 9.40 1.86 6.19 PSPInterval 17.16 3.36 12.65 Table 9: Zero-shot results on XSum. sis that the decoder prompts largely copy the behaviour of the encoder prompts, we shared similar embeddings of the soft prompts before the encoder and the decoder. In Table 8, we observe the Soft prompts (en.&de., shared) and (en.&de., separate) almost perform identical results. Although the parameters are only half of the original model, the performance consistently remains competitive. This shows that the shared prompts can extract important information from the document and further guide the language model to generate consistently good summaries more ef\ufb01ciently. 4.3 Analysis on Few-shot and Zero-shot Summarization To examine the performance of different methods under few-shots, we further randomly sampled number of {50, 100, 200} as the settings. Figure 6 reports a more detailed overview of all models\u2019 performance across a range of different few-shots. The ROUGE scores of our model generally outperform other baselines and remain steady across different scenarios. Especially, the PSP with only 50 examples receives the most signi\ufb01cant improvements, while the Pre\ufb01x-Tuning doesn\u2019t even work (tuning based on BARTbase) possibly due to its instability of the model. Moreover, we report the results of zero-shot on XSum in Table 9. Bene\ufb01ting from the knowledge gained in the pre-training phase, our model shows a signi\ufb01cant advantage of zero-shot adaptation in generating quality summaries. 4.4 The Performance of Pre-training on Pre\ufb01x-Tuning A crucial strategy for PSP is the pre-training of soft prompts. To give a fairly comparison, we performed pre\ufb01x pre-training for Pre\ufb01x-Tuning in the Method CNNDM XSum ROUGE-1 ROUGE-2 ROUGE-L ROUGE-1 ROUGE-2 ROUGE-L Pre\ufb01x-Tuning 37.120.15 16.590.09 26.280.06 32.180.16 11.130.08 25.500.14 Pre\ufb01x-Tuning w/ Pre. 37.350.58 16.080.37 25.950.50 33.390.10 11.610.06 26.070.09 Table 10: Test set results of Pre\ufb01x-Tuning. \u201cw/ Pre.\u201d means that we pre-trained the pre\ufb01x with pseudo data. same way with the PSP. The results are shown in Table 10. 
We can \ufb01nd that the Pre\ufb01x model obtains improvements on the XSum dataset after adopting the pre-training strategy, but underperforms the original one on the CNNDM dataset. It indicates that Pre\ufb01x-Tuning shows limited potential compared to our model. We induce that the pre-training for Pre\ufb01x-Tuning raises over-\ufb01tting risk due to its sensitivity to different data or parameter settings. 4.5 Ablation Study We conducted experiments to examine the effectiveness of the major components of our model, and Table 11 shows the ablation results across the two datasets. We observed both the prompt pretraining operation and the inner-prompts component contribute to the main model. Notably, with the removal of each component, the model becomes considerably unstable, indicated by the variance shown in the ablation results. Comparably, prompt pre-training in our model accounts for more importance on the XSum dataset whose summaries have a higher abstract level (we assume it\u2019s more \u201cdif\ufb01cult\u201d) than the CNNDM. In sum, these two components support the performance and stability of our model in terms of summarization adaption (by prompt pre-training) and structural documents understanding (by inner-prompts). Method CNNDM XSum ROUGE-1 ROUGE-2 ROUGE-L ROUGE-1 ROUGE-2 ROUGE-L PSPFixed\u2212k 38.310.15 15.940.21 25.410.25 32.810.10 11.150.10 25.480.13 w/o PP 37.300.56 15.450.39 24.930.38 32.170.16 10.690.13 25.020.21 w/o IP 37.760.28 15.220.31 24.800.40 32.590.17 11.140.17 25.460.24 w/o PP & IP 36.880.42 14.960.45 24.630.40 29.351.5 9.870.43 22.891.19 Table 11: Ablation study of PSP on two datasets. \u201cw/o\u201d means without. \u201cPP\u201d and \u201cIP\u201d are short for Prompt Pretraining and Inner-Prompts, respectively. The variance of each result is provided. 5 Related Work Few-Shot Abstractive Summarization In practical application scenarios, the lack of manual constructed document-summary pairs or labeled data makes data-driven neural models performs badly (Hu et al., 2021, 2020). Fabbri et al. (2020) condense characteristics of the target dataset into \fWikipedia data to construct pseudo-summaries. Bra\u017einskas et al. (2020) introduce plug-in networks to reproduce characteristics of the target dataset with only a small set of labeled examples. Bai et al. (2021) conduct cross-lingual summarization in a low-resource setting. Yu et al. (2021) design the second phase of pre-training on large-scale generative models before \ufb01ne-tuning. In this paper, we construct pseudo-summary corpus with heuristic rules, providing a better parameter initialization for soft prompts under few-shot settings. More importantly, we design summarization-oriented soft prompts to help the model produce few-shot summaries. Prompt Learning The emergence of GPT3 (Brown et al., 2020) introduces the concept of \u201cprompting\u201d. One only needs to assemble a task description and few examples into a prompt, and then prepend it to the task input. With the largescale frozen parameters, a pre-trained model can generate the output without any task-speci\ufb01c tuning. However, task description is error-prone while there is no uni\ufb01ed, explicit, and effective way to build these hard prompts manually (Logan IV et al., 2021). Hence, several works (Gao et al., 2020; Jiang et al., 2020; Shin et al., 2020) are proposed to generate prompts automatically, but they all restrict prompts to discrete spaces. These discrete prompts are less expressive and sub-optimal. 
To overcome the shortcomings of hard prompts, Li and Liang (2021) propose \u201cPre\ufb01x-Tuning\u201d. This method only tunes pre\ufb01x activation prepended to all transformer layers, and keeps the LM parameters frozen. To further simplify, Prompt Tuning (Lester et al., 2021) only prepends tunable tokens to the encoder input, and keeps all other parameters frozen. Logan IV et al. (2021) and Gu et al. (2021) propose to use pre-training to boost the low performance of Prompt Tuning for few-shot learning. In this work, we \ufb01t the structure of Prompt Tuning to text generation models, proposing encoder prompts, decoder prompts, and inner prompts. We successfully apply prompt tuning methods to few-shot abstractive summarization task. 6" + }, + { + "url": "http://arxiv.org/abs/2103.09954v1", + "title": "Analytic model for feature maps in the primary visual cortex", + "abstract": "A compact analytic model is proposed to describe the combined orientation\npreference (OP) and ocular dominance (OD) features of simple cells and their\nlayout in the primary visual cortex (V1). This model consists of three parts:\n(i) an anisotropic Laplacian (AL) operator that represents the local neural\nsensitivity to the orientation of visual inputs; (ii) a receptive field (RF)\noperator that models the anisotropic spatial RF that projects to a given V1\ncell over scales of a few tenths of a millimeter and combines with the AL\noperator to give an overall OP operator; and (iii) a map that describes how the\nparameters of these operators vary approximately periodically across V1. The\nparameters of the proposed model maximize the neural response at a given OP\nwith an OP tuning curve fitted to experimental results. It is found that the\nanisotropy of the AL operator does not significantly affect OP selectivity,\nwhich is dominated by the RF anisotropy, consistent with Hubel and Wiesel's\noriginal conclusions that orientation tuning width of V1 simple cell is\ninversely related to the elongation of its RF. A simplified OP-OD map is then\nconstructed to describe the approximately periodic OP-OD structure of V1 in a\ncompact form. Specifically, the map is approximated by retaining its dominant\nspatial Fourier coefficients, which are shown to suffice to reconstruct the\noverall structure of the OP-OD map. This representation is a suitable form to\nanalyze observed maps compactly and to be used in neural field theory of V1.\nApplication to independently simulated V1 structures shows that observed\nirregularities in the map correspond to a spread of dominant coefficients in a\ncircle in Fourier space.", + "authors": "Xiaochen Liu, Peter A. Robinson", + "published": "2021-03-17", + "updated": "2021-03-17", + "primary_cat": "q-bio.NC", + "cats": [ + "q-bio.NC", + "physics.bio-ph" + ], + "main_content": "Introduction V1 is the \ufb01rst cortical area that processes visual inputs from the lateral geniculate nucleus (LGN) of the thalamus before projecting output signals to higher visual areas (Miikkulainen et al., 2005). The feedforward visual pathway from the eyes to V1 involves two main processing steps: (i) light levels at a given spatial location are detected and converted into neural signals by the retina ganglion cells; and (ii) the neural signals are transmitted to V1 through the lateral geniculate nuclei (LGN) of the thalamus (Schiller and Tehovnik, 2015). 
LGN neurons have approximately circular receptive \ufb01elds with either a central ON region (activity enhanced by light incident there) surrounded by an OFF annulus (activity suppressed by light incident there), or vice versa (DeAngelis et al., 1995; Hubel and Wiesel, 1961). Similarly to other parts of the cortex, V1 can be approximated as a two-dimensional sheet when studying the spatial structure of the retinotopic map (Tov\u00e9e, 1996). V1 neurons, which respond to same eye preference or orientation preference, are arranged in columns perpendicular to the cortical surface. Columns do not have sharp boundaries; rather, feature preferences gradually vary across the surface of arXiv:2103.09954v1 [q-bio.NC] 17 Mar 2021 \fV1. These maps are overlaid such that a given neuron responds to several features (Hubel and Wiesel, 1962b, 1968, 1974a; Miikkulainen et al., 2005). Two prominent feature preferences of V1 cells are their layout in a combined the OP-OD map, as seen in Fig. 1(a), which shows an example from experiment (Blasdel, 1992). Hubel and Wiesel (1968) found that neurons that respond preferentially to stimuli from one eye or the other are arranged in alternating bands across layer 4C of V1 in macaque monkeys, and these bands are termed left and right OD columns. The average OD column width in mammals ranges from \u223c0.5\u22121 mm (Adams et al., 2007; Horton and Adams, 2005). An OP column, sometimes called an iso-orientation slab, comprises neurons that respond to similar edge orientation in a visual \ufb01eld. Each OP column not only spans several cortical layers vertically, but also extends 25 \u221250 \u00b5m laterally in monkey. Moreover, OP normally varies continuously as a function of the cortical position, covering the complete range 0\u25e6to 180\u25e6of edge orientations (Hubel and Wiesel, 1974a, 1977; Obermayer and Blasdel, 1993). Optical imaging reveals that OP columns are quasiperiodic, and are arranged as pinwheels, within which each of the OPs varies azimuthally around a center called a singularity (Blasdel, 1992; Bonhoe\ufb00er and Grinvald, 1991, 1993; Swindale, 1996). Furthermore, the OP in each pinwheel increases either clockwise (negative pinwheel) or counterclockwise (positive pinwheel) and most neighboring pinwheels have opposite signs (G\u00f6tz, 1987, 1988). Examples of positive and negative OP pinwheels are outlined in Fig. 1(a). The superimposed OD and OP maps have speci\ufb01c relationships, including that: (i) most pinwheels are centered near the middle of OD stripes; (ii) linear zones, which are formed by near-parallel OP columns, usually connect two singularities and cross the border of OD stripes at right angles (Bartfeld and Grinvald, 1992), as highlighted in the white rectangle in Fig. 1(a). Additionally, various studies (Chklovskii and Koulakov, 2004; Koulakov and Chklovskii, 2001; Mitchison, 1991, 1995) have argued that the appearance of the OP-OD map re\ufb02ects wiring optimization of local neuron connectivity, in which the distance between neurons with similar feature preference is kept as small as possible. According to the quasiperiodicity of the feature preference of V1, previous studies (Bresslo\ufb00and Cowan, 2002; Veltz et al., 2015) suggested that the functional maps of V1 can be approximated by a spatially periodic network of fundamental domains, each of which is called a hypercolumn. 
Each hypercolumn represents a small piece of V1, which consists of left and right OD stripes with a pair of positive and negative pinwheels in each, so as to ensure the complete coverage of the OP and OD selectivity. Orientation selectivity plays a primary role in early-stage visual processing. One way to characterize the preferred orientations of a single neuron is to measure the tuning curve from its neuronal response to visual stimuli with various orientations. Figure 1(b) shows an experimental orientation tuning curve obtained from single unit responses in area 17 of adult cat, with optimal orientation angle of 90\u25e6 (Swindale, 1998). A typical full width at half maximum (FWHM) of such a curve is \u223c35\u25e6. The mapping of inputs from the retina to V1 is organized in a retinotopic manner. Visual information 2 \fFigure 1: (a) Combined OP-OD map of macaque monkey, adapted from Blasdel (1992). The borders of OD stripes are shown in solid black, and singularities (pinwheel centers) are labeled by white stars. Oriented color bars indicate di\ufb00erent OPs. The blue and red circles outline examples of positive and negative OP pinwheels, and the white rectangle outlines a linear zone. (b) Experimental orientation tuning curve, adapted from Swindale (1998). The preferred orientation angel is around 90\u25e6. The dots are the data points, and the solid curve is the \ufb01tted tuning curve using a von Mises function. in nearby regions within the visual \ufb01eld is projected to neighboring ganglion cells in the retina. This spatial arrangement is maintained through the LGN to V1, where the visual signals are further processed by neighboring V1 neurons. The subregion in the visual \ufb01eld, within which certain features of the visual object tend to evoke or suppress neural \ufb01ring of a given V1 neuron, is termed the receptive \ufb01eld (RF) of that neuron (Hubel and Wiesel, 1962a; Schiller and Tehovnik, 2015; Skottun et al., 1991; Smith et al., 2001; Tootell et al., 1982). This paper focuses on the RF of V1 simple cells, which respond best to oriented bars. The spatial RF of a V1 simple cell has separate ON (excitatory) and OFF (inhibitory) subareas, which are elongated in a speci\ufb01c orientation, and these subareas relate to the ON and OFF regions of LGN RFs that project to the V1 RF of a given cell. The neuron will be excited when light illuminates the ON subarea, and be depressed when light exposed to the OFF subarea (Hubel and 3 \fWiesel, 1968; Mechler and Ringach, 2002). It was \ufb01rst proposed by Hubel and Wiesel (1962a) that the RF of a V1 simple cell can be predicted by a feed-forward model. Speci\ufb01cally, they suggested that RF of a V1 simple cell is formed by by combining the circular RFs of several LGN cells to produce an elongated RF with central ON region \ufb02anked by OFF regions. Figure 2 shows a schematic of this feed-forward model, which produces a RF with a three-lobed pattern, as shown in the bottom left corner. In addition, several studies have suggested that the lateral intracortical excitatory and inhibitory connectivities from surrounding neurons also play important roles in a cell\u2019s orientation tuning and RF formation (Ferster and Miller, 2000; Finn et al., 2007; Gardner et al., 1999; Mari\u00f1o et al., 2005; Moore IV and Freeman, 2012). Moreover, some studies have discussed the relationship between the size of the RF and the width of the orientation tuning curve (Hubel and Wiesel, 1962a; Lampl et al., 2001). 
These authors predicted that the width of orientation tuning curve should be inversely associated with the elongation of the RF. Figure 2: Schematic of the elongated RF of a V1 simple cell, showing the convergence of several LGN RFs into the V1 RF. The four cells in the right half of the \ufb01gure represents LGN cells with circular ON center, OFF surround RFs. The outputs of these LGN cells project to form the elongated V1 RF shown at the bottom left. Adapted from Hubel and Wiesel (1962a). Numerous experiments have used optical imaging or functional magnetic resonance imaging (fMRI) to reveal the spatial structure of the OP-OD map in mammals including humans (Bartfeld and Grinvald, 1992; Blasdel, 1992; Bonhoe\ufb00er and Grinvald, 1991, 1993; Bosking et al., 1997; Obermayer and Blasdel, 1993; Yacoub et al., 2008). Additionally, a number of neural network models have been proposed for the development of OP-OD maps and simulated numerically (Bednar, 2009; Erwin et al., 1995; Obermayer et al., 1992b; Swindale, 1992; Miikkulainen et al., 2005; Stevens et al., 2013). The resulting OP-OD maps obtained from experiments or simulation are only semiregular, as illustrated in Fig. 1(a). Hence, it usually requires many data points to describe the structure of such maps, which impedes understanding and requires extensive computation to integrate OP-OD maps into models of neural activity in V1. Such 4 \fmodels would bene\ufb01t from a compact analytic representation of the OP-OD map; e.g., to incorporate its structure into existing spatiotemporal correlation analyses of gamma-band oscillations of neural activity using neural \ufb01eld theory (NFT), or to understand propagation via patchy neural connections in V1 (Robinson, 2005, 2006, 2007; Liu et al., 2020). These issues motivate us to derive a compact analytical representation of the OP-OD map for use in prediction and analysis of brain activity, and for analysis of properties of OP maps obtained from in-vivo experiments or computer simulations. To achieve the above aim, we \ufb01rst note that it has long been suggested that oriented visual features such as edges can be detected by the Laplacian operator (Marr, 1982; Marr and Hildreth, 1980; Marr and Ullman, 1981; Ratli\ufb00, 1965; Young, 1987). Hence, we approximate the local sensitivity of V1 neuron to the orientation of stimuli by such an operator, which we allow to be anisotropic. This operator incorporates the details of LGN receptive \ufb01elds, as projected to the cortex, and any anisotropic response at V1, as implemented through local excitatory and inhibitory wiring between neurons. Secondly, we introduce an RF operator to approximate the anisotropy of the visual region that projects activity to a given V1 simple cell. The combined Laplacian and RF operators yield an overall OP operator at each point, whose parameters can be \ufb01tted to yield observed OP tuning widths. Thirdly, we allow the properties of the OP operator to vary across V1 approximately periodically, as described by dominant Fourier coe\ufb03cients. The paper is structured as follows: Section 2 approximates the hypercolumn with its structure being compatible with the general features of OP-OD map. Meanwhile, it also is compatible with NFT. In Sec. 3, we describe the local sensitivity to stimulus orientation by applying an anisotropic Laplacian operator to the stimulus. A RF operator is then introduced to project activity from neighboring neurons to a given point. 
The resulting combined OP operator maximizes the response of V1 neurons to their preferred stimulus orientation and its parameters are adjusted to match experimental tuning curves. In Sec. 4, we Fourier decompose the OP variation on the period of an idealized hypercolumn and investigate the properties of the resulting Fourier coe\ufb03cients. The results are then applied to compactly represent and analyze OP maps generated from widely used simulation models are then investigated in Sec. 5. Finally, the results are discussed and summarized in Sec. 6. 2 Hypercolumns and OD-OP maps In this section, an approximate OP-OD map within a hypercolumn is proposed, which reproduces the main aspects of observed maps. It is Fourier decomposed in later sections to generate a sparse set of Fourier coe\ufb03cients that can be used in NFT to study activity across many hypercolumns. 5 \f2.1 Hypercolumn arrangement All feature preferences within a small visual \ufb01eld are mapped to a hypercolumn in V1 (Hubel and Wiesel, 1962b, 1974a; Miikkulainen et al., 2005). Based on the pinwheel model introduced by De Valois and De Valois (1990), we approximate the hypercolumn as a square domain, which consists of left and right OD stripes of equal width; each OD stripe contains a pair of positive and negative pinwheels for continuous (except at pinwheel centers) feature preference coverage within a hypercolumn. The hypercolumn is consistent with general observations of visual cortical map formation from experimental studies (Adams et al., 2007; Bartfeld and Grinvald, 1992; Bonhoe\ufb00er and Grinvald, 1991; Erwin et al., 1995; M\u00fcller et al., 2000; Obermayer et al., 1992a; Obermayer and Blasdel, 1993), including that: (i) left-eye and right-eye OD columns are arranged as alternating stripes in V1 of average width a \u22481 mm; (ii) OP angles are arranged as pinwheels; (iii) each pinwheel center coincides with the center of its OD band; (iv) OP is continuous at OD boundaries; and (iv) neighboring pinwheels have opposite signs. A + sign denotes a counterclockwise increase of OP around the pinwheel center, whereas a \u2212sign denotes a clockwise increase. According to the rules described above, two distinct arrangements of the hypercolumn are possible, as illustrated in Fig. 3, where each hypercolumn is 2a wide. Pinwheel arrangement I in Fig. 3(a) is used for further study in later sections, without loss of generality. However, other arrangements produce analogous results due to the fact that the hypercolumn are assumed to be continuous at boundary and periodic across V1, and other hypercolumn arrangements can be obtained by either rotating the pinwheels clockwise/counterclockwise or swapping the left/right column or top/bottom row horizontally or vertically of Fig. 3(a). Note also that one is free to consider a hypercolumn whose lower left corner is at the center of the current hypercolumn in Fig. 3(a). Such a hypercolumn has exactly the same structure as in Fig. 3(b), except for interchange of the leftand right-eye columns. Figure 3: Schematics of possible hypercolumn arrangements. The two vertical columns of each hypercolumn represent the left and right OD bands. The orientated bars represent the OP within a pinwheel, and the +/\u2212signs indicate the polarity of the pinwheels. (a) Pinwheel arrangement I. (b) Pinwheel arrangement II. 6 \f2.2 Single OP pinwheel structure Our model determines the OP as a function of cortical location within each hypercolumn. 
The spatial coordinates of the hypercolumn are set by placing the origin of the coordinates at the center of the hypercolumn, whose boundaries are at x = \u00b1a and y = \u00b1a, as shown in Fig. 3. The four pinwheels in a single hypercolumn are modeled by \ufb01rst generating the right-top pinwheel, and other pinwheels are produced by mirroring the right-top pinwheel across the x-axis, the y-axis, and then both. When generating the right-top pinwheel, the x and y coordinates range from 0 to a and the OP angle \u03d5(x, y) at each cortical position (x, y) is approximated by the inverse tangent function de\ufb01ned in Eq. (1). \u03d5(x, y) = 1 2 \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 arctan \u0010 y\u2212y0 x\u2212x0 \u0011 , x > x0, y > y0 \u03c0 2, x = x0, y > y0 arctan \u0010 y\u2212y0 x\u2212x0 \u0011 + \u03c0, x < x0, y > y0 3\u03c0 2 , x = x0, y < y0 arctan \u0010 y\u2212y0 x\u2212x0 \u0011 + 2\u03c0, x > x0, y < y0 (1) where (x0, y0) = (a/2, a/2) is the center of the right-top pinwheel. The 1/2 coe\ufb03cient in front of the inverse tangent functions is to make the range of \u03d5(x, y) to be 0\u25e6to 180\u25e6. The negative and positive pinwheels on the top half of hypercolumn are illustrated in Figs 4(a) and (b). Figure 4(c) shows a hypercolumn containing four pinwheels. As mentioned previously, the OP and OD features are approximated as continuous and periodic for the moment, so V1 can be approximated as lattice of hypercolumns. Thus, we can construct an array of our approximated hypercolumns to represent a piece of V1, which is shown in Fig. 4(d). In such an array, the OP structure resembles maps reconstructed from in-vivo experiments (e.g., Fig. 1(a)), although the OD stripes are approximated as straight here (Blasdel, 1992; Bonhoe\ufb00er and Grinvald, 1991, 1993; Obermayer and Blasdel, 1993). 3 OP Operator In this section, we derive analytic representations for the OP of V1 neurons. Firstly, we adopt an anisotropic Laplacian operator to describe the local response of the system to an edge, which can include both the near-isotropic response of the LGN plus any anisotropy introduced by local wiring in V1. We also introduce the RF operator that describes the projection of activity from nearby neurons in an anistropic surrounding region. The combination of these two operators gives an over OP operator, whose parameters we \ufb01t to match experimental tuning curves. 7 \fFigure 4: Schematics of visual feature preference maps in V1 with color bars indicating OP in degrees. (a) Negative pinwheel. (b) Positive pinwheel. Both pinwheels are 180\u25e6periodic. (c) Hypercolumn. The vertical line divides the hypercolumn into left and right OD columns of equal width, while the horizontal and vertical lines split the hypercolumn into four squares, each containing one OP pinwheel. The short bars highlight the OP at various locations. The 0\u25e6and 180\u25e6orientations both appear as horizontal bars. (d) Periodic spatial structure of OP and OD columns across a small piece of V1 comprising 25 hypercolumns. Dashed lines bound left (L) and right (R) OD columns. One pinwheel is outlined in white and one hypercolumn is outlined in black. 
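Eq. (1) and the mirroring construction of Sec. 2.2 are straightforward to reproduce numerically. The Python sketch below (grid resolution, array conventions, and function names are our own choices) builds the top-right pinwheel from Eq. (1), mirrors it into a hypercolumn, and tiles hypercolumns into a small V1 patch as in Fig. 4(d).

```python
import numpy as np

def pinwheel(n=100, a=1.0):
    """OP angles (degrees, 0-180) of the top-right pinwheel of a hypercolumn of
    half-width a, sampled on an n x n grid over [0, a] x [0, a].  This is Eq. (1):
    phi = 0.5 * azimuth around the pinwheel centre (a/2, a/2)."""
    x, y = np.meshgrid(np.linspace(0.0, a, n), np.linspace(0.0, a, n), indexing="xy")
    azimuth = np.mod(np.arctan2(y - a / 2, x - a / 2), 2 * np.pi)   # in [0, 2*pi)
    return np.degrees(0.5 * azimuth)                                # in [0, 180)

def hypercolumn(n=100, a=1.0):
    """2a x 2a hypercolumn: the top-right pinwheel mirrored across the y-axis and
    then the x-axis (Sec. 2.2), so that neighbouring pinwheels have opposite signs."""
    tr = pinwheel(n, a)
    top = np.hstack([tr[:, ::-1], tr])       # mirror across the y-axis
    return np.vstack([top[::-1, :], top])    # mirror across the x-axis

def v1_patch(n_hc=5, n=100, a=1.0):
    """Periodic lattice of n_hc x n_hc hypercolumns, as in Fig. 4(d)."""
    return np.tile(hypercolumn(n, a), (n_hc, n_hc))

op = v1_patch()
print(op.shape, op.min(), op.max())   # size of the patch and the OP range in degrees
```

Plotting the returned array with a cyclic colormap reproduces the pinwheel layout of Fig. 4(d); OP is continuous across pinwheel and hypercolumn boundaries because each neighbouring pinwheel is a mirror image of the previous one.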
3.1 Anisotropic Laplacian (AL) Operator In computer vision, edges with di\ufb00erent orientation can be detected by linearly combining the secondorder partial derivatives of the input (Marr and Hildreth, 1980; Marr, 1982; Torre and Poggio, 1986). Similarly, we can model the local sensitivity of V1 neurons to stimulus orientation, by calculating the weighted sum of the second-order partial derivatives of activity projected from the oriented bar stimuli, using rotated axes for simplicity. Figure 5 shows an original x \u2212y coordinate system, and x\u2032 \u2212y\u2032 axes obtained by rotating the original axes by an OP angle \u03d5 that is the angle of an oriented bar. A short bar in an image gives rise to localized, anisotropic intensity changes (Torre and Poggio, 1986). In the rotated coordinates, this 2-dimensional intensity change can be detected by the weighted second order partial derivatives in the x\u2032 and y\u2032 directions. Hence, we de\ufb01ne an anisotropic Laplacian 8 \fFigure 5: Coordinates used to analyze the anisotropic Laplacian operator. Original axes x and y are shown in solid, while the rotated axes x\u2032 and y\u2032 are dashed. The input stimulus is a bar oriented at angle \u03d5. operator P as a weighted linear combination of the second order partial derivatives: P = a2 \u22022 \u2202x\u20322 + b2 \u22022 \u2202y\u20322 , (2) where the rotated coordinates satisfy x\u2032 = x cos \u03d5 + y sin \u03d5 , (3) y\u2032 = \u2212x sin \u03d5 + y cos \u03d5 , (4) where the OP \u03d5 ranges from 0 to \u03c0, and a2 and b2 are constants. Taking the Fourier transform of both sides of Eq. (2) yields P(k) = \u2212a2k2 x\u2032 \u2212b2k2 y\u2032 , (5) where k2 x\u2032 = (kx cos \u03d5 + ky sin \u03d5)2 , (6) k2 y\u2032 = (\u2212kx sin \u03d5 + ky cos \u03d5)2 . (7) Substituting Eqs (6) and (7) into Eq. (5) and then performing inverse Fourier transform yields P = (a2 cos2 \u03d5 + b2 sin2 \u03d5) \u22022 \u2202x2 + (a2 \u2212b2) sin (2\u03d5) \u22022 \u2202x\u2202y + (a2 sin2 \u03d5 + b2 cos2 \u03d5) \u22022 \u2202y2 . (8) Since the stimulus S is oriented along the x\u2032 axis, we would need more weight on \u22022/\u2202y\u20322 than on \u22022/\u2202x\u20322 9 \fto have a maximal response to the desired orientation; this implies that b2 \u2265a2. 3.2 Receptive \ufb01eld (RF) operator Previous studies (Ferster and Miller, 2000; Finn et al., 2007; Hubel and Wiesel, 1977; Mari\u00f1o et al., 2005; Toth et al., 1997; Schummers et al., 2002; Troyer et al., 1998) have suggested that the orientation tuning of V1 neuron is altered by the excitatory (inhibitory) inputs from locally connected neighboring neurons (either from the same orientation column or from neighboring columns). Thus, we now introduce an RF operator to model the anisotropic RF that projects to V1 cells. This consists of a weight function G(R \u2212r), which describes the strength of the neural projection from R to a cell at r. Figure 6 shows the schematic of the RF operator on a piece of cortex. It shows the location r of the cells whose OP is being approximated and locations R whose activity projects to r. Figure 6: Schematic of operators leading to the OP response at cells at r on the cortex. An oriented bar is mapped to locations R in the receptive \ufb01eld, which projects to r via the anisotropic weight function G(r \u2212R) indicated by the solid elliptic contour. The local anisotropic Laplacian operator P then acts at r. 
The arrow shows how the neural response are project to measurement point r via the weight function. The xg and yg a a are the major and minor axis of the weight function, respectively. We approximate G(r \u2212R) as an anisotropic Gaussian function whose long axis is oriented at the local OP \u03d5 at R (Jones and Palmer, 1987). If R = (xR, yR) and r = (x, y), we have G(r \u2212R) = 1 2\u03c0\u03c3x\u03c3y exp \u0014 \u22121 2 \u0012x2 g \u03c32 x + y2 g \u03c32 y \u0013\u0015 , (9) where xg = (x \u2212xR) cos \u03d5 + (y \u2212yR) sin \u03d5, (10) yg = \u2212(x \u2212xR) sin \u03d5 + (y \u2212yR) cos \u03d5. (11) The appropriate width of G(r\u2212R) along the xg axis is determined by two factors: (i) the approximate 10 \fRF size near the fovea measured from experiments. Previous studies (Dow et al., 1981; Hubel and Wiesel, 1974b; Keliris et al., 2019) yielded an RF size of \u22480.082\u25e6at the eccentricity of 1\u25e6in macaque Monkey. We then transform the RF size in visual degree to the corresponding cortical size in mm by adopting the magni\ufb01cation factor (mm/deg) from Horton and Hoyt (1991), where the magni\ufb01cation factor M in Monkey in approximated as M = 12 E + 0.75, (12) where E is the eccentricity in degrees. The corresponding cortical RF size is then \u223c0.56 mm; (ii) the width of the weight function should ensure an approximate 30\u25e6fall o\ufb00from the measuring neuron\u2019s maximum response when it is activated by its optimal orientation (De Valois et al., 1982; Moore IV and Freeman, 2012; Gur et al., 2005; Ringach et al., 2002; Swindale, 1998). In order to satisfy both factors, we choose \u03c3x = 0.16 mm. 3.3 Combined OP operator Here we combine the anisotropic Laplacian and RF operators from above to obtain an overall OP operator and adjust its parameters to match experimental OP tuning curves. The input at location R is approximated by applying the AL operator to the stimulus S (i.e., P{S(R)} in Fig. 6), so that the local orientation sensitivity is picked out, while the weight function G(R \u2212r) determines how much response from locations R are projected to r. Hence, the response at r can be approximated by convolving the weight function with the OP operator on the stimulus at di\ufb00erent location R. It can be written as I(r) = Z G(r \u2212R)P {S(R)} dR . (13) In the Fourier domain, the convolution theorem yields I(k) = G(k)P(k)S(k) . (14) where the algebraic function P(k) is de\ufb01ned in Eq. (5). We can thus reverse the order of P(k) and G(k) on the right hand side of Eq. (14) and inverse Fourier transform to obtain I(r) = Z P {G(r \u2212R)} S(R)dR . (15) Hence, the response at r becomes the convolution of a new combined OP operator P {G(r \u2212R)} with the stimulus itself. This result agrees with previous studies (Graham, 1989; Movshon et al., 1978), which indicated that V1 simple cell can be modeled as a linear \ufb01lter and its responses are computed as the weighted integral of the Laplacian-transformed stimulus, with the weights given by the RF pattern. 11 \fFigure 7(a) shows a contour plot of the RF operator P {G(r \u2212R)} with a preferred orientation angle of \u03d5 = 22.5\u25e6. The operator has an elongated three-lobe pattern with its major axis along the direction \u03d5, with an ON center lobe and two OFF side lobes. Figure 7(b) shows a measured RF of V1 simple cells of macaque monkey (Ringach, 2002), showing that our OP operator closely resembles the experimental one in spatial structure. 
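As a concrete check on Eq. (15), the combined operator P{G(r − R)} can be evaluated analytically on a grid and applied to an oriented-bar stimulus. The sketch below uses the σx = 0.16 mm quoted in Sec. 3.2 together with an illustrative σx/σy ratio; the grid extent, bar thickness, and overall sign convention (the response is negated so that the preferred orientation yields the largest positive value) are our own choices.

```python
import numpy as np

def op_kernel(phi_deg, sigma_x=0.16, sigma_y=0.16 / 2.6, a2=1.0, b2=1.0,
              half_width=0.5, n=101):
    """Combined OP operator P{G(r - R)} of Eq. (15), evaluated analytically on an
    n x n grid spanning [-half_width, half_width] mm.  The Gaussian envelope G of
    Eq. (9) is elongated along the preferred orientation phi_deg, and the anisotropic
    Laplacian P = a2 d2/dx'2 + b2 d2/dy'2 acts along the same rotated axes, using
    d2G/dxg2 = G (xg^2/sx^4 - 1/sx^2) and likewise for yg."""
    phi = np.radians(phi_deg)
    coords = np.linspace(-half_width, half_width, n)
    x, y = np.meshgrid(coords, coords, indexing="xy")
    xg = x * np.cos(phi) + y * np.sin(phi)      # along the preferred orientation
    yg = -x * np.sin(phi) + y * np.cos(phi)     # across it
    G = np.exp(-0.5 * (xg**2 / sigma_x**2 + yg**2 / sigma_y**2)) / (2 * np.pi * sigma_x * sigma_y)
    return G * (a2 * (xg**2 / sigma_x**4 - 1.0 / sigma_x**2)
                + b2 * (yg**2 / sigma_y**4 - 1.0 / sigma_y**2))

def oriented_bar(theta_deg, half_width=0.5, n=101, half_thickness=0.03):
    """Binary bar stimulus through the origin at orientation theta_deg."""
    th = np.radians(theta_deg)
    coords = np.linspace(-half_width, half_width, n)
    x, y = np.meshgrid(coords, coords, indexing="xy")
    return (np.abs(-x * np.sin(th) + y * np.cos(th)) < half_thickness).astype(float)

# Response of a cell with OP = 45 deg to bars at several orientations (Eq. 15 at r = 0);
# the sign is flipped so that the preferred orientation gives the largest positive value.
kernel = op_kernel(45.0)
for theta in (0, 30, 45, 60, 90, 135):
    print(theta, -np.sum(kernel * oriented_bar(theta)))
```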
Figure 7: (a) Receptive \ufb01eld operator P {G(R \u2212r)} with OP=22.5\u25e6. The color bar indicates the amplitude of the operator. (b) RF of macaque monkey V1 simple cell from experiment (Ringach, 2002). The color bar indicates normalized impulse response strength of the neurons. 3.4 Angular selectivity of the OP operator The full width at half maximum (FWHM) of the bell-shaped OP angle tuning curve plotted from the neuron response by convolving the RF operator and the stimulus can be used to parameterize the OP selectivity of the RF operator. The parameters are tunable by adjusting the ratio \u03c32 x/\u03c32 y of the weight function G(r\u2212R) de\ufb01ned in Eq. (9), and the ratio b2/a2 of the anisotropic Laplacian operator P de\ufb01ned in Eq. (8). Hence, we can \ufb01nd the optimal parameter values of the combined OP operator by adjusting its parameters so its tuning curve matches experiment. We \ufb01rst vary the values of a2 and b2 while keeping their sum constant by writing a2 = sin2 \u03c8 , (16) b2 = cos2 \u03c8 , (17) where \u03c8 ranges from 0 to \u03c0/4 to ensure b2 \u2265a2. We also vary the ratio \u03c3x/\u03c3y from 1.5 to 6.5 to elongate the weight function along the xg axis de\ufb01ned in Eq. (10). Figure 8 shows the resulting contour map of the FWHM vs. b2/a2 and \u03c3x/\u03c3y. The FWHM varies rapidly with \u03c3x/\u03c3y, with sharper tuning as \u03c3x/\u03c3y increases. In contrast, the FWHM only sharpens slightly when b2/a2 increases. In order to illustrate the insensitivity of the FWHM to b/a more clearly, Fig. 9(a) shows the nor12 \fFigure 8: Contour map of the FWHM of the OP tuning curve vs b2/a2, and \u03c3x/\u03c3y. The preferred orientation angle at measurement point is 135\u25e6. The value of \u03c3x/\u03c3y is given by x axis, and the ratio of b2 and a2 is given by y axis. The color bar represents the FWHM width in degrees. malized tuning curves for \ufb01xed \u03c3x/\u03c3y = 2.5, varying b2/a2 from 1 to 100. The FWHM decreases by only \u223c2\u25e6when b2/a2 changes from 1 to 5, and it does not decrease signi\ufb01cantly further for larger b2/a2. The reason for this is that the RF operator envelope de\ufb01ned by G(r \u2212R) limits the e\ufb00ective lengths of its three lobes to the Gaussian envelope\u2019s characteristic width, so they do not change much when b2/a2 increases. This agrees with the predictions of previous studies, which argued that the OP tuning width of a V1 simple cell varies inversely with the size of its RF (Hubel and Wiesel, 1962a; Lampl et al., 2001). It is also consistent with the Gaussian derivative model proposed by Young et al. (2001) for modeling the spatiotemporal RF of V1 cells. Our results also match Hubel and Wiesel\u2019s feed-forward model, in which the overall V1 RF results from the net e\ufb00ect of aggregating isotropic LGN RFs via anisotropic connections in V1 (Hubel and Wiesel, 1962a). The above analysis implies that we can simplify the OP operator by setting b2 = a2 because their ratio does not a\ufb00ect the OP tuning width signi\ufb01cantly. Then the OP operator P {G(r \u2212R)} becomes the Laplacian of the weight function L {G(r \u2212R)} and the tuning width is controlled by the elongation of the weight function G(r \u2212R). Previous studies suggested that the FWHM of orientation tuning curve of most V1 neurons is 35\u25e6 to 40\u25e6(De Valois et al., 1982; Gur et al., 2005; Ringach et al., 2002; Swindale, 1996). This corresponds to \u03c3x/\u03c3y ranging roughly from 2.3 to 3.2 in Fig. 8. 
Thus, to be consistent with experimental results, we choose \u03c3x/\u03c3y = 2.6 and b2 = a2, which gives a FWHM of 37\u25e6and the tuning curves shown in Fig. 9(b). 13 \fFigure 9: (a) Normalized tuning curves with b2/a2 set to 1 (blue), 2 (red), 10 (yellow), 20 (purple), and 100 (green); and \ufb01xing \u03c3x/\u03c3y = 2.5. The preferred orientation angle is set to 45\u25e6. (b) OP tuning curves with orientation angles 0\u25e6(black), 30\u25e6(red), 60\u25e6(green), 90\u25e6(blue), 120\u25e6(yellow), 150\u25e6(purple), and 180\u25e6(black), with \u03c3x/\u03c3y = 2.6 and b2/a2 = 1. 3.5 Tuning curves vs. Distance to Pinwheel Center The tuning curve of a cell (e.g., located at r0) describes its responses to di\ufb00erent OPs. We compute the overall response of the cell for a particular OP [i.e., IOP(r0)] by taking a weighted average of the neural responses within a small circular region surrounding the cell and tightly coupled to it; this is achieved by integrating the responses of all the cells with a Gaussian weight function over the region: IOP(r0) = Z I(r)W(r \u2212r0)dr , (18) where W(r \u2212r0) = 1 2\u03c0\u03c32 r exp \u0014 \u2212(r \u2212r0)2 2\u03c32 r \u0015 . (19) 14 \fThe width of the weight function is set to 40 \u00b5m, to approximate the characteristic width of an OP microcolumn and the experimental range of pinwheel-center e\ufb00ects on OP selectivity (Maldonado et al., 1997; Nauhaus et al., 2008; Obermayer and Blasdel, 1993; Ohki et al., 2006). Figure 10: Locations of cells in a hypercolumn for computing the tuning curves. Cells near a pinwheel center and in an iso-orientation domain are marked with crosses, and the circle around each indicates the characteristic width of the integration region. The color bar shows the orientation angle in degrees. We consider two cases of tuning curves for cells with di\ufb00erent locations in the hypercolumn, one near a pinwheel center and another one in an iso-orientation domain. These locations are marked with crosses in Fig. 10, and the circle around each cross indicates the characteristic width of the weight function in Eq. (18). Figure 11(a) shows the resulting tuning curve of the overall responses of the cell located in an iso-orientation domain (i.e., the location marked with black cross in Fig. 10) with preferred orientation \u224860\u25e6. It is sharply peaked at the preferred angle with FWHM \u224841\u25e6. In Fig. 11(b), we plot the tuning curves for an array of cells that are around the measurement site within the circular region. Since all the cells are located in the iso-orientation domain, they have very similar orientation preferences and tuning curves. The overall response of the cell near the pinwheel center is plotted in Fig. 11(c), it is much broader than the tuning curve shown in Fig. 11(a) due to the fact that we average the responses from neurons with a wide range of OPs, as shown in Fig. 11(d). Our predictions agree with the experimental results (Maldonado et al., 1997; Ohki et al., 2006), who found that individual neurons near the pinwheel center are just as orientation selective as the ones in the iso-orientation domain, and the overall broadly tuned response is the averaged response of nearby cells with a wide range of OPs. We have also investigated how the tuning width and response strength vary with distance from the pinwheel center, and compare them with experiment in Fig. 12. We calculate half width at half maximum (HWHM) here, in order to be consistent with the experimental plots. 
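The spatial averaging of Eqs (18) and (19) and the width estimate can be sketched as follows. For brevity, each cell's tuning curve is replaced by a von Mises stand-in peaked at that cell's OP (its width chosen to give roughly the observed 35-40 degree single-cell FWHM) rather than by the full OP-operator response, and the measurement sites and grid sampling are illustrative.

```python
import numpy as np

def pinwheel_op(x, y, a=1.0):
    """OP (radians, 0-pi) from Eq. (1), pinwheel centred at (a/2, a/2)."""
    return 0.5 * np.mod(np.arctan2(y - a / 2, x - a / 2), 2 * np.pi)

def cell_tuning(op, thetas, kappa=4.0):
    """Von-Mises-shaped single-cell tuning curve peaked at the cell's OP; kappa = 4
    gives a FWHM of about 35 degrees, standing in for the OP-operator response."""
    return np.exp(kappa * (np.cos(2.0 * (thetas - op)) - 1.0))

def averaged_tuning(r0, thetas, sigma_r=0.04, n=61):
    """Eqs (18)-(19): Gaussian-weighted average (sigma_r = 40 um = 0.04 mm) of the
    tuning curves of cells sampled in a +-3 sigma_r neighbourhood of point r0."""
    xs = np.linspace(r0[0] - 3 * sigma_r, r0[0] + 3 * sigma_r, n)
    ys = np.linspace(r0[1] - 3 * sigma_r, r0[1] + 3 * sigma_r, n)
    x, y = np.meshgrid(xs, ys, indexing="xy")
    w = np.exp(-((x - r0[0])**2 + (y - r0[1])**2) / (2 * sigma_r**2))
    curves = cell_tuning(pinwheel_op(x, y)[..., None], thetas[None, None, :])
    return np.tensordot(w, curves, axes=([0, 1], [0, 1])) / w.sum()

def fwhm_deg(thetas, curve):
    """Simple FWHM estimate (assumes the peak sits away from the 0/180 degree wrap)."""
    half = curve.min() + 0.5 * (curve.max() - curve.min())
    above = thetas[curve >= half]
    return np.degrees(above.max() - above.min())

thetas = np.linspace(0.0, np.pi, 181)
broad = averaged_tuning((0.50, 0.52), thetas)   # measurement site near the pinwheel centre
sharp = averaged_tuning((0.85, 0.85), thetas)   # site inside an iso-orientation domain
print("FWHM near pinwheel centre (deg):", round(fwhm_deg(thetas, broad), 1))
print("FWHM in iso-orientation domain (deg):", round(fwhm_deg(thetas, sharp), 1))
```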
As expected, our predicted HWHM 15 \fFigure 11: (a) Tuning curve of averaged responses at measurement site located in iso-orientation domain (i.e. marked as black cross in Fig. 10). (b) Tuning curves of all the cells surrounding the measurement site within the circular region in iso-orientation domain. (c) Tuning curve of averaged responses at measurement site located near pinwheel center (i.e. marked as white cross in Fig. 10). (d) Tuning curves of all the cells surrounding the measurement site within the circular region near pinwheel center. decreases when moving away from the pinwheel center to an iso-orientation domain, while the response strength increases with the distance (Fig. 12(b)). Both results match the experimental \ufb01ndings shown in Fig. 12(a), except the experimental HWHM in iso-orientation domain is wider than ours; However, our plots do not reproduce the overshoot and dip in the responses strength and HWHM curves, respectively, shown in Fig. 12(a). In order to achieve a better \ufb01t to the experimental data, we thus try the Mexican hat function, Wmex(r \u2212r0) = \u0014 1 \u2212(r \u2212r0)2 2\u03c32 r \u0015 exp \u0014 \u2212(r \u2212r0)2 2\u03c32 r \u0015 . (20) as the weight function to average the responses, and the resulting plots are shown in Fig. 12(c). Overshoot and dip features are visible in this case, implying a better match. Thus, it is potentially possible to deduce the shape of the weight function from experimental results such as these, but detailed exploration is beyond the scope of the present paper. 16 \fFigure 12: (a) Experimental HWHM and response strength vs. distance in \u00b5m from pinwheel center from Swindale et al. (2003), averaged over 13 pinwheels. The \ufb01lled circles shows the response strength in arbitrary units, while the open circles show the HWHM in degrees and the bottom-most curve shows the baseline activity. (b) Predicted HWHM vs. distance (blue) and Responses strength vs. distance (orange) from pinwheel center, by using Gaussian function as weight function for averaging the responses. (c) Predicted HWHM vs. distance (blue) and Responses strength vs. distance (orange) from pinwheel center, by using Mexican hat function as weight function for averaging the responses. 17 \f4 Fourier decomposition of the OP-OD Map We seek a representation of the OP map in the Fourier domain so that we can apply it to compactly represent experimental data and to study the spatiotemporal neural activity patterns in periodic V1 structures using NFT, which requires such Fourier coe\ufb03cients as input. Thus, in this section, we decompose the OP-OD map of the hypercolumn that is de\ufb01ned in Sec. 2 in the Fourier domain, derive the Fourier coe\ufb03cients that represents the spatial frequency components of the OP-OD map structure and discuss their properties. We also determine the least number of Fourier coe\ufb03cients we need in NFT analysis while maintaining the essential features of the OP-OD map. This is achieved by reconstructing the OP-OD map with a subset of the coe\ufb03cients using the inverse Fourier transform. We decompose the OP map in the hypercolumn by \ufb01rst applying a spatial operator O to the map, with O de\ufb01ned as O = exp [i2\u03d5(x, y)]. (21) This operator preserves the structure and periodicity of the OP-OD map and allows us to avoid the spurious discontinuities between 0\u25e6and 180\u25e6orientations, which actually correspond to the same stimulus orientation. 
This is important because representation of such discontinuities would require use of high spatial frequencies, and thus many Fourier coe\ufb03cients. We then perform a 2D Fourier transform on the resulting map, which yields a sparse set of Fourier coe\ufb03cients. Figure 13(a) shows the magnitude of the Fourier coe\ufb03cients of a lattice of 5 \u00d7 5 hypercolumns [i.e., Fig. 4(d)]. We note that: (i) the coe\ufb03cients have 4-fold symmetry, and the 4 lowest K modes are dominant; and (ii) the lowest K modes are located at (\u00b1\u03c0/a, 0) and (0, \u00b1\u03c0/a), where 2a is the width of hypercolumn. One of our main aims in \ufb01nding these Fourier coe\ufb03cients of the OP-OD map is to incorporate the OP map structure into the patchy propagator theory introduced to treat periodic V1 structure in previous studies (Robinson, 2006, 2007; Liu et al., 2020). We want to use as few coe\ufb03cients as possible to simplify computation, while preserving the essential OP-OD structure. In order to test how well a small subset of Fourier coe\ufb03cients can approximate the OP-OD map, we reconstruct the lattice of hypercolumns from these coe\ufb03cients and compare it to the original one. We \ufb01rst perform the inverse Fourier transform on a small subset of the coe\ufb03cients. The resulting complex data represents the values that have been transformed from the OP angles after applying the operator O de\ufb01ned in Eq. (21). We then transform the complex data back to OP angles via ei\u03d5(x,y) = cos[\u03d5(x, y)] + i sin[\u03d5(x, y)], (22) whence \u03d5(x, y) = tan\u22121 \u0014 sin \u03d5(x, y) cos \u03d5(x, y) \u0015 . (23) 18 \fFigure 13: (a) Magnitude of Fourier coe\ufb03cients of the OP-OD map after applying the operator O to a lattice of 25 hypercolumns. Each square on the \ufb01gure represents one spatial mode K, and the color bar indicates the magnitude of it. (b) Reconstructed OP map of a lattice of 25 hypercolumns. The two squares on the top-left are the zoomed-in patches of the reconstructed (top) and the original(bottom) lattice. These are extracted from the same location that is marked by dashed-line oval. The color bar indicates the OP in degrees. (c) Absolute di\ufb00erences between the original hypercolumn OP in Fig. 4(c) and the reconstructed one. The square on the left shows a zoomed-in patch that are extracted from the location marked by blue dashed-line. The color bar indicates the di\ufb00erence in degrees. 19 \fWe \ufb01nd that the four lowest K modes (the yellow squares in Fig. 13)(a) su\ufb03ce to reproduce the main features of the hypercolumn lattice, and the reconstructed OP map is shown in Fig. 13(b), which is very similar to the original one [i.e., Fig. 4(d)] except some angular contours are smoothed out due to the absence of higher-K modes. This detail is shown in the top left frame of Fig. 13(b), which is to be compared with the frame below it, which is from the same part of the original lattice. Figure 13(c) shows the absolute di\ufb00erences between the original hypercolumn and the reconstructed one. The square on the left is a zoomed-in patch that are marked by the dashed-line square, and it is extracted in the same location as we do for the zoomed-in patch in Fig. 13(b). The largest di\ufb00erence is \u22484.5\u25e6, and are around the edges of each pinwheel. Nevertheless, the basic structure and periodicity of the hypercolumn are all preserved in the reconstructed lattice. 
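The decomposition, truncation, and reconstruction just described can be condensed into a short numpy sketch; the grid sizes and the compact hypercolumn generator are our own choices.

```python
import numpy as np

def hypercolumn(n=64, a=1.0):
    """OP map (radians, 0-pi) of one 2a x 2a hypercolumn: Eq. (1) for the
    top-right pinwheel, mirrored across the y- and x-axes as in Sec. 2.2."""
    x, y = np.meshgrid(np.linspace(0.0, a, n), np.linspace(0.0, a, n), indexing="xy")
    tr = 0.5 * np.mod(np.arctan2(y - a / 2, x - a / 2), 2 * np.pi)
    top = np.hstack([tr[:, ::-1], tr])
    return np.vstack([top[::-1, :], top])

phi = np.tile(hypercolumn(), (5, 5))        # 5 x 5 lattice of hypercolumns
O = np.exp(2j * phi)                        # Eq. (21): removes the 0/180 degree discontinuity
F = np.fft.fft2(O)

# Keep only the four largest-magnitude spatial modes (for the idealized lattice
# these are the four lowest +-K modes).
idx = np.unravel_index(np.argsort(np.abs(F), axis=None)[-4:], F.shape)
F_trunc = np.zeros_like(F)
F_trunc[idx] = F[idx]

# Invert and recover OP via Eqs (22)-(23): phi = 0.5 * arg[exp(2 i phi)].
phi_rec = 0.5 * np.mod(np.angle(np.fft.ifft2(F_trunc)), 2 * np.pi)

# Angular error, respecting the 180-degree periodicity of OP.
err = np.degrees(np.abs(np.angle(np.exp(2j * (phi - phi_rec)))) / 2.0)
print("max / mean reconstruction error (deg): %.1f / %.1f" % (err.max(), err.mean()))
```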
Thus, we can conclude that the 4 lowest K modes are su\ufb03cient for incorporating the OP map structure into NFT computations. Going beyond idealized hypercolumns, we also can model more irregular and biologically realistic OP maps by adding more spatial modes around the 4 lowest K modes. We have noticed that the real OP-OD map does not have straight OD columns running vertically as we de\ufb01ned in Sec. 2; rather, these columns are bent and oblique. In image processing, a rotation of Fourier coe\ufb03cients produces a rotation of the image in spatial domain by the same angle (Ballard and Brown, 1982). Hence, we can alter the OP-OD map by modifying the modes in Fourier domain. Our approach for reconstructing the more realistic OP-OD map is to add the K modes around the 4 lowest K with its magnitude Mk de\ufb01ned by Gaussian envelopes in Kx, Ky, and azimuthal angle \u0398, de\ufb01ned as, Mk = exp \u0002 \u2212(K \u2212K0)2/2\u22062 K \u0003 exp \u0002 \u2212(\u0398 \u2212\u03980)2/2\u22062 \u0398 \u0003 , (24) where K0 and \u03980 are the location and azimuth angle of the original lowest K modes, \u2206K and \u2206\u0398 are the variance of the Gaussian envelope and we set these to 1 2K mm\u22121 (K = \u03c0/a, where 2a is the width of the hypercolumn) and 20\u25e6, respectively, to match experimental observations. The resulting magnitude plot of K modes is shown in Fig. 14(a) and reconstructed OP-OD map using this set of K is shown in Fig. 14(b). Fig. 14(b) resembles the biologically realistic OP-OD maps obtained in experiments (Blasdel, 1992; Bonhoe\ufb00er and Grinvald, 1993; Obermayer and Blasdel, 1993), and it reproduces the general observations of the OP-OD maps we mentioned in Sec. 2.1, including: (i) it has both positive and negative pinwheels, and the neighboring pinwheels have opposite signs; (ii) linear zones connect two pinwheel centers. 5 Application to OP maps from a neural network model In order to analyze the general properties of more realistic OP maps, We perform the same Fourier analysis as on the idealized hypercolumns in previous sections for the OP-OD map generated from a 20 \fFigure 14: Reconstructed OP-OD map with extra K in Gaussian envelope. (a) The magnitude plot of K modes. The color bar indicates the magnitude of it. (b) Reconstructed OP-OD map using set of K modes shown in (a). The color bar indicates the OP angle in degree. computational neural network model. The model of V1 we use here is the Gain Control, Adaptation, Laterally Connected (GCAL) model (Stevens et al., 2013), which treats the retina, LGN, and V1 as 2-dimensional sheets, with neurons in each sheet connected topographically. Neurons not only connect to a small group of neurons of the lower level sheet, but also laterally connect to the neurons within the same sheet. A Hebbian learning rule is adopted in the model for updating the connection weights between neurons (Stevens et al., 2013). Figure 15(a) shows an example output from the GCAL model simulation (Bednar, 2009), and we use this map for further analysis in the Fourier domain. Figure 15: (a) OP map generated from GCAL model (Bednar, 2009), and the color bar indicates the OP angle in degrees. (b) Magnitude plot of the Fourier coe\ufb03cients obtained from GCAL OP map. Each pixel-like square represents one spatial mode K, and the color bar indicates its magnitude. We apply the operator O to the GCAL OP map and then do a Fourier transform. 
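Returning briefly to the Gaussian-envelope construction of Eq. (24): a minimal sketch is given below, assuming the envelope widths used above (half of K0 in radius and 20 degrees in azimuth) and assigning a random phase to each retained mode, since the choice of mode phases is not specified; the function name and grid parameters are illustrative.

```python
import numpy as np

def irregular_op_map(n=256, a=1.0, n_hc=8, dtheta_deg=20.0, seed=0):
    """Synthesise a less regular OP map by weighting spatial modes with the Gaussian
    envelope of Eq. (24) around the four lowest +-K modes (Delta_K = K0/2,
    Delta_Theta = 20 degrees), giving each mode a random phase, inverse-transforming,
    and reading off OP as half the argument of the resulting complex field."""
    rng = np.random.default_rng(seed)
    L = 2 * a * n_hc                                  # linear size of the simulated patch (mm)
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)        # angular spatial frequencies
    kx, ky = np.meshgrid(k, k, indexing="xy")
    K, theta = np.hypot(kx, ky), np.arctan2(ky, kx)
    K0, dK, dth = np.pi / a, 0.5 * np.pi / a, np.radians(dtheta_deg)
    radial = np.exp(-(K - K0) ** 2 / (2 * dK ** 2))
    env = np.zeros_like(radial)
    for theta0 in (0.0, np.pi / 2, np.pi, -np.pi / 2):   # azimuths of the four lowest modes
        wrapped = np.angle(np.exp(1j * (theta - theta0)))
        env += radial * np.exp(-wrapped ** 2 / (2 * dth ** 2))
    field = np.fft.ifft2(env * np.exp(2j * np.pi * rng.random((n, n))))
    return 0.5 * np.mod(np.angle(field), 2 * np.pi)      # OP in [0, pi)

phi = irregular_op_map()
print(phi.shape, np.degrees(phi.min()), np.degrees(phi.max()))
```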
One thing worth mentioning here is that the OP has discontinuities at the edges of the map if we repeat the whole map in x or y direction, which is implicit in the 2-D discrete Fourier transform. This introduces edge e\ufb00ects 21 \finto the Fourier transform, which we minimize by doubling the linear dimensions of the array and zero padding the added region. Figure 15(b) shows the Fourier coe\ufb03cients after performing Discrete Fourier transform on the zero padded map. The dominant K terms are shown in black. Except the ring-shaped enhancement near the center, which arises from the zero padding of the map and the linear size of the overall simulation area. The other dominant K terms correspond to the periodicity of the hypercolumns and have square symmetry similar to the pattern in Fig. 13(a), but with \u223c\u00b115\u25e6spread of K. The square symmetry is most likely at least partly due to a combination of: (i) the artifacts introduced by the approximately square OP-OD unit cell, and (ii) a structural bias introduced by the square grid on which the GCAL model is simulated. 6 Summary and" + }, + { + "url": "http://arxiv.org/abs/2011.00427v1", + "title": "Efficient Pipelines for Vision-Based Context Sensing", + "abstract": "Context awareness is an essential part of mobile and ubiquitous computing.\nIts goal is to unveil situational information about mobile users like locations\nand activities. The sensed context can enable many services like navigation,\nAR, and smarting shopping. Such context can be sensed in different ways\nincluding visual sensors. There is an emergence of vision sources deployed\nworldwide. The cameras could be installed on roadside, in-house, and on mobile\nplatforms. This trend provides huge amount of vision data that could be used\nfor context sensing. However, the vision data collection and analytics are\nstill highly manual today. It is hard to deploy cameras at large scale for data\ncollection. Organizing and labeling context from the data are also labor\nintensive. In recent years, advanced vision algorithms and deep neural networks\nare used to help analyze vision data. But this approach is limited by data\nquality, labeling effort, and dependency on hardware resources. In summary,\nthere are three major challenges for today's vision-based context sensing\nsystems: data collection and labeling at large scale, process large data\nvolumes efficiently with limited hardware resources, and extract accurate\ncontext out of vision data. The thesis explores the design space that consists\nof three dimensions: sensing task, sensor types, and task locations. Our prior\nwork explores several points in this design space. We make contributions by (1)\ndeveloping efficient and scalable solutions for different points in the design\nspace of vision-based sensing tasks; (2) achieving state-of-the-art accuracy in\nthose applications; (3) and developing guidelines for designing such sensing\nsystems.", + "authors": "Xiaochen Liu", + "published": "2020-11-01", + "updated": "2020-11-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.IR", + "cs.NI" + ], + "main_content": "Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 2.2 Background, Motivation, and Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 2.3 Gnome Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 2.3.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
2.3.2 Data sources; 2.3.3 Estimating Building Height; 2.3.4 Estimating Path Inflation; 2.3.5 Location Prediction; 2.3.6 Scaling Gnome; 2.4 Evaluation; 2.4.1 Methodology; 2.4.2 Results; 2.4.3 Evaluating Gnome components; 3 ALPS: Accurate Landmark Positioning at City Scales; 3.1 Introduction; 3.2 Motivation and Challenges; 3.3 The Design of ALPS; 3.3.1 Approach and Overview; 3.3.2 Base Image Retrieval; 3.3.3 Landmark Detection; 3.3.4 Image Clustering; 3.3.5 Adaptive Image Retrieval; 3.3.6 Landmark Positioning; 3.3.7 Putting it All Together; 3.3.8 Flexibility; 3.4 Evaluation; 3.4.1 Methodology; 3.4.2 Coverage and Accuracy; 3.4.3 Scalability: Bottlenecks and Optimizations; 3.4.4 Accuracy and Coverage Optimizations; 4 Caesar: Cross-Camera Complex Activity Recognition; 4.1 Introduction; 4.2 Background and Motivation; 4.3 Caesar Design; 4.3.1 Rule Definition and Parsing; 4.3.2 Object Detection; 4.3.3 Tracking and Re-Identification; 4.3.4 Action Detection and Graph Matching; 4.4 Evaluation; 4.4.1 Methodology; 4.4.2 Accuracy; 4.4.3 Scalability; 4.4.4 Data Transfer; 4.4.5 Energy Consumption; 5 Grab: A Cashier-Free Shopping System; 5.1 Introduction; 5.2 Grab Design; 5.2.1 Identity tracking; 5.2.2 Shopper Action Recognition; 5.2.3 GPU Multiplexing; 5.3 Evaluation; 5.3.1 Grab Implementation; 5.3.2 Methodology, Metrics, and Datasets; 5.3.3 Accuracy of Grab; 5.3.4 The Importance of Efficiency; 5.3.5 GPU multiplexing; 6 TAR: Enabling Fine-Grained Targeted Advertising in Retail Stores; 6.1 Introduction; 6.2 Motivation; 6.3 TAR Design; 6.3.1 Design Overview; 6.3.2 A Use Case; 6.3.3 Vision-based Tracking (VT); 6.3.4 People Tracking with BLE; 6.3.5 Real-time Identity Matching; 6.4 Evaluation; 6.4.1 Methodology and Metrics; 6.4.2 TAR Runtime; 6.4.3 TAR Performance; 7 Related Work
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116 7.1 Outdoor Localization and GPS Error Mitigation . . . . . . . . . . . . . . . . . . . . . . . . . . 116 7.2 Roadside Landmark Discovery and Localization . . . . . . . . . . . . . . . . . . . . . . . . . . 117 7.3 Cross-camera Person Re-identi\ufb01cation and Tracking . . . . . . . . . . . . . . . . . . . . . . . . 118 7.4 Targeted Advertising and Cashier-free Shopping . . . . . . . . . . . . . . . . . . . . . . . . . . 120 7.5 Complex Activity Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120 7.6 Wireless Camera Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121 7.7 Scaling DNN Pipelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122 8" + }, + { + "url": "http://arxiv.org/abs/2001.01033v1", + "title": "Grab: Fast and Accurate Sensor Processing for Cashier-Free Shopping", + "abstract": "Cashier-free shopping systems like Amazon Go improve shopping experience, but\ncan require significant store redesign. In this paper, we propose Grab, a\npractical system that leverages existing infrastructure and devices to enable\ncashier-free shopping. Grab needs to accurately identify and track customers,\nand associate each shopper with items he or she retrieves from shelves. To do\nthis, it uses a keypoint-based pose tracker as a building block for\nidentification and tracking, develops robust feature-based face trackers, and\nalgorithms for associating and tracking arm movements. It also uses a\nprobabilistic framework to fuse readings from camera, weight and RFID sensors\nin order to accurately assess which shopper picks up which item. In experiments\nfrom a pilot deployment in a retail store, Grab can achieve over 90% precision\nand recall even when 40% of shopping actions are designed to confuse the\nsystem. Moreover, Grab has optimizations that help reduce investment in\ncomputing infrastructure four-fold.", + "authors": "Xiaochen Liu, Yurong Jiang, Kyu-Han Kim, Ramesh Govindan", + "published": "2020-01-04", + "updated": "2020-01-04", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.IR" + ], + "main_content": "INTRODUCTION While electronic commerce continues to make great strides, in-store purchases are likely to continue to be important in the coming years: 91% of purchases are still made in physical stores [40, 41] and 82% of millennials prefer to shop in these stores [39]. However, a significant pain point for in-store shopping is the checkout queue: customer satisfaction drops significantly when queuing delays exceed more than four minutes [10]. To address this, retailers have deployed selfcheckout systems (which can increase instances of shoplifting [7, 24, 30]), and expensive vending machines. The most recent innovation is cashier-free shopping, in which a networked sensing system automatically (a) identifies a customer who enters the store, (b) tracks the customer through the store, (c) and recognizes what they purchase. Customers are then billed automatically for their purchases, and do not need to interact with a human cashier or a vending machine, or scan items by themselves. Over the past year, several large online retailers like Amazon and Alibaba [1, 35] * The work was done at Hewlett-Packard Labs. have piloted a few stores with this technology, and cashierfree stores are expected to take off in the coming years [9, 36]. 
Besides addressing queue wait times, cashier-free shopping is expected to reduce instances of theft, and provide retailers with rich behavioral analytics. Not much is publicly known about the technology behind cashier-free shopping, other than that stores need to be completely redesigned [1, 8, 35] which can require significant capital investment (\u00a72). In this paper, we ask: Is cashier-free shopping viable without having to completely redesign stores? To this end, we observe that many stores already have, or will soon have, the hardware necessary to design a cashierfree shopping system: cameras deployed for in-store security, sensor-rich smart shelves [37] that are being deployed by large retailers [21] to simplify asset tracking, and RFID tags being deployed on expensive items to reduce theft. Our paper explores the design and implementation of a practical cashierfree shopping system called Grab1 using this infrastructure, and quantifies its performance. Grab needs to accurately identify and track customers, and associate each shopper with items he or she retrieves from shelves. It must be robust to visual occlusions resulting from multiple concurrent shoppers, and to concurrent item retrieval from shelves where different types of items might look similar, or weigh the same. It must also be robust to fraud, specifically to attempts by shoppers to confound identification, tracking, or association. Finally, it must be cost-effective and have good performance in order to achieve acceptable accuracy: specifically, we show that, for vision-based tasks, slower than 10 frames/sec processing can reduce accuracy significantly (\u00a74). Contributions. An obvious way to architect Grab is to use deep neural networks (DNNs) for each individual task in cashier-free shopping, such as identification, pose tracking, gesture tracking, and action recognition. However, these DNNs are still relatively slow and many of them cannot process frames at faster than 5-8 fps. Moreover, even if they 1A shopper only needs to grab items and go. arXiv:2001.01033v1 [cs.CV] 4 Jan 2020 \fhave high individual accuracy, their effective accuracy would be much lower if they were cascaded together. Grab\u2019s architecture is based on the observation that, for cashier-free shopping, we can use a single vision capability (body pose detection) as a building block to perform all of these tasks. A recently developed DNN library, OpenPose [28] accurately estimates body \"skeletons\" in a video at high frame rates. Grab\u2019s first contribution is to develop a suite of lightweight identification and tracking algorithms built around these skeletons (\u00a73.1). Grab uses the skeletons to accurately determine the bounding boxes of faces to enable feature-based face detection. It uses skeletal matching, augmented with color matching, to accurately track shoppers even when their faces might not be visible, or even when the entire body might not be visible. It augments OpenPose\u2019s elbow-wrist association algorithm to improve the accuracy of tracking hand movements which are essential to determining when a shopper may pickup up items from a shelf. Grab\u2019s second contribution is to develop fast sensor fusion algorithms to associate a shopper\u2019s hand with the item that the shopper picks up (\u00a73.2). For this, Grab uses a probabilistic assignment framework: from cameras, weight sensors and RFID receivers, it determines the likelihood that a given shopper picked up a given item. 
When multiple concurrent such actions occur, it uses an optimization framework to associate hands with items. Grab\u2019s third contribution is to improve the costeffectiveness of the overall system by multiplexing multiple cameras on a single GPU (\u00a73.3). It achieves this by avoiding running OpenPose on every frame, and instead using a lightweight feature tracker to track the joints of the skeleton between successive frames. Using data from a pilot deployment in a retail store, we show (\u00a74) that Grab has 93% precision and 91% recall even when nearly 40% of shopper actions were adversarial. Grab needs to process video data at 10 fps or faster, below which accuracy drops significantly: a DNN-only design cannot achieve this capability (\u00a74.4). Grab needs all three sensing modalities, and all of its optimizations: removing an optimization, or a sensor, can drop precision and recall by 10% or more. Finally, Grab\u2019s design enables it to multiplex up to 4 cameras per GPU with negligible loss of precision. 2 APPROACH AND CHALLENGES Cashier-free shopping systems. A cashier-free shopping system automatically determines, for every customer in a shop, what items the customer has picked from the shelves, and directly bills each customer for those items. Cashier-free shopping is achieved using a networked system containing several sensors that together perform three distinct functions: identifying each customer, tracking each customer through 3) RFID: Signal blockage a) Vision: Similar packages 2) Weight: Similar weight RFID Blocking Figure 1: Real-world challenges in identifying items. a) Different items with similar packages; b) Different items with similar weight; c) Occluded RFID tags that would be hard to read at a checkout gate. the shop, and identifying every item pickup or item dropoff on a shelf to accurately determine which items the customer leaves the store with. These systems have several requirements. First, they must be non-intrusive in the sense that they must not require customers to wear or carry sensors or any form of electronic identification, since these can detract from the shopping experience. Second, they must be robust to real-world conditions (Figure 1), in being able to distinguish between items that are visually similar or have other similarities (such as weight), as well as to be robust to occlusion. Third, they must be robust to fraud: specifically, they must be robust to attempts by shoppers to circumvent or tamper with sensors used to identify customers, items and the association between customers and items. Finally, they must be cost-effective: they should leverage existing in-store infrastructure to the extent possible, while also being computationally efficient in order to minimize computing infrastructure investments. Today\u2019s cashier-free shopping systems. Despite widespread reports of cashier-free shopping deployments [1, 8, 33, 35], not much is known about the details of their design, but they appear to fall into three broad categories. Vision-Only. This class of systems, exemplified by [23, 33], identifies customers and items, and tracks customers, only using cameras. It trains a deep learning model to recognize customers and objects, and uses this to bill them. However, such a system can fail to distinguish between items that look similar (Figure 1(a)) especially when these items are small in the image (occupy a few pixels), or items that are occluded by other objects (Figure 1(c)) or by the customer. Vision and Weight. 
Amazon Go [1] uses both cameras and weight sensors on shelves, where the weight sensor can be used to identify when an item is removed from a shelf even if it is occluded from a camera. One challenge such a system faces is the ability to discriminate between items of similar weight (Figure 1(b)). Moreover, their design requires a significant redesign of the store: user check-in gates, an array of cameras on the ceiling and the shelf, and additional 2 \fsensors at the exit [2, 22]. Finally, Amazon Go also reportedly encounters issues when shoppers put back items randomly [3]. Vision and RFID. The third class of approaches, used by Taobao Cafe [35] and Bingo Box [8], does not track shoppers within the store, but uses vision to identify customers and RFID scanners at a checkout gate that reads all the items being carried by the customer. Each object needs to be attached with an RFID tag, and users have to queue at the checkout gate. This approach has drawbacks as well: RFID tags can be expensive relative to the price of some items [29], and RFID readers are known to have trouble when scanning tags that are stacked, blocked, or attached to conductors [72, 76, 77]. Approach and Challenges. While all of these approaches are non-intrusive, it is less clear how well they satisfy other requirements: robustness to real-world conditions and to fraud, and cost-effectiveness. In this paper, with a goal towards understanding how well these requirements can be met in practice, we explore the design, implementation, and evaluation of a cashier-free shopping system called Grab, which combines the three technologies described above (vision, weight scales, and RFID). At a high-level, Grab combines advances in machine vision, with lightweight sensor fusion algorithms to achieve its goals. It must surmount four distinct challenges: (a) how to identify customers in a lightweight yet robust manner; (b) how to track customers through a store even when the customer is occluded by others or the customer\u2019s face is not visible in a camera; (c) how to determine when a customer has picked up an item, and which item the customer has picked up, and to make this determination robust to concurrent item retrievals, customers putting back items, and customers attempting to game the system in various ways; (d) how to meet these challenges in a way that minimizes investments in computing infrastructure. 3 GRAB DESIGN Grab addresses these challenges by building upon a visionbased keypoint-based pose tracker DNN for identification and tracking, together with a probabilistic sensor fusion algorithm for recognizing item pickup actions. These ensure a completely non-intrusive design where shoppers are not required to scan item codes or pass through checkout gates while shopping. Grab consists of four major components (Figure 2). Identity tracking recognizes shoppers\u2019 identities and tracks their movements within the store. It includes efficient and accurate face and body pose detection and tracking, adapted to work well in occluded environments, and to deal with corner cases in body pose estimation that can increase error in item pickup detection (\u00a73.1). Action recognition uses a probabilistic algorithm to fuse vision, weight and RFID inputs to determine item pickup or dropoff actions by a customer (\u00a73.2). 
This algorithm is Smart Shelf Shopper Identity Tracking GPU Multiplexing Action Recognition Output User: Alice Bought: Coke x 1 Cups x 1 Customer Registration Figure 2: Grab is a system for cashier-free shopping and has four components: registration, identity tracking, action recognition, and GPU multiplexing. Pose Detection Face Detection & Matching Keypoint-based Pose Tracking Optimized Limb Association Alice Figure 3: Grab\u2019s identity tracking module uses a feature-based face detector and uses key-point base pose tracking. designed to be robust to real-world conditions and to theft. When multiple users pickup the same type of item simultaneously, the algorithm must determine which customer takes how many items. It must be robust to: customers concealing items or to attempts to tamper with the sensors (e.g., replacing an expensive item with an identically weighted item). GPU multiplexing enables processing multiple cameras on a single GPU (\u00a73.3). DNN-based video processing usually requires a dedicated GPU for each video stream for reasonable performance. Retail stores need tens of cameras, and Grab contains performance optimizations that permit it to multiplex the processing of multiple streams on a single GPU, thereby reducing cost. Grab also has a fourth, offline component, registration. Customers must register once online before their first store visit. Registration involves taking a video of the customer to enable matching the customer subsequently (\u00a73.1), in addition to obtaining information for billing purposes. If the identity tracking component detects a customer who has not registered, she may be asked to register before buying items from the store. 3.1 Identity tracking Identity tracking consists of two related sub-components (Figure 3). Shopper identification determines who the shopper is among registered users. A related sub-component, shopper tracking, determines (a) where the shopper is in the store at each instant of time, and (b) what the shopper is doing at each instant. 3 \fRequirements and Challenges. In designing Grab, we require first that customer registration be fast, even though it is performed only once: ideally, a customer should be able to register and immediately commence shopping. Identity tracking requires not just identifying the customer, but also detecting each person\u2019s pose, such as hand position and head position. These tasks have been individually studied extensively in the computer vision literature. More recently, with advances in deep learning, researchers in computer vision have developed different kind of DNNs for people detection [70, 79, 81], face detection [14, 93] and hand gesture recognition [48, 89]. Each of these detectors performs reasonably well: e.g., people detectors can process 35 frames per second (fps), face detectors can process 30 fps, and hand recognizers can run at 12 fps. However, Grab requires all of these components. Dedicating a GPU for each component is expensive: recall that a store may have several cameras (Grab proposes to repurpose surveillance cameras for visual recognition tasks, \u00a71), and using one GPU per detection task per camera is undesirable (\u00a72) as it would require significant investment in computing infrastructure. The other option is to run these on a single GPU per camera, but this would result in lower frame rates. Lower frame rates can miss shopper actions: as \u00a74.4 shows, at frame rates lower than 10 fps, Grab\u2019s precision and recall can drop dramatically. 
This highlights a key challenge we face in this paper: designing fast end-to-end identity tracking algorithms that do not compromise accuracy. Approach. In this paper, we make the following observation: we can build end-to-end identity tracking using a state-ofthe-art pose tracker. Specifically, we use, as a building block, a keypoint based body pose tracker, called OpenPose [28]. Given an image frame, OpenPose detects keypoints for each human in the image. Keypoints identify distinct anatomical structures in the body (Figure 4(a)) such as eyes, ears, nose, elbows, wrists, knees, hips etc. We can use these skeletons for identification, tracking and gesture recognition. OpenPose requires no pose calibration (unlike, say, the Kinect [46]), so it is attractive for our setting, and is fast, achieving up to 15 fps for body pose detection. (OpenPose also has modes where it can detect faces and hand gestures using many more keypoints than in Figure 4(a), but using these reduces the frame rate dramatically, and also has lower accuracy for shoppers far away from the camera). However, fundamentally, since OpenPose operates only on a single frame, Grab needs to add identification, tracking and gesture recognition algorithms on top of OpenPose to continuously identify and tracks shoppers and their gestures. The rest of this section describes these algorithms. Original skeleton output Frontal face bounding box Bounding box w/o direction adjustment Bounding box w/ direction adjustment (b) (c) (e) (d) (a) Figure 4: (a) Sample OpenPose output. (b,c,d,e) Grab\u2019s approach adjusts the face\u2019s bounding box using the keypoints detected by OpenPose. (The face shown is selected from OpenPose project webpage [28]) Shopper Identification. Grab uses fast feature-based face recognition to identify shoppers. While prior work has explored other approaches to identification such as body features [43, 49, 94] or clothing color [73], we use faces because (a) face recognition has been well-studied by vision researchers and we are likely to see continued improvements, (b) faces are more robust for identification than clothing color, and (c) face features have the highest accuracy in large datasets (\u00a75). Feature-based face recognition. When a user registers, Grab takes a video of their face, extracts features, and builds a fast classifier using these features. To identify shoppers, Grab does not directly use a face detector on the entire image because traditional HAAR based detectors [69] can be inaccurate, and recent DNN-based face detectors such as MTCNN [93] can be slow. Instead, Grab identifies a face\u2019s bounding box using keypoints from OpenPose, specifically, the five keypoints of the face from the nose, eyes, and ears (Figure 4(b)). Then, it extracts features from within the bounding box and applies the trained classifier. Grab must (a) enable fast training of the classifier since this step is part of the registration process and registration is required to be fast (\u00a73.1), (b) must robustly detect the bounding box for different facial orientations relative to the camera to avoid classification inaccuracy. Fast Classification. Registration is performed once for each customer. During registration, Grab extracts features from the customer\u2019s face. To do this, we evaluated several face feature extractors [15, 44, 85], and ultimately selected ResNet-34\u2019s feature extractor [15] which produces a 128dimension feature vector, performs best in both speed and accuracy (\u00a74.6). 
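To make the registration step concrete, the following is a minimal sketch of extracting such a 128-dimension descriptor for a face region, assuming the open-source face_recognition package (a wrapper around dlib's ResNet-based descriptor); the helper names and bounding-box format are illustrative and are not Grab's actual code.

```python
# Minimal sketch (not Grab's code): extract a 128-D face descriptor for a face
# region, assuming the open-source `face_recognition` package, which wraps
# dlib's ResNet-based descriptor. The bounding box is assumed to come from the
# OpenPose face keypoints as described above.
import numpy as np
import face_recognition

def extract_face_descriptor(frame_rgb, face_box):
    """frame_rgb: HxWx3 uint8 RGB frame; face_box: (top, right, bottom, left) pixels."""
    encodings = face_recognition.face_encodings(
        frame_rgb, known_face_locations=[face_box])
    return encodings[0] if encodings else None   # 128-D numpy vector, or None

def registration_descriptors(frames, face_boxes):
    """Collect descriptors from a customer's registration video."""
    feats = [extract_face_descriptor(f, b) for f, b in zip(frames, face_boxes)]
    return np.stack([x for x in feats if x is not None])
```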
With these features, we can identify faces by comparing feature distances, build classifiers, or train a neural network. After experimenting with these options, we found that a k nearest neighbor (kNN) classifier, in which each customer 4 \fis trained as a new class, worked best among these choices (\u00a74.6). Grab builds one kNN-based classifier for all customers and uses it across all cameras. Tightening the face bounding box. During normal operation, Grab extracts facial features after drawing a bounding box (derived from OpenPose keypoints) around each customer\u2019s face. Grab infers the face\u2019s bounding box width using the distance between two ears, and the height using the distance from nose to neck. This works well when the face points towards the camera (Figure 4(c)), but can result in an inaccurate bounding box when a customer faces slightly away from the camera (Figure 4(d)). This inaccuracy can degrade classification performance. To obtain a tighter bounding box, we estimate head pitch and yaw using the keypoints. Consider the line between the nose and neck keypoints: the distance of each eye and ear keypoint to this axis can be used to estimate head yaw. Similarly, the distance of the nose and neck keypoints to the axis between the ears can be used to estimate pitch. Using these, we can tighten the bounding box significantly (Figure 4(e)). To improve detection accuracy (\u00a74) when a customer\u2019s face is not fully visible in the camera, we also use face alignment [44], which estimates the frontal view of the face. Shopper Tracking. A user\u2019s face may not always be visible in every frame, since customers may intentionally or otherwise turn their back to the camera. However, Grab needs to be able to identify the customer in frames where the customer\u2019s face is not visible, for which it uses tracking. Grab assumes the use of existing security cameras, which, if placed correctly, make it unlikely that a customer can evade all cameras at all times (put another way, if the customer is able to do this, the security system\u2019s design is faulty). Skeleton-based Tracking. Existing human trackers use bounding box based approaches [68, 75, 82, 90, 92], which can perform poorly in in-store settings with partial or complete occlusions (Figure 5(a)). We quantify this in \u00a74.4 with the state-of-the-art bounding box based tracker, DeepSort [92], but Figure 5 demonstrates this visually. Instead, we use the skeleton generated by OpenPose to develop a tracker that uses geometric properties of the body frame. We use the term track to denote the movements of a distinct customer (whose face may or may not have been identified). Suppose OpenPose identifies a skeleton in a frame: the goal of the tracker is to associate the skeleton with an existing track if possible. Grab uses the following to track customers. It tries to align each keypoint in the skeleton with the corresponding keypoint in the last seen skeleton in each track, and selects that track whose skeleton is the closest match (the sum of match errors is smallest). Also, as soon as it is able to identify the face, Grab associates the customer\u2019s identity with the track (to be robust to noise, Grab requires To be updated a) Bounding box tracking result b) Our tracking result Figure 5: Bounding-box based approaches (b) have trouble tracking multiple users in crowds, but our approach (a) works well in these settings. that the customer\u2019s face is identified in 3 successive frames). 
To work well, the tracking algorithm needs to correctly handle partial and complete occlusions. Dealing with Partial Occlusions. When a shopper\u2019s body is not completely visible (e.g., because she is partially obscured by another customer, Figure 5(b)), OpenPose can only generate a subset of the key points. In this case, Grab matches only on the visible subset. However, with significant occlusions, very few key points may be visible. In this case, Grab attempts to increase matching confidence using the color histogram of the visible upper body area. However, if the two matching approaches (color and skeletal) conflict with each other, Grab skips matching attempts until subsequent frames when this process is repeated. Dealing with Complete Occlusions. In some cases, a shopper may be completely obscured by another. Grab uses lazy tracking (Figure 6) in this case. When an existing track disappears in the current frame, Grab checks if, in the previous frame, the track was close to the edge of the image, in which case it assumes the customer has moved out of the camera\u2019s field of view and deletes the track. Otherwise, it marks the track as blocked. When the customer reappears in a subsequent frame, it reactivates the blocked track. Shopper Gesture Tracking. Grab must recognize the arms of each shopper in order to determine which item he or she purchases (\u00a73.2). OpenPose has a built-in limb association algorithm, which associates shoulder joints to elbows, and elbows to wrists. We have found that this algorithm is a little brittle in our setting: it can miss an association (Figure 7(a)), or mis-associate part of a limb of one shopper with another (Figure 7(b)). How limb association in OpenPose works. OpenPose first uses a DNN to associate with each pixel confidence value of it being part of an anatomical key point (e.g., an elbow, or a wrist). During image analysis, OpenPose also generates vector fields (called part affinity fields [47]) for upper-arms and forearms whose vectors are aligned in the direction of the arm. Having generated keypoints, OpenPose then estimates, for each pair of keypoints, a measure of alignment between an arm\u2019s part affinity field, and the line between 5 \fApproach A is close, B is far A B A B B A Occlusion B no longer detected Keep B\u2019s track after A Resume Update B with new detection Frame 2 Frame 1 Frame 3 Figure 6: When a shopper is occluded by another, Grab resumes tracking after the shopper re-appears in another frame (lazy tracking). Connection Failure Wrong Assignment (a) (b) Figure 7: OpenPose can (a) miss an assignment between elbow and wrist, or (b) wrongly assign one person\u2019s joint to another. the keypoints (e.g., elbow and wrist). It then uses a bipartite matching algorithm to associate the keypoints. Improving limb association robustness. One source of brittleness in OpenPose\u2019s limb association is the fact that the pixels for the wrist keypoint are conflated with pixels in the hand (Figure 7(a)). This likely reduces the part affinity alignment, causing limb association to fail. To address this, for each keypoint, we filtered outlier pixels by removing pixels whose distance from the mediod [78] was greater than the 85th percentile. The second source of brittleness is that OpenPose\u2019s limb association treats each limb independently, resulting in cases where the key point from one person\u2019s elbow may get associated with another person\u2019s wrist (Figure 7(b)). 
To avoid this failure mode, we modify OpenPose\u2019s limb association algorithm to treat one person\u2019s forearms or upper-arms as a pair (Figure 8). To identify forearms (or upper-arms) as belonging to the same person, we measure the Euclidean distance ED(.) between color histograms F(.) belonging to the two forearms, and treat them as a pair if the distance is less than an empirically-determined threshold thresh. Mathematically, we formulate this as an optimization problem: max i, j \u00d5 i \u2208E \u00d5 j \u2208W Ai,jzi,j s.t. \u00d5 j \u2208W zi,j \u22641 \u2200i \u2208E, \u00d5 i \u2208E zi,j \u22641 \u2200j \u2208W , ED(F(i, j), F(i\u2032, j\u2032)) < thresh \u2200j, j\u2032 \u2208W i,i\u2032 \u2208E Person A (a) (b) Person B w0 w1 w\u20190 e2 e1 e0 Elbow Wrist Figure 8: OpenPose associates limb joints using bipartite matching. where E and W are the sets of elbow and wrist joints, and Ai,j is the alignment measure between the i-th elbow and the j-th wrist, while zi,j is an indicator variable indicating connectivity between the elbow and the wrist. The third constraint models whether two elbows belong to the same body, using the Euclidean distance between the color histograms of the body color. This formulation reduces to a max-weight bipartite matching problem, and we solve it with the Hungarian algorithm [67]. Tag RSSI Matching Visual Feature Matching Output User: Alice Bought: Coke x 1 Cups x 1 Customer Pose Tracks Proximity Event Detection Weight Change Matching Figure 9: Grab recognizes the items a shopper picks up by fusing vision with smart-shelf sensors including weight and RFID. 3.2 Shopper Action Recognition When a shopper is being continuously tracked, and their hand movements accurately detected, the next step is to recognize hand actions, specifically to identify item(s) which the shopper picks up from a shelf. Vision-based hand tracking alone is insufficient for this in the presence of multiple shoppers concurrently accessing items under variable lighting conditions. Grab leverages the fact that many retailers are installing smart shelves [18, 37] to deter theft. These shelves have weight sensors and are equipped with RFID readers. Weight sensors cannot distinguish between items of similar weight, while not all items are likely to have RFID tags for cost reasons. So, rather than relying on any individual sensor, Grab fuses detections from cameras, weight sensors, and RFID tags to recognize hand actions. Modeling the sensor fusion problem. In a given camera view, at any instant, multiple shoppers might be reaching out to pick items from shelves. Our identity tracker (\u00a73.1) tracks hand movement, the goal of the action recognition 6 \f(X, Y, Z) World coordinates Camera coordinates (x, y) X Y Z x y (Xc, Yc, Zc) Transform Figure 10: When the shopper\u2019s hand is obscured, Grab infers proximity to shelves by determining when a shoppers ankle joint is near a shelf. problem is to associate each shopper\u2019s hand with the item he or she picked up from the shelf. We model this association between shopper\u2019s hand k and item m as a probability pk,m derived from fusing cameras, weight sensors, and RFID tags (Figure 9).pk,m is itself derived from association probabilities for each of the devices, in a manner described below. Given these probabilities, we then solve the association problem using a maximum weight bipartite matching. In the following paragraphs, we discuss details of each of these steps. Proximity event detection. 
Before determining association probabilities, we need to determine when a shopper\u2019s hand approaches a shelf. This proximity event is determined using the identity tracker module\u2019s gesture tracking (\u00a73.1). Knowing where the hand is, Grab uses image analysis to determine when a hand is close to a shelf. For this, Grab requires an initial configuration step, where store administrators specify camera view parameters (mounting height, field of view, resolution etc.), and which shelf/shelves are where in the camera view. Grab uses a threshold pixel distance from hand to the shelf to define proximity, and its identity tracker reports start and finish times for when each hand is within the proximity of a given shelf (a proximity event). In some cases, the hand may not be visible. In these cases, Grab estimates proximity using the skeletal keypoints identified by OpenPose (\u00a73.1). Specifically, Grab knows, from the initial configuration step, the camera position (including its height), its orientation, and its field of view. From this, and simple geometry, it can estimate the pixel position of any point on the visible floor. In particular, it can estimate the pixel location of a shopper\u2019s ankle joint (Figure 10), and use this to estimate the distance to a shelf. When the ankle joint is occluded, we extrapolate its position from the visible part of the skeleton to estimate the position. Association probabilities from the camera. When a proximity event starts, Grab starts tracking the hand and any item in the hand. It uses the color histogram of the item to classify the item. To ensure robust classification, Grab performs (Figure 11(a)) (a) background subtraction to remove other items that may be visible and (b) eliminates the hand itself from the item by filtering out pixels whose color matches typical skin colors. Grab extracts a 384 dimension color histogram from the remaining pixels. During an initial configuration step, Grab requires store administrators to specify which objects are on which shelves. Grab then builds, for each shelf (a single shelf might contain 10-15 different types of items), builds a feature-based kNN classifier (chosen both for speed and accuracy). Then, during actual operation, when an item is detected, Grab runs this classifier on its features. The classifier outputs an ordered list of matching items, with associated match probabilities. Grab uses these as the association probabilities from the camera. Thus, for each hand i and each item j, Grab outputs the camera-based association probability. Association probabilities from weight sensors. In principle, a weight sensor can determine the reduction in total weight when an item is removed from the shelf. Then, knowing which shopper\u2019s hand was closest to the shelf, we can associate the shopper with the item. In practice, this association needs to consider real-world behaviors. First, if two shoppers concurrently remove two items of different weights (say a can of Pepsi and a peanut butter jar), the algorithm must be able to identify which shopper took which item. Second, if two shoppers are near the shelf, and two cans of Pepsi were removed, the algorithm must be able to determine if a single shopper took both, or each shopper took one. To increase robustness to these, Grab breaks this problem down into two steps: (a) it associates a proximity event to dynamics in scale readings, and (b) then associates scale dynamics to items by detecting weight changes. 
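Before turning to the weight channel in detail, the camera channel just described can be sketched as follows, assuming OpenCV and scikit-learn; the 384-bin histogram layout (128 bins per color channel) and the skin-color bounds are illustrative assumptions rather than Grab's exact parameters, and the item crop is assumed to be already background-subtracted.

```python
# Minimal sketch of the camera channel (illustrative, not Grab's code): a 384-D
# color histogram (128 bins per B, G, R channel) over the background-subtracted
# in-hand item crop, with skin-colored pixels masked out, fed to a per-shelf kNN
# classifier whose class probabilities serve as camera association probabilities.
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

SKIN_LO = np.array((0, 30, 60), dtype=np.uint8)      # assumed HSV skin bounds
SKIN_HI = np.array((20, 150, 255), dtype=np.uint8)

def item_histogram(item_bgr):
    hsv = cv2.cvtColor(item_bgr, cv2.COLOR_BGR2HSV)
    non_skin = cv2.bitwise_not(cv2.inRange(hsv, SKIN_LO, SKIN_HI))
    hist = [cv2.calcHist([item_bgr], [c], non_skin, [128], [0, 256]) for c in range(3)]
    feat = np.concatenate(hist).ravel()               # 3 x 128 = 384 dimensions
    return feat / (feat.sum() + 1e-6)

def train_shelf_classifier(item_crops, item_labels, k=5):
    """One classifier per shelf, built from the initial configuration step."""
    X = np.stack([item_histogram(c) for c in item_crops])
    return KNeighborsClassifier(n_neighbors=k).fit(X, item_labels)

def camera_association_probs(clf, item_crop):
    """Returns {item label: probability} for the item seen in the shopper's hand."""
    probs = clf.predict_proba([item_histogram(item_crop)])[0]
    return dict(zip(clf.classes_, probs))
```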
Associating proximity events to scale dynamics. Weight scales sample readings at 30 Hz. At these rates, we have observed that, when a shopper picks up an item or deposits an item on a shelf, there is a distinct \"bounce\" (a peak when an item is added, or a trough when removed) because of inertia (Figure 11(b)). If d is the duration of this peak or trough, and d\u2032 is the duration of the proximity event, we determine the association probability between the proximity event and the peak or trough as the ratio of the intersection of the two to the union of the two. As Figure 11(b) shows, if two shoppers pick up items at almost the same time, our algorithm is able to distinguish between them. Moreover, to prevent shoppers from attempting to confuse Grab by temporarily activating the weight scale with a finger or hand, Grab filters out scale dynamics where there is high frequency of weight change. Associating scale dynamics to items. The next challenge is to measure the weight of the item removed or deposited. Even when there are multiple concurrent events, the 30 Hz sampling rate ensures that the peaks and troughs of two concurrent actions are likely distinguishable (as in Figure 11(b)). In this case, we can estimate the weight of each item from the sensor reading at the beginning of the peak or trough ws and the 7 \fBackground subtraction Original image with hand detection Hand Elimination Visual Features (a) Proximity Threshold Hand-A Hand-B Weight Drop 1 Weight Drop 2 (b) (c) Figure 11: (a) Vision based item detection does background subtraction and removes the hand outline. (b) Weight sensor readings are correlated with hand proximity events to assign association probabilities. (c) Tag RSSI and hand movements are correlated, which helps associate proximity events to tagged items. reading at the end we. Thus |ws \u2212we | is an estimate of the item weight w. Now, from the configuration phase, we know the weights of each type of item on the shelf. Define \u03b4j as |w \u2212wj | where wj is the known weight of the j-th type of item in the shelf. Then, we say that the probability that the item removed or deposited was the j-th item is given by 1/\u03b4j \u00cd i(1/\u03b4i). This definition accounts for noise in the scale (the estimates for w might be slightly off) and for the fact that some items may be very similar in weight. Combining these association probabilities. From these steps, we get two association probabilities: one associating a proximity event to a peak or trough, another associating the peak or trough to an item type. Grab multiplies these two to get the probability, according to the weight sensor, that hand i picked item j. Association probabilities from RFID tag. For items which have an RFID tag, it is trivial to determine which item was taken (unlike with weight or vision sensors), but it is still challenging to associate proximity events with the corresponding items. For this, we leverage the fact that the tag\u2019s RSSI becomes weaker as it moves away from the RFID reader. Figure 11(c) illustrates an experiment where we moved an item repeatedly closer and further away from a reader; notice how the changes in the RSSI closely match the distance to the reader. In smart shelves, the RFID reader is mounted on the back of the shelf, so that when an object is removed, its tag\u2019s RSSI decreases. 
To determine the probability that a given hand caused this decrease, we use probability-based Dynamic Time Warping [45], which matches the time series of hand movements with the RSSI time series and assigns a probability which measures the likelihood of association between the two. We use this as the association probability derived from the RFID tag. Putting it all together. In the last step, Grab formulates an assignment problem to determine which hand to associate with which item. First, it determines a time window consisting of a set of overlapping proximity events. Over this window, it first uses the association probabilities from each sensor to define a composite probability pk,m between the k-th hand and the m-th item: pk,m is a weighted sum of the three probabilities from each sensor (described above), with the weights being empirically determined. Then, Grab formulates the assignment problem as an optimization problem: max k,m \u00d5 pk,mzk,m s.t. \u00d5 k \u2208H zk,m \u22641 \u2200m \u2208I, \u00d5 l \u2208It zk,l \u2264ul \u2200k \u2208H where H is the set of hands, I is the set of items, and It is the set of item types, and zk,m is an indicator variable that determines if hand k picked up item m. The first constraint models the fact that each item can be removed or deposited by one hand, and the second models the fact that sometimes shoppers can pick up more than one item with a single hand: ul is a statically determined upper bound on the number of items of the l-th item that a shopper can pick up using a single hand (e.g., it may be physically impossible to pick up more than 3 bottles of a specific type of shampoo). This formulation is a max-weight bipartite matching problem, which we can optimally solve using the Hungarian [67] algorithm. 3.3 GPU Multiplexing Because retailer margins can be small, Grab needs to minimize overall costs. The computing infrastructure (specifically, GPUs) is an important component of this cost. In what we have described so far, each camera in the store needs a GPU. Grab actually enables multiple cameras to be multiplexed on one GPU. It does this by avoiding running OpenPose on every frame. Instead, Grab uses a tracker to track joint positions from frame to frame: these tracking algorithms are fast and do not require the use of the GPU. Specifically, suppose Grab runs OpenPose on frame i. On that frame, it computes 8 \fFigure 12: ORB features from each joint bounding box are tracked across successive frames to permit multiplexing the GPU across multiple cameras. ORB [84] features around every joint (Figure 12(a)): ORB features can be computed faster than previously proposed features like SIFT and SURF. Then, for each joint, it identifies the position of the joint in frame i + 1 by matching ORB features between the two frames. Using this it can reconstruct the skeleton in frame i + 1 without running OpenPose on that frame. Grab uses this to multiplex a GPU over N different cameras. It runs OpenPose from a frame on each camera in a roundrobin fashion. If a frame has been generated by the k-the camera, but Grab is processing a frame from another (say, the m-th) camera, then Grab runs feature-based tracking on the frame from the k camera. Using this technique, we show that Grab is able to scale to using 4 cameras on one GPU without significant loss of accuracy (\u00a74). 4 EVALUATION We now evaluate the end-to-end accuracy of Grab and explre the impact of each of our optimizations on overall performance. 
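Before moving to the implementation, the final assignment step of the action recognition pipeline can be sketched as follows: per-sensor association probabilities are combined with empirically chosen weights and the resulting maximum-weight bipartite matching is solved with the Hungarian method, here via SciPy. The sensor weights, the 1/delta weight-channel helper, and the one-item-per-hand simplification are illustrative assumptions; the full formulation additionally allows per-item-type caps on how many items a hand can take.

```python
# Minimal sketch (illustrative, not Grab's code) of the hand-to-item assignment:
# combine per-sensor association probabilities with empirical weights and solve
# the maximum-weight bipartite matching with the Hungarian method via SciPy.
# This simplified version assumes each hand picks up at most one item.
import numpy as np
from scipy.optimize import linear_sum_assignment

SENSOR_WEIGHTS = {"camera": 0.4, "weight": 0.3, "rfid": 0.3}   # assumed, tuned empirically

def weight_channel_probs(measured_delta, catalog_weights):
    """Weight-channel probabilities: (1/delta_j) normalized over the item types."""
    inv = {item: 1.0 / max(abs(measured_delta - w_j), 1e-3)
           for item, w_j in catalog_weights.items()}
    total = sum(inv.values())
    return {item: v / total for item, v in inv.items()}

def fuse(per_sensor):
    """per_sensor: {"camera": {item: p}, "weight": {...}, "rfid": {...}} for one hand."""
    items = set().union(*(p.keys() for p in per_sensor.values()))
    return {i: sum(SENSOR_WEIGHTS[s] * per_sensor[s].get(i, 0.0) for s in per_sensor)
            for i in items}

def assign(hands, items, fused):
    """fused[(hand, item)] -> combined probability; returns {hand: item}."""
    cost = np.zeros((len(hands), len(items)))
    for hi, h in enumerate(hands):
        for ii, it in enumerate(items):
            cost[hi, ii] = -fused.get((h, it), 0.0)   # negate to maximize total probability
    rows, cols = linear_sum_assignment(cost)
    return {hands[r]: items[c] for r, c in zip(rows, cols) if -cost[r, c] > 0}
```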
2 4.1 Grab Implementation Weight-sensing Module. To mimic weight scales on smart shelves, we built scales costing $6, with fiberglass boards and 2 kg, 3 kg, 5G kg pressure sensors. The sensor output is converted by the SparkFun HX711 load cell amplifier [19] to digital serial signals. An Arduino Uno Micro Control Unit (MCU) [6] (Figure 13(a)-left) batches data from the ADCs and sends it to a server. The MCU has nine sets of serial Tx and Rx so it can collect data from up to nine sensors simultaneously. The sensors have a precision of around 510 g, with an effective sampling rate of 30 Hz3. RFID-sensing Module. For RFID, we use the SparkFun RFID modules with antennas and multiple UHF passive RFID tags [32] (Figure 13(a)-right). The module can read up to 150 tags per second and its maximum detection range is 4 m with 2Demo video of Grab: https://vimeo.com/245274192 3The HX711 can sample at 80 Hz, but the Arduino MCU, when used with several weight scales, limits the sampling rate to 30 Hz. a) b) RFID Antenna RFID Module Arduino MCU RFID Tag Weight Scale ADC Arduino MCU (a) (b) Figure 13: (a) Left: Weight sensor hardware, Right: RFID hardware; (b) Grab sample output. and antenna. The RFID module interfaces with the Arduino MCU to read data from tags. Video input. We use IP cameras [4] for video recording. In our experiments, the cameras are mounted on merchandise shelves and they stream 720p video using Ethernet. We also tried webcams and they achieved similar performance (detection recall and precision) as IP cameras. Identity tracking and action recognition. These modules are built on top of the OpenPose [28] library\u2019s skeleton detection algorithm. As discussed earlier, we use a modified limb association algorithm. Our other algorithms are implemented in Python, and interface with OpenPose using a boost.python wrapper. Our implementation has over 4K lines of code. 4.2 Methodology, Metrics, and Datasets In-store deployment. To evaluate Grab, we collected traces from an actual deployment in a retail store. For this trace collection, we installed the sensors described above in two shelves in the store. First, we placed two cameras at the ends of an aisle so that they could capture both the people\u2019s pose and the items on the shelves. Then, we installed weight scales on each shelf. Each shelf contains multiple types of items, and all instances of a single item were placed on a single shelf at the beginning of the experiment (during the experiment, we asked users to move items from one shelf to another to try to confuse the system, see below). In total, our shelves contained 19 different types of items. Finally, we placed the RFID reader\u2019s antenna behind the shelf, and we attached RFID tags to all instances of 8 types of items. Trace collection. We then recorded five hours worth of sensor data from 41 users who registered their faces with Grab. We asked these shoppers to test the system in whatever way they wished to (Figure 13(b)). The shoppers selected from among the 19 different types of items, and interacted with the items (either removing or depositing them) a total of 307 times. Our cameras saw an average of 2.1 shoppers and a maximum of 8 shoppers in a given frame. In total, we collected over 10GB of video and sensor data, using which we analyze Grab\u2019 performance. Adversarial actions. During the experiment, we also asked shoppers to perform three kinds of adversarial actions. 
(1) 9 \fItem-switching: The shopper takes two items of similar color or similar weight and then puts one back, or takes one item and puts it on a different scale; (2) Hand-hiding: The shopper hides the hand from the camera and grabs the item; (3) Sensor-tampering: The shopper presses the weight scale with their hand. Of the 307 recorded actions, nearly 40% were adversarial: 53 item-switching, 34 hand-hiding, and 31 sensortampering actions. Metrics. To evaluate Grab\u2019s accuracy, we use precision and recall. In our context, precision is the ratio of true positives to the sum of true positives and false positives. Recall is the ratio of true positives to the sum of true positives and false negatives. For example, suppose a shopper picks items A, B, and C, but Grab shows that she picks items A, B, D, and E. A and B are correctly detected so the true positives are 2, but C is missing and is a false negative. The customer is wrongly associated with D and E so there are 2 false positives. In this example, recall is 2/3 and precision is 2/4. 4.3 Accuracy of Grab Overall precision and recall. Figure 14(a) shows the precision and recall of Grab, and quantifies the impact of using different combinations of sensors: using vision only (V Only), weight only (W only), RFID only (R only) or all possible combinations of two of these sensors. Across our entire trace, Grab achieves a recall of nearly 94% and a precision of over 91%. This is remarkable, because in our dataset nearly 40% of the actions are adversarial (\u00a74.2). We dissect Grab failures below and show how these are within the loss margins that retailers face today due to theft or faulty equipment. Using only a single sensor4 degrades recall by 12-37% and precision by 16-36% (Figure 14(a)). This illustrates the importance of fusing readings from multiple sensors for associating proximity events with items (\u00a73.2). The biggest loss of accuracy comes from using only the vision sensors to detect items. RFID sensors perform the best, since RFID can accurately determine which item was selected5. Even so, an RFID-only deployment has 12% lower recall and 16% lower precision. Of the sensor combinations, using weight and RFID sensors together comes closest to the recall performance of the complete system, losing only about 3% in recall, but 10% in precision. Adversarial actions. Figure 14(b) shows precision and recall for only those actions in which users tried to switch items. In these cases, Grab is able to achieve nearly 90% precision and recall, while the best single sensor (RFID) has 7% lower 4For computing the association probabilities \u00a73.2. Cameras are still used for identity tracking and proximity event detection. 5In general, since RFID is expensive, not all objects in a store will have RFID tags. In our deployment, a little less than half of the item types were tagged, and these numbers are calculated only for tagged items. recall and 13% lower precision, and the best 2-sensor combination (weight and RFID) has 5% lower precision and recall. As expected, using a vision sensor or weight sensor alone has unacceptable performance because the vision sensor cannot distinguish between items that look alike and the weight sensor cannot distinguish items of similar weight. Figure 14(c) shows precision and recall for only those actions in which users tried to hide the hand from the camera when picking up items. 
In these cases, Grab estimates proximity events from the proximity of the ankle joint to the shelf (\u00a73.2) and achieves a precision of 80% and a recall of 85%. In the future, we hope to explore cross-camera fusion to be more robust to these kinds of events. Of the single sensors, weight and RFID both have more than 24% lower recall and precision than Grab. Even the best double sensor combination has 12% lower recall and 20% lower precision. Finally, Figure 14(d) shows precision and recall only for those items in which the user trying to tamper with the weight sensors. In these cases, Grab is able to achieve nearly 87% recall and 80% precision. RFID, the best single sensor, has more than 10% lower precision and recall, while predictably, vision and RFID have the best double sensor performance with 5% lower recall and comparable precision to Grab. In summary, Grab has slightly lower precision and recall for the adversarial cases and these can be improved with algorithmic improvements, its overall precision and recall on a trace with nearly 40% adversarial actions is over 91%. When we analyze only the non-adversarial actions, Grab has a precision of 95.8% and a recall of 97.2%. Taxonomy of Grab failures. Grab is unable to recall 19 of the 307 events in our trace. These failures fall into two categories: those caused by identity tracking, and those by action recognition. Five of the 19 failures are caused either by wrong face identification (2 in number), false pose detection (2 in number) (Figure 13(c)), or errors in pose tracking (one). The remaining failures are all caused by inaccuracy in action recognition, and fall into three categories. First, Grab uses color histograms to detect items (\u00a73.2), but these can be sensitive to lighting conditions (e.g., a shopper takes an item from one shelf and puts it in another when the lighting condition is slightly different) and occlusion (e.g., a shopper deposits an item into a group of other items which partially occlude the items). Incomplete background subtraction can also reduce the accuracy of item detection. Second, our weight scales were robust to noise but sometimes still could not distinguish between items of similar, but not identical, weight. Third, our RFID-to-proximity event association failed at times when the tag\u2019s RFID signal disappeared for a short time from the reader, possibly because the tag was temporarily occluded by 10 \fFigure 14: Grab has high precision and recall across our entire trace (a), relative to other alternatives that only use a subset of sensors (W: Weight; V: Vision; R: RFID), even under adversarial actions such as (b) Item-switching; (c) Hand-hiding; (d) Sensor-Tampering . other items. Each of these failure types indicates directions or future work for Grab. Contextualizing the results. From the precision/recall results, it is difficult to know if Grab is within the realm of feasibility for use in today\u2019s retail stores. Grab\u2019s failures fall into two categories: Grab associates the wrong item with a shopper, or it associates an item with the wrong shopper. The first can result in inventory loss, the second in overcharging a customer. A survey of retailers [27] estimates the inventory loss ratio (if a store\u2019s total sales are $100, but $110 worth of goods were taken from the store, the inventory loss rate is 10%) in today\u2019s stores to be 1.44%. In our experiments, Grab\u2019s failures result in only 0.79% inventory loss. 
Another study [34] suggests that faulty scanners can result in up to 3% overcharges on average, per customer. In our experiments, we see a 2.8% overcharge rate. These results are encouraging and suggest that Grab may be with the realm of feasibility, but larger scale experiments are needed to confirm this. Additional investments in sensors and cameras, and algorithm improvements, could further improve Grab\u2019s accuracy. 4.4 The Importance of Efficiency Grab is designed to process data in near real-time so that customers can be billed automatically as soon as they leave the store. For this, computational efficiency is important to lower cost (\u00a74.5), but also to achieve high processing rates in order to maintain accuracy. Impact of lower frame rates. If Grab is unable to achieve a high enough frame rate for processing video frames, it can have significantly lower accuracy. At lower frame rates, Grab can fail in three ways. First, a customer\u2019s face may not be visible at the beginning of the track in one camera. It usually takes several seconds before the camera can capture and identify the face. At lower frame rates, Grab may not capture frames where the shopper\u2019s face is visible to the camera, so it might Figure 15: Grab needs a frame rate of at least 10 fps for sufficient accuracy, reducing identity switches and identification delay. take longer for it to identify the shopper. Figure 15(a) shows that this identification delay decreases with increasing frame rate approaching sub-second times at about 10 fps. Second, at lower frame rates, the shopper moves a greater distance between frames, increasing the likelihood of identity switches when the tracking algorithm switches the identity of the shopper from one registered user to another. Figure 15(b) shows that the ratio of identity switches approaches negligible values only after about 8 fps. Finally, at lower frame rates, Grab may not be able to capture the complete movement of the hand towards the shelf, resulting in incorrect determination of proximity events and therefore reduced overall accuracy. Figure 15(c) shows precision6 approaches 90% only above 10 fps. Infeasibility of a DNN-only architecture. In \u00a73 we argued that, for efficiency, Grab could not use separate DNNs for different tasks such as identification, tracking, and action 6In this and subsequent sections, we focus on precision, since it is lower than recall (\u00a74.3), and so provides a better bound on Grab performance. 11 \frecognition. To validate this argument, we ran the state-ofthe-art open-source DNNs for each of these tasks on our data set. These DNNs were at the top of the leader-boards for various recent vision challenge competitions [13, 25, 26]. We computed both the average frame rate and the precision achieved by these DNNs on our data (Table 1). For face detection, our accuracy measures the precision of face identification. The OpenFace [42] DNN can process 15 fps and achieve the precision of 95%. For people detection, our accuracy measures the recall of bounding boxes between different frames. Yolo [79] can process at a high frame rate but achieves only 91% precision, while Mask-RCNN [56] achieves 97% precision, but at an unacceptable 5 fps. The DNNs for people tracking showed much worse behavior than Grab, which can achieve an identity switch rate of about 0.027 at 10 fps, while the best existing system, DeepSORT [92] has a higher frame rate but a much higher identity switch rate. 
The fastest gesture recognition DNN is OpenPose [47] (whose body frame capabilities we use), but its performance is unacceptable, with low (77%) accuracy. The best gesture tracking DNN, PoseTrack [62], has a very low frame rate. Thus, today\u2019s DNN technology either has very low frame rates or low accuracy for individual tasks. Of course, DNNs might improve over time along both of these dimensions. However, even if, for each of the four tasks, DNNs can achieve, say, 20 fps and 95% accuracy, when we run these on a single GPU, we can at best achieve 5 fps, and an accuracy of 0.954 = 0.81. By contrast, Grab is able to process a single camera on a single GPU at over 15 fps (Figure 16), achieving over 90% precision and recall (Figure 14(a)). Face Detection FPS Accuracy OpenFace [42] 15 95.1 RPN [54] 5.8 95.1 People detection FPS Accuracy YOLO-9000 [79] 35 91.0 Mask-RCNN [56] 5 97.4 People tracking FPS Avg ID switch MDP [59] 1.43 1.3 DeepSORT [92] 17 0.8 Gesture Recognition FPS Accuracy* OpenPose [47] 15.5 77.3 DeeperCut [61] 0.09 88 Gesture Tracking FPS Avg ID switch PoseTrack [62] 1.6 1.8 Table 1: State-of-the-art DNNs for many of Grab\u2019s tasks either have low frame rates or insufficient accuracy. (* Average pose precision on MPII Single Person Dataset) 4.5 GPU multiplexing In the results presented so far, Grab processes each camera on a separate GPU. The bottleneck in Grab is pose detection, Module Avg time per frame (ms) Pose detection 63.3 Face detection 4.1 Face identification 7 Pose tracking 5.0 Table 2: Pose detection is the bottleneck in Grab. which requires about 63 ms per frame: our other components require less than 7 ms each (Table 2). In \u00a73.3, we discussed an optimization that uses a fast feature tracker to multiplex multiple cameras on a single GPU. This technique can sacrifice some accuracy, and we are interested in determining the sweet spot between multiplexing and accuracy. Figure 16 quantifies the performance of our GPU multiplexing optimization. Figure 16(a) shows that Grab can support up to 4 cameras with a frame rate of 10 fps or higher with fast feature tracking; without it, only a single camera can be supported on the GPU (the horizontal line in the figure represents 10 fps). Up to 4 cameras, Figure 16(b) shows that the precision can be maintained at nearly 90% (i.e., negligible loss of precision). Without fast feature tracking, multiplexing multiple cameras on a single GPU reduces the effective frame rate at which each camera can be processed, reducing accuracy for 4 cameras to under 60%. Thus, with GPU multiplexing using fast feature tracking, Grab can reduce the investment in GPUs by 4\u00d7. Figure 16: GPU multiplexing can support up to 4 multiple cameras at frame rates of 10 fps or more, without noticeable lack of accuracy in action detection. 4.6 Evaluating Design Choices In this section, we experimentally validate design choices and optimizations in identification and tracking. Identification. Customer identification in Grab consists of three steps (\u00a73.1): face detection, feature extraction, and feature classification. For face detection, Grab adjusts the bounding box from OpenPose output. It could have used the default OpenPose output box or run a separate neural network for face detection. Table 3 shows that our design choice preserves detection accuracy while being an order of magnitude faster. 
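The single-GPU arithmetic above (four separate DNNs sharing one GPU give roughly 5 fps and 0.95^4 = 0.81 end-to-end accuracy, while multiplexing with a fast feature tracker keeps several cameras above 10 fps) can be sketched as follows; the tracker cost and interleaving ratio below are illustrative assumptions, not measurements from the paper.

```python
# Illustrative sketch of the single-GPU budget discussed above. Running one DNN
# per task serially adds the per-frame times and multiplies the per-stage
# accuracies; the "say, 20 fps and 95% accuracy" case from the text is used.

def serial_pipeline(stages):
    """stages: list of (fps_when_run_alone, accuracy) for each task."""
    per_frame_s = sum(1.0 / fps for fps, _ in stages)   # every frame runs all stages
    acc = 1.0
    for _, a in stages:
        acc *= a
    return 1.0 / per_frame_s, acc

fps, acc = serial_pipeline([(20, 0.95)] * 4)
print(f"{fps:.1f} fps, {acc:.2f} accuracy")             # -> 5.0 fps, 0.81 accuracy

# GPU multiplexing: if full pose detection costs ~63.3 ms/frame but a cheap
# feature tracker stands in on the frames in between, several cameras can share
# one GPU while each still sees the ~10 fps needed for accuracy. The tracker
# cost and interleaving ratio here are assumptions, not measurements.
def cameras_per_gpu(heavy_ms, light_ms, heavy_every_n, target_fps):
    avg_ms = (heavy_ms + light_ms * (heavy_every_n - 1)) / heavy_every_n
    return int(1000.0 / (avg_ms * target_fps))

print(cameras_per_gpu(heavy_ms=63.3, light_ms=5.0, heavy_every_n=3, target_fps=10))  # -> 4
```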
12 \fFor feature extraction, we compared our ResNet face features with another face feature (FaceNet), with a neural net generated body feature, and with a body color histogram. Table 4 shows that our approach has the highest accuracy. Finally, for feature classification, we tried three approaches: comparing features\u2019 cosine distance, using kNN, or using a simple neural network. Their re-training time7, running speed, and accuracy, are shown in Table 5. We can see that kNN has the best accuracy with retraining overhead of 2 s and classification overhead less than 2 ms. Speed (ms/img) Accuracy (%) Adjusted box (Grab) 4.1 95.1 Original box from pose <1 83.0 Box from DNN model 93 95.1 Table 3: Adjusting the face bounding box in Grab has comparable accuracy to a neural network based approach, while having significantly lower overhead. Method Accuracy Grab\u2019s model (ResNet) 95.1% FaceNet 89.4% Body deep feature 51.2% Color histogram 31.7% Table 4: Grab\u2019s ResNet based face features have the highest accuracy. Tracking. Replacing our pose tracker with a bounding box tracker [92] can result in below 50% precision. Removing the limb association optimization drops precision by about 11%, and removing the optimization that estimates proximity when the hand is not visible reduces precision by over 7%. Finally, removing lazy tracking, which permits accurate tracking even in the presence of occlusions can reduce precision by over 15%. Thus, each optimization is necessary to achieve high precision. 5 RELATED WORK We are not aware of published work on end-to-end design and evaluation of cashier-free shopping. Commercial cashier-free shopping systems. Amazon Go was the first set of stores to permit cashier-free shopping. Several other companies have deployed demo stores, including Standard Cognition [33], Taobao [35], and Bingobox [8]. Amazon Go and Standard Cognition use deep learning and computer vision to determine shopper-to-item association ([2, 17, 22, 23]). Amazon Go does not use RFID [2, 22] but needs many ceiling-mounted cameras. Imagr[20] uses a camera-equipped cart to recognize the items put into the cart by the user. Alibaba and Bingobox use RFID reader to scan all items held by the customer at a \"checkout gate\" ([11, 12]). Grab incorporates many of these elements in its design, but 7Fast re-training is essential to minimize the time the customer needs to wait between registration and shopping. Cosine Dist kNN Neural Net Retraining latency (s) 0 2.1 68.6 Classification latency (ms) 0.1* 1.9 10.7 Accuracy (%) 75.6 95.1 92.8 Table 5: Grab\u2019s kNN-based algorithm has highest accuracy while having low retraining and classification latency. (* Cosine distance runtime is 0.1 ms per person) uses a judicious combination of complementary sensors (vision, RFID, weight scales). Person identification. Person (re)-identification has used face features and body features. Body-feature-based re-identification [43, 49, 94] can achieve the precision of up to 80%, insufficient for cashier-free shopping. Proprietary face feature based re-identification [5, 16, 31, 38] can reach 99% precision. Recent academic research using face features has achieved an accuracy of more than 95% on public datasets, but such systems are either unavailable [50, 55, 95] or too slow [74, 91]. Grab uses fast feature-based face re-identification with comparable accuracy while using a pose tracker to accurately bound the face (\u00a73.1). People tracking. 
Bounding box based trackers [71, 83, 92] can track shopper movement, but can be less effective in crowds (\u00a74.4) since they do not detect limbs and hands. Some pose trackers [60, 63] can do pose detection and tracking at same time, but are too slow for Grab (\u00a74.4) which uses a skeleton-based pose tracker both for identity tracking and gesture recognition. Action detection. Action detection is an alternative approach to identifying shopping actions. Publicly available state-of-the-art DNN-based solutions [52, 57, 65, 88] have not yet been trained for shopping actions, so their precision and recall in our setting is low. Item detection and tracking. Prior work has explored item identification using Google Glass [53] but such devices are not widely deployed. RFID tag localization can be used for item tracking [64, 86, 87] but that line of work does not consider frequent tag movements, tag occlusion, or other adversarial actions. Vision-based object detectors [51, 56, 80] can be used to detect items, but need to be trained for shopping items and can be ineffective under occlusions and poor lighting (\u00a74.3). Single-instance object detection scales better for training items but has low accuracy [58, 66]. 6" + } + ], + "Yi Hu": [ + { + "url": "http://arxiv.org/abs/2402.17709v1", + "title": "Case-Based or Rule-Based: How Do Transformers Do the Math?", + "abstract": "Despite the impressive performance in a variety of complex tasks, modern\nlarge language models (LLMs) still have trouble dealing with some math problems\nthat are simple and intuitive for humans, such as addition. While we can easily\nlearn basic rules of addition and apply them to new problems of any length,\nLLMs struggle to do the same. Instead, they may rely on similar \"cases\" seen in\nthe training corpus for help. We define these two different reasoning\nmechanisms as \"rule-based reasoning\" and \"case-based reasoning\". Since\nrule-based reasoning is essential for acquiring the systematic generalization\nability, we aim to explore exactly whether transformers use rule-based or\ncase-based reasoning for math problems. Through carefully designed intervention\nexperiments on five math tasks, we confirm that transformers are performing\ncase-based reasoning, no matter whether scratchpad is used, which aligns with\nthe previous observations that transformers use subgraph matching/shortcut\nlearning to reason. To mitigate such problems, we propose a Rule-Following\nFine-Tuning (RFFT) technique to teach transformers to perform rule-based\nreasoning. Specifically, we provide explicit rules in the input and then\ninstruct transformers to recite and follow the rules step by step. Through\nRFFT, we successfully enable LLMs fine-tuned on 1-5 digit addition to\ngeneralize to up to 12-digit addition with over 95% accuracy, which is over 40%\nhigher than scratchpad. 
The significant improvement demonstrates that teaching\nLLMs to explicitly use rules helps them learn rule-based reasoning and\ngeneralize better in length.", + "authors": "Yi Hu, Xiaojuan Tang, Haotong Yang, Muhan Zhang", + "published": "2024-02-27", + "updated": "2024-02-27", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "main_content": "Introduction Large language models (LLMs) such as ChatGPT (OpenAI, 2022) and GPT-4 (OpenAI, 2023) have exhibited remarkable capabilities in a wide range of tasks from some classical NLP tasks such as translation and summarization to complex reasoning tasks about commonsense, math, logic and so on (OpenAI, 2022; 2023; Brown et al., 2020; Touvron et al., 2023a; Chowdhery et al., 2023; Thoppilan et al., 2022). Some people believe LLMs present a seemingly promising route to AGI (Bubeck et al., 2023). At the same time, some theoretical work also give their support to this applauded prospect by proving that transformer-based LLMs can learn an intrinsic mechanism for some complex tasks such as linear regression (Aky\u00a8 urek et al., 2023), dynamic programming (Feng et al., 2023) or modular addition (Zhong et al., 2023; Nanda et al., 2023; Power et al., 2022; Liu et al., 2022). Although LLMs have demonstrated impressive results and possibility both in performance and theory, they are, surprisingly, still puzzled by some basic calculation tasks (Qin et al., 2023; Bian et al., 2023; Koralus & Wang-Ma\u00b4 scianica, 2023; Dziri et al., 2023; Xu et al., 2023; Zhou et al., 2023b). Notably, there have been a line of work paying efforts to teach transformers to perform addition of two large numbers (Nye et al., 2021; Qian et al., 2022; Zhou et al., 2022; 2023b; Shen et al., 2023; Kazemnejad et al., 2023; Lee et al., 2023; Zhou et al., 2024). Despite ongoing efforts, transformers have yet to successfully generalize to new inputs that are significantly longer than the training data, without relying on external tools. In contrast, we humans can easily solve addition of two numbers of any length after learning basic rules of column addition. Language models often astonish us with their proficiency in complex tasks, yet they can also perplex us with unexpected failures in seemingly straightforward tasks. This dichotomy in performance raises intriguing questions about their underlying reasoning mechanisms. Previous work has argued over the open questions. Nanda et al. (2023); Zhong et al. (2023) study how transformers do modular addition and claim that they derive certain algorithms to solve the problem, such as the clock algorithm where input numbers are represented as angles and then added together. However, another line of work (Dziri et al., 2023; Wu et al., 2023; Zhang et al., 2023) worries that the 1 arXiv:2402.17709v1 [cs.AI] 27 Feb 2024 \fCase-based or Rule-based: How Do Transformers Do the Math? 1. adding unit\u2019s digit: 2+7=9, carry 0; 2. adding ten\u2019s digit: 4+5=9, carry 0; \u2192the result is 99 case-based reasoning rule-based reasoning QUESTION: 42+57=? 42+57=? 42+56=98 52+57=109 32+57=89 42+58=100 42+67=109 42+47=89 43+57=100 41+57=98 RULE: adding digit by digit referring to similar cases to reason Figure 1. Illustrations of case-based and rule-based reasoning. impressive reasoning ability of LLMs can be mainly attributed to the extensive training corpus. 
They argue that transformers are just recalling similar instances from seen data to solve reasoning tasks, instead of capturing certain underlying rules and applying them to new problems. In this paper, we study the hotly-debated questions in a more direct way through intervention experiments. We hypothesize that transformers significantly depend on certain cases in training data to do math reasoning, which we denote as \u201ccase-based reasoning\u201d. It should be noted that here by \u201ccasebased reasoning\u201d we do not mean a non-parameterized machine learning algorithm that really retrieves similar cases from a database. Rather, we describe a behavior that transformers show in reasoning. Specifically, if a model employs case-based reasoning, the removal of those dependent cases from the training set would significantly affect its accuracy on certain test examples. On the contrary, if a model does not rely on similar cases but instead masters the underlying rules for reasoning\u2014a mechanism we define as \u201crule-based reasoning\u201d\u2014the absence of these cases should not affect the performance. An illustration of these two contrasting reasoning paradigms is shown in Figure 1. To verify our hypothesis, we investigate five basic and representative math tasks: addition, modular addition, base addition, linear regression, and chicken & rabbit problems. As a sanity check for each task, we first make sure that the model achieves 100% performance on the test set when the dataset is randomly split. Then, we artificially split the dataset by leaving out some continuous regions of examples as the test set with the remaining ones as the training set and re-train the model. This method ensures that most test examples do not have close training cases to support their inference. Our results show that in all tasks, the model performance drops significantly in the second setting, despite that the size of the training set (above 95% of the whole dataset) is entirely sufficient to achieve 100% accuracy under random split. See Figure 2 and Figure 3 for example. The results of our intervention experiments provide direct evidence suggesting that transformers perform case-based reasoning for math problems. This also aligns with previous work (Dziri et al., 2023) showing transformers rely on seen computation subgraphs for multi-step reasoning. However, there are notable distinctions in our approach and findings: Dziri et al. (2023) look at the frequency difference of seen subgraphs in correct and incorrect samples respectively as indirect evidence that models rely on seen subgraphs to generate correct answers, while we present direct evidence of case-based reasoning by showing the performance gap before and after removing the cases. Besides, we study both single-step and multi-step reasoning while Dziri et al. (2023) mainly focus on compositional reasoning. So why is rule-based reasoning so important? Rule-based reasoning is essential for models to achieve systematic and length generalization, so that they can be applied to new, unseen scenarios without re-training. As our last contribution, we propose a method aimed at shifting transformers from case-based to rule-based reasoning, thereby fostering a more robust and generalizable reasoning paradigm. Focusing again on the addition of large numbers, we propose a technique that teaches transformers to follow rules step by step. 
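The contrast in Figure 1 between the two mechanisms can be made concrete with a toy sketch (ours, not code from the paper): a rule-based solver applies the column-addition rule to any input, while a case-based "solver" can only patch up the nearest memorized example. The function names and the tiny `memory` dictionary are illustrative.

```python
# Toy illustration of the Figure 1 contrast (our sketch, not the paper's code).

def rule_based_add(a: int, b: int) -> int:
    digits_a, digits_b = [int(d) for d in str(a)], [int(d) for d in str(b)]
    result, carry = [], 0
    while digits_a or digits_b or carry:
        d1 = digits_a.pop() if digits_a else 0
        d2 = digits_b.pop() if digits_b else 0
        total = d1 + d2 + carry          # unit rule: add one column plus the carry
        result.append(total % 10)
        carry = total // 10
    return int("".join(str(d) for d in reversed(result)))

def case_based_add(a: int, b: int, memory: dict) -> int:
    # Answer from the closest memorized case; no notion of the addition rule.
    (na, nb), ans = min(memory.items(),
                        key=lambda kv: abs(kv[0][0] - a) + abs(kv[0][1] - b))
    return ans + (a - na) + (b - nb)     # naive patch-up from the retrieved case

memory = {(42, 56): 98, (41, 57): 98, (43, 57): 100}
print(rule_based_add(42, 57))            # 99, and the same rule works for any length
print(case_based_add(42, 57, memory))    # 99 here, but only because close cases exist
```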
Specifically, we explicitly put rules in the input and enforce the model to step-by-step recite and follow the necessary rule to complete reasoning, which we call Rule-Following Fine-Tuning (RFFT). Through RFFT, LLMs trained on addition of numbers of 1-5 digits successfully generalize to up to 12-digit addition, verifying its effectiveness in teaching LLMs to perform rule-based reasoning. It is noteworthy that the training set is as small as 100 samples, demonstrating that RFFT enables models with sufficient fundamental capabilities to grasp the rules through a small set of examples, which aligns with human\u2019s few-shot rule learning ability. 2. Related Work LLM reasoning. Recent years have seen enormous improvement in LLM capabilities. LLMs show impressive performance in a wide range of tasks (OpenAI, 2022; 2023; Brown et al., 2020; Touvron et al., 2023a; Chowdhery et al., 2023; Thoppilan et al., 2022). However, various tasks of complex reasoning are still challenging for LLMs (Srivastava et al., 2022). In particular, Dziri et al. (2023); Xu et al. (2023); Zhou et al. (2023b; 2024) show that LMs still struggle with math reasoning, even with basic calculation operations. Previous work has come up with methods to simplify the tasks by decomposing them to simpler intermediate steps. For example, Nye et al. (2021); Zhou et al. (2022) introduce finetuning models with cases containing scratchpads to improve arithmetic reasoning of LLMs. Wei et al. (2023); Kojima et al. (2023); Zhou et al. (2023a); Khot et al. (2023); Zhu et al. (2023) propose various prompting methods to teach the model to generate rationales before the final an2 \fCase-based or Rule-based: How Do Transformers Do the Math? swer with in-context learning. However, even with these methods, LLMs are still far from completely solving arithmetic reasoning tasks. The failures inspire us to study how exactly LLMs perform math reasoning. Besides, we study the effects of the methods of task simplification on casebased reasoning in our paper. Specially, Zhu et al. (2023) improve the model performance by providing the cases the reasoning process may depend on in the input, which in fact aligns with our case-based reasoning paradigm. Memorization or generalization. As reasoning capabilities of LLMs can be mainly attributed to the scaling effects of the training corpus and the model size, the question of whether the seemingly impressive reasoning abilities are the results of capturing general rules lying under the natural language or just reciting seen cases from the huge training corpus is drawing more and more attention. Wu et al. (2023); Zhang et al. (2023) investigate into the gap of capabilities of LLMs to conduct reasoning over factual and counterfactual situations and show the significant performance drop in counterfactual cases, suggesting LLMs are reciting answers of common cases in the training corpus. A recent work Dziri et al. (2023) models reasoning tasks as computation graphs and show empirically that LLMs conduct reasoning via subgraph matching instead of developing systematic problem-solving skills. We study the question of interest in a straightforward way by removing certain samples from the training set and show significant performance gap. By tracing back to the effective training datapoints, we confirm that transformer-based LLMs are relying on surrounding cases in the training set to do math reasoning instead of learning generalizable rules. On the other hand, Hou et al. 
(2023) study the problem through probing the models\u2019 attention patterns and claim that transformers are implementing reasoning trees in the reasoning process. Yang et al. (2023) propose that LLM\u2019s reasoning ability comes from memorizing some templates, which are some fixed parts in the reasoning process, enabling generalization within tasks. Grokking. Recent work has shown the phenomenon of model capturing generalizable rules of arithmetic reasoning tasks long after overfitting the training set, known as grokking (Power et al., 2022; Liu et al., 2022). Nanda et al. (2023); Zhong et al. (2023) study the algorithms transformers learn in the task of modular addition. The series of work show through experiments that the model learns systematic rules to solve modular addition through embedding the numbers as angles and operating on their trigonometric functions. We also try to observe the phenomenon in the same setting as in Zhong et al. (2023) with certain samples removed from the training set. Although we observe the growth of test performance after the model overfitting the training set, there is still a wide gap between training accuracy and test accuracy, suggesting the model fails to learn the rules. This phenomenon indicates that even the ability to learn and apply generalizable arithmetic algorithms in grokking deeply depends on certain cases in the training set. The results and experiments are described in Appendix D. Theoretical expressiveness. (Feng et al., 2023; Aky\u00a8 urek et al., 2023; Dai et al., 2023; von Oswald et al., 2023; Garg et al., 2023) There have been a large number of work studying the expressive power of transformers. Yun et al. (2020) proved that transformers are universal approximators of continuous sequence-to-sequence functions on a compact domain. More recently, Garg et al. (2023) reveals that autoregressive transformers can learn basic functions including sparse linear functions, MLPs and decision trees. Furthermore, Aky\u00a8 urek et al. (2023) demonstrates that transformers can in-context learn linear regression by implementing the algorithm of gradient descent (Dai et al., 2023; von Oswald et al., 2023). Feng et al. (2023) shows how chain-of-thought prompting help transformers complete tasks including basic calculations, linear equations and dynamic programming. In our work, we conduct empirical experiments and show how auto-regressive transformers do basic math reasoning in practical. We include tasks like addition, linear regression and linear functions studied in the theoretical work. Length generalization. Length generalization calls for the ability to generalize to longer sequences than seen in training samples, which remains a challenge for transformers (Abbe et al., 2023; Anil et al., 2022; Zhou et al., 2023b). Previous work has shown that data format and positional encoding are crucial to length generalization ability through experiments on small transformers across various tasks such as arithmetic reasoning (Lee et al., 2023; Kazemnejad et al., 2023; Shen et al., 2023; Zhou et al., 2023b; 2024). However, these works require specifically designed tricks for each task and train small transformers from scratch. Our work explores length generalization in the settings of fine-tuning pre-trained LLMs and shows that the technique of RFFT we propose in \u00a75 greatly enhances length generalization. Furthermore, we demonstrate that the models with sufficient fundamental capabilities can generalize well with only a small set of training samples. 
3. Case-based and Rule-based Reasoning One main focus of our paper is to discuss whether autoregressive transformer-based language models are solving basic math problems based on cases or rules. In this section, we intuitively motivate these two reasoning paradigms and provide a direct method to distinguish them through data intervention. 3 \fCase-based or Rule-based: How Do Transformers Do the Math? Case-based Reasoning. A model engaging in case-based reasoning exhibits sensitivity in its test performance to the division of the dataset into training and test sets. Specifically, if a model relies on shortcuts, either by referencing similar cases encountered during training or by merely repeating previously seen examples to solve new problems, its effectiveness diminishes when these cases are removed from the training set. This reduction in relevant training data results in a notable decrease in the model\u2019s ability to accurately respond to test questions. Rule-based Reasoning. In contrast to case-based reasoning, the paradigm of rule-based reasoning allows the model to learn the underlying rules, which are insensitive to the data split. For example, if a model is developing the systematic rules of addition during the training process, its test performance should not be affected severely if we leave some of the training samples out of the training set and add some others to keep the same training-test ratio. It should be noted that the training set should always provide the necessities for the model to learn the underlying rule. For example, the training set should at least cover all the tokens used in the test set in order to develop a systematic rule that applies to the whole dataset. In all our experiments, we carefully design the setups to ensure the above. Based on the above discrimination, we propose a natural method to determine whether a model is performing casebased reasoning or rule-based reasoning through data intervention. That is, we artificially remove certain regions of the training data to see its effect on test performance. For example, in math reasoning tasks such as addition, if the model is severely relying on some seen cases to do reasoning, a natural hypothesis is that it is relying on some surrounding cases of the test question, as shown in Figure 1 left. Based on the hypothesis, we can remove a small set of surrounding cases from the training set and see whether the model can still answer the question. If it succeeds when we leave the surrounding cases in the training set but fails when we take them out, we can judge that the model is relying on the small set of surrounding cases to do math reasoning. Otherwise, if the model can perform well in the test set no matter how we split the dataset, it is likely performing rule-based reasoning which guarantees robust generalization independent of dataset split. It is important to recognize that rule-based reasoning also involves a degree of memorization. For example, in the process of digit-by-digit addition, we inherently rely on memorized knowledge of possible single-digit sums. Take the calculation of 42+57 as an instance; it is essential to know that 2+7 equals 9 and 4+5 equals 9. We refer to this fundamental knowledge required for rule-based reasoning as \u201cunit rules\u201d. These unit rules are tailored to specific reasoning patterns. The more basic these unit rules are, the less memorization the reasoning process requires, indicating a more pronounced reliance on rule-based reasoning. 
Conversely, if a model relies on case-based reasoning through sheer memorization\u2014learning that 42+57 equals 99 only by encountering this exact case, then the unit rules for this pattern of reasoning are the cases themselves. So how do we judge whether the unit rules are elemental enough to ensure a rule-based rather than case-based reasoning? We define the model is performing rule-based reasoning if the set of unit rules the model requires to solve the task is finite and can be easily covered by a training set of a reasonable size. Otherwise, if it is hard or even impossible for a training set to cover all the unit rules, we consider the model performing case-based reasoning. 4. Transformers are Doing Case-based Reasoning In this section, we provide direct evidence that transformers perform case-based reasoning through intervention experiments on five representative math tasks. 4.1. Experimental Setup Datasets We focus on binary operations, which take two numbers a, b as inputs. Denoting c as the target label, we construct datasets like D = {((ai, bi), ci)} for five math tasks including addition, modular addition, base addition, linear regression, and chicken & rabbit problem: \u2022 Addition. The input to the transformer is \u201ca + b\u201d, the output is \u201cc\u201d, where c = a + b. a, b range from 0 to 99. \u2022 Modular addition. The input to the transformer is \u201ca + b\u201d, the output is \u201cc\u201d, where c = a+b mod P. a, b range from 0 to 112. We set P = 113 as a constant. \u2022 Base addition. This task is the same as addition, except that all numbers a, b, c are expressed in the base-n numerical system. In this paper, we set n = 9 as a constant. \u2022 Linear regression. This task requires the transformer to learn a linear regression function. The input is \u201c(a, b) =\u201d, the output is \u201cc\u201d, where c = m \u00b7 a + n \u00b7 b + p. a, b range from 0 to 99. We set m = 1, n = 2, p = 3 as constants. \u2022 Chicken & rabbit problem. We construct a dataset of chicken & rabbit problems with natural language questions and answers. The input to the transformer is \u201cQ: Rabbits have 4 legs and 1 head. Chickens have 2 legs and 1 head. There are a legs and b heads on the farm. How many rabbits and chickens are there?\u201d. The output is \u201cA: There are c rabbits and d chickens.\u201d, where c = (a \u22122b)/2, d = (4b \u2212a)/2. b ranges from 0 to 99. For each b, a ranges from 2b to 4b with a step of 2. It is a representative task involving solving a system of linear equations. 4 \fCase-based or Rule-based: How Do Transformers Do the Math? 0 8 16 24 32 40 48 56 64 72 80 88 96 0 7 14 21 28 35 42 49 56 63 70 77 84 91 98 addition 0 9 18 27 36 45 54 63 72 81 90 99 108 0 8 16 24 32 40 48 56 64 72 80 88 96 104 112 mod_addition 0 8 16 24 32 40 48 56 64 72 80 88 96 0 7 14 21 28 35 42 49 56 63 70 77 84 91 98 base_addition 0 8 16 24 32 40 48 56 64 72 80 88 96 0 7 14 21 28 35 42 49 56 63 70 77 84 91 98 linear_regression 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 0.6 0.8 1.0 Figure 2. Accuracy of Leave-Square-Out method on addition, modular addition, base addition, and linear regression. The vertical and horizontal axes are a and b, respectively. The area inside red boxes represents the test squares. During generation, we set the model temperature to 1 and sample 10 generations to evaluate the accuracy on each test point. We only leave one test square out in this experiment. 
The square center (ak, bk) is (50, 50) for addition, base addition and linear regression and (56, 56) for modular addition. Figure 3. We randomly select 3 centers of test squares (ak, bk) and corresponding lengths lk ranging from 20 to 40 to see whether the locations and the side lengths affect the case-based reasoning behavior for datasets including addition, modular addition, base addition and linear regression. The area inside red boxes represents the test squares. We sample 10 generations at each data point and report the accuracy. The figure shows that holes consistently appear with the locations and side lengths of the test squares varying. Models We use GPT-2, GPT-2-medium (Radford et al., 2019), and Llama-2-7B (Touvron et al., 2023b) in this section. We fine-tune GPT-2 by default respectively on each dataset for 100 epochs under different training-test splits, with batch size set to 30 and learning rate set to 10^-4. 4.2. Method Leave-Square-Out To test whether the model is relying on certain cases to solve the problem, we need to first locate such cases and then remove them from the training set to see whether they affect the model performance. Our hypothesis is that when facing a certain test sample, transformers tend to rely on training samples \u201cclose\u201d to the test sample to perform reasoning. Thus, we construct a square test set to isolate the test samples from the training samples. For example, suppose the square center is (ak, bk) and the side length is lk, we construct a square test set as Tk = {((ai, bi), ci) | ak \u2212 lk/2 \u2264 ai \u2264 ak + lk/2, bk \u2212 lk/2 \u2264 bi \u2264 bk + lk/2}. All the remaining samples constitute the training set. According to our hypothesis, case-based models should fail to generate correct answers for test samples near (ak, bk), as there are no close cases in the training set. 4.3. Appearance of Holes Verifies Case-Based Reasoning In our study, we apply the Leave-Square-Out method to each dataset. Specifically, we extract a square comprising 441 samples (from a total of approximately 10,000 samples) with a side length of 20 to form our test set, leaving the remainder as the training set. It is important to note that, despite removing a small portion of training samples, we ensure that all tokens present in the dataset appear in the training set. This precaution is to prevent the models from failing simply due to encountering unseen tokens. We then proceed to fine-tune GPT-2 and GPT-2-medium models using this specific training-test split for each dataset. For comparison, we also fine-tune these models on datasets that are randomly split, where each training set comprises 70% of the total dataset. Models achieve 100% accuracy easily in the random split settings across all datasets, which suggests that the size of training sets in the Leave-Square-Out setting (above 95% of each dataset) is totally sufficient to complete the task.
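A minimal sketch of the Leave-Square-Out split described above, shown for the addition task (our reconstruction; the authors' data pipeline and prompt formatting may differ in details):

```python
# Sketch of the Leave-Square-Out split described above (our reconstruction).
# Build the a+b dataset over 0..99 and hold out the square of side lk centred
# at (ak, bk) as the test set; everything else is the training set.

def build_addition_dataset(max_n=99):
    return {(a, b): a + b for a in range(max_n + 1) for b in range(max_n + 1)}

def leave_square_out(dataset, center, side):
    (ak, bk), half = center, side / 2
    test = {k: v for k, v in dataset.items()
            if ak - half <= k[0] <= ak + half and bk - half <= k[1] <= bk + half}
    train = {k: v for k, v in dataset.items() if k not in test}
    return train, test

data = build_addition_dataset()
train, test = leave_square_out(data, center=(50, 50), side=20)
print(len(train), len(test))   # 9559 441 -- the held-out 21 x 21 square

# Text pairs for fine-tuning, in the "a+b=" -> "c" format used above:
pairs = [(f"{a}+{b}=", str(c)) for (a, b), c in train.items()]
```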
However, in the Leave-Square-Out setting, as shown in Figure 2, there are \u201choles\u201d appearing in the accuracy distribution of the test squares over a and b. The appearance of holes in the figure indicates that the test samples away from the boundary of the training set are hard for the models to correctly infer, while the models can easily handle the test samples 5 \fCase-based or Rule-based: How Do Transformers Do the Math? near the boundary. This suggests that in the basic math reasoning tasks, when faced with an unseen test case, transformers rely on the surrounding training cases to predict the answer, verifying the case-based reasoning hypothesis. As for random split, every test sample has close training samples to support its inference, thus reaching 100% accuracy. In Figure 2, we only show the results of GPT-2 on the first four tasks; the results of GPT-2-medium and the results of chicken & rabbit problem are shown in Appendix A. 4.3.1. DO LOCATIONS OF TEST SQUARES MATTER? To see whether the locations of test squares affect the experimental results, we randomly select three square centers (ak, bk) and three corresponding side lengths lk ranging from 20 to 40 for each dataset. As shown in Figure 3, holes consistently appear in various locations of the dataset, suggesting that the model behavior of performing case-based reasoning does not change with the location of test sets. 4.3.2. DOES THE SIZE OF TEST SQUARE MATTER? We test on various lengths of test squares including lk set to 10, 20, 30, 40 with GPT-2 and GPT2-medium. As shown in Figure 5, test accuracy drops with the side length of the test square increasing. This phenomenon is natural in the context of case-based reasoning. As the test square becomes larger, the ratio of test samples that do not have close supporting training samples becomes higher, thus decreasing the test accuracy. Besides, it is shown in Figure 5 that GPT2 achieves 100% accuracy when we set lk to 10. In other words, the hole disappears when the test square shrinks to less than a small size where all the samples in the test set have close training samples for the model to refer to. 4.3.3. DOES ADDING SCRATCHPAD HELP? Nye et al. (2021) has proposed a technique of teaching models to explicitly generate intermediate computation steps into a \u201cscratchpad\u201d before arriving at the final answer to improve their math reasoning capabilities. The scratchpad technique enables the model to decompose addition into incremental digit-by-digit operations, potentially reducing the model\u2019s dependence on surrounding cases. An example input-output pair of scratchpad is shown in the bottom left of Figure 6 (scratchpad). We employ scratchpad fine-tuning to examine its impact on the model\u2019s tendency towards case-based reasoning, specifically investigating whether the scratchpad technique can enable transformers to perform rule-based reasoning. In particular, we alter the input of the addition dataset by providing scratchpad steps of adding two numbers digit by digit before presenting the final answer, instead of directly providing the answer following the question. Then we perform Leave-Square-Out on the altered dataset with GPT-2 and GPT-2-medium. The test accuracy vs. side length results are also shown in Figure 5. In the settings where side length of the left-out square lk \u226520, adding the scratchpad greatly boosts the model performance. However, for lk = 10, models trained with scratchpad inputs lag behind those trained with direct answers. 
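The scratchpad targets used above can be generated mechanically; a rough sketch (ours; the exact string format of the paper's scratchpad data follows Figure 6 only loosely):

```python
# Rough sketch of generating scratchpad-style targets for the addition task
# (our approximation; the paper's exact scratchpad format differs in details).

def scratchpad_target(a: int, b: int) -> str:
    da, db = [int(x) for x in str(a)], [int(x) for x in str(b)]
    carry, out_digits, lines = 0, [], []
    while da or db:
        d1 = da.pop() if da else 0
        d2 = db.pop() if db else 0
        total = d1 + d2 + carry
        out_digits.insert(0, total % 10)
        lines.append(f"{d1}+{d2}+{carry}={total % 10}, carry {total // 10}")
        carry = total // 10
    if carry:
        out_digits.insert(0, carry)
    answer = "".join(map(str, out_digits))
    return "\n".join(lines) + f"\nanswer: {answer}"

print(scratchpad_target(47, 48))
# 7+8+0=5, carry 1
# 4+4+1=9, carry 0
# answer: 95
```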
Besides, the test accuracy of models trained with scratchpad maintain relatively stable with the increase of test square\u2019s side length, in contrast to the sharp decline in performance seen in models trained with direct answers. To explain the phenomenon, we show the test accuracy distribution over a and b of models trained with scratchpad in Figure 4. It is clear that the model behavior of relying on cases \u201cnearby\u201d to solve new problems has changed. The holes shift to (a series of) triangles with their hypotenuses along the \u201ccarry boundary\u201d at the unit\u2019s and ten\u2019s digits. For example, in the setting of lk = 20 (the second subfigure), there are two triangle holes where the model shows almost zero accuracy. We explain why the model fails in each triangle and why the model succeeds in the rest of the test set as follows. Firstly for the small triangle, the model fails to answer questions like 47+48. 47+48 can be decomposed into 2 steps: 7+8=5, carry 1; 4+4+1=9. As there are no cases in the training set containing the step of 4+4+1 in the ten\u2019s digit, the model fails. In contrast, for those test points that do not involve carry in the ten\u2019s digit, like 42+43, the model succeeds because it can learn 4+4=8 from plenty of training data. Secondly for the large triangle, the model fails to answer 57+58. 57+58 can be decomposed into 2 steps: 7+8=5, carry 1; 5+5+1=1, carry 1. As there are no training cases performing 5+5 in the ten\u2019s digit which requires carry 1 to the hundred\u2019s digit, the model fails. The shapes and locations of the holes indicate that the models succeed in test cases where every step of the corresponding scratchpad has appeared in the training set and fail otherwise. This conclusion aligns with Dziri et al. (2023) that transformers rely on seen computation subgraphs for complex reasoning. More importantly, this phenomenon demonstrates even scratchpad cannot teach transformers to perform rule-based reasoning\u2014the models still mechanically recite the seen unit rules, but fail to flexibly generalize them. 4.3.4. DOES THE MODEL AND DATA SIZE MATTER? As the emergent ability (Wei et al., 2022) suggests that the model size is crucial to unlocking a wide range of complex tasks, we first explore the effects of model size on the reasoning mechanisms through experiments on GPT-2, GPT-2-medium and Llama-2-7B. GPT-2 has 124M parameters, and GPT-2-medium has 355M parameters. As shown in Figure 5, when trained with direct answers instead of scratchpads, GPT-2-medium generally outperforms GPT-2 when lk \u226520. On the contrary, GPT-2-medium lags behind GPT-2 when lk = 10. Besides, when trained with scratch6 \fCase-based or Rule-based: How Do Transformers Do the Math? 4546474849505152535455 45 46 47 48 49 50 51 52 53 54 55 lk=10 40 42 44 46 48 50 52 54 56 58 60 40 42 44 46 48 50 52 54 56 58 60 lk=20 35 38 41 44 47 50 53 56 59 62 65 35 38 41 44 47 50 53 56 59 62 65 lk=30 30 33 36 39 42 45 48 51 54 57 60 63 66 69 30 33 36 39 42 45 48 51 54 57 60 63 66 69 lk=40 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 0.6 0.8 1.0 Figure 4. Test accuracy distribution of GPT-2 trained with scratchpad in the task of addition. Note that all points in the figure are test samples; each subfigure here corresponds to a left-out square in the original plane. From left to right, the side length of test square is set to lk = 10, 20, 30, 40. 
For each test point, we sample 10 generations and show the accuracy of generating the correct answer. Figure 5. In the task of addition, we show the average accuracy over all test samples (samples within the square) with side length lk = 10, 20, 30, 40. We test four models: GPT-2, GPT-2 with scratchpad, GPT-2-medium and GPT-2-medium with scratchpad. pad, GPT-2-medium performs slightly better than GPT-2. Overall, model size has a more pronounced impact on test performance in scenarios of training with direct answers, as opposed to training with scratchpad, probably because the single steps in scratchpad are easier to memorize. We put the experiments on Llama-2-7B in Appendix A. Llama-2-7B also shows holes within the test square, indicating that the trend of case-based reasoning still exists. We also study how data size affects the behavior of case-based reasoning. We expand the range of a, b from 100 to 200 and 500, respectively. We also scale up the side length of the test square linearly with the data range. With the increase of data size, the holes still appear, suggesting that increasing the data size helps little. We show the test accuracy distribution in Appendix A: Ablation for data size. 4.4. In-Context Learning Another aspect of LLMs\u2019 reasoning ability is attributed to in-context learning (ICL). This method draws upon knowledge not only ingrained during pre-training but also from specific examples supplied within the context. The underlying mechanisms that make ICL effective are among the most intriguing and unanswered questions in the field. In this section, we extend our investigations to ICL, revealing that LLMs\u2019 ICL reasoning ability also exhibits characteristics of case-based learning. Because ICL is an emergent ability (Brown et al., 2020), we choose a stronger model: GPT-3.5-turbo-0125. We use the base addition task where we randomly add two base-9 integers with 3 digits. The adopted GPT-3.5 can rarely solve the task with only a task description, making sure that the investigated reasoning power comes from ICL. (See the zero-shot results in Appendix E.1). To study whether ICL reasoning relies on rules or similar cases in the context, we randomly collected pairs of base-9 integers whose zero-shot addition accuracy is less than 20%. Then, we provide 10 few-shot examples with the correct answers for each pair of integers, five of which are randomly selected (called the random group), and another five are obtained by simultaneously replacing only one digit of the pair (thus considered more similar examples than the first five, and called the similar group). Scratchpad is used in each few-shot example to provide step-by-step intermediate results. We choose the 14 test samples where the improvement with few-shot examples is more than 80%. To determine the contribution of each example, we adopt an intervention experiment similar to \u00a74.2 where we mask some in-context examples from either the similar group or the random group and compare the accuracy drop. Considering the interaction between individual examples, we choose to traverse all mask possibilities within a group instead of masking only one example. For example, for the similar group, we will have 2^5 \u2212 1 = 31 possible masks (excluding the empty mask).
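The masking procedure above can be sketched as follows; this is our reading of the setup, and `query_model` is a hypothetical stand-in for repeatedly sampling GPT-3.5 on the assembled prompt and returning the fraction of correct answers.

```python
# Sketch of the ICL masking intervention described above (our reading of the
# setup; `query_model` is a hypothetical stand-in for the actual GPT-3.5 calls).

from itertools import combinations

def accuracies_under_masks(question, similar, random_group, query_model):
    """Traverse all 2^5 - 1 = 31 non-empty subsets of the similar group,
    recording accuracy when only that subset (plus the untouched random group)
    is shown in context. Keys are the indices of the *kept* similar examples,
    matching the "accuracy with non-masked examples" above."""
    accs = {}
    for k in range(1, len(similar) + 1):
        for kept in combinations(range(len(similar)), k):
            examples = [similar[i] for i in kept] + list(random_group)
            accs[kept] = query_model(question, examples)
    return accs
```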
Specifically, we measure the contribution of the i-th in-context example as ci = (1/Ni) \u00b7 \u03a3_{m \u2208 M} 1{i \u2208 m} \u00b7 (accu_m \u2212 accu_orig) / (accu_icl \u2212 accu_orig), where accu_orig, accu_icl and accu_m represent the accuracy without few-shot examples, with all examples, and with the non-masked examples, respectively. M is the mask set that contains all possible combinations in the random and similar group. Ni is the number of masks that contain i (which is a constant 16). Figure 6. Examples of input-output sequences for the question 59 + 13 in 3 different settings, including direct answer, scratchpad and rule following. In the setting of rule-following, we provide the Python program of adding two numbers together digit by digit (sum_digit_by_digit, which repeatedly pops the last digits of num1 and num2, adds them to the carry, inserts total%10 into result and sets carry=total//10) in the input, and provide the step-by-step rule-following process, reciting each executed line and the updated variable values, in the output. Examples of the full input-output pairs are shown in Appendix F. Figure 7. The contribution of similar and random ICL examples. The leftmost column consists of the average contribution of 5 similar (or random) contributions in each test. The second to sixth columns show the individual results of 5 out of 14 test samples. For both the average contribution in 14 experiments and the concrete contribution in each experiment, the contribution of similar examples is significantly larger than that of random ones. We report the contribution of similar and random examples to the 14 test samples in Figure 7. The leftmost column shows the average contributions of similar and random examples to each test sample, and we also show concrete contribution values of five test samples. See the complete results of 14 test samples in Appendix E.2. The contribution of the similar group is significantly greater than that of the random group in all experiments, with the p-value of 14 average values < 0.001. These results suggest that the model relies more on directly discovering shortcuts from similar cases rather than summarizing the reasoning rules of the task.
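The contribution score defined above reduces to a short average over the recorded per-mask accuracies; a sketch under the same reading as before (masks keyed by the kept examples of the group):

```python
# Sketch of the contribution score defined above: average the normalized
# accuracy over the masks m whose kept set contains example i.

def contributions(accs, accu_icl, accu_orig, n_examples=5):
    """accs: dict mapping a tuple of kept example indices to the measured accuracy."""
    scores = []
    for i in range(n_examples):
        masks_with_i = [m for m in accs if i in m]
        n_i = len(masks_with_i)                       # 2^4 = 16 for a group of 5
        score = sum((accs[m] - accu_orig) / (accu_icl - accu_orig)
                    for m in masks_with_i) / n_i
        scores.append(score)
    return scores                                     # one contribution value per example
```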
This phenomenon seems contrary to some previous views on ICL which point out that the contribution of in-context examples lies mainly on hints about \u201ctasks\u201d and \u201cdomains\u201d rather than specific functions, implying a more rule-based method. We believe the difference comes from the basic capacity of LLMs to solve the task. For some tasks where LLMs have captured the essential reasoning abilities, ICL examples may help them \u201crecall\u201d the task so that the model can benefit from even some dissimilar examples. In contrast, when the model is unfamiliar with the task, it is difficult to solve the problem through recalling the pre-training knowledge. In this case, only similar examples can improve model performance by providing more direct shortcuts. In a word, our experiments suggest that it may not be possible to expect the model to extract rules that were not obtained during the pre-training phase by summarizing ICL examples. 5. Teaching Transformers to Do Rule-Based Reasoning by Rule-Following Fine-Tuning In \u00a74, we show that transformers are performing case-based reasoning in a wide range of math problems. However, the case-based reasoning behavior sets strong limits to the generalization ability of transformers. To be more specific, based on the results in \u00a74, transformers rely on surrounding cases to do addition, so they naturally cannot generalize in length by training on finite-digit addition data. In contrast, rule-based reasoning can robustly generalize in length. In this section, we explore how to teach transformers to do rule-based reasoning. We first revisit the failure of the scratchpad attempt. Despite providing step-by-step intermediate computations, scratchpad fine-tuning fails to teach transformers the actually ap8 \fCase-based or Rule-based: How Do Transformers Do the Math? plied \u201crule\u201d behind each step. This is like teaching children addition only by showing them examples, without telling them the rationales behind each step. Motivated by this intuition, we propose Rule-Following Fine-Tuning (RFFT) to explicitly teach transformers to use rules at each step. RFFT has two steps. First, we explicitly list the rules for solving a given task in the input. For example, in the task of addition, we provide the code of adding two long integers digit by digit in the input. It should be noted that there are various ways to represent the rules, including programs, pseudo-code, first-order logic, natural language, etc. We use programs in this section, and explore using natural language representations of rules in Appendix B.4. Second, we finetune the model to follow the rules step by step. Specifically, the model need to explicitly recite which rule it is using in each step, as well as updating the intermediate variables after applying this rule, as shown in Figure 6 right. 5.1. Experimental Setup In this section, we use two models, Llama-2-7B and GPT3.5-turbo-1106. We fine-tune Llama-2-7B ourselves, and fine-tune GPT-3.5-turbo-1106 through the OpenAI API service. We focus on the length generalization problem of addition of two large numbers a and b, and put additional experiments on the task of concatenating last letters to Appendix B.6. We randomly sample a and b to construct the training data, where the numbers of digits of a and b range from 1 to 5, constituting about 500k samples in total for Llama-2-7B. When fine-tuning GPT-3.5, we reduce the training set to as small as 100 samples. 
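A sketch of how one RFFT training example can be assembled from the rule program shown in Figure 6 (our reconstruction; the exact prompt wording and trace format of the released data may differ, e.g. the paper's traces wrap each recited code line in backticks, which is omitted here):

```python
# Sketch of assembling one RFFT training example (our reconstruction).

RULE_PROGRAM = """def sum_digit_by_digit(num1, num2):
    result = []
    carry = 0
    while num1 or num2:
        digit1 = num1.pop() if num1 else 0
        digit2 = num2.pop() if num2 else 0
        total = digit1 + digit2 + carry
        result.insert(0, total % 10)
        carry = total // 10
    if carry:
        result.insert(0, carry)
    return result"""

def rfft_example(a: int, b: int):
    num1, num2 = [int(d) for d in str(a)], [int(d) for d in str(b)]
    prompt = (f"Follow the code step by step to answer the question: "
              f"{num1}+{num2}=\n{RULE_PROGRAM}")
    # Target: recite the rule applied at each step, then the updated variables.
    n1, n2, result, carry, steps = list(num1), list(num2), [], 0, []
    while n1 or n2:
        d1 = n1.pop() if n1 else 0
        d2 = n2.pop() if n2 else 0
        steps.append(f"digit1=num1.pop() if num1 else 0 : digit1 = {d1}, num1 = {n1}")
        steps.append(f"digit2=num2.pop() if num2 else 0 : digit2 = {d2}, num2 = {n2}")
        total = d1 + d2 + carry
        steps.append(f"total=digit1+digit2+carry : total = {d1}+{d2}+{carry} = {total}")
        result.insert(0, total % 10)
        steps.append(f"result.insert(0,total%10) : result = {result}")
        carry = total // 10
        steps.append(f"carry=total//10 : carry = {carry}")
    if carry:
        result.insert(0, carry)
    steps.append(f"return result : result = {result}")
    return prompt, "\n".join(steps)

prompt, target = rfft_example(59, 13)   # the target walks through 9+3, then 5+1+carry
```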
We expect models with sufficient fundamental capabilities to be able to grasp rules through only a small set of training cases, which aligns with how humans learn calculations. During test, we randomly generate 1,500 samples for each digit length from 1 to 9 for Llama-2-7B, and generate 500 samples for each digit length from 6 to 15 for GPT-3.5. The digit length considers the context window size of each model. For GPT-3.5, due to the smaller training set, we perform five independent experiments and report the average accuracy and standard deviation. We employ direct answer, scratchpad, and RFFT as three fine-tuning methods for comparison. The training details are shown in Appendix B.1. 5.2. Results and Analysis Overall Results The results are presented in Figure 8. Overall, rule-following significantly outperforms direct and scratchpad. When using Llama-2-7B with Rule-Following Fine-Tuning (RFFT), the model shows impressive generalization capabilities in performing addition with 6 to 9 digits, maintaining 91.1% accuracy even with 9-digit sums. In comparison, the scratchpad method achieves less than 40% accuracy in similar tasks. With GPT-3.5-turbo, which 1 2 3 4 5 6 7 8 9 Digit Length 0.0 0.2 0.4 0.6 0.8 1.0 Accuracy Direct Scratchpad Rule-following (a) Accuracy of Llama-7B fine-tuned with three methods tested on addition with 1-9 digits. 6 7 8 9 10 11 12 13 14 15 Digit Length 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Accuracy Direct Scratchpad Rule-following (b) Accuracy of GPT-3.5 fine-tuned with three methods tested on addition with 6-15 digits. Figure 8. Accuracy of Llama-2-7B and GPT-3.5-turbo fine-tuned with direct answer, scratchpad and rule following on addition. possesses more advanced foundational abilities, the RFFT method enables it to astonishingly generalize to additions involving up to 12 digits, with still over 95% accuracy on 12digit addition despite seeing only 100 training samples. This significantly surpasses the results from the scratchpad and direct answer fine-tuning methods. These results highlight the effectiveness of our Rule-Following Fine-Tuning technique in steering transformers towards rule-based reasoning, showcasing its potential in enhancing model generalization. Error Analysis We also delve into failure cases to investigate why rule-following fails to achieve a perfect generalization with 100% accuracy. We find that the models can always select the right rule to execute in each step in a recursive way, but sometimes make mistakes when executing some basic operations, such as \u201cpop\u201d. Consider the example \u201cnum2=[9,0,7,6,9,3,7]\u201d; the expected output after \u201cnum2.pop()\u201d should be \u201cnum2=[9,0,7,6,9,3]\u201d, while the models in some rare cases will generate \u201cnum2=[9,0,7,6,9]\u201d. As the length increases (e.g., more than 9-digit addition), the phenomenon becomes more severe, which could be attributed to hallucinations or the limited long context abilities of current LLMs (Li et al., 2023). As mentioned in Min et al. (2023), the tendency for hallucinations grows as the length of the generated content 9 \fCase-based or Rule-based: How Do Transformers Do the Math? expands. These basic capabilities of LLMs might be the bottleneck that limits their strict length generalization under RFFT. It is also analogous to that we humans also tend to make sloppy mistakes when calculating long numbers by copying the wrong digits or forgetting to carry. 
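One way to quantify the "pop" slips described above is to check, post hoc, that every recited list state is a clean pop of the previous one; a rough sketch (ours, not an evaluation tool from the paper):

```python
# Rough sketch (ours) of detecting "pop"-style slips in a generated trace:
# every successive recited state of num2 should either be unchanged or be the
# previous state with exactly its last element removed.

import re

def pop_slips(generated_trace: str, var: str = "num2"):
    states = re.findall(rf"{var} = (\[[0-9, ]*\])", generated_trace)
    states = [[int(x) for x in re.findall(r"\d", s)] for s in states]
    slips = []
    for prev, cur in zip(states, states[1:]):
        if cur != prev and cur != prev[:-1]:   # neither unchanged nor a clean pop
            slips.append((prev, cur))
    return slips

trace = "num2 = [9,0,7,6,9,3,7]\n...num2.pop()...\nnum2 = [9,0,7,6,9]"
print(pop_slips(trace))   # [([9, 0, 7, 6, 9, 3, 7], [9, 0, 7, 6, 9])] -- the slip above
```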
Comparison to Scratchpad Our RFFT technique provides the explicit rules in the input and also teaches LLMs to quote the part of rules used in each step, which helps LLMs understand what each step is doing without having to refer to the long preceding texts. For example, with clear instructions \u201ctotal=digit1+digit2+carry\u201d, an LLM knows it need to find and add these three variables together. In comparison, scratchpad requires LLMs to learn that the third number \u201c0\u201d in the formula \u201c7+6+0=3\u201d is the carry from last digit, increasing the difficulty of learning. Some example errors of RFFT and scratchpad are included in Appendix B.3. We also discuss RFFT\u2019s differences from scratchpad tracing in Appendix B.4. RFFT as a Meta Learning Ability As mentioned in \u00a75.1, we find that Llama-2-7B requires 150k training samples to generalize to 9 digits while GPT-3.5 can grasp the rules and generalize to 12 digits with only 100 samples. Thus, we hypothesize that rule-following is a meta learning ability\u2014 it might be \u201clearned\u201d through pre-training on diverse rulefollowing data and transfer to new unseen domains, and the stronger the foundation model is, the easier it can understand and learn the rules. This also aligns with human\u2019s ability to learn new rules, where experienced learners often learn much faster. To provide more evidence, we further fine-tune a larger model Llama-2-70b than Llama-2-7b and a slightly weaker model davinci-002 than GPT-3.5. Our results show that stronger models indeed need less examples to learn rules. See details in Appendix B.2. Scratchpad vs Direct Answer We observe that GPT-3.5, when fine-tuned with scratchpad, underperforms that with direct answer fine-tuning, which contradicts with our intuition that scratchpad is more suitable for arithmetic tasks as well as the results observed in Llama-2-7B. This phenomenon might be attributed to the different mechanisms of addition between scratchpad and direct answer. For example, scratchpad performs digit-by-digit addition from the lowest digit to the highest one, while direct answer always generates the highest digits first. Fine-tuning with scratchpad would strongly change the inherent addition mechanism of the model. At the same time, integer addition is in fact a relatively familiar task for GPT-3.5, wherein the model exhibits some degree of addition ability even when asked to directly generate the answer with an accuracy of 46.2% on 15-digit addition. This makes adopting scratchpad not always more helpful than direct answer fine-tuning. In contrast, RFFT explicitly interpret the step-by-step mechanism, making learning the addition rules much easier. To further support our hypothesis, we increase the number of training examples for scratchpad to 5,000 and observe much improved performance (better than direct answer with 100 examples but still worse than RFFT with 100 examples). See Appendix C for details. 5.3. In-context Learning As we discussed in \u00a74.4, LLMs encounter difficulty in autonomously extracting rules from ICL examples. The subsequent inquiry pertains to the capacity of LLMs to follow explicit rules supplied by in-context examples. Our conclusion is that given detailed rules, LLMs have certain abilities to follow the rules, which allows the models to show some reasoning ability on unfamiliar tasks. However, they do not gain a competitive edge from the rules in tasks already familiar to them. See Appendix E.3. 6." 
+ } + ] + }, + "edge_feat": {} + } +} \ No newline at end of file